[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# others\n.DS_Store\ndata/*\n!data/data.md\n\n# under construction\nanim/SE3/*\nanim/SO3/*\nmakefile\n*.ipynb"
  },
  {
    "path": ".vscode/settings.json",
    "content": "{\n    // \"python.linting.mypyEnabled\": true,\n    \n    \"files.associations\": {\n        \"__bit_reference\": \"cpp\",\n        \"__bits\": \"cpp\",\n        \"__config\": \"cpp\",\n        \"__debug\": \"cpp\",\n        \"__errc\": \"cpp\",\n        \"__locale\": \"cpp\",\n        \"__mutex_base\": \"cpp\",\n        \"__nullptr\": \"cpp\",\n        \"__split_buffer\": \"cpp\",\n        \"__string\": \"cpp\",\n        \"__threading_support\": \"cpp\",\n        \"__tuple\": \"cpp\",\n        \"atomic\": \"cpp\",\n        \"bitset\": \"cpp\",\n        \"cctype\": \"cpp\",\n        \"chrono\": \"cpp\",\n        \"clocale\": \"cpp\",\n        \"compare\": \"cpp\",\n        \"concepts\": \"cpp\",\n        \"cstdarg\": \"cpp\",\n        \"cstddef\": \"cpp\",\n        \"cstdint\": \"cpp\",\n        \"cstdio\": \"cpp\",\n        \"cstdlib\": \"cpp\",\n        \"cstring\": \"cpp\",\n        \"ctime\": \"cpp\",\n        \"cwchar\": \"cpp\",\n        \"cwctype\": \"cpp\",\n        \"exception\": \"cpp\",\n        \"initializer_list\": \"cpp\",\n        \"ios\": \"cpp\",\n        \"iosfwd\": \"cpp\",\n        \"iostream\": \"cpp\",\n        \"istream\": \"cpp\",\n        \"limits\": \"cpp\",\n        \"locale\": \"cpp\",\n        \"memory\": \"cpp\",\n        \"mutex\": \"cpp\",\n        \"new\": \"cpp\",\n        \"ostream\": \"cpp\",\n        \"ratio\": \"cpp\",\n        \"stdexcept\": \"cpp\",\n        \"streambuf\": \"cpp\",\n        \"string\": \"cpp\",\n        \"string_view\": \"cpp\",\n        \"system_error\": \"cpp\",\n        \"tuple\": \"cpp\",\n        \"type_traits\": \"cpp\",\n        \"typeinfo\": \"cpp\",\n        \"variant\": \"cpp\",\n        \"vector\": \"cpp\",\n        \"array\": \"cpp\",\n        \"list\": \"cpp\"\n    }\n}"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2023 Kosuke Fukazawa\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n    <picture>\n    <img alt=\"cat\" src=\"figs/icon.png\" width=512>\n    </picture>\n</p>\n\n<h2 align=\"center\">\nCAT: Character Animation Tools for Python\n</h2>\n\nThis repository includes scripts for Character Animation.\nAll the code is written entirely in python.\n\nIt is useful for pre-processing and post-processing motions in **Deep Learning**.  \nIt will be also useful for create character animations.\n\n## :star: Requirements\nI tested on python3.10 (for match-case syntax).\n\n### Package\n* NumPy\n* SciPy\n* matplotlib\n* chumpy (if you use vanilla SMPL for AIST++).\n* easydict\n<!-- \nTBD\n* PyTorch\n* JAX\n* taichi \n* faiss-gpu\n-->\n\n### Motion Data\nSome of the scripts in this repository need motion data below.  \nPlease download them and place them in [`data/`](data) or link them as symbolic links at [`data/`](data). For more information please see [`data/data.md`](data/data.md).\n* [Ubisoft La Forge Animation Dataset (LAFAN1)](https://github.com/ubisoft/ubisoft-laforge-animation-dataset)\n* [SMPL](https://smpl.is.tue.mpg.de/)\n* [SMPL+H](https://mano.is.tue.mpg.de/)\n* [SMPL-X](https://smpl-x.is.tue.mpg.de/)\n* [AMASS](https://amass.is.tue.mpg.de/)\n* [AIST++](https://google.github.io/aistplusplus_dataset/factsfigures.html)\n\n\n## :question: How to use?\n\n### 1. 
Load and Save Animation\n<details>\n<summary>open</summary>  \n\n#### 1.1 Load an Animation from a [bvh](https://research.cs.wisc.edu/graphics/Courses/cs-838-1999/Jeff/BVH.html) file.\n```Python\nfrom anim import bvh\nfrom anim.animation import Animation\nanim_bvh: Animation = bvh.load(filepath=\"data/**.bvh\")\n```\n\n#### 1.2 Load an Animation from [AIST++](https://google.github.io/aistplusplus_dataset/factsfigures.html).\nYou need to install chumpy to use the vanilla SMPL model.\n```Python\nfrom anim import aistpp\nanim: Animation = aistpp.load(\n    aistpp_motion_path=\"data/aistpp/**.pkl\",\n    smpl_path=\"data/smpl/neutral/model.pkl\"\n)\n```\n\n#### 1.3 Load an Animation from [AMASS](https://amass.is.tue.mpg.de/).\nI recommend downloading the extended SMPL+H model (16 beta components).\n```Python\nfrom anim import amass\nanim: Animation = amass.load(\n    amass_motion_path=\"data/amass/**.npz\",\n    smplh_path=\"data/smplh/neutral/model.npz\"\n)\n```\n\n#### 1.4 Save as bvh.\nYou can convert SMPL-based motion files (AIST++, AMASS) to BVH files.\n```Python\nfrom anim import bvh\nfrom anim.animation import Animation\n\n...\n\nanim: Animation\nbvh.save(\n    filepath=\"data/***.bvh\",\n    anim=anim\n)\n```\n\n<!-- \n#### 1.5 Other animation formats\nTBD  \n-->\n\n</details>\n\n\n### 2. Get motion features\n<details>\n<summary>open</summary>\n\n#### 2.1 Get positions (global, root-centric, character space).\n```Python\nimport numpy as np\nfrom anim.animation import Animation\n\n...\n\nanim: Animation\nglobal_positions: np.ndarray = anim.gpos\nrcentric_positions: np.ndarray = anim.rtpos\ncspace_positions: np.ndarray = anim.cpos\n```\n\n#### 2.2 Get velocities (positions, rotations).\n```Python\nanim: Animation\npos_velocities: np.ndarray = anim.gposvel\nrot_velocities: np.ndarray = anim.lrotvel\n```\n\n#### 2.3 Get a mirrored Animation.  
\n(**caution**: Skel offsets must be symmetric.)\n```Python\nanim: Animation\nanim_M: Animation = anim.mirror()\n```\n</details>\n\n\n### 3. Inverse Kinematics\n<details>\n<summary>open</summary>\n\n#### 3.1 Two bone IK\nAnalytical foot IK example (specify the heel positions and knee forward vectors).\n```bash\npython anim/inverse_kinematics/two_bone_ik.py\n```\n![two_bone_ik](figs/two_bone_ik.png)\n\n#### 3.2 CCD-IK\nSimple demo.\n```bash\npython anim/inverse_kinematics/ccd_ik.py\n```\n![ccd_ik](figs/ccd_ik.png)\n\n#### 3.3 FABRIK\nSimple demo.\n```bash\npython anim/inverse_kinematics/fabrik.py\n```\n![fabrik](figs/fabrik.png)\n\n<!-- \n#### 3.4 Jacobian IK\nTBD \n-->\n\n</details>\n\n\n### 4. Motion Blending\n<details>\n<summary>open</summary>\n\n#### 4.1 Linear blending for pose.\nTBD\n\n</details>\n\n\n### 5. Motion Matching\n<details>\n<summary>open</summary>\n\n#### 5.1 Character Control by predefined trajectories\n\n```bash\npython test/path_following.py\n```\n![sim_motion](figs/sim_motion.gif)\n![sim_sidestep](figs/sim_motion_sidestep.gif)\n\n<!-- \n#### 5.2 Character Control by user input\n```bash\npython test/character_controller.py\n``` \n-->\n\n</details>\n\n<!--\n### 6. Graph animation\n<details>\n<summary>open</summary>\n\n#### Motion Graphs\nTBD\n\n#### State Machine\nTBD\n\n</details>\n\n\n### 7. KeyFrame Animation\n<details>\n<summary>open</summary>\n\n#### Extract keys from Animation\nTBD\n\n#### KeyFrame reduction\nTBD\n\n#### Create Animation from keyframes\nTBD\n\n</details>\n\n\n### 8. Crowd Simulation\n<details>\n<summary>open</summary>\n\n#### Collision avoidance\nTBD\n\n#### Following path\nTBD\n\n</details>\n\n\n### 9. Skinning\n<details>\n<summary>open</summary>\n\n#### Linear Blend Skinning\nTBD\n\n#### Dual-Quaternions Blend Skinning\nTBD\n\n</details>\n\n\n### 10. 
Animation Retarget\n<details>\n<summary>open</summary>\n\n#### Rotation copy\nTBD\n\n#### IK-based retarget\nTBD\n\n</details>\n-->\n\n\n<!-- \n## :zap: Deep Learning \n-->\n\n\n## :eyes: Notification\n* [`util/quat.py`](util/quat.py) is inspired by [Motion-Matching](https://github.com/orangeduck/Motion-Matching).\n* This repository is MIT licensed, but some datasets require separate licenses. Please check them.\n\n## :speech_balloon: Contact\nThis repository is under construction.  \nFeel free to contact me via an [issue](https://github.com/KosukeFukazawa/CharacterAnimationTools/issues).\n\n## :books: License\nThis code is distributed under the [MIT License](LICENSE)."
  },
  {
    "path": "anim/aistpp.py",
    "content": "\"\"\"\naistpp.py\nAIST++ : \n    format: ***.pkl (pickle file)\n    Parameters :\n        \"smpl_loss\" (float): I don't know how to use it.\n        \"smpl_poses\" (np.ndarray): SMPL pose paramters (rotations). shape: (num_frame, num_J * 3 = 72).\n        \"smpl_scaling\" (float): Scaling parameters of the skeleton.\n        \"smpl_trans\" (np.ndarray): Root translations. shape: (num_frame, 3).\n\"\"\"\n# loading AIST++ data.\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nimport numpy as np\nfrom anim.skel import Skel\nfrom anim.animation import Animation\nfrom anim.smpl import load_model, calc_skel_offsets, SMPL_JOINT_NAMES\nfrom util import quat\nfrom util.load import pickle_load\n\ndef load(\n    aistpp_motion_path: Path,\n    skel: Skel=None,\n    smpl_path: Path=Path(\"data/smpl/neutral/model.pkl\"),\n    scale: float=100.0,\n    fps: int=30,\n) -> Animation:\n    \n    NUM_JOINTS = 24\n    NUM_BETAS = 10\n    \n    # Load a motion.\n    if isinstance(aistpp_motion_path, str):\n        aistpp_motion_path = Path(aistpp_motion_path)\n    name = aistpp_motion_path.stem\n    aistpp_dict = pickle_load(aistpp_motion_path)\n    poses = aistpp_dict[\"smpl_poses\"]\n    num_frames = len(poses)\n    poses = poses.reshape(num_frames, -1, 3)\n    quats = quat.from_axis_angle(poses)\n    trans = aistpp_dict[\"smpl_trans\"] / aistpp_dict[\"smpl_scaling\"]\n\n    if skel == None:\n        # Load a Skel.\n        smpl_dict = load_model(smpl_path)\n        parents = list(map(int, smpl_dict[\"kintree_table\"][0][:NUM_JOINTS]))\n        parents[0] = -1\n        J_regressor = smpl_dict[\"J_regressor\"]\n        shapedirs = smpl_dict[\"shapedirs\"]\n        v_template = smpl_dict[\"v_template\"]\n        betas = np.zeros([NUM_BETAS])\n        J_positions = calc_skel_offsets(betas, J_regressor, shapedirs, v_template)[:NUM_JOINTS] * scale\n        root_offset = J_positions[0]\n        offsets = J_positions - J_positions[parents]\n        
offsets[0] = root_offset\n        names = SMPL_JOINT_NAMES[:NUM_JOINTS]\n        skel = Skel.from_names_parents_offsets(names, parents, offsets)\n\n    # Apply the SMPL root offset to the translations.\n    # Use skel.offsets[0] so this also works when a pre-built Skel is passed in.\n    trans += skel.offsets[0]\n    \n    return Animation(skel=skel, quats=quats, trans=trans, fps=fps, anim_name=name)"
  },
  {
    "path": "anim/amass.py",
    "content": "\"\"\"\namass.py\nAMASS :\n    Format: ***.npz (numpy binary file)\n    Parameters:\n        \"trans\" (np.ndarray): Root translations. shape: (num_frames, 3).\n        \"gender\" (np.ndarray): Gender name. \"male\", \"female\", \"neutral\".\n        \"mocap_framerate\" (np.ndarray): Fps of mocap data.\n        \"betas\" (np.ndarray): PCA based body shape parameters. shape: (num_betas=16).\n        \"dmpls\" (np.ndarray): Dynamic body parameters of DMPL. We do not use. Unused for skeleton. shape: (num_frames, num_dmpls=8).\n        \"poses\" (np.ndarray): SMPLH pose parameters (rotations). shape: (num_frames, num_J * 3 = 156).\n\"\"\"\n# loading amass data.\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nimport numpy as np\nfrom anim.skel import Skel\nfrom anim.animation import Animation\nfrom anim.smpl import load_model, calc_skel_offsets, SMPL_JOINT_NAMES, SMPLH_JOINT_NAMES\nfrom util import quat\n\ndef load(\n    amass_motion_path: Path,\n    skel: Skel=None,\n    smplh_path: Path=Path(\"data/smplh/neutral/model.npz\"),\n    remove_betas: bool=False,\n    gender: str=None,\n    num_betas: int=16,\n    scale: float=100.0,\n    load_hand=True,\n) -> Animation:\n    \"\"\"\n    args:\n        amass_motion_file (Path): Path to the AMASS motion file.\n        smplh_path (Path): Optional. Path to the SMPLH model.\n        remove_betas (bool): remove beta parameters from AMASS.\n        gender (str): Gender of SMPLH model.\n        num_betas (int): Number of betas to use. Defaults to 16.\n        num_dmpls (int): Number of dmpl parameters to use. Defaults to 8.\n        scale (float): Scaling paramter of the skeleton. Defaults to 100.0.\n        load_hand (bool): Whether to use hand joints. Defaults to True.\n        load_dmpl (bool): Whether to use DMPL. 
Defaults to False.\n    Return:\n        anim (Animation)\n    \"\"\"\n    if load_hand: \n        NUM_JOINTS = 52\n        names = SMPLH_JOINT_NAMES[:NUM_JOINTS]\n    else: \n        NUM_JOINTS = 24\n        names = SMPL_JOINT_NAMES[:NUM_JOINTS]\n\n    if not isinstance(amass_motion_path, Path):\n        amass_motion_path = Path(amass_motion_path)\n    if not isinstance(smplh_path, Path):\n        smplh_path = Path(smplh_path)\n    \n    \n    trans_quat = quat.mul(quat.from_angle_axis(-np.pi / 2, [0, 1, 0]), quat.from_angle_axis(-np.pi / 2, [1, 0, 0]))\n\n    # Load AMASS info.\n    amass_dict = np.load(amass_motion_path, allow_pickle=True)\n    if gender == None: gender = str(amass_dict[\"gender\"])\n    fps = int(amass_dict[\"mocap_framerate\"])\n    if remove_betas: betas = np.zeros([num_betas])\n    else: betas = amass_dict[\"betas\"][:num_betas]\n    axangles = amass_dict[\"poses\"]\n    num_frames = len(axangles)\n    axangles = axangles.reshape([num_frames, -1, 3])[:,:NUM_JOINTS]\n    quats = quat.from_axis_angle(axangles)\n    root_rot = quat.mul(trans_quat[None], quats[:,0])\n    quats[:,0] = root_rot\n\n    if skel == None:\n        # Load SMPL parmeters.\n        smplh_dict = load_model(smplh_path, gender)\n        parents = smplh_dict[\"kintree_table\"][0][:NUM_JOINTS]\n        parents[0] = -1\n        J_regressor = smplh_dict[\"J_regressor\"]\n        shapedirs = smplh_dict[\"shapedirs\"]\n        v_template = smplh_dict[\"v_template\"]\n        \n        J_positions = calc_skel_offsets(betas, J_regressor, shapedirs, v_template)[:NUM_JOINTS] * scale\n        root_offset = J_positions[0]\n        offsets = J_positions - J_positions[parents]\n        offsets[0] = root_offset\n        skel = Skel.from_names_parents_offsets(names, parents, offsets, skel_name=\"SMPLH\")\n    \n    root_pos = skel.offsets[0][None].repeat(len(quats), axis=0)\n    trans = amass_dict[\"trans\"] * scale + root_pos\n    trans = trans @ quat.to_xform(trans_quat).T\n    anim = 
Animation(skel=skel, quats=quats, trans=trans, fps=fps, anim_name=amass_motion_path.stem)\n\n    return anim"
  },
  {
    "path": "anim/animation.py",
    "content": "\"\"\"\nanimation.py\n\"\"\"\n# Basic animation class. Can use for \n# deep-learning (pre/post)processing \n# and visualization etc.\n\nfrom __future__ import annotations\n\nimport numpy as np\nimport scipy.ndimage as ndimage\nfrom util import quat\nfrom util import dualquat as dq\nfrom anim.skel import Skel\n\n\nclass Animation:\n    def __init__(\n        self,\n        skel: Skel,\n        quats: np.ndarray,\n        trans: np.ndarray,\n        positions: np.ndarray = None,\n        fps: int=30,\n        anim_name: str=\"animation\",\n    ) -> None:\n        \"\"\" Class for Motions representations.\n\n        Args:\n            skel (Skel): Skelton definition.\n            quats (np.ndarray): Joints rotations. shape: (T, J, 4)\n            trans (np.ndarray): Root transitions. shape: (T, 3)\n            positions (np.ndarray): Joints positions. shape: (T, J, 3). Defaults to None.\n            fps (int): Frame per seconds for animation. Defaults to 30.\n            anim_name (str, optional): Name of animation. 
Defaults to \"animation\".\n        \"\"\"\n        self.skel = skel\n        self.quats = quats\n        self.trans = trans\n        self.positions = positions\n        self.fps = fps\n        self.name = anim_name\n    \n    def __len__(self) -> int: return len(self.trans)\n    \n    def __add__(self, other: Animation) -> Animation:\n        quats: np.ndarray = np.concatenate([self.quats, other.quats], axis=0)\n        trans: np.ndarray = np.concatenate([self.trans, other.trans], axis=0)\n        return Animation(\n            skel=self.skel,\n            quats=quats,\n            trans=trans,\n            fps=self.fps,\n            anim_name=self.name,\n        )\n    \n    def __getitem__(self, index: int | slice) -> Animation:\n        if isinstance(index, int):\n            if index == -1:\n                index = len(self) - 1\n            return Animation(\n                skel=self.skel,\n                quats=self.quats[index:index+1],\n                trans=self.trans[index:index+1],\n                fps=self.fps,\n                anim_name=self.name,\n            )\n        elif isinstance(index, slice):\n            return Animation(\n                skel=self.skel,\n                quats=self.quats[index],\n                trans=self.trans[index],\n                fps=self.fps,\n                anim_name=self.name,\n            )\n        else:\n            raise TypeError\n    \n    def cat(self, other: Animation) -> None:\n        assert isinstance(other, Animation)\n\n        self.quats = np.concatenate(\n            [self.quats, other.quats], axis=0)\n        self.trans = np.concatenate(\n            [self.trans, other.trans], axis=0\n        )\n    \n    @property\n    def parents(self) -> list[int]:\n        return self.skel.parents\n    \n    @property\n    def joint_names(self) -> list[str]:\n        return self.skel.names\n    \n    @property\n    def offsets(self) -> np.ndarray:\n        return self.skel.offsets\n    \n    # 
=====================\n    #  Rotation conversion\n    # =====================\n    @property\n    def grot(self) -> np.ndarray:\n        \"\"\"Global rotations (quaternions) for each joint. shape: (T, J, 4)\"\"\"\n        return quat.fk_rot(lrot=self.quats, parents=self.parents)\n    \n    @property\n    def crot(self) -> np.ndarray:\n        \"\"\"Projected root (simulation bone) centric global rotations.\"\"\"\n        \n        # Root rotations relative to the forward vector.\n        root_rot = self.proj_root_rot\n        # Make the root rotation relative to the simulation bone\n        # (positions are irrelevant here since fk_rot only uses rotations).\n        lrot = self.quats.copy()\n        lrot[:, 0] = quat.inv_mul(root_rot, lrot[:, 0])\n        return quat.fk_rot(lrot, self.parents)\n    \n    @property\n    def axangs(self) -> np.ndarray:\n        \"\"\"Axis-angle rotation representations. shape: (T, J, 3)\"\"\"\n        return quat.to_axis_angle(self.quats)\n    \n    @property\n    def xforms(self) -> np.ndarray:\n        \"\"\"Rotation matrices. 
shape: (T, J, 3, 3)\"\"\"\n        return quat.to_xform(self.quats)\n    \n    @property\n    def ortho6ds(self) -> np.ndarray:\n        \"\"\"6D rotation representation.\"\"\"\n        return quat.to_xform_xy(self.quats)\n    \n    @property\n    def sw_tws(self) -> tuple[np.ndarray, np.ndarray]:\n        \"\"\"Swing-twist decomposition.\n           Based on HybrIK. Root rotations are excluded.\n           quat.mul(swings, twists) reproduces the original rotations.\n        return:\n            swings: (T, J-1, 4), root excluded.\n            twists: (T, J-1, 4), root excluded.\n        \"\"\"\n        \n        rots: np.ndarray = self.quats.copy()\n        pos:  np.ndarray = self.lpos.copy()\n        \n        children: np.ndarray = \\\n            -np.ones(len(self.parents), dtype=int)\n        for i in range(1, len(self.parents)):\n            children[self.parents[i]] = i\n        \n        swings: list[np.ndarray]  = []\n        twists: list[np.ndarray]  = []\n        \n        for i in range(1, len(self.parents)):\n            # Joints without a child have no child direction to define a swing,\n            # so they keep the full rotation as twist.\n            if children[i] < 0:\n                swings.append(quat.eye([len(self), 1]))\n                twists.append(rots[:, i:i+1])\n                continue\n            \n            u: np.ndarray = pos[:, children[i]:children[i]+1]\n            rot: np.ndarray = rots[:, i:i+1]\n            u = u / np.linalg.norm(u, axis=-1, keepdims=True)\n            v: np.ndarray = quat.mul_vec(rot, u)\n            \n            swing: np.ndarray = quat.normalize(quat.between(u, v))\n            swings.append(swing)\n            twist: np.ndarray = quat.inv_mul(swing, rot)\n            twists.append(twist)\n        \n        swings_np: np.ndarray = np.concatenate(swings, axis=1)\n        twists_np: np.ndarray = np.concatenate(twists, axis=1)\n\n        return swings_np, twists_np\n\n    # ==================\n    #  Position from FK\n    # ==================\n    def set_positions_from_fk(self) -> None:\n        self.positions = 
self.gpos\n    \n    @property\n    def lpos(self) -> np.ndarray:\n        \"\"\"Local-space positions (skeleton offsets, with the root set to trans).\"\"\"\n        lpos: np.ndarray = self.offsets[None].repeat(len(self), axis=0)\n        lpos[:,0] = self.trans\n        return lpos\n    \n    @property\n    def gpos(self) -> np.ndarray:\n        \"\"\"Global space positions.\"\"\"\n        \n        _, gpos = quat.fk(\n            lrot=self.quats, \n            lpos=self.lpos, \n            parents=self.parents\n        )\n        return gpos\n    \n    @property\n    def rtpos(self) -> np.ndarray:\n        \"\"\"Root-centric local positions.\"\"\"\n        \n        lrots: np.ndarray = self.quats.copy()\n        lposs: np.ndarray = self.lpos.copy()\n        # ROOT to zero.\n        lrots[:,0] = quat.eye([len(self)])\n        lposs[:,0] = np.zeros([len(self), 3])\n        \n        _, rtpos = quat.fk(\n            lrot=lrots, \n            lpos=lposs, \n            parents=self.parents\n        )\n        return rtpos\n    \n    @property\n    def cpos(self) -> np.ndarray:\n        \"\"\"Projected root (simulation bone) centric positions.\"\"\"\n        lrot = self.quats.copy()\n        lpos = self.lpos\n        c_root_rot, c_root_pos = self.croot()\n        lrot[:, 0] = c_root_rot\n        lpos[:, 0] = c_root_pos\n        \n        _, cpos = quat.fk(lrot, lpos, self.parents)\n        return cpos\n    \n    def croot(self, idx: int | None=None) -> tuple[np.ndarray, np.ndarray]:\n        \"\"\"Return character-space root info.\n        return:\n            crot: character space root rotations. [T, 4]\n            cpos: character space root positions. 
[T, 3]\n        \"\"\"\n        # Root rotations relative to the forward vector.\n        root_rot = self.proj_root_rot\n        # Root positions projected on the ground (y=0).\n        root_pos = self.proj_root_pos()\n        # Make relative to simulation bone.\n        crot = quat.inv_mul(root_rot, self.quats[:, 0])\n        cpos = quat.inv_mul_vec(root_rot, self.trans - root_pos)\n        if idx is not None:\n            return crot[idx], cpos[idx]\n        else: return crot, cpos\n    \n    # ============\n    #   Velocity\n    # ============\n    @property\n    def gposvel(self) -> np.ndarray:\n        gpos : np.ndarray = self.gpos\n        gpvel: np.ndarray = np.zeros_like(gpos)\n        gpvel[1:] = (gpos[1:] - gpos[:-1]) * self.fps # finite difference, scaled to units per second.\n        gpvel[0] = gpvel[1] - (gpvel[3] - gpvel[2])\n        return gpvel\n    \n    @property\n    def cposvel(self) -> np.ndarray:\n        cpos : np.ndarray = self.cpos\n        cpvel: np.ndarray = np.zeros_like(cpos)\n        cpvel[1:] = (cpos[1:] - cpos[:-1]) * self.fps # finite difference, scaled to units per second.\n        cpvel[0] = cpvel[1] - (cpvel[3] - cpvel[2])\n        return cpvel\n\n    @property\n    def lrotvel(self) -> np.ndarray:\n        \"\"\"Rotation velocities as rotation vectors\n           (axis-angle per second).\"\"\"    \n        \n        lrot : np.ndarray = self.quats.copy()\n        lrvel: np.ndarray = np.zeros_like(self.lpos)\n        lrvel[1:] = quat.to_axis_angle(quat.abs(quat.mul(lrot[1:], quat.inv(lrot[:-1])))) * self.fps\n        lrvel[0] = lrvel[1] - (lrvel[3] - lrvel[2])\n        return lrvel\n    \n    # ==============\n    #   4x4 matrix\n    # ==============\n    @property\n    def local_transform(self) -> np.ndarray:\n        xforms: np.ndarray = self.xforms\n        offsets: np.ndarray = self.offsets\n        transform: np.ndarray = np.zeros(xforms.shape[:-2] + (4, 4,))\n        transform[..., :3, :3] = xforms\n        transform[..., :3,  3] = offsets\n      
  transform[..., 3, 3] = 1\n        return transform\n    \n    @property\n    def global_transform(self) -> np.ndarray:\n        ltrans: np.ndarray = self.local_transform.copy()\n        parents: list[int] = self.parents\n        \n        gtrans = [ltrans[...,:1,:,:]]\n        for i in range(1, len(parents)):\n            gtrans.append(np.matmul(gtrans[parents[i]], ltrans[...,i:i+1,:,:]))\n        \n        return np.concatenate(gtrans, axis=-3)\n    \n    # ==================\n    #  dual quaternions\n    # ==================\n    @property\n    def local_dualquat(self) -> np.ndarray:\n        return dq.from_rot_and_trans(self.quats, self.lpos)\n    \n    @property\n    def global_dualquat(self) -> np.ndarray:\n        return dq.fk(self.local_dualquat, self.parents)\n    \n    # =============\n    #  trajectory \n    # =============\n    def proj_root_pos(self, remove_vertical: bool=False) -> np.ndarray:\n        \"\"\"Root position projected on the ground (world space).\n        return:\n            Projected bone positions as ndarray of shape [len(self), 3] or [len(self), 2] (remove_vertical).\n        \"\"\"\n        vertical = self.skel.vertical\n        if remove_vertical:\n            settle_ax = []\n            for i, ax in enumerate(vertical):\n                if ax == 0:\n                    settle_ax.append(i)\n            return self.trans[..., settle_ax]\n        else:\n            # Zero out the vertical axis.\n            return self.trans * np.array([abs(abs(ax) - 1) for ax in vertical])\n    \n    @property\n    def proj_root_rot(self) -> np.ndarray:\n        # Root rotations relative to the forward direction on the ground. 
[len(self), 4]\n        forward = self.skel.forward\n        return quat.normalize(\n            quat.between(np.array(forward), self.root_direction())\n        )\n    \n    def root_direction(self, remove_vertical: bool=False) -> np.ndarray:\n        \"\"\"Forward orientation vectors on the ground (world space).\n        return:\n            Forward vectors as ndarray of shape [..., 3] or [..., 2] (remove_vertical).\n        \"\"\"\n        # Calculate forward vectors except the vertical axis.\n        vertical = self.skel.vertical\n        rt_rots = self.quats[..., 0, :]\n        forwards = np.zeros(shape=rt_rots.shape[:-1] + (3,))\n        forwards[...] = self.skel.rest_forward\n        rt_fwd = quat.mul_vec(rt_rots, forwards) * np.array([abs(abs(ax) - 1) for ax in vertical]) # [T, 3]\n        # Normalize vectors.\n        norm_rt_fwd = rt_fwd / np.linalg.norm(rt_fwd, axis=-1, keepdims=True)\n        if remove_vertical:\n            settle_ax = []\n            for i, ax in enumerate(vertical):\n                if ax == 0:\n                    settle_ax.append(i)\n            norm_rt_fwd = norm_rt_fwd[..., settle_ax]\n        return norm_rt_fwd\n    \n    # ===================\n    #  Future trajectory\n    # ===================\n    def future_traj_poss(self, frame: int, remove_vertical: bool=True, cspace=True) -> np.ndarray:\n        \"\"\"Calculate future trajectory positions on the simulation bone.\n        Args:\n            frame (int): how many frames ahead to look.\n            remove_vertical (bool, optional): remove vertical axis positions. Defaults to True.\n            cspace (bool, optional): use local character space. Defaults to True.\n        Returns:\n            np.ndarray: future trajectory positions. 
shape=(len(self), 3) or (len(self), 2)\n        \"\"\"\n        idxs = self.clamp_future_idxs(frame)\n        proj_root_pos = self.proj_root_pos()\n        if cspace:\n            traj_pos = quat.inv_mul_vec(self.proj_root_rot, proj_root_pos[idxs] - proj_root_pos)\n        else:\n            traj_pos = proj_root_pos[idxs]\n\n        if remove_vertical:\n            vertical = self.skel.vertical\n            settle_ax = []\n            for i, ax in enumerate(vertical):\n                if ax == 0:\n                    settle_ax.append(i)\n            return traj_pos[..., settle_ax]\n        else: \n            return traj_pos\n\n    def future_traj_dirs(self, frame: int, remove_vertical: bool=True, cspace=True) -> np.ndarray:\n        \"\"\"Calculate future trajectory directions on the simulation bone (local character space).\n        Args:\n            frame (int): how many frames ahead to look.\n            remove_vertical (bool, optional): remove vertical axis. Defaults to True.\n            cspace (bool, optional): use local character space. Defaults to True.\n        Returns:\n            np.ndarray: future trajectory directions. 
shape=(len(self), 3) or (len(self), 2)\n        \"\"\"\n        idxs = self.clamp_future_idxs(frame)\n        root_directions = self.root_direction()\n        if cspace:\n            traj_dir = quat.inv_mul_vec(self.proj_root_rot, root_directions[idxs])\n        else:\n            traj_dir = root_directions[idxs]\n        \n        if remove_vertical:\n            vertical = self.skel.vertical\n            settle_ax = []\n            for i, ax in enumerate(vertical):\n                if ax == 0:\n                    settle_ax.append(i)\n            return traj_dir[..., settle_ax]\n        else: \n            return traj_dir\n    \n    def clamp_future_idxs(self, offset: int) -> np.ndarray:\n        \"\"\"Calculate the frame-index array `offset` frames ahead.\n        Indices past the end of the animation are clamped\n        to the last frame.\n        \"\"\"\n        idxs = np.arange(len(self)) + offset\n        idxs[-(offset + 1):] = idxs[-(offset + 1)]\n        return idxs\n    \n    # =====================\n    #    Other functions\n    # =====================\n    def calc_foot_contact(\n        self, \n        method: str=\"velocity\",\n        threshold: float=0.15,\n        left_foot_name: str=\"LeftToe\",\n        right_foot_name: str=\"RightToe\",\n    ) -> np.ndarray:\n        if method == \"velocity\":\n            contact_vel = np.linalg.norm(\n                self.gposvel[:, \n                    [self.joint_names.index(left_foot_name), \n                    self.joint_names.index(right_foot_name)]\n                ], axis=-1\n            )\n            contacts = contact_vel < threshold\n        elif method == \"position\":\n            # Find the vertical axis and whether it points downwards.\n            settle_idx = 0\n            inverse = False # Is the negative direction vertical?\n            for i, ax in enumerate(self.skel.vertical):\n                if ax == 1:\n                    settle_idx = i\n                if ax == -1:\n                    
settle_idx = i\n                    inverse = True\n            contact_pos = self.gpos[:,\n                [self.joint_names.index(left_foot_name), \n                self.joint_names.index(right_foot_name)], settle_idx]\n            if inverse:\n                contact_pos *= -1\n            contacts = contact_pos < threshold\n        else:\n            raise ValueError(\"unknown `method`: expected 'velocity' or 'position'.\")\n        for ci in range(contacts.shape[1]):\n            contacts[:, ci] = ndimage.median_filter(\n                contacts[:, ci],\n                size=6,\n                mode=\"nearest\"\n            )\n        return contacts\n    \n    def mirror(self, dataset: str=None) -> Animation:\n        vertical = self.skel.vertical\n        forward = self.skel.forward\n        mirror_axis = []\n        for vert_ax, fwd_ax in zip(vertical, forward):\n            if abs(vert_ax) == 1 or abs(fwd_ax) == 1:\n                mirror_axis.append(1)\n            else:\n                mirror_axis.append(-1)\n        if dataset == \"lafan1\":\n            quatM, lposM = animation_mirror(\n                lrot=self.quats,\n                lpos=self.lpos,\n                names=self.joint_names,\n                parents=self.parents\n            )\n            transM = lposM[:, 0]\n        else:\n            quatM, transM = mirror_rot_trans(\n                lrot=self.quats,\n                trans=self.trans,\n                names=self.joint_names,\n                parents=self.parents,\n                mirror_axis=mirror_axis,\n            )\n        return Animation(\n            skel=self.skel,\n            quats=quatM,\n            trans=transM,\n            fps=self.fps,\n            anim_name=self.name+\"_M\",\n        )\n    \n    @staticmethod\n    def no_animation(\n        skel: Skel, \n        fps: int=30,\n        num_frame: int=1,\n        anim_name: str=\"animation\",\n    ) -> Animation:\n        \"\"\"Create a unit animation (no rotation, no translation) 
from Skel\"\"\"\n        quats = quat.eye([num_frame, len(skel)])\n        trans = np.zeros([num_frame, 3])\n        return Animation(skel, quats, trans, None, fps, anim_name)\n\n\ndef mirror_rot_trans(\n    lrot: np.ndarray,\n    trans: np.ndarray,\n    names: list[str],\n    parents: np.ndarray | list[int],\n    mirror_axis: list[int]\n) -> tuple[np.ndarray, np.ndarray]:\n    \n    lower_names = [n.lower() for n in names]\n    joints_mirror: np.ndarray = np.array([(\n        lower_names.index(\"left\"+n[5:]) if n.startswith(\"right\") else\n        lower_names.index(\"right\"+n[4:]) if n.startswith(\"left\") else \n        lower_names.index(n)) for n in lower_names])\n\n    mirror_pos: np.ndarray = np.array(mirror_axis)\n    mirror_rot: np.ndarray = np.array([1,] + [-ax for ax in mirror_axis])\n    grot: np.ndarray = quat.fk_rot(lrot, parents)\n    trans_mirror: np.ndarray = mirror_pos * trans\n    grot_mirror: np.ndarray = mirror_rot * grot[:,joints_mirror]\n    \n    return quat.ik_rot(grot_mirror, parents), trans_mirror\n\n# for LAFAN1 dataset. This function from \n# https://github.com/orangeduck/Motion-Matching/blob/5e57a1352812494cca83396785cfc4985df9170c/resources/generate_database.py#L15\ndef animation_mirror(lrot, lpos, names, parents):\n\n    joints_mirror = np.array([(\n        names.index('Left'+n[5:]) if n.startswith('Right') else (\n        names.index('Right'+n[4:]) if n.startswith('Left') else \n        names.index(n))) for n in names])\n\n    mirror_pos = np.array([-1, 1, 1])\n    mirror_rot = np.array([[-1, -1, 1], [1, 1, -1], [1, 1, -1]])\n\n    grot, gpos = quat.fk(lrot, lpos, parents)\n\n    gpos_mirror = mirror_pos * gpos[:,joints_mirror]\n    grot_mirror = quat.from_xform(mirror_rot * quat.to_xform(grot[:,joints_mirror]))\n    \n    return quat.ik(grot_mirror, gpos_mirror, parents)"
  },
  {
    "path": "anim/blend.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\n\nfrom anim.animation import Animation\nfrom util import quat\n\n# =====================\n#  Basic interpolation\n# =====================\n# Linear Interpolation (LERP) of objects.\ndef lerp(x, y, t):\n    return (1 - t) * x + t * y\n\n# LERP for quaternions.\ndef quat_lerp(x: np.ndarray, y: np.ndarray, t: float) -> np.ndarray:\n    assert x.shape == y.shape, \"Two quats must be the same shape.\"\n    return quat.normalize(lerp(x, y, t))\n\n# Spherical linear interpolation (SLERP) for quaternions.\ndef slerp(x: np.ndarray, y: np.ndarray, t: float) -> np.ndarray:\n    assert x.shape == y.shape, \"Two quats must be the same shape.\"\n    if t == 0:\n        return x\n    elif t == 1:\n        return y\n    if quat.dot(x, y) < 0:\n        y = - y\n    ca = quat.dot(x, y)\n    theta = np.arccos(np.clip(ca, 0, 1))\n    r = quat.normalize(y - x * ca)\n    return x * np.cos(theta * t) + r * np.sin(theta * t)\n\n\n# =========\n#  Damping\n# =========\n\n# predict `t`"
  },
  {
    "path": "anim/bvh.py",
    "content": "\"\"\"\nbvh.py\n\"\"\"\n# Loading and writing Biovision Hierarchy (BVH) data.\n\nfrom __future__ import annotations\n\nfrom io import TextIOWrapper\nfrom pathlib import Path\nimport numpy as np\nfrom anim.skel import Joint, Skel\nfrom anim.animation import Animation\nfrom util import quat\n\ndef load(\n    filepath: Path | str, \n    start: int=None, \n    end: int=None, \n    order: str=None,\n    rest_forward: list[int] = [0, 0, 1],\n    rest_vertical: list[int] = [0, 1, 0],\n    forward_axis: str = \"z\",\n    vertical_axis: str = \"y\",\n    load_skel: bool=True,\n    load_pose: bool=True,\n    skel: Skel=None,\n    skel_name: str=None,\n) -> Skel | Animation:\n    \n    if not load_skel and not load_pose:\n        raise ValueError(\"Either `load_skel` or `load_pose` must be specified.\")\n    \n    if isinstance(filepath, str):\n        filepath = Path(filepath)\n    \n    # List bvh file by each row (line) and each word (column).\n    with open(filepath, \"r\") as f:\n        lines: list[str] = [line.strip() for line in f if line.strip() != \"\"]\n        motion_idx: int = lines.index(\"MOTION\")\n        lines: list[str | list[str]] = \\\n            list(map(lambda x: x.split(), lines))\n    \n    # Load HIERARCHY term.\n    if load_skel:\n        skel, order = load_hierarchy(\n            lines = lines[:motion_idx], \n            skel_name=skel_name,\n            rest_forward=rest_forward,\n            rest_vertical=rest_vertical,\n            forward_axis=forward_axis,\n            vertical_axis=vertical_axis\n        )\n\n    # Load MOTION term.\n    if load_pose:\n        assert skel is not None, \"You need to load or provide a `Skel`.\"\n        name = filepath.stem\n        \n        fps, trans, quats, poss = load_motion(\n            lines = lines[motion_idx:],\n            start = start,\n            end = end,\n            order = order,\n            skel = 
skel\n        )\n        anim = Animation(skel, quats, trans, fps=fps, anim_name=name)\n        positions = anim.gpos # [T, J, 3]\n        for joint_idx, pos in poss.items():\n            positions[:, joint_idx] = pos\n        anim.positions = positions\n        return anim\n    \n    return skel\n\ndef load_hierarchy(\n    lines: list[str | list[str]],\n    skel_name: str,\n    rest_forward: list[int],\n    rest_vertical: list[int],\n    forward_axis: str,\n    vertical_axis: str,\n) -> tuple[Skel, str]:\n    \n    channelmap: dict[str, str] = {\n        \"Xrotation\" : \"x\",\n        \"Yrotation\" : \"y\",\n        \"Zrotation\" : \"z\",   \n    }\n    \n    stacks: list[int] = [-1]\n    parents: list[int] = []\n    name_list: list[str] = []\n    joints: list[Joint] = []\n    depth: int = 0\n    end_site: bool = False\n    \n    for line in lines:\n        if \"ROOT\" in line or \"JOINT\" in line:\n            parents.append(stacks[-1])\n            stacks.append(len(parents) - 1)\n            name_list.append(line[1])\n            \n        elif \"OFFSET\" in line:\n            if not end_site:\n                offset = np.array(list(map(float, line[1:])))\n                joints.append(Joint(\n                    name = name_list[-1],\n                    parent = parents[-1],\n                    offset = offset,\n                    root = (parents[-1]==-1),\n                    ))\n        \n        elif \"CHANNELS\" in line:\n            dof = int(line[1])\n            if dof == 3:\n                order = \"\".join([channelmap[p] for p in line[2:2+3]])\n            elif dof == 6:\n                order = \"\".join([channelmap[p] for p in line[5:5+3]])\n            joints[-1].dof = dof\n        \n        elif \"End\" in line:\n            end_site = True\n            stacks.append(len(parents) - 1)\n        \n        elif \"{\" in line:\n            depth += 1\n        \n        elif \"}\" in line:\n            depth -= 1\n            end_site = False\n            
stacks.pop()\n    \n    assert depth == 0, \"Brackets are not closed.\"\n    \n    skel = Skel(joints, skel_name, rest_forward, rest_vertical, forward_axis, vertical_axis)\n    return skel, order\n\ndef load_motion(\n    lines: list[list[str]],\n    order: str,\n    skel: Skel,\n    start: int=None,\n    end: int=None,\n) -> tuple[int, np.ndarray, np.ndarray, dict[int, np.ndarray]]:\n    \n    fps: int = round(1 / float(lines[2][2]))\n    lines: list[list[str]] = lines[3:]\n    \n    np_lines = np.array(\n        list(map(lambda line: list(map(float, line)), lines))\n    ) # (T, total_dof) matrix\n    np_lines = np_lines[start:end]\n    \n    eangles = []\n    poss = {}\n    ckpt = 0\n    joint_idx = 0\n    for dof in skel.dofs:\n        if dof == 3:\n            eangle = np_lines[:, ckpt:ckpt+3]\n            eangles.append(eangle[:, None])\n            ckpt += 3\n            joint_idx += 1\n        elif dof == 6:\n            pos = np_lines[:, ckpt:ckpt+3] # [T, 3]\n            eangle = np_lines[:, ckpt+3:ckpt+6]\n            poss[joint_idx] = pos\n            eangles.append(eangle[:, None])\n            ckpt += 6\n            joint_idx += 1\n    \n    assert sum(skel.dofs) == np_lines.shape[1], \\\n        \"Skel and Motion are not compatible.\"\n    \n    eangles = np.concatenate(eangles, axis=1)\n    \n    trans = poss[0]\n    quats = quat.unroll(quat.from_euler(eangles, order))\n    \n    return fps, trans, quats, poss\n\ndef save(\n    filepath: Path | str,\n    anim: Animation, \n    order: str=\"zyx\",\n    save_pos: bool=False,\n    ) -> None:\n    \n    skel = anim.skel\n    trans = anim.trans\n    quats = anim.quats\n    positions = anim.positions\n    fps = anim.fps\n    \n    with open(filepath, \"w\") as f:\n        # write hierarchy data.\n        f.write(\"HIERARCHY\\n\")\n        index_order = save_hierarchy(f, skel, 0, order, 0, save_pos)\n        \n        # write motion data.\n        f.write(\"MOTION\\n\")\n        f.write(\"Frames: %d\\n\" % 
len(trans))\n        f.write(\"Frame Time: %f\\n\" % (1.0 / fps))\n        save_motion(f, trans, quats, positions, save_pos, order, index_order)\n\ndef save_hierarchy(\n    f: TextIOWrapper,\n    skel: Skel,\n    index: int,\n    order: str,\n    depth: int,\n    save_pos: bool,\n) -> list[int]:\n\n    def order2xyzrotation(order: str) -> str:\n        cmap: dict[str, str] = {\n            \"x\" : \"Xrotation\",\n            \"y\" : \"Yrotation\",\n            \"z\" : \"Zrotation\",   \n        }\n        return \"%s %s %s\" % (cmap[order[0]], cmap[order[1]], cmap[order[2]])\n    \n    joint = skel[index]\n    index_order = [index]\n    if joint.root:\n        f.write(\"\\t\" * depth + \"ROOT %s\\n\" % joint.name)\n    else:\n        f.write(\"\\t\" * depth + \"JOINT %s\\n\" % joint.name)\n    f.write(\"\\t\" * depth + \"{\\n\")\n    depth += 1\n    offset = joint.offset\n    f.write(\"\\t\" * depth + \\\n        \"OFFSET %f %f %f\\n\" % (offset[0], offset[1], offset[2]))\n    \n    if joint.root or save_pos:\n        f.write(\"\\t\" * depth + \\\n            \"CHANNELS 6 Xposition Yposition Zposition %s\\n\"\\\n            % order2xyzrotation(order))\n    else:\n        f.write(\"\\t\" * depth + \"CHANNELS 3 %s\\n\"\\\n            % order2xyzrotation(order))\n    \n    children_idxs = skel.get_children(index, return_idx=True)\n    for child_idx in children_idxs:\n        ch_index_order = save_hierarchy(f, skel, child_idx, order, depth, save_pos)\n        index_order.extend(ch_index_order)\n    if not children_idxs:\n        f.write(\"\\t\" * depth + \"End Site\\n\")\n        f.write(\"\\t\" * depth + \"{\\n\")\n        f.write(\"\\t\" * (depth + 1) + \"OFFSET %f %f %f\\n\" \\\n            % (0, 0, 0))\n        f.write(\"\\t\" * depth + \"}\\n\")\n    \n    depth -= 1\n    f.write(\"\\t\" * depth + \"}\\n\")\n    return index_order\n\ndef save_motion(\n    f: TextIOWrapper,\n    trans: np.ndarray,\n    quats: np.ndarray,\n    positions: 
np.ndarray,\n    save_pos: bool,\n    order: str,\n    index_order: list[int],\n) -> None:\n    \n    def write_position_rotation(\n        pos: np.ndarray, \n        rot: np.ndarray,\n        ) -> str:\n        pos, rot = pos.tolist(), rot.tolist()\n        return \"%f %f %f %f %f %f \" \\\n            % (pos[0], pos[1], pos[2], rot[0], rot[1], rot[2])\n\n    def write_rotation(rot: np.ndarray) -> str:\n        rot = rot.tolist()\n        return \"%f %f %f \" % (rot[0], rot[1], rot[2])\n    \n    eangles = np.rad2deg(quat.to_euler(quats, order)) # (T, J, 3)\n    for i in range(len(trans)):\n        for j in index_order:\n            if j == 0:\n                f.write(write_position_rotation(trans[i], eangles[i, j]))\n            elif save_pos:\n                f.write(write_position_rotation(positions[i, j], eangles[i, j]))\n            else:\n                f.write(write_rotation(eangles[i, j]))\n        f.write(\"\\n\")"
  },
  {
    "path": "anim/inverse_kinematics/ccd_ik.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nROOT = Path(__file__).resolve().parent.parent.parent\nsys.path.append(str(ROOT))\n\nimport copy\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom util import quat\nfrom anim.skel import Joint, Skel\nfrom anim.animation import Animation\n\ndef normalize_vector(vector: np.ndarray):\n    norm = np.linalg.norm(vector, axis=-1, keepdims=True)\n    norm = np.where(norm==0, 1e-10, norm)\n    return vector / norm\n\n# First implementation for a single end effector (the skeleton has exactly one leaf joint).\ndef simple_ccd_ik(\n    anim: Animation,\n    tgt_pos: np.ndarray,\n    iter: int=10,\n) -> Animation:\n    anim_ = copy.deepcopy(anim)\n    \n    for i in range(iter):\n        # except end effector\n        for parent in reversed(anim_.parents):\n            if parent == -1:\n                continue\n            cur_quat = anim_.quats[:,parent] # (T, 4)\n            \n            # Effector position relative to the current joint (global frame).\n            rel_effpos = anim_.gpos[:,-1] - anim_.gpos[:,parent]\n            # rel_effpos = quat.mul_vec(quat.inv(cur_quat), rel_effpos)\n            eff_vec = normalize_vector(rel_effpos)\n            \n            # Target position relative to the current joint (global frame).\n            rel_tgtpos = tgt_pos - anim_.gpos[:,parent]\n            # rel_tgtpos = quat.mul_vec(quat.inv(cur_quat), rel_tgtpos)\n            tgt_vec = normalize_vector(rel_tgtpos)\n            \n            rel_quat = quat.normalize(quat.between(eff_vec, tgt_vec))\n            anim_.quats[:,parent] = quat.abs(quat.normalize(quat.mul(rel_quat, cur_quat)))\n    \n    return anim_\n\nif __name__==\"__main__\":\n    \"\"\"Usage example.\"\"\"\n    \n    # Define a simple joint chain.\n    joints = []\n    joints.append(Joint(name=\"ROOT\", parent=-1, offset=np.zeros(3), root=True, dof=6))\n    joints.append(Joint(name=\"J1\", parent= 0, offset=np.array([1, 0, 0])))\n    joints.append(Joint(name=\"J2\", parent= 1, offset=np.array([1, 0, 0])))\n    joints.append(Joint(name=\"J3\", parent= 
2, offset=np.array([1, 0, 0])))\n    #    Root       J1         J2         J3\n    #     *----------*----------*----------*\n    # (0, 0, 0)  (1, 0, 0)  (2, 0, 0)  (3, 0, 0)\n\n\n    skel = Skel(joints=joints, skel_name=\"sample_skel\")\n    anim = Animation.no_animation(skel, num_frame=2, anim_name=\"sample_anim\")\n    \n    # first and second frame\n    target_position = np.array([[1, 1, 1], [1, 2, 1]])\n    \n    new_anim = simple_ccd_ik(anim, target_position)\n    \n    fig = plt.figure(figsize=(11, 7))\n    \n    # first frame\n    ax1 = fig.add_subplot(121, projection=\"3d\")\n    ax1.set_xlim3d(-2, 2)\n    ax1.set_ylim3d(-2, 2)\n    ax1.set_zlim3d(-2, 2)\n    ax1.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[0]\n    poss = new_anim.gpos[0]\n    tgt  = target_position[0]\n    \n    ax1.plot(orig[:,0], orig[:,1], orig[:,2], alpha=0.4)\n    ax1.scatter(orig[:,0], orig[:,1], orig[:,2], alpha=0.4, label=\"orig\")\n    ax1.plot(poss[:,0], poss[:,1], poss[:,2])\n    ax1.scatter(poss[:,0], poss[:,1], poss[:,2], label=\"calc\")\n    ax1.scatter(tgt[0], tgt[1], tgt[2], s=50, marker=\"*\", label=\"target\")\n    ax1.legend()\n\n    # second frame\n    ax2 = fig.add_subplot(122, projection=\"3d\")\n    ax2.set_xlim3d(-2, 2)\n    ax2.set_ylim3d(-2, 2)\n    ax2.set_zlim3d(-2, 2)\n    ax2.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[1]\n    poss = new_anim.gpos[1]\n    tgt  = target_position[1]\n    \n    ax2.plot(orig[:,0], orig[:,1], orig[:,2], alpha=0.4)\n    ax2.scatter(orig[:,0], orig[:,1], orig[:,2], alpha=0.4, label=\"orig\")\n    ax2.plot(poss[:,0], poss[:,1], poss[:,2])\n    ax2.scatter(poss[:,0], poss[:,1], poss[:,2], label=\"calc\")\n    ax2.scatter(tgt[0], tgt[1], tgt[2], s=50, marker=\"*\", label=\"target\")\n    ax2.legend()\n    \n    plt.show()"
  },
  {
    "path": "anim/inverse_kinematics/fabrik.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nROOT = Path(__file__).resolve().parent.parent.parent\nsys.path.append(str(ROOT))\n\nimport copy\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom util import quat\nfrom anim.skel import Joint, Skel\nfrom anim.animation import Animation\n\ndef normalize_vector(vector: np.ndarray):\n    norm = np.linalg.norm(vector, axis=-1, keepdims=True)\n    norm = np.where(norm==0, 1e-10, norm)\n    return vector / norm\n\ndef backward_reaching(\n    anim: Animation,\n    tgt_pos: np.ndarray,\n) -> Animation:\n    \n    joint_idxs = anim.skel.indices\n    target = tgt_pos.copy() # (T, 3)\n    cur_gposs = anim.gpos.copy()\n    bone_lengths = anim.skel.bone_lengths\n    for j, parent in zip(reversed(joint_idxs), reversed(anim.parents)):\n        if parent == -1:\n            continue\n        cur_quat = anim.quats[:,parent] # (T, 4)\n        \n        # Pre-update child joint position relative to the current joint (global frame).\n        rel_chpos = cur_gposs[:,j] - cur_gposs[:,parent]\n        ch_vec = normalize_vector(rel_chpos)\n        \n        # Post-update child joint position relative to the current joint (global frame).\n        rel_tgtpos = target - cur_gposs[:, parent]\n        tgt_vec = normalize_vector(rel_tgtpos)\n        \n        rel_quat = quat.normalize(quat.between(ch_vec, tgt_vec))\n        anim.quats[:,parent] = quat.abs(quat.normalize(quat.mul(rel_quat, cur_quat)))\n        \n        # When the parent is the root, update the root translation instead.\n        if parent == 0:\n            anim.trans = target - tgt_vec * bone_lengths[j]\n        else:\n            # Move the target to the updated parent position.\n            target = target - tgt_vec * bone_lengths[j]\n        \n    return anim\n\ndef forward_reaching(\n    anim: Animation,\n    base: np.ndarray,\n) -> Animation:\n    \n    joint_idxs = np.arange(len(anim.skel))\n    target = base.copy() # (T, 3)\n    past_gposs = anim.gpos.copy()\n    \n    for j, parent in zip(joint_idxs, anim.parents):\n        if parent == -1:\n            continue\n        elif parent 
== 0:\n            anim.trans = target\n            continue\n        cur_quat = anim.quats[:,parent] # (T, 4)\n        \n        # Vector from the old parent to the old child joint position.\n        past_chpos = past_gposs[:,j] - past_gposs[:,parent]\n        pa_vec = normalize_vector(past_chpos)\n        \n        # Vector from the updated parent to the old child joint position.\n        cur_chpos = past_gposs[:,j] - anim.gpos[:,parent]\n        cur_vec = normalize_vector(cur_chpos)\n        \n        rel_quat = quat.normalize(quat.between(pa_vec, cur_vec))\n        anim.quats[:,parent] = quat.abs(quat.normalize(quat.mul(rel_quat, cur_quat)))\n        \n    return anim\n\n# First implementation for a single end effector (the skeleton has exactly one leaf joint).\ndef simple_fabrik(\n    anim: Animation,\n    tgt_pos: np.ndarray,\n    iter: int=10,\n) -> Animation:\n    \n    anim_ = copy.deepcopy(anim)\n    # define base(root) positions.\n    base = anim.trans.copy() # (T, 3)\n    \n    for i in range(iter):\n        anim_ = backward_reaching(anim_, tgt_pos)\n        anim_ = forward_reaching(anim_, base)\n    \n    return anim_\n\nif __name__==\"__main__\":\n    \"\"\"Usage example.\"\"\"\n    \n    # Define a simple joint chain.\n    joints = []\n    joints.append(Joint(name=\"ROOT\", parent=-1, offset=np.zeros(3), root=True, dof=6))\n    joints.append(Joint(name=\"J1\", parent= 0, offset=np.array([1, 0, 0])))\n    joints.append(Joint(name=\"J2\", parent= 1, offset=np.array([1, 0, 0])))\n    joints.append(Joint(name=\"J3\", parent= 2, offset=np.array([1, 0, 0])))\n    #    Root       J1         J2         J3\n    #     *----------*----------*----------*\n    # (0, 0, 0)  (1, 0, 0)  (2, 0, 0)  (3, 0, 0)\n\n\n    skel = Skel(joints=joints, skel_name=\"sample_skel\")\n    anim = Animation.no_animation(skel, num_frame=2, anim_name=\"sample_anim\")\n    \n    # target positions for first frame and second frame.\n    tgts = np.array([\n        [1, 1, 1], \n        [1, 2, 1]]\n    )\n    \n    new_anim = simple_fabrik(anim, tgts)\n    \n    fig = plt.figure(figsize=(11, 7))\n    \n    # first frame\n    ax1 = 
fig.add_subplot(121, projection=\"3d\")\n    ax1.set_xlim3d(-2, 2)\n    ax1.set_ylim3d(-2, 2)\n    ax1.set_zlim3d(-2, 2)\n    ax1.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[0]\n    poss = new_anim.gpos[0]\n    tgt  = tgts[0]\n    \n    ax1.plot(orig[:,0], orig[:,1], orig[:,2], alpha=0.4)\n    ax1.scatter(orig[:,0], orig[:,1], orig[:,2], alpha=0.4, label=\"orig\")\n    ax1.plot(poss[:,0], poss[:,1], poss[:,2])\n    ax1.scatter(poss[:,0], poss[:,1], poss[:,2], label=\"calc\")\n    ax1.scatter(tgt[0], tgt[1], tgt[2], s=50, marker=\"*\", label=\"target\")\n    ax1.legend()\n\n    # second frame\n    ax2 = fig.add_subplot(122, projection=\"3d\")\n    ax2.set_xlim3d(-2, 2)\n    ax2.set_ylim3d(-2, 2)\n    ax2.set_zlim3d(-2, 2)\n    ax2.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[1]\n    poss = new_anim.gpos[1]\n    tgt  = tgts[1]\n    \n    ax2.plot(orig[:,0], orig[:,1], orig[:,2], alpha=0.4)\n    ax2.scatter(orig[:,0], orig[:,1], orig[:,2], alpha=0.4, label=\"orig\")\n    ax2.plot(poss[:,0], poss[:,1], poss[:,2])\n    ax2.scatter(poss[:,0], poss[:,1], poss[:,2], label=\"calc\")\n    ax2.scatter(tgt[0], tgt[1], tgt[2], s=50, marker=\"*\", label=\"target\")\n    ax2.legend()\n    \n    plt.savefig(\"demo.png\")"
  },
  {
    "path": "anim/inverse_kinematics/jacobi_ik.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nROOT = Path(__file__).resolve().parent.parent.parent\nsys.path.append(str(ROOT))\n\nimport copy\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom util import quat\nfrom anim.skel import Joint, Skel\nfrom anim.animation import Animation\n\ndef simple_jacobi_ik(\n    anim: Animation,\n    tgt_pos: np.ndarray,\n    iter: int=10,\n) -> Animation:\n    # TODO: Jacobian (pseudo-inverse) IK is not implemented yet.\n    raise NotImplementedError\n\nif __name__==\"__main__\":\n    \"\"\"Usage example.\"\"\"\n    \n    # Define a simple joint chain.\n    joints = []\n    joints.append(Joint(name=\"ROOT\", index=0, parent=-1, offset=np.zeros(3), root=True, dof=6))\n    joints.append(Joint(name=\"J1\", index=1, parent= 0, offset=np.array([1, 0, 0])))\n    joints.append(Joint(name=\"J2\", index=2, parent= 1, offset=np.array([1, 0, 0])))\n    joints.append(Joint(name=\"J3\", index=3, parent= 2, offset=np.array([1, 0, 0])))\n    #    Root       J1         J2         J3\n    #     *----------*----------*----------*\n    # (0, 0, 0)  (1, 0, 0)  (2, 0, 0)  (3, 0, 0)\n\n\n    skel = Skel(joints=joints, skel_name=\"sample_skel\")\n    anim = Animation.no_animation(skel, num_frame=2, anim_name=\"sample_anim\")\n    \n    # target positions for first frame and second frame.\n    tgts = np.array([\n        [1, 1, 1], \n        [1, 2, 1]]\n    )\n    \n    new_anim = simple_jacobi_ik(anim, tgts)\n    \n    fig = plt.figure(figsize=(11, 7))\n    \n    # first frame\n    ax1 = fig.add_subplot(121, projection=\"3d\")\n    ax1.set_xlim3d(-2, 2)\n    ax1.set_ylim3d(-2, 2)\n    ax1.set_zlim3d(-2, 2)\n    ax1.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[0]\n    poss = new_anim.gpos[0]\n    tgt  = tgts[0]\n    \n    ax1.plot(poss[:,0], poss[:,1], poss[:,2])\n    ax1.scatter(poss[:,0], poss[:,1], poss[:,2], label=\"calc\")\n    ax1.plot(orig[:,0], orig[:,1], orig[:,2])\n    ax1.scatter(orig[:,0], orig[:,1], orig[:,2], label=\"orig\")\n    ax1.scatter(tgt[0], tgt[1], tgt[2], label=\"target\")\n    
ax1.legend()\n\n    # second frame\n    ax2 = fig.add_subplot(122, projection=\"3d\")\n    ax2.set_xlim3d(-2, 2)\n    ax2.set_ylim3d(-2, 2)\n    ax2.set_zlim3d(-2, 2)\n    ax2.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[1]\n    poss = new_anim.gpos[1]\n    tgt  = tgts[1]\n    \n    ax2.plot(poss[:,0], poss[:,1], poss[:,2])\n    ax2.scatter(poss[:,0], poss[:,1], poss[:,2], label=\"calc\")\n    ax2.plot(orig[:,0], orig[:,1], orig[:,2])\n    ax2.scatter(orig[:,0], orig[:,1], orig[:,2], label=\"orig\")\n    ax2.scatter(tgt[0], tgt[1], tgt[2], label=\"target\")\n    ax2.legend()\n    \n    plt.show()"
  },
  {
    "path": "anim/inverse_kinematics/two_bone_ik.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nROOT = Path(__file__).resolve().parent.parent.parent\nsys.path.append(str(ROOT))\n\nimport copy\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom util import quat\nfrom anim.skel import Joint, Skel\nfrom anim.animation import Animation\n\ndef normalize(tensor: np.ndarray, axis: int=-1) -> np.ndarray:\n    \"\"\"Return the tensor normalized along the `axis` dimension.\n    args: \n        tensor: tensor to normalize -> tensor / norm(tensor, axis=axis)\n        axis: dimension along which to normalize.\n    \"\"\"\n    norm = np.linalg.norm(tensor, axis=axis, keepdims=True)\n    norm = np.where(norm==0, 1e-10, norm)\n    return tensor / norm\n    \n\ndef two_bone_ik(\n    anim: Animation,\n    base_joint: int,\n    middle_joint: int,\n    eff_joint: int,\n    tgt_pos: np.ndarray,\n    fwd: np.ndarray,\n    max_length_buffer: float=1e-3,\n) -> Animation:\n    \"\"\"Simple two bone IK method. This algorithm can be solved analytically.\n    args:\n        anim: Animation\n        base_joint: index of base (fixed) joint.\n        middle_joint: index of middle joint.\n        eff_joint: index of effector joint.\n        tgt_pos: (T, 3), target positions for effector.\n        fwd: (T, 3), forward vectors for middle joint.\n        max_length_buffer: small slack to avoid numerical errors at full extension.\n    \"\"\"\n    anim = copy.deepcopy(anim)\n    tgt = tgt_pos.copy()\n    \n    # Check that the three joints form a chain.\n    assert anim.parents[eff_joint] == middle_joint and \\\n           anim.parents[middle_joint] == base_joint\n    \n    # Maximum reach of the two bones.\n    lbm: float = np.linalg.norm(anim.offsets[middle_joint], axis=-1)\n    lme: float = np.linalg.norm(anim.offsets[eff_joint], axis=-1)\n    max_length: float = lbm + lme - max_length_buffer\n    \n    bt_poss = tgt - anim.gpos[:, base_joint] # (T, 3)\n    bt_lengths = np.linalg.norm(bt_poss, axis=-1) # (T)\n    # Frames where the target is out of reach.\n    error_frame = np.where(bt_lengths > max_length)\n    # Unit vector from base to target.\n    bt_vec = 
normalize(bt_poss) # (T, 3)\n    # Clamp the target only on the unreachable frames.\n    tgt[error_frame] = anim.gpos[error_frame, base_joint] + \\\n                           bt_vec[error_frame] * max_length\n    \n    # get normal vector for axis of rotations.\n    axis_rot = np.cross(bt_vec, fwd, axis=-1)\n    axis_rot = normalize(axis_rot) # (T, 3)\n    \n    # get the relative positions.\n    be_poss = anim.gpos[:, eff_joint] - anim.gpos[:, base_joint]\n    be_vec = normalize(be_poss) # (T, 3)\n    bm_poss = anim.gpos[:, middle_joint] - anim.gpos[:, base_joint]\n    bm_vec = normalize(bm_poss) # (T, 3)\n    me_poss = anim.gpos[:, eff_joint] - anim.gpos[:, middle_joint]\n    me_vec = normalize(me_poss) # (T, 3)\n    \n    # get the current interior angles of the base and middle joints.\n    # einsum computes the per-frame dot product.\n    be_bm_0 = np.arccos(np.clip(np.einsum(\"ij,ij->i\", be_vec, bm_vec), -1, 1)) # (T,)\n    mb_me_0 = np.arccos(np.clip(np.einsum(\"ij,ij->i\", -bm_vec, me_vec), -1, 1)) # (T,)\n    \n    # get the new interior angles of the base and middle joints (law of cosines).\n    lbt = np.linalg.norm(tgt - anim.gpos[:, base_joint], axis=-1)\n    be_bm_1 = np.arccos(np.clip((lbm * lbm + lbt * lbt - lme * lme)/(2 * lbm * lbt) , -1, 1)) # (T,)\n    mb_me_1 = np.arccos(np.clip((lbm * lbm + lme * lme - lbt * lbt)/(2 * lbm * lme) , -1, 1)) # (T,)\n    \n    r0 = quat.from_angle_axis(be_bm_1 - be_bm_0, axis_rot)\n    r1 = quat.from_angle_axis(mb_me_1 - mb_me_0, axis_rot)\n    \n    # Rotation that swings the effector toward the target direction.\n    r2 = quat.from_angle_axis(\n        np.arccos(np.clip(np.einsum(\"ij,ij->i\", be_vec, bt_vec), -1, 1)),\n        normalize(np.cross(be_vec, bt_vec))\n    )\n    \n    anim.quats[:, base_joint] = quat.mul(r2, quat.mul(r0, anim.quats[:, base_joint]))\n    anim.quats[:, middle_joint] = quat.mul(r1, anim.quats[:, middle_joint])\n    \n    return anim\n\nif __name__==\"__main__\":\n    \"\"\"Usage example.\"\"\"\n    \n    # Define a simple joint chain.\n    joints = []\n    joints.append(Joint(name=\"ROOT\", parent=-1, 
offset=np.array([0, 2, 0]), root=True, dof=6))\n    joints.append(Joint(name=\"J1\", parent= 0, offset=np.array([0, -1, 0.1])))\n    joints.append(Joint(name=\"J2\", parent= 1, offset=np.array([0, -1, -0.1])))\n    joints.append(Joint(name=\"J3\", parent= 2, offset=np.array([0, 0, 0.2])))\n\n    skel = Skel(joints=joints, skel_name=\"sample_skel\")\n    anim = Animation.no_animation(skel, num_frame=2, anim_name=\"sample_anim\")\n    anim.trans = np.array([\n        [0, 2, 0],\n        [0, 2, 0]\n    ])\n    \n    # target positions for first frame and second frame.\n    tgts = np.array([\n        [0, 1, 0], \n        [0.2, 0.8, 0.2]\n    ])\n    \n    fwd = np.array([\n        [0.2, 0, 1],\n        [0.4, 0.2, 0.5]\n    ])\n    \n    fwd_ = normalize(fwd) * 0.2\n    \n    new_anim = two_bone_ik(\n        anim=anim, \n        base_joint=0, \n        middle_joint=1, \n        eff_joint=2,\n        tgt_pos=tgts,\n        fwd=fwd)\n    \n    fig = plt.figure(figsize=(11, 7))\n    \n    # first frame\n    ax1 = fig.add_subplot(121, projection=\"3d\")\n    ax1.set_xlim3d(-1, 1)\n    ax1.set_ylim3d(-1, 1)\n    ax1.set_zlim3d( 0, 2)\n    ax1.set_box_aspect((1,1,1))\n    \n    orig = anim.gpos[0]\n    poss = new_anim.gpos[0]\n    tgt  = tgts[0]\n    \n    ax1.plot(orig[:,0], -orig[:,2], orig[:,1], alpha=0.4)\n    ax1.scatter(orig[:,0], -orig[:,2], orig[:,1], label=\"orig\", alpha=0.4)\n    ax1.plot(poss[:,0], -poss[:,2], poss[:,1])\n    ax1.scatter(poss[:,0], -poss[:,2], poss[:,1], label=\"calc\")\n    ax1.plot(\n        [poss[1,0], poss[1,0]+fwd_[0, 0]], \n        [-poss[1,2], -poss[1,2]-fwd_[0, 2]], \n        [poss[1,1], poss[1,1]+fwd_[0, 1]], label=\"fwd\")\n    ax1.scatter(tgt[0], -tgt[2], tgt[1], s=80, color=\"red\", marker=\"*\", label=\"target\")\n    ax1.legend()\n\n    # second frame\n    ax2 = fig.add_subplot(122, projection=\"3d\")\n    ax2.set_xlim3d(-1, 1)\n    ax2.set_ylim3d(-1, 1)\n    ax2.set_zlim3d(0, 2)\n    ax2.set_box_aspect((1,1,1))\n    \n    orig = 
anim.gpos[1]\n    poss = new_anim.gpos[1]\n    tgt  = tgts[1]\n    \n    ax2.plot(orig[:,0], -orig[:,2], orig[:,1], alpha=0.4)\n    ax2.scatter(orig[:,0], -orig[:,2], orig[:,1], label=\"orig\", alpha=0.4)\n    ax2.plot(poss[:,0], -poss[:,2], poss[:,1])\n    ax2.scatter(poss[:,0], -poss[:,2], poss[:,1], label=\"calc\")\n    ax2.plot(\n        [poss[1,0], poss[1,0]+fwd_[1, 0]], \n        [-poss[1,2], -poss[1,2]-fwd_[1, 2]], \n        [poss[1,1], poss[1,1]+fwd_[1, 1]], label=\"fwd\")\n    ax2.scatter(tgt[0], -tgt[2], tgt[1], s=80, color=\"red\", marker=\"*\", label=\"target\")\n    ax2.legend()\n    \n    plt.show()"
  },
  {
    "path": "anim/keyframe.py",
    "content": "from __future__ import annotations\n\nfrom dataclasses import dataclass\nimport numpy as np\n\nfrom anim.skel import Skel\n\n@dataclass\nclass KeyFrame:\n    frame: int\n    joint: str | int\n    rotation: np.ndarray = None\n    position: np.ndarray = None\n    interp: str = \"linear\" # interpolation method.\n\n\nclass KeyFrameAnimation:\n    def __init__(\n        self,\n        skel: Skel,\n        keys: list[KeyFrame],\n        fps: int,\n        anim_name: str=None,\n        positions: np.ndarray=None,\n    ) -> None:\n        self.skel = skel\n        # TBD"
  },
  {
    "path": "anim/motion_matching/database.py",
    "content": "from __future__ import annotations\n\nfrom dataclasses import dataclass\nimport numpy as np\nfrom scipy.spatial import KDTree\nfrom util import quat\nfrom anim.skel import Skel\nfrom anim.animation import Animation\n\nclass Database(Animation):\n    def __init__(\n        self, \n        anims: list[Animation]=None, \n        skel: Skel=None,\n        quats: np.ndarray=None,\n        trans: np.ndarray=None,\n        fps: int=None,\n        starts: list[int]=None,\n        names: list[str]=None,\n        db_name: str=None,\n    ) -> None:\n        self.name = db_name\n        \n        if anims is not None:\n            self.skel = anims[0].skel # all `Skel` must be the same.\n            self.quats = np.concatenate([anim.quats for anim in anims], axis=0)\n            self.trans = np.concatenate([anim.trans for anim in anims], axis=0)\n            self.fps = anims[0].fps # all fps must be the same.\n            self.starts = []\n            self.ends = []\n            self.names = []\n            cur_frame = 0\n            for anim in anims:\n                self.starts.append(cur_frame)\n                cur_frame += len(anim)\n                self.ends.append(cur_frame-1)\n                self.names.append(anim.name)\n        else:\n            self.skel = skel\n            self.quats = quats\n            self.trans = trans\n            self.fps = fps\n            self.starts = starts\n            self.names = names\n        \n        self.num_anim = len(self.starts)\n        \n    def __len__(self) -> int: return len(self.trans)\n    \n    def __add__(self, other: Database) -> Database:\n        assert isinstance(other, Database)\n        starts: list[int] = self.starts.copy()\n        ends: list[int] = self.ends.copy()\n        names:  list[str] = self.names.copy()\n        starts.extend(list(map(lambda x: x+len(self), other.starts)))\n        ends.extend(list(map(lambda x: x+len(self), other.ends)))\n        names.extend(other.names)\n        
quats: np.ndarray = np.concatenate([self.quats, other.quats], axis=0)\n        trans: np.ndarray = np.concatenate([self.trans, other.trans], axis=0)\n        \n        return Database(\n            skel = self.skel,\n            quats=quats,\n            trans=trans,\n            fps=self.fps,\n            starts=starts,\n            names=names,\n            db_name=self.name,\n        )\n    \n    def cat(self, other: Database) -> None:\n        assert isinstance(other, Database)\n        self.starts.extend(list(map(lambda x: x+len(self), other.starts)))\n        self.ends.extend(list(map(lambda x: x+len(self), other.ends)))\n        self.names.extend(other.names)\n        self.quats = np.concatenate([self.quats, other.quats], axis=0)\n        self.trans = np.concatenate([self.trans, other.trans], axis=0)\n\n    # ============\n    #   Velocity\n    # ============\n    @property\n    def gposvel(self) -> np.ndarray:\n        gpos : np.ndarray = self.gpos\n        gpvel: np.ndarray = np.zeros_like(gpos)\n        gpvel[1:] = (gpos[1:] - gpos[:-1]) * self.fps # relative position from previous frame.\n        for start in self.starts:\n            gpvel[start] = gpvel[start+1] - (gpvel[start+3] - gpvel[start+2])\n        return gpvel\n    \n    @property\n    def cposvel(self) -> np.ndarray:\n        cpos : np.ndarray = self.cpos\n        cpvel: np.ndarray = np.zeros_like(cpos)\n        cpvel[1:] = (cpos[1:] - cpos[:-1]) * self.fps # relative position from previous frame.\n        for start in self.starts:\n            cpvel[start] = cpvel[start+1] - (cpvel[start+3] - cpvel[start+2])\n        return cpvel\n\n    @property\n    def lrotvel(self) -> np.ndarray:\n        \"\"\"Calculate rotation velocities with rotation vector style.\"\"\"    \n        \n        lrot : np.ndarray = self.quats.copy()\n        lrvel: np.ndarray = np.zeros_like(self.lpos)\n        lrvel[1:] = quat.to_axis_angle(\n            quat.abs(\n                quat.mul(lrot[1:], 
quat.inv(lrot[:-1])))\n            ) * self.fps\n        for start in self.starts:\n            lrvel[start] = lrvel[start+1] - (lrvel[start+3] - lrvel[start+2])\n        return lrvel\n    \n    # ===================\n    #  Future trajectory\n    # ===================\n    def clamp_future_idxs(self, offset: int) -> np.ndarray:\n        \"\"\"Function to calculate the frame array for `offset` frame ahead.\n        If `offset` frame ahead does not exist, \n        return the last frame in each animation.\n        \"\"\"\n        idxs = np.arange(len(self)) + offset\n        for end in self.ends:\n            idxs = np.where((idxs > end) & (idxs <= end + offset), end, idxs)\n        return idxs\n\n@dataclass\nclass MatchingDatabase:\n    features: np.ndarray # [T, num_features]\n    means: np.ndarray    # [num_features]\n    stds: np.ndarray     # [num_features]\n    indices: list[int]\n\n    # The elements belows are used only AABB.\n    dense_bound_size: int = None\n    dense_bound_mins: np.ndarray = None # [num_bounds, num_features]\n    dense_bound_maxs: np.ndarray = None # [num_bounds, num_features]\n    sparse_bound_size: int = None\n    sparse_bound_mins: np.ndarray = None # [num_bounds, num_features]\n    sparse_bound_maxs: np.ndarray = None # [num_bounds, num_features]\n    \n    # kd-tree\n    kdtree: KDTree = None\n    \n    def __len__(self):\n        return len(self.features)"
  },
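The `gposvel` / `cposvel` / `lrotvel` properties in `database.py` share one pattern: a backward finite difference scaled by `fps`, with the first frame of each clip filled in by linear extrapolation so no difference is ever taken across a clip boundary. A standalone sketch of that pattern (the function name is illustrative, not part of the repo; it assumes every clip is at least four frames long):

```python
import numpy as np

def finite_diff_velocity(pos: np.ndarray, fps: float, starts: list[int]) -> np.ndarray:
    """Backward finite difference per frame. The first frame of each clip
    (listed in `starts`) is linearly extrapolated from the frames after it,
    so the unrelated previous clip never leaks into the estimate."""
    vel = np.zeros_like(pos)
    vel[1:] = (pos[1:] - pos[:-1]) * fps
    for start in starts:
        # v[s] = v[s+1] - (v[s+3] - v[s+2]): extrapolate one step backwards.
        vel[start] = vel[start + 1] - (vel[start + 3] - vel[start + 2])
    return vel
```

With two clips of different constant speeds concatenated, the boundary frame picks up its own clip's speed instead of a spurious cross-clip spike.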
  {
    "path": "anim/motion_matching/mm.py",
    "content": "from __future__ import annotations\n\nimport sys\nimport numpy as np\nfrom scipy.spatial import KDTree\n# import taichi as ti\n\nfrom anim.motion_matching.database import Database, MatchingDatabase\n\ndef normalize_features(features: np.ndarray, weights: np.ndarray, axis=0) -> tuple:\n    \"\"\"Normalize function for matching features. (features: [T, num_features])\"\"\"\n    means = np.mean(features, axis=axis) # [num_feat]\n    stds = np.std(features, axis=axis)   # [num_feat]\n    scales = np.where(stds==0, 1e-10, stds / weights)\n    norms = (features - means[None]) / scales[None]\n    return norms, means, scales\n\ndef normalize_query(query: np.ndarray, means:np.ndarray, stds:np.ndarray) -> np.ndarray:\n    \"\"\"Normalize function for `query`. \n       `means` and `stds` are calcurated by normalize_features().\n    \"\"\"\n    return (query - means) / stds\n\ndef calc_box_distance(\n    best_cost: float,\n    query: np.ndarray,    # [n_features]\n    box_mins: np.ndarray, # [n_features]\n    box_maxs: np.ndarray, # [n_features]\n) -> tuple[float, bool]:\n    \"\"\"Calculate distance between `query` to an AABB.\"\"\"\n    cost = 0.0\n    smaller = True\n    for i, feat in enumerate(query):\n        cost += np.square(feat - np.clip(feat, box_mins[i], box_maxs[i]))\n        if cost >= best_cost:\n            smaller = False\n            break\n    return cost, smaller\n    \ndef create_position_features(db: Database) -> np.ndarray:\n    left_foot_idx = db.skel.names.index(\"LeftFoot\")\n    right_foot_idx = db.skel.names.index(\"RightFoot\")\n    features = db.cpos[:, [left_foot_idx, right_foot_idx]].reshape(len(db), -1) # (T, 2 * 3)\n    return features\n\ndef create_velocity_features(db: Database) -> np.ndarray:\n    left_foot_idx = db.skel.names.index(\"LeftFoot\")\n    right_foot_idx = db.skel.names.index(\"RightFoot\")\n    hips_idx = db.skel.names.index(\"Hips\")\n    features = db.cposvel[:, [left_foot_idx, right_foot_idx, 
hips_idx]].reshape(len(db), -1) # (T, 3 * 3)\n    return features\n\ndef create_traj_features(db: Database) -> np.ndarray:\n    traj_pos0 = db.future_traj_poss(20, remove_vertical=True)\n    traj_pos1 = db.future_traj_poss(40, remove_vertical=True)\n    traj_pos2 = db.future_traj_poss(60, remove_vertical=True)\n    pos_features = np.concatenate([traj_pos0, traj_pos1, traj_pos2], axis=-1) # [T, 2*3]\n    \n    traj_dir0 = db.future_traj_dirs(20, remove_vertical=True)\n    traj_dir1 = db.future_traj_dirs(40, remove_vertical=True)\n    traj_dir2 = db.future_traj_dirs(60, remove_vertical=True)\n    dir_features = np.concatenate([traj_dir0, traj_dir1, traj_dir2], axis=-1) # [T, 2*3]\n    \n    features = np.concatenate([pos_features, dir_features], axis=-1) # [T, 2*3 + 2*3]\n    return features\n\ndef create_matching_features(\n    db: Database,\n    w_pos_foot: float,\n    w_vel_foot: float,\n    w_vel_hips: float,\n    w_traj_pos: float,\n    w_traj_dir: float,\n    ignore_end: int\n) -> tuple[np.ndarray, np.ndarray, np.ndarray, list[int]]:\n    \"\"\"Create the normalized feature matrix used to find the best frame with motion matching.\n       Each feature block is calculated with `create_***_features()` above,\n       and this function concatenates and normalizes them.\n    \"\"\"\n    pos_feats = create_position_features(db)\n    vel_feats = create_velocity_features(db)\n    traj_feats = create_traj_features(db)\n    features = np.concatenate([pos_feats, vel_feats, traj_feats], axis=-1)\n    \n    # Remove the last `ignore_end` frames of each clip, and build a list\n    # that maps the remaining (mdb) indices back to db indices.\n    indices = []\n    rmv_starts = []\n    for i, (start, end) in enumerate(zip(db.starts, db.ends)):\n        rmv_starts.append(end - ignore_end * (i + 1) + 1)\n        \n        for j in range(start, end - ignore_end + 1):\n            indices.append(j)\n    for st in rmv_starts:\n        features = np.delete(features, slice(st, st+ignore_end), axis=0) # np.delete returns a copy.\n    \n    # normalize\n    weights = np.array(\n        [w_pos_foot] * 6 +\n        [w_vel_foot] * 6 +\n        [w_vel_hips] * 3 +\n        [w_traj_pos] * 6 +\n        [w_traj_dir] * 6\n    )\n    fnorms, means, stds = normalize_features(features, weights)\n    \n    return fnorms, means, stds, indices\n\ndef create_matching_aabb(features: np.ndarray, size: int | None) -> tuple[np.ndarray, np.ndarray]:\n    \"\"\"Create Axis-Aligned Bounding Boxes (AABBs) on the frame dim.\n       This speeds up the nearest-neighbour search.\n       Based on https://dl.acm.org/doi/abs/10.1145/3386569.3392440.\n    \"\"\"\n    num_frames = len(features)\n    bound_mins = []\n    bound_maxs = []\n    \n    num_bounds = (num_frames - 1) // size + 1 # ceil division\n    frame_bound_ends = [(i + 1) * size for i in range(num_bounds)]\n    frame_bound_ends[-1] = num_frames\n    cur_idx = 0\n    for end in frame_bound_ends:\n        bound_mins.append(np.min(features[cur_idx:end], axis=0, keepdims=True)) # [1, num_features]\n        bound_maxs.append(np.max(features[cur_idx:end], axis=0, keepdims=True))\n        cur_idx = end\n    \n    np_bound_mins = np.concatenate(bound_mins, axis=0) # [num_bounds, num_features]\n    np_bound_maxs = np.concatenate(bound_maxs, axis=0)\n    return np_bound_mins, np_bound_maxs\n\ndef create_matching_database(\n    db: Database,\n    method: str,\n    w_pos_foot: float,\n    w_vel_foot: float,\n    w_vel_hips: float,\n    w_traj_pos: float,\n    w_traj_dir: float,\n    ignore_end: int,\n    dense_bound_size: int = None, \n    sparse_bound_size: int = None,\n) -> MatchingDatabase:\n    \"\"\"Create a simple `MatchingDatabase`.\"\"\"\n    fnorms, means, stds, indices = create_matching_features(\n        db, w_pos_foot, w_vel_foot, w_vel_hips, w_traj_pos, w_traj_dir, ignore_end\n    )\n    if method == \"aabb\":\n        dense_bound_mins, dense_bound_maxs = create_matching_aabb(fnorms, dense_bound_size)\n        sparse_bound_mins, sparse_bound_maxs = create_matching_aabb(fnorms, sparse_bound_size)\n        return MatchingDatabase(\n            fnorms, means, stds, 
indices,\n            dense_bound_size, dense_bound_mins, dense_bound_maxs,\n            sparse_bound_size, sparse_bound_mins, sparse_bound_maxs,\n        )\n    \n    elif method == \"kd-tree\":\n        kdtree = KDTree(fnorms)\n        return MatchingDatabase(fnorms, means, stds, indices, kdtree=kdtree)\n    \n    elif method in [\"brute-force\", \"faiss\"]: \n        return MatchingDatabase(fnorms, means, stds, indices)\n    \n    else: \n        raise ValueError(\"unknown method: {}\".format(method))\n\n# def create_cost_matrix(\n#     features: np.ndarray, # [num_frames, num_features]\n#     method: str=\"taichi\",\n# ):\n#     num_frames, num_features = features.shape\n#     if method == \"taichi\":\n#         ti_features = ti.Vector.field(n=num_features, dtype=float, shape=(num_frames))\n#         ti_features.from_numpy(features)\n#         cost_mat = ti.field(float, shape=(num_frames, num_frames))\n#         cost_mat.fill(0)\n#         _create_cost_matrix(ti_features, cost_mat, num_features)\n#         return cost_mat.to_numpy()\n\n# @ti.kernel\n# def _create_cost_matrix(\n#     features: ti.template(),\n#     cost_mat: ti.template(),\n#     num_features: int\n# ):\n#     for i, j in cost_mat:\n#         for k in ti.static(range(num_features)):\n#             cost_mat[i, j] += (features[i][k] - features[j][k]) ** 2\n\ndef motion_matching_search(\n    cur_idx: int,\n    method: str, # [\"brute-force\", \"aabb\", \"kd-tree\", \"faiss\"]\n    mdb: MatchingDatabase,\n    query: np.ndarray, # [num_features]\n    norm_query: bool=True # whether `query` still needs to be normalized.\n) -> int:\n    \"\"\"Search for the best frame index in `MatchingDatabase`.\n       This function is called when\n       * the motion can be updated (every N frames),\n       * the motion must be updated (when the animation ends, `cur_idx`==-1),\n       etc.\n    \"\"\"\n    if norm_query:\n        qnorm = normalize_query(query, mdb.means, mdb.stds) # [num_features]\n    else: \n        qnorm = query\n\n    if cur_idx != 
-1: # can be updated\n        cur_idx = mdb.indices.index(cur_idx) # db index -> mdb index\n        # mdb.features[cur_idx] will be replaced with the controlled character's features.\n        best_cost: float = np.sum(np.square(mdb.features[cur_idx] - qnorm))\n    else: best_cost: float = sys.float_info.max # should be updated\n    \n    if method == \"brute-force\":\n        return brute_force_search(mdb, qnorm, cur_idx, best_cost)\n    elif method == \"aabb\":\n        return aabb_search(mdb, qnorm, cur_idx, best_cost)\n    elif method == \"kd-tree\":\n        return kd_search(mdb, qnorm, cur_idx, best_cost)\n    elif method == \"faiss\":\n        return faiss_search(mdb, qnorm, cur_idx, best_cost)\n    else:\n        raise ValueError(\"unknown method: {}\".format(method))\n\ndef brute_force_search(\n    mdb: MatchingDatabase,\n    qnorm: np.ndarray, # [num_features]\n    cur_idx: int,\n    best_cost: float,\n) -> int:\n    costs = np.sum(np.square(mdb.features - qnorm[None]), axis=-1) # broadcast over frames.\n    cur_cost = np.min(costs)\n    if cur_cost < best_cost:\n        return mdb.indices[np.argmin(costs)] # mdb index -> db index\n    else:\n        return mdb.indices[cur_idx]\n\ndef aabb_search(\n    mdb: MatchingDatabase,\n    qnorm: np.ndarray, # [num_features]\n    cur_idx: int,\n    best_cost: float,\n) -> int:\n    i = 0\n    best_idx = cur_idx\n    # for all sparse boxes\n    while i < len(mdb):\n        sparse_box_idx = i // mdb.sparse_bound_size # box index\n        i_sparse_next = (sparse_box_idx + 1) * mdb.sparse_bound_size # frame index on next sparse box\n        # Calculate distance between the query and a sparse bounding box.\n        cur_cost, smaller = calc_box_distance(\n            best_cost, qnorm, \n            mdb.sparse_bound_mins[sparse_box_idx], \n            mdb.sparse_bound_maxs[sparse_box_idx],\n        )\n        \n        if not smaller:\n            i = i_sparse_next\n            continue\n        \n        # for all dense boxes\n        while i < len(mdb) and i < i_sparse_next:\n            dense_box_idx = i // mdb.dense_bound_size # box index\n            i_dense_next = (dense_box_idx + 1) * mdb.dense_bound_size # frame index\n\n            # Calculate distance between the query and a dense bounding box.\n            cur_cost, smaller = calc_box_distance(\n                best_cost, qnorm, \n                mdb.dense_bound_mins[dense_box_idx], \n                mdb.dense_bound_maxs[dense_box_idx]\n            )\n\n            if not smaller:\n                i = i_dense_next\n                continue\n\n            # Calculate distance to each feature inside the box.\n            while i < len(mdb) and i < i_dense_next:\n                cur_cost = np.sum(np.square(qnorm - mdb.features[i]))\n                if cur_cost < best_cost:\n                    best_idx = i\n                    best_cost = cur_cost\n                \n                i += 1\n    return mdb.indices[best_idx] # mdb index -> db index\n\ndef kd_search(\n    mdb: MatchingDatabase,\n    qnorm: np.ndarray,\n    cur_idx: int,\n    best_cost: float,\n) -> int:\n    dist, index = mdb.kdtree.query(qnorm, k=1)\n    if dist ** 2 < best_cost: # query() returns a Euclidean distance; best_cost is squared.\n        return mdb.indices[index] # mdb index -> db index\n    else:\n        return mdb.indices[cur_idx]\n\ndef faiss_search(\n    mdb: MatchingDatabase,\n    qnorm: np.ndarray,\n    cur_idx: int,\n    best_cost: float,\n) -> int:\n    raise NotImplementedError(\"faiss search is not implemented yet.\")"
  },
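`aabb_search` interleaves two box levels and several running indices, which makes the pruning idea easy to lose. Below is a single-level restatement of the same idea (box bounds computed on the fly here, unlike the precomputed bounds in `MatchingDatabase`; the function name is illustrative): a box whose clamped distance to the query already exceeds the best squared cost so far cannot contain the nearest neighbour and is skipped wholesale.

```python
import numpy as np

def aabb_pruned_nn(features: np.ndarray, query: np.ndarray, box_size: int) -> tuple[int, float]:
    """Exact nearest neighbour (squared Euclidean) with AABB pruning."""
    best_idx, best_cost = -1, np.inf
    for lo in range(0, len(features), box_size):
        box = features[lo:lo + box_size]
        # Squared distance from the query to the box (zero if inside):
        # a lower bound on the cost of every frame in the box.
        box_cost = np.sum(np.square(query - np.clip(query, box.min(axis=0), box.max(axis=0))))
        if box_cost >= best_cost:
            continue  # no frame in this box can beat the current best.
        costs = np.sum(np.square(box - query), axis=-1)
        i = int(np.argmin(costs))
        if costs[i] < best_cost:
            best_idx, best_cost = lo + i, float(costs[i])
    return best_idx, best_cost
```

Because the box distance is a lower bound on every member's cost, the pruning never changes the result; it only skips work.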
  {
    "path": "anim/pose.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\nfrom util import quat\nfrom anim.skel import Skel\n\nclass Pose:\n    def __init__(\n        self,\n        skel: Skel,\n        quats: np.ndarray,\n        root_pos: np.ndarray,\n    ) -> None:\n        self.skel = skel\n        self.quats = quats #[J,]\n        self.root_pos = root_pos # [3,]\n        \n    def forward_kinematics(self):\n        if not hasattr(self, \"lpos\"):\n            offsets = self.skel.offsets\n            parents = self.skel.parents\n            offsets[0] = self.root_pos\n        else:\n            offsets = self.lpos\n        return quat.fk(self.quats, offsets, parents)\n    \n    def set_local_positions(self):\n        offsets = self.skel.offsets\n        offsets[0] = self.root_pos\n        self.lpos = offsets\n    \n    def set_global_positions(self):\n        _, self.global_positions = self.forward_kinematics()\n    \n    def set_global_rotations(self):\n        self.global_rotations, _ = self.forward_kinematics()\n    \n    def set_gpos_and_grot(self):\n        self.global_rotations, self.global_positions = \\\n            self.forward_kinematics()"
  },
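`Pose.forward_kinematics` delegates to `quat.fk`, which is not included in this dump. A minimal sketch of what such a routine presumably computes, assuming w-first unit quaternions and parents listed before their children (`qmul`, `qrot`, and `fk` are illustrative names, not the repo's API):

```python
import numpy as np

def qmul(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Hamilton product of two w-first quaternions."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def qrot(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by unit quaternion q: v + 2*cross(u, w*v + cross(u, v))."""
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, w * v + np.cross(u, v))

def fk(quats: np.ndarray, offsets: np.ndarray, parents: list[int]):
    """Global rotations and positions for one pose; offsets[0] is the root position.
    Each child offset is rotated by its parent's *global* rotation."""
    grots, gposs = [], []
    for i, pa in enumerate(parents):
        if pa == -1:
            grots.append(quats[i])
            gposs.append(offsets[i])
        else:
            grots.append(qmul(grots[pa], quats[i]))
            gposs.append(gposs[pa] + qrot(grots[pa], offsets[i]))
    return np.array(grots), np.array(gposs)
```

A 90-degree root rotation about z swings the whole chain: with two unit offsets along y, the end effector lands at (-2, 0, 0).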
  {
    "path": "anim/skel.py",
    "content": "from __future__ import annotations\n\nfrom dataclasses import dataclass\nimport numpy as np\n\n@dataclass\nclass Joint:\n    name: str\n    parent: int\n    offset: np.ndarray\n    root: bool = False\n    dof: int = 3\n\ndef axis_to_vector(axis: str):\n    match axis:\n        case \"x\":\n            return [1, 0, 0]\n        case \"y\":\n            return [0, 1, 0]\n        case \"z\":\n            return [0, 0, 1]\n        case \"-x\":\n            return [-1, 0, 0]\n        case \"-y\":\n            return [0, -1, 0]\n        case \"-z\":\n            return [0, 0, -1]\n        case _:\n            raise ValueError\n\nclass Skel:\n    def __init__(\n        self,\n        joints: list[Joint],\n        skel_name: str=\"skeleton\",\n        rest_forward: list[int]=[0, 0, 1],\n        rest_vertical: list[int]=[0, 1, 0],\n        forward_axis: str=\"z\",\n        vertical_axis: str=\"y\",\n    ) -> None:\n        \"\"\"Class for skeleton offset definition.\n\n        Args:\n            joints (list[Joint]): list of Joints.\n            skel_name (str, optional): name of skeleton. Defaults to \"skeleton\".\n            rest_forward  (list[int], optional): forward direction of the rest pose. Defaults to [0, 0, 1].\n            rest_vertical (list[int], optional): vertical direction of the rest pose. Defaults to [0, 1, 0].\n            forward_axis (str, optional): forward axis of the coodinates. Defaults to \"z\".\n            vertical_axis (str, optional): vertical axis of the coodinates. 
Defaults to \"y\".\n        \"\"\"\n        self.skel_name = skel_name\n        self.joints = joints\n        self.rest_forward = rest_forward\n        self.rest_vertical = rest_vertical\n        self.forward_axis = forward_axis\n        self.forward = axis_to_vector(forward_axis)\n        self.vertical_axis = vertical_axis\n        self.vertical = axis_to_vector(vertical_axis)\n    \n    def __len__(self) -> int:\n        return len(self.joints)\n    \n    def __getitem__(self, index: int | slice | str) -> Joint | list[Joint]:\n        return self.get_joint(index)\n    \n    @property\n    def indices(self) -> list[int]:\n        return [idx for idx in range(len(self))]\n    \n    @property\n    def parents(self) -> list[int]:\n        \"\"\"Get list of all joint's parent indices.\"\"\"\n        return [j.parent for j in self.joints]\n    \n    @property\n    def children(self) -> dict[int: list[int]]:\n        children_dict = {}\n        for i in range(len(self)):\n            children_dict[i] = []\n        for i, parent in enumerate(self.parents):\n            if parent == -1:\n                continue\n            children_dict[parent].append(i)\n        return children_dict\n    \n    @property\n    def names(self) -> list[str]:\n        \"\"\"Get all joints names.\"\"\"\n        return [j.name for j in self.joints]\n    \n    @property\n    def offsets(self) -> np.ndarray:\n        \"\"\"Offset coordinates of all joints (np.ndarray).\"\"\"\n        offsets: list[np.ndarray] = []\n        for joint in self.joints:\n            offsets.append(joint.offset)\n        return np.array(offsets)\n    \n    @property\n    def dofs(self) -> list[int]:\n        return [j.dof for j in self.joints]\n    \n    @property\n    def joint_depths(self) -> list[int]:\n        \"\"\"Get hierarchical distance between joints to ROOT.\"\"\"\n        def get_depth(joint_idx: int, cur_depth: int=0):\n            parent_idx = self.parents[joint_idx]\n            if  parent_idx != -1:\n 
               depth = cur_depth + 1\n                cur_depth = get_depth(parent_idx, depth)\n            return cur_depth\n        return [get_depth(idx) for idx in self.indices]\n    \n    @property\n    def bone_lengths(self) -> np.ndarray:\n        lengths = np.linalg.norm(self.offsets, axis=-1)\n        lengths[0] = 0\n        return lengths\n    \n    def get_joint(self, index: int | slice | str) -> Joint | list[Joint]:\n        \"\"\"Get Joint from index or slice or joint name.\"\"\"\n        if isinstance(index, str):\n            index: int = self.get_index_from_jname(index)\n        return self.joints[index]\n    \n    def get_index_from_jname(self, jname: str) -> int:\n        \"\"\"Get joint index from joint name.\"\"\"\n        jname_list = self.names\n        return jname_list.index(jname)\n    \n    def get_children(\n        self, \n        index: int | str,\n        return_idx: bool = False\n    ) -> list[Joint] | list[int]:\n        \"\"\"Get list of children joints or children indices.\n        args:\n            index: index of joint or name of joint.\n            return_idx: if True, return joint index.\n        return:\n            list of joints or list of indices (if return_idx).\n        \"\"\"\n        if isinstance(index, str):\n            index: int = self.get_index_from_jname(index)\n        \n        children_idx = self.children[index]\n        if return_idx:\n            return children_idx\n        else:\n            children = []\n            for child_idx in children_idx:\n                children.append(self.joints[child_idx])\n            return children\n    \n    def get_parent(\n        self, \n        index: int | str,\n        return_idx: bool = False\n    ) -> Joint | int | None:\n        \"\"\"Get parent joint or parent index.\n        args:\n            index (int | str): index of joint or name of joint.\n            return_idx (bool): if True, return joint index.\n        return:\n            None : if parent doesn't 
exists.\n            index (int): if return_idx = True.\n            joint (Joint): if return_idx = False.\n        \"\"\"\n        if isinstance(index, str):\n            _index: int = self.get_index_from_jname(index)\n        elif isinstance(index, int):\n            _index: int = index\n        \n        if return_idx: return self.parents[_index]\n        elif self.parents[_index] == -1:\n            return None\n        else:\n            return self.joints[self.parents[_index]]\n    \n    @staticmethod\n    def from_names_parents_offsets(\n        names: list[str],\n        parents: list[int],\n        offsets: np.ndarray,\n        skel_name: str=\"skeleton\",\n        rest_forward: list[int]=[0, 0, 1],\n        rest_vertical: list[int] = [0, 1, 0],\n        forward_axis: str = \"z\",\n        vertical_axis: str = \"y\"\n    ) -> Skel:\n        \"\"\"Construct new Skel from names, parents, offsets.\n        \n        args:\n            names(list[str]) : list of joints names.\n            parents(list[int]) : list of joints parents indices.\n            offsets(np.ndarray): joint offset relative coordinates from parent joints.\n            skel_name (str, optional): name of skeleton. Defaults to \"skeleton\".\n            rest_forward  (list[int], optional): forward direction of the rest pose. Defaults to [0, 0, 1].\n            rest_vertical (list[int], optional): vertical direction of the rest pose. Defaults to [0, 1, 0].\n            forward_axis (str, optional): forward axis of the coodinates. Defaults to \"z\".\n            vertical_axis (str, optional): vertical axis of the coodinates. 
Defaults to \"y\".\n        return:\n            Skel\n        \"\"\"\n        ln, lp, lo = len(names), len(parents), len(offsets)\n        assert len(set([ln, lp, lo])) == 1\n        joints = []\n        for name, parent, offset in zip(names, parents, offsets):\n            dof = 6 if parent == -1 else 3\n            joints.append(Joint(name, parent, offset, (parent==-1), dof))\n        return Skel(joints, skel_name, rest_forward, rest_vertical, forward_axis, vertical_axis)"
  },
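`Skel.joint_depths` recurses parent-by-parent for every joint. When joints are stored parents-before-children, as `from_names_parents_offsets` builds them, a single forward pass over the `parents` list suffices; a standalone sketch (function names are illustrative):

```python
def joint_depths(parents: list[int]) -> list[int]:
    """Depth of each joint (hops to the root). Assumes every joint
    appears after its parent and -1 marks the root."""
    depths: list[int] = []
    for pa in parents:
        depths.append(0 if pa == -1 else depths[pa] + 1)
    return depths

def children_of(parents: list[int]) -> dict[int, list[int]]:
    """Child indices per joint: the inverse view of `parents`."""
    children: dict[int, list[int]] = {i: [] for i in range(len(parents))}
    for i, pa in enumerate(parents):
        if pa != -1:
            children[pa].append(i)
    return children
```

The single pass works because a joint's depth only depends on its parent's depth, which is already computed by the time the joint is visited.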
  {
    "path": "anim/smpl.py",
    "content": "\"\"\"\nsmpl.py\n\nJOINT_NAMES, SMPLH_JOINT_NAMES, SMPL_JOINT_NAMES are from \\\nhttps://github.com/vchoutas/smplx/blob/main/smplx/joint_names.py\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\nimport numpy as np\nfrom util.load import pickle_load\n\nJOINT_NAMES = [\n    \"pelvis\",\n    \"left_hip\",\n    \"right_hip\",\n    \"spine1\",\n    \"left_knee\",\n    \"right_knee\",\n    \"spine2\",\n    \"left_ankle\",\n    \"right_ankle\",\n    \"spine3\",\n    \"left_foot\",\n    \"right_foot\",\n    \"neck\",\n    \"left_collar\",\n    \"right_collar\",\n    \"head\",\n    \"left_shoulder\",\n    \"right_shoulder\",\n    \"left_elbow\",\n    \"right_elbow\",\n    \"left_wrist\",\n    \"right_wrist\",\n    \"jaw\",\n    \"left_eye_smplhf\",\n    \"right_eye_smplhf\",\n    \"left_index1\",\n    \"left_index2\",\n    \"left_index3\",\n    \"left_middle1\",\n    \"left_middle2\",\n    \"left_middle3\",\n    \"left_pinky1\",\n    \"left_pinky2\",\n    \"left_pinky3\",\n    \"left_ring1\",\n    \"left_ring2\",\n    \"left_ring3\",\n    \"left_thumb1\",\n    \"left_thumb2\",\n    \"left_thumb3\",\n    \"right_index1\",\n    \"right_index2\",\n    \"right_index3\",\n    \"right_middle1\",\n    \"right_middle2\",\n    \"right_middle3\",\n    \"right_pinky1\",\n    \"right_pinky2\",\n    \"right_pinky3\",\n    \"right_ring1\",\n    \"right_ring2\",\n    \"right_ring3\",\n    \"right_thumb1\",\n    \"right_thumb2\",\n    \"right_thumb3\",\n    \"nose\",\n    \"right_eye\",\n    \"left_eye\",\n    \"right_ear\",\n    \"left_ear\",\n    \"left_big_toe\",\n    \"left_small_toe\",\n    \"left_heel\",\n    \"right_big_toe\",\n    \"right_small_toe\",\n    \"right_heel\",\n    \"left_thumb\",\n    \"left_index\",\n    \"left_middle\",\n    \"left_ring\",\n    \"left_pinky\",\n    \"right_thumb\",\n    \"right_index\",\n    \"right_middle\",\n    \"right_ring\",\n    \"right_pinky\",\n    \"right_eye_brow1\",\n    \"right_eye_brow2\",\n    
\"right_eye_brow3\",\n    \"right_eye_brow4\",\n    \"right_eye_brow5\",\n    \"left_eye_brow5\",\n    \"left_eye_brow4\",\n    \"left_eye_brow3\",\n    \"left_eye_brow2\",\n    \"left_eye_brow1\",\n    \"nose1\",\n    \"nose2\",\n    \"nose3\",\n    \"nose4\",\n    \"right_nose_2\",\n    \"right_nose_1\",\n    \"nose_middle\",\n    \"left_nose_1\",\n    \"left_nose_2\",\n    \"right_eye1\",\n    \"right_eye2\",\n    \"right_eye3\",\n    \"right_eye4\",\n    \"right_eye5\",\n    \"right_eye6\",\n    \"left_eye4\",\n    \"left_eye3\",\n    \"left_eye2\",\n    \"left_eye1\",\n    \"left_eye6\",\n    \"left_eye5\",\n    \"right_mouth_1\",\n    \"right_mouth_2\",\n    \"right_mouth_3\",\n    \"mouth_top\",\n    \"left_mouth_3\",\n    \"left_mouth_2\",\n    \"left_mouth_1\",\n    \"left_mouth_5\",  # 59 in OpenPose output\n    \"left_mouth_4\",  # 58 in OpenPose output\n    \"mouth_bottom\",\n    \"right_mouth_4\",\n    \"right_mouth_5\",\n    \"right_lip_1\",\n    \"right_lip_2\",\n    \"lip_top\",\n    \"left_lip_2\",\n    \"left_lip_1\",\n    \"left_lip_3\",\n    \"lip_bottom\",\n    \"right_lip_3\",\n    # Face contour\n    \"right_contour_1\",\n    \"right_contour_2\",\n    \"right_contour_3\",\n    \"right_contour_4\",\n    \"right_contour_5\",\n    \"right_contour_6\",\n    \"right_contour_7\",\n    \"right_contour_8\",\n    \"contour_middle\",\n    \"left_contour_8\",\n    \"left_contour_7\",\n    \"left_contour_6\",\n    \"left_contour_5\",\n    \"left_contour_4\",\n    \"left_contour_3\",\n    \"left_contour_2\",\n    \"left_contour_1\",\n]\n\n\nSMPLH_JOINT_NAMES = [\n    \"pelvis\",\n    \"left_hip\",\n    \"right_hip\",\n    \"spine1\",\n    \"left_knee\",\n    \"right_knee\",\n    \"spine2\",\n    \"left_ankle\",\n    \"right_ankle\",\n    \"spine3\",\n    \"left_foot\",\n    \"right_foot\",\n    \"neck\",\n    \"left_collar\",\n    \"right_collar\",\n    \"head\",\n    \"left_shoulder\",\n    \"right_shoulder\",\n    \"left_elbow\",\n    \"right_elbow\",\n 
   \"left_wrist\",\n    \"right_wrist\",\n    \"left_index1\",\n    \"left_index2\",\n    \"left_index3\",\n    \"left_middle1\",\n    \"left_middle2\",\n    \"left_middle3\",\n    \"left_pinky1\",\n    \"left_pinky2\",\n    \"left_pinky3\",\n    \"left_ring1\",\n    \"left_ring2\",\n    \"left_ring3\",\n    \"left_thumb1\",\n    \"left_thumb2\",\n    \"left_thumb3\",\n    \"right_index1\",\n    \"right_index2\",\n    \"right_index3\",\n    \"right_middle1\",\n    \"right_middle2\",\n    \"right_middle3\",\n    \"right_pinky1\",\n    \"right_pinky2\",\n    \"right_pinky3\",\n    \"right_ring1\",\n    \"right_ring2\",\n    \"right_ring3\",\n    \"right_thumb1\",\n    \"right_thumb2\",\n    \"right_thumb3\",\n    \"nose\",\n    \"right_eye\",\n    \"left_eye\",\n    \"right_ear\",\n    \"left_ear\",\n    \"left_big_toe\",\n    \"left_small_toe\",\n    \"left_heel\",\n    \"right_big_toe\",\n    \"right_small_toe\",\n    \"right_heel\",\n    \"left_thumb\",\n    \"left_index\",\n    \"left_middle\",\n    \"left_ring\",\n    \"left_pinky\",\n    \"right_thumb\",\n    \"right_index\",\n    \"right_middle\",\n    \"right_ring\",\n    \"right_pinky\",\n]\n\nSMPL_JOINT_NAMES = [\n    \"pelvis\",\n    \"left_hip\",\n    \"right_hip\",\n    \"spine1\",\n    \"left_knee\",\n    \"right_knee\",\n    \"spine2\",\n    \"left_ankle\",\n    \"right_ankle\",\n    \"spine3\",\n    \"left_foot\",\n    \"right_foot\",\n    \"neck\",\n    \"left_collar\",\n    \"right_collar\",\n    \"head\",\n    \"left_shoulder\",\n    \"right_shoulder\",\n    \"left_elbow\",\n    \"right_elbow\",\n    \"left_wrist\",\n    \"right_wrist\",\n    \"left_hand\",\n    \"right_hand\",\n]\n\ndef load_model(model_path: Path, gender: str=None):\n    if isinstance(model_path, str):\n        model_path = Path(model_path)\n    if model_path.suffix == \"\":\n        model_path = model_path / gender / \"model.npz\"\n    match model_path.suffix:\n        case \".npz\":\n            model_dict = np.load(model_path, 
allow_pickle=True)\n        case \".pkl\":\n            try:\n                model_dict = pickle_load(model_path)\n            except UnicodeDecodeError:\n                # Pickles written by Python 2 need latin1 decoding.\n                model_dict = pickle_load(model_path, encoding=\"latin1\")\n        case _:\n            raise ValueError(\"This file is not supported.\")\n    return model_dict\n\ndef calc_skel_offsets(\n        betas: np.ndarray,\n        J_regressor: np.ndarray,\n        shapedirs: np.ndarray,\n        v_template: np.ndarray,\n) -> np.ndarray:\n    \"\"\"\n    args:\n        betas (np.ndarray): body shape parameters of SMPL. shape: (num_betas).\n        J_regressor (np.ndarray): Joint regressor from SMPL vertices. shape: (num_J, num_vertices).\n        shapedirs (np.ndarray): Matrix to convert from PCA (betas) to SMPL blendshapes. shape: (num_vertices, 3, num_betas).\n        v_template (np.ndarray): Template body vertices of SMPL. shape: (num_vertices, 3).\n    return:\n        joints (np.ndarray): Offset joint positions in 3D space (absolute coordinates). shape: (num_J, 3).\n    \"\"\"\n    num_betas = min(len(betas), shapedirs.shape[-1])\n    betas, shapedirs = betas[:num_betas], shapedirs[..., :num_betas]\n    blendshape = shapedirs @ betas\n    vertices = v_template + blendshape\n    joints = J_regressor @ vertices\n\n    return joints\n\n"
  },
  {
    "path": "anitaichi/animation/anim.py",
    "content": "import numpy as np\nimport taichi as ti\nfrom anitaichi.animation.skel import Skel\nfrom anitaichi.transform import ti_quat, quat\n\ndef forward_kinematics_rotations(\n    parents: list[int],  # (J)\n    rots: np.ndarray,    # (T, J, 4) or (J, 4)\n):\n    grots = []\n    for i, pa in enumerate(parents):\n        if pa == -1:\n            grots.append(rots[...,i:i+1,:])\n        else:\n            grots.append(quat.mul(grots[pa], rots[...,i:i+1,:]))\n    return np.concatenate(grots, axis=-2)\n\n# Forward Kinematics using numpy.\n# (Since it is difficult to parallelize each bone, use this for Pose FK.)\ndef forward_kinematics(\n    offsets: np.ndarray, # (J, 3)\n    parents: list[int],  # (J)\n    grots: np.ndarray,   # (T, J, 4) or (J, 4)\n    trans: np.ndarray    # (T, 3) or (3)\n) -> np.ndarray:\n    gposs = []\n    offsets = offsets[None].repeat(len(grots), axis=0)\n    for i, pa in enumerate(parents):\n        if pa == -1:\n            gposs.append(trans[..., None, :])\n        else:\n            gposs.append(quat.mul_vec(grots[..., pa:pa+1, :], offsets[..., i:i+1, :]) + gposs[pa])\n    return np.concatenate(gposs, axis=-2)\n\n# Forward kinematics using taichi.\n# (Parallelization in the time direction.)\n@ti.kernel\ndef ti_forward_kinematics(\n    parents: ti.template(), # (J)\n    offsets: ti.template(), # (J) * 3\n    lrots: ti.template(),   # (T, J) * 4\n    grots: ti.template(),   # (T, J) * 4\n    gposs: ti.template(),   # (T, J) * 3\n):\n    # parallelize time dimention.\n    for i in ti.ndrange(grots.shape[0]):\n        for j in ti.ndrange(grots.shape[1]):\n            if parents[j] == -1:\n                grots[i, j] = lrots[i, j]\n                gposs[i, j] = offsets[j]\n            else:\n                grots[i, j] = ti_quat.mul(grots[i, parents[j]], lrots[i, j])\n                gposs[i, j] = ti_quat.mul_vec3(grots[i, parents[j]], offsets[j]) + gposs[i, parents[j]]\n\n# @ti.kernel\n# def calc_local_positions(\n#     parents: 
ti.template(), # (J)\n#     offsets: ti.template(), # (J) * 3\n#     grots: ti.template(),   # (T, J) * 4\n#     lposs: ti.template()    # (T, J) * 3\n# ):\n#     for i, j in grots:\n#         if parents[j] == -1:\n#             lposs[i, j] = offsets[j]\n#         else:\n#             lposs[i, j] = ti_quat.mul_vec3(grots[i, j], offsets[j])\n\n# Convert numpy array to taichi vector field.\ndef convert_to_vector_field(array: np.ndarray):\n    shape, dim_vector = array.shape[:-1], array.shape[-1]\n    field = ti.Vector.field(dim_vector, dtype=float, shape=shape)\n    field.from_numpy(array)\n    return field\n\n\nclass Pose:\n    def __init__(\n        self,\n        skel: Skel,\n        rots: np.ndarray,\n        root_pos: np.ndarray,\n    ) -> None:\n        self.skel = skel\n        self.rots = rots\n        self.root_pos = root_pos\n    \n    @property\n    def local_rotations(self):\n        return self.rots\n    \n    @property\n    def global_rotations(self):\n        parents = self.skel.parents\n        lrots = self.rots\n        grots = forward_kinematics_rotations(parents, lrots)\n        return grots\n    \n    @property\n    def global_positions(self):\n        offsets = self.skel.offsets\n        parents = self.skel.parents\n        grots = self.global_rotations\n        trans = self.root_pos\n        gposs = forward_kinematics(offsets, parents, grots, trans)\n        return gposs\n\n\nclass Animation:\n    def __init__(\n        self,\n        skel: Skel,\n        rots: np.ndarray,\n        trans: np.ndarray,\n        fps: int,\n        anim_name: str,\n    ) -> None:\n        self.skel = skel\n        self.rots = rots\n        self.trans = trans\n\n        self.fps = fps\n        self.anim_name = anim_name\n\n    def __len__(self):\n        return len(self.rots)\n\n    @property\n    def local_rotations(self):\n        return self.rots\n\n    @property\n    def global_rotations(self):\n        parents = self.skel.parents\n        lrots = self.rots\n     
   grots = forward_kinematics_rotations(parents, lrots)\n        return grots\n    \n    # @property\n    # def local_positions_field(self):\n    #     parents = self.skel.parents_field\n    #     offsets = self.skel.offsets_field\n    #     grots = self.global_rotations\n    #     lposs = ti.Vector.field(3, dtype=float, shape=grots.shape)\n    #     calc_local_positions(parents, offsets, grots, lposs)\n    #     return lposs\n\n    @property\n    def global_positions(self):\n        offsets = self.skel.offsets\n        parents = self.skel.parents\n        grots = self.global_rotations\n        trans = self.trans\n        gposs = forward_kinematics(offsets, parents, grots, trans)\n        return gposs\n    \n    def ti_fk(self):\n        parents = self.skel.parents_field # (J)\n        offsets = self.skel.offsets_field # (J) * 3\n        lrots = convert_to_vector_field(self.local_rotations)  # (T, J) * 4\n        grots = ti.Vector.field(4, dtype=float, shape=lrots.shape) # (T, J) * 4\n        gposs = ti.Vector.field(3, dtype=float, shape=lrots.shape) # (T, J) * 3\n        ti_forward_kinematics(parents, offsets, lrots, grots, gposs)\n        self.global_rotations_field = grots\n        self.global_positions_field = gposs\n    \n    def to_pose(self, frame_num: int) -> Pose:\n        skel = self.skel\n        rots = self.rots[frame_num]\n        root_pos = self.trans[frame_num]\n        return Pose(skel, rots, root_pos)\n\n    def to_poses(self, frames: slice | int) -> list[Pose] | Pose:\n        if isinstance(frames, int):\n            return self.to_pose(frames)\n        skel = self.skel\n        anim_rots = self.rots[frames]\n        anim_trans = self.trans[frames]\n        poses = []\n        for rots, root_pos in zip(anim_rots, anim_trans):\n            poses.append(Pose(skel, rots, root_pos))\n        return poses\n"
  },
  {
    "path": "anitaichi/animation/anim_loader/bvh.py",
    "content": "\"\"\"\nbvh.py\n\"\"\"\n# loading and writing a biovision hierarchy data (BVH).\n\nfrom __future__ import annotations\n\nfrom io import TextIOWrapper\nfrom pathlib import Path\nimport logging\nimport numpy as np\n\nfrom anitaichi.animation.anim import Animation\nfrom anitaichi.animation.skel import Skel, Joint\nfrom anitaichi.transform import quat\n\ndef load(\n    filepath: Path | str, \n    start: int=None, \n    end: int=None, \n    order: str=None,\n    load_skel: bool=True,\n    load_pose: bool=True,\n    skel: Skel=None,\n    skel_name: str=None,\n) -> Skel | Animation:\n    \n    if not load_skel and not load_pose:\n        logging.info(\"Either load_skel or load_pose must be specified.\")\n        raise ValueError\n    \n    if isinstance(filepath, str):\n        filepath = Path(filepath)\n    \n    # List bvh file by each row (line) and each word (column).\n    with open(filepath, \"r\") as f:\n        lines: list[str] = [line.strip() for line in f if line != \"\"]\n        motion_idx: int = lines.index(\"MOTION\")\n        lines: list[str | list[str]] = \\\n            list(map(lambda x: x.split(), lines))\n        f.close()\n    \n    # Load HIERARCHY term.\n    if load_skel:\n        skel, order = load_hierarchy(\n            lines = lines[:motion_idx], \n            skel_name=skel_name,\n        )\n\n    # Load MOTION term.\n    if load_pose:\n        assert not skel is None, \"You need to load skeleton or define skeleton.\" \n        name = filepath.name.split(\".\")[0]\n        \n        fps, trans, quats = load_motion(\n            lines = lines[motion_idx:],\n            start = start,\n            end = end,\n            order = order,\n            skel = skel\n        )\n        return Animation(\n            skel=skel,\n            rots=quats,\n            trans=trans,\n            fps=fps,\n            anim_name=name,\n        )\n    \n    return skel\n\ndef load_hierarchy(\n    lines: list[str | list[str]],\n    skel_name: 
str,\n) -> tuple[Skel, str]:\n    \n    channelmap: dict[str, str] = {\n        \"Xrotation\" : \"x\",\n        \"Yrotation\" : \"y\",\n        \"Zrotation\" : \"z\",   \n    }\n    \n    stacks: list[int] = [-1]\n    parents: list[int] = []\n    name_list: list[str] = []\n    idx = 0\n    joints: list[Joint] = []\n    depth: int = 0\n    end_site: bool = False\n    \n    for line in lines:\n        if \"ROOT\" in line or \"JOINT\" in line:\n            parents.append(stacks[-1])\n            stacks.append(len(parents) - 1)\n            name_list.append(line[1])\n            \n        elif \"OFFSET\" in line:\n            if not end_site:\n                offset = list(map(float, line[1:]))\n                joints.append(Joint(\n                    name = name_list[-1],\n                    index = idx,\n                    parent = parents[-1],\n                    offset = offset,\n                    ))\n                idx += 1\n        \n        elif \"CHANNELS\" in line:\n            dof = int(line[1])\n            if dof == 3:\n                order = \"\".join([channelmap[p] for p in line[2:2+3]])\n            elif dof == 6:\n                order = \"\".join([channelmap[p] for p in line[5:5+3]])\n            else:\n                raise ValueError(\"Unsupported CHANNELS dof: %d\" % dof)\n            joints[-1].dof = dof\n        \n        elif \"End\" in line:\n            end_site = True\n            stacks.append(len(parents) - 1)\n        \n        elif \"{\" in line:\n            depth += 1\n        \n        elif \"}\" in line:\n            depth -= 1\n            end_site = False\n            stacks.pop()\n    \n    assert depth == 0, \"Brackets are not closed.\"\n    \n    skel = Skel(joints=joints, skel_name=skel_name)\n    return skel, order\n\ndef load_motion(\n    lines: list[list[str]],\n    order: str,\n    skel: Skel,\n    start: int=None,\n    end: int=None,\n) -> tuple[int, np.ndarray, np.ndarray]:\n    \n    fps: int = round(1 / float(lines[2][2]))\n    lines: list[list[str]] = lines[3:]\n    \n    np_lines = np.array(\n        list(map(lambda 
line: list(map(float, line)), lines))\n    ) # (T, total_dof) matrix; total_dof = J * 3 + 3 (root translation) or J * 6.\n    np_lines = np_lines[start:end]\n    \n    dofs = []\n    eangles = []\n    poss = []\n    ckpt = 0\n    for joint in skel.joints:\n        dof = joint.dof\n        dofs.append(dof)\n        if dof == 3:\n            eangle = np_lines[:, ckpt:ckpt+3]\n            eangles.append(eangle[:, None])\n            ckpt += 3\n        elif dof == 6:\n            pos = np_lines[:, ckpt:ckpt+3] # [T, 3]\n            eangle = np_lines[:, ckpt+3:ckpt+6]\n            poss.append(pos[:, None]) # [T, 1, 3]\n            eangles.append(eangle[:, None]) # [T, 1, 3]\n            ckpt += 6\n    \n    assert sum(dofs) == np_lines.shape[1], \\\n        \"Skel and Motion are not compatible.\"\n    \n    poss = np.concatenate(poss, axis=1)\n    eangles = np.concatenate(eangles, axis=1)\n    \n    trans = poss[:, 0]\n    quats = quat.unroll(quat.from_euler(eangles, order))\n    \n    return fps, trans, quats\n\ndef save(\n    filepath: Path | str,\n    anim: Animation, \n    order: str=\"zyx\",\n    ) -> None:\n    \n    skel = anim.skel\n    trans = np.asarray(anim.trans)\n    quats = np.asarray(anim.rots)\n    fps = anim.fps\n    \n    with open(filepath, \"w\") as f:\n        # write hierarchy data.\n        f.write(\"HIERARCHY\\n\")\n        index_order = save_hierarchy(f, skel, 0, order, 0)\n        \n        # write motion data.\n        f.write(\"MOTION\\n\")\n        f.write(\"Frames: %d\\n\" % len(trans))\n        f.write(\"Frame Time: %f\\n\" % (1.0 / fps))\n        save_motion(f, trans, quats, order, index_order)\n\ndef save_hierarchy(\n    f: TextIOWrapper,\n    skel: Skel,\n    index: int,\n    order: str,\n    depth: int,\n) -> list[int]:\n\n    def order2xyzrotation(order: str) -> str:\n        cmap: dict[str, str] = {\n            \"x\" : \"Xrotation\",\n            \"y\" : \"Yrotation\",\n            \"z\" : \"Zrotation\",   \n        }\n        return \"%s %s %s\" % 
(cmap[order[0]], cmap[order[1]], cmap[order[2]])\n    \n    joint = skel[index]\n    index_order = [index]\n    if joint.root:\n        f.write(\"\\t\" * depth + \"ROOT %s\\n\" % joint.name)\n    else:\n        f.write(\"\\t\" * depth + \"JOINT %s\\n\" % joint.name)\n    f.write(\"\\t\" * depth + \"{\\n\")\n    depth += 1\n    offset = joint.offset\n    f.write(\"\\t\" * depth + \\\n        \"OFFSET %f %f %f\\n\" % (offset[0], offset[1], offset[2]))\n    \n    if joint.root:\n        f.write(\"\\t\" * depth + \\\n            \"CHANNELS 6 Xposition Yposition Zposition %s\\n\"\\\n            % order2xyzrotation(order))\n    else:\n        f.write(\"\\t\" * depth + \"CHANNELS 3 %s\\n\"\\\n            % order2xyzrotation(order))\n    \n    children_idxs = skel.get_children(index, return_idx=True)\n    for child_idx in children_idxs:\n        ch_index_order = save_hierarchy(f, skel, child_idx, order, depth)\n        index_order.extend(ch_index_order)\n    if children_idxs == []:\n        f.write(\"\\t\" * depth + \"End Site\\n\")\n        f.write(\"\\t\" * depth + \"{\\n\")\n        f.write(\"\\t\" * (depth + 1) + \"OFFSET %f %f %f\\n\" \\\n            % (0, 0, 0))\n        f.write(\"\\t\" * depth + \"}\\n\")\n    \n    depth -= 1\n    f.write(\"\\t\" * depth + \"}\\n\")\n    return index_order\n\ndef save_motion(\n    f: TextIOWrapper,\n    trans: np.ndarray,\n    quats: np.ndarray,\n    order: str,\n    index_order: list[int],\n) -> None:\n    \n    def write_position_rotation(\n        pos: np.ndarray, \n        rot: np.ndarray,\n        ) -> str:\n        pos, rot = pos.tolist(), rot.tolist()\n        return \"%f %f %f %f %f %f \" \\\n            % (pos[0], pos[1], pos[2], rot[0], rot[1], rot[2])\n\n    def write_rotation(rot: np.ndarray) -> str:\n        rot = rot.tolist()\n        return \"%f %f %f \" %(rot[0], rot[1], rot[2])\n    \n    eangles = np.rad2deg(quat.to_euler(quats, order)) # (T, J, 3)\n    for i in range(len(trans)):\n        for j in index_order:\n  
          if j == 0:\n                f.write(\n                    \"%s\" % write_position_rotation(trans[i], eangles[i, j])\n                    )\n            else:\n                f.write(\n                    \"%s\" % write_rotation(eangles[i, j])\n                )\n        f.write(\"\\n\")"
  },
  {
    "path": "anitaichi/animation/skel.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\nimport taichi as ti\n\nclass Joint:\n    def __init__(self, name: str, index: int, parent: int, offset: list[float], dof:int =None, root: bool=None):\n        self.name = name\n        self.index = index\n        self.parent = parent\n        self.offset = offset\n        if dof != None:\n            self.dof = dof\n        else: self.dof = 6 if (parent==-1) else 3\n        if root != None:\n            self.root = root\n        self.root: bool = (parent==-1)\n\nclass Skel:\n    def __init__(self, joints: list[Joint], skel_name: str=None):\n        self.joints = joints\n        self.num_joints = len(joints)\n        self.skel_name = skel_name\n    \n    def __len__(self):\n        return self.num_joints\n    \n    # ==================\n    # Read only property\n    # ==================\n    @property\n    def offsets(self) -> np.ndarray: # For numpy array\n        offsets = []\n        for j in self.joints:\n            offsets.append(j.offset)\n        return np.array(offsets)\n\n    @property\n    def offsets_field(self): # For Taichi field\n        offsets = ti.Vector.field(3, dtype=float, shape=(self.num_joints,))\n        for i in range(len(self.joints)):\n            offsets[i] = self.joints[i].offset\n        return offsets\n    \n    @property\n    def parents(self) -> list[int]:\n        return [j.parent for j in self.joints]\n\n    @property\n    def parents_field(self): \n        parents_idx_field = ti.field(int, shape=len(self.joints))\n        parents_idx = [joint.parent for joint in self.joints]\n        for i in range(len(self.joints)):\n            parents_idx_field[i] = parents_idx[i]\n        return parents_idx_field"
  },
  {
    "path": "anitaichi/transform/quat.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\n\nfrom anitaichi.transform import vec\n\n# Make origin quaternions (No rotations).\ndef eye(shape, dtype=np.float32):\n    return np.ones(list(shape) + [4], dtype=dtype) * np.asarray([1, 0, 0, 0], dtype=dtype)\n\n# Return norm of quaternions.\ndef length(x):\n    return np.sqrt(np.sum(x * x, axis=-1))\n\n# Make unit quaternions.\ndef normalize(x, eps=1e-8):\n    return x / (length(x)[...,None] + eps)\n\ndef abs(x):\n    return np.where(x[...,0:1] > 0.0, x, -x)\n\n# Calculate inverse rotations.\ndef inv(q):\n    return np.array([1, -1, -1, -1], dtype=np.float32) * q\n\n# Calculate the dot product of two quaternions.\ndef dot(x, y):\n    return np.sum(x * y, axis=-1)[...,None] if x.ndim > 1 else np.sum(x * y, axis=-1)\n\n# Multiply two quaternions (return rotations).\ndef mul(x, y):\n    x0, x1, x2, x3 = x[..., 0:1], x[..., 1:2], x[..., 2:3], x[..., 3:4]\n    y0, y1, y2, y3 = y[..., 0:1], y[..., 1:2], y[..., 2:3], y[..., 3:4]\n\n    return np.concatenate([\n        y0 * x0 - y1 * x1 - y2 * x2 - y3 * x3,\n        y0 * x1 + y1 * x0 - y2 * x3 + y3 * x2,\n        y0 * x2 + y1 * x3 + y2 * x0 - y3 * x1,\n        y0 * x3 - y1 * x2 + y2 * x1 + y3 * x0], axis=-1)\n\ndef inv_mul(x, y):\n    return mul(inv(x), y)\n\ndef mul_inv(x, y):\n    return mul(x, inv(y))\n\n# Multiply quaternions and vectors (return vectors).\ndef mul_vec(q, x):\n    t = 2.0 * vec.cross(q[..., 1:], x)\n    return x + q[..., 0][..., None] * t + vec.cross(q[..., 1:], t)\n\ndef inv_mul_vec(q, x):\n    return mul_vec(inv(q), x)\n\ndef unroll(x):\n    y = x.copy()\n    for i in range(1, len(x)):\n        d0 = np.sum( y[i] * y[i-1], axis=-1)\n        d1 = np.sum(-y[i] * y[i-1], axis=-1)\n        y[i][d0 < d1] = -y[i][d0 < d1]\n    return y\n\n# Calculate quaternions between two unit 3D vectors (x to y).\ndef between(x, y):\n    return np.concatenate([\n        np.sqrt(np.sum(x*x, axis=-1) * np.sum(y*y, axis=-1))[...,None] + \n        
np.sum(x * y, axis=-1)[...,None], \n        vec.cross(x, y)], axis=-1)\n\ndef log(x, eps=1e-5):\n    length = np.sqrt(np.sum(np.square(x[...,1:]), axis=-1))[...,None]\n    halfangle = np.where(length < eps, np.ones_like(length), np.arctan2(length, x[...,0:1]) / length)\n    return halfangle * x[...,1:]\n    \ndef exp(x, eps=1e-5):\n    halfangle = np.sqrt(np.sum(np.square(x), axis=-1))[...,None]\n    c = np.where(halfangle < eps, np.ones_like(halfangle), np.cos(halfangle))\n    s = np.where(halfangle < eps, np.ones_like(halfangle), np.sinc(halfangle / np.pi))\n    return np.concatenate([c, s * x], axis=-1)\n\n# Calculate global space rotations and positions from local space.\ndef fk(lrot, lpos, parents):\n    \n    gp, gr = [lpos[...,:1,:]], [lrot[...,:1,:]]\n    for i in range(1, len(parents)):\n        gp.append(mul_vec(gr[parents[i]], lpos[...,i:i+1,:]) + gp[parents[i]])\n        gr.append(mul    (gr[parents[i]], lrot[...,i:i+1,:]))\n        \n    return np.concatenate(gr, axis=-2), np.concatenate(gp, axis=-2)\n\ndef fk_rot(lrot, parents):\n    \n    gr = [lrot[...,:1,:]]\n    for i in range(1, len(parents)):\n        gr.append(mul(gr[parents[i]], lrot[...,i:i+1,:]))\n        \n    return np.concatenate(gr, axis=-2)\n\n# Calculate root centric space rotations and positions.\ndef fk_centric(lrot, lpos, parents):\n    rrot, rpos = lrot.copy(), lpos.copy()\n    rrot[:,0,:] = eye([rrot.shape[0]])\n    rpos[:,0,:] = 0\n    return fk(rrot, rpos, parents)\n\n# Calculate local space rotations and positions from global space.\ndef ik(grot, gpos, parents):\n    \n    return (\n        np.concatenate([\n            grot[...,:1,:],\n            mul(inv(grot[...,parents[1:],:]), grot[...,1:,:]),\n        ], axis=-2),\n        np.concatenate([\n            gpos[...,:1,:],\n            mul_vec(\n                inv(grot[...,parents[1:],:]),\n                gpos[...,1:,:] - gpos[...,parents[1:],:]),\n        ], axis=-2))\n\ndef ik_rot(grot, parents):\n    \n    return 
np.concatenate([grot[...,:1,:], mul(inv(grot[...,parents[1:],:]), grot[...,1:,:]),\n        ], axis=-2)\n    \ndef fk_vel(lrot, lpos, lvel, lang, parents):\n    \n    gp, gr, gv, ga = [lpos[...,:1,:]], [lrot[...,:1,:]], [lvel[...,:1,:]], [lang[...,:1,:]]\n    for i in range(1, len(parents)):\n        gp.append(mul_vec(gr[parents[i]], lpos[...,i:i+1,:]) + gp[parents[i]])\n        gr.append(mul    (gr[parents[i]], lrot[...,i:i+1,:]))\n        gv.append(mul_vec(gr[parents[i]], lvel[...,i:i+1,:]) + \n            vec.cross(ga[parents[i]], mul_vec(gr[parents[i]], lpos[...,i:i+1,:])) +\n            gv[parents[i]])\n        ga.append(mul_vec(gr[parents[i]], lang[...,i:i+1,:]) + ga[parents[i]])\n        \n    return (\n        np.concatenate(gr, axis=-2), \n        np.concatenate(gp, axis=-2),\n        np.concatenate(gv, axis=-2),\n        np.concatenate(ga, axis=-2))\n\n# Linear Interpolation of two vectors\ndef lerp(x, y, t):\n    return (1 - t) * x + t * y\n\n# LERP of quaternions\ndef quat_lerp(x, y, t):\n    return normalize(lerp(x, y, t))\n\n# Spherical linear interpolation of quaternions\ndef slerp(x, y, t):\n    if t == 0:\n        return x\n    elif t == 1:\n        return y\n    \n    if dot(x, y) < 0:\n        y = - y\n    ca = dot(x, y)\n    theta = np.arccos(np.clip(ca, 0, 1))\n    \n    r = normalize(y - x * ca)\n    \n    return x * np.cos(theta * t) + r * np.sin(theta * t)\n\n\n################################################################\n# Conversion from quaternions to other rotation representations.\n################################################################\n\n# Calculate axis-angle from  quaternions.\n# This function is based on ACTOR\n# https://github.com/Mathux/ACTOR/blob/d3b0afe674e01fa2b65c89784816c3435df0a9a5/src/utils/rotation_conversions.py#L481\ndef to_axis_angle(x, eps=1e-5):\n    norm = np.linalg.norm(x[...,1:],axis=-1,keepdims=True)\n    half_angle = np.arctan2(norm, x[...,:1])\n    angle = 2 * half_angle\n    small_angle = 
np.abs(angle) < eps\n    sin_half_angle_over_angle = np.empty_like(angle)\n    sin_half_angle_over_angle[~small_angle] = (\n        np.sin(half_angle[~small_angle]) / angle[~small_angle]\n    )\n    # for x small, sin(x/2) is about x/2 - (x/2)^3/6\n    # so sin(x/2)/x is about 1/2 - (x*x)/48\n    sin_half_angle_over_angle[small_angle] = (\n        0.5 - (angle[small_angle] * angle[small_angle]) / 48\n    )\n    return x[..., 1:] / sin_half_angle_over_angle\n\n# Calculate euler angles from quaternions.(!Under construction.)\ndef to_euler(x, order='zyx'):\n    x0, x1, x2, x3 = x[..., 0:1], x[..., 1:2], x[..., 2:3], x[..., 3:4]\n\n    if order == 'zyx':\n        return np.concatenate([\n            np.arctan2(2.0 * (x0 * x3 + x1 * x2), 1.0 - 2.0 * (x2 * x2 + x3 * x3)),\n            np.arcsin(np.clip(2.0 * (x0 * x2 - x3 * x1), -1.0, 1.0)),\n            np.arctan2(2.0 * (x0 * x1 + x2 * x3), 1.0 - 2.0 * (x1 * x1 + x2 * x2)),\n        ], axis=-1)\n    elif order == 'xzy':\n        return np.concatenate([\n            np.arctan2(2.0 * (x1 * x0 - x2 * x3), -x1 * x1 + x2 * x2 - x3 * x3 + x0 * x0),\n            np.arctan2(2.0 * (x2 * x0 - x1 * x3), x1 * x1 - x2 * x2 - x3 * x3 + x0 * x0),\n            np.arcsin(np.clip(2.0 * (x1 * x2 + x3 * x0), -1.0, 1.0))\n        ], axis=-1)\n    else:\n        raise NotImplementedError('Cannot convert to ordering %s' % order)\n\n# Calculate rotation matrix from quaternions.\ndef to_xform(x):\n\n    qw, qx, qy, qz = x[...,0:1], x[...,1:2], x[...,2:3], x[...,3:4]\n    \n    x2, y2, z2 = qx + qx, qy + qy, qz + qz\n    xx, yy, wx = qx * x2, qy * y2, qw * x2\n    xy, yz, wy = qx * y2, qy * z2, qw * y2\n    xz, zz, wz = qx * z2, qz * z2, qw * z2\n    \n    return np.concatenate([\n        np.concatenate([1.0 - (yy + zz), xy - wz, xz + wy], axis=-1)[...,None,:],\n        np.concatenate([xy + wz, 1.0 - (xx + zz), yz - wx], axis=-1)[...,None,:],\n        np.concatenate([xz - wy, yz + wx, 1.0 - (xx + yy)], axis=-1)[...,None,:],\n    ], axis=-2)\n\n# 
Calculate 6d orthogonal rotation representation (ortho6d) from quaternions.\n# https://github.com/papagina/RotationContinuity\ndef to_xform_xy(x):\n\n    qw, qx, qy, qz = x[...,0:1], x[...,1:2], x[...,2:3], x[...,3:4]\n    \n    x2, y2, z2 = qx + qx, qy + qy, qz + qz\n    xx, yy, wx = qx * x2, qy * y2, qw * x2\n    xy, yz, wy = qx * y2, qy * z2, qw * y2\n    xz, zz, wz = qx * z2, qz * z2, qw * z2\n    \n    return np.concatenate([\n        np.concatenate([1.0 - (yy + zz), xy - wz], axis=-1)[...,None,:],\n        np.concatenate([xy + wz, 1.0 - (xx + zz)], axis=-1)[...,None,:],\n        np.concatenate([xz - wy, yz + wx], axis=-1)[...,None,:],\n    ], axis=-2)\n\n# Calculate scaled angle axis from quaternions.\ndef to_scaled_angle_axis(x, eps=1e-5):\n    return 2.0 * log(x, eps)\n\n\n################################################################\n# Conversion from other rotation representations to quaternions.\n################################################################\n\n# Calculate quaternions from axis and angle.\ndef from_angle_axis(angle, axis):\n    c = np.cos(angle / 2.0)[..., None]\n    s = np.sin(angle / 2.0)[..., None]\n    q = np.concatenate([c, s * axis], axis=-1)\n    return q\n\n# Calculate quaternions from axis-angle.\ndef from_axis_angle(rots):\n    angle = np.linalg.norm(rots, axis=-1)\n    axis = rots / angle[...,None]\n    return from_angle_axis(angle, axis)\n\n# Calculate quaternions from euler angles.\ndef from_euler(e, order='zyx', mode=\"degree\"):\n    if mode==\"degree\":\n        e = np.deg2rad(e)\n    axis = {\n        'x': np.asarray([1, 0, 0], dtype=np.float32),\n        'y': np.asarray([0, 1, 0], dtype=np.float32),\n        'z': np.asarray([0, 0, 1], dtype=np.float32)}\n\n    q0 = from_angle_axis(e[..., 0], axis[order[0]])\n    q1 = from_angle_axis(e[..., 1], axis[order[1]])\n    q2 = from_angle_axis(e[..., 2], axis[order[2]])\n\n    return mul(q0, mul(q1, q2))\n\n# Calculate quaternions from rotation matrix.\ndef 
from_xform(ts):\n    \n    return normalize(\n        np.where((ts[...,2,2] < 0.0)[...,None],\n            np.where((ts[...,0,0] >  ts[...,1,1])[...,None],\n                np.concatenate([\n                    (ts[...,2,1]-ts[...,1,2])[...,None], \n                    (1.0 + ts[...,0,0] - ts[...,1,1] - ts[...,2,2])[...,None], \n                    (ts[...,1,0]+ts[...,0,1])[...,None], \n                    (ts[...,0,2]+ts[...,2,0])[...,None]], axis=-1),\n                np.concatenate([\n                    (ts[...,0,2]-ts[...,2,0])[...,None], \n                    (ts[...,1,0]+ts[...,0,1])[...,None], \n                    (1.0 - ts[...,0,0] + ts[...,1,1] - ts[...,2,2])[...,None], \n                    (ts[...,2,1]+ts[...,1,2])[...,None]], axis=-1)),\n            np.where((ts[...,0,0] < -ts[...,1,1])[...,None],\n                np.concatenate([\n                    (ts[...,1,0]-ts[...,0,1])[...,None], \n                    (ts[...,0,2]+ts[...,2,0])[...,None], \n                    (ts[...,2,1]+ts[...,1,2])[...,None], \n                    (1.0 - ts[...,0,0] - ts[...,1,1] + ts[...,2,2])[...,None]], axis=-1),\n                np.concatenate([\n                    (1.0 + ts[...,0,0] + ts[...,1,1] + ts[...,2,2])[...,None], \n                    (ts[...,2,1]-ts[...,1,2])[...,None], \n                    (ts[...,0,2]-ts[...,2,0])[...,None], \n                    (ts[...,1,0]-ts[...,0,1])[...,None]], axis=-1))))\n\n# Calculate quaternions from ortho6d.\ndef from_xform_xy(x):\n\n    c2 = vec.cross(x[...,0], x[...,1])\n    c2 = c2 / np.sqrt(np.sum(np.square(c2), axis=-1))[...,None]\n    c1 = vec.cross(c2, x[...,0])\n    c1 = c1 / np.sqrt(np.sum(np.square(c1), axis=-1))[...,None]\n    c0 = x[...,0]\n    \n    return from_xform(np.concatenate([\n        c0[...,None], \n        c1[...,None], \n        c2[...,None]], axis=-1))\n\n# Calculate quaternions from scaled angle axis.\ndef from_scaled_angle_axis(x, eps=1e-5):\n    return exp(x / 2.0, eps)"
  },
  {
    "path": "anitaichi/transform/ti_quat.py",
    "content": "import taichi as ti\nimport taichi.math as tm\n\nfrom anitaichi.transform import ti_vec\n\n@ti.func\ndef eye():\n    return ti.Vector([1.0, 0.0, 0.0, 0.0])\n\n@ti.func\ndef normalize(q):\n    return tm.normalize(q)\n\n@ti.func\ndef inv(q):\n    return ti.Vector([-q[0], q[1], q[2], q[3]])\n\n@ti.func\ndef abs(q):\n    return -q if q[0] < 0.0 else q\n\n@ti.func\ndef mul(x, y):\n    qout0 = y[0] * x[0] - y[1] * x[1] - y[2] * x[2] - y[3] * x[3]\n    qout1 = y[0] * x[1] + y[1] * x[0] - y[2] * x[3] + y[3] * x[2]\n    qout2 = y[0] * x[2] + y[1] * x[3] + y[2] * x[0] - y[3] * x[1]\n    qout3 = y[0] * x[3] - y[1] * x[2] + y[2] * x[1] + y[3] * x[0]\n    return ti.Vector([qout0, qout1, qout2, qout3])\n\n@ti.func\ndef mul_vec3(q, v):\n    v_quat = ti.Vector([q[1], q[2], q[3]])\n    t = 2.0 * ti_vec.cross(v_quat, v)\n    return v + q[0] * t + ti_vec.cross(v_quat, t)\n\n@ti.func\ndef from_angle_and_axis(angle, axis):\n    c = tm.cos(angle / 2.0)\n    s = tm.sin(angle / 2.0)\n    return ti.Vector([c, s * axis.x, s * axis.y, s * axis.z])\n\n@ti.func\ndef from_euler(euler, order):\n    axis = {\n        \"x\": ti.Vector([1.0, 0.0, 0.0]),\n        \"y\": ti.Vector([0.0, 1.0, 0.0]),\n        \"z\": ti.Vector([0.0, 0.0, 1.0])\n    }\n    q0 = from_angle_and_axis(euler[0], axis[order[0]])\n    q1 = from_angle_and_axis(euler[1], axis[order[1]])\n    q2 = from_angle_and_axis(euler[2], axis[order[2]])\n\n    return mul(q0, mul(q1, q2))\n"
  },
  {
    "path": "anitaichi/transform/ti_vec.py",
    "content": "import taichi as ti\nimport taichi.math as tm\n\n@ti.func\ndef cross(v, w):\n    out0 = v.y * w.z - v.z * w.y\n    out1 = v.z * w.x - v.x * w.z\n    out2 = v.x * w.y - v.y * w.x\n    return ti.Vector([out0, out1, out2])\n\n@ti.func\ndef dot(v, w):\n    return ti.Vector([v.x * w.x, v.y * w.y, v.z * w.z])\n\n@ti.func\ndef length(v):\n    return tm.sqrt(v.x * v.x + v.y + v.y + v.z + v.z)\n\n@ti.func\ndef normalize(v):\n    return tm.normalize(v)"
  },
  {
    "path": "anitaichi/transform/vec.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\n\n# Calculate cross object of two 3D vectors.\ndef cross(a, b):\n    return np.concatenate([\n        a[...,1:2]*b[...,2:3] - a[...,2:3]*b[...,1:2],\n        a[...,2:3]*b[...,0:1] - a[...,0:1]*b[...,2:3],\n        a[...,0:1]*b[...,1:2] - a[...,1:2]*b[...,0:1]], axis=-1)\n\ndef dot(a, b, keepdims=False):\n    return np.sum(a*b, axis=-1, keepdims=keepdims)\n\ndef length(v, keepdims=False):\n    return np.linalg.norm(v, axis=-1, keepdims=keepdims)\n\ndef normalize(v):\n    lengths = length(v, keepdims=True)\n    lengths = np.where(lengths==0, 1e-10, lengths) # avoid 0 divide\n    return v / lengths\n\n"
  },
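`vec.normalize` substitutes a tiny epsilon for zero-length rows so the division never produces NaNs or infinities. A quick NumPy check of that behavior (standalone sketch of the same function):

```python
import numpy as np

def normalize(v):
    # replace zero lengths with a tiny epsilon so the division is safe
    lengths = np.linalg.norm(v, axis=-1, keepdims=True)
    lengths = np.where(lengths == 0, 1e-10, lengths)
    return v / lengths

v = np.array([[3.0, 0.0, 4.0],
              [0.0, 0.0, 0.0]])  # second row would otherwise divide by zero
out = normalize(v)
```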
  {
    "path": "blender/fbx2bvh.py",
    "content": ""
  },
  {
    "path": "configs/DeepLearning/LMM.toml",
    "content": "# Common settings\nexp_name = \"original\"\n\n# device settings\ndevice = \"cuda\"\ngpus = [0]\n\n# which model do you train? [\"all\", \"decompressor\", \"stepper\", \"projector\"]\ntrain_model = \"all\"\n\n# if you need preprocess (if false, load from processed_dir)\nneed_preprocess = true\n# processed data dir\nprocessed_dir = \"model/LMM/preprocessed\"\nprocessed_file_name = \"dataset.pkl\"\n\n# saved files settings\ncheckpoint_dir = \"model/LMM/checkpoints\"\nsave_config_dir = \"model/LMM/configs\"\n\n# random seed\nseed = 0\n\n# Dataset settings\n[dataset]\n  # dataset dir\n  dir = \"data/lafan1\"\n  files = [\"pushAndStumble1_subject5.bvh\", \"run1_subject5.bvh\", \"walk1_subject5.bvh\"]\n  starts = [194, 90, 80]\n  ends = [351, 7086, 7791]\n  ignore_end = 20\n  # matching features weights. (for decompressor)\n  # TBD: Specify feature that is used.\n  # [w_pos_foot, w_vel_foot, w_vel_hips, w_traj_pos, w_traj_dir]\n  feat_weights = [0.75, 1.0, 1.0, 1.0, 1.5]\n  [dataset.train]\n  [dataset.val]\n  [dataset.test]\n\n# Model settings\n[decompressor]\n  model = \"mlp\"\n  [decompressor.train]\n    max_iter = 300000\n    log_freq = 100\n    save_freq = 50000\n\n    num_epochs = 10\n    batch_size = 32\n    [decompressor.train.loss]\n      losses = [\"mse\"]\n      weights = [1.0]\n    [decompressor.train.optim]\n      method = \"radam\"\n      lr = 0.001\n      momentum = 0.9\n      betas = [0.5, 0.999]\n\n[stepper]\n  model = \"mlp\"\n  [stepper.train]\n    max_iter = 300000\n    log_freq = 100\n    save_freq = 50000\n    [stepper.train.loss]\n      losses = [\"mse\"]\n      weights = [1.0]\n    [stepper.train.optim]\n      method = \"radam\"\n      lr = 0.00003\n      betas = [0.5, 0.999]\n\n[projector]\n  model = \"mlp\"\n  [projector.train]\n    max_iter = 300000\n    log_freq = 100\n    save_freq = 50000\n    [projector.train.loss]\n      losses = [\"mse\"]\n      weights = [1.0]\n    [projector.train.optim]\n      method = \"radam\"\n      lr 
= 0.00003\n      betas = [0.5, 0.999]\n"
  },
  {
    "path": "data/data.md",
    "content": "# data\nPlease place your data here.  \nIn my case, I create symbolic links for datasets as shown below.\n\n```\ndata/\n|_ aistpp/\n|_ amass/\n|_ lafan1/\n|_ smpl/\n    |_ female/\n        |_ model.pkl\n    |_ male/\n        |_ model.pkl\n    |_ neutral/\n        |_ model.pkl\n|_smplh/\n    |_ female/\n        |_ model.npz\n    |_ male/\n        |_ model.npz\n    |_ neutral/\n        |_ model.npz\n|_smplx/\n    |_ female/\n        |_ model.npz\n    |_ male/\n        |_ model.npz\n    |_ neutral/\n        |_ model.npz\n```"
  },
  {
    "path": "model/LMM/LMM.md",
    "content": "<div align=\"center\">\n\n# Learned Motion Matching\n\n[**original repo**](https://github.com/orangeduck/Motion-Matching)  \n[**paper**](https://dl.acm.org/doi/abs/10.1145/3386569.3392440)  \n\n</div>\n\n\nUnofficial JAX imprementation of LMM.  If you want to train, simply run\n```python\npython model/LMM/train.py\n```\nIf you want to change experimental settings, please change settings on [`configs/DeepLearning/LMM.toml`](../../configs/DeepLearning/LMM.toml)."
  },
  {
    "path": "model/LMM/decompressor.py",
    "content": "from __future__ import annotations\n\nimport jax.numpy as jnp\nfrom flax import linen as nn\n\nclass Compressor(nn.Module):\n    \"\"\"A simple MLP network.\"\"\"\n    output_size: int\n    hidden_size: int = 512\n    \n    @nn.compact\n    def __call__(self, x):\n        nbatch, nwindow = x.shape[:2]\n        x = x.reshape([nbatch * nwindow, -1])\n        for _ in range(3):\n            x = nn.Dense(self.hidden_size)(x)\n            x = nn.elu(x)\n        x = nn.Dense(self.output_size)(x)\n        \n        return x.reshape([nbatch, nwindow, -1])\n\n\nclass Decompressor(nn.Module):\n    output_size: int\n    hidden_size: int = 512\n    \n    @nn.compact\n    def __call__(self, x):\n        nbatch, nwindow = x.shape[:2]\n        x = x.reshape([nbatch * nwindow, -1])\n        x = nn.Dense(self.hidden_size)(x)\n        x = nn.elu(x)\n        x = nn.Dense(self.output_size)(x)\n        \n        return x.reshape([nbatch, nwindow, -1])\n\n"
  },
  {
    "path": "model/LMM/preprocessing.py",
    "content": "from __future__ import annotations\n\nfrom pathlib import Path\nimport pickle\n\nfrom anim import bvh\nfrom anim.animation import Animation\nfrom anim.motion_matching.mm import create_matching_features\n\ndef preprocess_motion_data(BASEPATH: Path, cfg, save_dir: Path):\n    dataset_dir = BASEPATH / cfg.dir\n    anims = []\n    for file, start, end in zip(cfg.files, cfg.starts, cfg.ends):\n        anims.append(bvh.load(dataset_dir / file, start, end))\n    "
  },
  {
    "path": "model/LMM/projector.py",
    "content": "from __future__ import annotations\n\nimport jax.numpy as jnp\nfrom flax import linen as nn\n\nclass Projector(nn.Module):\n    \"\"\"A simple MLP network.\"\"\"\n    output_size: int\n    hidden_size: int = 512\n    \n    @nn.compact\n    def __call__(self, x):\n        nbatch, nwindow = x.shape[:2]\n        x = x.reshape([nbatch * nwindow, -1])\n        for _ in range(3):\n            x = nn.Dense(self.hidden_size)(x)\n            x = nn.elu(x)\n        x = nn.Dense(self.output_size)(x)\n        \n        return x.reshape([nbatch, nwindow, -1])\n\n\n\n"
  },
  {
    "path": "model/LMM/setting.py",
    "content": "\"\"\"\nsetting.py\nDevice setting etc..\n\"\"\"\nfrom __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nimport shutil\n\nfrom util.load import pickle_load, toml_load\nfrom model.LMM.preprocessing import preprocess_motion_data\n\nclass Config:\n    def __init__(self, BASEPATH: Path, setting_path: str) -> None:\n        # Load a setting file.\n        try:\n            if (BASEPATH / setting_path).exists():\n                setting_path = BASEPATH / setting_path\n                setting = toml_load(setting_path)\n            else:\n                setting = toml_load(setting_path)\n        except:\n            setting_path = BASEPATH / \"configs/DeepLearning/LMM.toml\"\n            setting = toml_load(setting_path)\n        sys.stdout.write(f\"Config file: {str(setting_path)}\\n\")\n        \n        self.setting = setting\n        self.BASEPATH = BASEPATH\n        self.seed = setting.seed\n        \n        # Device settings (gpu or cpu)\n        self.device = setting.device\n        # GPU settings.\n        if self.device == \"gpu\" or \"cuda\":\n            if isinstance(setting.gpus, list):\n                # if you use multi gpus\n                if len(setting.gpus) > 1:\n                    self.gpus = setting.gpus\n                    self.multi_gpus_setting()\n                else:\n                # if you use single gpu\n                    self.gpus = setting.gpus[0]\n                    self.single_gpu_setting()\n            else:\n                self.gpus = setting.gpus\n                self.single_gpu_setting()\n        \n        # Dataset settings.\n        # if you need preprocess\n        if setting.need_preprocess:\n            save_dir = BASEPATH / setting.processed_dir\n            if not save_dir.exists():\n                save_dir.mkdir(parents=True)\n                sys.stdout.write(f\"Create preprocessed folder at: {str(save_dir)}\\n\")\n            self.dataset = preprocess_motion_data(BASEPATH, 
setting.dataset, save_dir / setting.processed_file_name)\n        else:\n            self.dataset = pickle_load(save_dir / setting.processed_file_name)\n        \n        # Save experiment settings.\n        ckpt_dir = BASEPATH / setting.checkpoint_dir / setting.exp_name\n        # TBD: if exists, load checkpoints and resume the experiment.\n        if not ckpt_dir.exists():\n            ckpt_dir.mkdir(parents=True)\n            sys.stdout.write(f\"Create checkpoint folder at: {str(ckpt_dir)}\\n\")\n        self.ckpt_dir = ckpt_dir\n            \n        save_cfg_dir = BASEPATH / setting.save_config_dir\n        save_cfg_path = save_cfg_dir / f\"{setting.exp_name}.toml\"\n        if not save_cfg_dir.exists():\n            save_cfg_dir.mkdir(parents=True)\n            sys.stdout.write(f\"Create config folder at: {str(save_cfg_dir)}\\n\")\n        shutil.copy(str(setting_path), str(save_cfg_path))\n\n    def single_gpu_setting(self):\n        pass\n    \n    def multi_gpus_setting(self):\n        pass"
  },
  {
    "path": "model/LMM/stepper.py",
    "content": "from __future__ import annotations\n\nimport jax.numpy as jnp\nfrom flax import linen as nn\n\nclass Stepper(nn.Module):\n    \"\"\"A simple MLP network.\"\"\"\n    output_size: int\n    hidden_size: int = 512\n    \n    @nn.compact\n    def __call__(self, x):\n        nbatch, nwindow = x.shape[:2]\n        x = x.reshape([nbatch * nwindow, -1])\n        for _ in range(3):\n            x = nn.Dense(self.hidden_size)(x)\n            x = nn.elu(x)\n        x = nn.Dense(self.output_size)(x)\n        \n        return x.reshape([nbatch, nwindow, -1])"
  },
  {
    "path": "model/LMM/test.py",
    "content": ""
  },
  {
    "path": "model/LMM/train.py",
    "content": "\"\"\"\ntrain.py\nYou can define your configulation file like below.\n```\npython train.py configs/DeepLearning/LMM.toml\n```\n\"\"\"\nfrom __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nimport numpy as np\nimport jax.numpy as jnp\nfrom jax import jit, partial, random\nfrom flax import linen as nn\nfrom flax.training import train_state\nimport optax\n\nBASEPATH = Path(__file__).resolve().parent.parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom model.LMM.setting import Config\nfrom model.LMM.decompressor import Compressor, Decompressor\nfrom model.LMM.stepper import Stepper\nfrom model.LMM.projector import Projector\n\ndef main(cfg: Config):\n    ckpt_dir = cfg.ckpt_dir\n    \n    match cfg.setting.train_model:\n        case \"all\":\n            sys.stdout.write(\"Train models: decompressor, stepper, projector\\n\")\n            train_decompressor(cfg.setting.decompressor, ckpt_dir) \n            train_stepper(cfg.setting.stepper, ckpt_dir)\n            train_projector(cfg.setting.projector, ckpt_dir)\n        case \"decompressor\":\n            sys.stdout.write(\"Train model: decompressor\\n\")\n            train_decompressor(cfg.setting.decompressor, ckpt_dir)\n        case \"stepper\":\n            sys.stdout.write(\"Train model: stepper\\n\")\n            train_stepper(cfg.setting.stepper, ckpt_dir)\n        case \"projector\":\n            sys.stdout.write(\"Train model: projector\\n\")\n            train_projector(cfg.setting.projector, ckpt_dir)\n        case _:\n            raise ValueError(\"Invalid value. Please check LMM.toml.\")\n\ndef train_decompressor(cfg, ckpt_dir):\n    pass\n\ndef train_stepper(cfg, ckpt_dir):\n    pass\n\ndef train_projector(cfg, ckpt_dir):\n    pass\n\nif __name__ == \"__main__\":\n    setting_path = sys.argv[1]\n    cfg = Config(BASEPATH, setting_path)\n    \n    main(cfg)"
  },
  {
    "path": "test/bvh_viewer.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\n\nimport numpy as np\nimport taichi as ti\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom anitaichi.animation.anim_loader import bvh\n\n@ti.kernel\ndef get_frame(\n    gposs: ti.types.ndarray(),\n    cur_pose: ti.template(),\n    frame: int\n):\n    for i in cur_pose:\n        cur_pose[i] = gposs[frame, i] / 100\n\ndef update_camera(camera, anim, frame):\n    root_pos = anim.trans[frame] / 100\n    camera_pos = root_pos + np.array([0, 0, -10])\n\n    camera.position(*camera_pos)\n    camera.lookat(*root_pos)\n    scene.set_camera(camera)\n\nif __name__ == \"__main__\":\n    # Starting taichi on CUDA.\n    ti.init(arch=ti.cuda)\n    \n    window = ti.ui.Window(\"bvh viewer\", (1024, 1024), vsync=True)\n    gui = window.get_gui()\n    canvas = window.get_canvas()\n    canvas.set_background_color((1, 1, 1))\n    scene = ti.ui.Scene()\n    camera = ti.ui.Camera()\n\n    bvh_file = BASEPATH / \"data/aiming1_subject1.bvh\"\n    anim = bvh.load(bvh_file)\n    anim.ti_fk()\n    cur_pose = ti.Vector.field(3, dtype=float, shape=len(anim.skel))\n    frame = 0\n\n    while window.running:\n        get_frame(anim.global_positions_field, cur_pose, frame)\n        scene.particles(cur_pose, radius=0.05, color=(1.0, 0.0, 0.0))\n        scene.ambient_light((0.5, 0.5, 0.5))\n        update_camera(camera, anim, frame)\n\n        frame += 1\n        if frame == len(anim):\n            frame = 0\n        \n        canvas.scene(scene)\n        window.show()"
  },
  {
    "path": "test/calc_matching_time.py",
    "content": "# Calculate time for motion matching.\nimport sys\nfrom pathlib import Path\nimport timeit\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom anim import bvh\nfrom anim.motion_matching.database import Database\nfrom anim.motion_matching.mm import create_matching_database, motion_matching_search\n\ndef create_database(bvh_dir: Path, files: list, starts: list, ends: list) -> Database:\n    if isinstance(bvh_dir, str):\n        bvh_dir = Path(bvh_dir)\n    anims = []\n    for file, start, end in zip(files, starts, ends):\n        anim = bvh.load(bvh_dir / file, start, end)\n        anims.append(anim)\n    return Database(anims)\n\nbvh_dir = BASEPATH / \"data/lafan1\"\n\nfiles = [\"pushAndStumble1_subject5.bvh\", \"run1_subject5.bvh\", \"walk1_subject5.bvh\"]\nstarts = [194, 90, 80]\nends = [351, 7086, 7791]\n\nmatching_method = \"kd-tree\"\n\ndb = create_database(bvh_dir, files, starts, ends)\nmdb = create_matching_database(\n    db=db, method=matching_method,\n    w_pos_foot=0.75, w_vel_foot=1, w_vel_hips=1, w_traj_pos=1, w_traj_dir=1.5,\n    ignore_end=20, dense_bound_size=16, sparse_bound_size=64,\n)\n\ncur_idx = 100\nquery = mdb.features[mdb.indices.index(1320)]\n\niter = 100\nscore = timeit.timeit(\n    \"motion_matching_search(cur_idx, matching_method, mdb, query, False)\", \n    globals=globals(), \n    number=iter,\n)\nprint(score / iter)"
  },
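`calc_matching_time.py` reports the average wall-clock time per search by dividing a `timeit` total by the iteration count. The same measurement pattern in isolation, with a hypothetical stand-in workload in place of `motion_matching_search`:

```python
import timeit

def workload():
    # stand-in for motion_matching_search: any deterministic computation
    return sum(i * i for i in range(1000))

n_iter = 100
# timeit returns the TOTAL seconds across all iterations, so divide by n_iter
total = timeit.timeit("workload()", globals=globals(), number=n_iter)
avg = total / n_iter  # average seconds per call
```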
  {
    "path": "test/character_controller.py",
    "content": "# TBD\n\nimport sys\nfrom pathlib import Path\nimport pickle\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom anim import bvh\nfrom anim.motion_matching.database import Database\nfrom anim.motion_matching.mm import create_matching_database, motion_matching_search\n\ndef create_database(bvh_dir: Path, files: list, starts: list, ends: list) -> Database:\n    if isinstance(bvh_dir, str):\n        bvh_dir = Path(bvh_dir)\n    anims = []\n    for file, start, end in zip(files, starts, ends):\n        anim = bvh.load(bvh_dir / file, start, end)\n        anims.append(anim)\n    return Database(anims)\n\ndef main():\n    # init configs (this will replace to a yaml file).\n    matching_method = \"aabb\"\n    ignore_end = 20\n    \n    bvh_dir = BASEPATH / \"data/lafan1\"\n    files = [\"pushAndStumble1_subject5.bvh\", \"run1_subject5.bvh\", \"walk1_subject5.bvh\"]\n    starts = [194, 90, 80]\n    ends = [351, 7086, 7791]\n    \n    load_database = False\n    save_database = True\n    \n    search_time = 0.1\n    search_timer = search_time\n    \n    desired_velocity_change_threshold = 50.0\n    desired_rotation_change_threshold = 50.0\n    desired_gait = 0.0\n    desired_gait_velocity = 0.0\n    \n    simulation_velocity_halflife = 0.27\n    simulation_rotation_halflife = 0.27\n    \n    simulation_run_fwrd_speed = 4.0\n    simulation_run_side_speed = 3.0\n    simulation_run_back_speed = 2.5\n    simulation_walk_fwrd_speed = 1.75\n    simulation_walk_side_speed = 1.5\n    simulation_walk_back_speed = 1.25\n    \n    dt = 1 / 60\n    \n    \n    # Create `Database` and `MatchingDatabase`.\n    db = create_database(bvh_dir, files, starts, ends)\n    mdb = create_matching_database(\n        db=db, method=matching_method,\n        w_pos_foot=0.75, w_vel_foot=1, w_vel_hips=1, w_traj_pos=1, w_traj_dir=1.5,\n        dense_bound_size=16, sparse_bound_size=64,\n    )\n    if save_database:\n        with open(BASEPATH / 
\"data/motion_matching/db.pkl\", \"wb\") as f:\n            pickle.dump(db, f)\n        with open(BASEPATH / \"data/motion_matching/db.pkl\", \"wb\") as f:\n            pickle.dump(mdb, f)\n\n    # Initialize parameters\n    frame_idx = 0\n    \n    # Init the pose\n    \n    # Init the simulation (future direction etc.)\n    \n    # Init the camera\n    \n    \n    \n    \n    \n    \n    \"\"\"ここ以下は繰り返し\"\"\"\n    \n    # Generate query for motion matching.\n    query = None\n    \n    # Check if we reached the end of the current anim.\n    if frame_idx in db.ends: \n        end_of_anim = True\n        cur_idx = -1\n    else: \n        end_of_anim = False\n        cur_idx = frame_idx\n    \n    if search_timer <= 0 or end_of_anim:\n        # Motion Matching Search!\n        cur_idx = motion_matching_search(cur_idx, matching_method, db, mdb, query, ignore_end)\n        \n        if cur_idx != frame_idx:\n            frame_idx = cur_idx\n\n        search_time = search_timer\n    \n    search_timer -= dt\n    frame_idx += 1\n    \n    # Update the next pose\n    \n    # Update the simulation\n    \n    # Update the camera\n    \n    # Drawing\n    \n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "test/inverse_kinematics.py",
    "content": ""
  },
  {
    "path": "test/path_following.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nimport pickle\nimport numpy as np\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom util.load import pickle_load\nfrom util import quat\nfrom anim import bvh\nfrom anim.animation import Animation\nfrom anim.motion_matching.database import Database\nfrom anim.motion_matching.mm import create_matching_database, motion_matching_search\n\ndef create_database(bvh_dir: Path, files: list, starts: list, ends: list) -> Database:\n    if isinstance(bvh_dir, str):\n        bvh_dir = Path(bvh_dir)\n    anims = []\n    for file, start, end in zip(files, starts, ends):\n        anim = bvh.load(bvh_dir / file, start, end)\n        anims.append(anim)\n    return Database(anims)\n\ndef create_query(db: Database, idx: int, traj_feats: np.ndarray) -> np.ndarray:\n    left_foot_idx = db.skel.names.index(\"LeftFoot\")\n    right_foot_idx = db.skel.names.index(\"RightFoot\")\n    hips_idx = db.skel.names.index(\"Hips\")\n    \n    fpos = db.cpos[idx, [left_foot_idx, right_foot_idx]].ravel()\n    vels = db.cposvel[idx,[left_foot_idx, right_foot_idx, hips_idx]].ravel()\n    return np.concatenate([fpos, vels, traj_feats])\n\ndef create_circle_traj():\n    # Create path (circle)\n    t = np.linspace(0, np.pi * 2, 100)\n    x = 500 * np.cos(t)\n    z = 500 * np.sin(t)\n    dir_x = -z\n    dir_z = x\n    norm = np.sqrt(dir_x ** 2 + dir_z ** 2)\n    dir_x = dir_x / norm\n    dir_z = dir_z / norm\n    x -= 500\n    \n    return np.array([\n        x[20], z[20], x[40], z[40], x[60], z[60],\n        dir_x[20], dir_z[20], dir_x[40], dir_z[40], dir_x[60], dir_z[60]\n    ])\n    \n\ndef create_animation_from_idxs(db: Database, anim_frames: list[int]) -> Animation:\n    quats = []\n    trans = []\n    t = np.linspace(0, np.pi * 2, 100)\n    x = 500 * np.cos(t) - 500\n    z = 500 * np.sin(t)\n    \n    for i, frame in enumerate(anim_frames):\n        c_root_rot, 
root_pos = db.croot(frame)\n        rot = db.quats[frame]\n        i = i % 100\n        rot[0] = quat.mul(quat.from_angle_axis(t[i], [0, 1, 0]), c_root_rot)\n        root_pos[0] = x[i]\n        root_pos[2] = z[i]\n        \n        quats.append(rot)\n        trans.append(root_pos)\n    \n    quats = np.array(quats)\n    trans = np.array(trans)\n    return Animation(db.skel, quats, trans, db.fps)\n\ndef main():\n    matching_method = \"kd-tree\"\n    \n    bvh_dir = BASEPATH / \"data/lafan1\"\n    files = [\"pushAndStumble1_subject5.bvh\", \"run1_subject5.bvh\", \"walk1_subject5.bvh\"]\n    starts = [194, 90, 80]\n    ends = [351, 7086, 7791]\n    \n    load_database = False\n    save_database = True\n    \n    search_time = 5\n    search_timer = 0\n    \n    # Create `Database` and `MatchingDatabase`.\n    if load_database:\n        db = pickle_load(BASEPATH / \"data/db.pkl\")\n        mdb = pickle_load(BASEPATH / \"data/mdb.pkl\")\n    else:\n        db = create_database(bvh_dir, files, starts, ends)\n        mdb = create_matching_database(\n            db=db, method=matching_method,\n            w_pos_foot=0.75, w_vel_foot=1, w_vel_hips=1, w_traj_pos=1, w_traj_dir=1.5,\n            ignore_end=20, dense_bound_size=16, sparse_bound_size=64,\n        )\n    \n    if save_database:\n        with open(BASEPATH / \"data/db.pkl\", \"wb\") as f:\n            pickle.dump(db, f)\n        with open(BASEPATH / \"data/mdb.pkl\", \"wb\") as f:\n            pickle.dump(mdb, f)\n\n    # Initialize parameters\n    frame_idx = 0\n    frame_sum = 0\n    animation_length = 120 # frame\n    anim_frames = [] # animated frames will be appended here.\n    \n    # We define the path that moves forward 3m every 20 frames.\n    #               20,        40,       60    frames ahead.\n    # path_poss = [ 200, 0,    400, 0,   600, 0 ]\n    # path_dirs = [  0, 1,      0, 1,     0, 1  ]\n    # traj_feats = np.concatenate([path_poss, path_dirs])\n    \n    # Create path (circle)\n    traj_feats = 
create_circle_traj()\n    \n    # initial query\n    query = create_query(db, frame_idx, traj_feats)\n    \n    # Animation loop\n    while frame_sum < animation_length:\n        # Check if we reached the end of the current anim.\n        if frame_idx in db.ends: \n            end_of_anim = True\n            cur_idx = -1\n        else: \n            end_of_anim = False\n            cur_idx = frame_idx\n        \n        if search_timer <= 0 or end_of_anim:\n            # Motion Matching Search!\n            cur_idx = motion_matching_search(cur_idx, matching_method, mdb, query)\n            \n            if cur_idx != frame_idx:\n                frame_idx = cur_idx\n\n            search_timer = search_time\n        \n        # update frame\n        search_timer -= 1\n        frame_idx += 1\n        frame_sum += 1\n        anim_frames.append(frame_idx)\n        \n        # Create new query\n        query = create_query(db, frame_idx, traj_feats)\n        \n        sys.stdout.write(\"Processed frame: {}\\n\".format(frame_sum))\n    \n    # Save animation.\n    print(\"Creating animation..\")\n    sim_anim = create_animation_from_idxs(db, anim_frames)\n    bvh.save(BASEPATH / \"data/sim_motion.bvh\", sim_anim)\n    print(\"saved at {}\".format(str(BASEPATH / \"data/sim_motion.bvh\")))\n\nif __name__ == \"__main__\":\n    main()\n
  },
  {
    "path": "test/plotting.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nimport numpy as np\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg\nimport matplotlib.ticker as ticker\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\nimport tkinter as tk\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom anim import bvh\nfrom anim.motion_matching.database import Database\nfrom anim.motion_matching.mm import create_matching_database\n\nif __name__ == \"__main__\":\n    bvh_folder = BASEPATH / \"data\"\n    bvh_file = \"sim_motion.bvh\"\n    start = 90\n    # end = 7086\n    end = 2090\n    save_fig = True\n    \n    anim = bvh.load(bvh_folder / bvh_file, start, end)\n    db = Database([anim])\n    mdb = create_matching_database(\n        db=db, method=\"brute-force\",\n        w_pos_foot=0.75, w_vel_foot=1, w_vel_hips=1, w_traj_pos=1, w_traj_dir=1.5,\n        ignore_end=20, dense_bound_size=16, sparse_bound_size=64,\n    )\n    bar_lim = np.max(np.abs(mdb.features))\n    gposs = db.gpos / 100\n    cposs = db.cpos / 100\n    parents = db.parents\n    \n    proj_root_poss = db.proj_root_pos(remove_vertical=True)\n    root_dirs = db.root_direction(remove_vertical=True)\n    future_20_gposs = db.future_traj_poss(20, cspace=False)\n    future_20_gdirs = db.future_traj_dirs(20, cspace=False)\n    future_20_cposs = db.future_traj_poss(20, cspace=True)\n    future_20_cdirs = db.future_traj_dirs(20, cspace=True)\n    \n    fig = plt.figure(figsize=(7, 7))\n    axg = fig.add_subplot(221, projection=\"3d\")\n    axc = fig.add_subplot(223, projection=\"3d\")\n    axf = fig.add_subplot(122)\n    \n    def update(index: int):\n        gposes = gposs[index]\n        cposes = cposs[index]\n        features = mdb.features[index]\n        \n        proj_root_pos = proj_root_poss[index] / 100\n        root_dir = root_dirs[index]\n        \n        future_20_gpos = 
future_20_gposs[index] / 100\n        future_20_gdir = future_20_gdirs[index]\n        future_20_cpos = future_20_cposs[index] / 100\n        future_20_cdir = future_20_cdirs[index]\n        \n        # global space viewer settings\n        axg.cla()\n        axg.set_title(\"global\")\n        axg.grid(axis=\"y\")\n        axg.set_yticklabels([])\n        axg.yaxis.pane.set_facecolor(\"gray\")\n        axg.yaxis.set_major_locator(ticker.NullLocator())\n        axg.yaxis.set_minor_locator(ticker.NullLocator())\n        axg.view_init(elev=120, azim=-90)\n        axg.set_xlim3d(proj_root_pos[0]-3, proj_root_pos[0]+3)\n        axg.set_ylim3d(0, 3)\n        axg.set_zlim3d(proj_root_pos[1]-3, proj_root_pos[1]+3)\n        axg.set_box_aspect((1, 0.5, 1))\n        axg.scatter(gposes[:, 0], gposes[:, 1], gposes[:, 2], c=\"red\", s=3, label=\"joints\")\n        for i, parent in enumerate(parents):\n            if parent != -1:\n                axg.plot(\n                    [gposes[parent,0], gposes[i,0]],\n                    [gposes[parent,1], gposes[i,1]],\n                    [gposes[parent,2], gposes[i,2]], \n                    c=\"red\", alpha=0.8\n                )\n        axg.plot(\n            [proj_root_pos[0], proj_root_pos[0]+root_dir[0]],\n            [0, 0],\n            [proj_root_pos[1], proj_root_pos[1]+root_dir[1]],\n            label=\"root direction\", c=\"blue\"\n        )\n        axg.plot(\n            [future_20_gpos[0], future_20_gpos[0]+future_20_gdir[0]],\n            [0, 0],\n            [future_20_gpos[1], future_20_gpos[1]+future_20_gdir[1]],\n            label=\"future direction 20\", c=\"green\"\n        )\n        axg.legend(bbox_to_anchor=(0, 1), loc=\"upper left\", borderaxespad=0)\n        \n        # character space viewer settings\n        axc.cla()\n        axc.set_title(\"character\")\n        axc.grid(axis=\"y\")\n        axc.set_yticklabels([])\n        axc.yaxis.pane.set_facecolor(\"gray\")\n        
axc.yaxis.set_major_locator(ticker.NullLocator())\n        axc.yaxis.set_minor_locator(ticker.NullLocator())\n        axc.view_init(elev=120, azim=-90)\n        axc.set_xlim3d(-3, 3)\n        axc.set_ylim3d(0, 3)\n        axc.set_zlim3d(-3, 3)\n        axc.set_box_aspect((1,0.5,1))\n        axc.scatter(cposes[:, 0], cposes[:, 1], cposes[:, 2], c=\"red\", s=3, label=\"joints\")\n        for i, parent in enumerate(parents):\n            if parent != -1:\n                axc.plot(\n                    [cposes[parent,0], cposes[i,0]],\n                    [cposes[parent,1], cposes[i,1]],\n                    [cposes[parent,2], cposes[i,2]], \n                    c=\"red\", alpha=0.8\n                )\n        axc.plot([0, 0], [0, 0], [0, 1], label=\"root direction\", c=\"blue\")\n        axc.plot(\n            [future_20_cpos[0], future_20_cpos[0]+future_20_cdir[0]],\n            [0, 0],\n            [future_20_cpos[1], future_20_cpos[1]+future_20_cdir[1]],\n            label=\"future direction 20\", c=\"green\"\n        )\n        \n        # features bar settings\n        axf.cla()\n        axf.set_title(\"matching features\")\n        axf.grid(axis=\"x\")\n        axf.set_xlim(-bar_lim, bar_lim)\n        axf.barh(np.arange(len(features)), features)\n        \n        fig.suptitle(\"frame: {}\".format(index+1))\n        sys.stdout.write(\"processed frame: {}\\n\".format(index+1))\n        \n    ani = FuncAnimation(fig, update, frames=np.arange(len(db)), interval=1000/anim.fps, repeat=False)\n    \n    if save_fig:\n        fig_path = (BASEPATH / \"figs\" / bvh_file).with_suffix(\".gif\")\n        ani.save(fig_path)\n        sys.stdout.write(\"Saved figure as \" + str(fig_path) + \"\\n\")\n    else:\n        plt.show()"
  },
  {
    "path": "test/plotting_cspace.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nimport numpy as np\nimport matplotlib.ticker as ticker\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom anim import bvh\n\nif __name__ == \"__main__\":\n    bvh_folder = BASEPATH / \"data\"\n    bvh_file = \"sim_motion.bvh\"\n    start, end = None, None\n    elev, azim = 120, -90\n    save_fig = False\n    \n    anim = bvh.load(bvh_folder / bvh_file, start, end)\n    poss = anim.cpos / 100\n    parents = anim.parents\n    \n    future_20_poss = anim.future_traj_poss(20, cspace=True)\n    future_20_dirs = anim.future_traj_dirs(20, cspace=True)\n    \n    fig = plt.figure(figsize=(7, 7))\n    ax = fig.add_subplot(111, projection=\"3d\")\n    \n    def update(index: int):\n        poses = poss[index]\n        \n        future_20_pos = future_20_poss[index] / 100\n        future_20_dir = future_20_dirs[index]\n        \n        ax.cla()\n        ax.set_title(\"character\") \n        ax.grid(axis=\"y\")\n        ax.set_yticklabels([])\n        ax.yaxis.pane.set_facecolor(\"gray\")\n        ax.yaxis.set_major_locator(ticker.NullLocator())\n        ax.yaxis.set_minor_locator(ticker.NullLocator())\n        ax.view_init(elev=elev, azim=azim)\n        ax.set_xlim3d(-3, 3)\n        ax.set_ylim3d(0, 3)\n        ax.set_zlim3d(-3, 3)\n        ax.set_box_aspect((2, 1, 2))\n        \n        ax.scatter(poses[:, 0], poses[:, 1], poses[:, 2], c=\"red\", label=\"joints\")\n        for i, parent in enumerate(parents):\n            if parent != -1:\n                ax.plot(\n                    [poses[parent,0], poses[i,0]],\n                    [poses[parent,1], poses[i,1]],\n                    [poses[parent,2], poses[i,2]], \n                    c=\"red\", alpha=0.8\n                )\n        ax.plot(\n            [future_20_pos[0], 
future_20_pos[0]+future_20_dir[0]],\n            [0, 0],\n            [future_20_pos[1], future_20_pos[1]+future_20_dir[1]],\n            label=\"20 future direction\", c=\"green\"\n        )\n        ax.legend()\n        fig.suptitle(\"frame: {}\".format(index+1))\n        sys.stdout.write(\"processed frame: {}\\n\".format(index+1))\n    ani = FuncAnimation(fig, update, frames=np.arange(len(anim)), interval=1000/anim.fps, repeat=True)\n    \n    if save_fig:\n        fig_path = (BASEPATH / \"figs\" / bvh_file).with_suffix(\".gif\")\n        ani.save(fig_path)\n    else:\n        plt.show()"
  },
  {
    "path": "test/plotting_global.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nimport numpy as np\nimport matplotlib.ticker as ticker\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\n\nBASEPATH = Path(__file__).resolve().parent.parent\nsys.path.append(str(BASEPATH))\n\nfrom anim import bvh\n\nif __name__ == \"__main__\":\n    bvh_folder = BASEPATH / \"data\"\n    bvh_file = \"sim_motion.bvh\"\n    start, end = None, None\n    elev, azim = 120, -90\n    save_fig = False\n    \n    anim = bvh.load(bvh_folder / bvh_file, start, end)\n    poss = anim.gpos / 100\n    parents = anim.parents\n    \n    proj_root_poss = anim.proj_root_pos(remove_vertical=True)\n    root_dirs = anim.root_direction(remove_vertical=True)\n    future_20_poss = anim.future_traj_poss(20, cspace=False)\n    future_20_dirs = anim.future_traj_dirs(20, cspace=False)\n    \n    fig = plt.figure(figsize=(7, 7))\n    ax = fig.add_subplot(111, projection=\"3d\")\n    \n    def update(index: int):\n        poses = poss[index]\n        \n        # root direction\n        proj_root_pos = proj_root_poss[index] / 100\n        root_dir = root_dirs[index]\n        \n        # future root direction\n        future_20_pos = future_20_poss[index] / 100\n        future_20_dir = future_20_dirs[index]\n        \n        # plot settings\n        ax.cla()\n        ax.set_title(\"global\") \n        ax.grid(axis=\"y\")\n        ax.set_yticklabels([])\n        ax.yaxis.pane.set_facecolor(\"gray\")\n        ax.yaxis.set_major_locator(ticker.NullLocator())\n        ax.yaxis.set_minor_locator(ticker.NullLocator())\n        ax.view_init(elev=elev, azim=azim)\n        ax.set_xlim3d(proj_root_pos[0]-3, proj_root_pos[0]+3)\n        ax.set_ylim3d(0, 3)\n        ax.set_zlim3d(proj_root_pos[1]-3, proj_root_pos[1]+3)\n        ax.set_box_aspect((2, 1, 2))\n        \n        ax.scatter(poses[:, 0], poses[:, 1], poses[:, 2], c=\"red\", label=\"joints\")\n        for i, parent in 
enumerate(parents):\n            if parent != -1:\n                ax.plot(\n                    [poses[parent,0], poses[i,0]],\n                    [poses[parent,1], poses[i,1]],\n                    [poses[parent,2], poses[i,2]], \n                    c=\"red\", alpha=0.8\n                )\n        ax.plot(\n            [proj_root_pos[0], proj_root_pos[0]+root_dir[0]],\n            [0, 0],\n            [proj_root_pos[1], proj_root_pos[1]+root_dir[1]],\n            label=\"root direction\", c=\"blue\"\n        )\n        ax.plot(\n            [future_20_pos[0], future_20_pos[0]+future_20_dir[0]],\n            [0, 0],\n            [future_20_pos[1], future_20_pos[1]+future_20_dir[1]],\n            label=\"direction 20 frames ahead\", c=\"green\"\n        )\n        ax.legend()\n        fig.suptitle(\"frame: {}\".format(index+1))\n        sys.stdout.write(\"Processed frame: {}\\n\".format(index+1))\n    ani = FuncAnimation(fig, update, frames=np.arange(len(anim)), interval=1000/anim.fps, repeat=True)\n    \n    if save_fig:\n        fig_path = (BASEPATH / \"figs\" / bvh_file).with_suffix(\".gif\")\n        ani.save(fig_path)\n        sys.stdout.write(\"Saved figure as \" + str(fig_path) + \"\\n\")\n    else:\n        plt.show()"
  },
  {
    "path": "todo.md",
    "content": "# TODO Works\n## Motion Matching\n- [x] Verify that the implemented matching algorithm runs in real time (multiple runs) - currently 0.33 s per query\n- [ ] Implement GPU matching with FAISS - faiss-gpu cannot be installed from Anaconda?\n- [ ] Implement a `Pose` class in [`anim/pose.py`](anim/pose.py) as a container that stores joint positions and related data.\n- [ ] Create animation from a pre-defined curve (offline: store the simulation bone's positions and orientations in an array of shape (T, pos + dir) beforehand, then synthesize an animation that follows it)\n- [ ] Support keyboard input\n- [ ] Build the simulation (move and rotate the simulation bone in response to keyboard input)\n- [ ] Build the future trajectory curve (generated from keyboard input)\n- [ ] Prepare a graphics interface (PyOpenGL or Taichi?)\n- [ ] Verify that plain pose-playback switching works in real time, ignoring inertialization and IK\n- [ ] Implement the damping algorithm ([`anim/blend.py`](anim/blend.py))\n- [ ] Integrate foot IK into [`Animation`](anim/animation.py)\n- [ ] Create natural animation in real time using foot IK and inertialization\n- [ ] Support AMASS data as well\n\n## Others\n- [ ] Complete the implementation of motion blending\n- [ ] Build motion graphs (reusing part of the motion matching functionality)\n- [ ] Integrate full-body IK into [`Animation`](anim/animation.py) (Jacobian and FABRIK).\n- [ ] Reimplement Learned Motion Matching\n- [ ] Investigate strategies for crowd animation\n- [ ] Implement LBS (linear blend skinning)\n"
  },
  {
    "path": "update.md",
    "content": "## Update 1.\n1. Add reading and writing of positions via bvh.\n2. Specify the forward and vertical directions.\n3. Add foot contact detection."
  },
  {
    "path": "util/dualquat.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\nfrom util import quat\n\n# Make identity dual quaternions (no rotation, no translation).\ndef eye(shape: list[int], dtype=np.float32) -> np.ndarray:\n    return np.ones(list(shape) + [8], dtype=dtype) * \\\n        np.asarray([1, 0, 0, 0, 0, 0, 0, 0], dtype=dtype)\n\n# Normalize by the magnitude of the real (rotation) part.\ndef normalize(dq: np.ndarray, eps=1e-8) -> np.ndarray:\n    mag = quat.length(dq[...,:4])\n    mag = np.where(mag==0, eps, mag)\n    return dq / mag[...,None]\n\ndef abs(dq: np.ndarray) -> np.ndarray:\n    real = np.where(dq[...,0:1] > 0.0, dq[...,:4], -dq[...,:4])\n    dual = np.where(dq[...,4:5] > 0.0, dq[...,4:], -dq[...,4:])\n    return np.concatenate([real, dual], axis=-1)\n\n# Calculate inverse transformations.\ndef inv(dq: np.ndarray) -> np.ndarray:\n    real = quat.inv(dq[...,:4])\n    dual = -quat.mul_inv(quat.inv_mul(dq[...,:4], dq[...,4:]), dq[...,:4])\n    return np.concatenate([real, dual], axis=-1)\n\n# Multiply two dual quaternions (compose transformations).\ndef mul(x: np.ndarray, y: np.ndarray) -> np.ndarray:\n    real = quat.mul(x[...,:4], y[...,:4])\n    dual = quat.mul(x[...,:4], y[...,4:]) + quat.mul(x[...,4:], y[...,:4])\n    return np.concatenate([real, dual], axis=-1)\n\n# Make dual quaternions from translations only.\ndef from_trans(trans: np.ndarray) -> np.ndarray:\n    dual = np.zeros(trans.shape[:-1] + (4,))\n    dual[...,1:] = trans * 0.5\n    return np.concatenate([quat.eye(trans.shape[:-1]), dual], axis=-1)\n\n# Make dual quaternions from rotations only.\ndef from_rot(rot: np.ndarray) -> np.ndarray:\n    return np.concatenate([rot, np.zeros(rot.shape[:-1] + (4,))], axis=-1)\n\ndef from_rot_and_trans(rot: np.ndarray, trans: np.ndarray) -> np.ndarray:\n    rot = quat.normalize(rot)\n    dual = np.zeros(trans.shape[:-1] + (4,))\n    dual[...,1:] = trans\n    dual = quat.mul(dual, rot) * 0.5\n    return np.concatenate([rot, dual], axis=-1)\n\ndef to_trans(dq: np.ndarray) -> np.ndarray:\n    return 2 * quat.mul_inv(dq[...,4:], dq[...,:4])[...,1:]\n\ndef to_rot(dq: np.ndarray) -> np.ndarray:\n    return dq[...,:4]\n\n# Calculate global space transformations from local space.\ndef fk(dq: np.ndarray, parents: list[int]) -> np.ndarray:\n    gdq = [dq[...,:1,:]]\n    for i in range(1, len(parents)):\n        gdq.append(mul(gdq[parents[i]], dq[...,i:i+1,:]))\n    return np.concatenate(gdq, axis=-2)"
  },
  {
    "path": "util/load.py",
    "content": "\"\"\"\nload.py\n\nUtility loaders that read TOML, YAML, pickle, and JSON files,\nwrapping the results in an EasyDict where applicable.\n\"\"\"\nfrom __future__ import annotations\n\nfrom pathlib import Path\nimport toml, yaml, pickle, json\nfrom easydict import EasyDict\n\ndef toml_load(path: Path):\n    with open(path, \"r\") as f:\n        cfg = toml.load(f)\n    cfg = EasyDict(cfg)\n    return cfg\n\ndef yaml_load(path: Path):\n    with open(path, \"r\") as f:\n        cfg = yaml.load(f, Loader=yaml.Loader)\n    cfg = EasyDict(cfg)\n    return cfg\n\ndef pickle_load(path: Path, encoding: str=\"ASCII\"):\n    with open(path, \"rb\") as f:\n        cfg = pickle.load(f, encoding=encoding)\n    return cfg\n\ndef json_load(path: Path):\n    with open(path, \"r\") as f:\n        cfg = json.load(f)\n    cfg = EasyDict(cfg)\n    return cfg"
  },
  {
    "path": "util/quat.py",
    "content": "from __future__ import annotations\n\nimport numpy as np\n\n# Calculate the cross product of two 3D vectors.\ndef _fast_cross(a, b):\n    return np.concatenate([\n        a[...,1:2]*b[...,2:3] - a[...,2:3]*b[...,1:2],\n        a[...,2:3]*b[...,0:1] - a[...,0:1]*b[...,2:3],\n        a[...,0:1]*b[...,1:2] - a[...,1:2]*b[...,0:1]], axis=-1)\n\n# Make identity quaternions (no rotation).\ndef eye(shape, dtype=np.float32):\n    return np.ones(list(shape) + [4], dtype=dtype) * np.asarray([1, 0, 0, 0], dtype=dtype)\n\n# Return norm of quaternions.\ndef length(x):\n    return np.sqrt(np.sum(x * x, axis=-1))\n\n# Make unit quaternions.\ndef normalize(x, eps=1e-8):\n    return x / (length(x)[...,None] + eps)\n\ndef abs(x):\n    return np.where(x[...,0:1] > 0.0, x, -x)\n\n# Calculate inverse rotations.\ndef inv(q):\n    return np.array([1, -1, -1, -1], dtype=np.float32) * q\n\n# Calculate the dot product of two quaternions.\ndef dot(x, y):\n    return np.sum(x * y, axis=-1)[...,None] if x.ndim > 1 else np.sum(x * y, axis=-1)\n\n# Multiply two quaternions (return rotations).\ndef mul(x, y):\n    x0, x1, x2, x3 = x[..., 0:1], x[..., 1:2], x[..., 2:3], x[..., 3:4]\n    y0, y1, y2, y3 = y[..., 0:1], y[..., 1:2], y[..., 2:3], y[..., 3:4]\n\n    return np.concatenate([\n        y0 * x0 - y1 * x1 - y2 * x2 - y3 * x3,\n        y0 * x1 + y1 * x0 - y2 * x3 + y3 * x2,\n        y0 * x2 + y1 * x3 + y2 * x0 - y3 * x1,\n        y0 * x3 - y1 * x2 + y2 * x1 + y3 * x0], axis=-1)\n\ndef inv_mul(x, y):\n    return mul(inv(x), y)\n\ndef mul_inv(x, y):\n    return mul(x, inv(y))\n\n# Multiply quaternions and vectors (return vectors).\ndef mul_vec(q, x):\n    t = 2.0 * _fast_cross(q[..., 1:], x)\n    return x + q[..., 0][..., None] * t + _fast_cross(q[..., 1:], t)\n\ndef inv_mul_vec(q, x):\n    return mul_vec(inv(q), x)\n\n# Flip signs along the time axis so consecutive quaternions stay on the same hemisphere.\ndef unroll(x):\n    y = x.copy()\n    for i in range(1, len(x)):\n        d0 = np.sum( y[i] * y[i-1], axis=-1)\n        d1 = np.sum(-y[i] * y[i-1], axis=-1)\n        
y[i][d0 < d1] = -y[i][d0 < d1]\n    return y\n\n# Calculate quaternions between two unit 3D vectors (x to y).\ndef between(x, y):\n    return np.concatenate([\n        np.sqrt(np.sum(x*x, axis=-1) * np.sum(y*y, axis=-1))[...,None] + \n        np.sum(x * y, axis=-1)[...,None], \n        _fast_cross(x, y)], axis=-1)\n\ndef log(x, eps=1e-5):\n    length = np.sqrt(np.sum(np.square(x[...,1:]), axis=-1))[...,None]\n    halfangle = np.where(length < eps, np.ones_like(length), np.arctan2(length, x[...,0:1]) / length)\n    return halfangle * x[...,1:]\n    \ndef exp(x, eps=1e-5):\n    halfangle = np.sqrt(np.sum(np.square(x), axis=-1))[...,None]\n    c = np.where(halfangle < eps, np.ones_like(halfangle), np.cos(halfangle))\n    s = np.where(halfangle < eps, np.ones_like(halfangle), np.sinc(halfangle / np.pi))\n    return np.concatenate([c, s * x], axis=-1)\n\n# Calculate global space rotations and positions from local space.\ndef fk(lrot, lpos, parents):\n    \n    gp, gr = [lpos[...,:1,:]], [lrot[...,:1,:]]\n    for i in range(1, len(parents)):\n        gp.append(mul_vec(gr[parents[i]], lpos[...,i:i+1,:]) + gp[parents[i]])\n        gr.append(mul    (gr[parents[i]], lrot[...,i:i+1,:]))\n        \n    return np.concatenate(gr, axis=-2), np.concatenate(gp, axis=-2)\n\ndef fk_rot(lrot, parents):\n    \n    gr = [lrot[...,:1,:]]\n    for i in range(1, len(parents)):\n        gr.append(mul(gr[parents[i]], lrot[...,i:i+1,:]))\n        \n    return np.concatenate(gr, axis=-2)\n\ndef fk_vel(lrot, lpos, lvel, lang, parents):\n    \n    gp, gr, gv, ga = [lpos[...,:1,:]], [lrot[...,:1,:]], [lvel[...,:1,:]], [lang[...,:1,:]]\n    for i in range(1, len(parents)):\n        gp.append(mul_vec(gr[parents[i]], lpos[...,i:i+1,:]) + gp[parents[i]])\n        gr.append(mul    (gr[parents[i]], lrot[...,i:i+1,:]))\n        gv.append(mul_vec(gr[parents[i]], lvel[...,i:i+1,:]) + \n            _fast_cross(ga[parents[i]], mul_vec(gr[parents[i]], lpos[...,i:i+1,:])) +\n            gv[parents[i]])\n      
  ga.append(mul_vec(gr[parents[i]], lang[...,i:i+1,:]) + ga[parents[i]])\n        \n    return (\n        np.concatenate(gr, axis=-2), \n        np.concatenate(gp, axis=-2),\n        np.concatenate(gv, axis=-2),\n        np.concatenate(ga, axis=-2))\n\n# Calculate local space rotations and positions from global space.\ndef ik(grot, gpos, parents):\n    \n    return (\n        np.concatenate([\n            grot[...,:1,:],\n            mul(inv(grot[...,parents[1:],:]), grot[...,1:,:]),\n        ], axis=-2),\n        np.concatenate([\n            gpos[...,:1,:],\n            mul_vec(\n                inv(grot[...,parents[1:],:]),\n                gpos[...,1:,:] - gpos[...,parents[1:],:]),\n        ], axis=-2))\n\ndef ik_rot(grot, parents):\n    \n    return np.concatenate([grot[...,:1,:], mul(inv(grot[...,parents[1:],:]), grot[...,1:,:]),\n        ], axis=-2)\n\n# ================================================================\n#  Conversion from quaternions to other rotation representations.\n# ================================================================\n# Calculate axis-angle from quaternions.\n# This function is based on ACTOR\n# https://github.com/Mathux/ACTOR/blob/d3b0afe674e01fa2b65c89784816c3435df0a9a5/src/utils/rotation_conversions.py#L481\ndef to_axis_angle(x, eps=1e-5):\n    norm = np.linalg.norm(x[...,1:],axis=-1,keepdims=True)\n    half_angle = np.arctan2(norm, x[...,:1])\n    angle = 2 * half_angle\n    small_angle = np.abs(angle) < eps\n    sin_half_angle_over_angle = np.empty_like(angle)\n    sin_half_angle_over_angle[~small_angle] = (\n        np.sin(half_angle[~small_angle]) / angle[~small_angle]\n    )\n    # for x small, sin(x/2) is about x/2 - (x/2)^3/6\n    # so sin(x/2)/x is about 1/2 - (x*x)/48\n    sin_half_angle_over_angle[small_angle] = (\n        0.5 - (angle[small_angle] * angle[small_angle]) / 48\n    )\n    return x[..., 1:] / sin_half_angle_over_angle\n\n# Calculate euler angles from quaternions (under construction!).\ndef 
to_euler(x, order='zyx'):\n    x0, x1, x2, x3 = x[..., 0:1], x[..., 1:2], x[..., 2:3], x[..., 3:4]\n\n    if order == 'zyx':\n        return np.concatenate([\n            np.arctan2(2.0 * (x0 * x3 + x1 * x2), 1.0 - 2.0 * (x2 * x2 + x3 * x3)),\n            np.arcsin(np.clip(2.0 * (x0 * x2 - x3 * x1), -1.0, 1.0)),\n            np.arctan2(2.0 * (x0 * x1 + x2 * x3), 1.0 - 2.0 * (x1 * x1 + x2 * x2)),\n        ], axis=-1)\n    elif order == 'xzy':\n        return np.concatenate([\n            np.arctan2(2.0 * (x1 * x0 - x2 * x3), -x1 * x1 + x2 * x2 - x3 * x3 + x0 * x0),\n            np.arctan2(2.0 * (x2 * x0 - x1 * x3), x1 * x1 - x2 * x2 - x3 * x3 + x0 * x0),\n            np.arcsin(np.clip(2.0 * (x1 * x2 + x3 * x0), -1.0, 1.0))\n        ], axis=-1)\n    else:\n        raise NotImplementedError('Cannot convert to ordering %s' % order)\n\n# Calculate rotation matrix from quaternions.\ndef to_xform(x):\n\n    qw, qx, qy, qz = x[...,0:1], x[...,1:2], x[...,2:3], x[...,3:4]\n    \n    x2, y2, z2 = qx + qx, qy + qy, qz + qz\n    xx, yy, wx = qx * x2, qy * y2, qw * x2\n    xy, yz, wy = qx * y2, qy * z2, qw * y2\n    xz, zz, wz = qx * z2, qz * z2, qw * z2\n    \n    return np.concatenate([\n        np.concatenate([1.0 - (yy + zz), xy - wz, xz + wy], axis=-1)[...,None,:],\n        np.concatenate([xy + wz, 1.0 - (xx + zz), yz - wx], axis=-1)[...,None,:],\n        np.concatenate([xz - wy, yz + wx, 1.0 - (xx + yy)], axis=-1)[...,None,:],\n    ], axis=-2)\n\n# Calculate 6d orthogonal rotation representation (ortho6d) from quaternions.\n# https://github.com/papagina/RotationContinuity\ndef to_xform_xy(x):\n\n    qw, qx, qy, qz = x[...,0:1], x[...,1:2], x[...,2:3], x[...,3:4]\n    \n    x2, y2, z2 = qx + qx, qy + qy, qz + qz\n    xx, yy, wx = qx * x2, qy * y2, qw * x2\n    xy, yz, wy = qx * y2, qy * z2, qw * y2\n    xz, zz, wz = qx * z2, qz * z2, qw * z2\n    \n    return np.concatenate([\n        np.concatenate([1.0 - (yy + zz), xy - wz], axis=-1)[...,None,:],\n        
np.concatenate([xy + wz, 1.0 - (xx + zz)], axis=-1)[...,None,:],\n        np.concatenate([xz - wy, yz + wx], axis=-1)[...,None,:],\n    ], axis=-2)\n\n# Calculate scaled angle axis from quaternions.\ndef to_scaled_angle_axis(x, eps=1e-5):\n    return 2.0 * log(x, eps)\n\n\n# ================================================================\n#  Conversion from other rotation representations to quaternions.\n# ================================================================\n\n# Calculate quaternions from axis and angle.\ndef from_angle_axis(angle, axis):\n    c = np.cos(angle / 2.0)[..., None]\n    s = np.sin(angle / 2.0)[..., None]\n    q = np.concatenate([c, s * axis], axis=-1)\n    return q\n\n# Calculate quaternions from axis-angle.\ndef from_axis_angle(rots, eps=1e-5):\n    angle = np.linalg.norm(rots, axis=-1)\n    angle = np.where(angle==0, eps, angle)\n    axis = rots / angle[...,None]\n    return from_angle_axis(angle, axis)\n\n# Calculate quaternions from euler angles.\ndef from_euler(e, order='zyx', mode=\"degree\"):\n    if mode==\"degree\":\n        e = np.deg2rad(e)\n    axis = {\n        'x': np.asarray([1, 0, 0], dtype=np.float32),\n        'y': np.asarray([0, 1, 0], dtype=np.float32),\n        'z': np.asarray([0, 0, 1], dtype=np.float32)}\n\n    q0 = from_angle_axis(e[..., 0], axis[order[0]])\n    q1 = from_angle_axis(e[..., 1], axis[order[1]])\n    q2 = from_angle_axis(e[..., 2], axis[order[2]])\n\n    return mul(q0, mul(q1, q2))\n\n# Calculate quaternions from rotation matrix.\ndef from_xform(ts):\n    \n    return normalize(\n        np.where((ts[...,2,2] < 0.0)[...,None],\n            np.where((ts[...,0,0] >  ts[...,1,1])[...,None],\n                np.concatenate([\n                    (ts[...,2,1]-ts[...,1,2])[...,None], \n                    (1.0 + ts[...,0,0] - ts[...,1,1] - ts[...,2,2])[...,None], \n                    (ts[...,1,0]+ts[...,0,1])[...,None], \n                    (ts[...,0,2]+ts[...,2,0])[...,None]], axis=-1),\n                
np.concatenate([\n                    (ts[...,0,2]-ts[...,2,0])[...,None], \n                    (ts[...,1,0]+ts[...,0,1])[...,None], \n                    (1.0 - ts[...,0,0] + ts[...,1,1] - ts[...,2,2])[...,None], \n                    (ts[...,2,1]+ts[...,1,2])[...,None]], axis=-1)),\n            np.where((ts[...,0,0] < -ts[...,1,1])[...,None],\n                np.concatenate([\n                    (ts[...,1,0]-ts[...,0,1])[...,None], \n                    (ts[...,0,2]+ts[...,2,0])[...,None], \n                    (ts[...,2,1]+ts[...,1,2])[...,None], \n                    (1.0 - ts[...,0,0] - ts[...,1,1] + ts[...,2,2])[...,None]], axis=-1),\n                np.concatenate([\n                    (1.0 + ts[...,0,0] + ts[...,1,1] + ts[...,2,2])[...,None], \n                    (ts[...,2,1]-ts[...,1,2])[...,None], \n                    (ts[...,0,2]-ts[...,2,0])[...,None], \n                    (ts[...,1,0]-ts[...,0,1])[...,None]], axis=-1))))\n\n# Calculate quaternions from ortho6d.\ndef from_xform_xy(x):\n\n    c2 = _fast_cross(x[...,0], x[...,1])\n    c2 = c2 / np.sqrt(np.sum(np.square(c2), axis=-1))[...,None]\n    c1 = _fast_cross(c2, x[...,0])\n    c1 = c1 / np.sqrt(np.sum(np.square(c1), axis=-1))[...,None]\n    c0 = x[...,0]\n    \n    return from_xform(np.concatenate([\n        c0[...,None], \n        c1[...,None], \n        c2[...,None]], axis=-1))\n\n# Calculate quaternions from scaled angle axis.\ndef from_scaled_angle_axis(x, eps=1e-5):\n    return exp(x / 2.0, eps)"
  }
]