[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\n.venv-py2\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n# vscode\n.vscode\n\n# data\ndata\n\nlog/\n\noutput/\nckpt/\n\n*.pth"
  },
  {
    "path": "README.md",
    "content": "# GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF (ICRA 2023)\n\nThis is the official repository of [**GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF**](https://arxiv.org/abs/2210.06575).\n\nFor more information, please visit our [**project page**](https://pku-epic.github.io/GraspNeRF/).\n\n## Introduction\n<img src=\"images/teaser.png\" width=\"640\">\n\nIn this work, we propose a multiview RGB-based 6-DoF grasp detection network, **GraspNeRF**, \nthat leverages the **generalizable neural radiance field (NeRF)** to achieve material-agnostic object grasping in clutter. \nCompared to existing NeRF-based 3-DoF grasp detection methods, which rely on densely captured input images and time-consuming per-scene optimization, \nour system performs zero-shot NeRF construction from **sparse RGB inputs** and reliably detects **6-DoF grasps**, both in **real time**. \nThe proposed framework jointly learns the generalizable NeRF and the grasp detector in an **end-to-end** manner, optimizing the scene representation construction for grasping. \nFor training data, we generate a large-scale photorealistic **domain-randomized synthetic dataset** of grasping in cluttered tabletop scenes, which enables direct transfer to the real world. \nExperiments in synthetic and real-world environments demonstrate that our method significantly outperforms all baselines.\n\n## Overview\nThis repository provides:\n- PyTorch code and pretrained weights of GraspNeRF.\n- A grasp simulator based on Blender and PyBullet.\n- A multiview 6-DoF grasping dataset generator and example data.\n\n## Dependencies\n1. Please run\n```\npip install -r requirements.txt\n```\nto install the dependencies.\n\n2. (Optional) Please install [Blender 2.93.3 (Ubuntu)](https://www.blender.org/) if you need simulation.\n\n## Data & Checkpoints\n1. 
Please generate, or download and uncompress, the [example data](https://drive.google.com/file/d/1Ku-EotayUhfv5DtXAvFitGzzdMF84Ve2/view?usp=share_link) to `data/` for training, and the [rendering assets](https://drive.google.com/file/d/1Udvi2QQ6AtYDLUWY0oH-PO2R6kZBxJLT/view?usp=share_link) to `data/assets` for simulation. \nIn particular, download the [ImageNet validation set](https://image-net.org/data/ILSVRC/2010/ILSVRC2010_images_val.tar) to `data/assets/imagenet/images/val`; its images are used as random textures in simulation. \n2. We provide pretrained weights for testing. Please download the [checkpoint](https://drive.google.com/file/d/1k-Cy4NO2isCBYc3az-34HEdcNxDptDgU/view?usp=share_link) to `src/nr/ckpt/test`. \n\n## Testing\nOur grasp simulation pipeline depends on Blender and PyBullet. Please verify both installations before running the simulation.\n\nOnce the dependencies and assets are ready, please run\n```\nbash run_simgrasp.sh\n```\n\n## Training\nOnce the training data is ready, please run\n```\nbash train.sh GPU_ID\n```\ne.g. `bash train.sh 0`.\n\n## Data Generator\n1. Download the scene descriptor files from [GIGA](https://github.com/UT-Austin-RPL/GIGA#pre-generated-data) and the [assets](https://drive.google.com/file/d/1-59zcQ8h5esT_ogjaDjtzQ6sG70WNWzU/view?usp=share_link).\n2. 
For example, run\n```\nbash run_pile_rand.sh\n```\nin `./data_generator` for pile data generation.\n\n## Citation\nIf you find our work useful in your research, please consider citing:\n\n```\n@inproceedings{Dai2023GraspNeRF,\n  title={GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF},\n  author={Qiyu Dai and Yan Zhu and Yiran Geng and Ciyu Ruan and Jiazhao Zhang and He Wang},\n  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},\n  year={2023}\n}\n```\n\n## License\n\nThis work and the dataset are licensed under [CC BY-NC 4.0][cc-by-nc].\n\n[![CC BY-NC 4.0][cc-by-nc-image]][cc-by-nc]\n\n[cc-by-nc]: https://creativecommons.org/licenses/by-nc/4.0/\n[cc-by-nc-image]: https://licensebuttons.net/l/by-nc/4.0/88x31.png\n\n## Contact\nIf you have any questions, please open a GitHub issue or contact us:\n\nQiyu Dai: qiyudai@pku.edu.cn, Yan Zhu: zhuyan_@stu.pku.edu.cn, He Wang: hewang@pku.edu.cn\n"
  },
  {
    "path": "data_generator/modify_material.py",
    "content": "from mathutils import Vector\nimport bpy\nimport random\n\n\ndef modify_material(mat_links, mat_nodes, material_name, mat_randomize_mode, is_texture=False, orign_base_color=None, tex_node=None, is_transfer=True, is_arm=False):\n    # Material classes that receive stronger specular randomization.\n    specular_classes = (\"metal\", \"porcelain\", \"plasticsp\", \"paintsp\")\n    mat_class = material_name.split(\"_\")[0]\n\n    transfer_flag = False\n    tex_mix_prop = 1\n    mix_prop = 1\n    if is_transfer:\n        if mat_class in specular_classes:\n            tex_mix_prop = random.uniform(0.85, 0.98)\n        else:\n            tex_mix_prop = random.uniform(0.7, 0.95)\n        mix_prop = random.uniform(0.6, 0.9)\n\n        # Randomly decide whether to mix a transferred (texture / base-color) BSDF into the material.\n        if mat_randomize_mode in (\"specular_texmix\", \"mixed\") or mat_class in specular_classes:\n            transfer_flag = random.randint(0, 2) == 1\n        else:\n            transfer_flag = True\n        if not transfer_flag:\n            tex_mix_prop = 1\n            mix_prop = 1\n    if not transfer_flag:\n        # Grayscale jitter of the base color (skipped for the robot arm).\n        bs_color_rand = 0 if is_arm else random.uniform(-0.2, 0.2)\n        r_rand = g_rand = b_rand = bs_color_rand\n\n    bsdfnode_list = [n for n in mat_nodes if isinstance(n, bpy.types.ShaderNodeBsdfPrincipled)]\n    if bsdfnode_list:\n        for bsdfnode in bsdfnode_list:\n            if not 
bsdfnode.inputs[4].links:    # metallic\n                src_value = bsdfnode.inputs[4].default_value\n                if material_name.split(\"_\")[0] == \"metal\":\n                    new_value = src_value + random.uniform(-0.05, 0.05)\n                elif material_name.split(\"_\")[0] == \"porcelain\":\n                    new_value = src_value + random.uniform(-0.05, 0.1)\n                elif material_name.split(\"_\")[0] == \"plasticsp\":\n                    new_value = src_value + random.uniform(-0.05, 0.1)\n                else:\n                    new_value = src_value + random.uniform(-0.05, 0.05)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[4].default_value = new_value \n            if not bsdfnode.inputs[5].links:    # specular\n                src_value = bsdfnode.inputs[5].default_value\n                #if material_name.split(\"_\")[0] == \"metal\":\n                new_value = src_value + random.uniform(0, 0.3)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[5].default_value = new_value\n            if not bsdfnode.inputs[6].links:    # specularTint\n                src_value = bsdfnode.inputs[6].default_value\n                new_value = src_value + random.uniform(-1, 1)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[6].default_value = new_value\n            if not bsdfnode.inputs[7].links:    # roughness\n                src_value = bsdfnode.inputs[7].default_value\n                if material_name.split(\"_\")[0] == \"metal\" or material_name.split(\"_\")[0] == \"porcelain\" or material_name.split(\"_\")[0] == \"plasticsp\" or material_name.split(\"_\")[0] == \"paintsp\":\n                    new_value = src_value + random.uniform(-0.2, 0.01)\n                else:\n     
               new_value = src_value + random.uniform(-0.03, 0.1)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[7].default_value = new_value\n            if not bsdfnode.inputs[8].links:    # anisotropic\n                src_value = bsdfnode.inputs[8].default_value\n                new_value = src_value + random.uniform(-0.1, 0.1)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[8].default_value = new_value\n            if not bsdfnode.inputs[9].links:    # anisotropicRotation\n                src_value = bsdfnode.inputs[9].default_value\n                new_value = src_value + random.uniform(-0.3, 0.3)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[9].default_value = new_value\n            if not bsdfnode.inputs[10].links:    # sheen\n                src_value = bsdfnode.inputs[10].default_value\n                new_value = src_value + random.uniform(-0.1, 0.1)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[10].default_value = new_value\n            if not bsdfnode.inputs[11].links:    # sheenTint\n                src_value = bsdfnode.inputs[11].default_value\n                new_value = src_value + random.uniform(-0.2, 0.2)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[11].default_value = new_value\n            if not bsdfnode.inputs[12].links:    # clearcoat\n                src_value = bsdfnode.inputs[12].default_value\n                new_value = src_value + random.uniform(-0.2, 0.2)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                
bsdfnode.inputs[12].default_value = new_value\n            if not bsdfnode.inputs[13].links:    # clearcoatGloss\n                src_value = bsdfnode.inputs[13].default_value\n                new_value = src_value + random.uniform(-0.2, 0.2)\n                if new_value > 1.0: new_value = 1.0\n                elif new_value < 0: new_value = 0.0\n                bsdfnode.inputs[13].default_value = new_value\n\n    ## metal\n    if material_name == \"metal_0\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.3, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 1.0)         # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.3, 1.0)         # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled 
BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Image Texture.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_1\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.9, 1.00)        # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.5, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[7].default_value = random.uniform(0.08, 0.25)         # roughness\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.04, 0.5)         # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.3, 0.7)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.8, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n        \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                
bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop\n\n            mat_links.new(mat_nodes[\"Tangent\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[22])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            # Jitter the base color and clamp each channel to [0, 1].\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            new_bs_color = [min(max(bs_color[0] + r_rand, 0.0), 1.0), min(max(bs_color[1] + g_rand, 0.0), 1.0), min(max(bs_color[2] + b_rand, 0.0), 1.0), 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = new_bs_color\n    elif material_name == 
\"metal_10\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.5)         # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.3, 0.7)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n        \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            bsdf_new.location = Vector((-800, 0))\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            mix_new.location = Vector((-800, 0))\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7\n\n            mat_links.new(mat_nodes[\"Image Texture\"].outputs[1], mat_nodes[\"Principled BSDF-new\"].inputs[19])\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[4])\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], 
mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_11\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.8)         # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 0.8)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n        \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled 
BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[4])    \n            mat_links.new(mat_nodes[\"Image Texture.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])  \n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])                  \n    elif material_name == \"metal_12\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.8)         # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 0.8)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n        \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n     
           bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])    \n            mat_links.new(mat_nodes[\"Reroute.006\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])  \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, 
new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"metal_13\":\n        \n        # mat_nodes[\"Principled BSDF.001\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF.001\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF.001\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF.001\"].inputs[8].default_value = random.uniform(0.3, 0.7)         # anisotropic\n        # mat_nodes[\"Principled BSDF.001\"].inputs[9].default_value = random.uniform(0.0, 0.8)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF.001\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF.001\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.001\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20]) \n            
mat_links.new(mat_nodes[\"Mix.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.001\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output.001\"].inputs[\"Surface\"])\n        else:\n            # Jitter the base color and clamp each channel to [0, 1].\n            bs_color = mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value\n            new_bs_color = [min(max(bs_color[0] + r_rand, 0.0), 1.0), min(max(bs_color[1] + g_rand, 0.0), 1.0), min(max(bs_color[2] + b_rand, 0.0), 1.0), 1]\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = new_bs_color\n    elif material_name == \"metal_14\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.5)         # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 0.5)         # anisotropicRotation\n        # 
mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n        \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.85 \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7 \n            \n            mat_links.new(mat_nodes[\"Group\"].outputs[1], mat_nodes[\"Principled BSDF-new\"].inputs[7])  \n            mat_links.new(mat_nodes[\"Group\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])   \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                
new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"metal_2\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.5, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.95)        # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)        # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n       
         mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"Image Texture.003\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7]) \n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])  \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])       \n    elif material_name == \"metal_3\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.5, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.2)        # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)        # clearcoatGloss\n        mat_nodes[\"Gamma\"].inputs[1].default_value = random.uniform(3.0, 4.0)\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n      
      for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"Gamma\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7]) \n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])  \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])       \n    elif material_name == \"metal_4\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.1, 0.5)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.2)        # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = 
random.uniform(0.0, 0.5)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.5)        # clearcoatGloss\n \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7]) \n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])  \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])  \n    elif material_name == \"metal_5\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.2, 0.4)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # 
specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.6, 0.9)        # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.8, 1.0)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7  \n\n            mat_links.new(mat_nodes[\"Voronoi Texture\"].outputs[1], mat_nodes[\"Principled BSDF-new\"].inputs[7]) \n            mat_links.new(mat_nodes[\"Tangent\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[22])   \n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[21])  \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], 
mat_nodes[\"Material Output\"].inputs[\"Surface\"])  \n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            # perturb the base color and clamp each channel to [0, 1]\n            new_bs_color_r = min(max(bs_color[0] + r_rand, 0), 1)\n            new_bs_color_g = min(max(bs_color[1] + g_rand, 0), 1)\n            new_bs_color_b = min(max(bs_color[2] + b_rand, 0), 1)\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"metal_6\":\n        # this material was authored in a French-localized Blender UI, so its\n        # node names are French: \"BSDF guidé\" = \"Principled BSDF\", \"Valeur\" = \"Value\",\n        # \"Mélanger\" = \"Mix\", \"Sortie de matériau\" = \"Material Output\"\n        # mat_nodes[\"BSDF guidé\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"BSDF guidé\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"BSDF guidé\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"BSDF guidé\"].inputs[8].default_value = random.uniform(0.0, 0.2)        # anisotropic\n        # mat_nodes[\"BSDF guidé\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"BSDF guidé\"].inputs[12].default_value = random.uniform(0.0, 0.3)        # clearcoat\n        # mat_nodes[\"BSDF guidé\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n        mat_nodes[\"Valeur\"].outputs[0].default_value = random.uniform(0.1, 0.3)\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled 
BSDF-new'\n            bsdf_new.location = Vector((-800, 0))\n            for key, input in enumerate(mat_nodes[\"BSDF guidé\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            mix_new.location = Vector((-800, 0))\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7 \n\n            mat_links.new(mat_nodes[\"Mélanger.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"BSDF guidé\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Sortie de matériau\"].inputs[0])\n    elif material_name == \"metal_7\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.7, 0.9)        # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 0.3)        # 
clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n        \n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            #bsdf_new.location = Vector((-800, 0))\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            #mix_new.location = Vector((-800, 0))\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9 \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.7\n\n            mat_links.new(mat_nodes[\"Reroute.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Tangent\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[22])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + 
g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"metal_8\":\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n            bsdf_1_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_1_new.name = 'Principled BSDF-1-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.001\"].inputs):\n                bsdf_1_new.inputs[key].default_value = input.default_value\n            bsdf_2_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_2_new.name = 'Principled BSDF-2-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.002\"].inputs):\n                bsdf_2_new.inputs[key].default_value = input.default_value\n            bsdf_3_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_3_new.name = 'Principled BSDF-3-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.003\"].inputs):\n                bsdf_3_new.inputs[key].default_value = input.default_value\n            \n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            
mix_new.name = 'Mix Shader-new'\n            mix_1_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_1_new.name = 'Mix Shader-1-new'\n            mix_2_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_2_new.name = 'Mix Shader-2-new'\n            mix_3_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_3_new.name = 'Mix Shader-3-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-1-new\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-2-new\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-3-new\"].inputs[0])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.6\n                mat_nodes[\"Mix Shader-1-new\"].inputs[0].default_value = 0.6\n                mat_nodes[\"Mix Shader-2-new\"].inputs[0].default_value = 0.6\n                mat_nodes[\"Mix Shader-3-new\"].inputs[0].default_value = 0.6\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Principled BSDF-1-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Principled BSDF-2-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Principled BSDF-3-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.5\n                mat_nodes[\"Mix Shader-1-new\"].inputs[0].default_value = 0.5\n                mat_nodes[\"Mix Shader-2-new\"].inputs[0].default_value = 0.5\n                mat_nodes[\"Mix Shader-3-new\"].inputs[0].default_value = 0.5\n\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            
mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-1-new\"].inputs[20]) \n            mat_links.new(mat_nodes[\"Bump.001\"].outputs[0], mat_nodes[\"Principled BSDF-2-new\"].inputs[20])   \n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.001\"].outputs[0], mat_nodes[\"Mix Shader-1-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-1-new\"].outputs[0], mat_nodes[\"Mix Shader-1-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-1-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[2])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.002\"].outputs[0], mat_nodes[\"Mix Shader-2-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-2-new\"].outputs[0], mat_nodes[\"Mix Shader-2-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-2-new\"].outputs[0], mat_nodes[\"Mix Shader.001\"].inputs[1])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.003\"].outputs[0], mat_nodes[\"Mix Shader-3-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-3-new\"].outputs[0], mat_nodes[\"Mix Shader-3-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-3-new\"].outputs[0], mat_nodes[\"Mix Shader.001\"].inputs[2])        \n    elif material_name == \"metal_9\":\n        \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        
mat_nodes[\"Principled BSDF\"].inputs[7].default_value = random.uniform(0.01, 0.3)         # roughness\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 0.3)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n        mat_nodes[\"Anisotropic BSDF\"].inputs[1].default_value = random.uniform(0.11, 0.25)\n        mat_nodes[\"Anisotropic BSDF\"].inputs[2].default_value = random.uniform(0.4, 0.6)\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Anisotropic BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n                mat_nodes[\"Anisotropic BSDF\"].inputs[0].default_value = list(orign_base_color)\n\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n\n    ## porcelain\n    elif material_name == \"porcelain_0\":\n        if transfer_flag == 
True:\n            # if is_texture:\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n            # else:\n            #     mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9   \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop#0.8   \n\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])    \n    elif material_name == \"porcelain_1\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n            else:\n                mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)  \n        else: \n     
       bs_color = mat_nodes[\"Mix\"].inputs[1].default_value\n\n            # perturb the base color; snap negative channels to 0.2, cap at 1\n            new_bs_color_r = bs_color[0] + random.uniform(-0.3, 0.3)\n            new_bs_color_g = bs_color[1] + random.uniform(-0.3, 0.3)\n            new_bs_color_b = bs_color[2] + random.uniform(-0.3, 0.3)\n            new_bs_color_r = 0.2 if new_bs_color_r < 0 else min(new_bs_color_r, 1)\n            new_bs_color_g = 0.2 if new_bs_color_g < 0 else min(new_bs_color_g, 1)\n            new_bs_color_b = 0.2 if new_bs_color_b < 0 else min(new_bs_color_b, 1)\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(new_bs_color)\n    elif material_name == \"porcelain_2\":\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop#0.9   \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.8   \n\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], 
mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])    \n    elif material_name == \"porcelain_3\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix.001\"].inputs[1])\n            else:\n                mat_nodes[\"Mix.001\"].inputs[1].default_value = list(orign_base_color)   \n        else: \n            bs_color = mat_nodes[\"Mix.001\"].inputs[1].default_value\n\n            # perturb the base color; snap negative channels to 0.2, cap at 1\n            new_bs_color_r = bs_color[0] + random.uniform(-0.3, 0.3)\n            new_bs_color_g = bs_color[1] + random.uniform(-0.3, 0.3)\n            new_bs_color_b = bs_color[2] + random.uniform(-0.3, 0.3)\n            new_bs_color_r = 0.2 if new_bs_color_r < 0 else min(new_bs_color_r, 1)\n            new_bs_color_g = 0.2 if new_bs_color_g < 0 else min(new_bs_color_g, 1)\n            new_bs_color_b = 0.2 if new_bs_color_b < 0 else min(new_bs_color_b, 1)\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Mix.001\"].inputs[1].default_value = list(new_bs_color)\n    elif material_name == \"porcelain_4\":\n        if transfer_flag == True:\n            # if is_texture:\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n            # else:\n            #     mat_nodes[\"Diffuse 
BSDF\"].inputs[0].default_value = list(orign_base_color)\n            #     mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Glossy BSDF\"].inputs[1].default_value = random.uniform(0.05, 0.15)\n\n            diff_new = mat_nodes.new(type='ShaderNodeBsdfDiffuse')\n            diff_new.name = 'Diffuse BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Diffuse BSDF\"].inputs):\n                diff_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF-new\"].inputs[0])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0   \n            else:\n                mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9   \n\n            mat_links.new(mat_nodes[\"Diffuse BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Diffuse BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n    elif material_name == \"porcelain_5\":\n        if transfer_flag == True:\n            # if is_texture:\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n            # else:\n            #     mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n            #     mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(orign_base_color)\n            diff_new = mat_nodes.new(type='ShaderNodeBsdfDiffuse')\n            diff_new.name = 'Diffuse BSDF-new'\n            for key, 
input in enumerate(mat_nodes[\"Diffuse BSDF\"].inputs):\n                diff_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF-new\"].inputs[0])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0   \n            else:\n                mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9   \n\n            mat_links.new(mat_nodes[\"Diffuse BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Diffuse BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n    elif material_name == \"porcelain_6\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)  \n        else: \n            bs_color = mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value\n        \n            new_bs_color_r = bs_color[0] + random.uniform(-0.3, 0.3)\n            new_bs_color_g = bs_color[1] + random.uniform(-0.3, 0.3)\n            new_bs_color_b = bs_color[2] + random.uniform(-0.3, 0.3)\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0.2\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0.2\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0.2\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if 
new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(new_bs_color)\n    \n    ## plastic\n    elif material_name == \"plastic_1\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = list(orign_base_color)            \n    elif material_name == \"plastic_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = list(orign_base_color)    \n    elif material_name == \"plastic_3\":\n        # \"值(明度)\" is the Chinese-localized name of this material's Value node\n        mat_nodes[\"值(明度)\"].outputs[0].default_value = random.uniform(0.05, 0.25)\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)  \n    elif material_name == \"plastic_5\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)    \n    elif material_name == \"plastic_6\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.012\"].inputs[0])    \n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.021\"].inputs[0])    \n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.022\"].inputs[0])    \n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.033\"].inputs[0])  \n        else:\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n            
mat_nodes[\"RGB.001\"].outputs[0].default_value = list(orign_base_color)\n            # mat_nodes[\"RGB.002\"].outputs[0].default_value = list(orign_base_color)\n            # mat_nodes[\"RGB.003\"].outputs[0].default_value = list(orign_base_color)\n\n    ## rubber\n    elif material_name == \"rubber_0\":\n        diff_new = mat_nodes.new(type='ShaderNodeBsdfDiffuse')\n        diff_new.name = 'Diffuse BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Diffuse BSDF\"].inputs):\n            diff_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF-new\"].inputs[0])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Diffuse BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Diffuse BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n    elif material_name == \"rubber_1\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 
1.0  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n\n        mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"RGB Curves\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"rubber_2\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n\n        mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"RGB Curves\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix 
Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"rubber_3\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"rubber_4\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = 
list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    \n    ######################## 2022.02.04\n    ## plastic_specular\n    elif material_name == \"plasticsp_0\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.001\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute\"].inputs[0])\n            else:\n                mat_nodes[\"RGB.001\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB.001\"].outputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"RGB.001\"].outputs[0].default_value = list(new_bs_color)\n    elif material_name == \"plasticsp_1\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n            
else:\n                mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    \n    ## paint_specular\n    elif material_name == \"paintsp_0\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB\"].outputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 
1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_1\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[2])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix.001\"].inputs[1])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Hue Saturation Value\"].inputs[4])\n            else:\n                mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB\"].outputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_2\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Group\"].inputs[0])\n            else:\n                mat_nodes[\"Group\"].inputs[0].default_value = list(orign_base_color)\n      
  else:\n            bs_color = mat_nodes[\"Group\"].inputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Group\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_3\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            #bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            new_bs_color = [random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1), 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_4\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF.001\"].inputs[0])\n            else:\n                mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Glossy 
BSDF\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Glossy BSDF.001\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n            mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(new_bs_color)\n            mat_nodes[\"Glossy BSDF.001\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_5\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Invert\"].inputs[1])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.002\"].inputs[0])\n            else:\n                mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB\"].outputs[0].default_value\n            \n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n          
  if new_bs_color_b < 0:\n                new_bs_color_b = 0\n                \n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(new_bs_color)\n\n    ## rubber\n    elif material_name == \"rubber_5\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n\n        mat_links.new(mat_nodes[\"Mix.005\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7]) \n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n\n    ## plastic\n    elif material_name == \"plastic_0\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = 
list(orign_base_color)\n    elif material_name == \"plastic_4\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color) \n    elif material_name == \"plastic_7\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])    \n        else:\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_8\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Group\"].inputs[0])    \n        else:\n            mat_nodes[\"Group\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_9\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)            \n    elif material_name == \"plastic_10\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)            \n    elif material_name == \"plastic_11\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_12\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[2])    \n        else:\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_13\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled 
BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_14\":\n        mat_nodes[\"Math.005\"].inputs[1].default_value = random.uniform(0.05, 0.3)\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    \n    ## paper\n    elif material_name == \"paper_0\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9  \n\n        mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"Mix.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"paper_1\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 
'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.8, 0.95)  \n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.8, 0.9)  \n\n        mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"paper_2\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.9, 0.95)  \n        else:\n            mat_nodes[\"Principled 
BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.9, 0.95)  \n\n        mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"Bright/Contrast\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    \n    ## leather\n    elif material_name == \"leather_0\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color) \n    elif material_name == \"leather_1\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)\n    elif material_name == \"leather_3\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_4\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled 
BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_5\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"lether\"].inputs[0])\n        else:\n            mat_nodes[\"lether\"].inputs[0].default_value = list(orign_base_color)\n\n    ## wood (no randomization at all: keep the original material)\n    elif material_name in (\"wood_0\", \"wood_1\", \"wood_2\", \"wood_3\", \"wood_4\",\n                           \"wood_5\", \"wood_6\", \"wood_7\", \"wood_8\", \"wood_9\"):\n        pass\n\n    ## fabric\n    elif material_name in (\"fabric_0\", \"fabric_1\"):\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"fabric_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)\n\n    ## clay\n    elif material_name == \"clay_0\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"clay_1\":\n        if is_texture:\n          
  mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color) \n    elif material_name == \"clay_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color) \n    elif material_name == \"clay_3\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color) \n    elif material_name == \"clay_4\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color) \n    elif material_name == \"clay_5\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color) \n\n    ## glass\n    elif material_name == \"glass_0\":\n        mat_nodes[\"Mix Shader\"].inputs[0].default_value = random.uniform(0.1, 0.3)\n        mat_nodes[\"Glossy BSDF\"].inputs[1].default_value = random.uniform(0.1, 0.3)\n    elif material_name == \"glass_4\":\n        mat_nodes[\"Layer Weight\"].inputs[0].default_value = random.uniform(0.3, 0.7)\n        mat_nodes[\"Glossy BSDF\"].inputs[1].default_value = random.uniform(0.05, 0.2)\n    elif material_name == \"glass_5\":\n        mat_nodes[\"Layer Weight\"].inputs[0].default_value = random.uniform(0.2, 0.4)\n        mat_nodes[\"Glossy BSDF\"].inputs[1].default_value = random.uniform(0.0, 0.1)\n    elif material_name == \"glass_14\":\n        mat_nodes[\"Glass BSDF.005\"].inputs[1].default_value = random.uniform(0.0, 0.1)\n        
mat_nodes[\"Glass BSDF.006\"].inputs[1].default_value = random.uniform(0.0, 0.1)\n        mat_nodes[\"Glass BSDF.007\"].inputs[1].default_value = random.uniform(0.0, 0.1)\n        mat_nodes[\"Glass BSDF.008\"].inputs[1].default_value = random.uniform(0.0, 0.1)\n        mat_nodes[\"Layer Weight\"].inputs[0].default_value = random.uniform(0.81, 0.87)\n        mat_nodes[\"Layer Weight.001\"].inputs[0].default_value = random.uniform(0.65, 0.71)\n        mat_nodes[\"Layer Weight.002\"].inputs[0].default_value = random.uniform(0.81, 0.87)\n        color_value = random.uniform(0.599459, 0.70)\n        mat_nodes[\"Transparent BSDF\"].inputs[0].default_value = list([color_value, color_value, color_value, 1])\n\n    # elif material_name == \"glass_15\":\n    #     mat_nodes[\"Glass BSDF\"].inputs[1].default_value = random.uniform(0.0, 0.1)\n    #     mat_nodes[\"Glass BSDF\"].inputs[2].default_value = random.uniform(1.325, 1.335)\n    #     color_value = random.uniform(0.297, 0.35)\n    #     mat_nodes[\"Transparent BSDF\"].inputs[0].default_value = list([color_value, color_value, color_value, 1])\n\n    ########################\n    else:\n        print(material_name + \" no change\")\n\n\n\ndef set_modify_material(obj, material, obj_texture_img_list, mat_randomize_mode, is_transfer=True):\n    for mat_slot in obj.material_slots:\n        if mat_slot.material:\n            if mat_slot.material.node_tree:\n                srcmat = material\n                mat = srcmat.copy()\n                mat.name = mat_slot.material.name   # rename\n                mat_links = mat.node_tree.links\n                mat_nodes = mat.node_tree.nodes\n                bsdf_node = mat_slot.material.node_tree.nodes.get(\"Principled BSDF\", None)\n                if bsdf_node is not None:\n                    tex_node = mat_slot.material.node_tree.nodes.new(type='ShaderNodeTexImage')\n                    tex_node.name = 'objtexture_tex'\n                    tex_node.extension = 'EXTEND'\n         
           flag = random.randint(0, len(obj_texture_img_list)-1)\n                    tex_node.image = obj_texture_img_list[flag]\n\n                    tex_node_orign = mat_slot.material.node_tree.nodes.get('objtexture_tex', None)\n                    if tex_node_orign is not None:\n                        #mat = mat_slot.material.copy() \n                        # Get the bl_idname to create a new node of the same type\n                        tex_node = mat_nodes.new(tex_node_orign.bl_idname)\n                        texture_img = bpy.data.images[tex_node_orign.image.name]\n                        # Assign the default values from the old node to the new node\n                        tex_node.image = texture_img\n                        tex_node.projection = 'SPHERE'\n                        #tex_node.location = Vector((-800, 0))\n                        mapping_node = mat_nodes.new(type='ShaderNodeMapping')\n                        mapping_node.name = 'objtexture_mapping'\n                        texcoord_node = mat_nodes.new(type='ShaderNodeTexCoord')\n                        texcoord_node.name = 'objtexture_texcoord'\n                        mat_links.new(mapping_node.outputs[0], tex_node.inputs[0])\n                        mat_links.new(texcoord_node.outputs[0], mapping_node.inputs[0])\n\n                        modify_material(mat_links, mat_nodes, srcmat.name, mat_randomize_mode, is_texture=True, tex_node=tex_node, is_transfer=is_transfer)\n\n                    else:\n\n                        orign_base_color = mat_slot.material.node_tree.nodes[\"Principled BSDF\"].inputs[0].default_value\n                        if orign_base_color[0] == 0.0 and orign_base_color[1] == 0.0 and orign_base_color[2] == 0.0:\n                            orign_base_color = [0.05, 0.05, 0.05, 1]\n\n                        modify_material(mat_links, mat_nodes, srcmat.name, mat_randomize_mode, is_texture=False, orign_base_color=orign_base_color, is_transfer=is_transfer)\n\n\n     
           bpy.data.materials.remove(mat_slot.material)\n                mat_slot.material = mat\n\n\ndef set_modify_raw_material(obj):\n    for mat_slot in obj.material_slots:\n        if mat_slot.material:\n            if mat_slot.material.node_tree:\n                bsdf_node = mat_slot.material.node_tree.nodes.get(\"Principled BSDF\", None)\n                if bsdf_node is not None:\n                    tex_node_orign = mat_slot.material.node_tree.nodes.get(\"Image Texture\", None)\n                    if tex_node_orign is None:\n                            orign_base_color = mat_slot.material.node_tree.nodes[\"Principled BSDF\"].inputs[0].default_value\n                            if orign_base_color[0] == 0.0 and orign_base_color[1] == 0.0 and orign_base_color[2] == 0.0:\n                                mat = mat_slot.material.copy()\n                                mat.name = mat_slot.material.name   # rename\n                                mat_nodes = mat.node_tree.nodes   \n                                mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list([0.05, 0.05, 0.05, 1])\n\n                                bpy.data.materials.remove(mat_slot.material)\n                                mat_slot.material = mat\n\n\ndef set_modify_table_material(obj, material, selected_realtable_img):\n\n    srcmat = material\n    #print(srcmat.name)\n    mat = srcmat.copy()\n    mat_links = mat.node_tree.links\n    mat_nodes = mat.node_tree.nodes\n\n    tex_node = mat_nodes.new(type='ShaderNodeTexImage')\n    tex_node.name = 'realtable_tex'\n    tex_node.extension = 'EXTEND'\n    tex_node.image = selected_realtable_img\n    mapping_node = mat_nodes.new(type='ShaderNodeMapping')\n    mapping_node.name = 'realtable_mapping'\n    texcoord_node = mat_nodes.new(type='ShaderNodeTexCoord')\n    texcoord_node.name = 'realtable_texcoord'\n\n    mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n    mat_links.new(mapping_node.outputs[0], 
tex_node.inputs[0])\n    mat_links.new(texcoord_node.outputs[2], mapping_node.inputs[0])\n\n    obj.active_material = mat\n\n\ndef set_modify_floor_material(obj, material, selected_realfloor_img):\n\n    srcmat = material\n    mat = srcmat.copy()\n    mat_links = mat.node_tree.links\n    mat_nodes = mat.node_tree.nodes\n\n    bsdfnode_list = [n for n in mat_nodes if isinstance(n, bpy.types.ShaderNodeBsdfPrincipled)]\n    if bsdfnode_list == []:\n        obj.active_material = material\n    else:\n        for bsdfnode in bsdfnode_list:\n            tex_node = mat_nodes.new(type='ShaderNodeTexImage')\n            tex_node.name = 'realfloor_tex'\n            tex_node.extension = 'REPEAT'\n            tex_node.image = selected_realfloor_img\n            mapping_node = mat_nodes.new(type='ShaderNodeMapping')\n            mapping_node.name = 'realfloor_mapping'\n            texcoord_node = mat_nodes.new(type='ShaderNodeTexCoord')\n            texcoord_node.name = 'realfloor_texcoord'\n\n\n            mat_links.new(tex_node.outputs[0], bsdfnode.inputs[0])\n            mat_links.new(mapping_node.outputs[0], tex_node.inputs[0])\n            mat_links.new(texcoord_node.outputs[2], mapping_node.inputs[0])\n\n\n            obj.active_material = mat\n\n\ndef set_modify_arm_material(obj, material):\n    for mat_slot in obj.material_slots:\n\n        if mat_slot.material:\n            if mat_slot.material.node_tree:\n                \n                srcmat = material\n\n                mat = srcmat.copy()\n                mat.name = mat_slot.material.name   # rename\n                mat_links = mat.node_tree.links\n                mat_nodes = mat.node_tree.nodes\n\n                rgb = random.uniform(0.50, 1.00)\n                orign_base_color = [rgb, rgb, rgb, 1]\n\n                modify_material(mat_links, mat_nodes, srcmat.name, mat_randomize_mode=\"diffuse\", is_texture=False, orign_base_color=orign_base_color, is_transfer=False, is_arm=True)\n\n                
bpy.data.materials.remove(mat_slot.material)\n                mat_slot.material = mat\n\n"
  },
  {
    "path": "data_generator/render_pile_STD_rand.py",
    "content": "import os\nimport random\nimport bpy\nimport math\nimport numpy as np\nfrom mathutils import Vector, Matrix\nimport copy\nimport sys\nimport time\nimport argparse\nfrom scipy.spatial.transform import Rotation\nfrom bpy_extras.object_utils import world_to_camera_view\n\nsys.path.append(os.getcwd())\nfrom modify_material import set_modify_material, set_modify_raw_material, set_modify_table_material, set_modify_floor_material, set_modify_arm_material\n\n\nargv = sys.argv\nargv = argv[argv.index(\"--\") + 1:]  # get all args after \"--\"\n\nRENDERING_PATH = os.getcwd()\nLIGHT_EMITTER_ENERGY = 5\nLIGHT_ENV_MAP_ENERGY_IR = 0.035\nLIGHT_ENV_MAP_ENERGY_RGB = 1.0\nCYCLES_SAMPLE = 32\nDEVICE_LIST = [6]\n\nSCENE_NUM = int(argv[0]) if len(argv) > 0 else 0\nCAMERA_TYPE = \"realsense\"\nNUM_FRAME_PER_SCENE = 24\n\nRENDER_START_FRAME = 0\nRENDER_END_FRAME = 24\nRENDER_FRAMES_LIST = list(range(RENDER_START_FRAME, RENDER_END_FRAME))\n\n\nlook_at_shift = np.array([0,0,0])\nnum_point_ver = 6\nnum_point_hor = 4\nbeta_range = (random.uniform(12, 16)*math.pi/180, random.uniform(41, 45)*math.pi/180)\nr = random.uniform(0.40, 0.50)\n\n\ncode_root = \"/data/InterNeRF/renderer/renderer_giga_GPU6-0_rand_M\"\nemitter_pattern_path = os.path.join(code_root, \"pattern\", \"test_pattern.png\")\nenv_map_path =  os.path.join(code_root, \"envmap_lib\")\ndefault_background_texture_path = os.path.join(code_root, \"texture\", \"texture_0.jpg\")\ntable_CAD_model_path = os.path.join(code_root, \"table_obj\", \"table.obj\")\ntable_plane_CAD_model_path = os.path.join(code_root, \"table_obj\", \"table_plane.obj\")\narm_CAD_model_path = os.path.join(code_root, \"table_obj\", \"arm.obj\")\nTABLE_CAD_MODEL_HEIGHT = 0.75\nreal_table_image_root_path = os.path.join(code_root, \"realtable\")\nreal_floor_image_root_path = os.path.join(code_root, \"realfloor\")\nobj_texture_image_root_path = \"/data/InterNeRF/data/imagenet\"\nobj_texture_image_idxfile = \"train_paths.txt\"\n\noutput_root_path = 
\"/data/InterNeRF/data/giga_hemisphere_train_0827/pile_full/6_M_rand/pile_%d-%d\"%(830*(SCENE_NUM//830), 830*(SCENE_NUM//830)+830)\n\nraw_urdfs_and_poses_dir_path = \"/data/InterNeRF/data/GIGA/data_pile_train_raw/mesh_pose_list\"\n\n\n# render mode\nrender_mode_list = {'RGB': 1,\n                    'IR': 0,\n                    'NOCS': 0,\n                    'Mask': 1, \n                    'Normal': 1}\n\n# material randomization mode (diffuse, transparent, specular, specular_tex, specular_texmix, mixed)\nmy_material_randomize_mode = 'mixed'\n\n# set depth sensor parameter\ncamera_width = 640\ncamera_height = 360\ncamera_fov = 69.75 / 180 * math.pi\nbaseline_distance = 0.055\n\n# set background parameter\nbackground_size = 3.\nbackground_position = (0., 0., 0.)\nbackground_scale = (1., 1., 1.)\n\n\n# set camera randomize paramater\n# start_point_range: (range_r, range_vector),   range_r: (r_min, r_max),    range_vector: (x_min, x_max, y_min, y_max)\n# look_at_range: (x_min, x_max, y_min, y_max, z_min, z_max)\n# up_range: (x_min, x_max, y_min, y_max)\nstart_point_range = ((0.5, 0.95), (-0.6, 0.6, -0.6, 0.6))\nup_range = (-0.18, -0.18, -0.18, 0.18)\nlook_at_range = (background_position[0] - 0.05, background_position[0] + 0.05, \n                 background_position[1] - 0.05, background_position[1] + 0.05,\n                 background_position[2] - 0.05, background_position[2] + 0.05)\n\n\ng_syn_light_num_lowbound = 4\ng_syn_light_num_highbound = 6\ng_syn_light_dist_lowbound = 8\ng_syn_light_dist_highbound = 12\ng_syn_light_azimuth_degree_lowbound = 0\ng_syn_light_azimuth_degree_highbound = 360\ng_syn_light_elevation_degree_lowbound = 0\ng_syn_light_elevation_degree_highbound = 90\ng_syn_light_energy_mean = 3\ng_syn_light_energy_std = 0.5\ng_syn_light_environment_energy_lowbound = 0\ng_syn_light_environment_energy_highbound = 1\n\ndef obj_centered_camera_pos(dist, azimuth_deg, elevation_deg):\n    phi = float(elevation_deg) / 180 * math.pi\n    theta = 
float(azimuth_deg) / 180 * math.pi\n    x = (dist * math.cos(theta) * math.cos(phi))\n    y = (dist * math.sin(theta) * math.cos(phi))\n    z = (dist * math.sin(phi))\n    return (x, y, z)\n\ndef quaternionFromYawPitchRoll(yaw, pitch, roll):\n    c1 = math.cos(yaw / 2.0)\n    c2 = math.cos(pitch / 2.0)\n    c3 = math.cos(roll / 2.0)    \n    s1 = math.sin(yaw / 2.0)\n    s2 = math.sin(pitch / 2.0)\n    s3 = math.sin(roll / 2.0)    \n    q1 = c1 * c2 * c3 + s1 * s2 * s3\n    q2 = c1 * c2 * s3 - s1 * s2 * c3\n    q3 = c1 * s2 * c3 + s1 * c2 * s3\n    q4 = s1 * c2 * c3 - c1 * s2 * s3\n    return (q1, q2, q3, q4)\n\ndef camPosToQuaternion(cx, cy, cz):\n    q1a = 0\n    q1b = 0\n    q1c = math.sqrt(2) / 2\n    q1d = math.sqrt(2) / 2\n    camDist = math.sqrt(cx * cx + cy * cy + cz * cz)\n    cx = cx / camDist\n    cy = cy / camDist\n    cz = cz / camDist    \n    t = math.sqrt(cx * cx + cy * cy) \n    tx = cx / t\n    ty = cy / t\n    yaw = math.acos(ty)\n    if tx > 0:\n        yaw = 2 * math.pi - yaw\n    pitch = 0\n    tmp = min(max(tx*cx + ty*cy, -1),1)\n    #roll = math.acos(tx * cx + ty * cy)\n    roll = math.acos(tmp)\n    if cz < 0:\n        roll = -roll    \n    print(\"%f %f %f\" % (yaw, pitch, roll))\n    q2a, q2b, q2c, q2d = quaternionFromYawPitchRoll(yaw, pitch, roll)    \n    q1 = q1a * q2a - q1b * q2b - q1c * q2c - q1d * q2d\n    q2 = q1b * q2a + q1a * q2b + q1d * q2c - q1c * q2d\n    q3 = q1c * q2a - q1d * q2b + q1a * q2c + q1b * q2d\n    q4 = q1d * q2a + q1c * q2b - q1b * q2c + q1a * q2d\n    return (q1, q2, q3, q4)\n\ndef camRotQuaternion(cx, cy, cz, theta): \n    theta = theta / 180.0 * math.pi\n    camDist = math.sqrt(cx * cx + cy * cy + cz * cz)\n    cx = -cx / camDist\n    cy = -cy / camDist\n    cz = -cz / camDist\n    q1 = math.cos(theta * 0.5)\n    q2 = -cx * math.sin(theta * 0.5)\n    q3 = -cy * math.sin(theta * 0.5)\n    q4 = -cz * math.sin(theta * 0.5)\n    return (q1, q2, q3, q4)\n\ndef quaternionProduct(qx, qy): \n    a = qx[0]\n    b = 
qx[1]\n    c = qx[2]\n    d = qx[3]\n    e = qy[0]\n    f = qy[1]\n    g = qy[2]\n    h = qy[3]\n    q1 = a * e - b * f - c * g - d * h\n    q2 = a * f + b * e + c * h - d * g\n    q3 = a * g - b * h + c * e + d * f\n    q4 = a * h + b * g - c * f + d * e    \n    return (q1, q2, q3, q4)\n\ndef quaternionToRotation(q):\n    w, x, y, z = q\n    r00 = 1 - 2 * y ** 2 - 2 * z ** 2\n    r01 = 2 * x * y + 2 * w * z\n    r02 = 2 * x * z - 2 * w * y\n\n    r10 = 2 * x * y - 2 * w * z\n    r11 = 1 - 2 * x ** 2 - 2 * z ** 2\n    r12 = 2 * y * z + 2 * w * x\n\n    r20 = 2 * x * z + 2 * w * y\n    r21 = 2 * y * z - 2 * w * x\n    r22 = 1 - 2 * x ** 2 - 2 * y ** 2\n    r = [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]]\n    return r\n\ndef quaternionToRotation_xyzw(q):\n    x, y, z, w = q\n    r00 = 1 - 2 * y ** 2 - 2 * z ** 2\n    r01 = 2 * x * y + 2 * w * z\n    r02 = 2 * x * z - 2 * w * y\n\n    r10 = 2 * x * y - 2 * w * z\n    r11 = 1 - 2 * x ** 2 - 2 * z ** 2\n    r12 = 2 * y * z + 2 * w * x\n\n    r20 = 2 * x * z + 2 * w * y\n    r21 = 2 * y * z - 2 * w * x\n    r22 = 1 - 2 * x ** 2 - 2 * y ** 2\n    r = [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]]\n    return r\n\ndef quaternionFromRotMat(rotation_matrix):\n    rotation_matrix = np.reshape(rotation_matrix, (1, 9))[0]\n    w = math.sqrt(rotation_matrix[0]+rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    x = math.sqrt(rotation_matrix[0]-rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    y = math.sqrt(-rotation_matrix[0]+rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    z = math.sqrt(-rotation_matrix[0]-rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    a = [w,x,y,z]\n    m = a.index(max(a))\n    if m == 0:\n        x = (rotation_matrix[7]-rotation_matrix[5])/(4*w)\n        y = (rotation_matrix[2]-rotation_matrix[6])/(4*w)\n        z = (rotation_matrix[3]-rotation_matrix[1])/(4*w)\n    if m == 1:\n        w = (rotation_matrix[7]-rotation_matrix[5])/(4*x)\n        y = 
(rotation_matrix[1]+rotation_matrix[3])/(4*x)\n        z = (rotation_matrix[6]+rotation_matrix[2])/(4*x)\n    if m == 2:\n        w = (rotation_matrix[2]-rotation_matrix[6])/(4*y)\n        x = (rotation_matrix[1]+rotation_matrix[3])/(4*y)\n        z = (rotation_matrix[5]+rotation_matrix[7])/(4*y)\n    if m == 3:\n        w = (rotation_matrix[3]-rotation_matrix[1])/(4*z)\n        x = (rotation_matrix[6]+rotation_matrix[2])/(4*z)\n        y = (rotation_matrix[5]+rotation_matrix[7])/(4*z)\n    quaternion = (w,x,y,z)\n    return quaternion\n\ndef quaternionFromRotMat_xyzw(rotation_matrix):\n    rotation_matrix = np.reshape(rotation_matrix, (1, 9))[0]\n    w = math.sqrt(rotation_matrix[0]+rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    x = math.sqrt(rotation_matrix[0]-rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    y = math.sqrt(-rotation_matrix[0]+rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    z = math.sqrt(-rotation_matrix[0]-rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    a = [x,y,z,w]\n    m = a.index(max(a))\n    if m == 0:\n        x = (rotation_matrix[7]-rotation_matrix[5])/(4*w)\n        y = (rotation_matrix[2]-rotation_matrix[6])/(4*w)\n        z = (rotation_matrix[3]-rotation_matrix[1])/(4*w)\n    if m == 1:\n        w = (rotation_matrix[7]-rotation_matrix[5])/(4*x)\n        y = (rotation_matrix[1]+rotation_matrix[3])/(4*x)\n        z = (rotation_matrix[6]+rotation_matrix[2])/(4*x)\n    if m == 2:\n        w = (rotation_matrix[2]-rotation_matrix[6])/(4*y)\n        x = (rotation_matrix[1]+rotation_matrix[3])/(4*y)\n        z = (rotation_matrix[5]+rotation_matrix[7])/(4*y)\n    if m == 3:\n        w = (rotation_matrix[3]-rotation_matrix[1])/(4*z)\n        x = (rotation_matrix[6]+rotation_matrix[2])/(4*z)\n        y = (rotation_matrix[5]+rotation_matrix[7])/(4*z)\n    quaternion = (x,y,z,w)\n    return quaternion\n\ndef rotVector(q, vector_ori):\n    r = quaternionToRotation(q)\n    x_ori = vector_ori[0]\n    y_ori = vector_ori[1]\n   
 z_ori = vector_ori[2]\n    x_rot = r[0][0] * x_ori + r[1][0] * y_ori + r[2][0] * z_ori\n    y_rot = r[0][1] * x_ori + r[1][1] * y_ori + r[2][1] * z_ori\n    z_rot = r[0][2] * x_ori + r[1][2] * y_ori + r[2][2] * z_ori\n    return (x_rot, y_rot, z_rot)\n\ndef cameraLPosToCameraRPos(q_l, pos_l, baseline_dis):\n    vector_camera_l_y = (1, 0, 0)\n    vector_rot = rotVector(q_l, vector_camera_l_y)\n    pos_r = (pos_l[0] + vector_rot[0] * baseline_dis,\n             pos_l[1] + vector_rot[1] * baseline_dis,\n             pos_l[2] + vector_rot[2] * baseline_dis)\n    return pos_r\n\ndef getRTFromAToB(pointCloudA, pointCloudB):\n\n    muA = np.mean(pointCloudA, axis=0)\n    muB = np.mean(pointCloudB, axis=0)\n\n    zeroMeanA = pointCloudA - muA\n    zeroMeanB = pointCloudB - muB\n\n    covMat = np.matmul(np.transpose(zeroMeanA), zeroMeanB)\n    U, S, Vt = np.linalg.svd(covMat)\n    R = np.matmul(Vt.T, U.T)\n\n    if np.linalg.det(R) < 0:\n        print(\"Reflection detected\")\n        Vt[2, :] *= -1\n        R = np.matmul(Vt.T, U.T)   # matrix product, not elementwise '*'\n    T = (-np.matmul(R, muA.T) + muB.T).reshape(3, 1)\n    return R, T\n\ndef cameraPositionRandomize(start_point_range, look_at_range, up_range):\n    r_range, vector_range = start_point_range\n    r_min, r_max = r_range\n    x_min, x_max, y_min, y_max = vector_range\n    r = random.uniform(r_min, r_max)\n    x = random.uniform(x_min, x_max)\n    y = random.uniform(y_min, y_max)\n    z = math.sqrt(1 - x**2 - y**2)\n    vector_camera_axis = np.array([x, y, z])\n\n    x_min, x_max, y_min, y_max = up_range\n    x = random.uniform(x_min, x_max)\n    y = random.uniform(y_min, y_max)    \n    z = math.sqrt(1 - x**2 - y**2)\n    up = np.array([x, y, z])\n\n    x_min, x_max, y_min, y_max, z_min, z_max = look_at_range\n    look_at = np.array([random.uniform(x_min, x_max),\n                        random.uniform(y_min, y_max),\n                        random.uniform(z_min, z_max)])\n    position = look_at + r * vector_camera_axis\n\n    vectorZ = - (look_at - 
position)/np.linalg.norm(look_at - position)\n    vectorX = np.cross(up, vectorZ)/np.linalg.norm(np.cross(up, vectorZ))\n    vectorY = np.cross(vectorZ, vectorX)/np.linalg.norm(np.cross(vectorX, vectorZ))\n\n    # points in camera coordinates\n    pointSensor= np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])\n\n    # points in world coordinates \n    pointWorld = np.array([position,\n                            position + vectorX,\n                            position + vectorY * 2,\n                            position + vectorZ * 3])\n\n    resR, resT = getRTFromAToB(pointSensor, pointWorld)\n    resQ = quaternionFromRotMat(resR)\n    return resQ, resT    \n\ndef add_noise_to_transformation_matrix(Rot, Trans, angle_std=2, translation_std=0.01):\n    axis = np.random.rand(3)\n    axis /= np.linalg.norm(axis)\n    angle = np.random.uniform(0, angle_std) / 180 * np.pi\n    Rot = Rotation.from_rotvec(angle * axis).as_matrix() @ Rot\n    direction = np.random.rand(3)\n    direction /= np.linalg.norm(direction)\n    length = np.random.uniform(0, translation_std)\n    Trans += direction.reshape(Trans.shape) * length\n    return Rot, Trans\n\ndef genCameraPosition(look_at):\n    quat_list = []\n    rot_list = []\n    trans_list = []\n    position_list = []\n    \n    alpha = 0\n    alpha_delta = (2 * math.pi) / num_point_ver\n    for i in range(num_point_ver):\n        alpha = alpha + alpha_delta\n        flag_x = 1\n        flag_y = 1\n        alpha1 = alpha\n        if alpha > math.pi/2 and alpha <= math.pi: \n            alpha1 = math.pi - alpha #alpha - math.pi/2\n            flag_x = -1\n            flag_y = 1\n        elif alpha > math.pi and alpha <= math.pi*(3/2):\n            alpha1 = alpha - math.pi #math.pi*(3/2) - alpha\n            flag_x = -1\n            flag_y = -1\n        elif alpha > math.pi*(3/2):\n            alpha1 = math.pi*2 - alpha #alpha - math.pi*(3/2)\n            flag_x = 1\n            flag_y = -1\n    \n        beta = 
beta_range[0]\n        beta_delta = (beta_range[1]-beta_range[0])/(num_point_hor-1)\n        for j in range(num_point_hor):\n            if j != 0:\n                beta = beta + beta_delta \n            x = flag_x * (r * math.sin(beta)) * math.cos(alpha1)\n            y = flag_y * (r * math.sin(beta)) * math.sin(alpha1)\n            z = r * math.cos(beta)\n            position = np.array([x, y, z]) + look_at\n            look_at = look_at\n            up = np.array([0, 0, 1])\n\n            vectorZ = - (look_at - position)/np.linalg.norm(look_at - position)\n            vectorX = np.cross(up, vectorZ)/np.linalg.norm(np.cross(up, vectorZ))\n            vectorY = np.cross(vectorZ, vectorX)/np.linalg.norm(np.cross(vectorX, vectorZ))\n\n            # points in camera coordinates\n            pointSensor= np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])\n\n            # points in world coordinates \n            pointWorld = np.array([position,\n                                   position + vectorX,\n                                   position + vectorY * 2,\n                                   position + vectorZ * 3])\n\n            # get R and T\n            resR, resT = getRTFromAToB(pointSensor, pointWorld)\n            \n            # add noise\n            resR, resT = add_noise_to_transformation_matrix(resR, resT)\n\n            # add to list\n            resQ = quaternionFromRotMat(resR)\n            quat_list.append(resQ)\n            rot_list.append(resR)\n            trans_list.append(resT)\n            #position_list.append(position)\n            position_list.append(resT.reshape(3))\n    return quat_list, trans_list, rot_list, position_list\n\ndef quanternion_mul(q1, q2):\n    s1 = q1[0]\n    v1 = np.array(q1[1:])\n    s2 = q2[0]\n    v2 = np.array(q2[1:])\n    s = s1 * s2 - np.dot(v1, v2)\n    v = s1 * v2 + s2 * v1 + np.cross(v1, v2)\n    return (s, v[0], v[1], v[2])\n\n\nclass BlenderRenderer(object):\n\n    def __init__(self, 
viewport_size_x=640, viewport_size_y=360):\n        '''\n        viewport_size_x, viewport_size_y: rendering viewport resolution\n        '''\n\n        # remove all objects, cameras and lights\n        for obj in bpy.data.meshes:\n            bpy.data.meshes.remove(obj)\n\n        for cam in bpy.data.cameras:\n            bpy.data.cameras.remove(cam)\n\n        for light in bpy.data.lights:\n            bpy.data.lights.remove(light)\n\n        for obj in bpy.data.objects:\n            bpy.data.objects.remove(obj, do_unlink=True)\n\n        # remove all materials\n        # for item in bpy.data.materials:\n        #     bpy.data.materials.remove(item)\n\n        render_context = bpy.context.scene.render\n\n        # add left camera\n        camera_l_data = bpy.data.cameras.new(name=\"camera_l\")\n        camera_l_object = bpy.data.objects.new(name=\"camera_l\", object_data=camera_l_data)\n        bpy.context.collection.objects.link(camera_l_object)\n\n        # add right camera\n        camera_r_data = bpy.data.cameras.new(name=\"camera_r\")\n        camera_r_object = bpy.data.objects.new(name=\"camera_r\", object_data=camera_r_data)\n        bpy.context.collection.objects.link(camera_r_object)\n\n        camera_l = bpy.data.objects[\"camera_l\"]\n        camera_r = bpy.data.objects[\"camera_r\"]\n\n        # set the camera postion and orientation so that it is in\n        # the front of the object\n        camera_l.location = (1, 0, 0)\n        camera_r.location = (1, 0, 0)\n\n        # add emitter light\n        light_emitter_data = bpy.data.lights.new(name=\"light_emitter\", type='SPOT')\n        light_emitter_object = bpy.data.objects.new(name=\"light_emitter\", object_data=light_emitter_data)\n        bpy.context.collection.objects.link(light_emitter_object)\n\n        light_emitter = bpy.data.objects[\"light_emitter\"]\n        light_emitter.location = (1, 0, 0)\n        light_emitter.data.energy = LIGHT_EMITTER_ENERGY\n\n        # render setting\n        
render_context.resolution_percentage = 100\n        self.render_context = render_context\n\n        self.camera_l = camera_l\n        self.camera_r = camera_r\n\n        self.light_emitter = light_emitter\n\n        self.model_loaded = False\n        self.background_added = None\n\n        self.render_context.resolution_x = viewport_size_x\n        self.render_context.resolution_y = viewport_size_y\n\n        self.my_material = {}\n        self.render_mode = 'IR'\n\n        # output setting \n        self.render_context.image_settings.file_format = 'PNG'\n        self.render_context.image_settings.compression = 0\n        self.render_context.image_settings.color_mode = 'BW'\n        self.render_context.image_settings.color_depth = '8'\n\n        # cycles setting\n        self.render_context.engine = 'CYCLES'\n        bpy.context.scene.cycles.progressive = 'BRANCHED_PATH'\n        bpy.context.scene.cycles.use_denoising = True\n        bpy.context.scene.cycles.denoiser = 'NLM'\n        bpy.context.scene.cycles.film_exposure = 0.5\n\n        # self.render_context.use_antialiasing = False\n        ##########\n        bpy.context.scene.view_layers[\"View Layer\"].use_sky = True\n        ##########\n\n        # switch on nodes\n        bpy.context.scene.use_nodes = True\n        tree = bpy.context.scene.node_tree\n        links = tree.links\n  \n        # clear default nodes\n        for n in tree.nodes:\n            tree.nodes.remove(n)\n  \n        # create input render layer node\n        rl = tree.nodes.new('CompositorNodeRLayers')\n\n        # create output node\n        self.fileOutput = tree.nodes.new(type=\"CompositorNodeOutputFile\")\n        self.fileOutput.base_path = \"./new_data/0000\"\n        self.fileOutput.format.file_format = 'OPEN_EXR'\n        self.fileOutput.format.color_depth= '32'\n        self.fileOutput.file_slots[0].path = 'depth#'\n        # links.new(map.outputs[0], fileOutput.inputs[0])\n        links.new(rl.outputs[2], 
self.fileOutput.inputs[0])\n        # links.new(gamma.outputs[0], fileOutput.inputs[0])\n\n        # depth sensor pattern\n        self.pattern = []\n        # environment map\n        self.env_map = []\n        ###\n        self.realtable_img_list = []\n        self.realfloor_img_list = []\n        self.obj_texture_img_list = []\n        ###\n\n    def loadImages(self, pattern_path, env_map_path, real_table_image_root_path, real_floor_image_root_path):\n        # load pattern image\n        self.pattern = bpy.data.images.load(filepath=pattern_path)\n        # load env map\n        for item in os.listdir(env_map_path):\n            if item.split('.')[-1] == 'hdr':\n                self.env_map.append(bpy.data.images.load(filepath=os.path.join(env_map_path, item)))\n        ###\n        # load real table images\n        for item in os.listdir(real_table_image_root_path):\n            if item.split('.')[-1] == 'jpg':\n                self.realtable_img_list.append(bpy.data.images.load(filepath=os.path.join(real_table_image_root_path, item)))\n        # load real floor images\n        for item in os.listdir(real_floor_image_root_path):\n            if item.split('.')[-1] == 'jpg':\n                self.realfloor_img_list.append(bpy.data.images.load(filepath=os.path.join(real_floor_image_root_path, item)))\n        # load obj texture images\n        f_teximg_idx = open(os.path.join(obj_texture_image_root_path,obj_texture_image_idxfile),\"r\")\n        lines = f_teximg_idx.readlines()\n        f_teximg_idx.close()\n        # for item in lines:\n        #     item = item[:-1]      # strip the trailing \"\\n\"\n        # #for item in os.listdir(obj_texture_image_root_path):\n        #     #if item.split('.')[-1] == 'jpg':\n        #     self.obj_texture_img_list.append(bpy.data.images.load(filepath=os.path.join(obj_texture_image_root_path, \"images\", item)))\n\n        start = random.randint(0,99900)\n        end = start+100\n        for item in lines[start:end]:\n            item = item[:-1]      # strip the trailing \"\\n\"
\n            self.obj_texture_img_list.append(bpy.data.images.load(filepath=os.path.join(obj_texture_image_root_path, \"images\", item)))\n        ###\n\n    def addEnvMap(self):\n        # Get the environment node tree of the current scene\n        node_tree = bpy.context.scene.world.node_tree\n        tree_nodes = node_tree.nodes\n\n        # Clear all nodes\n        tree_nodes.clear()\n\n        # Add Background node\n        node_background = tree_nodes.new(type='ShaderNodeBackground')\n\n        # Add Environment Texture node\n        node_environment = tree_nodes.new('ShaderNodeTexEnvironment')\n        # Load and assign the image to the node property\n        # node_environment.image = bpy.data.images.load(\"/Users/zhangjiyao/Desktop/test_addon/envmap_lib/autoshop_01_1k.hdr\") # Relative path\n        node_environment.location = -300,0\n\n        node_tex_coord = tree_nodes.new(type='ShaderNodeTexCoord')\n        node_tex_coord.location = -700,0\n\n        node_mapping = tree_nodes.new(type='ShaderNodeMapping')\n        node_mapping.location = -500,0\n\n        # Add Output node\n        node_output = tree_nodes.new(type='ShaderNodeOutputWorld')   \n        node_output.location = 200,0\n\n        # Link all nodes\n        links = node_tree.links\n        links.new(node_environment.outputs[\"Color\"], node_background.inputs[\"Color\"])\n        links.new(node_background.outputs[\"Background\"], node_output.inputs[\"Surface\"])\n        links.new(node_tex_coord.outputs[\"Generated\"], node_mapping.inputs[\"Vector\"])\n        links.new(node_mapping.outputs[\"Vector\"], node_environment.inputs[\"Vector\"])\n\n        #### bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = 1.0\n        random_energy = random.uniform(LIGHT_ENV_MAP_ENERGY_RGB * 0.8, LIGHT_ENV_MAP_ENERGY_RGB * 1.2)\n        bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = random_energy\n        ####\n\n    def setEnvMap(self, 
env_map_id, rotation_elur_z):\n        # Get the environment node tree of the current scene\n        node_tree = bpy.context.scene.world.node_tree\n\n        # Get Environment Texture node\n        node_environment = node_tree.nodes['Environment Texture']\n        # Load and assign the image to the node property\n        node_environment.image = self.env_map[env_map_id]\n\n        node_mapping = node_tree.nodes['Mapping']\n        node_mapping.inputs[2].default_value[2] = rotation_elur_z\n\n    def addMaskMaterial(self, num=20):\n        background_material_name_list = [\"mask_background\", \"mask_table\", \"mask_tableplane\", \"mask_arm\"]\n        for material_name in background_material_name_list:\n            material_class = (bpy.data.materials.get(material_name) or bpy.data.materials.new(material_name))         # test if material exists, if it does not exist, create it:\n\n            # enable 'Use nodes'\n            material_class.use_nodes = True\n            node_tree = material_class.node_tree\n\n            # remove default nodes\n            material_class.node_tree.nodes.clear()\n\n            # add new nodes  \n            node_1 = node_tree.nodes.new('ShaderNodeOutputMaterial')\n            node_2= node_tree.nodes.new('ShaderNodeBrightContrast')\n\n            # link nodes\n            node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n            node_2.inputs[0].default_value = (1, 1, 1, 1)\n            self.my_material[material_name] =  material_class\n\n        #print(\"##############################\", self.my_material)\n\n        for i in range(num):\n            class_name = str(i + 1)\n            # set the material of background    \n            material_name = \"mask_\" + class_name\n\n            # test if material exists\n            # if it does not exist, create it:\n            material_class = (bpy.data.materials.get(material_name) or \n                bpy.data.materials.new(material_name))\n\n            # enable 'Use 
nodes'\n            material_class.use_nodes = True\n            node_tree = material_class.node_tree\n\n            # remove default nodes\n            material_class.node_tree.nodes.clear()\n\n            # add new nodes  \n            node_1 = node_tree.nodes.new('ShaderNodeOutputMaterial')\n            node_2= node_tree.nodes.new('ShaderNodeBrightContrast')\n\n            # link nodes\n            node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n\n            if class_name.split('_')[0] == 'background' or class_name.split('_')[0] == 'table' or class_name.split('_')[0] == 'tableplane' or class_name.split('_')[0] == 'arm':\n                node_2.inputs[0].default_value = (1, 1, 1, 1)\n            else:\n                node_2.inputs[0].default_value = ((i + 1)/255., 0., 0., 1)\n\n            self.my_material[material_name] =  material_class\n\n    def addNOCSMaterial(self):\n        material_name = 'coord_color'\n        mat = (bpy.data.materials.get(material_name) or bpy.data.materials.new(material_name))\n\n        mat.use_nodes = True\n        node_tree = mat.node_tree\n        nodes = node_tree.nodes\n        nodes.clear()        \n\n        links = node_tree.links\n        links.clear()\n\n        vcol_R = nodes.new(type=\"ShaderNodeVertexColor\")\n        vcol_R.layer_name = \"Col_R\" # the vertex color layer name\n        vcol_G = nodes.new(type=\"ShaderNodeVertexColor\")\n        vcol_G.layer_name = \"Col_G\" # the vertex color layer name\n        vcol_B = nodes.new(type=\"ShaderNodeVertexColor\")\n        vcol_B.layer_name = \"Col_B\" # the vertex color layer name\n\n        node_Output = node_tree.nodes.new('ShaderNodeOutputMaterial')\n        node_Emission = node_tree.nodes.new('ShaderNodeEmission')\n        node_LightPath = node_tree.nodes.new('ShaderNodeLightPath')\n        node_Mix = node_tree.nodes.new('ShaderNodeMixShader')\n        node_Combine = node_tree.nodes.new(type=\"ShaderNodeCombineRGB\")\n\n\n        # make links\n        
node_tree.links.new(vcol_R.outputs[1], node_Combine.inputs[0])\n        node_tree.links.new(vcol_G.outputs[1], node_Combine.inputs[1])\n        node_tree.links.new(vcol_B.outputs[1], node_Combine.inputs[2])\n        node_tree.links.new(node_Combine.outputs[0], node_Emission.inputs[0])\n\n        node_tree.links.new(node_LightPath.outputs[0], node_Mix.inputs[0])\n        node_tree.links.new(node_Emission.outputs[0], node_Mix.inputs[2])\n        node_tree.links.new(node_Mix.outputs[0], node_Output.inputs[0])\n\n        self.my_material[material_name] = mat\n\n    def addNormalMaterial(self):\n        material_name = 'normal'\n        mat = (bpy.data.materials.get(material_name) or bpy.data.materials.new(material_name))\n        mat.use_nodes = True\n        node_tree = mat.node_tree\n        nodes = node_tree.nodes\n        nodes.clear()\n            \n        links = node_tree.links\n        links.clear()\n            \n        # Nodes :\n        new_node = nodes.new(type='ShaderNodeMath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (151.59744262695312, 854.5482177734375)\n        new_node.name = 'Math'\n        new_node.operation = 'MULTIPLY'\n        new_node.select = False\n        new_node.use_clamp = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n        new_node.inputs[1].default_value = 1.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeLightPath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (602.9912719726562, 1046.660888671875)\n        new_node.name = 'Light Path'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.outputs[0].default_value = 0.0\n        
new_node.outputs[1].default_value = 0.0\n        new_node.outputs[2].default_value = 0.0\n        new_node.outputs[3].default_value = 0.0\n        new_node.outputs[4].default_value = 0.0\n        new_node.outputs[5].default_value = 0.0\n        new_node.outputs[6].default_value = 0.0\n        new_node.outputs[7].default_value = 0.0\n        new_node.outputs[8].default_value = 0.0\n        new_node.outputs[9].default_value = 0.0\n        new_node.outputs[10].default_value = 0.0\n        new_node.outputs[11].default_value = 0.0\n        new_node.outputs[12].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeOutputMaterial')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.is_active_output = True\n        new_node.location = (1168.93017578125, 701.84033203125)\n        new_node.name = 'Material Output'\n        new_node.select = False\n        new_node.target = 'ALL'\n        new_node.width = 140.0\n        new_node.inputs[2].default_value = [0.0, 0.0, 0.0]\n\n        new_node = nodes.new(type='ShaderNodeBsdfTransparent')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (731.72900390625, 721.4832763671875)\n        new_node.name = 'Transparent BSDF'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [1.0, 1.0, 1.0, 1.0]\n\n        new_node = nodes.new(type='ShaderNodeCombineXYZ')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (594.4229736328125, 602.9271240234375)\n        new_node.name = 'Combine XYZ'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.0\n        new_node.inputs[1].default_value = 0.0\n        
new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = [0.0, 0.0, 0.0]\n\n        new_node = nodes.new(type='ShaderNodeMixShader')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (992.7239990234375, 707.2142333984375)\n        new_node.name = 'Mix Shader'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n\n        new_node = nodes.new(type='ShaderNodeEmission')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (774.0802612304688, 608.2547607421875)\n        new_node.name = 'Emission'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [1.0, 1.0, 1.0, 1.0]\n        new_node.inputs[1].default_value = 1.0\n\n        new_node = nodes.new(type='ShaderNodeSeparateXYZ')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (-130.12167358398438, 558.1497802734375)\n        new_node.name = 'Separate XYZ'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[0].default_value = 0.0\n        new_node.outputs[1].default_value = 0.0\n        new_node.outputs[2].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeMath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (162.43240356445312, 618.8094482421875)\n        new_node.name = 'Math.002'\n        new_node.operation = 'MULTIPLY'\n        new_node.select = False\n        new_node.use_clamp = False\n        new_node.width = 140.0\n        
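A bpy-free sketch of what this 'normal' material's node graph computes: the Geometry node's normal is transformed from world to camera space (Vector Transform node), and the three MULTIPLY Math nodes scale its components by 1, 1 and -1 before they are combined and emitted into a Raw OPEN_EXR render. Negating Z converts Blender's camera convention (looking down -Z) to the usual +Z-forward vision convention. Function and argument names below are illustrative, not part of this script:

```python
def encode_camera_space_normal(world_normal, world_to_camera_rot):
    """Rotate a world-space normal into camera space and flip Z.

    world_to_camera_rot is a 3x3 rotation given as nested lists (an
    assumption of this sketch; the node graph uses Blender's Vector
    Transform node for the same step).
    """
    n = [sum(world_to_camera_rot[i][j] * world_normal[j] for j in range(3))
         for i in range(3)]
    # the three MULTIPLY Math nodes: X * 1.0, Y * 1.0, Z * -1.0
    return (n[0], n[1], -n[2])
```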
new_node.inputs[0].default_value = 0.5\n        new_node.inputs[1].default_value = 1.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeMath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (126.8158187866211, 364.5539855957031)\n        new_node.name = 'Math.001'\n        new_node.operation = 'MULTIPLY'\n        new_node.select = False\n        new_node.use_clamp = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n        new_node.inputs[1].default_value = -1.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeVectorTransform')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.convert_from = 'WORLD'\n        new_node.convert_to = 'CAMERA'\n        new_node.location = (-397.0209045410156, 594.7037353515625)\n        new_node.name = 'Vector Transform'\n        new_node.select = False\n        new_node.vector_type = 'VECTOR'\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [0.5, 0.5, 0.5]\n        new_node.outputs[0].default_value = [0.0, 0.0, 0.0]\n\n        new_node = nodes.new(type='ShaderNodeNewGeometry')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (-651.8067016601562, 593.0455932617188)\n        new_node.name = 'Geometry'\n        new_node.width = 140.0\n        new_node.outputs[0].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[1].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[2].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[3].default_value = [0.0, 0.0, 0.0]\n      
  new_node.outputs[4].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[5].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[6].default_value = 0.0\n        new_node.outputs[7].default_value = 0.0\n        new_node.outputs[8].default_value = 0.0\n\n        # Links :\n\n        links.new(nodes[\"Light Path\"].outputs[0], nodes[\"Mix Shader\"].inputs[0])    \n        links.new(nodes[\"Separate XYZ\"].outputs[0], nodes[\"Math\"].inputs[0])    \n        links.new(nodes[\"Separate XYZ\"].outputs[1], nodes[\"Math.002\"].inputs[0])    \n        links.new(nodes[\"Separate XYZ\"].outputs[2], nodes[\"Math.001\"].inputs[0])    \n        links.new(nodes[\"Vector Transform\"].outputs[0], nodes[\"Separate XYZ\"].inputs[0])    \n        links.new(nodes[\"Combine XYZ\"].outputs[0], nodes[\"Emission\"].inputs[0])    \n        links.new(nodes[\"Math\"].outputs[0], nodes[\"Combine XYZ\"].inputs[0])    \n        links.new(nodes[\"Math.002\"].outputs[0], nodes[\"Combine XYZ\"].inputs[1])    \n        links.new(nodes[\"Math.001\"].outputs[0], nodes[\"Combine XYZ\"].inputs[2])    \n        links.new(nodes[\"Transparent BSDF\"].outputs[0], nodes[\"Mix Shader\"].inputs[1])    \n        links.new(nodes[\"Emission\"].outputs[0], nodes[\"Mix Shader\"].inputs[2])    \n        links.new(nodes[\"Mix Shader\"].outputs[0], nodes[\"Material Output\"].inputs[0])    \n        links.new(nodes[\"Geometry\"].outputs[1], nodes[\"Vector Transform\"].inputs[0])    \n\n        self.my_material[material_name] = mat\n\n    def addMaterialLib(self, material_class_instance_pairs):\n        for mat in bpy.data.materials:\n            name = mat.name\n            name_class = str(name.split('_')[0])\n            if name_class != 'Dots Stroke' and name_class != 'default':\n                #print(name)\n                if name_class not in self.my_material:\n                    self.my_material[name_class] = [mat]\n                else:\n                    self.my_material[name_class].append(mat)   
 # e.g. self.my_material['metal'] = [.....]\n\n    def setCamera(self, quaternion, translation, fov, baseline_distance):\n        self.camera_l.data.angle = fov\n        self.camera_r.data.angle = self.camera_l.data.angle\n        cx = translation[0]\n        cy = translation[1]\n        cz = translation[2]\n\n        self.camera_l.location[0] = cx\n        self.camera_l.location[1] = cy \n        self.camera_l.location[2] = cz\n\n        self.camera_l.rotation_mode = 'QUATERNION'\n        self.camera_l.rotation_quaternion[0] = quaternion[0]\n        self.camera_l.rotation_quaternion[1] = quaternion[1]\n        self.camera_l.rotation_quaternion[2] = quaternion[2]\n        self.camera_l.rotation_quaternion[3] = quaternion[3]\n\n        self.camera_r.rotation_mode = 'QUATERNION'\n        self.camera_r.rotation_quaternion[0] = quaternion[0]\n        self.camera_r.rotation_quaternion[1] = quaternion[1]\n        self.camera_r.rotation_quaternion[2] = quaternion[2]\n        self.camera_r.rotation_quaternion[3] = quaternion[3]\n        cx, cy, cz = cameraLPosToCameraRPos(quaternion, (cx, cy, cz), baseline_distance)\n        self.camera_r.location[0] = cx\n        self.camera_r.location[1] = cy \n        self.camera_r.location[2] = cz\n\n    def setLighting(self):\n        # emitter        \n        #self.light_emitter.location = self.camera_r.location\n        self.light_emitter.location = self.camera_l.location + 0.51 * (self.camera_r.location - self.camera_l.location)\n        self.light_emitter.rotation_mode = 'QUATERNION'\n        self.light_emitter.rotation_quaternion = self.camera_r.rotation_quaternion\n\n        # emitter setting\n        bpy.context.view_layer.objects.active = None\n        # bpy.ops.object.select_all(action=\"DESELECT\")\n        self.render_context.engine = 'CYCLES'\n        self.light_emitter.select_set(True)\n        self.light_emitter.data.use_nodes = True\n        self.light_emitter.data.type = \"POINT\"\n        
self.light_emitter.data.shadow_soft_size = 0.001\n        random_energy = random.uniform(LIGHT_EMITTER_ENERGY * 0.9, LIGHT_EMITTER_ENERGY * 1.1)\n        self.light_emitter.data.energy = random_energy\n\n        # remove default nodes\n        light_emitter = bpy.data.objects[\"light_emitter\"].data\n        light_emitter.node_tree.nodes.clear()\n        # light_projector.node_tree.nodes.remove(light_projector.node_tree.nodes.get('光输出'))  # '光输出' is 'Light Output', the default node title created under a Chinese-locale UI\n        # light_projector.node_tree.nodes.remove(light_projector.node_tree.nodes.get('自发光(发射)'))  # '自发光(发射)' is 'Emission'\n\n        # add new nodes\n        light_output = light_emitter.node_tree.nodes.new(\"ShaderNodeOutputLight\")\n        node_1 = light_emitter.node_tree.nodes.new(\"ShaderNodeEmission\")\n        node_2 = light_emitter.node_tree.nodes.new(\"ShaderNodeTexImage\")\n        node_3 = light_emitter.node_tree.nodes.new(\"ShaderNodeMapping\")\n        node_4 = light_emitter.node_tree.nodes.new(\"ShaderNodeVectorMath\")\n        node_5 = light_emitter.node_tree.nodes.new(\"ShaderNodeSeparateXYZ\")\n        node_6 = light_emitter.node_tree.nodes.new(\"ShaderNodeTexCoord\")\n\n        # link nodes\n        light_emitter.node_tree.links.new(light_output.inputs[0], node_1.outputs[0])\n        light_emitter.node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n        light_emitter.node_tree.links.new(node_2.inputs[0], node_3.outputs[0])\n        light_emitter.node_tree.links.new(node_3.inputs[0], node_4.outputs[0])\n        light_emitter.node_tree.links.new(node_4.inputs[0], node_6.outputs[1])\n        light_emitter.node_tree.links.new(node_4.inputs[1], node_5.outputs[2])\n        light_emitter.node_tree.links.new(node_5.inputs[0], node_6.outputs[1])\n\n        # set parameter of nodes\n        node_1.inputs[1].default_value = 1.0        # emission strength\n        node_2.extension = 'CLIP'\n        # node_2.interpolation = 'Cubic'\n\n        
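The emitter node graph above (TexCoord → Vector Math in DIVIDE mode → Mapping → Image Texture) amounts to a pinhole projection of the IR pattern onto the scene. A bpy-free sketch of the equivalent math, ignoring the Mapping node's small Z rotation and using the location/scale values assigned in this script; the function name is illustrative:

```python
def project_pattern_uv(direction, location=(0.5, 0.5), scale=(0.6, 0.85)):
    """Map a light-space direction vector to pattern-texture UV coordinates."""
    x, y, z = direction
    # perspective divide: the Vector Math node in DIVIDE mode divides the
    # texture-coordinate vector by its own z component
    px, py = x / z, y / z
    # Mapping node (POINT type): scale first, then translate by location
    return (px * scale[0] + location[0], py * scale[1] + location[1])
```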
# location of pattern\n        node_3.inputs[1].default_value[0] = 0.5\n        node_3.inputs[1].default_value[1] = 0.5\n        node_3.inputs[1].default_value[2] = 0\n\n        # rotation of pattern\n        node_3.inputs[2].default_value[0] = 0\n        node_3.inputs[2].default_value[1] = 0\n        node_3.inputs[2].default_value[2] = 0.05\n\n        # scale of pattern\n        node_3.inputs[3].default_value[0] = 0.6\n        node_3.inputs[3].default_value[1] = 0.85\n        node_3.inputs[3].default_value[2] = 0\n        node_4.operation = 'DIVIDE'\n\n        # pattern path\n        node_2.image = self.pattern\n\n    def lightModeSelect(self, light_mode):\n        if light_mode == \"RGB\":\n            self.light_emitter.hide_render = True\n            # set the environment map energy\n            #### random_energy = random.uniform(LIGHT_ENV_MAP_ENERGY_RGB * 0.8, LIGHT_ENV_MAP_ENERGY_RGB * 1.2)\n            #### bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = random_energy\n\n        elif light_mode == \"IR\":\n            self.light_emitter.hide_render = False\n            # set the environment map energy\n            random_energy = random.uniform(LIGHT_ENV_MAP_ENERGY_IR * 0.8, LIGHT_ENV_MAP_ENERGY_IR * 1.2)\n            bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = random_energy\n\n        elif light_mode in (\"Mask\", \"NOCS\", \"Normal\"):\n            self.light_emitter.hide_render = True\n            bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = 0\n\n        else:\n            print(\"Unsupported light mode: %s\" % light_mode)\n\n    def outputModeSelect(self, output_mode):\n        if output_mode == \"RGB\":\n            self.render_context.image_settings.file_format = 'PNG'\n            self.render_context.image_settings.compression = 0\n            self.render_context.image_settings.color_mode = 'RGB'\n            self.render_context.image_settings.color_depth = '8'\n          
  bpy.context.scene.view_settings.view_transform = 'Filmic'\n            bpy.context.scene.render.filter_size = 1.5\n            self.render_context.resolution_x = 640 ### 1280\n            self.render_context.resolution_y = 360 ### 720\n        elif output_mode == \"IR\":\n            self.render_context.image_settings.file_format = 'PNG'\n            self.render_context.image_settings.compression = 0\n            self.render_context.image_settings.color_mode = 'BW'\n            self.render_context.image_settings.color_depth = '8'\n            bpy.context.scene.view_settings.view_transform = 'Filmic'\n            bpy.context.scene.render.filter_size = 1.5\n            self.render_context.resolution_x = 640 ### 1280\n            self.render_context.resolution_y = 360 ### 720\n        elif output_mode == \"Mask\":\n            self.render_context.image_settings.file_format = 'OPEN_EXR'\n            self.render_context.image_settings.color_mode = 'RGB'\n            bpy.context.scene.view_settings.view_transform = 'Raw'\n            bpy.context.scene.render.filter_size = 0\n            self.render_context.resolution_x = 640\n            self.render_context.resolution_y = 360\n        elif output_mode == \"NOCS\":\n            # self.render_context.image_settings.file_format = 'OPEN_EXR'\n            self.render_context.image_settings.file_format = 'PNG'            \n            self.render_context.image_settings.color_mode = 'RGB'\n            self.render_context.image_settings.color_depth = '8'\n            bpy.context.scene.view_settings.view_transform = 'Raw'\n            bpy.context.scene.render.filter_size = 0\n            self.render_context.resolution_x = 640\n            self.render_context.resolution_y = 360\n        elif output_mode == \"Normal\":\n            self.render_context.image_settings.file_format = 'OPEN_EXR'\n            self.render_context.image_settings.color_mode = 'RGB'\n            bpy.context.scene.view_settings.view_transform = 'Raw'\n      
      bpy.context.scene.render.filter_size = 1.5\n            self.render_context.resolution_x = 640\n            self.render_context.resolution_y = 360\n        else:\n            print(\"Unsupported output mode: %s\" % output_mode)\n\n    def renderEngineSelect(self, engine_mode):\n\n        if engine_mode == \"CYCLES\":\n            self.render_context.engine = 'CYCLES'\n            bpy.context.scene.cycles.progressive = 'BRANCHED_PATH'\n            bpy.context.scene.cycles.use_denoising = True\n            bpy.context.scene.cycles.denoiser = 'NLM'\n            bpy.context.scene.cycles.film_exposure = 1.0\n            bpy.context.scene.cycles.aa_samples = CYCLES_SAMPLE\n\n            ## Set the device_type\n            bpy.context.preferences.addons[\"cycles\"].preferences.compute_device_type = \"CUDA\" # or \"OPENCL\"\n            ## Set the device and feature set\n            # bpy.context.scene.cycles.device = \"CPU\"\n\n            ## call get_devices() so Blender detects the available GPU devices\n            cuda_devices, _ = bpy.context.preferences.addons[\"cycles\"].preferences.get_devices()\n            print(bpy.context.preferences.addons[\"cycles\"].preferences.compute_device_type)\n            for d in bpy.context.preferences.addons[\"cycles\"].preferences.devices:\n                d[\"use\"] = 1 # enable all devices, including GPU and CPU\n                print(d[\"name\"], d[\"use\"])\n            # then restrict rendering to the devices listed in DEVICE_LIST\n            device_list = DEVICE_LIST\n            activated_gpus = []\n            for i, device in enumerate(cuda_devices):\n                if i in device_list:\n                    device.use = True\n                    activated_gpus.append(device.name)\n                else:\n                    device.use = False\n\n        elif engine_mode == \"EEVEE\":\n            bpy.context.scene.render.engine = 'BLENDER_EEVEE'\n        else:\n            print(\"Unsupported render engine: %s\" % engine_mode)\n\n    def addBackground(self, size, position, scale):\n        # set the 
material of background    \n        material_name = \"default_background\"\n\n        # test if material exists\n        # if it does not exist, create it:\n        material_background = (bpy.data.materials.get(material_name) or \n            bpy.data.materials.new(material_name))\n\n        # enable 'Use nodes'\n        material_background.use_nodes = True\n        node_tree = material_background.node_tree\n\n        # remove default nodes\n        material_background.node_tree.nodes.clear()\n        # material_background.node_tree.nodes.remove(material_background.node_tree.nodes.get('Principled BSDF')) #title of the existing node when materials.new\n        # material_background.node_tree.nodes.remove(material_background.node_tree.nodes.get('Material Output')) #title of the existing node when materials.new\n\n        # add new nodes  \n        node_1 = node_tree.nodes.new('ShaderNodeOutputMaterial')\n        node_2 = node_tree.nodes.new('ShaderNodeBsdfPrincipled')\n        node_3 = node_tree.nodes.new('ShaderNodeTexImage')\n\n        # link nodes\n        node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n        node_tree.links.new(node_2.inputs[0], node_3.outputs[0])\n\n        # add texture image\n        node_3.image = bpy.data.images.load(filepath=default_background_texture_path)\n        self.my_material['default_background'] = material_background\n\n        # add background plane\n        for i in range(-2, 3, 1):\n            for j in range(-2, 3, 1):\n                position_i_j = (i * size + position[0], j * size + position[1], position[2] - TABLE_CAD_MODEL_HEIGHT)\n                bpy.ops.mesh.primitive_plane_add(size=size, enter_editmode=False, align='WORLD', location=position_i_j, scale=scale)\n                bpy.ops.rigidbody.object_add()\n                bpy.context.object.rigid_body.type = 'PASSIVE'\n                bpy.context.object.rigid_body.collision_shape = 'BOX'\n        for i in range(-2, 3, 1):\n            for j in [-2, 2]:\n    
            position_i_j = (i * size + position[0], j * size + position[1], position[2] - 0.25)# - TABLE_CAD_MODEL_HEIGHT)\n                rotation_elur = (math.pi / 2., 0., 0.)\n                bpy.ops.mesh.primitive_plane_add(size=size, enter_editmode=False, align='WORLD', location=position_i_j, rotation = rotation_elur)\n                bpy.ops.rigidbody.object_add()\n                bpy.context.object.rigid_body.type = 'PASSIVE'\n                bpy.context.object.rigid_body.collision_shape = 'BOX'    \n        for j in range(-2, 3, 1):\n            for i in [-2, 2]:\n                position_i_j = (i * size + position[0], j * size + position[1], position[2] - 0.25)# - TABLE_CAD_MODEL_HEIGHT)\n                rotation_elur = (0, math.pi / 2, 0)\n                bpy.ops.mesh.primitive_plane_add(size=size, enter_editmode=False, align='WORLD', location=position_i_j, rotation = rotation_elur)\n                bpy.ops.rigidbody.object_add()\n                bpy.context.object.rigid_body.type = 'PASSIVE'\n                bpy.context.object.rigid_body.collision_shape = 'BOX'        \n        count = 0\n        for obj in bpy.data.objects:\n            if obj.type == \"MESH\":\n                obj.name = \"background_\" + str(count)\n                obj.data.name = \"background_\" + str(count)\n                obj.active_material = material_background\n                count += 1\n\n        self.background_added = True\n\n    def clearModel(self):\n        '''\n        # delete all meshes\n        for item in bpy.data.meshes:\n            bpy.data.meshes.remove(item)\n        for item in bpy.data.materials:\n            bpy.data.materials.remove(item)\n        '''\n\n        # remove all objects except background\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                bpy.data.meshes.remove(obj.data)\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not 
obj.name.split('_')[0] == 'background':\n                bpy.data.objects.remove(obj, do_unlink=True)\n\n        # remove all default material\n        for mat in bpy.data.materials:\n            name = mat.name.split('.')\n            if name[0] == 'Material':\n                bpy.data.materials.remove(mat)\n\n    def loadModel(self, file_path):\n        self.model_loaded = True\n        try:\n            if file_path.endswith('obj'):\n                bpy.ops.import_scene.obj(filepath=file_path)\n            elif file_path.endswith('3ds'):\n                bpy.ops.import_scene.autodesk_3ds(filepath=file_path)\n            elif file_path.endswith('dae'):\n                # Must install OpenCollada. Please read README.md\n                bpy.ops.wm.collada_import(filepath=file_path)\n            else:\n                self.model_loaded = False\n                raise Exception(\"Loading failed: %s\" % (file_path))\n        except Exception:\n            self.model_loaded = False\n\n    def render(self, image_name=\"tmp\", image_path=RENDERING_PATH):\n        # Render the object\n        if not self.model_loaded:\n            print(\"Model not loaded.\")\n            return      \n\n        if self.render_mode == \"IR\":\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"IR\")\n            self.outputModeSelect(\"IR\")\n            self.renderEngineSelect(\"CYCLES\")\n\n        elif self.render_mode == 'RGB':\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"RGB\")\n            self.outputModeSelect(\"RGB\")\n            self.renderEngineSelect(\"CYCLES\")\n\n        elif self.render_mode == \"Mask\":\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"Mask\")\n            self.outputModeSelect(\"Mask\")\n            # 
self.renderEngineSelect(\"EEVEE\")\n            self.renderEngineSelect(\"CYCLES\")\n            bpy.context.scene.cycles.use_denoising = False\n            bpy.context.scene.cycles.aa_samples = 1\n\n        elif self.render_mode == \"NOCS\":\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"NOCS\")\n            self.outputModeSelect(\"NOCS\")\n            # self.renderEngineSelect(\"EEVEE\")\n            self.renderEngineSelect(\"CYCLES\")\n            bpy.context.scene.cycles.use_denoising = False\n            bpy.context.scene.cycles.aa_samples = 1\n\n        elif self.render_mode == \"Normal\":\n            bpy.context.scene.use_nodes = True\n            self.fileOutput.base_path = image_path.replace(\"normal\",\"depth\")\n            self.fileOutput.file_slots[0].path = image_name[:4]+\"_#\"# + 'depth_#'\n\n            # set light and render mode\n            self.lightModeSelect(\"Normal\")\n            self.outputModeSelect(\"Normal\")\n            # self.renderEngineSelect(\"EEVEE\")\n            self.renderEngineSelect(\"CYCLES\")\n            bpy.context.scene.cycles.use_denoising = False\n            bpy.context.scene.cycles.aa_samples = 32\n\n        else:\n            print(\"The render mode is not supported\")\n            return \n\n        bpy.context.scene.render.filepath = os.path.join(image_path, image_name)\n        bpy.ops.render.render(write_still=True)  # save straight to file\n\n    def set_material_randomize_mode(self, class_material_pairs, mat_randomize_mode, instance, material_type_in_mixed_mode):\n        if mat_randomize_mode in ['mixed','diffuse','transparent','specular_tex','specular_texmix']:\n            if material_type_in_mixed_mode == 'raw':\n                print(instance.name, 'material type: raw')\n                set_modify_raw_material(instance)\n            else:\n                print(instance.name, 'material type: ', material_type_in_mixed_mode)\n 
               material = random.sample(self.my_material[material_type_in_mixed_mode], 1)[0]\n                set_modify_material(instance, material, self.obj_texture_img_list, mat_randomize_mode=mat_randomize_mode) \n        elif mat_randomize_mode == 'specular':\n            material = random.sample(self.my_material[material_type_in_mixed_mode], 1)[0]\n            print(instance.name, 'material type: ', material_type_in_mixed_mode)\n            set_modify_material(instance, material, self.obj_texture_img_list, mat_randomize_mode=mat_randomize_mode, is_transfer=False) \n        else:\n            print(\"No such mat_randomize_mode!\")\n\n    def get_instance_pose(self):\n        instance_pose = {}\n        bpy.context.view_layer.update()\n        cam = self.camera_l\n        mat_rot_x = Matrix.Rotation(math.radians(180.0), 4, 'X')\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                instance_id = obj.name.split('_')[0]\n                mat_rel = cam.matrix_world.inverted() @ obj.matrix_world\n                # location\n                relative_location = [mat_rel.translation[0],\n                                     - mat_rel.translation[1],\n                                     - mat_rel.translation[2]]\n                # rotation\n                # relative_rotation_euler = mat_rel.to_euler() # must be converted from radians to degrees\n                relative_rotation_quat = [mat_rel.to_quaternion()[0],\n                                          mat_rel.to_quaternion()[1],\n                                          mat_rel.to_quaternion()[2],\n                                          mat_rel.to_quaternion()[3]]\n                quat_x = [0, 1, 0, 0]\n                quat = quanternion_mul(quat_x, relative_rotation_quat)\n                quat = [quat[0], - quat[1], - quat[2], - quat[3]]\n                instance_pose[str(instance_id)] = [quat, relative_location]\n\n        
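`quanternion_mul` used above is defined elsewhere in this script. For reference, a self-contained Hamilton product in the same (w, x, y, z) convention, so the 180°-about-X flip quat_x = (0, 1, 0, 0) can be checked in isolation; the function name below is illustrative:

```python
def quaternion_mul(q1, q2):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)
```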
return instance_pose\n\n    def check_visible(self, threshold=(0.1, 0.9, 0.1, 0.9)):\n        w_min, w_max, h_min, h_max = threshold\n        visible_objects_list = []\n        bpy.context.view_layer.update()\n        cs, ce = self.camera_l.data.clip_start, self.camera_l.data.clip_end\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                obj_center = obj.matrix_world.translation\n                # normalized device coordinates of the object center in the left camera\n                co_ndc = world_to_camera_view(scene, self.camera_l, obj_center)\n                if (w_min < co_ndc.x < w_max and\n                    h_min < co_ndc.y < h_max and\n                    cs < co_ndc.z < ce):\n                    obj.select_set(True)\n                    visible_objects_list.append(obj)\n                else:\n                    obj.select_set(False)\n        return visible_objects_list\n\n\ndef setModelPosition(instance, location, quaternion):\n    instance.rotation_mode = 'QUATERNION'\n    instance.rotation_quaternion[0] = quaternion[0]\n    instance.rotation_quaternion[1] = quaternion[1]\n    instance.rotation_quaternion[2] = quaternion[2]\n    instance.rotation_quaternion[3] = quaternion[3]\n    instance.location = location\n\ndef setRigidBody(instance):\n    bpy.context.view_layer.objects.active = instance\n    object_single = bpy.context.active_object\n\n    # add rigid-body physics to the instance\n    bpy.ops.rigidbody.object_add()\n    bpy.context.object.rigid_body.mass = 1\n    bpy.context.object.rigid_body.kinematic = True\n    bpy.context.object.rigid_body.collision_shape = 'CONVEX_HULL'\n    bpy.context.object.rigid_body.restitution = 0.01\n    bpy.context.object.rigid_body.angular_damping = 0.8\n    bpy.context.object.rigid_body.linear_damping = 0.99\n\n    # switch to dynamic and keyframe it at frame 0\n    bpy.context.object.rigid_body.kinematic = False\n    object_single.keyframe_insert(data_path='rigid_body.kinematic', frame=0)\n\ndef set_visiable_objects(visible_objects_list):\n    for obj in 
bpy.data.objects:\n        if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n            if obj in visible_objects_list:\n                obj.hide_render = False\n            else:\n                obj.hide_render = True\n\ndef generate_CAD_model_list(urdf_path_list):\n    CAD_model_list = {}\n    for instance_path in urdf_path_list:\n        class_name = 'other'\n        print(instance_path)\n        class_list = [[instance_path, class_name]]\n        if class_name in CAD_model_list:\n            CAD_model_list[class_name] = CAD_model_list[class_name] + class_list\n        else:\n            CAD_model_list[class_name] = class_list\n    return CAD_model_list\n\ndef generate_material_type(obj_name, class_material_pairs, instance_material_except_pairs, instance_material_include_pairs, material_class_instance_pairs, material_type):\n    specular_type_for_ins_list = []\n    transparent_type_for_ins_list = []\n    diffuse_type_for_ins_list = []\n    for key in instance_material_except_pairs:\n        if key in material_class_instance_pairs['specular']:\n            specular_type_for_ins_list.append(key)\n        elif key in material_class_instance_pairs['transparent']:\n            transparent_type_for_ins_list.append(key)\n        elif key in material_class_instance_pairs['diffuse']:\n            diffuse_type_for_ins_list.append(key)\n    for key in instance_material_include_pairs:\n        ### if ins_idx in instance_material_include_pairs[key]:\n        if key in material_class_instance_pairs['specular']:\n            specular_type_for_ins_list.append(key)\n        elif key in material_class_instance_pairs['transparent']:\n            transparent_type_for_ins_list.append(key)\n        elif key in material_class_instance_pairs['diffuse']:\n            diffuse_type_for_ins_list.append(key)\n\n    if material_type == \"transparent\":\n        
return random.sample(transparent_type_for_ins_list, 1)[0]\n    elif material_type == \"diffuse\":\n        return random.sample(diffuse_type_for_ins_list, 1)[0]\n    elif material_type == \"specular\" or material_type == \"specular_tex\" or material_type == \"specular_texmix\":\n        return random.sample(specular_type_for_ins_list, 1)[0]\n    elif material_type == \"mixed\":\n        # randomly select one material class\n        flag = random.randint(0, 2) # D:S:T=1:2:2\n        # select the raw material\n        if flag == 0:\n            # flag = random.randint(0, 7) # 1:7\n            # if flag == 0:\n            #     return 'raw'\n            # else:\n            return random.sample(diffuse_type_for_ins_list, 1)[0] ### 'diffuse'\n        # select one from specular and transparent\n        elif flag == 1:\n            return random.sample(specular_type_for_ins_list, 1)[0] ### 'specular'\n        else:\n            return random.sample(transparent_type_for_ins_list, 1)[0]  ### 'transparent'\n    else:\n        print(\"Material type error: \", material_type)\n   \n\n\n################################\n#\n#            Main\n#\n################################\n### set random seed\nrandom.seed(1143+SCENE_NUM) \n\n# load VGN obj path and tsdf pose\nurdfs_and_poses_files_list = sorted(os.listdir(raw_urdfs_and_poses_dir_path))\nurdfs_and_poses_dict = np.load(os.path.join(raw_urdfs_and_poses_dir_path, urdfs_and_poses_files_list[SCENE_NUM]), allow_pickle=True)['pc']\nurdf_path_list = list(urdfs_and_poses_dict[:,0])\nobj_scale_list = list(urdfs_and_poses_dict[:,1])\nobj_RT_list = list(urdfs_and_poses_dict[:,2])\n\nobj_quat_list = []\nobj_trans_list = []\nfor RT in obj_RT_list:\n    R = RT[:3,:3]\n    T = RT[:3,3]\n    T = T + np.array([-0.15,-0.15,-0.0503])\n    quat = quaternionFromRotMat(R)\n    obj_quat_list.append(quat)\n    obj_trans_list.append(T)\n\n\ng_synset_name_label_pairs = {'other': 0}   \n\nmaterial_class_instance_pairs = {'specular': 
['metal','porcelain','plasticsp','paintsp'],\n                                 'transparent': ['glass'],\n                                 'diffuse': ['plastic','rubber','paper','leather','wood','clay','fabric'],\n                                 'background': ['background']}\n\n\nclass_material_pairs = {'specular': ['other'],\n                        'transparent': ['other'],\n                        'diffuse': ['other']}\n\ninstance_material_except_pairs = {'metal': [],\n                                  'porcelain': [],\n                                  'plasticsp': [],\n                                  'paintsp':[],\n\n                                  'glass': [],\n                                  \n                                  'plastic': [],\n                                  'rubber': [],     \n                                  'leather': [],\n                                  'wood':[],\n                                  'paper':[],\n                                  'fabric':[],\n                                  'clay':[],   \n                                  }\n\ninstance_material_include_pairs = {\n                                  }\n\nmaterial_class_id_dict = {'raw': 0,\n                        'diffuse': 1,\n                        'transparent': 2,\n                        'specular': 3}\n\nmaterial_type_id_dict = {'raw': 0,\n                        'metal': 1,\n                        'porcelain': 2,\n                        'plasticsp': 3,\n                        'paintsp':4,\n                        'glass': 5, \n                        'plastic': 6,\n                        'rubber': 7,     \n                        'leather': 8,\n                        'wood':9,\n                        'paper':10,\n                        'fabric':11,\n                        'clay':12,               \n                        }\n\n\nmax_instance_num = 20\n\nif not os.path.exists(output_root_path):\n    os.makedirs(output_root_path)\n\n# generate CAD 
model list\nCAD_model_list = generate_CAD_model_list(urdf_path_list)\n\nrenderer = BlenderRenderer(viewport_size_x=camera_width, viewport_size_y=camera_height)\nrenderer.loadImages(emitter_pattern_path, env_map_path, real_table_image_root_path, real_floor_image_root_path)\nrenderer.addEnvMap()\nrenderer.addBackground(background_size, background_position, background_scale)\nrenderer.addMaterialLib(material_class_instance_pairs)  ###\nrenderer.addMaskMaterial(max_instance_num)\nrenderer.addNOCSMaterial()\nrenderer.addNormalMaterial()\n\n\nrenderer.clearModel()\n# set scene output path\npath_scene = os.path.join(output_root_path, urdfs_and_poses_files_list[SCENE_NUM][:-4])### \"scene_\"+str(SCENE_NUM).zfill(4))\nif os.path.exists(path_scene)==False:\n    os.makedirs(path_scene)\n\n\n# camera pose list, environment light list and background material list\nquaternion_list = []\ntranslation_list = []\n\n# environment map list\nenv_map_id_list = []\nrotation_elur_z_list = []\n\n# background material list\nbackground_material_list = []\n\n# table material list\ntable_material_list = []\n\n\nlook_at = look_at_shift\nquat_list, trans_list, rot_list, position_list = genCameraPosition(look_at)\nnp.savetxt(os.path.join(path_scene, \"cam_pos_pc.txt\"), np.array(position_list))\n\nrot_array = np.array(rot_list)  # (256, 3, 3)\ntrans_array = np.array(trans_list)  # (256, 3, 1)\ncam_RT = np.concatenate([rot_array, trans_array], 2)\nzero_one = np.expand_dims([[0, 0, 0, 1]],0).repeat(rot_array.shape[0],axis=0)\ncam_RT = np.concatenate([cam_RT, zero_one], 1)  # (256, 4, 4)\nnp.save(os.path.join(path_scene, \"camera_pose.npy\"), cam_RT)\n\n\n# generate camera pose list\nfor i in range(NUM_FRAME_PER_SCENE):\n    quaternion = quat_list[i]\n    translation = trans_list[i]\n    quaternion_list.append(quaternion)\n    translation_list.append(translation)\n\n# generate environment map list\nenv_map_id_list.append(random.randint(0, len(renderer.env_map) - 
1))\nrotation_elur_z_list.append(random.uniform(-math.pi, math.pi))\n\n# generate background material list\nif my_material_randomize_mode == 'raw':\n    background_material_list.append(renderer.my_material['default_background'])\n    # bpy.data.objects['background'].active_material = renderer.my_material['default_background']\nelse:\n    # draw two distinct materials at once: one for the background, one for the table\n    background_material_selected, table_material_selected = random.sample(renderer.my_material['background'], 2)\n    background_material_list.append(background_material_selected)\n    # bpy.data.objects['background'].active_material = background_material_selected\n    table_material_list.append(table_material_selected)\n    #print(background_material_list, table_material_list)\n\n# read objects from folder\nmeta_output = {}\nselect_model_list = []\nselect_model_list_other = []\nselect_model_list_transparent = []\nselect_model_list_dis = []\nselect_number = 1\n\n\nfor item in CAD_model_list:\n    if item in ['other']:\n        test = CAD_model_list[item]\n        for model in test:\n            select_model_list.append(model)\n    else:\n        print(\"No such category!\")\n\n\n# load table obj\nrenderer.loadModel(table_CAD_model_path)\nobj = bpy.data.objects['table']\ntable_scale = 0.001\nobj.scale = (table_scale, table_scale, table_scale)\ny_transform = np.array([[0,0,-1],[0,1,0],[1,0,0]])\ntransform = y_transform\nobj_world_pose_quat = quaternionFromRotMat(transform)\nobj_world_pose_T_shift = np.array([random.uniform(-0.02,0.02),random.uniform(-0.0843,-0.0243),-0.0751])\nobj_world_pose_T = obj_world_pose_T_shift\nsetModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n\n# load table plane\nobj_world_pose_T = obj_world_pose_T_shift\nobj_world_pose_T[2] = 0\nbpy.ops.mesh.primitive_plane_add(size=1., enter_editmode=False, align='WORLD', location=obj_world_pose_T)\nbpy.ops.rigidbody.object_add()\nbpy.context.object.rigid_body.type = 
'PASSIVE'\nbpy.context.object.rigid_body.collision_shape = 'BOX'\nobj = bpy.data.objects['Plane']\nobj.name = 'tableplane'\nobj.data.name = 'tableplane'\nobj.scale = (0.898, 1.3, 1.)\n\n# load arm\nrenderer.loadModel(arm_CAD_model_path)\nobj = bpy.data.objects['arm']\nclass_scale = 0.001\nobj.scale = (class_scale, class_scale, class_scale)\nx_transform = np.array([[1,0,0],[0,0,-1],[0,1,0]])\ntransform = x_transform\nobj_world_pose_quat = quaternionFromRotMat(transform)\nobj_world_pose_T_shift = np.array([0,random.uniform(-0.43,-0.41),0])\nobj_world_pose_T = obj_world_pose_T_shift\nsetModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n\n\ninstance_id = 1\nimported_obj_name_list = []\nfor model in select_model_list:\n    instance_path = model[0]\n    class_name = model[1]\n    instance_folder = model[0].split('/')[-1][:-4]\n    instance_name = str(instance_id) + \"_\" + class_name + \"_\" + instance_folder\n\n    material_type_in_mixed_mode = generate_material_type(instance_name, class_material_pairs, instance_material_except_pairs, instance_material_include_pairs, material_class_instance_pairs, my_material_randomize_mode)\n\n    # load the CAD model and rename it\n    renderer.loadModel(instance_path)\n    import_obj_name = instance_folder\n    obj = bpy.data.objects[import_obj_name]\n    obj.name = instance_name\n    obj.data.name = instance_name\n\n    print(len(obj_trans_list), instance_id)\n    obj_world_pose_T = obj_trans_list[instance_id-1]\n    obj_world_pose_quat = obj_quat_list[instance_id-1]\n    setModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n\n    # set object as rigid body\n    setRigidBody(obj)\n\n    # set material\n    renderer.set_material_randomize_mode(class_material_pairs, my_material_randomize_mode, obj, material_type_in_mixed_mode)\n\n    # generate meta file\n    class_scale = obj_scale_list[instance_id-1]\n    obj.scale = (class_scale, class_scale, class_scale)\n\n    # material type\n    material_class_id = 
None\n    for key in material_class_instance_pairs:\n        if material_type_in_mixed_mode == 'raw':\n            material_class_id = material_class_id_dict[material_type_in_mixed_mode]\n            break\n        elif material_type_in_mixed_mode in material_class_instance_pairs[key]:\n            material_class_id = material_class_id_dict[key]\n            break\n    if material_class_id is None:\n        print(\"material_class_id error!\")\n\n    meta_output[str(instance_id)] = [str(instance_folder),\n                                     str(material_class_id),\n                                     str(material_type_id_dict[material_type_in_mixed_mode])\n                                     ]\n\n    instance_id += 1\n\n# set the key frame\nscene = bpy.data.scenes['Scene']\n\n# generate meta.txt\nmeta_dir_path = os.path.join(path_scene, 'meta')\nif os.path.exists(meta_dir_path)==False:\n    os.makedirs(meta_dir_path)\n\nfor i in RENDER_FRAMES_LIST: #range(RENDER_START_FRAME, RENDER_END_FRAME):\n    # output the meta file\n    path_meta = os.path.join(meta_dir_path, str(i).zfill(4) + \".txt\")\n    if os.path.exists(path_meta):\n        os.remove(path_meta)\n\n    file_write_obj = open(path_meta, 'w')\n    for index in meta_output:\n        file_write_obj.write(index)\n        file_write_obj.write(' ')\n        for item in meta_output[index]:\n            file_write_obj.write(item)\n            file_write_obj.write(' ')\n        file_write_obj.write('\\n')\n    file_write_obj.close()\n\n\n# render IR image and RGB image\nif render_mode_list['IR'] or render_mode_list['RGB']:\n    renderer.setEnvMap(env_map_id_list[0], rotation_elur_z_list[0])\n    # randomly pick one real floor image\n    flag = random.randint(0, len(renderer.realfloor_img_list)-1)\n    selected_realfloor_img = renderer.realfloor_img_list[flag]\n\n    # randomly pick one real table image\n    flag = random.randint(0, len(renderer.realtable_img_list)-1)\n    selected_realtable_img = renderer.realtable_img_list[flag]
\n\n    arm_material_type_in_mixed_mode = generate_material_type(None, class_material_pairs, instance_material_except_pairs, instance_material_include_pairs, material_class_instance_pairs, material_type=\"diffuse\")\n    arm_material = random.sample(renderer.my_material[arm_material_type_in_mixed_mode], 1)[0]\n    ###\n    for i in RENDER_FRAMES_LIST:\n        renderer.setCamera(quaternion_list[i], translation_list[i], camera_fov, baseline_distance)\n        renderer.setLighting()\n        for obj in bpy.data.objects:\n            if obj.type == \"MESH\" and obj.name.split('_')[0] == 'background':\n                if obj.name == 'background_0':\n                    flag = random.randint(0, 3)\n                    if flag == 0:\n                        set_modify_floor_material(obj, background_material_list[0], selected_realfloor_img) ### renderer.realfloor_img_list)\n                    else:\n                        obj.active_material = background_material_list[0]\n                else:\n                    background_0_obj = bpy.data.objects['background_0']\n                    obj.active_material = background_0_obj.material_slots[0].material\n            elif obj.type == \"MESH\" and obj.name == 'table':\n                flag = random.randint(0, 2)\n                if flag == 0:\n                    set_modify_table_material(obj, table_material_list[0], selected_realtable_img)### renderer.realtable_img_list)\n                else:\n                    obj.active_material = table_material_list[0]\n            elif obj.type == \"MESH\" and obj.name == 'tableplane':\n                    table_obj = bpy.data.objects['table']\n                    obj.active_material = table_obj.material_slots[0].material\n            elif obj.type == \"MESH\" and obj.name == 'arm':\n                set_modify_arm_material(obj, arm_material)\n\n        # render IR image            \n        if render_mode_list['IR']:\n            ir_l_dir_path = os.path.join(path_scene, 'ir_l')\n     
       if os.path.exists(ir_l_dir_path)==False:\n                os.makedirs(ir_l_dir_path)\n            ir_r_dir_path = os.path.join(path_scene, 'ir_r')\n            if os.path.exists(ir_r_dir_path)==False:\n                os.makedirs(ir_r_dir_path)\n\n            renderer.render_mode = \"IR\"\n            camera = bpy.data.objects['camera_l']\n            scene.camera = camera\n            save_path = ir_l_dir_path\n            save_name = str(i).zfill(4)\n            renderer.render(save_name, save_path)\n\n            camera = bpy.data.objects['camera_r']\n            scene.camera = camera\n            save_path = ir_r_dir_path\n            save_name = str(i).zfill(4)\n            renderer.render(save_name, save_path)\n        \n        # render RGB image\n        if render_mode_list['RGB']:\n            rgb_dir_path = os.path.join(path_scene, 'rgb')\n            if os.path.exists(rgb_dir_path)==False:\n                os.makedirs(rgb_dir_path)\n\n            renderer.render_mode = \"RGB\"\n            camera = bpy.data.objects['camera_l']\n            scene.camera = camera\n            save_path = rgb_dir_path\n            save_name = str(i).zfill(4)\n            renderer.render(save_name, save_path)\n    \n# render mask map and depth map\nif render_mode_list['Mask']:\n    # set instance mask as material\n    for obj in bpy.data.objects:\n        if obj.type == \"MESH\":\n            obj.data.materials.clear()\n            material_name = \"mask_\" + obj.name.split('_')[0]\n            obj.active_material = renderer.my_material[material_name]\n    \n    # render mask map and depth map\n    for i in RENDER_FRAMES_LIST:\n        renderer.setCamera(quaternion_list[i], translation_list[i], camera_fov, baseline_distance)\n\n        mask_dir_path = os.path.join(path_scene, 'mask')\n        if os.path.exists(mask_dir_path)==False:\n            os.makedirs(mask_dir_path)\n\n        renderer.render_mode = \"Mask\"\n        camera = bpy.data.objects['camera_l']\n       
 scene.camera = camera\n        save_path = mask_dir_path\n        save_name = str(i).zfill(4)\n        renderer.render(save_name, save_path)\n\n# render normal map\nif render_mode_list['Normal']:\n    # set normal as material\n    for obj in bpy.data.objects:\n        if obj.type == 'MESH':\n            obj.data.materials.clear()\n            obj.active_material = renderer.my_material[\"normal\"]\n\n    # render normal map\n    for i in RENDER_FRAMES_LIST:\n        renderer.setCamera(quaternion_list[i], translation_list[i], camera_fov, baseline_distance)\n        \n        normal_dir_path = os.path.join(path_scene, 'normal')\n        if os.path.exists(normal_dir_path)==False:\n            os.makedirs(normal_dir_path)\n        depth_dir_path = os.path.join(path_scene, 'depth')\n        if os.path.exists(depth_dir_path)==False:\n            os.makedirs(depth_dir_path)\n\n        renderer.render_mode = \"Normal\"\n        camera = bpy.data.objects['camera_l']\n        scene.camera = camera\n        save_path = normal_dir_path\n        save_name = str(i).zfill(4)\n        renderer.render(save_name, save_path)\n\ncontext = bpy.context\nfor ob in context.selected_objects:\n    ob.animation_data_clear()\n\nprint(bpy.data.materials) \nprint(len(bpy.data.materials))\n\nsrc_depth_list = os.listdir(os.path.join(output_root_path,urdfs_and_poses_files_list[SCENE_NUM][:-4],\"depth\"))\nfor src_depth in src_depth_list:\n    source = os.path.join(output_root_path,urdfs_and_poses_files_list[SCENE_NUM][:-4],\"depth\",src_depth)\n    target = source.replace(\"_0.exr\",\".exr\")\n    os.rename(source,target)"
  },
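The render script above converts each object's rotation matrix to a quaternion via `quaternionFromRotMat` (defined in an earlier part of the script) before handing it to `setModelPosition`, which writes the components into Blender's w-first `rotation_quaternion`. A minimal numpy sketch of such a conversion; the function name and branch structure here are illustrative, not the script's actual implementation:

```python
import numpy as np

def rotmat_to_quat(R):
    """Convert a 3x3 rotation matrix to a (w, x, y, z) quaternion.

    Assumes R is a proper rotation (orthonormal, det = +1); branching on the
    largest diagonal term keeps the square roots numerically stable.
    """
    m00, m01, m02 = R[0]
    m10, m11, m12 = R[1]
    m20, m21, m22 = R[2]
    tr = m00 + m11 + m22
    if tr > 0:
        s = 2.0 * np.sqrt(tr + 1.0)
        w, x, y, z = 0.25 * s, (m21 - m12) / s, (m02 - m20) / s, (m10 - m01) / s
    elif m00 > m11 and m00 > m22:
        s = 2.0 * np.sqrt(1.0 + m00 - m11 - m22)
        w, x, y, z = (m21 - m12) / s, 0.25 * s, (m01 + m10) / s, (m02 + m20) / s
    elif m11 > m22:
        s = 2.0 * np.sqrt(1.0 + m11 - m00 - m22)
        w, x, y, z = (m02 - m20) / s, (m01 + m10) / s, 0.25 * s, (m12 + m21) / s
    else:
        s = 2.0 * np.sqrt(1.0 + m22 - m00 - m11)
        w, x, y, z = (m10 - m01) / s, (m02 + m20) / s, (m12 + m21) / s, 0.25 * s
    return np.array([w, x, y, z])

# 90 degree rotation about Z
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
q = rotmat_to_quat(Rz)  # ≈ [0.7071, 0, 0, 0.7071]
```

The w-first ordering matters: `setModelPosition` assigns `quaternion[0]` to `rotation_quaternion[0]`, and Blender stores quaternions as (w, x, y, z), so a converter returning x-first would silently produce wrong poses.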
  {
    "path": "data_generator/run_pile_rand.sh",
    "content": "#!/bin/bash\n\ncd /data/InterNeRF/renderer/renderer_giga_GPU6-0_rand_M\n\n# 830*6\nmycount=0;\nwhile (( $mycount < 100 )); do\n    /home/xxx/blender-2.93.3-linux-x64/blender material_lib_v2.blend --background -noaudio --python render_pile_STD_rand.py -- $mycount;\n    ((mycount=$mycount+1));\ndone;"
  },
  {
    "path": "requirements.txt",
    "content": "torch\ntensorflow\neasydict\ninplace-abn\nplyfile\nnumpy\nscikit-image\npyyaml\nh5py\nopencv-python\ntqdm\nmatplotlib\nscipy\nlpips\ntransforms3d\nkornia\nscikit-learn\ncatkin_pkg\nblack\njupyterlab\npandas\nmpi4py\nopen3d\npybullet==2.7.9\npytorch-ignite\ntensorboard"
  },
  {
    "path": "run_simgrasp.sh",
    "content": "#!/bin/bash\n\nGPUID=0\nBLENDER_BIN=blender\n\nRENDERER_ASSET_DIR=./data/assets\nBLENDER_PROJ_PATH=./data/assets/material_lib_graspnet-v2.blend\nSIM_LOG_DIR=\"./log/`date '+%Y%m%d-%H%M%S'`\"\n\nscene=\"pile\"\nobject_set=\"pile_subdiv\"\nmaterial_type=\"specular_and_transparent\"\nrender_frame_list=\"2,6,10,14,18,22\"\ncheck_seen_scene=0\nexpname=0\n\nNUM_TRIALS=200\nMETHOD='graspnerf'\n\nmycount=0 \nwhile (( $mycount < $NUM_TRIALS )); do\n   $BLENDER_BIN $BLENDER_PROJ_PATH --background --python scripts/sim_grasp.py \\\n   -- $mycount $GPUID $expname $scene $object_set $check_seen_scene $material_type \\\n   $RENDERER_ASSET_DIR $SIM_LOG_DIR 0 $render_frame_list $METHOD\n\n   python ./scripts/stat_expresult.py -- $SIM_LOG_DIR $expname\n((mycount=$mycount+1));\ndone;\n\npython ./scripts/stat_expresult.py -- $SIM_LOG_DIR $expname"
  },
  {
    "path": "scripts/sim_grasp.py",
    "content": "import argparse\nimport os\nimport sys\nfrom pathlib import Path\n\ndef main(args, round_idx, gpuid, render_frame_list):\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(gpuid)\n    \n    sys.path.append(\"src\")\n    from nr.main import GraspNeRFPlanner \n\n    if args.method == \"graspnerf\":\n        grasp_planner = GraspNeRFPlanner(args)\n    else:\n        print(\"No such method!\")\n        raise NotImplementedError\n       \n    from gd.experiments import clutter_removal\n    clutter_removal.run(\n        grasp_plan_fn=grasp_planner,\n        logdir=args.logdir,\n        description=args.description,\n        scene=args.scene,\n        object_set=args.object_set,\n        num_objects=args.num_objects,\n        num_rounds=args.num_rounds,\n        seed=args.seed,\n        sim_gui=args.sim_gui,\n        rviz=args.rviz,\n        round_idx = round_idx,\n        renderer_root_dir = args.renderer_root_dir,\n        gpuid = gpuid,\n        args = args,\n        render_frame_list = render_frame_list\n    )\n   \nclass ArgumentParserForBlender(argparse.ArgumentParser):\n    \"\"\"\n    This class is identical to its superclass, except for the parse_args\n    method (see docstring). It resolves the ambiguity generated when calling\n    Blender from the CLI with a python script, and both Blender and the script\n    have arguments. E.g., the following call will make Blender crash because\n    it will try to process the script's -a and -b flags:\n    >>> blender --python my_script.py -a 1 -b 2\n\n    To bypass this issue this class uses the fact that Blender will ignore all\n    arguments given after a double-dash ('--'). 
The approach is that all\n    arguments before '--' go to Blender, arguments after go to the script.\n    The following calls work fine:\n    >>> blender --python my_script.py -- -a 1 -b 2\n    >>> blender --python my_script.py --\n    \"\"\"\n\n    def _get_argv_after_doubledash(self):\n        \"\"\"\n        Return the sublist of sys.argv right after the separator element\n        (or an empty list if the separator is absent). Note that this\n        script deliberately searches for '---', which never appears in\n        Blender's sys.argv, so every option below keeps the default value\n        assigned from the positional arguments parsed in __main__.\n        \"\"\"\n        try:\n            idx = sys.argv.index(\"---\")\n            return sys.argv[idx+1:]  # the list after the separator\n        except ValueError:  # separator not in the list\n            return []\n\n    # overrides superclass\n    def parse_args(self):\n        \"\"\"\n        This method is expected to behave identically as in the superclass,\n        except that the sys.argv list will be pre-processed using\n        _get_argv_after_doubledash before. See the docstring of the class for\n        usage examples and details.\n        \"\"\"\n        return super().parse_args(args=self._get_argv_after_doubledash())\n\nif __name__ == \"__main__\":\n    argv = sys.argv\n    argv = argv[argv.index(\"--\") + 1:]  # get all args after \"--\"\n    round_idx = int(argv[0])\n    gpuid = int(argv[1])\n    expname = str(argv[2])\n    scene = str(argv[3])\n    object_set = str(argv[4])\n    check_seen_scene = bool(int(argv[5]))\n    material_type = str(argv[6])\n    blender_asset_dir = str(argv[7])\n    log_root_dir = str(argv[8])\n    use_gt_tsdf = bool(int(argv[9]))\n    render_frame_list = [int(frame_id) for frame_id in str(argv[10]).replace(' ','').split(\",\")]\n    method = str(argv[11])\n    print(\"########## Simulation Start ##########\")\n    print(\"Round %d\\nmethod: %s\\nmaterial_type: %s\\nviews: %s \"%(round_idx, method, material_type, str(render_frame_list)))\n    print(\"######################################\")\n\n    parser = ArgumentParserForBlender() ### 
argparse.ArgumentParser()\n    parser.add_argument(\"---model\", type=Path, default=\"\")\n    parser.add_argument(\"---logdir\", type=Path, default=expname)\n    parser.add_argument(\"---description\", type=str, default=\"\")\n    parser.add_argument(\"---scene\", type=str, choices=[\"pile\", \"packed\", \"single\"], default=scene)\n    parser.add_argument(\"---object-set\", type=str, default=object_set)\n    parser.add_argument(\"---num-objects\", type=int, default=5)\n    parser.add_argument(\"---num-rounds\", type=int, default=200)\n    parser.add_argument(\"---seed\", type=int, default=42)\n    parser.add_argument(\"---sim-gui\", type=bool, default=False)\n    parser.add_argument(\"---rviz\", action=\"store_true\")\n    \n    ###\n    parser.add_argument(\"---renderer_root_dir\", type=str, default=blender_asset_dir)\n    parser.add_argument(\"---log_root_dir\", type=str, default=log_root_dir)\n    parser.add_argument(\"---obj_texture_image_root_path\", type=str, default=blender_asset_dir+\"/imagenet\") #TODO   \n    parser.add_argument(\"---cfg_fn\", type=str, default=\"src/nr/configs/nrvgn_sdf.yaml\")\n    parser.add_argument('---database_name', type=str, default='vgn_syn/train/packed/packed_170-220/032cd891d9be4a16be5ea4be9f7eca2b/w_0.8', help='<dataset_name>/<scene_name>/<scene_setting>')\n\n    parser.add_argument(\"---gen_scene_descriptor\", type=bool, default=False)\n    parser.add_argument(\"---load_scene_descriptor\", type=bool, default=True)\n    parser.add_argument(\"---material_type\", type=str, default=material_type)\n    parser.add_argument(\"---method\", type=str, default=method)\n\n    # pybullet camera parameter\n    parser.add_argument(\"---camera_focal\", type=float, default=446.31) #TODO \n\n    ###\n    args = parser.parse_args()\n    main(args, round_idx, gpuid, render_frame_list)"
  },
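`sim_grasp.py` above recovers its own arguments by slicing `sys.argv` after the `--` separator, which Blender forwards untouched to the Python script. The idiom can be sketched standalone (the helper name is hypothetical):

```python
def argv_after_separator(argv, sep="--"):
    """Return the sublist of argv after the first `sep`, or [] if absent.

    Blender ignores everything after '--', so a script launched as
    `blender ... --python sim_grasp.py -- 3 0 exp0 ...` can recover
    its own positional arguments this way.
    """
    try:
        return argv[argv.index(sep) + 1:]
    except ValueError:  # no separator: the script got no arguments
        return []

args = argv_after_separator(["blender", "--python", "sim_grasp.py", "--", "3", "0", "exp0"])
# args == ["3", "0", "exp0"]
```

This is exactly the pattern in the script's `__main__` block; the separate `ArgumentParserForBlender` class then exists only to keep `argparse` from choking on those same positionals.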
  {
    "path": "scripts/stat_expresult.py",
    "content": "from pathlib import Path\r\nimport sys\r\nimport pandas as pd\r\nimport os\r\nimport numpy as np\r\n\r\nargv = sys.argv\r\nargv = argv[argv.index(\"--\") + 1:]  # get all args after \"--\"\r\nlog_root_dir = str(argv[0])\r\nexpname = str(argv[1])\r\n\r\nclass Data(object):\r\n    \"\"\"Object for loading and analyzing experimental data.\"\"\"\r\n\r\n    def __init__(self, logdir):\r\n        self.logdir = logdir\r\n        self.rounds = pd.read_csv(logdir / \"rounds.csv\")\r\n        self.grasps = pd.read_csv(logdir / \"grasps.csv\")\r\n\r\n    def num_rounds(self):\r\n        return len(self.rounds.index)\r\n\r\n    def num_grasps(self):\r\n        return len(self.grasps.index)\r\n\r\n    def success_rate(self):\r\n        return self.grasps[\"label\"].mean() * 100\r\n\r\n    def percent_cleared(self):\r\n        df = (\r\n            self.grasps[[\"round_id\", \"label\"]]\r\n            .groupby(\"round_id\")\r\n            .sum()\r\n            .rename(columns={\"label\": \"cleared_count\"})\r\n            .merge(self.rounds, on=\"round_id\")\r\n        )\r\n        return df[\"cleared_count\"].sum() / df[\"object_count\"].sum() * 100\r\n\r\n    def avg_planning_time(self):\r\n        return self.grasps[\"planning_time\"].mean()\r\n\r\n    def read_grasp(self, i):\r\n        from gd import io  # deferred import: requires 'src' on sys.path\r\n        scene_id, grasp, label = io.read_grasp(self.grasps, i)\r\n        score = self.grasps.loc[i, \"score\"]\r\n        scene_data = np.load(self.logdir / \"scenes\" / (scene_id + \".npz\"))\r\n\r\n        return scene_data[\"points\"], grasp, score, label\r\n\r\n##############################\r\n# Combine all trials\r\n##############################\r\nroot_path = os.path.join(log_root_dir, \"exp_results\", expname)\r\n\r\nround_dir_list = sorted(os.listdir(root_path))\r\n\r\nif not os.path.exists(root_path + \"_combine\"):\r\n    os.makedirs(root_path + \"_combine\")\r\n\r\ndf = pd.DataFrame()\r\nfor i in range(len(round_dir_list)):\r\n    df_round = 
pd.read_csv(os.path.join(root_path, round_dir_list[i], \"grasps.csv\"))\r\n    df_round[\"round_id\"] = i\r\n    df = pd.concat([df, df_round])\r\ndf = df.reset_index(drop=True)\r\ndf.to_csv(os.path.join(root_path + \"_combine\", \"grasps.csv\"), index=False)\r\n\r\n\r\ndf = pd.DataFrame()\r\nfor i in range(len(round_dir_list)):\r\n    df_round = pd.read_csv(os.path.join(root_path, round_dir_list[i], \"rounds.csv\"))\r\n    df_round[\"round_id\"] = i\r\n    df = pd.concat([df, df_round])\r\ndf = df.reset_index(drop=True)\r\ndf.to_csv(os.path.join(root_path + \"_combine\", \"rounds.csv\"), index=False)\r\n\r\n##############################\r\n# Print Stat\r\n##############################\r\nlogdir = Path(os.path.join(log_root_dir, \"exp_results\", expname+\"_combine\"))\r\ndata = Data(logdir)\r\n\r\n# First, we compute the following metrics for the experiment:\r\n# * **Success rate**: the ratio of successful grasp executions,\r\n# * **Percent cleared**: the percentage of objects removed during each round,\r\ntry:\r\n    print(\"Path:              \", str(logdir))\r\n    print(\"Num grasps:        \", data.num_grasps())\r\n    print(\"Success rate:      \", data.success_rate())\r\n    print(\"Percent cleared:   \", data.percent_cleared())\r\nexcept Exception:\r\n    print(\"[W] Incomplete results, exit\")\r\n    sys.exit()\r\n##############################\r\n# Calc first-time grasping SR\r\n##############################\r\n\r\nsum_label = 0\r\nfirstgrasp_fail_expidx_list = []\r\nfor i in range(len(round_dir_list)):\r\n    #print(i)\r\n    df_round = pd.read_csv(os.path.join(root_path, round_dir_list[i], \"grasps.csv\"))\r\n    df = df_round.iloc[0:1,:]\r\n\r\n    label = df[[\"label\"]].to_numpy(np.float32)\r\n    if label.shape[0] == 0:\r\n        firstgrasp_fail_expidx_list.append(i)\r\n        continue\r\n    sum_label += label[0,0]\r\n    if label[0,0] == 0:\r\n        firstgrasp_fail_expidx_list.append(i)\r\n\r\nprint(\"First grasp success rate: \", sum_label / 
len(round_dir_list))\r\nprint(\"First grasp fail:\", len(firstgrasp_fail_expidx_list),\"/\",len(round_dir_list), \", exp id: \", firstgrasp_fail_expidx_list)\r\n"
  },
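`stat_expresult.py` computes its two headline metrics with pandas; the same arithmetic can be sketched with plain dicts and toy numbers to make the definitions concrete (success rate = successful grasps / attempted grasps; percent cleared = objects removed / objects initially present, summed over rounds):

```python
# grasps: one row per attempted grasp; label is 1 for success, 0 for failure.
grasps = [
    {"round_id": 0, "label": 1},
    {"round_id": 0, "label": 0},
    {"round_id": 1, "label": 1},
    {"round_id": 1, "label": 1},
]
# rounds: one row per trial, with the number of objects initially in the scene.
rounds = [
    {"round_id": 0, "object_count": 3},
    {"round_id": 1, "object_count": 2},
]

# mirrors Data.success_rate: mean of the label column, as a percentage
success_rate = 100.0 * sum(g["label"] for g in grasps) / len(grasps)

# mirrors Data.percent_cleared: per-round sum of successes (the groupby/sum),
# divided by the total object count (the merge with rounds.csv)
cleared = {}
for g in grasps:
    cleared[g["round_id"]] = cleared.get(g["round_id"], 0) + g["label"]
percent_cleared = 100.0 * sum(cleared.values()) / sum(r["object_count"] for r in rounds)

print(success_rate)     # 75.0
print(percent_cleared)  # 60.0
```

The toy data are made up; the real script reads `grasps.csv` and `rounds.csv` from the combined log directory.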
  {
    "path": "src/gd/__init__.py",
    "content": ""
  },
  {
    "path": "src/gd/baselines.py",
    "content": "import time\n\nfrom gpd_ros.msg import GraspConfigList\nimport numpy as np\nfrom sensor_msgs.msg import PointCloud2\nimport rospy\n\nfrom gd.grasp import Grasp\nfrom gd.utils import ros_utils\nfrom gd.utils.transform import Rotation, Transform\n\n\nclass GPD(object):\n    def __init__(self):\n        self.input_topic = \"/cloud_stitched\"\n        self.output_topic = \"/detect_grasps/clustered_grasps\"\n        self.cloud_pub = rospy.Publisher(self.input_topic, PointCloud2, queue_size=1)\n\n    def __call__(self, state):\n        points = np.asarray(state.pc.points)\n        msg = ros_utils.to_cloud_msg(points, frame=\"task\")\n        self.cloud_pub.publish(msg)\n\n        tic = time.time()\n        result = rospy.wait_for_message(self.output_topic, GraspConfigList)\n        toc = time.time() - tic\n\n        grasps, scores = self.to_grasp_list(result)\n\n        return grasps, scores, toc\n\n    def to_grasp_list(self, grasp_configs):\n        grasps, scores = [], []\n        for grasp_config in grasp_configs.grasps:\n            # orientation\n            x_axis = ros_utils.from_vector3_msg(grasp_config.axis)\n            y_axis = -ros_utils.from_vector3_msg(grasp_config.binormal)\n            z_axis = ros_utils.from_vector3_msg(grasp_config.approach)\n            orientation = Rotation.from_matrix(np.vstack([x_axis, y_axis, z_axis]).T)\n            # position\n            position = ros_utils.from_point_msg(grasp_config.position)\n            # width\n            width = grasp_config.width.data\n            # score\n            score = grasp_config.score.data\n\n            if score < 0.0:\n                continue  # negative score is larger than positive score (https://github.com/atenpas/gpd/issues/32#issuecomment-387846534)\n\n            grasps.append(Grasp(Transform(orientation, position), width))\n            scores.append(score)\n\n        return grasps, scores\n"
  },
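`to_grasp_list` in `baselines.py` rebuilds the grasp orientation by stacking the three reported axes as the columns of a rotation matrix, flipping the binormal so the frame stays right-handed. A small numpy sketch of that construction, with made-up axis values (the axis semantics follow GPD's message fields as used above):

```python
import numpy as np

# Hypothetical unit axes as a GraspConfig message might report them;
# the binormal is negated, as in to_grasp_list, to keep a right-handed frame.
x_axis = np.array([1.0, 0.0, 0.0])
y_axis = -np.array([0.0, -1.0, 0.0])   # negated binormal
z_axis = np.array([0.0, 0.0, 1.0])     # approach direction

# Columns of the rotation matrix are the grasp frame's axes in world coordinates.
R = np.vstack([x_axis, y_axis, z_axis]).T

assert np.allclose(R @ R.T, np.eye(3))    # orthonormal
assert np.isclose(np.linalg.det(R), 1.0)  # proper rotation (right-handed)
```

If the det check fails, one axis triplet is left-handed and the resulting `Rotation` would mirror the gripper, which is why the sign flip on the binormal matters.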
  {
    "path": "src/gd/dataset.py",
    "content": "import numpy as np\nfrom scipy import ndimage\nimport torch.utils.data\n\nfrom gd.io import *\nfrom gd.perception import *\nfrom gd.utils.transform import Rotation, Transform\n\n\nclass Dataset(torch.utils.data.Dataset):\n    def __init__(self, root, augment=False):\n        self.root = root\n        self.augment = augment\n        self.df = read_df(root)\n\n    def __len__(self):\n        return len(self.df.index)\n\n    def __getitem__(self, i):\n        scene_id = self.df.loc[i, \"scene_id\"]\n        ori = Rotation.from_quat(self.df.loc[i, \"qx\":\"qw\"].to_numpy(np.single))\n        pos = self.df.loc[i, \"i\":\"k\"].to_numpy(np.single)\n        width = self.df.loc[i, \"width\"].astype(np.single)\n        label = self.df.loc[i, \"label\"].astype(np.int64)\n        voxel_grid = read_voxel_grid(self.root, scene_id)\n\n        if self.augment:\n            voxel_grid, ori, pos = apply_transform(voxel_grid, ori, pos)\n\n        index = np.round(pos).astype(np.int64)\n        rotations = np.empty((2, 4), dtype=np.single)\n        R = Rotation.from_rotvec(np.pi * np.r_[0.0, 0.0, 1.0])\n        rotations[0] = ori.as_quat()\n        rotations[1] = (ori * R).as_quat()\n\n        x, y, index = voxel_grid, (label, rotations, width), index\n\n        return x, y, index\n\n\ndef apply_transform(voxel_grid, orientation, position):\n    angle = np.pi / 2.0 * np.random.choice(4)\n    R_augment = Rotation.from_rotvec(np.r_[0.0, 0.0, angle])\n\n    z_offset = np.random.uniform(6, 34) - position[2]\n\n    t_augment = np.r_[0.0, 0.0, z_offset]\n    T_augment = Transform(R_augment, t_augment)\n\n    T_center = Transform(Rotation.identity(), np.r_[20.0, 20.0, 20.0])\n    T = T_center * T_augment * T_center.inverse()\n\n    # transform voxel grid\n    T_inv = T.inverse()\n    matrix, offset = T_inv.rotation.as_matrix(), T_inv.translation\n    voxel_grid[0] = ndimage.affine_transform(voxel_grid[0], matrix, offset, order=0)\n\n    # transform grasp pose\n    position = 
T.transform_point(position)\n    orientation = T.rotation * orientation\n\n    return voxel_grid, orientation, position\n"
  },
  {
    "path": "src/gd/detection.py",
    "content": "import time\n\nimport numpy as np\nfrom scipy import ndimage\nimport torch\n\n\nfrom gd.grasp import *\nfrom gd.utils.transform import Transform, Rotation\nfrom gd.networks import load_network\n\n\nclass VGN(object):\n    def __init__(self, model_path, rviz=False):\n        self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n        self.net = load_network(model_path, self.device)\n        self.rviz = rviz\n\n    def __call__(self, state):\n        tsdf_vol = state.tsdf.get_grid()\n        voxel_size = state.tsdf.voxel_size\n\n        tic = time.time()\n        qual_vol, rot_vol, width_vol = predict(tsdf_vol, self.net, self.device)\n        qual_vol, rot_vol, width_vol = process(tsdf_vol, qual_vol, rot_vol, width_vol)\n        grasps, scores = select(qual_vol.copy(), rot_vol, width_vol)\n        toc = time.time() - tic\n\n        grasps, scores = np.asarray(grasps), np.asarray(scores)\n\n        if len(grasps) > 0:\n            p = np.random.permutation(len(grasps))\n            grasps = [from_voxel_coordinates(g, voxel_size) for g in grasps[p]]\n            scores = scores[p]\n\n        if self.rviz:\n            from gd import vis\n            vis.draw_quality(qual_vol, state.tsdf.voxel_size, threshold=0.01)\n\n        return grasps, scores, toc\n\n\ndef predict(tsdf_vol, net, device):\n    assert tsdf_vol.shape == (1, 40, 40, 40)\n\n    # move input to the GPU\n    tsdf_vol = torch.from_numpy(tsdf_vol).unsqueeze(0).to(device)\n\n    # forward pass\n    with torch.no_grad():\n        qual_vol, rot_vol, width_vol = net(tsdf_vol)\n\n    # move output back to the CPU\n    qual_vol = qual_vol.cpu().squeeze().numpy()\n    rot_vol = rot_vol.cpu().squeeze().numpy()\n    width_vol = width_vol.cpu().squeeze().numpy()\n    return qual_vol, rot_vol, width_vol\n\n\ndef process(\n    tsdf_vol,\n    qual_vol,\n    rot_vol,\n    width_vol,\n    gaussian_filter_sigma=1.0,\n    min_width=1.33,\n    max_width=9.33,\n):\n    tsdf_vol = 
tsdf_vol.squeeze()\n\n    # smooth quality volume with a Gaussian\n    qual_vol = ndimage.gaussian_filter(\n        qual_vol, sigma=gaussian_filter_sigma, mode=\"nearest\"\n    )\n\n    # mask out voxels too far away from the surface\n    outside_voxels = tsdf_vol > 0.5\n    inside_voxels = np.logical_and(1e-3 < tsdf_vol, tsdf_vol < 0.5)\n    # scipy.ndimage.morphology is deprecated; call binary_dilation directly\n    valid_voxels = ndimage.binary_dilation(\n        outside_voxels, iterations=2, mask=np.logical_not(inside_voxels)\n    )\n    qual_vol[~valid_voxels] = 0.0\n\n    # reject voxels with predicted widths that are too small or too large\n    qual_vol[np.logical_or(width_vol < min_width, width_vol > max_width)] = 0.0\n\n    return qual_vol, rot_vol, width_vol\n\n\ndef select(qual_vol, rot_vol, width_vol, threshold=0.90, max_filter_size=4):\n    # threshold on grasp quality\n    qual_vol[qual_vol < threshold] = 0.0\n\n    # non-maximum suppression\n    max_vol = ndimage.maximum_filter(qual_vol, size=max_filter_size)\n    qual_vol = np.where(qual_vol == max_vol, qual_vol, 0.0)\n    mask = np.where(qual_vol, 1.0, 0.0)\n\n    # construct grasps\n    grasps, scores = [], []\n    for index in np.argwhere(mask):\n        grasp, score = select_index(qual_vol, rot_vol, width_vol, index)\n        grasps.append(grasp)\n        scores.append(score)\n\n    return grasps, scores\n\n\ndef select_index(qual_vol, rot_vol, width_vol, index):\n    i, j, k = index\n    score = qual_vol[i, j, k]\n    ori = Rotation.from_quat(rot_vol[:, i, j, k])\n    pos = np.array([i, j, k], dtype=np.float64)\n    width = width_vol[i, j, k]\n    return Grasp(Transform(ori, pos), width), score\n"
  },
  {
    "path": "src/gd/experiments/__init__.py",
    "content": ""
  },
  {
    "path": "src/gd/experiments/clutter_removal.py",
    "content": "import collections\nimport uuid\nimport os\nimport numpy as np\nimport pandas as pd\nimport sys\nimport shutil\nfrom pathlib import Path\nfrom gd import io\nfrom gd.grasp import *\nfrom gd.simulation import ClutterRemovalSim\nsys.path.append(\"./\")\nfrom rd.render import blender_init_scene, blender_render, blender_update_sceneobj\n\nMAX_CONSECUTIVE_FAILURES = 2\n\nState = collections.namedtuple(\"State\", [\"tsdf\", \"pc\"])\n\ndef copydirs(from_file, to_file):\n    if not os.path.exists(to_file):   \n        os.makedirs(to_file)\n    files = os.listdir(from_file)  \n    for f in files:\n        if os.path.isdir(from_file + '/' + f):  \n            copydirs(from_file + '/' + f, to_file + '/' + f)  \n        else:\n            shutil.copy(from_file + '/' + f, to_file + '/' + f)  \n\n\ndef run(\n    grasp_plan_fn,\n    logdir,\n    description,\n    scene,\n    object_set,\n    num_objects=5,\n    n=6,\n    N=None,\n    num_rounds=40,\n    seed=1,\n    sim_gui=False,\n    rviz=False,\n    round_idx=0,\n    renderer_root_dir=\"\",\n    gpuid=None,\n    args=None,\n    render_frame_list=[]\n):\n    \"\"\"Run several rounds of simulated clutter removal experiments.\n\n    Each round, m objects are randomly placed in a tray. 
Then, the grasping pipeline is\n    run until (a) no objects remain, (b) the planner fails to find a grasp hypothesis,\n    or (c) the maximum number of consecutive failed grasp attempts is reached.\n    \"\"\"\n    sim = ClutterRemovalSim(scene, object_set, gui=sim_gui, seed=seed, renderer_root_dir=renderer_root_dir, args=args)\n    logger = Logger(args.log_root_dir, logdir, description, round_idx)\n\n    # output modality\n    output_modality_dict = {'RGB': 1,\n                            'IR': 0,\n                            'NOCS': 0,\n                            'Mask': 0,\n                            'Normal': 0}\n\n    for n_round in range(round_idx, round_idx+1):\n        urdfs_and_poses_dict = sim.reset(num_objects, round_idx)\n        renderer, quaternion_list, translation_list, path_scene = blender_init_scene(renderer_root_dir, args.log_root_dir, args.obj_texture_image_root_path, scene, urdfs_and_poses_dict, round_idx, logdir, False, args.material_type, gpuid, output_modality_dict)\n\n        render_finished = False\n        render_fail_times = 0\n        while not render_finished and render_fail_times < 3:\n            try:\n                blender_render(renderer, quaternion_list, translation_list, path_scene, render_frame_list, output_modality_dict, args.camera_focal, is_init=True)\n                render_finished = True\n            except Exception:  # a bare except would also swallow KeyboardInterrupt\n                render_fail_times += 1\n        if not render_finished:\n            raise RuntimeError(\"Blender render failed 3 times.\")\n\n        path_scene_backup = os.path.join(path_scene + \"_backup\", \"%d_init\" % n_round)\n        os.makedirs(path_scene_backup, exist_ok=True)\n        copydirs(path_scene, path_scene_backup)\n\n        round_id = logger.last_round_id() + 1\n        logger.log_round(round_id, sim.num_objects)\n\n        consecutive_failures = 1\n        last_label = None\n\n        n_grasp = 0\n        while sim.num_objects > 0 and consecutive_failures < MAX_CONSECUTIVE_FAILURES:\n            timings = {}\n\n            timings[\"integration\"] = 0\n\n            gt_tsdf, gt_pc, _ = sim.acquire_tsdf(n=n, N=N)\n\n            if args.method == \"graspnerf\":\n                grasps, scores, timings[\"planning\"] = grasp_plan_fn(render_frame_list, round_idx, n_grasp, gt_tsdf)\n            else:\n                raise NotImplementedError\n\n            if len(grasps) == 0:\n                print(\"no detections found, abort this round\")\n                break\n            else:\n                print(f\"{len(grasps)} detections found.\")\n\n            # execute grasp\n            grasp, score = grasps[0], scores[0]\n            (label, _), remain_obj_inws_infos = sim.execute_grasp(grasp, allow_contact=True)\n\n            # render the modified scene after grasping\n            obj_name_list = [str(value[0]).split(\"/\")[-1][:-5] for value in remain_obj_inws_infos]\n            obj_quat_list = [value[2][[3, 0, 1, 2]] for value in remain_obj_inws_infos]\n            obj_trans_list = [value[3] for value in remain_obj_inws_infos]\n            obj_uid_list = [value[4] for value in remain_obj_inws_infos]\n\n            # update blender scene\n            blender_update_sceneobj(obj_name_list, obj_trans_list, obj_quat_list, obj_uid_list)\n\n            # render updated scene\n            render_finished = False\n            render_fail_times = 0\n            while not render_finished and render_fail_times < 3:\n                try:\n                    blender_render(renderer, quaternion_list, translation_list, path_scene, render_frame_list, output_modality_dict, args.camera_focal)\n                    render_finished = True\n                except Exception:\n                    render_fail_times += 1\n            if not render_finished:\n                raise RuntimeError(\"Blender render failed 3 times.\")\n\n            path_scene_backup = os.path.join(path_scene + \"_backup\", \"%d_%d\" % (n_round, n_grasp))\n            os.makedirs(path_scene_backup, exist_ok=True)\n            copydirs(path_scene, path_scene_backup)\n\n            # log the grasp\n            logger.log_grasp(round_id, timings, grasp, score, label)\n\n            if last_label == Label.FAILURE and label == Label.FAILURE:\n                consecutive_failures += 1\n            else:\n                consecutive_failures = 1\n            last_label = label\n\n            n_grasp += 1\n\n\nclass Logger(object):\n    def __init__(self, log_root_dir, expname, description, round_idx):\n        # 'description' is currently unused\n        self.logdir = Path(os.path.join(log_root_dir, \"exp_results\", expname, \"%04d\" % int(round_idx)))\n        self.scenes_dir = self.logdir / \"scenes\"\n        self.scenes_dir.mkdir(parents=True, exist_ok=True)\n\n        self.rounds_csv_path = self.logdir / \"rounds.csv\"\n        self.grasps_csv_path = self.logdir / \"grasps.csv\"\n        self._create_csv_files_if_needed()\n\n    def _create_csv_files_if_needed(self):\n        if not self.rounds_csv_path.exists():\n            io.create_csv(self.rounds_csv_path, [\"round_id\", \"object_count\"])\n\n        if not self.grasps_csv_path.exists():\n            columns = [\n                \"round_id\",\n                \"scene_id\",\n                \"qx\",\n                \"qy\",\n                \"qz\",\n                \"qw\",\n                \"x\",\n                \"y\",\n                \"z\",\n                \"width\",\n                \"score\",\n                \"label\",\n                \"integration_time\",\n                \"planning_time\",\n            ]\n            io.create_csv(self.grasps_csv_path, columns)\n\n    def last_round_id(self):\n        df = pd.read_csv(self.rounds_csv_path)\n        return -1 if df.empty else df[\"round_id\"].max()\n\n    def log_round(self, round_id, object_count):\n        io.append_csv(self.rounds_csv_path, round_id, object_count)\n\n    def log_grasp(self, round_id, 
timings, grasp, score, label):\n        # generate a unique id for this grasp record\n        scene_id = uuid.uuid4().hex\n\n        # log grasp\n        qx, qy, qz, qw = grasp.pose.rotation.as_quat()\n        x, y, z = grasp.pose.translation\n        width = grasp.width\n        label = int(label)\n        io.append_csv(\n            self.grasps_csv_path,\n            round_id,\n            scene_id,\n            qx,\n            qy,\n            qz,\n            qw,\n            x,\n            y,\n            z,\n            width,\n            score,\n            label,\n            timings[\"integration\"],\n            timings[\"planning\"],\n        )\n\n\nclass Data(object):\n    \"\"\"Object for loading and analyzing experimental data.\"\"\"\n\n    def __init__(self, logdir):\n        self.logdir = logdir\n        self.rounds = pd.read_csv(logdir / \"rounds.csv\")\n        self.grasps = pd.read_csv(logdir / \"grasps.csv\")\n\n    def num_rounds(self):\n        return len(self.rounds.index)\n\n    def num_grasps(self):\n        return len(self.grasps.index)\n\n    def success_rate(self):\n        return self.grasps[\"label\"].mean() * 100\n\n    def percent_cleared(self):\n        df = (\n            self.grasps[[\"round_id\", \"label\"]]\n            .groupby(\"round_id\")\n            .sum()\n            .rename(columns={\"label\": \"cleared_count\"})\n            .merge(self.rounds, on=\"round_id\")\n        )\n        return df[\"cleared_count\"].sum() / df[\"object_count\"].sum() * 100\n\n    def avg_planning_time(self):\n        return self.grasps[\"planning_time\"].mean()\n\n    def read_grasp(self, i):\n        scene_id, grasp, label = io.read_grasp(self.grasps, i)\n        score = self.grasps.loc[i, \"score\"]\n        scene_data = np.load(self.logdir / \"scenes\" / (scene_id + \".npz\"))\n\n        return scene_data[\"points\"], grasp, score, label\n"
  },
  {
    "path": "src/gd/grasp.py",
    "content": "import enum\n\n\nclass Label(enum.IntEnum):\n    FAILURE = 0  # grasp execution failed due to collision or slippage\n    SUCCESS = 1  # object was successfully removed\n\n\nclass Grasp(object):\n    \"\"\"Grasp parameterized as pose of a 2-finger robot hand.\n    \n    TODO(mbreyer): clarify definition of grasp frame\n    \"\"\"\n\n    def __init__(self, pose, width):\n        self.pose = pose\n        self.width = width\n\n\ndef to_voxel_coordinates(grasp, voxel_size):\n    pose = grasp.pose\n    pose.translation /= voxel_size\n    width = grasp.width / voxel_size\n    return Grasp(pose, width)\n\n\ndef from_voxel_coordinates(grasp, voxel_size):\n    pose = grasp.pose\n    pose.translation *= voxel_size\n    width = grasp.width * voxel_size\n    return Grasp(pose, width)\n"
  },
  {
    "path": "src/gd/io.py",
    "content": "import json\nimport uuid\n\nimport numpy as np\nimport pandas as pd\n\nfrom gd.grasp import Grasp\nfrom gd.perception import *\nfrom gd.utils.transform import Rotation, Transform\n\n\ndef write_setup(root, size, intrinsic, max_opening_width, finger_depth):\n    data = {\n        \"size\": size,\n        \"intrinsic\": intrinsic.to_dict(),\n        \"max_opening_width\": max_opening_width,\n        \"finger_depth\": finger_depth,\n    }\n    write_json(data, root / \"setup.json\")\n\n\ndef read_setup(root):\n    data = read_json(root / \"setup.json\")\n    size = data[\"size\"]\n    intrinsic = CameraIntrinsic.from_dict(data[\"intrinsic\"])\n    max_opening_width = data[\"max_opening_width\"]\n    finger_depth = data[\"finger_depth\"]\n    return size, intrinsic, max_opening_width, finger_depth\n\n\ndef write_sensor_data(root, depth_imgs, extrinsics):\n    scene_id = uuid.uuid4().hex\n    path = root / \"scenes\" / (scene_id + \".npz\")\n    np.savez_compressed(path, depth_imgs=depth_imgs, extrinsics=extrinsics)\n    return scene_id\n\n\ndef read_sensor_data(root, scene_id):\n    data = np.load(root / \"scenes\" / (scene_id + \".npz\"))\n    return data[\"depth_imgs\"], data[\"extrinsics\"]\n\n\ndef write_grasp(root, scene_id, grasp, label):\n    # TODO concurrent writes could be an issue\n    csv_path = root / \"grasps.csv\"\n    if not csv_path.exists():\n        create_csv(\n            csv_path,\n            [\"scene_id\", \"qx\", \"qy\", \"qz\", \"qw\", \"x\", \"y\", \"z\", \"width\", \"label\"],\n        )\n    qx, qy, qz, qw = grasp.pose.rotation.as_quat()\n    x, y, z = grasp.pose.translation\n    width = grasp.width\n    append_csv(csv_path, scene_id, qx, qy, qz, qw, x, y, z, width, label)\n\n\ndef read_grasp(df, i):\n    scene_id = df.loc[i, \"scene_id\"]\n    orientation = Rotation.from_quat(df.loc[i, \"qx\":\"qw\"].to_numpy(np.double))\n    position = df.loc[i, \"x\":\"z\"].to_numpy(np.double)\n    width = df.loc[i, \"width\"]\n    label 
= df.loc[i, \"label\"]\n    grasp = Grasp(Transform(orientation, position), width)\n    return scene_id, grasp, label\n\n\ndef read_df(root):\n    return pd.read_csv(root / \"grasps.csv\")\n\n\ndef write_df(df, root):\n    df.to_csv(root / \"grasps.csv\", index=False)\n\n\ndef write_voxel_grid(root, scene_id, voxel_grid):\n    path = root / \"scenes\" / (scene_id + \".npz\")\n    np.savez_compressed(path, grid=voxel_grid)\n\n\ndef read_voxel_grid(root, scene_id):\n    path = root / \"scenes\" / (scene_id + \".npz\")\n    return np.load(path)[\"grid\"]\n\n\ndef read_json(path):\n    with path.open(\"r\") as f:\n        data = json.load(f)\n    return data\n\n\ndef write_json(data, path):\n    with path.open(\"w\") as f:\n        json.dump(data, f, indent=4)\n\n\ndef create_csv(path, columns):\n    with path.open(\"w\") as f:\n        f.write(\",\".join(columns))\n        f.write(\"\\n\")\n\n\ndef append_csv(path, *args):\n    row = \",\".join([str(arg) for arg in args])\n    with path.open(\"a\") as f:\n        f.write(row)\n        f.write(\"\\n\")\n"
  },
  {
    "path": "src/gd/networks.py",
    "content": "from builtins import super\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom scipy import ndimage\n\n\ndef get_network(name):\n    models = {\n        \"conv\": ConvNet(),\n    }\n    return models[name.lower()]\n\n\ndef load_network(path, device):\n    \"\"\"Construct the neural network and load parameters from the specified file.\n\n    Args:\n        path: Path to the model parameters. The name must conform to `vgn_name_[_...]`.\n\n    \"\"\"\n    model_name = path.stem.split(\"_\")[1]\n    net = get_network(model_name).to(device)\n    net.load_state_dict(torch.load(path, map_location=device))\n    return net\n\n\ndef conv(in_channels, out_channels, kernel_size):\n    return nn.Conv3d(in_channels, out_channels, kernel_size, padding=kernel_size // 2)\n\n\ndef conv_stride(in_channels, out_channels, kernel_size):\n    return nn.Conv3d(\n        in_channels, out_channels, kernel_size, stride=2, padding=kernel_size // 2\n    )\n\n\nclass ConvNet(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.encoder = Encoder(1, [16, 32, 64], [5, 3, 3])\n        self.decoder = Decoder(64, [64, 32, 16], [3, 3, 5])\n        self.conv_qual = conv(16, 1, 5)\n        self.conv_rot = conv(16, 4, 5)\n        self.conv_width = conv(16, 1, 5)\n\n    def forward(self, x):\n        x = self.encoder(x)\n        x = self.decoder(x)\n        qual_out = torch.sigmoid(self.conv_qual(x))\n        rot_out = F.normalize(self.conv_rot(x), dim=1)\n        width_out = self.conv_width(x)\n        return qual_out, rot_out, width_out\n\n\nclass Encoder(nn.Module):\n    def __init__(self, in_channels, filters, kernels):\n        super().__init__()\n        self.conv1 = conv_stride(in_channels, filters[0], kernels[0])\n        self.conv2 = conv_stride(filters[0], filters[1], kernels[1])\n        self.conv3 = conv_stride(filters[1], filters[2], kernels[2])\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = 
F.relu(x)\n\n        x = self.conv2(x)\n        x = F.relu(x)\n\n        x = self.conv3(x)\n        x = F.relu(x)\n\n        return x\n\n\nclass Decoder(nn.Module):\n    def __init__(self, in_channels, filters, kernels):\n        super().__init__()\n        self.conv1 = conv(in_channels, filters[0], kernels[0])\n        self.conv2 = conv(filters[0], filters[1], kernels[1])\n        self.conv3 = conv(filters[1], filters[2], kernels[2])\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n\n        x = F.interpolate(x, 10)\n        x = self.conv2(x)\n        x = F.relu(x)\n\n        x = F.interpolate(x, 20)\n        x = self.conv3(x)\n        x = F.relu(x)\n\n        x = F.interpolate(x, 40)\n        return x\n\n\ndef count_num_trainable_parameters(net):\n    return sum(p.numel() for p in net.parameters() if p.requires_grad)\n"
  },
  {
    "path": "src/gd/perception.py",
    "content": "from math import cos, sin\nimport time\n\nimport numpy as np\nimport open3d as o3d\n\nfrom gd.utils.transform import Transform\n\n\nclass CameraIntrinsic(object):\n    \"\"\"Intrinsic parameters of a pinhole camera model.\n\n    Attributes:\n        width (int): The width in pixels of the camera.\n        height(int): The height in pixels of the camera.\n        K: The intrinsic camera matrix.\n    \"\"\"\n\n    def __init__(self, width, height, fx, fy, cx, cy, channel=1):\n        self.width = width\n        self.height = height\n        self.channel = channel\n        self.K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])\n\n    @property\n    def fx(self):\n        return self.K[0, 0]\n\n    @property\n    def fy(self):\n        return self.K[1, 1]\n\n    @property\n    def cx(self):\n        return self.K[0, 2]\n\n    @property\n    def cy(self):\n        return self.K[1, 2]\n\n    def to_dict(self):\n        \"\"\"Serialize intrinsic parameters to a dict object.\"\"\"\n        data = {\n            \"width\": self.width,\n            \"height\": self.height,\n            \"channel\": self.channel,\n            \"K\": self.K.flatten().tolist(),\n        }\n        return data\n\n    @classmethod\n    def from_dict(cls, data):\n        \"\"\"Deserialize intrinisic parameters from a dict object.\"\"\"\n        intrinsic = cls(\n            width=data[\"width\"],\n            height=data[\"height\"],\n            channel=data[\"channel\"],\n            fx=data[\"K\"][0],\n            fy=data[\"K\"][4],\n            cx=data[\"K\"][2],\n            cy=data[\"K\"][5],\n        )\n        return intrinsic\n\n\nclass TSDFVolume(object):\n    \"\"\"Integration of multiple depth images using a TSDF.\"\"\"\n\n    def __init__(self, size, resolution):\n        self.size = size\n        self.resolution = resolution\n        self.voxel_size = self.size / self.resolution\n        self.sdf_trunc = 4 * self.voxel_size\n\n        self._volume = 
o3d.pipelines.integration.UniformTSDFVolume(\n            length=self.size,\n            resolution=self.resolution,\n            sdf_trunc=self.sdf_trunc,\n            color_type=o3d.pipelines.integration.TSDFVolumeColorType.NoColor,\n        )\n\n    def integrate(self, depth_img, intrinsic, extrinsic):\n        \"\"\"\n        Args:\n            depth_img: The depth image.\n            intrinsic: The intrinsic parameters of a pinhole camera model.\n            extrinsic: The transform from the TSDF to camera coordinates, T_eye_task.\n        \"\"\"\n        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(\n            o3d.geometry.Image(np.empty_like(depth_img)),\n            o3d.geometry.Image(depth_img),\n            depth_scale=1.0,\n            depth_trunc=2.0,\n            convert_rgb_to_intensity=False,\n        )\n\n        intrinsic = o3d.camera.PinholeCameraIntrinsic(\n            width=intrinsic.width,\n            height=intrinsic.height,\n            fx=intrinsic.fx,\n            fy=intrinsic.fy,\n            cx=intrinsic.cx,\n            cy=intrinsic.cy,\n        )\n\n        self._volume.integrate(rgbd, intrinsic, extrinsic)\n\n    def get_grid(self):\n        cloud = self._volume.extract_voxel_point_cloud()\n        points = np.asarray(cloud.points)\n        distances = np.asarray(cloud.colors)[:, [0]]\n        grid = np.zeros((1, self.resolution, self.resolution, self.resolution), dtype=np.float32)\n        for idx, point in enumerate(points):\n            i, j, k = np.floor(point / self.voxel_size).astype(int)\n            grid[0, i, j, k] = distances[idx]\n        return grid\n\n    def get_cloud(self):\n        return self._volume.extract_point_cloud()\n\n\ndef create_tsdf(size, resolution, depth_imgs, intrinsic, extrinsics):\n    tsdf = TSDFVolume(size, resolution)\n    for i in range(depth_imgs.shape[0]):\n        extrinsic = Transform.from_list(extrinsics[i])\n        tsdf.integrate(depth_imgs[i], intrinsic, extrinsic)\n    return tsdf\n\n\ndef 
camera_on_sphere(origin, radius, theta, phi):\n    eye = np.r_[\n        radius * sin(theta) * cos(phi),\n        radius * sin(theta) * sin(phi),\n        radius * cos(theta),\n    ]\n    target = np.array([0.0, 0.0, 0.0])\n    up = np.array([0.0, 0.0, 1.0])  # this breaks when looking straight down\n    return Transform.look_at(eye, target, up) * origin.inverse()\n"
  },
  {
    "path": "src/gd/simulation.py",
    "content": "from pathlib import Path\nimport time\nimport os\nimport numpy as np\nimport pybullet\n\nfrom gd.grasp import Label\nfrom gd.perception import *\nfrom gd.utils import btsim, workspace_lines\nfrom gd.utils.transform import Rotation, Transform\n\n\nclass ClutterRemovalSim(object):\n    def __init__(self, scene, object_set, gui=True, seed=None, renderer_root_dir=\"\", args=None):\n        assert scene in [\"pile\", \"packed\", \"single\"]\n\n        self.urdf_root = Path(renderer_root_dir + \"/data/urdfs\")\n        self.scene = scene\n        self.object_set = object_set\n        self.discover_objects()\n\n        self.global_scaling = {\"blocks\": 1.67}.get(object_set, 1.0)\n        self.gui = gui\n\n        self.rng = np.random.RandomState(seed) if seed else np.random\n        self.world = btsim.BtWorld(self.gui)\n        self.gripper = Gripper(self.world)\n        self.size = 6 * self.gripper.finger_depth\n        intrinsic = CameraIntrinsic(640, 480, 540.0, 540.0, 320.0, 240.0) # TODO: cfg\n        self.camera = self.world.add_camera(intrinsic, 0.1, 2.0)\n\n        ##\n        self.args = args\n        self.renderer_root_dir = renderer_root_dir\n        if self.args.load_scene_descriptor:\n            if self.scene == \"pile\":\n                dir_name = \"pile_pile_test_200\"\n            elif self.scene == \"packed\":\n                dir_name = \"packed_packed_test_200\"\n            elif self.scene == \"single\":\n                dir_name = \"single_single_test_200\"\n            scene_root_dir = os.path.join(renderer_root_dir, \"data/mesh_pose_list\", dir_name)\n            self.scene_descriptor_list = [os.path.join(scene_root_dir, i) for i in sorted(os.listdir(scene_root_dir))]\n\n    @property\n    def num_objects(self):\n        return max(0, self.world.p.getNumBodies() - 1)  # remove table from body count\n\n    def discover_objects(self):\n        root = self.urdf_root / self.object_set\n        self.object_urdfs = [f for f in 
root.iterdir() if f.suffix == \".urdf\"]\n\n    def save_state(self):\n        self._snapshot_id = self.world.save_state()\n\n    def restore_state(self):\n        self.world.restore_state(self._snapshot_id)\n\n    def reset(self, object_count, n_round):\n        self.world.reset()\n        self.world.set_gravity([0.0, 0.0, -9.81])\n        self.draw_workspace()\n\n        if self.gui:\n            self.world.p.resetDebugVisualizerCamera(\n                cameraDistance=1.0,\n                cameraYaw=0.0,\n                cameraPitch=-45,\n                cameraTargetPosition=[0.15, 0.50, -0.3],\n            )\n\n        table_height = self.gripper.finger_depth\n        self.place_table(table_height)\n\n        ##\n        if self.args.gen_scene_descriptor:\n            if self.scene == \"pile\":\n                urdfs_and_poses_dict = self.generate_pile_scene(object_count, table_height)\n                return urdfs_and_poses_dict\n            elif self.scene == \"packed\":\n                urdfs_and_poses_dict = self.generate_packed_scene(object_count, table_height)\n                return urdfs_and_poses_dict\n            else:\n                raise ValueError(\"Invalid scene argument\")\n        elif self.args.load_scene_descriptor:\n            scene_descriptor_npz = self.scene_descriptor_list[n_round]\n\n            if self.scene == \"pile\":\n                urdfs_and_poses_dict = self.generate_pile_scene(object_count, table_height, scene_descriptor_npz)\n            elif self.scene == \"packed\":\n                urdfs_and_poses_dict = self.generate_packed_scene(object_count, table_height, scene_descriptor_npz)\n            elif self.scene == \"single\":\n                urdfs_and_poses_dict = self.generate_packedsingle_scene(object_count, table_height, scene_descriptor_npz)\n            else:\n                raise ValueError(\"Invalid scene argument\")\n            return urdfs_and_poses_dict\n        else:\n            raise NotImplementedError\n\n    
def draw_workspace(self):\n        points = workspace_lines(self.size)\n        color = [0.5, 0.5, 0.5]\n        for i in range(0, len(points), 2):\n            self.world.p.addUserDebugLine(\n                lineFromXYZ=points[i], lineToXYZ=points[i + 1], lineColorRGB=color\n            )\n\n    def place_table(self, height):\n        urdf = self.urdf_root / \"setup\" / \"plane.urdf\"\n        pose = Transform(Rotation.identity(), [0.15, 0.15, height])\n        self.world.load_urdf(urdf, pose, scale=0.6)\n\n        # define valid volume for sampling grasps\n        lx, ux = 0.02, self.size - 0.02\n        ly, uy = 0.02, self.size - 0.02\n        lz, uz = height + 0.005, self.size\n        self.lower = np.r_[lx, ly, lz]\n        self.upper = np.r_[ux, uy, uz]\n\n    def generate_seen_scene(self, table_height, mesh_pose_npz):\n        # place box\n        urdf = self.urdf_root / \"setup\" / \"box.urdf\"\n        pose = Transform(Rotation.identity(), np.r_[0.02, 0.02, table_height])\n        box = self.world.load_urdf(urdf, pose, scale=1.3)\n\n        # read mesh_pose_npz\n        print(\"########## scene name: \", mesh_pose_npz)\n        if self.args.check_seen_scene:\n            urdfs_and_poses_dict = np.load(mesh_pose_npz, allow_pickle=True)['pc']\n            urdf_path_list = list(urdfs_and_poses_dict[:,0])\n            obj_scale_list = list(urdfs_and_poses_dict[:,1])\n            obj_RT_list = list(urdfs_and_poses_dict[:,2])\n\n        urdfs_and_poses_dict = {}     ##\n        for i in range(len(urdf_path_list)):\n            urdf = os.path.join(self.renderer_root_dir, urdf_path_list[i].replace(\"_visual.obj\",\".urdf\"))\n            RT = obj_RT_list[i]\n            R = RT[:3,:3]\n            T = RT[:3,3]\n            rotation = Rotation.from_matrix(R)\n            pose = Transform(rotation ,T)\n            scale = obj_scale_list[i]\n            body = self.world.load_urdf(urdf, pose, scale=scale)\n            body.set_pose(pose=Transform(rotation, T))\n\n     
   # remove box\n        self.world.remove_body(box)\n\n        removed_object = True\n        while removed_object:\n            removed_object, obj_body_list = self.remove_objects_outside_workspace()\n\n        for urdf, scale, rest_pose_quat, rest_pose_trans, body_uid in obj_body_list:\n            urdfs_and_poses_dict[body_uid] = [scale, rest_pose_quat, rest_pose_trans, str(urdf)]\n\n        return urdfs_and_poses_dict\n\n    def generate_pile_scene(self, object_count, table_height, scene_descriptor_npz=None):\n        # place box\n        urdf = self.urdf_root / \"setup\" / \"box.urdf\"\n        pose = Transform(Rotation.identity(), np.r_[0.02, 0.02, table_height])\n        box = self.world.load_urdf(urdf, pose, scale=1.3)\n\n        urdfs_and_poses_dict = {}\n        if self.args.gen_scene_descriptor:\n            urdf_path_list = self.rng.choice(self.object_urdfs, size=object_count)\n        elif self.args.load_scene_descriptor:\n            dict = np.load(scene_descriptor_npz, allow_pickle=True).item()\n            obj_scale_list = [value[0] for value in dict.values()]\n            obj_quat_list = [value[1] for value in dict.values()]\n            obj_xy_list = [value[2] for value in dict.values()]\n            if self.scene != self.object_set:\n                urdf_path_list = [os.path.join(self.renderer_root_dir, value[3].replace(self.scene, self.object_set)) for value in dict.values()]\n            else:\n                urdf_path_list = [os.path.join(self.renderer_root_dir, value[3]) for value in dict.values()]\n\n        # drop objects\n        for i in range(len(urdf_path_list)):\n            if self.args.gen_scene_descriptor:\n                rotation = Rotation.random(random_state=self.rng)\n                xy = self.rng.uniform(1.0 / 3.0 * self.size, 2.0 / 3.0 * self.size, 2)\n                pose = Transform(rotation, np.r_[xy, table_height + 0.2])\n                scale = self.rng.uniform(0.8, 1.0)\n                # save info\n                
urdfs_and_poses_dict[i] = [scale, pose.rotation.as_quat(), xy, str(urdf_path_list[i])]     # (x, y, z, w)\n            elif self.args.load_scene_descriptor:\n                rotation = Rotation.from_quat(obj_quat_list[i])\n                xy = obj_xy_list[i]\n                pose = Transform(rotation, np.r_[xy, table_height + 0.2])\n                scale = obj_scale_list[i]\n            self.world.load_urdf(urdf_path_list[i], pose, scale=self.global_scaling * scale)\n            self.wait_for_objects_to_rest(timeout=1.0)\n\n        # remove box\n        self.world.remove_body(box)\n        obj_body_list = self.remove_and_wait()\n\n        if self.args.gen_scene_descriptor:\n            return urdfs_and_poses_dict\n        else:\n            for urdf, scale, rest_pose_quat, rest_pose_trans, body_uid in obj_body_list:\n                urdfs_and_poses_dict[body_uid] = [scale, rest_pose_quat, rest_pose_trans, str(urdf)]\n            return urdfs_and_poses_dict\n\n    def generate_packed_scene(self, object_count, table_height, scene_descriptor_npz=None):\n        attempts = 0\n        max_attempts = 12\n\n        if self.args.gen_scene_descriptor:\n            urdfs_and_poses_dict = {}\n        elif self.args.load_scene_descriptor:\n            scene_dict = np.load(scene_descriptor_npz, allow_pickle=True).item()\n            obj_scale_list = [value[0] for value in scene_dict.values()]\n            obj_angle_list = [value[1] for value in scene_dict.values()]\n            obj_x_list = [value[2] for value in scene_dict.values()]\n            obj_y_list = [value[3] for value in scene_dict.values()]\n            if self.scene != self.object_set:\n                urdf_path_list = [os.path.join(self.renderer_root_dir, value[4].replace(self.scene, self.object_set)) for value in scene_dict.values()]\n            else:\n                urdf_path_list = [os.path.join(self.renderer_root_dir, value[4]) for value in scene_dict.values()]\n\n        while self.num_objects < object_count and attempts < max_attempts:\n           
 self.save_state()\n            if self.args.gen_scene_descriptor:\n                urdf = self.rng.choice(self.object_urdfs)\n                x = self.rng.uniform(0.08, 0.22)\n                y = self.rng.uniform(0.08, 0.22)\n                angle = self.rng.uniform(0.0, 2.0 * np.pi)\n                scale = self.rng.uniform(0.7, 0.9)\n                # save info\n                urdfs_and_poses_dict[attempts] = [scale, angle, x, y, str(urdf)]     # (x, y, z, w)\n            elif self.args.load_scene_descriptor:\n                urdf = urdf_path_list[attempts]\n                angle = obj_angle_list[attempts]\n                x = obj_x_list[attempts]\n                y = obj_y_list[attempts]\n                scale = obj_scale_list[attempts]\n\n            rotation = Rotation.from_rotvec(angle * np.r_[0.0, 0.0, 1.0])\n            z = 1.0\n            pose = Transform(rotation, np.r_[x, y, z])\n            body = self.world.load_urdf(urdf, pose, scale=self.global_scaling * scale)\n            lower, upper = self.world.p.getAABB(body.uid)\n            z = table_height + 0.5 * (upper[2] - lower[2]) + 0.002\n            body.set_pose(pose=Transform(rotation, np.r_[x, y, z]))\n\n            self.world.step()\n\n            if self.world.get_contacts(body):\n                self.world.remove_body(body)\n                self.restore_state()\n            else:\n                self.remove_and_wait()\n            attempts += 1\n\n        if self.args.gen_scene_descriptor:\n            return urdfs_and_poses_dict\n        else:\n            remain_obj_inws_infos = []\n            for body in list(self.world.bodies.values()):\n                urdf = self.world.bodies_urdfs[body.uid][0]\n                scale = self.world.bodies_urdfs[body.uid][1]\n                if str(urdf).split(\"/\")[-1] != \"box.urdf\" and str(urdf).split(\"/\")[-1] != \"plane.urdf\":\n                    rest_pose = body.get_pose()\n                    rest_pose_quat = rest_pose.rotation.as_quat()  # 
(x, y, z, w)\n                    rest_pose_trans = rest_pose.translation\n                    remain_obj_inws_infos.append([urdf, scale, rest_pose_quat, rest_pose_trans, str(body.uid)])\n            urdfs_and_poses_dict = {}\n            for urdf, scale, rest_pose_quat, rest_pose_trans, body_uid in remain_obj_inws_infos:\n                urdfs_and_poses_dict[body_uid] = [scale, rest_pose_quat, rest_pose_trans, str(urdf)]\n            return urdfs_and_poses_dict\n\n    def generate_packedsingle_scene(self, object_count, table_height, scene_descriptor_npz=None):\n        attempts = 0\n\n        if self.args.gen_scene_descriptor:\n            urdfs_and_poses_dict = {}\n        elif self.args.load_scene_descriptor:\n            scene_dict = np.load(scene_descriptor_npz, allow_pickle=True).item()\n            obj_scale_list = [value[0] for value in scene_dict.values()]\n            obj_angle_list = [value[1] for value in scene_dict.values()]\n            obj_x_list = [value[2] for value in scene_dict.values()]\n            obj_y_list = [value[3] for value in scene_dict.values()]\n            if self.scene != self.object_set:\n                urdf_path_list = [os.path.join(self.renderer_root_dir, value[4].replace(self.scene, self.object_set)) for value in scene_dict.values()]\n            else:\n                urdf_path_list = [os.path.join(self.renderer_root_dir, value[4]) for value in scene_dict.values()]\n\n        for _ in range(1):\n            self.save_state()\n            if self.args.gen_scene_descriptor:\n                urdf = self.rng.choice(self.object_urdfs)\n                x = self.rng.uniform(0.08, 0.22)\n                y = self.rng.uniform(0.08, 0.22)\n                angle = self.rng.uniform(0.0, 2.0 * np.pi)\n                scale = self.rng.uniform(0.7, 0.9)\n                # save info\n                urdfs_and_poses_dict[attempts] = [scale, angle, x, y, str(urdf)]\n            elif self.args.load_scene_descriptor:\n                urdf = urdf_path_list[attempts]\n    
            angle = obj_angle_list[attempts]\n                x = obj_x_list[attempts]\n                y = obj_y_list[attempts]\n                scale = obj_scale_list[attempts]\n\n            rotation = Rotation.from_rotvec(angle * np.r_[0.0, 0.0, 1.0])\n            z = 1.0\n            pose = Transform(rotation, np.r_[x, y, z])\n            body = self.world.load_urdf(urdf, pose, scale=self.global_scaling * scale)\n            lower, upper = self.world.p.getAABB(body.uid)\n            z = table_height + 0.5 * (upper[2] - lower[2]) + 0.002\n            body.set_pose(pose=Transform(rotation, np.r_[x, y, z]))\n\n            self.world.step()\n\n            if self.world.get_contacts(body):\n                self.world.remove_body(body)\n                self.restore_state()\n            else:\n                self.remove_and_wait()\n            attempts += 1\n\n        if self.args.gen_scene_descriptor:\n            return urdfs_and_poses_dict\n        else:\n            remain_obj_inws_infos = []\n            for body in list(self.world.bodies.values()):\n                urdf = self.world.bodies_urdfs[body.uid][0]\n                scale = self.world.bodies_urdfs[body.uid][1]\n                if str(urdf).split(\"/\")[-1] != \"box.urdf\" and str(urdf).split(\"/\")[-1] != \"plane.urdf\":\n                    rest_pose = body.get_pose()\n                    rest_pose_quat = rest_pose.rotation.as_quat()  # (x, y, z, w)\n                    rest_pose_trans = rest_pose.translation\n                    remain_obj_inws_infos.append([urdf, scale, rest_pose_quat, rest_pose_trans, str(body.uid)])\n            urdfs_and_poses_dict = {}\n            for urdf, scale, rest_pose_quat, rest_pose_trans, body_uid in remain_obj_inws_infos:\n                urdfs_and_poses_dict[body_uid] = [scale, rest_pose_quat, rest_pose_trans, str(urdf)]\n            return urdfs_and_poses_dict\n\n\n    def acquire_tsdf(self, n, N=None):\n        \"\"\"Render synthetic depth images from n viewpoints 
and integrate into a TSDF.\n\n        If N is None, the n viewpoints are equally distributed on a circular trajectory.\n\n        If N is given, the first n viewpoints on a circular trajectory consisting of N points are rendered.\n        \"\"\"\n        tsdf = TSDFVolume(self.size, 40)\n        high_res_tsdf = TSDFVolume(self.size, 120)\n\n        origin = Transform(Rotation.identity(), np.r_[self.size / 2, self.size / 2, 0])\n        r = 2.0 * self.size\n        theta = np.pi / 6.0\n\n        N = N if N else n\n        phi_list = 2.0 * np.pi * np.arange(n) / N\n        extrinsics = [camera_on_sphere(origin, r, theta, phi).as_matrix() for phi in phi_list]\n\n        timing = 0.0\n        for extrinsic in extrinsics:\n            depth_img = self.camera.render(extrinsic)[1]\n            tic = time.time()\n            tsdf.integrate(depth_img, self.camera.intrinsic, extrinsic)\n            timing += time.time() - tic\n            high_res_tsdf.integrate(depth_img, self.camera.intrinsic, extrinsic)\n\n        return tsdf, high_res_tsdf.get_cloud(), timing\n\n    def execute_grasp(self, grasp, remove=True, allow_contact=False):\n        # grasp: the target grasp, containing pose and width\n        # allow_contact: whether to allow contact while moving from the pregrasp to the grasp pose\n        # remove: whether to remove the object from the scene after a successful grasp\n        T_world_grasp = grasp.pose\n        T_grasp_pregrasp = Transform(Rotation.identity(), [0.0, 0.0, -0.05])\n        T_world_pregrasp = T_world_grasp * T_grasp_pregrasp\n\n        # approach along z-axis of the gripper\n        approach = T_world_grasp.rotation.as_matrix()[:, 2]\n        angle = np.arccos(np.dot(approach, np.r_[0.0, 0.0, -1.0]))\n        if angle > np.pi / 3.0:\n            # side grasp, lift the object after establishing a grasp\n            T_grasp_pregrasp_world = Transform(Rotation.identity(), [0.0, 0.0, 0.1])\n            T_world_retreat = T_grasp_pregrasp_world * T_world_grasp\n        else:\n    
        T_grasp_retreat = Transform(Rotation.identity(), [0.0, 0.0, -0.1])\n            T_world_retreat = T_world_grasp * T_grasp_retreat\n\n        # move the gripper to the pregrasp pose and detect collision\n        self.gripper.reset(T_world_pregrasp)\n\n        if self.gripper.detect_contact():\n            result = Label.FAILURE, self.gripper.max_opening_width\n            print(\"failure: contact at pregrasp pose\")\n        else:\n            # move the gripper to the target pose and detect collision\n            self.gripper.move_tcp_xyz(T_world_grasp, abort_on_contact=True)\n            if self.gripper.detect_contact() and not allow_contact:\n                result = Label.FAILURE, self.gripper.max_opening_width\n                print(\"failure: contact during approach\")\n            else:\n                self.gripper.move(0.0)      # close the gripper\n                # lift the gripper up along z-axis of the world frame or z-axis of the gripper frame\n                self.gripper.move_tcp_xyz(T_world_retreat, abort_on_contact=False)\n                if self.check_success(self.gripper):\n                    result = Label.SUCCESS, self.gripper.read()\n                    print(\"success\")\n                    if remove:\n                        contacts = self.world.get_contacts(self.gripper.body)\n                        self.world.remove_body(contacts[0].bodyB, isRemoveObjPerGrasp=True)\n                else:\n                    result = Label.FAILURE, self.gripper.max_opening_width\n                    print(\"failure: object slipped during retreat\")\n        self.world.remove_body(self.gripper.body)\n\n        remain_obj_inws_infos = []\n        if remove:\n            remain_obj_inws_infos = self.remove_and_wait()  # wait for blender to render the updated scene\n\n        return result, remain_obj_inws_infos\n\n    def remove_and_wait(self):\n        # wait for objects to rest while removing bodies that fell outside the workspace\n        removed_object = 
True\n        while removed_object:\n            self.wait_for_objects_to_rest()\n            removed_object, remain_obj_inws_infos = self.remove_objects_outside_workspace()\n        return remain_obj_inws_infos\n\n    def wait_for_objects_to_rest(self, timeout=2.0, tol=0.01):\n        timeout = self.world.sim_time + timeout\n        objects_resting = False\n        while not objects_resting and self.world.sim_time < timeout:\n            # simulate a quarter of a second\n            for _ in range(60):\n                self.world.step()\n            # check whether all objects are resting\n            objects_resting = True\n            for _, body in self.world.bodies.items():\n                if np.linalg.norm(body.get_velocity()) > tol:\n                    objects_resting = False\n                    break\n\n    def remove_objects_outside_workspace(self):\n        removed_object = False\n        remain_obj_inws_infos = []   ##\n\n        for body in list(self.world.bodies.values()):\n            xyz = body.get_pose().translation\n            if np.any(xyz < 0.0) or np.any(xyz > self.size):\n                self.world.remove_body(body)\n                removed_object = True\n            else:\n                urdf = self.world.bodies_urdfs[body.uid][0]\n                scale = self.world.bodies_urdfs[body.uid][1]\n                if str(urdf).split(\"/\")[-1] != \"box.urdf\" and str(urdf).split(\"/\")[-1] != \"plane.urdf\":\n                    rest_pose = body.get_pose()\n                    rest_pose_quat = rest_pose.rotation.as_quat()  # (x, y, z, w)\n                    rest_pose_trans = rest_pose.translation\n                    remain_obj_inws_infos.append([urdf, scale, rest_pose_quat, rest_pose_trans, str(body.uid)])\n        return removed_object, remain_obj_inws_infos     ##\n\n    def check_success(self, gripper):\n        # check that the fingers are in contact with some object and not fully closed\n        contacts = 
self.world.get_contacts(gripper.body)\n        res = len(contacts) > 0 and gripper.read() > 0.1 * gripper.max_opening_width\n        return res\n\n\nclass Gripper(object):\n    \"\"\"Simulated Panda hand.\"\"\"\n\n    def __init__(self, world):\n        self.world = world\n        self.urdf_path = Path(\"data/assets/data/urdfs/panda/hand.urdf\") #TODO put in cfg\n        self.max_opening_width = 0.08\n        self.finger_depth = 0.05\n        self.T_body_tcp = Transform(Rotation.identity(), [0.0, 0.0, 0.022])\n        self.T_tcp_body = self.T_body_tcp.inverse()\n\n    def reset(self, T_world_tcp):\n        T_world_body = T_world_tcp * self.T_tcp_body\n        self.body = self.world.load_urdf(self.urdf_path, T_world_body)\n        self.body.set_pose(T_world_body)  # sets the position of the COM, not URDF link\n        self.constraint = self.world.add_constraint(\n            self.body,\n            None,\n            None,\n            None,\n            pybullet.JOINT_FIXED,\n            [0.0, 0.0, 0.0],\n            Transform.identity(),\n            T_world_body,\n        )\n        self.update_tcp_constraint(T_world_tcp)\n        # constraint to keep fingers centered\n        self.world.add_constraint(\n            self.body,\n            self.body.links[\"panda_leftfinger\"],\n            self.body,\n            self.body.links[\"panda_rightfinger\"],\n            pybullet.JOINT_GEAR,\n            [1.0, 0.0, 0.0],\n            Transform.identity(),\n            Transform.identity(),\n        ).change(gearRatio=-1, erp=0.1, maxForce=50)\n        self.joint1 = self.body.joints[\"panda_finger_joint1\"]\n        self.joint1.set_position(0.5 * self.max_opening_width, kinematics=True)\n        self.joint2 = self.body.joints[\"panda_finger_joint2\"]\n        self.joint2.set_position(0.5 * self.max_opening_width, kinematics=True)\n\n    def update_tcp_constraint(self, T_world_tcp):\n        T_world_body = T_world_tcp * self.T_tcp_body\n        self.constraint.change(\n 
           jointChildPivot=T_world_body.translation,\n            jointChildFrameOrientation=T_world_body.rotation.as_quat(),\n            maxForce=300,\n        )\n\n    def set_tcp(self, T_world_tcp):\n        T_world_body = T_world_tcp * self.T_tcp_body\n        self.body.set_pose(T_world_body)\n        self.update_tcp_constraint(T_world_tcp)\n\n    def move_tcp_xyz(self, target, eef_step=0.002, vel=0.10, abort_on_contact=True):\n        T_world_body = self.body.get_pose()\n        T_world_tcp = T_world_body * self.T_body_tcp\n\n        diff = target.translation - T_world_tcp.translation\n        n_steps = max(int(np.linalg.norm(diff) / eef_step), 1)  # avoid division by zero for tiny motions\n        dist_step = diff / n_steps\n        dur_step = np.linalg.norm(dist_step) / vel\n\n        for _ in range(n_steps):\n            T_world_tcp.translation += dist_step\n            self.update_tcp_constraint(T_world_tcp)\n            for _ in range(int(dur_step / self.world.dt)):\n                self.world.step()\n            if abort_on_contact and self.detect_contact():\n                return\n\n    def detect_contact(self):\n        return len(self.world.get_contacts(self.body)) > 0\n\n    def move(self, width):\n        self.joint1.set_position(0.5 * width)\n        self.joint2.set_position(0.5 * width)\n        for _ in range(int(0.5 / self.world.dt)):\n            self.world.step()\n\n    def read(self):\n        width = self.joint1.get_position() + self.joint2.get_position()\n        return width\n"
  },
  {
    "path": "src/gd/utils/__init__.py",
    "content": "def workspace_lines(size):\n    return [\n        [0.0, 0.0, 0.0],\n        [size, 0.0, 0.0],\n        [size, 0.0, 0.0],\n        [size, size, 0.0],\n        [size, size, 0.0],\n        [0.0, size, 0.0],\n        [0.0, size, 0.0],\n        [0.0, 0.0, 0.0],\n        [0.0, 0.0, size],\n        [size, 0.0, size],\n        [size, 0.0, size],\n        [size, size, size],\n        [size, size, size],\n        [0.0, size, size],\n        [0.0, size, size],\n        [0.0, 0.0, size],\n        [0.0, 0.0, 0.0],\n        [0.0, 0.0, size],\n        [size, 0.0, 0.0],\n        [size, 0.0, size],\n        [size, size, 0.0],\n        [size, size, size],\n        [0.0, size, 0.0],\n        [0.0, size, size],\n    ]\n\n"
  },
  {
    "path": "src/gd/utils/btsim.py",
    "content": "import time\n\nimport numpy as np\nimport pybullet\nfrom pybullet_utils import bullet_client\n\n\nfrom vgn.utils.transform import Rotation, Transform\n\nassert pybullet.isNumpyEnabled(), \"Pybullet needs to be built with NumPy\"\n\n\nclass BtWorld(object):\n    \"\"\"Interface to a PyBullet physics server.\n\n    Attributes:\n        dt: Time step of the physics simulation.\n        rtf: Real time factor. If negative, the simulation is run as fast as possible.\n        sim_time: Virtual time elpased since the last simulation reset.\n    \"\"\"\n\n    def __init__(self, gui=True):\n        connection_mode = pybullet.GUI if gui else pybullet.DIRECT\n        self.p = bullet_client.BulletClient(connection_mode)\n\n        self.gui = gui\n        self.dt = 1.0 / 240.0\n        self.solver_iterations = 150\n\n        self.bodies_urdfs = {}      ##\n\n        self.reset()\n\n    def set_gravity(self, gravity):\n        self.p.setGravity(*gravity)\n\n    def load_urdf(self, urdf_path, pose, scale=1.0):\n        body = Body.from_urdf(self.p, urdf_path, pose, scale)\n        self.bodies[body.uid] = body\n        self.bodies_urdfs[body.uid] = [urdf_path, scale]     ##\n\n        return body\n\n    def remove_body(self, body, isRemoveObjPerGrasp=False):\n        self.p.removeBody(body.uid)\n        del self.bodies[body.uid]\n        if isRemoveObjPerGrasp:     ##\n            self.bodies_urdfs.pop(body.uid)\n\n    def add_constraint(self, *argv, **kwargs):\n        \"\"\"See `Constraint` below.\"\"\"\n        constraint = Constraint(self.p, *argv, **kwargs)\n        return constraint\n\n    def add_camera(self, intrinsic, near, far):\n        camera = Camera(self.p, intrinsic, near, far)\n        return camera\n\n    def get_contacts(self, bodyA):\n        points = self.p.getContactPoints(bodyA.uid)\n        contacts = []\n        for point in points:\n            ###\n            urdf = self.bodies_urdfs[point[2]][0]\n            if str(urdf).split(\"/\")[-1] 
== \"plane.urdf\":\n                continue\n            ###\n            contact = Contact(\n                bodyA=self.bodies[point[1]],\n                bodyB=self.bodies[point[2]],\n                point=point[5],\n                normal=point[7],\n                depth=point[8],\n                force=point[9],\n            )\n            contacts.append(contact)\n        return contacts\n\n    def reset(self):\n        self.p.resetSimulation()\n        self.p.setPhysicsEngineParameter(\n            fixedTimeStep=self.dt, numSolverIterations=self.solver_iterations\n        )\n        self.bodies = {}\n        self.sim_time = 0.0\n\n    def step(self):\n        self.p.stepSimulation()\n        self.sim_time += self.dt\n        if self.gui:\n            time.sleep(self.dt)\n\n    def save_state(self):\n        return self.p.saveState()\n\n    def restore_state(self, state_uid):\n        self.p.restoreState(stateId=state_uid)\n\n    def close(self):\n        self.p.disconnect()\n\n\nclass Body(object):\n    \"\"\"Interface to a multibody simulated in PyBullet.\n\n    Attributes:\n        uid: The unique id of the body within the physics server.\n        name: The name of the body.\n        joints: A dict mapping joint names to Joint objects.\n        links: A dict mapping link names to Link objects.\n    \"\"\"\n\n    def __init__(self, physics_client, body_uid):\n        self.p = physics_client\n        self.uid = body_uid\n        self.name = self.p.getBodyInfo(self.uid)[1].decode(\"utf-8\")\n        self.joints, self.links = {}, {}\n        for i in range(self.p.getNumJoints(self.uid)):\n            joint_info = self.p.getJointInfo(self.uid, i)\n            joint_name = joint_info[1].decode(\"utf8\")\n            self.joints[joint_name] = Joint(self.p, self.uid, i)\n            link_name = joint_info[12].decode(\"utf8\")\n            self.links[link_name] = Link(self.p, self.uid, i)\n\n    @classmethod\n    def from_urdf(cls, physics_client, urdf_path, pose, 
scale):\n        body_uid = physics_client.loadURDF(\n            str(urdf_path),\n            pose.translation,\n            pose.rotation.as_quat(),\n            globalScaling=scale,\n        )\n        return cls(physics_client, body_uid)\n\n    def get_pose(self):\n        pos, ori = self.p.getBasePositionAndOrientation(self.uid)\n        return Transform(Rotation.from_quat(ori), np.asarray(pos))\n\n    def set_pose(self, pose):\n        self.p.resetBasePositionAndOrientation(\n            self.uid, pose.translation, pose.rotation.as_quat()\n        )\n\n    def get_velocity(self):\n        linear, angular = self.p.getBaseVelocity(self.uid)\n        return linear, angular\n\n\nclass Link(object):\n    \"\"\"Interface to a link simulated in Pybullet.\n\n    Attributes:\n        link_index: The index of the joint.\n    \"\"\"\n\n    def __init__(self, physics_client, body_uid, link_index):\n        self.p = physics_client\n        self.body_uid = body_uid\n        self.link_index = link_index\n\n    def get_pose(self):\n        link_state = self.p.getLinkState(self.body_uid, self.link_index)\n        pos, ori = link_state[0], link_state[1]\n        return Transform(Rotation.from_quat(ori), pos)\n\n\nclass Joint(object):\n    \"\"\"Interface to a joint simulated in PyBullet.\n\n    Attributes:\n        joint_index: The index of the joint.\n        lower_limit: Lower position limit of the joint.\n        upper_limit: Upper position limit of the joint.\n        effort: The maximum joint effort.\n    \"\"\"\n\n    def __init__(self, physics_client, body_uid, joint_index):\n        self.p = physics_client\n        self.body_uid = body_uid\n        self.joint_index = joint_index\n\n        joint_info = self.p.getJointInfo(body_uid, joint_index)\n        self.lower_limit = joint_info[8]\n        self.upper_limit = joint_info[9]\n        self.effort = joint_info[10]\n\n    def get_position(self):\n        joint_state = self.p.getJointState(self.body_uid, 
self.joint_index)\n        return joint_state[0]\n\n    def set_position(self, position, kinematics=False):\n        if kinematics:\n            self.p.resetJointState(self.body_uid, self.joint_index, position)\n        self.p.setJointMotorControl2(\n            self.body_uid,\n            self.joint_index,\n            pybullet.POSITION_CONTROL,\n            targetPosition=position,\n            force=self.effort,\n        )\n\n\nclass Constraint(object):\n    \"\"\"Interface to a constraint in PyBullet.\n\n    Attributes:\n        uid: The unique id of the constraint within the physics server.\n    \"\"\"\n\n    def __init__(\n        self,\n        physics_client,\n        parent,\n        parent_link,\n        child,\n        child_link,\n        joint_type,\n        joint_axis,\n        parent_frame,\n        child_frame,\n    ):\n        \"\"\"\n        Create a new constraint between links of bodies.\n\n        Args:\n            parent:\n            parent_link: None for the base.\n            child: None for a fixed frame in world coordinates.\n\n        \"\"\"\n        self.p = physics_client\n        parent_body_uid = parent.uid\n        parent_link_index = parent_link.link_index if parent_link else -1\n        child_body_uid = child.uid if child else -1\n        child_link_index = child_link.link_index if child_link else -1\n\n        self.uid = self.p.createConstraint(\n            parentBodyUniqueId=parent_body_uid,\n            parentLinkIndex=parent_link_index,\n            childBodyUniqueId=child_body_uid,\n            childLinkIndex=child_link_index,\n            jointType=joint_type,\n            jointAxis=joint_axis,\n            parentFramePosition=parent_frame.translation,\n            parentFrameOrientation=parent_frame.rotation.as_quat(),\n            childFramePosition=child_frame.translation,\n            childFrameOrientation=child_frame.rotation.as_quat(),\n        )\n\n    def change(self, **kwargs):\n        
self.p.changeConstraint(self.uid, **kwargs)\n\n\nclass Contact(object):\n    \"\"\"Contact point between two multibodies.\n\n    Attributes:\n        point: Contact point.\n        normal: Normal vector from ... to ...\n        depth: Penetration depth\n        force: Contact force acting on body ...\n    \"\"\"\n\n    def __init__(self, bodyA, bodyB, point, normal, depth, force):\n        self.bodyA = bodyA\n        self.bodyB = bodyB\n        self.point = point\n        self.normal = normal\n        self.depth = depth\n        self.force = force\n\n\nclass Camera(object):\n    \"\"\"Virtual RGB-D camera based on the PyBullet camera interface.\n\n    Attributes:\n        intrinsic: The camera intrinsic parameters.\n    \"\"\"\n\n    def __init__(self, physics_client, intrinsic, near, far):\n        self.intrinsic = intrinsic\n        self.near = near\n        self.far = far\n        self.proj_matrix = _build_projection_matrix(intrinsic, near, far)\n        self.p = physics_client\n\n    def render(self, extrinsic):\n        \"\"\"Render synthetic RGB and depth images.\n\n        Args:\n            extrinsic: Extrinsic parameters, T_cam_ref.\n        \"\"\"\n        # Construct OpenGL compatible view and projection matrices.\n        gl_view_matrix = extrinsic.copy()  # copy to avoid mutating the caller's matrix\n        gl_view_matrix[2, :] *= -1  # flip the Z axis\n        gl_view_matrix = gl_view_matrix.flatten(order=\"F\")\n        gl_proj_matrix = self.proj_matrix.flatten(order=\"F\")\n\n        result = self.p.getCameraImage(\n            width=self.intrinsic.width,\n            height=self.intrinsic.height,\n            viewMatrix=gl_view_matrix,\n            projectionMatrix=gl_proj_matrix,\n            renderer=pybullet.ER_TINY_RENDERER,\n        )\n\n        rgb, z_buffer = result[2][:, :, :3], result[3]\n        depth = (\n            1.0 * self.far * self.near / (self.far - (self.far - self.near) * z_buffer)\n        )\n        return rgb, depth\n\n\ndef _build_projection_matrix(intrinsic, near, far):\n   
 perspective = np.array(\n        [\n            [intrinsic.fx, 0.0, -intrinsic.cx, 0.0],\n            [0.0, intrinsic.fy, -intrinsic.cy, 0.0],\n            [0.0, 0.0, near + far, near * far],\n            [0.0, 0.0, -1.0, 0.0],\n        ]\n    )\n    ortho = _gl_ortho(0.0, intrinsic.width, intrinsic.height, 0.0, near, far)\n    return np.matmul(ortho, perspective)\n\n\ndef _gl_ortho(left, right, bottom, top, near, far):\n    ortho = np.diag(\n        [2.0 / (right - left), 2.0 / (top - bottom), -2.0 / (far - near), 1.0]\n    )\n    ortho[0, 3] = -(right + left) / (right - left)\n    ortho[1, 3] = -(top + bottom) / (top - bottom)\n    ortho[2, 3] = -(far + near) / (far - near)\n    return ortho"
  },
  {
    "path": "src/gd/utils/panda_control.py",
    "content": "from panda_robot import PandaArm\nimport rospy\nimport numpy as np\nimport quaternion\nfrom gd.utils.transform import Transform\nfrom scipy.spatial.transform import Rotation\n\nclass PandaCommander(object):\n    def __init__(self):\n        self.name = \"panda_arm\"\n        self.r = PandaArm()\n        self.r.enable_robot()\n        rospy.loginfo(\"PandaCommander ready\")\n        self.moving = False\n    def reset(self):\n        rospy.logwarn(\"reset and go home!\")\n        self.r.enable_robot()\n        self.home()\n\n    def clear(self):\n        self.r.exit_control_mode()\n\n    def home(self):\n        self.moving = True\n        self.r.move_to_neutral()\n        self.moving = False\n        rospy.loginfo(\"PandaCommander: Arrived home!\")\n\n    def goto_joints(self, joints):\n        self.moving = True\n        self.r.move_to_joint_position(joints)\n        self.moving = False\n\n    def get_joints(self):\n        return self.r.angles()\n    def goto_pose(self, pose):\n        rospy.loginfo(\"PandaCommander: goto pose \" + str(pose.to_list()[-3:]))\n        self.moving = True\n        x, y, z, w = pose.rotation.as_quat()\n        self.r.move_to_cartesian_pose(pose.translation.astype(np.float32), np.quaternion(w, x, y, z))\n        safe = self.r.in_safe_state()\n        if self.r.has_collided():\n            rospy.logwarn(\"collided!\")\n            self.r.enable_robot()\n            return\n        error = self.r.error_in_current_state()\n        if error or not safe:\n            rospy.logwarn(\"error or not safe! 
reset and run again.\")\n            self.reset()\n            self.goto_pose(pose)\n            return \n        self.moving = False\n\n    def get_pose(self):\n            pos, rot = self.r.ee_pose()\n            return Transform(Rotation.from_quat([rot.x, rot.y, rot.z, rot.w]), pos)\n    def grasp(self, width=0.0, force=10.0):\n        return self.r.exec_gripper_cmd(width, force=force)\n\n    def move_gripper(self, width):\n        return self.r.exec_gripper_cmd(width)\n\n    def get_gripper_width(self):\n        return np.sum(self.r.gripper_state()['position'])\n"
  },
  {
    "path": "src/gd/utils/ros_utils.py",
    "content": "import geometry_msgs.msg\nimport numpy as np\nimport rospy\nfrom sensor_msgs.msg import PointCloud2, PointField\nimport std_msgs.msg\n\n\nfrom gd.utils.transform import Rotation, Transform\n\n\ndef to_point_msg(position):\n    \"\"\"Convert numpy array to a Point message.\"\"\"\n    msg = geometry_msgs.msg.Point()\n    msg.x = position[0]\n    msg.y = position[1]\n    msg.z = position[2]\n    return msg\n\n\ndef from_point_msg(msg):\n    \"\"\"Convert a Point message to a numpy array.\"\"\"\n    return np.r_[msg.x, msg.y, msg.z]\n\n\ndef to_vector3_msg(vector3):\n    \"\"\"Convert numpy array to a Vector3 message.\"\"\"\n    msg = geometry_msgs.msg.Vector3()\n    msg.x = vector3[0]\n    msg.y = vector3[1]\n    msg.z = vector3[2]\n    return msg\n\n\ndef from_vector3_msg(msg):\n    \"\"\"Convert a Vector3 message to a numpy array.\"\"\"\n    return np.r_[msg.x, msg.y, msg.z]\n\n\ndef to_quat_msg(orientation):\n    \"\"\"Convert a `Rotation` object to a Quaternion message.\"\"\"\n    quat = orientation.as_quat()\n    msg = geometry_msgs.msg.Quaternion()\n    msg.x = quat[0]\n    msg.y = quat[1]\n    msg.z = quat[2]\n    msg.w = quat[3]\n    return msg\n\n\ndef from_quat_msg(msg):\n    \"\"\"Convert a Quaternion message to a Rotation object.\"\"\"\n    return Rotation.from_quat([msg.x, msg.y, msg.z, msg.w])\n\n\ndef to_pose_msg(transform):\n    \"\"\"Convert a `Transform` object to a Pose message.\"\"\"\n    msg = geometry_msgs.msg.Pose()\n    msg.position = to_point_msg(transform.translation)\n    msg.orientation = to_quat_msg(transform.rotation)\n    return msg\n\n\ndef to_transform_msg(transform):\n    \"\"\"Convert a `Transform` object to a Transform message.\"\"\"\n    msg = geometry_msgs.msg.Transform()\n    msg.translation = to_vector3_msg(transform.translation)\n    msg.rotation = to_quat_msg(transform.rotation)\n    return msg\n\n\ndef from_transform_msg(msg):\n    \"\"\"Convert a Transform message to a Transform object.\"\"\"\n    translation 
= from_vector3_msg(msg.translation)\n    rotation = from_quat_msg(msg.rotation)\n    return Transform(rotation, translation)\n\n\ndef to_color_msg(color):\n    \"\"\"Convert a numpy array to a ColorRGBA message.\"\"\"\n    msg = std_msgs.msg.ColorRGBA()\n    msg.r = color[0]\n    msg.g = color[1]\n    msg.b = color[2]\n    msg.a = color[3] if len(color) == 4 else 1.0\n    return msg\n\n\ndef to_cloud_msg(points, intensities=None, frame=None, stamp=None):\n    \"\"\"Convert an array of unstructured points to a PointCloud2 message.\n\n    Args:\n        points: Point coordinates as array of shape (N,3).\n        intensities: Optional point intensities as array of shape (N,1).\n        frame: Frame id stored in the message header.\n        stamp: Timestamp for the header; defaults to rospy.Time.now().\n    \"\"\"\n    msg = PointCloud2()\n    msg.header.frame_id = frame\n    msg.header.stamp = stamp or rospy.Time.now()\n\n    msg.height = 1\n    msg.width = points.shape[0]\n    msg.is_bigendian = False\n    msg.is_dense = False\n\n    msg.fields = [\n        PointField(\"x\", 0, PointField.FLOAT32, 1),\n        PointField(\"y\", 4, PointField.FLOAT32, 1),\n        PointField(\"z\", 8, PointField.FLOAT32, 1),\n    ]\n    msg.point_step = 12\n    data = points\n\n    if intensities is not None:\n        msg.fields.append(PointField(\"intensity\", 12, PointField.FLOAT32, 1))\n        msg.point_step += 4\n        data = np.hstack([points, intensities])\n\n    msg.row_step = msg.point_step * points.shape[0]\n    msg.data = data.astype(np.float32).tobytes()\n\n    return msg\n\n\nclass TransformTree(object):\n    def __init__(self):\n        import tf2_ros\n        self._buffer = tf2_ros.Buffer()\n        self._listener = tf2_ros.TransformListener(self._buffer)\n        self._broadcaster = tf2_ros.TransformBroadcaster()\n        self._static_broadcaster = tf2_ros.StaticTransformBroadcaster()\n\n    def lookup(self, target_frame, source_frame, time, timeout=rospy.Duration(0)):\n        msg = self._buffer.lookup_transform(target_frame, source_frame, time, timeout)\n        return 
from_transform_msg(msg.transform)\n\n    def broadcast(self, transform, target_frame, source_frame):\n        msg = geometry_msgs.msg.TransformStamped()\n        msg.header.stamp = rospy.Time.now()\n        msg.header.frame_id = target_frame\n        msg.child_frame_id = source_frame\n        msg.transform = to_transform_msg(transform)\n        self._broadcaster.sendTransform(msg)\n\n    def broadcast_static(self, transform, target_frame, source_frame):\n        msg = geometry_msgs.msg.TransformStamped()\n        msg.header.stamp = rospy.Time.now()\n        msg.header.frame_id = target_frame\n        msg.child_frame_id = source_frame\n        msg.transform = to_transform_msg(transform)\n        self._static_broadcaster.sendTransform(msg)\n"
  },
  {
    "path": "src/gd/utils/transform.py",
    "content": "import numpy as np\nimport scipy.spatial.transform\n\n\nclass Rotation(scipy.spatial.transform.Rotation):\n    @classmethod\n    def identity(cls):\n        return cls.from_quat([0.0, 0.0, 0.0, 1.0])\n\n\nclass Transform(object):\n    \"\"\"Rigid spatial transform between coordinate systems in 3D space.\n\n    Attributes:\n        rotation (scipy.spatial.transform.Rotation)\n        translation (np.ndarray)\n    \"\"\"\n\n    def __init__(self, rotation, translation):\n        assert isinstance(rotation, scipy.spatial.transform.Rotation)\n        assert isinstance(translation, (np.ndarray, list))\n\n        self.rotation = rotation\n        self.translation = np.asarray(translation, np.double)\n\n    def as_matrix(self):\n        \"\"\"Represent as a 4x4 matrix.\"\"\"\n        return np.vstack(\n            (np.c_[self.rotation.as_matrix(), self.translation], [0.0, 0.0, 0.0, 1.0])\n        )\n\n    def to_dict(self):\n        \"\"\"Serialize Transform object into a dictionary.\"\"\"\n        return {\n            \"rotation\": self.rotation.as_quat().tolist(),\n            \"translation\": self.translation.tolist(),\n        }\n\n    def to_list(self):\n        return np.r_[self.rotation.as_quat(), self.translation]\n\n    def __mul__(self, other):\n        \"\"\"Compose this transform with another.\"\"\"\n        rotation = self.rotation * other.rotation\n        translation = self.rotation.apply(other.translation) + self.translation\n        return self.__class__(rotation, translation)\n\n    def transform_point(self, point):\n        return self.rotation.apply(point) + self.translation\n\n    def transform_vector(self, vector):\n        return self.rotation.apply(vector)\n\n    def inverse(self):\n        \"\"\"Compute the inverse of this transform.\"\"\"\n        rotation = self.rotation.inv()\n        translation = -rotation.apply(self.translation)\n        return self.__class__(rotation, translation)\n\n    @classmethod\n    def 
from_matrix(cls, m):\n        \"\"\"Initialize from a 4x4 matrix.\"\"\"\n        rotation = Rotation.from_matrix(m[:3, :3])\n        translation = m[:3, 3]\n        return cls(rotation, translation)\n\n    @classmethod\n    def from_dict(cls, dictionary):\n        rotation = Rotation.from_quat(dictionary[\"rotation\"])\n        translation = np.asarray(dictionary[\"translation\"])\n        return cls(rotation, translation)\n\n    @classmethod\n    def from_list(cls, lst):\n        rotation = Rotation.from_quat(lst[:4])\n        translation = lst[4:]\n        return cls(rotation, translation)\n\n    @classmethod\n    def identity(cls):\n        \"\"\"Initialize with the identity transformation.\"\"\"\n        rotation = Rotation.from_quat([0.0, 0.0, 0.0, 1.0])\n        translation = np.array([0.0, 0.0, 0.0])\n        return cls(rotation, translation)\n\n    @classmethod\n    def look_at(cls, eye, center, up):\n        \"\"\"Initialize with a LookAt matrix.\n\n        Returns:\n            T_eye_ref, the transform from camera to the reference frame, w.r.t.\n            which the input arguments were defined.\n        \"\"\"\n        eye = np.asarray(eye)\n        center = np.asarray(center)\n\n        forward = center - eye\n        forward /= np.linalg.norm(forward)\n\n        right = np.cross(forward, up)\n        right /= np.linalg.norm(right)\n\n        # recompute up from the orthonormal right/forward pair\n        up = np.cross(right, forward)\n\n        m = np.eye(4, 4)\n        m[:3, 0] = right\n        m[:3, 1] = -up\n        m[:3, 2] = forward\n        m[:3, 3] = eye\n\n        return cls.from_matrix(m).inverse()\n"
  },
  {
    "path": "src/gd/vis.py",
    "content": "\"\"\"Render volumes, point clouds, and grasp detections in rviz.\"\"\"\n\nimport matplotlib.colors\nimport numpy as np\nfrom sensor_msgs.msg import PointCloud2\nimport rospy\nfrom rospy import Publisher\nfrom visualization_msgs.msg import Marker, MarkerArray\n\nfrom gd.utils import ros_utils, workspace_lines\nfrom gd.utils.transform import Transform, Rotation\n\n\ncmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"RedGreen\", [\"r\", \"g\"])\nDELETE_MARKER_MSG = Marker(action=Marker.DELETEALL)\nDELETE_MARKER_ARRAY_MSG = MarkerArray(markers=[DELETE_MARKER_MSG])\n\n\ndef draw_workspace(size):\n    scale = size * 0.005\n    pose = Transform.identity()\n    scale = [scale, 0.0, 0.0]\n    color = [0.5, 0.5, 0.5]\n    msg = _create_marker_msg(Marker.LINE_LIST, \"task\", pose, scale, color)\n    msg.points = [ros_utils.to_point_msg(point) for point in workspace_lines(size)]\n    pubs[\"workspace\"].publish(msg)\n\n\ndef draw_tsdf(vol, voxel_size, threshold=0.01):\n    msg = _create_vol_msg(vol, voxel_size, threshold)\n    pubs[\"tsdf\"].publish(msg)\n\n\ndef draw_points(points):\n    msg = ros_utils.to_cloud_msg(points, frame=\"task\")\n    pubs[\"points\"].publish(msg)\n\n\ndef draw_quality(vol, voxel_size, threshold=0.01):\n    msg = _create_vol_msg(vol, voxel_size, threshold)\n    pubs[\"quality\"].publish(msg)\n\n\ndef draw_volume(vol, voxel_size, threshold=0.01):\n    msg = _create_vol_msg(vol, voxel_size, threshold)\n    pubs[\"debug\"].publish(msg)\n\n\ndef draw_grasp(grasp, score, finger_depth):\n    radius = 0.1 * finger_depth\n    w, d = grasp.width, finger_depth\n    color = cmap(float(score))\n\n    markers = []\n\n    # left finger\n    pose = grasp.pose * Transform(Rotation.identity(), [0.0, -w / 2, d / 2])\n    scale = [radius, radius, d]\n    msg = _create_marker_msg(Marker.CYLINDER, \"task\", pose, scale, color)\n    msg.id = 0\n    markers.append(msg)\n\n    # right finger\n    pose = grasp.pose * Transform(Rotation.identity(), 
[0.0, w / 2, d / 2])\n    scale = [radius, radius, d]\n    msg = _create_marker_msg(Marker.CYLINDER, \"task\", pose, scale, color)\n    msg.id = 1\n    markers.append(msg)\n\n    # wrist\n    pose = grasp.pose * Transform(Rotation.identity(), [0.0, 0.0, -d / 4])\n    scale = [radius, radius, d / 2]\n    msg = _create_marker_msg(Marker.CYLINDER, \"task\", pose, scale, color)\n    msg.id = 2\n    markers.append(msg)\n\n    # palm\n    pose = grasp.pose * Transform(\n        Rotation.from_rotvec(np.pi / 2 * np.r_[1.0, 0.0, 0.0]), [0.0, 0.0, 0.0]\n    )\n    scale = [radius, radius, w]\n    msg = _create_marker_msg(Marker.CYLINDER, \"task\", pose, scale, color)\n    msg.id = 3\n    markers.append(msg)\n\n    pubs[\"grasp\"].publish(MarkerArray(markers=markers))\n\n\ndef draw_grasps(grasps, scores, finger_depth):\n    markers = []\n    for i, (grasp, score) in enumerate(zip(grasps, scores)):\n        msg = _create_grasp_marker_msg(grasp, score, finger_depth)\n        msg.id = i\n        markers.append(msg)\n    msg = MarkerArray(markers=markers)\n    pubs[\"grasps\"].publish(msg)\n\n\ndef clear():\n    pubs[\"workspace\"].publish(DELETE_MARKER_MSG)\n    pubs[\"tsdf\"].publish(ros_utils.to_cloud_msg(np.array([]), frame=\"task\"))\n    pubs[\"points\"].publish(ros_utils.to_cloud_msg(np.array([]), frame=\"task\"))\n    clear_quality()\n    pubs[\"grasp\"].publish(DELETE_MARKER_ARRAY_MSG)\n    clear_grasps()\n    pubs[\"debug\"].publish(ros_utils.to_cloud_msg(np.array([]), frame=\"task\"))\n\n\ndef clear_quality():\n    pubs[\"quality\"].publish(ros_utils.to_cloud_msg(np.array([]), frame=\"task\"))\n\n\ndef clear_grasps():\n    pubs[\"grasps\"].publish(DELETE_MARKER_ARRAY_MSG)\n\n\ndef _create_publishers():\n    pubs = dict()\n    pubs[\"workspace\"] = Publisher(\"/workspace\", Marker, queue_size=1, latch=True)\n    pubs[\"tsdf\"] = Publisher(\"/tsdf\", PointCloud2, queue_size=1, latch=True)\n    pubs[\"points\"] = Publisher(\"/points\", PointCloud2, queue_size=1, 
latch=True)\n    pubs[\"quality\"] = Publisher(\"/quality\", PointCloud2, queue_size=1, latch=True)\n    pubs[\"grasp\"] = Publisher(\"/grasp\", MarkerArray, queue_size=1, latch=True)\n    pubs[\"grasps\"] = Publisher(\"/grasps\", MarkerArray, queue_size=1, latch=True)\n    pubs[\"debug\"] = Publisher(\"/debug\", PointCloud2, queue_size=1, latch=True)\n    return pubs\n\n\ndef _create_marker_msg(marker_type, frame, pose, scale, color):\n    msg = Marker()\n    msg.header.frame_id = frame\n    msg.header.stamp = rospy.Time()\n    msg.type = marker_type\n    msg.action = Marker.ADD\n    msg.pose = ros_utils.to_pose_msg(pose)\n    msg.scale = ros_utils.to_vector3_msg(scale)\n    msg.color = ros_utils.to_color_msg(color)\n    return msg\n\n\ndef _create_vol_msg(vol, voxel_size, threshold):\n    vol = vol.squeeze()\n    if type(threshold) is tuple:\n        idx_arr = np.logical_and(vol > threshold[0], vol < threshold[1])\n    else:\n        idx_arr = vol > threshold\n        \n    points = np.argwhere(idx_arr) * voxel_size\n    values = np.expand_dims(vol[idx_arr], 1)\n    return ros_utils.to_cloud_msg(points, values, frame=\"task\")\n\n\ndef _create_grasp_marker_msg(grasp, score, finger_depth):\n    radius = 0.1 * finger_depth\n    w, d = grasp.width, finger_depth\n    scale = [radius, 0.0, 0.0]\n    color = list(cmap(float(score)))\n    if score < 0.01:\n        color = [0., 0., 1, 0.8]\n    msg = _create_marker_msg(Marker.LINE_LIST, \"task\", grasp.pose, scale, color)\n    msg.points = [ros_utils.to_point_msg(point) for point in _gripper_lines(w, d)]\n    return msg\n\n\ndef _gripper_lines(width, depth):\n    return [\n        [0.0, 0.0, -depth / 2.0],\n        [0.0, 0.0, 0.0],\n        [0.0, -width / 2.0, 0.0],\n        [0.0, -width / 2.0, depth],\n        [0.0, width / 2.0, 0.0],\n        [0.0, width / 2.0, depth],\n        [0.0, -width / 2.0, 0.0],\n        [0.0, width / 2.0, 0.0],\n    ]\n\n\npubs = _create_publishers()\n"
  },
  {
    "path": "src/nr/asset.py",
    "content": "import os\nimport numpy as np\n\nDATA_ROOT_DIR = '../../data/traindata_example/'\nVGN_TRAIN_ROOT = DATA_ROOT_DIR + 'giga_hemisphere_train_demo'\n\ndef add_scenes(root, type, filter_list=None):\n    scene_names = []\n    splits = os.listdir(root)\n    for split in splits:\n        if filter_list is not None and split not in filter_list: continue\n        scenes = os.listdir(os.path.join(root, split))\n        scene_names += [f'vgn_syn/train/{type}/{split}/{fn}/w_0.8' for fn in scenes]\n    return scene_names\nif os.path.exists(VGN_TRAIN_ROOT):\n    vgn_pile_train_scene_names = sorted(add_scenes(os.path.join(VGN_TRAIN_ROOT, 'pile_full'), 'pile'), key=lambda x: x.split('/')[4])\n    vgn_pack_train_scene_names = sorted(add_scenes(os.path.join(VGN_TRAIN_ROOT, 'packed_full'), 'packed'), key=lambda x: x.split('/')[4])\n    num_scenes_pile = len(vgn_pile_train_scene_names)\n    num_scenes_pack = len(vgn_pack_train_scene_names)\n    vgn_pack_train_scene_names = vgn_pack_train_scene_names[:num_scenes_pack]\n    num_val_pile = 1\n    num_val_pack = 1\n    print(f\"total: {num_scenes_pile + num_scenes_pack} pile: {num_scenes_pile} pack: {num_scenes_pack}\")\n    vgn_val_scene_names = vgn_pile_train_scene_names[-num_val_pile:]  + vgn_pack_train_scene_names[-num_val_pack:]\n    vgn_train_scene_names = vgn_pile_train_scene_names[:-num_val_pile]  + vgn_pack_train_scene_names[:-num_val_pack]\n\nVGN_SDF_DIR = DATA_ROOT_DIR + \"giga_hemisphere_train_demo/scenes_tsdf_dep-nor\"\n\nVGN_TEST_ROOT = ''\nVGN_TEST_ROOT_PILE = os.path.join(VGN_TEST_ROOT,'pile')\nVGN_TEST_ROOT_PACK = os.path.join(VGN_TEST_ROOT,'packed')\nif os.path.exists(VGN_TEST_ROOT):\n    fns = os.listdir(VGN_TEST_ROOT_PILE)\n    vgn_pile_test_scene_names = [f'vgn_syn/test/pile//{fn}/w_0.8' for fn in fns]\n    fns = os.listdir(VGN_TEST_ROOT_PACK)\n    vgn_pack_test_scene_names = [f'vgn_syn/test/packed//{fn}/w_0.8' for fn in fns]\n\n    vgn_test_scene_names = vgn_pile_test_scene_names + 
vgn_pack_test_scene_names\n\nCSV_ROOT = DATA_ROOT_DIR + 'GIGA_demo'\nimport pandas as pd\nfrom pathlib import Path\nimport time\nt0 = time.time()\nVGN_PACK_TRAIN_CSV = pd.read_csv(Path(CSV_ROOT + '/data_packed_train_processed_dex_noise/grasps.csv'))\nVGN_PILE_TRAIN_CSV = pd.read_csv(Path(CSV_ROOT + '/data_pile_train_processed_dex_noise/grasps.csv'))\nprint(f\"finished loading csv in {time.time() - t0} s\")\nVGN_PACK_TEST_CSV = None \nVGN_PILE_TEST_CSV = None \n"
  },
  {
    "path": "src/nr/configs/nrvgn_sdf.yaml",
    "content": "name: test\ngroup_name: \"\"\n# network\nfix_seed: true\nnetwork:  grasp_nerf\ninit_net_type: cost_volume\nagg_net_type: neus\nuse_hierarchical_sampling: true\nuse_depth: true\nuse_depth_loss: true\ndepth_loss_weight: 1.0\ndist_decoder_cfg:\n  use_vis: false\nfine_dist_decoder_cfg:\n  use_vis: false\nray_batch_num: 4096 #2048\nsample_volume: true\nrender_rgb: true\nvolume_type: [sdf]\n\nvolume_resolution: 40\ndepth_sample_num: 40\nfine_depth_sample_num: 40\nagg_net_cfg:\n  sample_num: 40\n  init_s: 0.3\n  fix_s: 0\nfine_agg_net_cfg:\n  sample_num: 40\n  init_s: 0.3\n  fix_s: 0\nvis_vol: false\n\n# loss\nloss: [render, depth, sdf, vgn] \nval_metric: [psnr_ssim, vis_img]\nkey_metric_name: loss_vgn # depth_mae psnr_nr_fine\nkey_metric_prefer: lower\nuse_dr_loss: false\nuse_dr_fine_loss: false\nuse_nr_fine_loss: true\nrender_depth: true\ndepth_correct_ratio: 1.0\ndepth_thresh: 0.8\nuse_dr_prediction: false\n\n# lr\ntotal_step: 500000\nval_interval: 5000\nlr_type: exp_decay\nlr_cfg:\n  lr_init: 1.0e-4\n  decay_step: 100000\n  decay_rate: 0.5\nnr_initial_training_steps: 0\n\n# dataset\ntrain_dataset_type: gen\ntrain_dataset_cfg:\n  resolution_type: hr\n  type2sample_weights: { vgn_syn: 100 }  \n  train_database_types: ['vgn_syn']  \n  aug_pixel_center_sample: true\n  aug_view_select_type: hard\n  ref_pad_interval: 32\n  use_src_imgs: true\n  num_input_views: 6\n\nval_set_list:\n  -\n    name: vgn_syn\n    type: gen\n    val_scene_num: -1 # if the set, use val scene list in asset.py\n    cfg:\n      use_src_imgs: true\n      num_input_views: 6"
  },
  {
    "path": "src/nr/dataset/database.py",
    "content": "import abc\nimport glob\nimport json\nimport os\nimport re\nfrom pathlib import Path\nimport sys\nimport open3d as o3d\nfrom utils.draw_utils import draw_gripper_o3d\nos.environ[\"OPENCV_IO_ENABLE_OPENEXR\"]=\"1\"\nimport cv2\nimport numpy as np\nfrom skimage.io import imread, imsave\n\nfrom asset import VGN_TRAIN_ROOT, VGN_TEST_ROOT, VGN_PILE_TRAIN_CSV,  VGN_PACK_TRAIN_CSV, VGN_PILE_TEST_CSV,VGN_PACK_TEST_CSV, VGN_SDF_DIR\n\nfrom utils.draw_utils import draw_cube, draw_axis, draw_points, draw_gripper, draw_world_points\nsys.path.append(\"../\")\nfrom gd.utils.transform import Rotation, Transform\n\nclass BaseDatabase(abc.ABC):\n    def __init__(self, database_name):\n        self.database_name = database_name\n\n    @abc.abstractmethod\n    def get_image(self, img_id):\n        pass\n\n    @abc.abstractmethod\n    def get_K(self, img_id):\n        pass\n\n    @abc.abstractmethod\n    def get_pose(self, img_id):\n        pass\n\n    @abc.abstractmethod\n    def get_img_ids(self,check_depth_exist=False):\n        pass\n\n    @abc.abstractmethod\n    def get_bbox(self, img_id):\n        pass\n\n    @abc.abstractmethod\n    def get_depth(self,img_id):\n        pass\n\n    @abc.abstractmethod\n    def get_mask(self,img_id):\n        pass\n\n    @abc.abstractmethod\n    def get_depth_range(self,img_id):\n        pass\n\nclass GraspSynDatabase(BaseDatabase):\n    def __init__(self, database_name):\n        super().__init__(database_name)\n        self.debug_save_dir = Path(f'output/nrvgn/{database_name}')\n        tp, split, scene_type, scene_split, scene_id, background_size = database_name.split('/')\n        background, size = background_size.split('_')\n        self.split = split\n        self.scene_id = scene_id\n        self.scene_type = scene_type\n        self.tp = tp\n        self.downSample = float(size) \n        tp2wh = {\n            'vgn_syn': (640, 360)\n        }\n        src_wh = tp2wh[tp]\n        self.img_wh = (np.array(src_wh) * 
self.downSample).astype(int)\n\n        root_dir = {'test':     {\n                                 'vgn_syn':  VGN_TEST_ROOT,\n                                },\n                    'train':    {\n                                 'vgn_syn': VGN_TRAIN_ROOT,\n                                },\n                    }\n\n        if tp == 'vgn_syn':\n            self.root_dir = Path(root_dir[split][tp]) / (scene_type + \"_full\") / scene_split / scene_id \n        else:\n            raise NotImplementedError\n\n        tp2len = {'grasp_syn': 256,\n                 'vgn_syn':24}\n        self.depth_img_ids = self.img_ids = list(range(tp2len[tp]))\n        self.blender2opencv = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])\n        \n        self.K = np.array([[\n            892.62,\n            0.0,\n            639.5\n            ],\n            [\n                0.0,\n                892.62,\n                359.5\n            ],\n            [\n                0.0,\n                0.0,\n                1.0\n            ]]) \n        self.K[:2] = self.K[:2] * self.downSample\n        if self.tp == 'vgn_syn':\n            self.K[:2] /= 2\n        self.poses_ori = np.load(self.root_dir / 'camera_pose.npy')\n        self.poses = [np.linalg.inv(p @ self.blender2opencv)[:3,:] for p in self.poses_ori]\n        \n        \n        self.depth_thres = {\n            'grasp_syn': 1.5,\n            'vgn_syn': 0.8,\n        }\n        self.fixed_depth_range = [0.2, 0.8]\n\n        tp2bbox3d = {'grasp_syn': [[-0.35, -0.45, 0],\n                                    [0.15, 0.05, 0.5]],\n                    'vgn_syn': [[-0.15, -0.15, -0.05],\n                                [0.15, 0.15, 0.25]]}\n        self.bbox3d = tp2bbox3d[tp]\n\n    def get_split(self):\n        return self.split\n\n    def get_image(self, img_id):\n        img_filename = os.path.join(self.root_dir,\n                            f'rgb/{img_id:04d}.png')\n        img = 
imread(img_filename)[:,:,:3]\n        img = cv2.resize(img, self.img_wh)\n        #img[self.get_mask(img_id)] = 255\n        return np.asarray(img, dtype=np.float32)\n\n    def get_K(self, img_id):\n        return self.K.astype(np.float32).copy()\n\n    def get_pose(self, img_id):\n        return self.poses[img_id].astype(np.float32).copy()\n\n    def get_img_ids(self,check_depth_exist=False):\n        if check_depth_exist: return self.depth_img_ids\n        return self.img_ids\n\n    def get_bbox3d(self, vis=False):\n        if vis:\n            img_id = 0\n            img = self.get_image(img_id)\n            cRb = self.poses[img_id][:3,:3]\n            ctb = self.poses[img_id][:3,3]\n            l = self.bbox3d[1][0] - self.bbox3d[0][0]\n            img = draw_cube(img, cRb, ctb, self.K, length=l, bias=self.bbox3d[0])\n            if not self.debug_save_dir.exists():\n                self.debug_save_dir.mkdir(parents=True)\n            imsave(str(self.debug_save_dir / 'bbox3d.jpg'), img)\n        return self.bbox3d\n\n    def get_bbox(self, img_id, vis=False):\n        mask = self.get_mask(img_id,'obj')\n        xs,ys=np.nonzero(mask)\n        x_min,x_max=np.min(xs,0),np.max(xs,0)\n        y_min,y_max=np.min(ys,0),np.max(ys,0)\n\n        if vis:\n            img = self.get_image(img_id)\n            img = cv2.rectangle(img, (y_min, x_min), (y_max, x_max), (255,0,0), 2)\n\n            imsave(str(self.debug_save_dir / 'box.jpg'), img)\n\n        return [x_min,x_max,y_min,y_max]\n        \n    def _depth_existence(self,img_id):\n        return True\n\n    def get_depth(self, img_id):\n        depth_filename = os.path.join(self.root_dir,\n                    f'depth/{img_id:04d}.exr')\n        depth_h = cv2.imread(depth_filename, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)[:,:,0]\n        depth_h = cv2.resize(depth_h, self.img_wh, interpolation=cv2.INTER_NEAREST)\n\n        return depth_h\n\n    def get_mask(self, img_id, tp='desk'):\n        if tp == 'desk':\n       
     mask = self.get_depth(img_id) < self.depth_thres[self.tp]\n            return mask.astype(bool)\n        elif tp == 'obj':\n            mask_filename = os.path.join(self.root_dir,\n                        f'mask/{img_id:04d}.exr')\n            mask = cv2.imread(mask_filename, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)[:,:,0]\n            mask = cv2.resize(mask, self.img_wh, interpolation=cv2.INTER_NEAREST)\n            cv2.imwrite('mask.jpg', mask * 256)\n\n            return ~(mask.astype(bool))\n        else:\n            return np.ones((self.img_wh[1], self.img_wh[0]))\n\n    def get_depth_range(self,img_id, fixed=True):\n        if fixed:\n            return np.array(self.fixed_depth_range)\n        depth = self.get_depth(img_id)\n        nf = [max(0, np.min(depth)), min(self.depth_thres[self.tp], np.max(depth))]\n        return np.array(nf)\n\n    def get_sdf(self):\n        sdf_volume = np.load(Path(VGN_SDF_DIR) / f'{self.scene_id}.npz')['grid'][0]\n        return sdf_volume * 2 - 1\n\nclass VGNSynDatabase(GraspSynDatabase):\n    def __init__(self, database_name):\n        super().__init__(database_name)\n        split = self.get_split()\n\n        if self.scene_type == 'packed':\n            csv = VGN_PACK_TEST_CSV if split == 'test' else VGN_PACK_TRAIN_CSV\n        elif self.scene_type == 'pile':\n            csv = VGN_PILE_TEST_CSV if split == 'test' else VGN_PILE_TRAIN_CSV\n        else:\n            return\n\n        self.df = csv\n        self.df = self.df[self.df[\"scene_id\"] == self.scene_id]\n        assert len(self.df) > 0, f\"empty grasping info {database_name}\"\n\n    def visualize_grasping(self, pos, rot, w, label=None, img_id=3,save_img=False):\n        voxel_size = 0.3 / 40\n        pts_w = pos * voxel_size  \n        width = w * voxel_size\n        \n        img = self.get_image(img_id)\n        \n        t = np.array([[-0.15, -0.15, -0.05]]).repeat(pts_w.shape[0], axis=0)\n        pts_b = pts_w + t\n        \n        cRb = 
self.poses[img_id][:3,:3]\n        ctb = self.poses[img_id][:3,3] # + np.array([-0.15, -0.15, -0.05])\n\n        for gid in range(pts_w.shape[0]):\n            if label is not None and label[gid] == 0:\n                continue\n            btg = pts_b[gid]\n            wRg = rot[gid]\n            bRg = wRg \n            bTg = np.eye(4)\n            bTg[:3,:3] = bRg\n            bTg[:3,3] = btg\n            cTb = self.poses[img_id]\n            cTg = cTb @ bTg\n            img = draw_gripper(img, cTg[:3,:3], cTg[:3,3], self.K, width[gid], 2)\n            img = draw_world_points(img, pts_b[gid], cRb, ctb, self.K)\n\n        if save_img:\n            save_dir = str(self.debug_save_dir / f'gripper_test-{img_id}.jpg')\n            print(\"save to\", save_dir)\n            imsave(save_dir, img)\n        return img \n\n    def visualize_grasping_3d(self, pos, rot, w, label=None, voxel_size = 0.3 / 40):\n        pts_w = pos * voxel_size\n        width = w * voxel_size\n\n        geometry = o3d.geometry.TriangleMesh()\n        for gid in range(pts_w.shape[0]):\n            if label is not None and label[gid] == 0:\n                continue\n            wRg = rot[gid]\n            y_ccw_90 = np.array([[0, 0, -1], [0, 1,0], [1, 0, 0]])\n            _R = wRg @ y_ccw_90\n            _t = pts_w[gid]\n\n            geometry_gripper = draw_gripper_o3d(_R, _t, width[gid])\n            geometry += geometry_gripper\n\n        o3d.io.write_triangle_mesh(str(self.debug_save_dir / f'gripper.ply'), geometry)\n\n    def get_grasp_info(self):\n        pos = self.df[[\"i\",\"j\",\"k\"]].to_numpy(np.single)\n        index = np.round(pos).astype(np.int64)\n        l = pos.shape[0]\n        width = self.df[[\"width\"]].to_numpy(np.single).reshape(l)\n        label = self.df[[\"label\"]].to_numpy(np.float32).reshape(l)\n        rotations = np.empty((l, 2, 4), dtype=np.single)\n        q = self.df[[\"qx\",\"qy\",\"qz\",\"qw\"]].to_numpy(np.single)\n        ori = Rotation.from_quat(q)\n        R 
= Rotation.from_rotvec(np.pi * np.r_[0.0, 0.0, 1.0])\n        rotations[:,0] = ori.as_quat()\n        rotations[:,1] = (ori * R).as_quat()\n\n        # for i in range(4):\n        #     self.visualize_grasping(pos, ori.as_matrix(), width, label, i)\n        # exit()\n        return (index, label, rotations, width)\n\n\ndef parse_database_name(database_name:str)->BaseDatabase:\n    name2database={\n        'vgn_syn': VGNSynDatabase,\n    }\n    database_type = database_name.split('/')[0]\n    if database_type in name2database:\n        return name2database[database_type](database_name)\n    else:\n        raise NotImplementedError\n\ndef get_database_split(database: BaseDatabase, split_type='val'):\n    database_name = database.database_name\n    if split_type.startswith('val'):\n        splits = split_type.split('_')\n        depth_valid = not(len(splits)>1 and splits[1]=='all')\n        if database_name.startswith('vgn'):\n            val_ids = database.get_img_ids()[2:24:8]# TODO\n            train_ids = [img_id for img_id in database.get_img_ids(check_depth_exist=depth_valid) if img_id not in val_ids]\n        else:\n            raise NotImplementedError\n    elif split_type.startswith('test'):\n        splits = split_type.split('_')\n        depth_valid = not(len(splits)>1 and splits[1]=='all')\n        if database_name.startswith('vgn'):\n            val_ids = database.get_img_ids()[2:24:8] + [0]# TODO\n            train_ids = [img_id for img_id in database.get_img_ids(check_depth_exist=depth_valid) if img_id not in val_ids]\n        else:\n            raise NotImplementedError\n    else:\n        raise NotImplementedError\n    return train_ids, val_ids"
  },
  {
    "path": "src/nr/dataset/name2dataset.py",
    "content": "from dataset.train_dataset import GeneralRendererDataset, FinetuningRendererDataset\n\nname2dataset={\n    'gen': GeneralRendererDataset,\n    'ft': FinetuningRendererDataset,\n}"
  },
  {
    "path": "src/nr/dataset/train_dataset.py",
    "content": "from torch.utils.data import Dataset\nfrom asset import *\nfrom dataset.database import parse_database_name, get_database_split\nimport numpy as np\n\nfrom utils.base_utils import get_coords_mask\nfrom utils.dataset_utils import set_seed\nfrom utils.imgs_info import build_imgs_info, random_crop, random_flip, pad_imgs_info, imgs_info_slice, \\\n    imgs_info_to_torch, grasp_info_to_torch\nfrom utils.view_select import compute_nearest_camera_indices\n\ndef select_train_ids_for_real_estate(img_ids):\n    num_frames = len(img_ids)\n    window_size = 32\n    shift = np.random.randint(low=-1, high=2)\n    id_render = np.random.randint(low=4, high=num_frames - 4 - 1)\n\n    right_bound = min(id_render + window_size + shift, num_frames - 1)\n    left_bound = max(0, right_bound - 2 * window_size)\n    candidate_ids = np.arange(left_bound, right_bound)\n    # remove the query frame itself with high probability\n    if np.random.choice([0, 1], p=[0.01, 0.99]):\n        candidate_ids = candidate_ids[candidate_ids != id_render]\n\n    id_feat = np.random.choice(candidate_ids, size=min(8, len(candidate_ids)), replace=False)\n    img_ids = np.asarray(img_ids)\n    return img_ids[id_render], img_ids[id_feat]\n\ndef add_depth_offset(depth,mask,region_min,region_max,offset_min,offset_max,noise_ratio,depth_length):\n    coords = np.stack(np.nonzero(mask), -1)[:, (1, 0)]\n    length = np.max(coords, 0) - np.min(coords, 0)\n    center = coords[np.random.randint(0, coords.shape[0])]\n    lx, ly = np.random.uniform(region_min, region_max, 2) * length\n    diff = coords - center[None, :]\n    mask0 = np.abs(diff[:, 0]) < lx\n    mask1 = np.abs(diff[:, 1]) < ly\n    masked_coords = coords[mask0 & mask1]\n    global_offset = np.random.uniform(offset_min, offset_max) * depth_length\n    if np.random.random() < 0.5:\n        global_offset = -global_offset\n    local_offset = np.random.uniform(-noise_ratio, noise_ratio, masked_coords.shape[0]) * depth_length + global_offset\n   
 depth[masked_coords[:, 1], masked_coords[:, 0]] += local_offset\n\ndef build_src_imgs_info_select(database, ref_ids, ref_ids_all, cost_volume_nn_num, pad_interval=-1, self_ref=True):\n    if not self_ref:\n        # ref_ids - selected ref ids for rendering\n        ref_idx_exp = compute_nearest_camera_indices(database, ref_ids, ref_ids_all)\n        ref_idx_exp = ref_idx_exp[:, 1:1 + cost_volume_nn_num]\n        ref_ids_all = np.asarray(ref_ids_all)\n        ref_ids_exp = ref_ids_all[ref_idx_exp]  # rfn,nn\n        ref_ids_exp_ = ref_ids_exp.flatten()\n        ref_ids = np.asarray(ref_ids)\n        ref_ids_in = np.unique(np.concatenate([ref_ids_exp_, ref_ids]))  # rfn'\n        mask0 = ref_ids_in[None, :] == ref_ids[:, None]  # rfn,rfn'\n        ref_idx_, ref_idx = np.nonzero(mask0)\n        ref_real_idx = ref_idx[np.argsort(ref_idx_)]  # sort\n\n        rfn, nn = ref_ids_exp.shape\n        mask1 = ref_ids_in[None, :] == ref_ids_exp.flatten()[:, None]  # nn*rfn,rfn'\n        ref_cv_idx_, ref_cv_idx = np.nonzero(mask1)\n        ref_cv_idx = ref_cv_idx[np.argsort(ref_cv_idx_)]  # sort\n        ref_cv_idx = ref_cv_idx.reshape([rfn, nn])\n        \n    else: # not using extra view to construct cost volume\n        ref_ids_in = ref_ids\n        ref_real_idx = np.asarray(list(range(len(ref_ids))))\n        ref_cv_idx = np.asarray([ref_real_idx for _ in range(len(ref_ids))])\n    #print(\"ref_ids\", ref_ids, \"ref_ids_in\", ref_ids_in, \"ref_cv_idx\", ref_cv_idx, \"ref_real_idx\", ref_real_idx)\n    is_aligned = not database.database_name.startswith('space')\n    ref_imgs_info = build_imgs_info(database, ref_ids_in, pad_interval, is_aligned)\n    return ref_imgs_info, ref_cv_idx, ref_real_idx\n\nclass GeneralRendererDataset(Dataset):\n    default_cfg={\n        'train_database_types':['dtu_train','space','real_iconic','real_estate','gso'],\n        'type2sample_weights': {'gso':20, 'dtu_train':20, 'real_iconic':20, 'space':10, 'real_estate':10},\n        
'val_database_name': 'nerf_synthetic/lego/black_800',\n        'val_database_split_type': 'val',\n        \n        \"total_views\": 24,\n        \"num_input_views\": 3,\n        'min_wn': 3,\n        'max_wn': 4,\n        'ref_pad_interval': 16,\n        'train_ray_num': 512,\n        'foreground_ratio': 0.5,\n        'resolution_type': 'hr',\n        \"use_consistent_depth_range\": True,\n        'use_depth_loss_for_all': False,\n        \"use_depth\": True,\n        \"use_src_imgs\": False,\n        \"cost_volume_nn_num\": 3,\n\n        \"aug_gso_shrink_range_prob\": 0.5,\n        \"aug_depth_range_prob\": 0.05, #TODO\n        'aug_depth_range_min': 0.95,\n        'aug_depth_range_max': 1.05,\n        \"aug_use_depth_offset\": True,\n        \"aug_depth_offset_prob\": 0.25,\n        \"aug_depth_offset_region_min\": 0.05,\n        \"aug_depth_offset_region_max\": 0.1,\n        'aug_depth_offset_min': 0.5,\n        'aug_depth_offset_max': 1.0,\n        'aug_depth_offset_local': 0.1,\n        \"aug_use_depth_small_offset\": True,\n        \"aug_use_global_noise\": True,\n        \"aug_global_noise_prob\": 0.5,\n        \"aug_depth_small_offset_prob\": 0.5,\n        \"aug_forward_crop_size\": (400,600),\n        \"aug_pixel_center_sample\": False,\n        \"aug_view_select_type\": \"easy\",\n\n        \"use_consistent_min_max\": False,\n        \"revise_depth_range\": False,\n        'load_sdf': True,\n        'exclude_ref_views': False,\n    }\n    def __init__(self, cfg, is_train):\n        self.cfg={**self.default_cfg,**cfg}\n        self.is_train = is_train\n        if is_train:\n            self.num=999999\n            self.type2scene_names,self.database_types,self.database_weights = {}, [], []\n            if self.cfg['resolution_type']=='hr':\n                type2scene_names={'vgn_syn': vgn_train_scene_names}\n            elif self.cfg['resolution_type']=='lr':\n                type2scene_names={'vgn_syn': vgn_train_scene_names}\n            else:\n         
       raise NotImplementedError\n\n            for database_type in self.cfg['train_database_types']:\n                self.type2scene_names[database_type] = type2scene_names[database_type]\n                self.database_types.append(database_type)\n                self.database_weights.append(self.cfg['type2sample_weights'][database_type])\n                print(f\"training scene num: {len(type2scene_names[database_type])}\")\n            assert(len(self.database_types)>0)\n            # normalize weights\n            self.database_weights=np.asarray(self.database_weights)\n            self.database_weights=self.database_weights/np.sum(self.database_weights)\n        else:\n            self.database = parse_database_name(self.cfg['val_database_name'])\n            self.ref_ids, self.que_ids = get_database_split(self.database,self.cfg['val_database_split_type'])\n            self.num=len(self.que_ids)\n\n    def get_database_ref_que_ids(self, index):\n        if self.is_train:\n            database_type = np.random.choice(self.database_types,1,False,p=self.database_weights)[0]\n            database_scene_name = np.random.choice(self.type2scene_names[database_type])\n            database = parse_database_name(database_scene_name)\n            # if there is no depth for all views, we repeat random sample until find a scene with depth\n            while True:\n                ref_ids = database.get_img_ids(check_depth_exist=True)\n                if len(ref_ids)==0:\n                    database_type = np.random.choice(self.database_types, 1, False, self.database_weights)[0]\n                    database_scene_name = np.random.choice(self.type2scene_names[database_type])\n                    database = parse_database_name(database_scene_name)\n                else: break\n            que_id = np.random.choice(ref_ids)\n            if database.database_name.startswith('real_estate'):\n                que_id, ref_ids = select_train_ids_for_real_estate(ref_ids)\n        
else:\n            database = self.database\n            que_id, ref_ids = self.que_ids[index], self.ref_ids\n        return database, que_id, np.asarray(ref_ids)\n\n    def select_working_views_impl(self, database_name, dist_idx, ref_num):\n        if self.cfg['aug_view_select_type']=='default':\n            if database_name.startswith('space') or database_name.startswith('real_estate'):\n                pass\n            elif database_name.startswith('gso'):\n                pool_ratio = np.random.randint(1, 5)\n                dist_idx = dist_idx[:min(ref_num * pool_ratio, 32)]\n            elif database_name.startswith('real_iconic'):\n                pool_ratio = np.random.randint(1, 5)\n                dist_idx = dist_idx[:min(ref_num * pool_ratio, 32)]\n            elif database_name.startswith('dtu_train'):\n                pool_ratio = np.random.randint(1, 3)\n                dist_idx = dist_idx[:min(ref_num * pool_ratio, 12)]\n            else:\n                raise NotImplementedError\n        elif self.cfg['aug_view_select_type']=='easy':\n            if database_name.startswith('space') or database_name.startswith('real_estate'):\n                pass\n            elif database_name.startswith('gso'):\n                pool_ratio = 3\n                dist_idx = dist_idx[:min(ref_num * pool_ratio, 24)]\n            elif database_name.startswith('real_iconic'):\n                pool_ratio = np.random.randint(1, 4)\n                dist_idx = dist_idx[:min(ref_num * pool_ratio, 20)]\n            elif database_name.startswith('dtu_train'):\n                pool_ratio = np.random.randint(1, 3)\n                dist_idx = dist_idx[:min(ref_num * pool_ratio, 12)]\n            else:\n                raise NotImplementedError\n        elif self.cfg['aug_view_select_type']=='hard':\n            if database_name.startswith('grasp'):\n                dist_idx = dist_idx[80:]  \n            elif database_name.startswith('vgn'):\n                dist_idx = 
dist_idx[8:] \n            else:\n                raise NotImplementedError\n        return dist_idx\n\n    def get_ref_que_ids(self, target_id):\n        N = self.cfg['total_views']\n        interval = list(range(0, N, N // self.cfg['num_input_views']))\n        res = [(target_id + i) % N for i in interval]\n        que_id = ( np.random.choice(res) + np.random.randint(1, N // self.cfg['num_input_views']) ) % N \n        return res, que_id\n\n    def select_working_views(self, database, que_id, ref_ids):\n        database_name = database.database_name\n        dist_idx = compute_nearest_camera_indices(database, [que_id], ref_ids)[0]\n        if self.is_train:\n            if np.random.random()>0.02: # 2% chance to include que image\n                dist_idx = dist_idx[ref_ids[dist_idx]!=que_id]\n            ref_num = np.random.randint(self.cfg['min_wn'], self.cfg['max_wn'])\n            dist_idx = self.select_working_views_impl(database_name,dist_idx,ref_num)\n            \n            if database_name.startswith('grasp'):\n                ref_id = np.random.randint(0, 256)\n                ref_ids = [ref_id, (ref_id + 80) % 256, (ref_id + 160) % 256]\n            elif database_name.startswith('vgn'):\n                ref_ids, que_id = self.get_ref_que_ids(np.random.randint(0, self.cfg['total_views']))\n            elif not database_name.startswith('real_estate'):\n                # we already select working views for real estate dataset\n                np.random.shuffle(dist_idx)\n                dist_idx = dist_idx[:ref_num]\n                ref_ids = ref_ids[dist_idx]\n            else:\n                ref_ids = ref_ids[:ref_num]\n        else:\n            if database_name.startswith('vgn'):\n                ref_ids, que_id = self.get_ref_que_ids(que_id)\n            elif database_name.startswith('grasp'):\n                ref_ids = [que_id, (que_id + 80) % 256, (que_id + 160) % 256]\n            else:\n                dist_idx = 
dist_idx[:self.cfg['min_wn']]\n                ref_ids = ref_ids[dist_idx]\n        return ref_ids, que_id\n\n    def depth_range_aug_for_gso(self, depth_range, depth, mask):\n        depth_range_new = depth_range.copy()\n        if np.random.random() < self.cfg['aug_gso_shrink_range_prob']:\n            rfn, _, h, w = depth.shape\n            far_ratios, near_ratios = [], []\n            for rfi in range(rfn):\n                depth_val = depth[rfi][mask[rfi].astype(bool)]\n                depth_val = depth_val[depth_val > 1e-3]\n                depth_val = depth_val[depth_val < 1e4]\n                depth_max = np.max(depth_val) * 1.1\n                depth_min = np.min(depth_val) * 0.9\n                near, far = depth_range[rfi]\n                far_ratio = depth_max / far\n                near_ratio = near / depth_min\n                far_ratios.append(far_ratio)\n                near_ratios.append(near_ratio)\n\n            far_ratio = np.max(far_ratios)\n            near_ratio = np.max(near_ratios)\n            if far_ratio < 1.0: depth_range_new[:, 1] *= np.random.uniform(far_ratio, 1.0)\n            if near_ratio < 1.0: depth_range_new[:, 0] /= np.random.uniform(near_ratio, 1.0)\n\n        if np.random.random()<0.8:\n            ratio0, ratio1 = np.random.uniform(0.025, 0.1, 2)\n            depth_range_new[:, 0] = depth_range_new[:, 0] * (1 - ratio0)\n            depth_range_new[:, 1] = depth_range_new[:, 1] * (1 + ratio1)\n        return depth_range_new\n\n    def random_change_depth_range(self, depth_range, depth, mask, database_name):\n        if database_name.startswith('gso'):\n            depth_range_new = self.depth_range_aug_for_gso(depth_range, depth, mask)\n        else:\n            depth_range_new = depth_range.copy()\n            if np.random.random()<self.cfg['aug_depth_range_prob']:\n                depth_range_new[:,0] *= np.random.uniform(self.cfg['aug_depth_range_min'],1.0)\n                depth_range_new[:,1] *= 
np.random.uniform(1.0,self.cfg['aug_depth_range_max'])\n        return depth_range_new\n\n\n    def add_depth_noise(self,depths,masks,depth_ranges):\n        rfn = depths.shape[0]\n        depths_output = []\n        for rfi in range(rfn):\n            depth, mask, depth_range = depths[rfi,0], masks[rfi,0], depth_ranges[rfi]\n\n            depth = depth.copy()\n            near, far = depth_range\n            depth_length = far - near\n            if self.cfg['aug_use_depth_offset'] and np.random.random() < self.cfg['aug_depth_offset_prob']:\n                add_depth_offset(depth, mask,self.cfg['aug_depth_offset_region_min'],\n                                 self.cfg['aug_depth_offset_region_max'],\n                                 self.cfg['aug_depth_offset_min'],\n                                 self.cfg['aug_depth_offset_max'],\n                                 self.cfg['aug_depth_offset_local'], depth_length)\n            if self.cfg['aug_use_depth_small_offset'] and np.random.random() < self.cfg['aug_depth_small_offset_prob']:\n                add_depth_offset(depth, mask, 0.1, 0.2, 0.01, 0.05, 0.005, depth_length)\n            if self.cfg['aug_use_global_noise'] and np.random.random() < self.cfg['aug_global_noise_prob']:\n                depth += np.random.uniform(-0.005,0.005,depth.shape).astype(np.float32)*depth_length\n            depths_output.append(depth)\n        return np.asarray(depths_output)[:,None,:,:]\n\n    def generate_coords_for_training(self, database, que_imgs_info):\n        if (database.database_name.startswith('real_estate') \\\n                or database.database_name.startswith('real_iconic') \\\n                or database.database_name.startswith('space')) and self.cfg['aug_pixel_center_sample']:\n                que_mask_cur = np.zeros_like(que_imgs_info['masks'][0, 0]).astype(bool)\n                h, w = que_mask_cur.shape\n                center_ratio = 0.8\n                begin_ratio = (1-center_ratio)/2\n
                hb, he = int(h*begin_ratio), int(h*(center_ratio+begin_ratio))\n                wb, we = int(w*begin_ratio), int(w*(center_ratio+begin_ratio))\n                que_mask_cur[hb:he,wb:we] = True\n                coords = get_coords_mask(que_mask_cur, self.cfg['train_ray_num'], 0.9).reshape([1, -1, 2])\n        else:\n            que_mask_cur = que_imgs_info['masks'][0,0]>0\n            coords = get_coords_mask(que_mask_cur, self.cfg['train_ray_num'], self.cfg['foreground_ratio']).reshape([1,-1,2])\n        return coords\n\n    def consistent_depth_range(self, ref_imgs_info, que_imgs_info):\n        depth_range_all = np.concatenate([ref_imgs_info['depth_range'], que_imgs_info['depth_range']], 0)\n        if self.cfg['use_consistent_min_max']:\n            depth_range_all[:, 0] = np.min(depth_range_all)\n            depth_range_all[:, 1] = np.max(depth_range_all)\n        else:\n            range_len = depth_range_all[:, 1] - depth_range_all[:, 0]\n            max_len = np.max(range_len)\n            range_margin = (max_len - range_len) / 2\n            ref_near = depth_range_all[:, 0] - range_margin\n            ref_near = np.max(np.stack([ref_near, depth_range_all[:, 0] * 0.5], -1), 1)\n            depth_range_all[:, 0] = ref_near\n            depth_range_all[:, 1] = ref_near + max_len\n        ref_imgs_info['depth_range'] = depth_range_all[:-1]\n        que_imgs_info['depth_range'] = depth_range_all[-1:]\n\n    def __getitem__(self, index):\n        set_seed(index, self.is_train) #TODO: currently que_id and ref is all randomly sampled\n        database, que_id, ref_ids_all = self.get_database_ref_que_ids(index)\n        ref_ids, new_que_id = self.select_working_views(database, que_id, ref_ids_all)\n        if self.cfg['exclude_ref_views']:\n            que_id = new_que_id\n        # print(que_id, ref_ids)\n        if self.cfg['use_src_imgs']:\n            # src_imgs_info used in construction of cost volume\n            ref_imgs_info, ref_cv_idx, ref_real_idx = 
build_src_imgs_info_select(database,ref_ids,ref_ids_all,self.cfg['cost_volume_nn_num'])\n        else:\n            ref_idx = compute_nearest_camera_indices(database, ref_ids)[:,1:4] # used in cost volume construction\n            is_aligned = not database.database_name.startswith('space')\n            ref_imgs_info = build_imgs_info(database, ref_ids, -1, is_aligned)\n        que_imgs_info = build_imgs_info(database, [que_id], has_depth=True)\n\n       \n        # data augmentation\n        depth_range_all = np.concatenate([ref_imgs_info['depth_range'],que_imgs_info['depth_range']],0)\n        if database.database_name.startswith('gso'): # only used in gso currently\n            depth_all = np.concatenate([ref_imgs_info['depth'],que_imgs_info['depth']],0)\n            mask_all = np.concatenate([ref_imgs_info['masks'],que_imgs_info['masks']],0)\n        else:\n            depth_all, mask_all = None, None\n        depth_range_all = self.random_change_depth_range(depth_range_all, depth_all, mask_all, database.database_name)\n        ref_imgs_info['depth_range'] = depth_range_all[:-1]\n        que_imgs_info['depth_range'] = depth_range_all[-1:]\n\n        if database.database_name.startswith('gso') and self.cfg['use_depth']:\n            depth_aug = self.add_depth_noise(ref_imgs_info['depth'], ref_imgs_info['masks'], ref_imgs_info['depth_range'])\n            ref_imgs_info['true_depth'] = ref_imgs_info['depth']\n            ref_imgs_info['depth'] = depth_aug\n\n        if database.database_name.startswith('real_estate') \\\n            or database.database_name.startswith('real_iconic') \\\n            or database.database_name.startswith('space'):\n            # crop all datasets\n            ref_imgs_info, que_imgs_info = random_crop(ref_imgs_info, que_imgs_info, self.cfg['aug_forward_crop_size'])\n            if np.random.random()<0.5:\n                ref_imgs_info, que_imgs_info = random_flip(ref_imgs_info, que_imgs_info)\n\n        if 
self.cfg['use_depth_loss_for_all'] and self.cfg['use_depth']:\n            if not database.database_name.startswith('gso'):\n                ref_imgs_info['true_depth'] = ref_imgs_info['depth']\n        \n        if database.database_name.startswith('grasp') or database.database_name.startswith('vgn'):\n            ref_imgs_info['true_depth'] = ref_imgs_info['depth']\n            que_imgs_info['true_depth'] = que_imgs_info['depth']\n        if self.cfg['use_consistent_depth_range']:\n            self.consistent_depth_range(ref_imgs_info, que_imgs_info)\n\n        # generate coords\n        if self.is_train:\n            coords = self.generate_coords_for_training(database,que_imgs_info)\n        else:\n            qn, _, hn, wn = que_imgs_info['imgs'].shape\n            coords = np.stack(np.meshgrid(np.arange(wn),np.arange(hn)),-1)\n            coords = coords.reshape([1,-1,2]).astype(np.float32)\n        que_imgs_info['coords'] = coords\n        ref_imgs_info = pad_imgs_info(ref_imgs_info,self.cfg['ref_pad_interval'])\n\n        # don't feed depth to gpu\n        if not self.cfg['use_depth']: \n            if 'depth' in ref_imgs_info: ref_imgs_info.pop('depth')\n            if 'depth' in que_imgs_info: que_imgs_info.pop('depth')\n            if 'true_depth' in ref_imgs_info: ref_imgs_info.pop('true_depth')\n\n        if self.cfg['use_src_imgs']:\n            src_imgs_info = ref_imgs_info.copy()\n            ref_imgs_info = imgs_info_slice(ref_imgs_info, ref_real_idx)\n            ref_imgs_info['nn_ids'] = ref_cv_idx\n        else:\n            # 'nn_ids' used in constructing cost volume (specify source image ids)\n            ref_imgs_info['nn_ids'] = ref_idx.astype(np.int64)\n\n        if self.cfg['load_sdf']:\n            ref_imgs_info['sdf_gt'] = database.get_sdf()\n\n        ref_imgs_info = imgs_info_to_torch(ref_imgs_info)\n        que_imgs_info = imgs_info_to_torch(que_imgs_info)\n\n        outputs = {'ref_imgs_info': ref_imgs_info, 'que_imgs_info': 
que_imgs_info, 'scene_name': database.database_name}\n        if self.cfg['use_src_imgs']: outputs['src_imgs_info'] = imgs_info_to_torch(src_imgs_info)\n        \n        if database.database_name.startswith('vgn'):\n            outputs['grasp_info'] = grasp_info_to_torch(database.get_grasp_info())\n        return outputs\n\n    def __len__(self):\n        return self.num\n\n\nclass FinetuningRendererDataset(Dataset):\n    default_cfg={\n        \"database_name\": \"nerf_synthetic/lego/black_800\",\n        \"database_split_type\": \"val_all\"\n    }\n    def __init__(self,cfg, is_train):\n        self.cfg={**self.default_cfg,**cfg}\n        self.is_train=is_train\n        self.train_ids, self.val_ids = get_database_split(parse_database_name(self.cfg['database_name']),self.cfg['database_split_type'])\n\n    def __getitem__(self, index):\n        output={'index': index}\n        return output\n\n    def __len__(self):\n        if self.is_train:\n            return 99999999\n        else:\n            return len(self.val_ids)"
  },
  {
    "path": "src/nr/main.py",
    "content": "import sys, os\nimport time\n\nsys.path.append(\"./src/nr\")\nfrom pathlib import Path\nimport numpy as np\n\nimport torch\nfrom skimage.io import imsave, imread\nfrom network.renderer import name2network\nfrom utils.base_utils import load_cfg, to_cuda\nfrom utils.imgs_info import build_render_imgs_info, imgs_info_to_torch, grasp_info_to_torch\nfrom network.renderer import name2network\nfrom utils.base_utils import color_map_forward\nfrom network.loss import VGNLoss\nfrom tqdm import tqdm\nfrom scipy import ndimage\nimport cv2\nfrom gd.utils.transform import Transform, Rotation\nfrom gd.grasp import *\n\n\ndef process(\n    tsdf_vol,\n    qual_vol,\n    rot_vol,\n    width_vol,\n    gaussian_filter_sigma=1.0,\n    min_width=1.33,\n    max_width=9.33,\n    tsdf_thres_high = 0.5,\n    tsdf_thres_low = 1e-3,\n    n_grasp=0\n):\n    tsdf_vol = tsdf_vol.squeeze()  \n    qual_vol = qual_vol.squeeze()  \n    rot_vol = rot_vol.squeeze()  \n    width_vol = width_vol.squeeze()\n    # smooth quality volume with a Gaussian\n    qual_vol = ndimage.gaussian_filter(\n        qual_vol, sigma=gaussian_filter_sigma, mode=\"nearest\"\n    )\n\n    # mask out voxels too far away from the surface\n    outside_voxels = tsdf_vol > tsdf_thres_high\n    inside_voxels = np.logical_and(tsdf_thres_low < tsdf_vol, tsdf_vol < tsdf_thres_high)\n    valid_voxels = ndimage.morphology.binary_dilation(\n        outside_voxels, iterations=2, mask=np.logical_not(inside_voxels)\n    )\n    qual_vol[valid_voxels == False] = 0.0\n    \n    # reject voxels with predicted widths that are too small or too large\n    qual_vol[np.logical_or(width_vol < min_width, width_vol > max_width)] = 0.0\n\n    return qual_vol, rot_vol, width_vol\n\n\ndef select(qual_vol, rot_vol, width_vol, threshold=0.90, max_filter_size=4):\n    qual_vol[qual_vol < threshold] = 0.0\n\n    # non maximum suppression\n    max_vol = ndimage.maximum_filter(qual_vol, size=max_filter_size)\n    \n    qual_vol = 
np.where(qual_vol == max_vol, qual_vol, 0.0)\n    mask = np.where(qual_vol, 1.0, 0.0)\n\n    # construct grasps\n    grasps, scores, indexs = [], [], []\n    for index in np.argwhere(mask):\n        indexs.append(index)\n        grasp, score = select_index(qual_vol, rot_vol, width_vol, index)\n        grasps.append(grasp)\n        scores.append(score)\n    return grasps, scores, indexs\n\n\ndef select_index(qual_vol, rot_vol, width_vol, index):\n    i, j, k = index\n    score = qual_vol[i, j, k]\n    rot = rot_vol[:, i, j, k]\n    ori = Rotation.from_quat(rot)\n    pos = np.array([i, j, k], dtype=np.float64)\n    width = width_vol[i, j, k]\n    return Grasp(Transform(ori, pos), width), score\n\n\nclass GraspNeRFPlanner(object):\n    def set_params(self, args):\n        self.args = args\n        self.voxel_size = 0.3 / 40\n        self.bbox3d =  [[-0.15, -0.15, -0.0503],[0.15, 0.15, 0.2497]]\n        self.tsdf_thres_high = 0 \n        self.tsdf_thres_low = -0.85\n\n        self.renderer_root_dir = self.args.renderer_root_dir\n        tp, split, scene_type, scene_split, scene_id, background_size = args.database_name.split('/')\n        background, size = background_size.split('_')\n        self.split = split\n        self.tp = tp\n        self.downSample = float(size) \n        tp2wh = {\n            'vgn_syn': (640, 360)\n        }\n        src_wh = tp2wh[tp]\n        self.img_wh = (np.array(src_wh) * self.downSample).astype(int)\n        self.blender2opencv = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])\n        self.K = np.array([[892.62, 0.0, 639.5],\n                           [0.0, 892.62, 359.5],\n                           [0.0, 0.0, 1.0]]) \n        self.K[:2] = self.K[:2] * self.downSample\n        if self.tp == 'vgn_syn':\n            self.K[:2] /= 2\n        self.depth_thres = {\n            'vgn_syn': 0.8,\n        }\n        \n        if args.object_set == \"graspnet\":\n            dir_name = \"pile_graspnet_test\"\n        
else:\n            if self.args.scene == \"pile\":\n                dir_name = \"pile_pile_test_200\"\n            elif self.args.scene == \"packed\":\n                dir_name = \"packed_packed_test_200\"\n            elif self.args.scene == \"single\":\n                dir_name = \"single_single_test_200\"\n            else:\n                raise ValueError(f\"unknown scene type: {self.args.scene}\")\n\n        scene_root_dir = os.path.join(self.renderer_root_dir, \"data/mesh_pose_list\", dir_name)\n        self.mesh_pose_list = [i for i in sorted(os.listdir(scene_root_dir))]\n        self.depth_root_dir = \"\"\n        self.depth_list = []\n\n    def __init__(self, args=None, cfg_fn=None, debug_dir=None) -> None:\n        default_render_cfg = {\n        'min_wn': 3, # working view number\n        'num_input_views': 3, # default input view number (matches GeneralRendererDataset default)\n        'ref_pad_interval': 16, # input image size should be multiple of 16\n        'use_src_imgs': False, # use source images to construct cost volume or not\n        'cost_volume_nn_num': 3, # number of source views used in cost volume\n        'use_depth': True, # use colmap depth in rendering or not\n        }\n        # load render cfg\n        if cfg_fn is None:\n            self.set_params(args)\n            cfg = load_cfg(args.cfg_fn)\n        else:\n            cfg = load_cfg(cfg_fn)\n\n        print(f\"[I] GraspNeRFPlanner: using ckpt: {cfg['name']}\")\n        render_cfg = cfg['train_dataset_cfg'] if 'train_dataset_cfg' in cfg else {}\n        render_cfg = {**default_render_cfg, **render_cfg}\n        cfg['render_rgb'] = False # only for training. 
Disable in grasping.\n        # load model\n        self.net = name2network[cfg['network']](cfg)\n        ckpt_filename = 'model_best'\n        ckpt = torch.load(Path('src/nr/ckpt') / cfg[\"group_name\"] / cfg[\"name\"] / f'{ckpt_filename}.pth')\n        self.net.load_state_dict(ckpt['network_state_dict'])\n        self.net.cuda()\n        self.net.eval()\n        self.step = ckpt[\"step\"]\n        self.output_dir = debug_dir\n        if debug_dir is not None:\n            if not Path(debug_dir).exists():\n                Path(debug_dir).mkdir(parents=True)\n        self.loss = VGNLoss({})\n        self.num_input_views = render_cfg['num_input_views']\n        print(f\"[I] GraspNeRFPlanner: load model at step {self.step} of best metric {ckpt['best_para']}\")\n\n    def get_image(self, img_id, round_idx):\n        img_filename = os.path.join(self.args.log_root_dir, \"rendered_results/\" + str(self.args.logdir).split(\"/\")[-1], \"rgb/%04d.png\"%img_id)\n        img = imread(img_filename)[:,:,:3]\n        img = cv2.resize(img, self.img_wh)\n        return np.asarray(img, dtype=np.float32)\n    \n    def get_pose(self, img_id):\n        poses_ori = np.load(Path(self.renderer_root_dir) / 'camera_pose.npy')\n        poses = [np.linalg.inv(p @ self.blender2opencv)[:3,:] for p in poses_ori]\n        return poses[img_id].astype(np.float32).copy()\n    \n    def get_K(self, img_id):\n        return self.K.astype(np.float32).copy()\n\n    def get_depth_range(self,img_id, round_idx, fixed=False):\n        if fixed:\n            return np.array([0.2,0.8])\n        depth = self.get_depth(img_id, round_idx)\n        nf = [max(0, np.min(depth)), min(self.depth_thres[self.tp], np.max(depth))]\n        return np.array(nf)\n    \n    def __call__(self, test_view_id, round_idx, n_grasp, gt_tsdf):\n        # load data for test\n        images = [self.get_image(i, round_idx) for i in test_view_id]\n        images = color_map_forward(np.stack(images, 0)).transpose([0, 3, 1, 2])\n        
extrinsics = np.stack([self.get_pose(i) for i in test_view_id], 0)\n        intrinsics = np.stack([self.get_K(i) for i in test_view_id], 0)\n        depth_range = np.asarray([self.get_depth_range(i, round_idx, fixed = True) for i in test_view_id], dtype=np.float32)\n        \n        tsdf_vol, qual_vol_ori, rot_vol_ori, width_vol_ori, toc = self.core(images, extrinsics, intrinsics, depth_range, self.bbox3d)\n\n        qual_vol, rot_vol, width_vol = process(tsdf_vol, qual_vol_ori, rot_vol_ori, width_vol_ori, tsdf_thres_high=self.tsdf_thres_high, tsdf_thres_low=self.tsdf_thres_low, n_grasp=n_grasp)\n        grasps, scores, indexs = select(qual_vol.copy(), rot_vol, width_vol)\n        grasps, scores, indexs = np.asarray(grasps), np.asarray(scores), np.asarray(indexs)\n\n        if len(grasps) > 0:\n            np.random.seed(self.args.seed + round_idx + n_grasp)\n            p = np.random.permutation(len(grasps))  \n            grasps = [from_voxel_coordinates(g, self.voxel_size) for g in grasps[p]]\n            scores = scores[p]\n            indexs = indexs[p]\n\n        return grasps, scores, toc\n    \n    def core(self, \n                images: np.ndarray, \n                extrinsics: np.ndarray, \n                intrinsics: np.ndarray, \n                depth_range=[0.2, 0.8], \n                bbox3d=[[-0.15, -0.15, -0.05],[0.15, 0.15, 0.25]], gt_info=None, que_id=0):\n        \"\"\"\n        @args\n            images: np array of shape (3, 3, h, w), image in RGB format\n            extrinsics: np array of shape (3, 4, 4), the transformation matrix from world to camera\n            intrinsics: np array of shape (3, 3, 3)\n        @rets\n            volume, label, rot, width: np array of shape (1, 1, res, res, res)\n        \"\"\"\n        _, _, h, w = images.shape\n        assert h % 32 == 0 and w % 32 == 0\n        extrinsics = extrinsics[:, :3, :]\n        que_imgs_info = build_render_imgs_info(extrinsics[que_id], intrinsics[que_id], (h, w), 
depth_range[que_id])\n        src_imgs_info = {'imgs': images, 'poses': extrinsics.astype(np.float32), 'Ks': intrinsics.astype(np.float32), 'depth_range': depth_range.astype(np.float32), \n                                'bbox3d': np.array(bbox3d)}\n\n        ref_imgs_info = src_imgs_info.copy()\n        num_views = images.shape[0]\n        ref_imgs_info['nn_ids'] = np.arange(num_views).repeat(num_views, 0)\n        data = {'step': self.step , 'eval': True}\n        if not gt_info:\n            data['full_vol'] = True\n        else:\n            data['grasp_info'] = to_cuda(grasp_info_to_torch(gt_info))\n        data['que_imgs_info'] = to_cuda(imgs_info_to_torch(que_imgs_info))\n        data['src_imgs_info'] = to_cuda(imgs_info_to_torch(src_imgs_info))\n        data['ref_imgs_info'] = to_cuda(imgs_info_to_torch(ref_imgs_info))\n\n        with torch.no_grad():\n            t0 = time.time()\n            render_info = self.net(data)\n            t = time.time() - t0\n        \n        if gt_info:\n            return self.loss(render_info, data, self.step, False)\n\n        label, rot, width = render_info['vgn_pred']\n        \n        return render_info['volume'].cpu().numpy(), label.cpu().numpy(), rot.cpu().numpy(), width.cpu().numpy(), t"
  },
  {
    "path": "src/nr/network/aggregate_net.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom easydict import EasyDict\nimport numpy as np\n\nfrom network.ibrnet import IBRNetWithNeuRay, IBRNetWithNeuRayNeus\nfrom network.neus import SingleVarianceNetwork\n\n\ndef get_dir_diff(prj_dir,que_dir):\n    rfn, qn, rn, dn, _ = prj_dir.shape\n    dir_diff = prj_dir - que_dir.unsqueeze(0)  # rfn,qn,rn,dn,3\n    dir_dot = torch.sum(prj_dir * que_dir.unsqueeze(0), -1, keepdim=True)\n    dir_diff = torch.cat([dir_diff, dir_dot], -1)  # rfn,qn,rn,dn,4\n    dir_diff = dir_diff.reshape(rfn, qn * rn, dn, -1).permute(1, 2, 0, 3)\n    return dir_diff\n\nclass BaseAggregationNet(nn.Module):\n    default_cfg={\n        'sample_num': 64,\n        'neuray_dim': 32,\n        'use_img_feats': False,\n    }\n    def __init__(self, cfg):\n        super().__init__()\n        self.cfg={**self.default_cfg, **cfg}\n        dim = self.cfg['neuray_dim']\n        self.prob_embed = nn.Sequential(\n            nn.Linear(2+32, dim),\n            nn.ReLU(),\n            nn.Linear(dim, dim),\n        )\n\n    def _get_embedding(self, prj_dict, que_dir):\n        \"\"\"\n        :param prj_dict\n             prj_ray_feats: rfn,qn,rn,dn,f\n             prj_hit_prob:  rfn,qn,rn,dn,1\n             prj_vis:       rfn,qn,rn,dn,1\n             prj_alpha:     rfn,qn,rn,dn,1\n             prj_rgb:       rfn,qn,rn,dn,3\n             prj_dir:       rfn,qn,rn,dn,3\n        :param que_dir:       qn,rn,dn,3\n        :return: qn,rn,dn\n        \"\"\"\n        hit_prob_val = (prj_dict['hit_prob']-0.5)*2\n        vis_val = (prj_dict['vis']-0.5)*2\n\n        prj_hit_prob, prj_vis, prj_rgb, prj_dir, prj_ray_feats = \\\n            hit_prob_val, vis_val, prj_dict['rgb'], prj_dict['dir'], prj_dict['ray_feats']\n        rfn,qn,rn,dn,_ = hit_prob_val.shape\n\n        prob_embedding = self.prob_embed(torch.cat([prj_ray_feats, prj_hit_prob, prj_vis],-1))\n\n        if que_dir is not None:\n            dir_diff = 
get_dir_diff(prj_dir, que_dir)\n        else:\n            _,qn,rn,dn,_ = prj_hit_prob.shape\n            dir_diff = torch.zeros((rfn, qn * rn, dn, 4)).permute(1, 2, 0, 3).to(prj_hit_prob.device)\n\n        valid_mask = prj_dict['mask']\n        valid_mask = valid_mask.float() # rfn,qn,rn,dn\n        valid_mask = valid_mask.reshape(rfn, qn * rn, dn, -1).permute(1, 2, 0, 3)\n\n        prj_img_feats = prj_dict['img_feats']\n        prj_img_feats = torch.cat([prj_rgb, prj_img_feats], -1)\n        prj_img_feats = prj_img_feats.reshape(rfn, qn * rn, dn, -1).permute(1, 2, 0, 3)\n        prob_embedding = prob_embedding.reshape(rfn, qn * rn, dn, -1).permute(1, 2, 0, 3)\n        return prj_img_feats, prob_embedding, dir_diff, valid_mask\n\nclass DefaultAggregationNet(BaseAggregationNet):\n    def __init__(self,cfg):\n        super().__init__(cfg)\n        dim = self.cfg['neuray_dim']\n        self.agg_impl = IBRNetWithNeuRay(dim,n_samples=self.cfg['sample_num'])\n\n    def forward(self, prj_dict, que_dir, que_pts=None, que_dists=None):\n        qn,rn,dn,_ = que_dir.shape\n        prj_img_feats, prob_embedding, dir_diff, valid_mask = self._get_embedding(prj_dict, que_dir)\n        outs = self.agg_impl(prj_img_feats, prob_embedding, dir_diff, valid_mask)\n        colors = outs[...,:3] # qn*rn,dn,3\n        density = outs[...,3] # qn*rn,dn,0\n        return density.reshape(qn,rn,dn), colors.reshape(qn,rn,dn,3)\n\n\nclass NeusAggregationNet(BaseAggregationNet):\n    neus_default_cfg = {\n        'cos_anneal_end_iter': 0,\n        'init_s': 0.3,\n        'fix_s': False\n    }\n    def __init__(self,cfg):\n        cfg = {**self.neus_default_cfg, **cfg}\n        super().__init__(cfg)\n        dim = self.cfg['neuray_dim']\n        self.agg_impl = IBRNetWithNeuRayNeus(dim,n_samples=self.cfg['sample_num'])\n        self.deviation_network = SingleVarianceNetwork(self.cfg['init_s'], self.cfg['fix_s'])\n        self.step = 0\n        self.cos_anneal_ratio = 1.0\n\n    def 
_update_cos_anneal_ratio(self):\n        self.cos_anneal_ratio = np.min([1.0, self.step / self.cfg['cos_anneal_end_iter']])\n\n    def _get_alpha_from_sdf(self, sdf, grad, que_dir, que_dists):\n        qn,rn,dn,_ = que_dir.shape\n        inv_s = self.deviation_network(torch.zeros([1, 3], device=sdf.device))[:, :1].clip(1e-6, 1e6)           # Single parameter\n        inv_s = inv_s.expand(qn*rn, dn)\n        true_cos = (-que_dir * grad).sum(-1, keepdim=True)\n        # \"cos_anneal_ratio\" grows from 0 to 1 in the beginning training iterations. The anneal strategy below makes\n        # the cos value \"not dead\" at the beginning training iterations, for better convergence.\n        iter_cos = -(F.relu(-true_cos * 0.5 + 0.5) * (1.0 - self.cos_anneal_ratio) +\n                     F.relu(-true_cos) * self.cos_anneal_ratio)[0].squeeze(-1)  # always non-positive\n        # Estimate signed distances at section points\n        estimated_next_sdf = sdf + iter_cos * que_dists[0] * 0.5\n        estimated_prev_sdf = sdf - iter_cos * que_dists[0] * 0.5\n        prev_cdf = torch.sigmoid(estimated_prev_sdf * inv_s)\n        next_cdf = torch.sigmoid(estimated_next_sdf * inv_s)\n        p = prev_cdf - next_cdf\n        c = prev_cdf\n        alpha = ((p + 1e-5) / (c + 1e-5)).reshape(qn,rn,dn).clip(0.0, 1.0)\n\n        return alpha\n\n    def forward(self, prj_dict, que_dir, que_pts, que_dists, is_train):\n        if self.cfg['cos_anneal_end_iter'] and is_train:\n            self._update_cos_anneal_ratio()\n        qn,rn,dn,_ = que_dir.shape\n        prj_img_feats, prob_embedding, dir_diff, valid_mask = self._get_embedding(prj_dict, que_dir)\n        outs, grad = self.agg_impl(prj_img_feats, prob_embedding, dir_diff, valid_mask, que_pts)\n        colors = outs[...,:3] # qn*rn,dn,3\n        sdf = outs[...,3] # qn*rn,dn,0\n        if que_dists is None:\n            return None, sdf.reshape(qn,rn,dn), colors.reshape(qn,rn,dn,3), None, None\n        if is_train:\n            self.step 
+= 1\n            self.deviation_network.set_step(self.step)\n        alpha = self._get_alpha_from_sdf(sdf, grad, que_dir, que_dists)\n        grad_error = torch.mean((torch.linalg.norm(grad.reshape(qn,rn,dn,3), ord=2, dim=-1) - 1.0) ** 2).reshape(1,1)\n        return alpha.reshape(qn,rn,dn), sdf.reshape(qn,rn,dn), colors.reshape(qn,rn,dn,3), grad_error, self.deviation_network.variance.reshape(1,1)\n\n\nname2agg_net={\n    'default': DefaultAggregationNet,\n    'neus': NeusAggregationNet\n}"
  },
  {
    "path": "src/nr/network/dist_decoder.py",
    "content": "import torch.nn as nn\nimport torch\n\nfrom network.ops import AddBias\n\ndef get_near_far_points(depth, interval, depth_range, is_ref, fixed_interval=False, fixed_interval_val=0.01):\n    \"\"\"                               is_ref     |  not is_ref\n    :param depth:    [...,dn]      rfn,qn,rn,dn or qn,rn,dn\n    :param interval: [...,dn]        1,qn,rn,dn or qn,rn,dn\n    :param depth_range:                   rfn,2 or qn,2\n    :param is_ref:\n    :param fixed_interval:\n    :param fixed_interval_val:\n    :return: near far [rfn,qn,rn,dn] or [qn,rn,dn]\n    \"\"\"\n    if is_ref:\n        ref_near = depth_range[:, 0]\n        ref_far = depth_range[:, 1]\n        ref_near = -1 / ref_near[:, None, None, None]\n        ref_far = -1 / ref_far[:, None, None, None]\n        depth = torch.clamp(depth, min=1e-5)\n        depth = -1 / depth\n        depth = (depth - ref_near) / (ref_far - ref_near)\n    else:\n        que_near = depth_range[:, 0]  # qn\n        que_far = depth_range[:, 1]  # qn\n        que_near = -1 / que_near[:, None, None]\n        que_far = -1 / que_far[:, None, None]\n        depth = torch.clamp(depth, min=1e-5)\n        depth = -1 / depth\n        depth = (depth - que_near) / (que_far - que_near)\n\n    if not fixed_interval:\n        if is_ref:\n            interval_half = interval / 2\n            interval_ext = torch.cat([interval_half[..., 0:1], interval_half], -1)\n            near = depth - interval_ext[..., :-1]\n            far = depth + interval_ext[..., 1:]\n        else:\n            interval_half = interval / 2\n            first = depth[..., 0] - interval_half[..., 0]\n            last = depth[..., -1] + interval_half[..., -1]\n            depth_ext = (depth[..., :-1] + depth[..., 1:]) / 2\n            depth_ext = torch.cat([first[..., None], depth_ext, last[..., None]], -1)\n            near = depth_ext[..., :-1]\n            far = depth_ext[..., 1:]\n    else:\n        near = depth - fixed_interval_val/2\n        far 
= depth + fixed_interval_val/2\n\n    return near, far\n\nclass MixtureLogisticsDistDecoder(nn.Module):\n    default_cfg={\n        'feats_dim': 32,\n        'bias_val': 0.05,\n        \"use_vis\": True,\n    }\n    def __init__(self,cfg):\n        super().__init__()\n        self.cfg={**self.default_cfg,**cfg}\n        ray_feats_dim = self.cfg[\"feats_dim\"]\n        run_dim = ray_feats_dim\n        self.mean_decoder=nn.Sequential(\n            nn.Linear(ray_feats_dim, run_dim),\n            nn.ELU(),\n            nn.Linear(run_dim, run_dim),\n            nn.ELU(),\n            nn.Linear(run_dim, 2),\n            nn.Softplus()\n        )\n        self.var_decoder=nn.Sequential(\n            nn.Linear(ray_feats_dim, run_dim),\n            nn.ELU(),\n            nn.Linear(run_dim, run_dim),\n            nn.ELU(),\n            nn.Linear(run_dim, 2),\n            nn.Softplus(),\n            AddBias(self.cfg['bias_val']),\n        )\n        self.aw_decoder=nn.Sequential(\n            nn.Linear(ray_feats_dim, run_dim),\n            nn.ELU(),\n            nn.Linear(run_dim, run_dim),\n            nn.ELU(),\n            nn.Linear(run_dim, 1),\n            nn.Sigmoid(),\n        )\n        if self.cfg['use_vis']:\n            self.vis_decoder=nn.Sequential(\n                nn.Linear(ray_feats_dim, run_dim),\n                nn.ELU(),\n                nn.Linear(run_dim, run_dim),\n                nn.ELU(),\n                nn.Linear(run_dim, 1),\n                nn.Sigmoid(),\n            )\n\n    def forward(self, feats):\n        prj_mean = self.mean_decoder(feats)\n        prj_var = self.var_decoder(feats)\n        prj_aw = self.aw_decoder(feats)\n        if self.cfg['use_vis']:\n            prj_vis = self.vis_decoder(feats)\n        else:\n            prj_vis = None\n        return prj_mean, prj_var, prj_vis, prj_aw\n\n    def compute_prob(self, depth, interval, mean, var, vis, aw, is_ref, depth_range):\n        \"\"\"\n        :param depth:    [...,dn]      
rfn,qn,rn,dn   or qn,rn,dn\n        :param interval: [...,dn]        1,qn,rn,dn   or qn,rn,dn\n        :param mean:     [...,1 or dn] rfn,qn,rn,dn,2 or qn,rn,1,2\n        :param var:      [...,1 or dn] rfn,qn,rn,dn,2 or qn,rn,1,2\n        :param vis:      [...,1 or dn] rfn,qn,rn,dn,1 or qn,rn,1,1\n        :param aw:       [...,1 or dn] rfn,qn,rn,dn,1 or qn,rn,1,1\n        :param is_ref:\n        :param depth_range: rfn,2 or qn,2\n        :return:\n        \"\"\"\n        if interval.shape != (1,0):\n            near, far = get_near_far_points(depth, interval, depth_range, is_ref)\n        else:\n            near, far = get_near_far_points(depth, interval, depth_range, is_ref, fixed_interval=True, fixed_interval_val=0.01)\n        # near and far [rfn,qn,rn,dn] or [qn,rn,dn]\n        mix = torch.cat([aw, 1 - aw],-1) # [...,2]\n        near, far = near[...,None], far[...,None]\n\n        d0 = (near - mean) * var # [...,2]\n        d1 = (far - mean) * var  # [...,2]\n        cdf0 = (0.5 + 0.5 * torch.tanh(d0)) # t(z_i)\n        cdf1 = (0.5 + 0.5 * torch.tanh(d1)) # t(z_{i+1})\n        if self.cfg['use_vis']:\n            cdf0, cdf1 = cdf0 * vis, cdf1 * vis\n        visibility = 1 - cdf0\n        hit_prob = cdf1 - cdf0\n        visibility = torch.sum(visibility*mix, -1)\n        hit_prob = torch.sum(hit_prob*mix, -1)\n\n        eps = 1e-5\n        alpha_value = torch.log(hit_prob / (visibility - hit_prob + eps) + eps)\n        return alpha_value, visibility, hit_prob\n\n    def decode_alpha_value(self, alpha_value):\n        alpha_value = torch.sigmoid(alpha_value)\n        return alpha_value\n\n    def predict_mean(self,prj_ray_feats):\n        prj_mean = self.mean_decoder(prj_ray_feats)\n        return prj_mean\n\n    def predict_aw(self,prj_ray_feats):\n        return self.aw_decoder(prj_ray_feats)\n\n\nname2dist_decoder={\n    'mixture_logistics': MixtureLogisticsDistDecoder\n}"
  },
  {
    "path": "src/nr/network/ibrnet.py",
    "content": "import numpy as np\nimport torch.nn.functional as F\nimport torch.nn as nn\nimport torch\nfrom network.neus import *\n\nclass ScaledDotProductAttention(nn.Module):\n    ''' Scaled Dot-Product Attention '''\n\n    def __init__(self, temperature, attn_dropout=0.1):\n        super().__init__()\n        self.temperature = temperature\n        # self.dropout = nn.Dropout(attn_dropout)\n\n    def forward(self, q, k, v, mask=None):\n\n        attn = torch.matmul(q / self.temperature, k.transpose(2, 3))\n\n        if mask is not None:\n            attn = attn.masked_fill(mask == 0, -1e9)\n            # attn = attn * mask\n\n        attn = F.softmax(attn, dim=-1)\n        # attn = self.dropout(F.softmax(attn, dim=-1))\n        output = torch.matmul(attn, v)\n\n        return output, attn\n\n\nclass PositionwiseFeedForward(nn.Module):\n    ''' A two-feed-forward-layer module '''\n\n    def __init__(self, d_in, d_hid, dropout=0.1):\n        super().__init__()\n        self.w_1 = nn.Linear(d_in, d_hid) # position-wise\n        self.w_2 = nn.Linear(d_hid, d_in) # position-wise\n        self.layer_norm = nn.LayerNorm(d_in, eps=1e-6)\n        # self.dropout = nn.Dropout(dropout)\n\n    def forward(self, x):\n\n        residual = x\n\n        x = self.w_2(F.relu(self.w_1(x)))\n        # x = self.dropout(x)\n        x += residual\n\n        x = self.layer_norm(x)\n\n        return x\n\nclass MultiHeadAttention(nn.Module):\n    ''' Multi-Head Attention module '''\n\n    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):\n        super().__init__()\n\n        self.n_head = n_head\n        self.d_k = d_k\n        self.d_v = d_v\n\n        self.w_qs = nn.Linear(d_model, n_head * d_k, bias=False)\n        self.w_ks = nn.Linear(d_model, n_head * d_k, bias=False)\n        self.w_vs = nn.Linear(d_model, n_head * d_v, bias=False)\n        self.fc = nn.Linear(n_head * d_v, d_model, bias=False)\n\n        self.attention = ScaledDotProductAttention(temperature=d_k ** 
0.5)\n\n        # self.dropout = nn.Dropout(dropout)\n        self.layer_norm = nn.LayerNorm(d_model, eps=1e-6)\n\n    def forward(self, q, k, v, mask=None):\n\n        d_k, d_v, n_head = self.d_k, self.d_v, self.n_head\n        sz_b, len_q, len_k, len_v = q.size(0), q.size(1), k.size(1), v.size(1)\n\n        residual = q\n\n        # Pass through the pre-attention projection: b x lq x (n*dv)\n        # Separate different heads: b x lq x n x dv\n        q = self.w_qs(q).view(sz_b, len_q, n_head, d_k)\n        k = self.w_ks(k).view(sz_b, len_k, n_head, d_k)\n        v = self.w_vs(v).view(sz_b, len_v, n_head, d_v)\n\n        # Transpose for attention dot product: b x n x lq x dv\n        q, k, v = q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)\n\n        if mask is not None:\n            mask = mask.unsqueeze(1)   # For head axis broadcasting.\n\n        q, attn = self.attention(q, k, v, mask=mask)\n\n        # Transpose to move the head dimension back: b x lq x n x dv\n        # Combine the last two dimensions to concatenate all the heads together: b x lq x (n*dv)\n        q = q.transpose(1, 2).contiguous().view(sz_b, len_q, -1)\n        # q = self.dropout(self.fc(q))\n        q = self.fc(q)\n        q += residual\n\n        q = self.layer_norm(q)\n\n        return q, attn\n\n# default tensorflow initialization of linear layers\ndef weights_init(m):\n    if isinstance(m, nn.Linear):\n        nn.init.kaiming_normal_(m.weight.data)\n        if m.bias is not None:\n            nn.init.zeros_(m.bias.data)\n\n\n@torch.jit.script\ndef fused_mean_variance(x, weight):\n    mean = torch.sum(x*weight, dim=2, keepdim=True)\n    var = torch.sum(weight * (x - mean)**2, dim=2, keepdim=True)\n    return mean, var\n\nclass IBRNet(nn.Module):\n    def __init__(self, in_feat_ch=32, n_samples=64, **kwargs):\n        super(IBRNet, self).__init__()\n        # self.args = args\n        self.anti_alias_pooling = False\n        if self.anti_alias_pooling:\n            self.s = 
nn.Parameter(torch.tensor(0.2), requires_grad=True)\n        activation_func = nn.ELU(inplace=True)\n        self.n_samples = n_samples\n        self.ray_dir_fc = nn.Sequential(nn.Linear(4, 16),\n                                        activation_func,\n                                        nn.Linear(16, in_feat_ch + 3),\n                                        activation_func)\n\n        self.base_fc = nn.Sequential(nn.Linear((in_feat_ch+3)*3, 64),\n                                     activation_func,\n                                     nn.Linear(64, 32),\n                                     activation_func)\n\n        self.vis_fc = nn.Sequential(nn.Linear(32, 32),\n                                    activation_func,\n                                    nn.Linear(32, 33),\n                                    activation_func,\n                                    )\n\n        self.vis_fc2 = nn.Sequential(nn.Linear(32, 32),\n                                     activation_func,\n                                     nn.Linear(32, 1),\n                                     nn.Sigmoid()\n                                     )\n\n        self.geometry_fc = nn.Sequential(nn.Linear(32*2+1, 64),\n                                         activation_func,\n                                         nn.Linear(64, 16),\n                                         activation_func)\n\n        self.ray_attention = MultiHeadAttention(4, 16, 4, 4)\n        self.out_geometry_fc = nn.Sequential(nn.Linear(16, 16),\n                                             activation_func,\n                                             nn.Linear(16, 1),\n                                             nn.ReLU())\n\n        self.rgb_fc = nn.Sequential(nn.Linear(32+1+4, 16),\n                                    activation_func,\n                                    nn.Linear(16, 8),\n                                    activation_func,\n                                    nn.Linear(8, 1))\n\n        
self.pos_encoding = self.posenc(d_hid=16, n_samples=self.n_samples)\n\n        self.base_fc.apply(weights_init)\n        self.vis_fc2.apply(weights_init)\n        self.vis_fc.apply(weights_init)\n        self.geometry_fc.apply(weights_init)\n        self.rgb_fc.apply(weights_init)\n\n    def posenc(self, d_hid, n_samples):\n\n        def get_position_angle_vec(position):\n            return [position / np.power(10000, 2 * (hid_j // 2) / d_hid) for hid_j in range(d_hid)]\n\n        sinusoid_table = np.array([get_position_angle_vec(pos_i) for pos_i in range(n_samples)])\n        sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i\n        sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1\n        # self.args is never set for this class; use device 0 as in the NeuRay variants below\n        sinusoid_table = torch.from_numpy(sinusoid_table).to(\"cuda:{}\".format(0)).float().unsqueeze(0)\n        return sinusoid_table\n\n    def forward(self, rgb_feat, ray_diff, mask):\n        '''\n        :param rgb_feat: rgbs and image features [n_rays, n_samples, n_views, n_feat]\n        :param ray_diff: ray direction difference [n_rays, n_samples, n_views, 4], first 3 channels are directions,\n        last channel is inner product\n        :param mask: mask for whether each projection is valid or not. 
[n_rays, n_samples, n_views, 1]\n        :return: rgb and density output, [n_rays, n_samples, 4]\n        '''\n\n        num_views = rgb_feat.shape[2]\n        direction_feat = self.ray_dir_fc(ray_diff)\n        rgb_in = rgb_feat[..., :3]\n        rgb_feat = rgb_feat + direction_feat\n        if self.anti_alias_pooling:\n            _, dot_prod = torch.split(ray_diff, [3, 1], dim=-1)\n            exp_dot_prod = torch.exp(torch.abs(self.s) * (dot_prod - 1))\n            weight = (exp_dot_prod - torch.min(exp_dot_prod, dim=2, keepdim=True)[0]) * mask\n            weight = weight / (torch.sum(weight, dim=2, keepdim=True) + 1e-8) # means it will trust the one more with more consistent view point\n        else:\n            weight = mask / (torch.sum(mask, dim=2, keepdim=True) + 1e-8)\n\n        # compute mean and variance across different views for each point\n        mean, var = fused_mean_variance(rgb_feat, weight)  # [n_rays, n_samples, 1, n_feat]\n        globalfeat = torch.cat([mean, var], dim=-1)  # [n_rays, n_samples, 1, 2*n_feat]\n\n        x = torch.cat([globalfeat.expand(-1, -1, num_views, -1), rgb_feat], dim=-1)  # [n_rays, n_samples, n_views, 3*n_feat]\n        x = self.base_fc(x)\n\n        x_vis = self.vis_fc(x * weight)\n        x_res, vis = torch.split(x_vis, [x_vis.shape[-1]-1, 1], dim=-1)\n        vis = F.sigmoid(vis) * mask\n        x = x + x_res\n        vis = self.vis_fc2(x * vis) * mask\n        weight = vis / (torch.sum(vis, dim=2, keepdim=True) + 1e-8)\n\n        mean, var = fused_mean_variance(x, weight)\n        globalfeat = torch.cat([mean.squeeze(2), var.squeeze(2), weight.mean(dim=2)], dim=-1)  # [n_rays, n_samples, 32*2+1]\n        globalfeat = self.geometry_fc(globalfeat)  # [n_rays, n_samples, 16]\n        num_valid_obs = torch.sum(mask, dim=2)\n        globalfeat = globalfeat + self.pos_encoding\n        globalfeat, _ = self.ray_attention(globalfeat, globalfeat, globalfeat,\n                                           mask=(num_valid_obs 
> 1).float())  # [n_rays, n_samples, 16]\n        sigma = self.out_geometry_fc(globalfeat)  # [n_rays, n_samples, 1]\n        sigma_out = sigma.masked_fill(num_valid_obs < 1, 0.)  # set the sigma of invalid point to zero\n\n        # rgb computation\n        x = torch.cat([x, vis, ray_diff], dim=-1)\n        x = self.rgb_fc(x)\n        x = x.masked_fill(mask == 0, -1e9)\n        blending_weights_valid = F.softmax(x, dim=2)  # color blending\n        rgb_out = torch.sum(rgb_in*blending_weights_valid, dim=2)\n        out = torch.cat([rgb_out, sigma_out], dim=-1)\n        return out\n\n\nclass IBRNetWithNeuRay(nn.Module):\n    def __init__(self, neuray_in_dim=32, in_feat_ch=32, n_samples=64, **kwargs):\n        super().__init__()\n        # self.args = args\n        self.anti_alias_pooling = False\n        if self.anti_alias_pooling:\n            self.s = nn.Parameter(torch.tensor(0.2), requires_grad=True)\n        activation_func = nn.ELU(inplace=True)\n        self.n_samples = n_samples\n        self.ray_dir_fc = nn.Sequential(nn.Linear(4, 16),\n                                        activation_func,\n                                        nn.Linear(16, in_feat_ch + 3),\n                                        activation_func)\n\n        self.base_fc = nn.Sequential(nn.Linear((in_feat_ch+3)*5+neuray_in_dim, 64),\n                                     activation_func,\n                                     nn.Linear(64, 32),\n                                     activation_func)\n\n        self.vis_fc = nn.Sequential(nn.Linear(32, 32),\n                                    activation_func,\n                                    nn.Linear(32, 33),\n                                    activation_func,\n                                    )\n\n        self.vis_fc2 = nn.Sequential(nn.Linear(32, 32),\n                                     activation_func,\n                                     nn.Linear(32, 1),\n                                     nn.Sigmoid()\n               
                      )\n\n        self.geometry_fc = nn.Sequential(nn.Linear(32*2+1, 64),\n                                         activation_func,\n                                         nn.Linear(64, 16),\n                                         activation_func)\n\n        self.ray_attention = MultiHeadAttention(4, 16, 4, 4)\n        self.out_geometry_fc = nn.Sequential(nn.Linear(16, 16),\n                                             activation_func,\n                                             nn.Linear(16, 1),\n                                             nn.ReLU())\n\n        self.rgb_fc = nn.Sequential(nn.Linear(32+1+4, 16),\n                                    activation_func,\n                                    nn.Linear(16, 8),\n                                    activation_func,\n                                    nn.Linear(8, 1))\n\n        self.neuray_fc = nn.Sequential(\n            nn.Linear(neuray_in_dim, 8,),\n            activation_func,\n            nn.Linear(8, 1),\n        )\n\n        self.pos_encoding = self.posenc(d_hid=16, n_samples=self.n_samples)\n\n        self.base_fc.apply(weights_init)\n        self.vis_fc2.apply(weights_init)\n        self.vis_fc.apply(weights_init)\n        self.geometry_fc.apply(weights_init)\n        self.rgb_fc.apply(weights_init)\n        self.neuray_fc.apply(weights_init)\n\n    def change_pos_encoding(self,n_samples):\n        self.pos_encoding = self.posenc(16, n_samples=n_samples)\n\n    def posenc(self, d_hid, n_samples):\n        def get_position_angle_vec(position):\n            return [position / np.power(10000, 2 * (hid_j // 2) / d_hid) for hid_j in range(d_hid)]\n\n        sinusoid_table = np.array([get_position_angle_vec(pos_i) for pos_i in range(n_samples)])\n        sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i\n        sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1\n        sinusoid_table = 
torch.from_numpy(sinusoid_table).to(\"cuda:{}\".format(0)).float().unsqueeze(0)\n        return sinusoid_table\n\n    def forward(self, rgb_feat, neuray_feat, ray_diff, mask):\n        '''\n        :param rgb_feat: rgbs and image features [n_rays, n_samples, n_views, n_feat]\n        :param ray_diff: ray direction difference [n_rays, n_samples, n_views, 4], first 3 channels are directions,\n        last channel is inner product\n        :param mask: mask for whether each projection is valid or not. [n_rays, n_samples, n_views, 1]\n        :return: rgb and density output, [n_rays, n_samples, 4]\n        '''\n\n        num_views = rgb_feat.shape[2]\n        direction_feat = self.ray_dir_fc(ray_diff)\n        rgb_in = rgb_feat[..., :3]\n        rgb_feat = rgb_feat + direction_feat\n        if self.anti_alias_pooling:\n            _, dot_prod = torch.split(ray_diff, [3, 1], dim=-1)\n            exp_dot_prod = torch.exp(torch.abs(self.s) * (dot_prod - 1))\n            weight = (exp_dot_prod - torch.min(exp_dot_prod, dim=2, keepdim=True)[0]) * mask\n            weight = weight / (torch.sum(weight, dim=2, keepdim=True) + 1e-8) # means it will trust the one more with more consistent view point\n        else:\n            weight = mask / (torch.sum(mask, dim=2, keepdim=True) + 1e-8)\n\n        # neuray layer 0\n        weight0 = torch.sigmoid(self.neuray_fc(neuray_feat)) * weight # [rn,dn,rfn,f]\n        mean0, var0 = fused_mean_variance(rgb_feat, weight0)  # [n_rays, n_samples, 1, n_feat]\n        mean1, var1 = fused_mean_variance(rgb_feat, weight)  # [n_rays, n_samples, 1, n_feat]\n        globalfeat = torch.cat([mean0, var0, mean1, var1], dim=-1)  # [n_rays, n_samples, 1, 2*n_feat]\n\n        x = torch.cat([globalfeat.expand(-1, -1, num_views, -1), rgb_feat, neuray_feat], dim=-1)  # [n_rays, n_samples, n_views, 3*n_feat]\n        x = self.base_fc(x)\n\n        x_vis = self.vis_fc(x * weight)\n        x_res, vis = torch.split(x_vis, [x_vis.shape[-1]-1, 1], dim=-1)\n       
 vis = F.sigmoid(vis) * mask\n        x = x + x_res\n        vis = self.vis_fc2(x * vis) * mask\n        weight = vis / (torch.sum(vis, dim=2, keepdim=True) + 1e-8)\n\n        mean, var = fused_mean_variance(x, weight)\n        globalfeat = torch.cat([mean.squeeze(2), var.squeeze(2), weight.mean(dim=2)], dim=-1)  # [n_rays, n_samples, 32*2+1]\n        globalfeat = self.geometry_fc(globalfeat)  # [n_rays, n_samples, 16]\n        num_valid_obs = torch.sum(mask, dim=2)\n        globalfeat = globalfeat + self.pos_encoding\n        globalfeat, _ = self.ray_attention(globalfeat, globalfeat, globalfeat,\n                                           mask=(num_valid_obs > 1).float())  # [n_rays, n_samples, 16]\n        sigma = self.out_geometry_fc(globalfeat)  # [n_rays, n_samples, 1]\n        sigma_out = sigma.masked_fill(num_valid_obs < 1, 0.)  # set the sigma of invalid point to zero\n\n        # rgb computation\n        x = torch.cat([x, vis, ray_diff], dim=-1)\n        x = self.rgb_fc(x)\n        x = x.masked_fill(mask == 0, -1e9)\n        blending_weights_valid = F.softmax(x, dim=2)  # color blending\n        rgb_out = torch.sum(rgb_in*blending_weights_valid, dim=2)\n        out = torch.cat([rgb_out, sigma_out], dim=-1)\n        return out\n\n\nclass IBRNetWithNeuRayNeus(nn.Module):\n    def __init__(self, neuray_in_dim=32, in_feat_ch=32, n_samples=64, **kwargs):\n        super().__init__()\n        # self.args = args\n        self.anti_alias_pooling = False\n        if self.anti_alias_pooling:\n            self.s = nn.Parameter(torch.tensor(0.2), requires_grad=True)\n        activation_func = nn.ELU(inplace=True)\n        self.n_samples = n_samples\n        self.ray_dir_fc = nn.Sequential(nn.Linear(4, 16),\n                                        activation_func,\n                                        nn.Linear(16, in_feat_ch + 3),\n                                        activation_func)\n\n        self.base_fc = 
nn.Sequential(nn.Linear((in_feat_ch+3)*5+neuray_in_dim, 64),\n                                     activation_func,\n                                     nn.Linear(64, 32),\n                                     activation_func)\n\n        self.vis_fc = nn.Sequential(nn.Linear(32, 32),\n                                    activation_func,\n                                    nn.Linear(32, 33),\n                                    activation_func,\n                                    )\n\n        self.vis_fc2 = nn.Sequential(nn.Linear(32, 32),\n                                     activation_func,\n                                     nn.Linear(32, 1),\n                                     nn.Sigmoid()\n                                     )\n        self.embed_fn, input_ch = get_embedder(3, input_dims=3)\n        self.geometry_fc = nn.Sequential(nn.Linear(32*2+1+input_ch, 64),\n                                         activation_func,\n                                         nn.Linear(64, 16),\n                                         activation_func)\n\n        self.ray_attention = MultiHeadAttention(4, 16, 4, 4)\n        self.out_geometry_fc = nn.Sequential(nn.Linear(16, 16),\n                                             nn.Linear(16, 1),\n                                             )\n        self.rgb_fc = nn.Sequential(nn.Linear(32+1+4, 16),\n                                    activation_func,\n                                    nn.Linear(16, 8),\n                                    activation_func,\n                                    nn.Linear(8, 1))\n\n        self.neuray_fc = nn.Sequential(\n            nn.Linear(neuray_in_dim, 8,),\n            activation_func,\n            nn.Linear(8, 1),\n        )\n\n        self.pos_encoding = self.posenc(d_hid=16, n_samples=self.n_samples)\n\n        self.base_fc.apply(weights_init)\n        self.vis_fc2.apply(weights_init)\n        self.vis_fc.apply(weights_init)\n        self.geometry_fc.apply(weights_init)\n    
    self.rgb_fc.apply(weights_init)\n        self.neuray_fc.apply(weights_init)\n\n    def change_pos_encoding(self,n_samples):\n        self.pos_encoding = self.posenc(16, n_samples=n_samples)\n\n    def posenc(self, d_hid, n_samples):\n        def get_position_angle_vec(position):\n            return [position / np.power(10000, 2 * (hid_j // 2) / d_hid) for hid_j in range(d_hid)]\n\n        sinusoid_table = np.array([get_position_angle_vec(pos_i) for pos_i in range(n_samples)])\n        sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i\n        sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1\n        sinusoid_table = torch.from_numpy(sinusoid_table).to(\"cuda:{}\".format(0)).float().unsqueeze(0)\n        return sinusoid_table\n\n    def forward(self, rgb_feat, neuray_feat, ray_diff, mask, que_pts):\n        '''\n        :param rgb_feat: rgbs and image features [n_rays, n_samples, n_views, n_feat]\n        :param ray_diff: ray direction difference [n_rays, n_samples, n_views, 4], first 3 channels are directions,\n        last channel is inner product\n        :param mask: mask for whether each projection is valid or not. 
[n_rays, n_samples, n_views, 1]\n        :return: rgb and density output, [n_rays, n_samples, 4]\n        '''\n\n        num_views = rgb_feat.shape[2]\n        direction_feat = self.ray_dir_fc(ray_diff)\n        rgb_in = rgb_feat[..., :3]\n        rgb_feat = rgb_feat + direction_feat\n        if self.anti_alias_pooling:\n            _, dot_prod = torch.split(ray_diff, [3, 1], dim=-1)\n            exp_dot_prod = torch.exp(torch.abs(self.s) * (dot_prod - 1))\n            weight = (exp_dot_prod - torch.min(exp_dot_prod, dim=2, keepdim=True)[0]) * mask\n            weight = weight / (torch.sum(weight, dim=2, keepdim=True) + 1e-8) # means it will trust the one more with more consistent view point\n        else:\n            weight = mask / (torch.sum(mask, dim=2, keepdim=True) + 1e-8)\n\n        # neuray layer 0\n        weight0 = torch.sigmoid(self.neuray_fc(neuray_feat)) * weight # [rn,dn,rfn,f]\n        mean0, var0 = fused_mean_variance(rgb_feat, weight0)  # [n_rays, n_samples, 1, n_feat]\n        mean1, var1 = fused_mean_variance(rgb_feat, weight)  # [n_rays, n_samples, 1, n_feat]\n        globalfeat = torch.cat([mean0, var0, mean1, var1], dim=-1)  # [n_rays, n_samples, 1, 2*n_feat]\n\n        x = torch.cat([globalfeat.expand(-1, -1, num_views, -1), rgb_feat, neuray_feat], dim=-1)  # [n_rays, n_samples, n_views, 3*n_feat]\n        x = self.base_fc(x)\n\n        x_vis = self.vis_fc(x * weight)\n        x_res, vis = torch.split(x_vis, [x_vis.shape[-1]-1, 1], dim=-1)\n        vis = F.sigmoid(vis) * mask\n        x = x + x_res\n        vis = self.vis_fc2(x * vis) * mask\n        weight = vis / (torch.sum(vis, dim=2, keepdim=True) + 1e-8)\n\n        mean, var = fused_mean_variance(x, weight)\n        with torch.set_grad_enabled(True):\n            que_pts.requires_grad_(True)\n            embed_pts = self.embed_fn(que_pts)[0]\n            globalfeat = torch.cat([mean.squeeze(2), var.squeeze(2), weight.mean(dim=2), embed_pts], dim=-1)  # [n_rays, n_samples, 32*2+1]\n      
      globalfeat = self.geometry_fc(globalfeat)  # [n_rays, n_samples, 16]\n            num_valid_obs = torch.sum(mask, dim=2)\n            globalfeat = globalfeat + self.pos_encoding\n            globalfeat, _ = self.ray_attention(globalfeat, globalfeat, globalfeat,\n                                            mask=(num_valid_obs > 1).float())  # [n_rays, n_samples, 16]\n            sdf = self.out_geometry_fc(globalfeat).clip(-1.0,1.0)  # [n_rays, n_samples, 1]\n            sdf_out = sdf.masked_fill(num_valid_obs < 1, 1.)  # set the sdf of invalid points to 1 (free space)\n\n            d_output = torch.ones_like(sdf_out, requires_grad=False, device=sdf_out.device)\n            gradients = torch.autograd.grad(\n                outputs=sdf_out,\n                inputs=que_pts,\n                grad_outputs=d_output,\n                create_graph=True,\n                retain_graph=True,\n                only_inputs=True)[0]\n\n        # rgb computation\n        x = torch.cat([x, vis, ray_diff], dim=-1)\n        x = self.rgb_fc(x)\n        x = x.masked_fill(mask == 0, -1e9)\n        blending_weights_valid = F.softmax(x, dim=2)  # color blending\n        rgb_out = torch.sum(rgb_in*blending_weights_valid, dim=2)\n        out = torch.cat([rgb_out, sdf_out], dim=-1)\n        return out, gradients"
  },
  {
    "path": "src/nr/network/init_net.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\nfrom network.ops import interpolate_feats, masked_mean_var, ResEncoder, ResUNetLight, conv3x3, ResidualBlock, conv1x1\n\nclass CostVolumeInitNet(nn.Module):\n    default_cfg={\n        'cost_volume_sn': 64,\n    }\n    def __init__(self,cfg):\n        super().__init__()\n        self.cfg={**self.default_cfg,**cfg}\n\n        imagenet_mean = torch.from_numpy(np.asarray([0.485, 0.456, 0.406], np.float32)).cuda()[None, :, None, None]\n        imagenet_std = torch.from_numpy(np.asarray([0.229, 0.224, 0.225], np.float32)).cuda()[None, :, None, None]\n        self.register_buffer('imagenet_mean', imagenet_mean)\n        self.register_buffer('imagenet_std', imagenet_std)\n\n        self.res_net = ResUNetLight(out_dim=32)\n        norm_layer = lambda dim: nn.InstanceNorm2d(dim, track_running_stats=False, affine=True)\n\n\n        in_dim = 32\n\n        self.out_conv = nn.Sequential(\n            conv3x3(in_dim, 32),\n            ResidualBlock(32, 32, norm_layer=norm_layer),\n            conv1x1(32, 32),\n        )\n\n    def forward(self, ref_imgs_info, src_imgs_info, is_train):\n        ref_feats = self.res_net(ref_imgs_info['imgs'])\n        return self.out_conv(torch.cat([ref_feats], 1))\n\nname2init_net={\n    'cost_volume': CostVolumeInitNet,\n}"
  },
  {
    "path": "src/nr/network/loss.py",
    "content": "import torch\nimport torch.nn as nn\nimport numpy as np\nimport pyquaternion as pyq\nimport math\nfrom network.ops import interpolate_feats\nimport torch.nn.functional as F\nimport torchmetrics\nfrom utils.base_utils import calc_rot_error_from_qxyzw\n\nclass Loss:\n    def __init__(self, keys):\n        \"\"\"\n        keys are used in multi-gpu model, DummyLoss in train_tools.py\n        :param keys: the output keys of the dict\n        \"\"\"\n        self.keys=keys\n\n    def __call__(self, data_pr, data_gt, step, **kwargs):\n        pass\n\nclass ConsistencyLoss(Loss):\n    default_cfg={\n        'use_ray_mask': False,\n        'use_dr_loss': False,\n        'use_dr_fine_loss': False,\n        'use_nr_fine_loss': False,\n    }\n    def __init__(self, cfg):\n        self.cfg={**self.default_cfg,**cfg}\n        super().__init__([f'loss_prob','loss_prob_fine'])\n\n    def __call__(self, data_pr, data_gt, step, **kwargs):\n        if 'hit_prob_self' not in data_pr: return {}\n        prob0 = data_pr['hit_prob_nr'].detach()     # qn,rn,dn\n        prob1 = data_pr['hit_prob_self']            # qn,rn,dn\n        if self.cfg['use_ray_mask']:\n            ray_mask = data_pr['ray_mask'].float()  # 1,rn\n        else:\n            ray_mask = 1\n        ce = - prob0 * torch.log(prob1 + 1e-5) - (1 - prob0) * torch.log(1 - prob1 + 1e-5)\n        outputs={'loss_prob': torch.mean(torch.mean(ce,-1),1)}\n        if 'hit_prob_nr_fine' in data_pr:\n            prob0 = data_pr['hit_prob_nr_fine'].detach()     # qn,rn,dn\n            prob1 = data_pr['hit_prob_self_fine']            # qn,rn,dn\n            ce = - prob0 * torch.log(prob1 + 1e-5) - (1 - prob0) * torch.log(1 - prob1 + 1e-5)\n            outputs['loss_prob_fine']=torch.mean(torch.mean(ce,-1),1)\n        return outputs\n\nclass RenderLoss(Loss):\n    default_cfg={\n        'use_ray_mask': True,\n        'use_dr_loss': False,\n        'use_dr_fine_loss': False,\n        'use_nr_fine_loss': False,\n        
'disable_at_eval': True,\n        'render_loss_weight': 0.01\n    }\n    def __init__(self, cfg):\n        self.cfg={**self.default_cfg,**cfg}\n        super().__init__([f'loss_rgb'])\n\n    def __call__(self, data_pr, data_gt, step, is_train=True, **kwargs):\n        if not is_train and self.cfg['disable_at_eval']:\n            return {}\n        rgb_gt = data_pr['pixel_colors_gt'] # 1,rn,3\n        rgb_nr = data_pr['pixel_colors_nr'] # 1,rn,3\n        def compute_loss(rgb_pr,rgb_gt):\n            loss=torch.sum((rgb_pr-rgb_gt)**2,-1)        # b,n\n            if self.cfg['use_ray_mask']:\n                ray_mask = data_pr['ray_mask'].float() # 1,rn\n                loss = torch.sum(loss*ray_mask,1)/(torch.sum(ray_mask,1)+1e-3)\n            else:\n                loss = torch.mean(loss, 1)\n            return loss * self.cfg['render_loss_weight']\n\n        results = {'loss_rgb_nr': compute_loss(rgb_nr, rgb_gt)}\n        if self.cfg['use_dr_loss']:\n            rgb_dr = data_pr['pixel_colors_dr']  # 1,rn,3\n            results['loss_rgb_dr'] = compute_loss(rgb_dr, rgb_gt)\n        if self.cfg['use_dr_fine_loss']:\n            results['loss_rgb_dr_fine'] = compute_loss(data_pr['pixel_colors_dr_fine'], rgb_gt)\n        if self.cfg['use_nr_fine_loss']:\n            results['loss_rgb_nr_fine'] = compute_loss(data_pr['pixel_colors_nr_fine'], rgb_gt)\n        return results\n\nclass DepthLoss(Loss):\n    default_cfg={\n        'depth_correct_thresh': 0.02,\n        'depth_loss_type': 'l2',\n        'depth_loss_l1_beta': 0.05,\n        'depth_loss_weight': 1,\n        'disable_at_eval': True,\n    }\n    def __init__(self, cfg):\n        super().__init__(['loss_depth'])\n        self.cfg={**self.default_cfg,**cfg}\n        if self.cfg['depth_loss_type']=='smooth_l1':\n            self.loss_op=nn.SmoothL1Loss(reduction='none',beta=self.cfg['depth_loss_l1_beta'])\n\n    def __call__(self, data_pr, data_gt, step, is_train=True, **kwargs):\n        if not is_train and 
self.cfg['disable_at_eval']:\n            return {}\n        if 'true_depth' not in data_gt['ref_imgs_info']:\n            print('warning: true_depth not in ref_imgs_info, depth loss is skipped')\n            return {'loss_depth': torch.zeros([1], dtype=torch.float32, device=data_pr['pixel_colors_nr'].device)}\n        coords = data_pr['depth_coords'] # rfn,pn,2\n        depth_pr = data_pr['depth_mean'] # rfn,pn\n        depth_maps = data_gt['ref_imgs_info']['true_depth'] # rfn,1,h,w\n        rfn, _, h, w = depth_maps.shape\n        depth_gt = interpolate_feats(\n            depth_maps,coords,h,w,padding_mode='border',align_corners=True)[...,0]   # rfn,pn\n\n        # transform to inverse depth coordinate\n        depth_range = data_gt['ref_imgs_info']['depth_range'] # rfn,2\n        near, far = -1/depth_range[:,0:1], -1/depth_range[:,1:2] # rfn,1\n        def process(depth):\n            depth = torch.clamp(depth, min=1e-5)\n            depth = -1 / depth\n            depth = (depth - near) / (far - near)\n            depth = torch.clamp(depth, min=0, max=1.0)\n            return depth\n        depth_gt = process(depth_gt)\n\n        # compute loss\n        def compute_loss(depth_pr):\n            if self.cfg['depth_loss_type']=='l2':\n                loss = (depth_gt - depth_pr)**2\n            elif self.cfg['depth_loss_type']=='smooth_l1':\n                loss = self.loss_op(depth_gt, depth_pr)\n\n            if data_gt['scene_name'].startswith('gso'):\n                depth_maps_noise = data_gt['ref_imgs_info']['depth']  # rfn,1,h,w\n                depth_aug = interpolate_feats(depth_maps_noise, coords, h, w, padding_mode='border', align_corners=True)[..., 0]  # rfn,pn\n                depth_aug = process(depth_aug)\n                mask = (torch.abs(depth_aug-depth_gt)<self.cfg['depth_correct_thresh']).float()\n                loss = torch.sum(loss * mask, 1) / (torch.sum(mask, 1) + 1e-4)\n\n            return loss.mean()\n\n        outputs = {'loss_depth': compute_loss(depth_pr) * self.cfg['depth_loss_weight']}\n     
   if 'depth_mean_fine' in data_pr:\n            outputs['loss_depth_fine'] = compute_loss(data_pr['depth_mean_fine']) * self.cfg['depth_loss_weight']\n        return outputs\n\ndef compute_mae(pr, gt, mask):\n    return np.sum(np.abs(pr * mask - gt * mask)) / np.count_nonzero(mask)\n\nclass SDFLoss(Loss):\n    default_cfg={\n        'loss_sdf_weight': 1.0,\n        'loss_eikonal_weight': 0.1,\n        'show_sdf_mae': True,\n        'record_s': True,\n        'loss_s_weight': 0\n    }\n    def __init__(self, cfg):\n        super().__init__(['loss_sdf'])\n        self.cfg={**self.default_cfg,**cfg}\n        self.loss_fn = nn.SmoothL1Loss()\n\n    def __call__(self, data_pr, data_gt, step, is_train=True, **kwargs):\n        outputs = {}\n        if self.cfg['show_sdf_mae']:\n            sdf_pr = data_pr['volume'][0,0].detach().cpu().numpy()\n            sdf_gt = data_gt['ref_imgs_info']['sdf_gt'].detach().cpu().numpy()\n            valid_mask = sdf_gt != -1.0\n            outputs['sdf_mae'] = torch.tensor([compute_mae(sdf_pr, sdf_gt, valid_mask)],dtype=torch.float32)\n        if self.cfg['loss_sdf_weight'] > 0:\n            valid_mask = data_gt['ref_imgs_info']['sdf_gt'] != -1.0\n            outputs['loss_sdf'] = self.loss_fn(data_gt['ref_imgs_info']['sdf_gt'] * valid_mask, data_pr['volume'][0,0] * valid_mask)[None] * self.cfg['loss_sdf_weight']\n        if self.cfg['loss_eikonal_weight'] > 0:\n            outputs['loss_eikonal'] = (data_pr['sdf_gradient_error']).mean()[None] * self.cfg['loss_eikonal_weight'] \n        if self.cfg['record_s']:\n            outputs['variance'] = data_pr['s'][None]\n        if self.cfg['loss_s_weight'] > 0:\n            outputs['loss_s'] = torch.norm(data_pr['s']).mean()[None] * self.cfg['loss_s_weight']\n        return outputs\n\nclass VGNLoss(Loss):\n    default_cfg={\n        'loss_vgn_weight': 1e-2,\n    }\n    def __init__(self, cfg):\n        super().__init__(['loss_vgn'])\n        self.cfg={**self.default_cfg,**cfg}\n\n    def 
_loss_fn(self, y_pred, y, is_train):\n        label_pred, rotation_pred, width_pred = y_pred\n        _, label, rotations, width = y\n        loss_qual = self._qual_loss_fn(label_pred, label)\n        acc = self._acc_fn(label_pred, label)\n        loss_rot_raw = self._rot_loss_fn(rotation_pred, rotations)\n        loss_rot = label * loss_rot_raw\n        loss_width_raw = 0.01 * self._width_loss_fn(width_pred, width)\n        loss_width = label * loss_width_raw\n        loss = loss_qual + loss_rot + loss_width\n        loss_item =  {'loss_vgn': loss.mean()[None] * self.cfg['loss_vgn_weight'], \n                     'vgn_total_loss':loss.mean()[None],'vgn_qual_loss': loss_qual.mean()[None], \n                    'vgn_rot_loss':  loss_rot.mean()[None], 'vgn_width_loss':loss_width.mean()[None],\n                    'vgn_qual_acc': acc[None]}\n\n        num = torch.count_nonzero(label)\n        angle_torch = label * self._angle_error_fn(rotation_pred, rotations, 'torch')\n        loss_item['vgn_rot_err'] = (angle_torch.sum() / num)[None] if num else torch.zeros((1,),device=label.device)\n        return loss_item\n\n    def _qual_loss_fn(self, pred, target):\n        return F.binary_cross_entropy(pred, target, reduction=\"none\")\n\n    def _acc_fn(self, pred, target):\n        return 100 * (torch.round(pred) == target).float().sum() / target.shape[0]\n\n    def _pr_fn(self, pred, target):\n        p, r = torchmetrics.functional.precision_recall(torch.round(pred).to(torch.int), target.to(torch.int), 'macro',num_classes=2)\n        return p[None] * 100, r[None] * 100\n\n    def _rot_loss_fn(self, pred, target):\n        loss0 = self._quat_loss_fn(pred, target[:, 0])\n        loss1 = self._quat_loss_fn(pred, target[:, 1])\n        return torch.min(loss0, loss1)\n\n    def _angle_error_fn(self, pred, target, method='torch'):\n        if method == 'np':\n            def _angle_error(q1, q2, ):  \n                q1 = pyq.Quaternion(q1[[3,0,1,2]])\n                q1 /= 
q1.norm\n                q2 = pyq.Quaternion(q2[[3,0,1,2]])\n                q2 /= q2.norm\n                qd = q1.conjugate * q2\n                qdv = pyq.Quaternion(0, qd.x, qd.y, qd.z)\n                err = 2 * math.atan2(qdv.norm, qd.w) / math.pi * 180\n                return min(err, 360 - err)\n            q1s = pred.detach().cpu().numpy()\n            q2s = target.detach().cpu().numpy()\n            err = []\n            for q1,q2 in zip(q1s, q2s):\n                err.append(min(_angle_error(q1, q2[0]), _angle_error(q1, q2[1])))\n            return torch.tensor(err, device = pred.device)\n        elif method == 'torch':\n            return calc_rot_error_from_qxyzw(pred, target)\n        else:\n            raise NotImplementedError\n\n    def _quat_loss_fn(self, pred, target):\n        return 1.0 - torch.abs(torch.sum(pred * target, dim=1))\n\n    def _width_loss_fn(self, pred, target):\n        return F.mse_loss(pred, target, reduction=\"none\")\n\n    def __call__(self, data_pr, data_gt, step, is_train=True, **kwargs):\n        return self._loss_fn(data_pr['vgn_pred'], data_gt['grasp_info'], is_train)\n\nname2loss={\n    'render': RenderLoss,\n    'depth': DepthLoss,\n    'consist': ConsistencyLoss,\n    'vgn': VGNLoss,\n    'sdf': SDFLoss\n}"
  },
  {
    "path": "src/nr/network/metrics.py",
"content": "from pathlib import Path\n\nimport torch\nfrom skimage.io import imsave\n\nfrom network.loss import Loss\nfrom utils.base_utils import color_map_backward, make_dir\nfrom skimage.metrics import structural_similarity\nimport numpy as np\n\nfrom utils.draw_utils import concat_images_list\n\n\ndef compute_psnr(img_gt, img_pr, use_vis_scores=False, vis_scores=None, vis_scores_thresh=1.5):\n    img_gt = img_gt.reshape([-1, 3]).astype(np.float32)\n    img_pr = img_pr.reshape([-1, 3]).astype(np.float32)\n    if use_vis_scores:\n        # evaluate only on pixels whose visibility score passes the threshold\n        mask = vis_scores.flatten() >= vis_scores_thresh\n        img_gt = img_gt[mask]\n        img_pr = img_pr[mask]\n    mse = np.mean((img_gt - img_pr) ** 2)\n    psnr = 10 * np.log10(255 * 255 / mse)\n    return psnr\n\ndef compute_mae(depth_pr, depth_gt):\n    return np.mean(np.abs(depth_pr - depth_gt))\n\nclass PSNR_SSIM(Loss):\n    default_cfg = {\n        'eval_margin_ratio': 1.0,\n    }\n    def __init__(self, cfg):\n        super().__init__([])\n        self.cfg={**self.default_cfg,**cfg}\n\n    def __call__(self, data_pr, data_gt, step, **kwargs):\n        rgbs_gt = data_pr['pixel_colors_gt'] # 1,rn,3\n        rgbs_pr = data_pr['pixel_colors_nr'] # 1,rn,3\n        if 'que_imgs_info' in data_gt:\n            h, w = data_gt['que_imgs_info']['imgs'].shape[2:]\n        else:\n            h, w = data_pr['que_imgs_info']['imgs'].shape[2:]\n        rgbs_pr = rgbs_pr.reshape([h,w,3]).detach().cpu().numpy()\n        rgbs_pr=color_map_backward(rgbs_pr)\n\n        rgbs_gt = rgbs_gt.reshape([h,w,3]).detach().cpu().numpy()\n        rgbs_gt = color_map_backward(rgbs_gt)\n\n        h, w, _ = rgbs_gt.shape\n        h_margin = int(h * (1 - self.cfg['eval_margin_ratio'])) // 2\n        w_margin = int(w * (1 - self.cfg['eval_margin_ratio'])) // 2\n        rgbs_gt = 
rgbs_gt[h_margin:h - h_margin, w_margin:w - w_margin]\n        rgbs_pr = rgbs_pr[h_margin:h - h_margin, w_margin:w - w_margin]\n\n        psnr = compute_psnr(rgbs_gt,rgbs_pr)\n        outputs={\n            'psnr_nr': torch.tensor([psnr],dtype=torch.float32),\n        }\n\n        def compute_psnr_prefix(suffix):\n            if f'pixel_colors_{suffix}' in data_pr:\n                rgbs_other = data_pr[f'pixel_colors_{suffix}'] # 1,rn,3\n                rgbs_other = rgbs_other.reshape([h,w,3]).detach().cpu().numpy()\n                rgbs_other=color_map_backward(rgbs_other)\n                psnr = compute_psnr(rgbs_gt,rgbs_other)\n                ssim = structural_similarity(rgbs_gt,rgbs_other,win_size=11,multichannel=True,data_range=255)\n                outputs[f'psnr_{suffix}']=torch.tensor([psnr], dtype=torch.float32)\n                outputs[f'ssim_{suffix}']=torch.tensor([ssim], dtype=torch.float32)\n\n        # compute_psnr_prefix('nr')\n        compute_psnr_prefix('dr')\n        compute_psnr_prefix('nr_fine')\n        compute_psnr_prefix('dr_fine')\n\n        depth_pr = data_pr['render_depth'].reshape([h,w]).detach().cpu().numpy()\n        depth_gt = data_gt['que_imgs_info']['true_depth'][0,0].cpu().numpy()\n\n\n        outputs['depth_mae'] = torch.tensor([compute_mae(depth_pr, depth_gt)],dtype=torch.float32) # lower is better\n        return outputs\n\nclass VisualizeImage(Loss):\n    def __init__(self, cfg):\n        super().__init__([])\n\n    def __call__(self, data_pr, data_gt, step, **kwargs):\n        if 'que_imgs_info' in data_gt:\n            h, w = data_gt['que_imgs_info']['imgs'].shape[2:]\n        else:\n            h, w = data_pr['que_imgs_info']['imgs'].shape[2:]\n        def get_img(key):\n            rgbs = data_pr[key] # 1,rn,3\n            rgbs = rgbs.reshape([h,w,3]).detach().cpu().numpy()\n            rgbs = color_map_backward(rgbs)\n            return rgbs\n\n        outputs={}\n        imgs=[get_img('pixel_colors_gt'), get_img('pixel_colors_nr')]\n        if 'pixel_colors_dr' 
in data_pr: imgs.append(get_img('pixel_colors_dr'))\n        if 'pixel_colors_nr_fine' in data_pr: imgs.append(get_img('pixel_colors_nr_fine'))\n        if 'pixel_colors_dr_fine' in data_pr: imgs.append(get_img('pixel_colors_dr_fine'))\n\n        data_index=kwargs['data_index']\n        model_name=kwargs['model_name']\n        Path(f'data/vis_val/{model_name}').mkdir(exist_ok=True, parents=True)\n        if h<=64 and w<=64:\n            imsave(f'data/vis_val/{model_name}/step-{step}-index-{data_index}.png',concat_images_list(*imgs))\n        else:\n            imsave(f'data/vis_val/{model_name}/step-{step}-index-{data_index}.jpg', concat_images_list(*imgs))\n        return outputs\n\nname2metrics={\n    'psnr_ssim': PSNR_SSIM,\n    'vis_img': VisualizeImage,\n}\n\ndef psnr_nr(results):\n    return np.mean(results['psnr_nr'])\n\ndef psnr_nr_fine(results):\n    return np.mean(results['psnr_nr_fine'])\n\ndef depth_mae(results):\n    return np.mean(results['depth_mae'])\n\ndef sdf_mae(results):\n    return np.mean(results['sdf_mae'])\n\ndef loss_vgn(results):\n    if 'loss_vgn' in results:\n        return np.mean(results['loss_vgn'])\n    else:\n        return 1e6\n\nname2key_metrics={\n    'psnr_nr': psnr_nr,\n    'psnr_nr_fine': psnr_nr_fine,\n    'depth_mae': depth_mae,\n    'loss_vgn': loss_vgn,\n    'sdf_mae': sdf_mae\n}"
  },
  {
    "path": "src/nr/network/mvsnet/modules.py",
    "content": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom inplace_abn import InPlaceABN\nfrom kornia.utils import create_meshgrid\n\nclass ConvBnReLU(nn.Module):\n    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, pad=1, norm_act=InPlaceABN):\n        super(ConvBnReLU, self).__init__()\n        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=False)\n        self.bn = norm_act(out_channels)\n\n    def forward(self, x):\n        return self.bn(self.conv(x))\n\nclass ConvBnReLU3D(nn.Module):\n    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, pad=1, norm_act=InPlaceABN):\n        super(ConvBnReLU3D, self).__init__()\n        self.conv = nn.Conv3d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=False)\n        self.bn = norm_act(out_channels)\n\n    def forward(self, x):\n        return self.bn(self.conv(x))\n\ndef homo_warp(src_feat, src_proj, ref_proj_inv, depth_values):\n    # src_feat: (B, C, H, W)\n    # src_proj: (B, 4, 4)\n    # ref_proj_inv: (B, 4, 4)\n    # depth_values: (B, D)\n    # out: (B, C, D, H, W)\n    B, C, H, W = src_feat.shape\n    D = depth_values.shape[1]\n    device = src_feat.device\n    dtype = src_feat.dtype\n\n    transform = src_proj @ ref_proj_inv\n    R = transform[:, :3, :3] # (B, 3, 3)\n    T = transform[:, :3, 3:] # (B, 3, 1)\n    # create grid from the ref frame\n    ref_grid = create_meshgrid(H, W, normalized_coordinates=False) # (1, H, W, 2)\n    ref_grid = ref_grid.to(device).to(dtype)\n    ref_grid = ref_grid.permute(0, 3, 1, 2) # (1, 2, H, W)\n    ref_grid = ref_grid.reshape(1, 2, H*W) # (1, 2, H*W)\n    ref_grid = ref_grid.expand(B, -1, -1) # (B, 2, H*W)\n    ref_grid = torch.cat((ref_grid, torch.ones_like(ref_grid[:,:1])), 1) # (B, 3, H*W)\n    ref_grid_d = ref_grid.unsqueeze(2) * depth_values.view(B, 1, D, 1) # (B, 3, D, H*W)\n    ref_grid_d = ref_grid_d.view(B, 3, D*H*W)\n    
src_grid_d = R @ ref_grid_d + T # (B, 3, D*H*W)\n    del ref_grid_d, ref_grid, transform, R, T # release (GPU) memory\n    div_val = src_grid_d[:, -1:]\n    div_val[div_val<1e-4] = 1e-4\n    src_grid = src_grid_d[:, :2] / div_val # divide by depth (B, 2, D*H*W)\n    del src_grid_d, div_val\n    src_grid[:, 0] = src_grid[:, 0]/((W - 1) / 2) - 1 # scale to -1~1\n    src_grid[:, 1] = src_grid[:, 1]/((H - 1) / 2) - 1 # scale to -1~1\n    src_grid = src_grid.permute(0, 2, 1) # (B, D*H*W, 2)\n    src_grid = src_grid.view(B, D, H*W, 2)\n\n    warped_src_feat = F.grid_sample(src_feat, src_grid,\n                                    mode='bilinear', padding_mode='zeros',\n                                    align_corners=True) # (B, C, D, H*W)\n    warped_src_feat = warped_src_feat.view(B, C, D, H, W)\n\n    return warped_src_feat\n\ndef depth_regression(p, depth_values):\n    # p: probability volume [B, D, H, W]\n    # depth_values: discrete depth values [B, D]\n    depth_values = depth_values.view(*depth_values.shape, 1, 1)\n    depth = torch.sum(p * depth_values, 1)\n    return depth"
  },
  {
    "path": "src/nr/network/mvsnet/mvsnet.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom network.mvsnet.modules import ConvBnReLU, ConvBnReLU3D, depth_regression, homo_warp\nfrom inplace_abn import InPlaceABN\n\nclass FeatureNet(nn.Module):\n    def __init__(self, norm_act=InPlaceABN):\n        super(FeatureNet, self).__init__()\n        self.inplanes = 32\n\n        self.conv0 = ConvBnReLU(3, 8, 3, 1, 1, norm_act=norm_act)\n        self.conv1 = ConvBnReLU(8, 8, 3, 1, 1, norm_act=norm_act)\n\n        self.conv2 = ConvBnReLU(8, 16, 5, 2, 2, norm_act=norm_act)\n        self.conv3 = ConvBnReLU(16, 16, 3, 1, 1, norm_act=norm_act)\n        self.conv4 = ConvBnReLU(16, 16, 3, 1, 1, norm_act=norm_act)\n\n        self.conv5 = ConvBnReLU(16, 32, 5, 2, 2, norm_act=norm_act)\n        self.conv6 = ConvBnReLU(32, 32, 3, 1, 1, norm_act=norm_act)\n        self.feature = nn.Conv2d(32, 32, 3, 1, 1)\n\n    def forward(self, x):\n        x = self.conv1(self.conv0(x))\n        x = self.conv4(self.conv3(self.conv2(x)))\n        x = self.feature(self.conv6(self.conv5(x)))\n        return x\n\nclass CostRegNet(nn.Module):\n    def __init__(self, norm_act=InPlaceABN):\n        super(CostRegNet, self).__init__()\n        self.conv0 = ConvBnReLU3D(32, 8, norm_act=norm_act)\n\n        self.conv1 = ConvBnReLU3D(8, 16, stride=2, norm_act=norm_act)\n        self.conv2 = ConvBnReLU3D(16, 16, norm_act=norm_act)\n\n        self.conv3 = ConvBnReLU3D(16, 32, stride=2, norm_act=norm_act)\n        self.conv4 = ConvBnReLU3D(32, 32, norm_act=norm_act)\n\n        self.conv5 = ConvBnReLU3D(32, 64, stride=2, norm_act=norm_act)\n        self.conv6 = ConvBnReLU3D(64, 64, norm_act=norm_act)\n\n        self.conv7 = nn.Sequential(\n            nn.ConvTranspose3d(64, 32, kernel_size=3, padding=1, output_padding=1, stride=2, bias=False),\n            norm_act(32))\n\n        self.conv9 = nn.Sequential(\n            nn.ConvTranspose3d(32, 16, kernel_size=3, padding=1, output_padding=1, stride=2, bias=False),\n    
        norm_act(16))\n\n        self.conv11 = nn.Sequential(\n            nn.ConvTranspose3d(16, 8, kernel_size=3, padding=1, output_padding=1, stride=2, bias=False),\n            norm_act(8))\n\n        self.prob = nn.Conv3d(8, 1, 3, stride=1, padding=1)\n\n    def forward(self, x):\n        conv0 = self.conv0(x)\n        conv2 = self.conv2(self.conv1(conv0))\n        conv4 = self.conv4(self.conv3(conv2))\n        x = self.conv6(self.conv5(conv4))\n        x = conv4 + self.conv7(x)\n        del conv4\n        x = conv2 + self.conv9(x)\n        del conv2\n        x = conv0 + self.conv11(x)\n        del conv0\n        x = self.prob(x)\n        return x\n\nclass MVSNet(nn.Module):\n    def __init__(self, norm_act=InPlaceABN):\n        super(MVSNet, self).__init__()\n        self.feature = FeatureNet(norm_act)\n        self.cost_regularization = CostRegNet(norm_act)\n\n    def forward(self, imgs, proj_mats, depth_values):\n        # imgs: (B, V, 3, H, W)\n        # proj_mats: (B, V, 4, 4)\n        # depth_values: (B, D)\n        B, V, _, H, W = imgs.shape\n        D = depth_values.shape[1]\n\n        # step 1. feature extraction\n        # in: images; out: 32-channel feature maps\n        imgs = imgs.reshape(B*V, 3, H, W)\n        feats = self.feature(imgs) # (B*V, F, h, w)\n        del imgs\n        feats = feats.reshape(B, V, *feats.shape[1:]) # (B, V, F, h, w)\n        ref_feats, src_feats = feats[:, 0], feats[:, 1:]\n        ref_proj, src_projs = proj_mats[:, 0], proj_mats[:, 1:]\n        src_feats = src_feats.permute(1, 0, 2, 3, 4) # (V-1, B, F, h, w)\n        src_projs = src_projs.permute(1, 0, 2, 3) # (V-1, B, 4, 4)\n\n        # step 2. 
differentiable homograph, build cost volume\n        ref_volume = ref_feats.unsqueeze(2).repeat(1, 1, D, 1, 1) # (B, F, D, h, w)\n        volume_sum = ref_volume\n        volume_sq_sum = ref_volume ** 2\n        del ref_volume\n\n        ref_proj = torch.inverse(ref_proj)\n        for src_feat, src_proj in zip(src_feats, src_projs):\n            warped_volume = homo_warp(src_feat, src_proj, ref_proj, depth_values)\n            volume_sum = volume_sum + warped_volume\n            volume_sq_sum = volume_sq_sum + warped_volume ** 2\n            del warped_volume\n        # aggregate multiple feature volumes by variance\n        volume_variance = volume_sq_sum.div_(V).sub_(volume_sum.div_(V).pow_(2))\n        del volume_sq_sum, volume_sum\n        \n        # step 3. cost volume regularization\n        cost_reg = self.cost_regularization(volume_variance).squeeze(1)\n        prob_volume = F.softmax(cost_reg, 1) # (B, D, h, w)\n        depth = depth_regression(prob_volume, depth_values)\n        \n        with torch.no_grad():\n            # sum probability of 4 consecutive depth indices\n            prob_volume_sum4 = 4 * F.avg_pool3d(F.pad(prob_volume.unsqueeze(1),\n                                                      pad=(0, 0, 0, 0, 1, 2)),\n                                                (4, 1, 1), stride=1).squeeze(1) # (B, D, h, w)\n            # find the (rounded) index that is the final prediction\n            depth_index = depth_regression(prob_volume,\n                                           torch.arange(D,\n                                                        device=prob_volume.device,\n                                                        dtype=prob_volume.dtype)\n                                          ).long() # (B, h, w)\n            # the confidence is the 4-sum probability at this index\n            confidence = torch.gather(prob_volume_sum4, 1, \n                                      depth_index.unsqueeze(1)).squeeze(1) # (B, h, w)\n\n       
 return depth, confidence\n\n    def construct_cost_volume(self, ref_imgs, ref_nn_idx, ref_prjs, depth_values, batch_num=2):\n        # ref_imgs rfn,3,h,w\n        # ref_nn_ids: rfn,nn\n        # ref_prjs: rfn,4,4    note it is already scaled!!!\n        # depth_values: rfn,dn\n        # return: rfn,dn,h//4,w//4\n        ref_feats = self.feature(ref_imgs) # rfn,f,h,w\n        ref_prjs_inv = torch.inverse(ref_prjs) # rfn,4,4\n        dn = depth_values.shape[1]\n\n        rfn, n_num = ref_nn_idx.shape\n        cost_reg_all = []\n        for rfi in range(0,rfn,batch_num):\n            volume_sum, volume_sum_sq = ref_feats[rfi:rfi+batch_num].unsqueeze(2), ref_feats[rfi:rfi+batch_num].unsqueeze(2)**2 # 1,f,1,h,w\n            volume_sum, volume_sum_sq = volume_sum.repeat(1, 1, dn, 1, 1), volume_sum_sq.repeat(1, 1, dn, 1, 1)\n            for ni in range(n_num):\n                warp_feats = homo_warp(ref_feats[ref_nn_idx[rfi:rfi+batch_num,ni]],ref_prjs[ref_nn_idx[rfi:rfi+batch_num,ni]],\n                                       ref_prjs_inv[rfi:rfi+batch_num],depth_values[rfi:rfi+batch_num]) # 1,f,dn,h,w\n                volume_sum += warp_feats\n                volume_sum_sq += warp_feats**2\n            volume_variance = volume_sum_sq.div_(n_num+1).sub_(volume_sum.div_(n_num+1).pow_(2)) # 1,f,dn,h,w\n            del volume_sum_sq, volume_sum\n             # 1,dn,h,w\n            cost_reg_all.append(self.cost_regularization(volume_variance).squeeze(1))\n        cost_reg_all = torch.cat(cost_reg_all,0)\n        return cost_reg_all\n\n    def construct_cost_volume_with_src(self, ref_imgs, src_imgs, ref_nn_idx, ref_prjs, src_prjs, depth_values, batch_num=2):\n        # ref_imgs rfn,3,h,w\n        # src_imgs srn,3,h,w\n        # ref_nn_ids: rfn,nn\n        # ref_prjs: rfn,4,4    note it is already scaled!!!\n        # src_prjs: src,4,4    note it is already scaled!!!\n        # depth_values: rfn,dn\n        # return: rfn,dn,h//4,w//4\n        ref_feats = self.feature(ref_imgs) 
# rfn,f,h,w\n        src_feats = self.feature(src_imgs) # src,f,h,w\n        ref_prjs_inv = torch.inverse(ref_prjs) # rfn,4,4\n        dn = depth_values.shape[1]\n\n        rfn, n_num = ref_nn_idx.shape\n        cost_reg_all = []\n        for rfi in range(0,rfn,batch_num):\n            volume_sum, volume_sum_sq = ref_feats[rfi:rfi+batch_num].unsqueeze(2), ref_feats[rfi:rfi+batch_num].unsqueeze(2)**2 # 1,f,1,h,w\n            volume_sum, volume_sum_sq = volume_sum.repeat(1, 1, dn, 1, 1), volume_sum_sq.repeat(1, 1, dn, 1, 1)\n            for ni in range(n_num):\n                warp_feats = homo_warp(src_feats[ref_nn_idx[rfi:rfi+batch_num,ni]],src_prjs[ref_nn_idx[rfi:rfi+batch_num,ni]],\n                                       ref_prjs_inv[rfi:rfi+batch_num],depth_values[rfi:rfi+batch_num]) # 1,f,dn,h,w\n                volume_sum += warp_feats\n                volume_sum_sq += warp_feats**2\n            volume_variance = volume_sum_sq.div_(n_num+1).sub_(volume_sum.div_(n_num+1).pow_(2)) # 1,f,dn,h,w\n            del volume_sum_sq, volume_sum\n             # 1,dn,h,w\n            cost_reg_all.append(self.cost_regularization(volume_variance).squeeze(1))\n        cost_reg_all = torch.cat(cost_reg_all,0)\n        return cost_reg_all\n\n\ndef extract_model_state_dict(ckpt_path, prefixes_to_ignore=[]):\n    checkpoint = torch.load(ckpt_path, map_location=torch.device('cpu'))\n    checkpoint_ = {}\n    if 'state_dict' in checkpoint: # if it's a pytorch-lightning checkpoint\n        for k, v in checkpoint['state_dict'].items():\n            if not k.startswith('model.'):\n                continue\n            k = k[6:] # remove 'model.'\n            for prefix in prefixes_to_ignore:\n                if k.startswith(prefix):\n                    print('ignore', k)\n                    break\n            else:\n                checkpoint_[k] = v\n    else: # if it only has model weights\n        for k, v in checkpoint.items():\n            for prefix in prefixes_to_ignore:\n    
            if k.startswith(prefix):\n                    print('ignore', k)\n                    break\n            else:\n                checkpoint_[k] = v\n    return checkpoint_\n\ndef load_ckpt(model, ckpt_path, prefixes_to_ignore=[]):\n    model_dict = model.state_dict()\n    checkpoint_ = extract_model_state_dict(ckpt_path, prefixes_to_ignore)\n    model_dict.update(checkpoint_)\n    model.load_state_dict(model_dict)"
  },
  {
    "path": "src/nr/network/neus.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\nclass SingleVarianceNetwork(nn.Module):\n    def __init__(self, init_val, fix_s=-1):\n        super(SingleVarianceNetwork, self).__init__()\n        self.register_parameter('variance', nn.Parameter(torch.tensor(init_val)))\n        self.variance.requires_grad = False\n        self.step = 0\n        self.fix_s = fix_s\n    def set_step(self, step):\n        self.step = step\n\n    def forward(self, x):\n        if self.fix_s != -1 and self.step > self.fix_s:\n            self.variance.requires_grad = True\n        return torch.ones([len(x), 1], device=x.device) * torch.exp(self.variance * 10.0)\n\nclass Embedder:\n    def __init__(self, **kwargs):\n        self.kwargs = kwargs\n        self.create_embedding_fn()\n\n    def create_embedding_fn(self):\n        embed_fns = []\n        d = self.kwargs['input_dims']\n        out_dim = 0\n        if self.kwargs['include_input']:\n            embed_fns.append(lambda x: x)\n            out_dim += d\n\n        max_freq = self.kwargs['max_freq_log2']\n        N_freqs = self.kwargs['num_freqs']\n\n        if self.kwargs['log_sampling']:\n            freq_bands = 2. 
** torch.linspace(0., max_freq, N_freqs)\n        else:\n            freq_bands = torch.linspace(2.**0., 2.**max_freq, N_freqs)\n\n        for freq in freq_bands:\n            for p_fn in self.kwargs['periodic_fns']:\n                embed_fns.append(lambda x, p_fn=p_fn, freq=freq: p_fn(x * freq))\n                out_dim += d\n\n        self.embed_fns = embed_fns\n        self.out_dim = out_dim\n\n    def embed(self, inputs):\n        return torch.cat([fn(inputs) for fn in self.embed_fns], -1)\n\n\ndef get_embedder(multires, input_dims=3):\n    embed_kwargs = {\n        'include_input': True,\n        'input_dims': input_dims,\n        'max_freq_log2': multires-1,\n        'num_freqs': multires,\n        'log_sampling': True,\n        'periodic_fns': [torch.sin, torch.cos],\n    }\n\n    embedder_obj = Embedder(**embed_kwargs)\n    def embed(x, eo=embedder_obj): return eo.embed(x)\n    return embed, embedder_obj.out_dim\n"
  },
  {
    "path": "src/nr/network/ops.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\ndef conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=dilation, groups=groups, bias=False, dilation=dilation, padding_mode='reflect')\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False, padding_mode='reflect')\n\ndef interpolate_feats(feats, points, h=None, w=None, padding_mode='zeros', align_corners=False, inter_mode='bilinear'):\n    \"\"\"\n\n    :param feats:   b,f,h,w\n    :param points:  b,n,2\n    :param h:       float\n    :param w:       float\n    :param padding_mode:\n    :param align_corners:\n    :param inter_mode:\n    :return:\n    \"\"\"\n    b, _, ch, cw = feats.shape\n    if h is None and w is None:\n        h, w = ch, cw\n    x_norm = points[:, :, 0] / (w - 1) * 2 - 1\n    y_norm = points[:, :, 1] / (h - 1) * 2 - 1\n    points_norm = torch.stack([x_norm, y_norm], -1).unsqueeze(1)    # [srn,1,n,2]\n    feats_inter = F.grid_sample(feats, points_norm, mode=inter_mode, padding_mode=padding_mode, align_corners=align_corners).squeeze(2)      # srn,f,n\n    feats_inter = feats_inter.permute(0,2,1)\n    return  feats_inter\n\ndef masked_mean_var(feats,mask,dim=2):\n    mask=mask.float() # b,1,n,1\n    mask_sum = torch.clamp_min(torch.sum(mask,dim,keepdim=True),min=1e-4) # b,1,1,1\n    feats_mean = torch.sum(feats*mask,dim,keepdim=True)/mask_sum  # b,f,1,1\n    feats_var = torch.sum((feats-feats_mean)**2*mask,dim,keepdim=True)/mask_sum # b,f,1,1\n    return feats_mean, feats_var\n\nclass ResidualBlock(nn.Module):\n    def __init__(self, dim_in, dim_out, dim_inter=None, use_norm=True, norm_layer=nn.BatchNorm2d,bias=False):\n        super().__init__()\n        if dim_inter is None:\n            
dim_inter=dim_out\n\n        if use_norm:\n            self.conv=nn.Sequential(\n                norm_layer(dim_in),\n                nn.ReLU(True),\n                nn.Conv2d(dim_in,dim_inter,3,1,1,bias=bias,padding_mode='reflect'),\n                norm_layer(dim_inter),\n                nn.ReLU(True),\n                nn.Conv2d(dim_inter,dim_out,3,1,1,bias=bias,padding_mode='reflect'),\n            )\n        else:\n            self.conv=nn.Sequential(\n                nn.ReLU(True),\n                nn.Conv2d(dim_in,dim_inter,3,1,1),\n                nn.ReLU(True),\n                nn.Conv2d(dim_inter,dim_out,3,1,1),\n            )\n\n        self.short_cut=None\n        if dim_in!=dim_out:\n            self.short_cut=nn.Conv2d(dim_in,dim_out,1,1)\n\n    def forward(self, feats):\n        feats_out=self.conv(feats)\n        if self.short_cut is not None:\n            feats_out=self.short_cut(feats)+feats_out\n        else:\n            feats_out=feats_out+feats\n        return feats_out\n\nclass AddBias(nn.Module):\n    def __init__(self,val):\n        super().__init__()\n        self.val=val\n\n    def forward(self,x):\n        return x+self.val\n\nclass BasicBlock(nn.Module):\n    expansion = 1\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,\n                 base_width=64, dilation=1, norm_layer=None):\n        super(BasicBlock, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        if groups != 1 or base_width != 64:\n            raise ValueError('BasicBlock only supports groups=1 and base_width=64')\n        if dilation > 1:\n            raise NotImplementedError(\"Dilation > 1 not supported in BasicBlock\")\n        # Both self.conv1 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = norm_layer(planes, track_running_stats=False, affine=True)\n        self.relu = nn.ReLU(inplace=True)\n       
 self.conv2 = conv3x3(planes, planes)\n        self.bn2 = norm_layer(planes, track_running_stats=False, affine=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\nclass conv(nn.Module):\n    def __init__(self, num_in_layers, num_out_layers, kernel_size, stride):\n        super(conv, self).__init__()\n        self.kernel_size = kernel_size\n        self.conv = nn.Conv2d(num_in_layers,\n                              num_out_layers,\n                              kernel_size=kernel_size,\n                              stride=stride,\n                              padding=(self.kernel_size - 1) // 2,\n                              padding_mode='reflect')\n        self.bn = nn.InstanceNorm2d(num_out_layers, track_running_stats=False, affine=True)\n\n    def forward(self, x):\n        return F.elu(self.bn(self.conv(x)), inplace=True)\n\nclass upconv(nn.Module):\n    def __init__(self, num_in_layers, num_out_layers, kernel_size, scale):\n        super(upconv, self).__init__()\n        self.scale = scale\n        self.conv = conv(num_in_layers, num_out_layers, kernel_size, 1)\n\n    def forward(self, x):\n        x = nn.functional.interpolate(x, scale_factor=self.scale, align_corners=True, mode='bilinear')\n        return self.conv(x)\n\nclass ResUNetLight(nn.Module):\n    def __init__(self, in_dim=3, layers=(2, 3, 6, 3), out_dim=32, inplanes=32):\n        super(ResUNetLight, self).__init__()\n        # layers = [2, 3, 6, 3]\n        norm_layer = nn.InstanceNorm2d\n        self._norm_layer = norm_layer\n        self.dilation = 1\n        block = BasicBlock\n        
replace_stride_with_dilation = [False, False, False]\n        self.inplanes = inplanes\n        self.groups = 1  # seems useless\n        self.base_width = 64  # seems useless\n        self.conv1 = nn.Conv2d(in_dim, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False,\n                               padding_mode='reflect')\n        self.bn1 = norm_layer(self.inplanes, track_running_stats=False, affine=True)\n        self.relu = nn.ReLU(inplace=True)\n        self.layer1 = self._make_layer(block, 32, layers[0], stride=2)\n        self.layer2 = self._make_layer(block, 64, layers[1], stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = self._make_layer(block, 128, layers[2], stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n\n        # decoder\n        self.upconv3 = upconv(128, 64, 3, 2)\n        self.iconv3 = conv(64 + 64, 64, 3, 1)\n        self.upconv2 = upconv(64, 32, 3, 2)\n        self.iconv2 = conv(32 + 32, 32, 3, 1)\n\n        # fine-level conv\n        self.out_conv = nn.Conv2d(32, out_dim, 1, 1)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        norm_layer = self._norm_layer\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                norm_layer(planes * block.expansion, track_running_stats=False, affine=True),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,\n                            self.base_width, previous_dilation, norm_layer))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            
layers.append(block(self.inplanes, planes, groups=self.groups,\n                                base_width=self.base_width, dilation=self.dilation,\n                                norm_layer=norm_layer))\n\n        return nn.Sequential(*layers)\n\n    def skipconnect(self, x1, x2):\n        diffY = x2.size()[2] - x1.size()[2]\n        diffX = x2.size()[3] - x1.size()[3]\n\n        x1 = F.pad(x1, (diffX // 2, diffX - diffX // 2,\n                        diffY // 2, diffY - diffY // 2))\n        x = torch.cat([x2, x1], dim=1)\n        return x\n\n    def forward(self, x):\n        x = self.relu(self.bn1(self.conv1(x)))\n\n        x1 = self.layer1(x)\n        x2 = self.layer2(x1)\n        x3 = self.layer3(x2)\n\n        x = self.upconv3(x3)\n        x = self.skipconnect(x2, x)\n        x = self.iconv3(x)\n\n        x = self.upconv2(x)\n        x = self.skipconnect(x1, x)\n        x = self.iconv2(x)\n\n        x_out = self.out_conv(x)\n        return x_out\n\nclass ResEncoder(nn.Module):\n    def __init__(self):\n        super(ResEncoder, self).__init__()\n        self.inplanes = 32\n        filters = [32, 64, 128]\n        layers = [2, 2, 2, 2]\n        out_planes = 32\n\n        norm_layer = nn.InstanceNorm2d\n        self._norm_layer = norm_layer\n        self.dilation = 1\n        block = BasicBlock\n        replace_stride_with_dilation = [False, False, False]\n        self.groups = 1\n        self.base_width = 64\n\n        self.conv1 = nn.Conv2d(12, self.inplanes, kernel_size=8, stride=2, padding=2,\n                               bias=False, padding_mode='reflect')\n        self.bn1 = norm_layer(self.inplanes, track_running_stats=False, affine=True)\n        self.relu = nn.ReLU(inplace=True)\n        self.layer1 = self._make_layer(block, filters[0], layers[0], stride=2)\n        self.layer2 = self._make_layer(block, filters[1], layers[1], stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = 
self._make_layer(block, filters[2], layers[2], stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n\n        # decoder\n        self.upconv3 = upconv(filters[2], filters[1], 3, 2)\n        self.iconv3 = conv(filters[1]*2, filters[1], 3, 1)\n        self.upconv2 = upconv(filters[1], filters[0], 3, 2)\n        self.iconv2 = conv(filters[0]*2, out_planes, 3, 1)\n        self.out_conv = nn.Conv2d(out_planes, out_planes, 1, 1)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        norm_layer = self._norm_layer\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                norm_layer(planes * block.expansion, track_running_stats=False, affine=True),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,\n                            self.base_width, 1, norm_layer))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(block(self.inplanes, planes, groups=self.groups,\n                                base_width=self.base_width, dilation=self.dilation,\n                                norm_layer=norm_layer))\n\n        return nn.Sequential(*layers)\n\n    def skipconnect(self, x1, x2):\n        diffY = x2.size()[2] - x1.size()[2]\n        diffX = x2.size()[3] - x1.size()[3]\n\n        x1 = F.pad(x1, (diffX // 2, diffX - diffX // 2,\n                        diffY // 2, diffY - diffY // 2))\n\n        # for padding issues, see\n        # https://github.com/HaiyongJiang/U-Net-Pytorch-Unstructured-Buggy/commit/0e854509c2cea854e247a9c615f175f76fbb2e3a\n        # 
https://github.com/xiaopeng-liao/Pytorch-UNet/commit/8ebac70e633bac59fc22bb5195e513d5832fb3bd\n\n        x = torch.cat([x2, x1], dim=1)\n        return x\n\n    def forward(self, x):\n        x = self.relu(self.bn1(self.conv1(x)))\n\n        x1 = self.layer1(x)\n        x2 = self.layer2(x1)\n        x3 = self.layer3(x2)\n\n        x = self.upconv3(x3)\n        x = self.skipconnect(x2, x)\n        x = self.iconv3(x)\n\n        x = self.upconv2(x)\n        x = self.skipconnect(x1, x)\n        x = self.iconv2(x)\n\n        x_out = self.out_conv(x)\n        return x_out"
  },
  {
    "path": "src/nr/network/render_ops.py",
    "content": "import torch\nfrom network.ops import interpolate_feats\n\ndef coords2rays(coords, poses, Ks):\n    \"\"\"\n    :param coords:   [rfn,rn,2]\n    :param poses:    [rfn,3,4]\n    :param Ks:       [rfn,3,3]\n    :return:\n        ref_rays:\n            centers:    [rfn,rn,3]\n            directions: [rfn,rn,3]\n    \"\"\"\n    rot = poses[:, :, :3].unsqueeze(1).permute(0, 1, 3, 2)  # rfn,1,3,3\n    trans = -rot @ poses[:, :, 3:].unsqueeze(1)  # rfn,1,3,1\n\n    rfn, rn, _ = coords.shape\n    centers = trans.repeat(1, rn, 1, 1).squeeze(-1)  # rfn,rn,3\n    coords = torch.cat([coords, torch.ones([rfn, rn, 1], dtype=torch.float32, device=coords.device)], 2)  # rfn,rn,3\n    Ks_inv = torch.inverse(Ks).unsqueeze(1)\n    cam_xyz = Ks_inv @ coords.unsqueeze(3)\n    cam_xyz = rot @ cam_xyz + trans\n    directions = cam_xyz.squeeze(3) - centers\n    # directions = directions / torch.clamp(torch.norm(directions, dim=2, keepdim=True), min=1e-4)\n    return centers, directions\n\ndef depth2points(que_imgs_info, que_depth):\n    \"\"\"\n    :param que_imgs_info:\n    :param que_depth:       qn,rn,dn\n    :return:\n    \"\"\"\n    centers, directions = coords2rays(que_imgs_info['coords'],que_imgs_info['poses'],que_imgs_info['Ks']) # centers, directions qn,rn,3\n    qn, rn, _ = centers.shape\n    que_pts = centers.unsqueeze(2) + directions.unsqueeze(2) * que_depth.unsqueeze(3) # qn,rn,dn,3\n    qn, rn, dn, _ = que_pts.shape\n    que_dir = -directions / torch.norm(directions, dim=2, keepdim=True)  # qn,rn,3\n    que_dir = que_dir.unsqueeze(2).repeat(1, 1, dn, 1)\n    return que_pts, que_dir # qn,rn,dn,3\n\ndef depth2dists(depth):\n    device = depth.device\n    dists = depth[...,1:]-depth[...,:-1]\n    return torch.cat([dists, torch.full([*depth.shape[:-1], 1], 1e6, dtype=torch.float32, device=device)], -1)\n\ndef depth2inv_dists(depth,depth_range):\n    near, far = -1 / depth_range[:, 0], -1 / depth_range[:, 1]\n    near, far = near[:, None, None], far[:, None, 
None]\n    depth_inv = -1 / depth  # qn,rn,dn\n    depth_inv = (depth_inv - near) / (far - near)\n    dists = depth2dists(depth_inv)  # qn,rn,dn\n    return dists\n\ndef interpolate_feature_map(ray_feats, coords, mask, h, w, border_type='border'):\n    \"\"\"\n    :param ray_feats:       rfn,f,h,w\n    :param coords:          rfn,pn,2\n    :param mask:            rfn,pn\n    :param h:\n    :param w:\n    :param border_type:\n    :return:\n    \"\"\"\n    fh, fw = ray_feats.shape[-2:]\n    if fh == h and fw == w:\n        cur_ray_feats = interpolate_feats(ray_feats, coords, h, w, border_type, True)  # rfn,pn,f\n    else:\n        cur_ray_feats = interpolate_feats(ray_feats, coords, h, w, border_type, False)  # rfn,pn,f\n    cur_ray_feats = cur_ray_feats * mask.float().unsqueeze(-1) # rfn,pn,f\n    return cur_ray_feats\n\ndef alpha_values2hit_prob(alpha_values):\n    \"\"\"\n    :param alpha_values: qn,rn,dn\n    :return: qn,rn,dn\n    \"\"\"\n    no_hit_density = torch.cat([torch.ones((*alpha_values.shape[:-1], 1))\n                               .to(alpha_values.device), 1. 
- alpha_values + 1e-10], -1)  # rn,k+1\n    hit_prob = alpha_values * torch.cumprod(no_hit_density, -1)[..., :-1]  # [n,k]\n    return hit_prob\n\ndef project_points_coords(pts, Rt, K):\n    \"\"\"\n    :param pts:  [pn,3]\n    :param Rt:   [rfn,3,4]\n    :param K:    [rfn,3,3]\n    :return:\n        pts_2d:       [rfn,pn,2]\n        valid_mask:   [rfn,pn]   True where the projection depth is valid\n        depth:        [rfn,pn,1]\n    \"\"\"\n    pn = pts.shape[0]\n    hpts = torch.cat([pts,torch.ones([pn,1],device=pts.device,dtype=torch.float32)],1)\n    srn = Rt.shape[0]\n    KRt = K @ Rt # rfn,3,4\n    last_row = torch.zeros([srn,1,4],device=pts.device,dtype=torch.float32)\n    last_row[:,:,3] = 1.0\n    H = torch.cat([KRt,last_row],1) # rfn,4,4\n    pts_cam = H[:,None,:,:] @ hpts[None,:,:,None]\n    pts_cam = pts_cam[:,:,:3,0]\n    depth = pts_cam[:,:,2:]\n    invalid_mask = torch.abs(depth)<1e-4\n    depth[invalid_mask] = 1e-3\n    pts_2d = pts_cam[:,:,:2]/depth\n    return pts_2d, ~(invalid_mask[...,0]), depth\n\ndef project_points_directions(poses,points):\n    \"\"\"\n    :param poses:       rfn,3,4\n    :param points:      pn,3\n    :return: rfn,pn,3\n    \"\"\"\n    cam_pts = -poses[:, :, :3].permute(0, 2, 1) @ poses[:, :, 3:]  # rfn,3,1\n    dir = points.unsqueeze(0) - cam_pts.permute(0, 2, 1)  # [1,pn,3] - [rfn,1,3] -> rfn,pn,3\n    dir = -dir / torch.clamp_min(torch.norm(dir, dim=2, keepdim=True), min=1e-5)  # rfn,pn,3\n    return dir\n\ndef project_points_ref_views(ref_imgs_info, que_points):\n    \"\"\"\n    :param ref_imgs_info:\n    :param que_points:      pn,3\n    :return:\n    \"\"\"\n    prj_pts, prj_valid_mask, prj_depth = project_points_coords(\n        que_points, ref_imgs_info['poses'], ref_imgs_info['Ks']) # rfn,pn,2\n    h,w=ref_imgs_info['imgs'].shape[-2:]\n    prj_img_invalid_mask = (prj_pts[..., 0] < -0.5) | (prj_pts[..., 0] >= w - 0.5) | \\\n                           (prj_pts[..., 1] < -0.5) | (prj_pts[..., 1] >= h - 0.5)\n    valid_mask = prj_valid_mask & (~prj_img_invalid_mask)\n    prj_dir = 
project_points_directions(ref_imgs_info['poses'], que_points) # rfn,pn,3\n    return prj_dir, prj_pts, prj_depth, valid_mask\n\ndef project_points_dict(ref_imgs_info, que_pts):\n    # project all points\n    qn, rn, dn, _ = que_pts.shape\n    prj_dir, prj_pts, prj_depth, prj_mask = project_points_ref_views(ref_imgs_info, que_pts.reshape([qn * rn * dn, 3]))\n    rfn, _, h, w = ref_imgs_info['imgs'].shape\n    prj_ray_feats = interpolate_feature_map(ref_imgs_info['ray_feats'], prj_pts, prj_mask, h, w)\n    prj_rgb = interpolate_feature_map(ref_imgs_info['imgs'], prj_pts, prj_mask, h, w)\n    prj_dict = {'dir':prj_dir, 'pts':prj_pts, 'depth':prj_depth, 'mask': prj_mask.float(), 'ray_feats':prj_ray_feats, 'rgb':prj_rgb}\n\n    # post process\n    for k, v in prj_dict.items():\n        prj_dict[k]=v.reshape(rfn,qn,rn,dn,-1)\n    return prj_dict\n\ndef sample_depth(depth_range, coords, sample_num, random_sample):\n    \"\"\"\n    :param depth_range:   qn,2\n    :param coords:        qn,rn,2\n    :param sample_num:\n    :param random_sample:\n    :return: que_depth qn,rn,dn / que_dists qn,rn,dn\n    \"\"\"\n    qn, rn, _ = coords.shape\n    device = coords.device\n    near, far = depth_range[:,0], depth_range[:,1] # qn\n    dn = sample_num\n    assert dn > 2\n    interval = (1 / far - 1 / near) / (dn - 1)  # qn\n    val = torch.arange(1, dn - 1, dtype=torch.float32, device=near.device)[None, None, :]\n    if random_sample:\n        val = val + (torch.rand(qn, rn, dn-2, dtype=torch.float32, device=device) - 0.5) * 0.999\n    else:\n        val = val + torch.zeros(qn, rn, dn-2, dtype=torch.float32, device=device)\n    ticks = interval[:, None, None] * val\n\n    diff = (1 / far - 1 / near)\n    ticks = torch.cat([torch.zeros(qn,rn,1,dtype=torch.float32,device=device),ticks,diff[:,None,None].repeat(1,rn,1)],-1)\n    que_depth = 1 / (1 / near[:, None, None] + ticks)  # qn,rn,dn\n    que_dists = torch.cat([que_depth[...,1:],torch.full([*que_depth.shape[:-1],1],1e6,dtype=torch.float32,device=device)],-1) - que_depth\n    return que_depth, que_dists # 
qn, rn, dn\n\ndef sample_fine_depth(depth, hit_prob, depth_range, sample_num, random_sample, inv_mode=True):\n    \"\"\"\n    :param depth:       qn,rn,dn\n    :param hit_prob:    qn,rn,dn\n    :param depth_range: qn,2\n    :param sample_num:\n    :param random_sample:\n    :param inv_mode:\n    :return: qn,rn,dn\n    \"\"\"\n    if inv_mode:\n        near, far = depth_range[0,0], depth_range[0,1]\n        near, far = -1/near, -1/far\n        depth_inv = -1 / depth  # qn,rn,dn\n        depth_inv = (depth_inv - near) / (far - near)\n        depth = depth_inv\n\n    depth_center = (depth[...,1:] + depth[...,:-1])/2\n    depth_center = torch.cat([depth[...,0:1],depth_center,depth[...,-1:]],-1) # rfn,pn,dn+1\n    fdn = sample_num\n    # Get pdf\n    hit_prob = hit_prob + 1e-5  # prevent nans\n    pdf = hit_prob / torch.sum(hit_prob, -1, keepdim=True) # rfn,pn,dn-1\n    cdf = torch.cumsum(pdf, -1) # rfn,pn,dn-1\n    cdf = torch.cat([torch.zeros_like(cdf[...,:1]), cdf], -1)  # rfn,pn,dn\n\n    # Take uniform samples\n    if not random_sample:\n        interval = 1 / fdn\n        u = 0.5*interval+torch.arange(fdn)*interval\n        # u = torch.linspace(0., 1., steps=fdn)\n        u = u.expand(list(cdf.shape[:-1]) + [fdn]) # rfn,pn,fdn\n    else:\n        u = torch.rand(list(cdf.shape[:-1]) + [fdn])\n\n    # Invert CDF\n    device = pdf.device\n    u = u.to(device).contiguous() # rfn,pn,fdn\n    inds = torch.searchsorted(cdf, u, right=True)                       # rfn,pn,fdn\n    below = torch.max(torch.zeros_like(inds-1), inds-1)                 # rfn,pn,fdn\n    above = torch.min((cdf.shape[-1]-1) * torch.ones_like(inds), inds)  # rfn,pn,fdn\n    inds_g = torch.stack([below, above], -1)  # (batch, N_samples, 2)   # rfn,pn,fdn,2\n\n    matched_shape = [*inds_g.shape[:-1], cdf.shape[-1]]\n    cdf_g = torch.gather(cdf.unsqueeze(-2).expand(matched_shape), -1, inds_g)    # rfn,pn,fdn,2\n    bins_g = torch.gather(depth_center.unsqueeze(-2).expand(matched_shape), -1, inds_g) # 
rfn,pn,fdn,2\n\n    denom = (cdf_g[...,1]-cdf_g[...,0]) # rfn,pn,fdn\n    denom = torch.where(denom<1e-5, torch.ones_like(denom), denom)\n    t = (u-cdf_g[...,0])/denom\n    fine_depth = bins_g[...,0] + t * (bins_g[...,1]-bins_g[...,0])\n\n    if inv_mode:\n        near, far = depth_range[0,0], depth_range[0,1]\n        near, far = -1/near, -1/far\n        fine_depth = fine_depth * (far - near) + near\n        fine_depth = -1/fine_depth\n    return fine_depth\n"
  },
  {
    "path": "src/nr/network/renderer.py",
    "content": "import torch\nimport numpy as np\nimport torch.nn as nn\n\nfrom network.aggregate_net import name2agg_net\nfrom network.dist_decoder import name2dist_decoder\nfrom network.init_net import name2init_net\nfrom network.ops import ResUNetLight\nfrom network.vis_encoder import name2vis_encoder\nfrom network.render_ops import *\nfrom utils.field_utils import TSDF_SAMPLE_POINTS\n\nclass NeuralRayRenderer(nn.Module):\n    base_cfg={\n        'vis_encoder_type': 'default',\n        'vis_encoder_cfg': {},\n\n        'dist_decoder_type': 'mixture_logistics',\n        'dist_decoder_cfg': {},\n\n        'agg_net_type': 'default',\n        'agg_net_cfg': {},\n\n        'use_hierarchical_sampling': False,\n        'fine_agg_net_cfg': {},\n        'fine_dist_decoder_cfg': {},\n        'fine_depth_sample_num': 64,\n        'fine_depth_use_all': False,\n\n        'ray_batch_num': 2048,\n        'depth_sample_num': 64,\n        'alpha_value_ground_state': -15,\n        'use_dr_prediction': False,\n        'use_nr_color_for_dr': False,\n        'use_self_hit_prob': False,\n        'use_ray_mask': True,\n        'ray_mask_view_num': 2,\n        'ray_mask_point_num': 8,\n\n        'render_depth': False,\n        'disable_view_dir': False,\n        'render_rgb': False,\n\n        'init_net_type': 'depth',\n        'init_net_cfg': {},\n        'depth_loss_coords_num': 8192,\n    }\n    def __init__(self,cfg):\n        super().__init__()\n        self.cfg = {**self.base_cfg, **cfg}\n        self.vis_encoder = name2vis_encoder[self.cfg['vis_encoder_type']](self.cfg['vis_encoder_cfg'])\n        self.dist_decoder = name2dist_decoder[self.cfg['dist_decoder_type']](self.cfg['dist_decoder_cfg'])\n        self.image_encoder = ResUNetLight(3, [1,2,6,4], 32, inplanes=16)\n        self.init_net=name2init_net[self.cfg['init_net_type']](self.cfg['init_net_cfg'])\n        self.agg_net = name2agg_net[self.cfg['agg_net_type']](self.cfg['agg_net_cfg'])\n        if 
self.cfg['use_hierarchical_sampling']:\n            self.fine_dist_decoder = name2dist_decoder[self.cfg['dist_decoder_type']](self.cfg['fine_dist_decoder_cfg'])\n            self.fine_agg_net = name2agg_net[self.cfg['agg_net_type']](self.cfg['fine_agg_net_cfg'])\n\n        self.use_sdf = self.cfg['agg_net_type'] in ['neus']\n\n    def predict_proj_ray_prob(self, prj_dict, ref_imgs_info, que_dists, is_fine):\n        rfn, qn, rn, dn, _ = prj_dict['mask'].shape\n        # decode ray prob\n        if is_fine:\n            prj_mean, prj_var, prj_vis, prj_aw = self.fine_dist_decoder(prj_dict['ray_feats'])\n        else:\n            prj_mean, prj_var, prj_vis, prj_aw = self.dist_decoder(prj_dict['ray_feats'])\n\n        alpha_values, visibility, hit_prob = self.dist_decoder.compute_prob(\n            prj_dict['depth'].squeeze(-1),que_dists.unsqueeze(0),prj_mean,prj_var,\n            prj_vis, prj_aw, True, ref_imgs_info['depth_range'])\n        # post process\n        prj_dict['alpha'] = alpha_values.reshape(rfn,qn,rn,dn,1) * prj_dict['mask'] + \\\n                            (1 - prj_dict['mask']) * self.cfg['alpha_value_ground_state']\n        prj_dict['vis'] = visibility.reshape(rfn,qn,rn,dn,1) * prj_dict['mask']\n        prj_dict['hit_prob'] = hit_prob.reshape(rfn,qn,rn,dn,1) * prj_dict['mask']\n        return prj_dict\n\n    def get_img_feats(self,ref_imgs_info, prj_dict):\n        rfn, _, h, w = ref_imgs_info['imgs'].shape\n        rfn, qn, rn, dn, _ = prj_dict['pts'].shape\n\n        img_feats = ref_imgs_info['img_feats']\n        prj_img_feats = interpolate_feature_map(img_feats, prj_dict['pts'].reshape(rfn, qn * rn * dn, 2),\n                                                prj_dict['mask'].reshape(rfn, qn * rn * dn), h, w,)\n        prj_dict['img_feats'] = prj_img_feats.reshape(rfn, qn, rn, dn, -1)\n        return prj_dict\n\n    def network_rendering(self, prj_dict, que_dir, que_pts, que_depth, is_fine, is_train, is_sdf=False, sdf_only=False):\n        net = 
self.fine_agg_net if is_fine else self.agg_net\n        que_dists = depth2dists(que_depth) if que_depth is not None else None\n        rendering_outputs = net(prj_dict, que_dir, que_pts, que_dists, is_train)\n        outputs = {}\n        if is_sdf:\n            alpha_values, outputs['sdf_values'], colors, outputs['sdf_gradient_error'], outputs['s'] = rendering_outputs\n            if sdf_only:\n                return outputs\n        else:\n            density, colors = rendering_outputs\n            alpha_values = 1.0 - torch.exp(-torch.relu(density))\n\n        outputs['alpha_values'] = alpha_values\n        outputs['colors_nr'] = colors\n        outputs['hit_prob_nr'] = hit_prob = alpha_values2hit_prob(alpha_values)\n        outputs['pixel_colors_nr'] = torch.sum(hit_prob.unsqueeze(-1)*colors,2)\n\n        return outputs\n\n    def render_by_depth(self, que_depth, que_imgs_info, ref_imgs_info, is_train, is_fine):\n        ref_imgs_info = ref_imgs_info.copy()\n        que_imgs_info = que_imgs_info.copy()\n        que_dists = depth2inv_dists(que_depth,que_imgs_info['depth_range'])\n        # generate points with query depth\n        que_pts, que_dir = depth2points(que_imgs_info, que_depth)\n        if self.cfg['disable_view_dir']:\n            que_dir = None\n        prj_dict = project_points_dict(ref_imgs_info, que_pts)\n        prj_dict = self.predict_proj_ray_prob(prj_dict, ref_imgs_info, que_dists, is_fine)\n        prj_dict = self.get_img_feats(ref_imgs_info, prj_dict)\n\n        outputs = self.network_rendering(prj_dict, que_dir, que_pts, que_depth, is_fine, is_train, is_sdf=self.use_sdf) \n\n\n        if 'imgs' in que_imgs_info:\n            outputs['pixel_colors_gt'] = interpolate_feats(\n                que_imgs_info['imgs'], que_imgs_info['coords'], align_corners=True)\n\n        if self.cfg['use_ray_mask']:\n            outputs['ray_mask'] = torch.sum(prj_dict['mask'].int(),0)>self.cfg['ray_mask_view_num'] # qn,rn,dn,1\n            outputs['ray_mask'] 
= torch.sum(outputs['ray_mask'],2)>self.cfg['ray_mask_point_num'] # qn,rn\n            outputs['ray_mask'] = outputs['ray_mask'][...,0]\n\n        if self.cfg['render_depth']:\n            # qn,rn,dn\n            outputs['render_depth'] = torch.sum(outputs['hit_prob_nr'] * que_depth, -1) # qn,rn\n            #outputs['render_depth_dr'] = torch.sum(hit_prob_dr * que_depth, -1) # qn,rn\n        return outputs\n\n    def fine_render_impl(self, coarse_render_info, que_imgs_info, ref_imgs_info, is_train):\n        fine_depth = sample_fine_depth(coarse_render_info['depth'], coarse_render_info['hit_prob'].detach(),\n                                       que_imgs_info['depth_range'], self.cfg['fine_depth_sample_num'], is_train)\n\n        # qn, rn, fdn+dn\n        if self.cfg['fine_depth_use_all']:\n            que_depth = torch.sort(torch.cat([coarse_render_info['depth'], fine_depth], -1), -1)[0]\n        else:\n            que_depth = torch.sort(fine_depth, -1)[0]\n        outputs = self.render_by_depth(que_depth, que_imgs_info, ref_imgs_info, is_train, True)\n        return outputs\n\n    def render_impl(self, que_imgs_info, ref_imgs_info, is_train):\n        # [qn,rn,dn]\n        # sample points along test ray at different depth\n        que_depth, _ = sample_depth(que_imgs_info['depth_range'], que_imgs_info['coords'], self.cfg['depth_sample_num'], False)\n        outputs = self.render_by_depth(que_depth, que_imgs_info, ref_imgs_info, is_train, False)\n        if self.cfg['use_hierarchical_sampling']:\n            coarse_render_info= {'depth': que_depth, 'hit_prob': outputs['hit_prob_nr']}\n            fine_outputs = self.fine_render_impl(coarse_render_info, que_imgs_info, ref_imgs_info, is_train)\n            for k, v in fine_outputs.items():\n                outputs[k + \"_fine\"] = v\n        return outputs\n\n    def sample_volume(self, ref_imgs_info):\n        ref_imgs_info = ref_imgs_info.copy()\n        res = self.cfg['volume_resolution']\n        que_pts = ( 
torch.from_numpy(TSDF_SAMPLE_POINTS).to(ref_imgs_info['imgs'].device) + \n                    torch.tensor(ref_imgs_info['bbox3d'][0], device=ref_imgs_info['imgs'].device)\n                ).reshape(1, res * res, res, 3)\n        que_pts = torch.flip(que_pts, (2,))\n\n        prj_dict = project_points_dict(ref_imgs_info, que_pts)\n        prj_dict = self.get_img_feats(ref_imgs_info, prj_dict)\n        valid_ratio = torch.sum(prj_dict['mask'],dim=(1,2,3,4)) / np.prod(list(prj_dict['mask'].shape)[1:])\n        if torch.mean(valid_ratio) < 0.5:\n            print(\"!! too low ratio\", valid_ratio)\n\n        prj_dict = self.predict_proj_ray_prob(prj_dict, ref_imgs_info, torch.empty(0), False)\n        que_dir = torch.tensor([0., 0., 1.], device=que_pts.device).reshape(1,1,1,3).repeat(1, res * res, res, 1) if not self.cfg['disable_view_dir'] else None\n\n        feat_list = []\n        mode = self.cfg['volume_type']\n        if 'image' in mode:\n            image_feat = torch.cat([prj_dict['rgb'], prj_dict['img_feats']], dim=-1) # rfn,qn,rn,dn,f\n            # aggregate over the reference-view dim so the volume is independent of the view count\n            mean = torch.mean(image_feat, dim=0) # qn,rn,dn,f\n            var = torch.var(image_feat, dim=0)   # qn,rn,dn,f\n            feat_list.append(torch.cat([mean, var], dim=-1).reshape(1, res, res, res, -1).permute(0, 4, 1, 2, 3))\n\n        if 'alpha' in mode:\n            outputs = self.network_rendering(prj_dict, que_dir, que_pts, None, False, False)\n            feat_list.append(outputs['alpha_values'].reshape(1, 1, res, res, res))\n\n        if 'sdf' in mode:\n            outputs = self.network_rendering(prj_dict, que_dir, que_pts, None, False, False, is_sdf=self.use_sdf, sdf_only=True)\n            feat_list.append(outputs['sdf_values'].reshape(1, 1, res, res, res))\n\n        feat = torch.cat(feat_list, dim=1)\n        feat = torch.flip(feat, (-1,)) # we sample from top to down, so we need to flip here\n        return feat\n\n    def render(self, que_imgs_info, ref_imgs_info, is_train):\n        render_info_all = {}\n        ray_batch_num 
= self.cfg[\"ray_batch_num\"]\n        coords = que_imgs_info['coords']\n        ray_num = coords.shape[1]\n        \n        for ray_id in range(0,ray_num,ray_batch_num):\n            que_imgs_info['coords']=coords[:,ray_id:ray_id+ray_batch_num]\n            render_info = self.render_impl(que_imgs_info,ref_imgs_info,is_train)\n            output_keys = [k for k in render_info.keys()]\n            for k in output_keys:\n                v = render_info[k]\n                if k not in render_info_all:\n                    render_info_all[k]=[]\n                render_info_all[k].append(v)\n\n        for k, v in render_info_all.items():\n            render_info_all[k]=torch.cat(v,1)\n\n        return render_info_all\n\n    def gen_depth_loss_coords(self,h,w,device):\n        coords = torch.stack(torch.meshgrid(torch.arange(h), torch.arange(w)), -1).reshape(-1, 2).to(device)\n        num = self.cfg['depth_loss_coords_num']\n        idxs = torch.randperm(coords.shape[0])\n        idxs = idxs[:num]\n        coords = coords[idxs]\n        return coords\n\n    def predict_mean_for_depth_loss(self, ref_imgs_info):\n        ray_feats = ref_imgs_info['ray_feats'] # rfn,f,h',w'\n        ref_imgs = ref_imgs_info['imgs'] # rfn,3,h,w\n        rfn, _, h, w = ref_imgs.shape\n        coords = self.gen_depth_loss_coords(h,w,ref_imgs.device) # pn,2\n        coords = coords.unsqueeze(0).repeat(rfn,1,1) # rfn,pn,2\n\n        batch_num = self.cfg['depth_loss_coords_num']\n        pn = coords.shape[1]\n        coords_dist_mean, coords_dist_mean_2, coords_dist_mean_fine, coords_dist_mean_fine_2 = [], [], [], []\n        for ci in range(0, pn, batch_num):\n            coords_ = coords[:,ci:ci+batch_num]\n            mask_ = torch.ones(coords_.shape[:2], dtype=torch.float32, device=ref_imgs.device)\n            coords_ray_feats_ = interpolate_feature_map(ray_feats, coords_, mask_, h, w) # rfn,pn,f\n            coords_dist_mean_ = self.dist_decoder.predict_mean(coords_ray_feats_)  # rfn,pn\n  
          coords_dist_mean_2.append(coords_dist_mean_[..., 1])\n            coords_dist_mean_ = coords_dist_mean_[..., 0]\n\n            coords_dist_mean.append(coords_dist_mean_)\n            if self.cfg['use_hierarchical_sampling']:\n                coords_dist_mean_fine_ = self.fine_dist_decoder.predict_mean(coords_ray_feats_)\n                coords_dist_mean_fine_2.append(coords_dist_mean_fine_[..., 1])\n                coords_dist_mean_fine_ = coords_dist_mean_fine_[..., 0]  # use 0 for depth supervision\n                coords_dist_mean_fine.append(coords_dist_mean_fine_)\n\n        coords_dist_mean = torch.cat(coords_dist_mean, 1)\n        outputs = {'depth_mean': coords_dist_mean, 'depth_coords': coords}\n        if len(coords_dist_mean_2)>0:\n            coords_dist_mean_2 = torch.cat(coords_dist_mean_2, 1)\n            outputs['depth_mean_2'] = coords_dist_mean_2\n        if self.cfg['use_hierarchical_sampling']:\n            coords_dist_mean_fine = torch.cat(coords_dist_mean_fine, 1)\n            outputs['depth_mean_fine'] = coords_dist_mean_fine\n            if len(coords_dist_mean_fine_2)>0:\n                coords_dist_mean_fine_2 = torch.cat(coords_dist_mean_fine_2, 1)\n                outputs['depth_mean_fine_2'] = coords_dist_mean_fine_2\n        return outputs\n\n    def forward(self,data):\n        ref_imgs_info = data['ref_imgs_info'].copy()\n        que_imgs_info = data['que_imgs_info'].copy()\n        is_train = 'eval' not in data\n        src_imgs_info = data['src_imgs_info'].copy() if 'src_imgs_info' in data else None\n\n        # extract image feature\n        ref_imgs_info['img_feats'] = self.image_encoder(ref_imgs_info['imgs'])\n        # calc visibility feature map of each view from mvs\n        ref_imgs_info['ray_feats'] = self.init_net(ref_imgs_info, src_imgs_info, is_train)\n        # refine visibity feature along with image feature\n        ref_imgs_info['ray_feats'] = self.vis_encoder(ref_imgs_info['ray_feats'], 
ref_imgs_info['img_feats'])\n        \n        render_outputs = {}\n        if self.cfg['render_rgb']:\n            render_outputs = self.render(que_imgs_info, ref_imgs_info, is_train)\n\n        if self.cfg['sample_volume']:\n            render_outputs['volume'] = self.sample_volume(ref_imgs_info)\n\n        if (self.cfg['use_depth_loss'] and 'true_depth' in ref_imgs_info) or (not is_train):\n            render_outputs.update(self.predict_mean_for_depth_loss(ref_imgs_info))\n\n        return render_outputs\n\nclass GraspNeRF(nn.Module):\n    default_cfg_vgn={\n        'nr_initial_training_steps': 0,\n        'freeze_nr_after_init': False\n    }\n    def __init__(self, cfg):\n        super().__init__()\n        self.cfg={**self.default_cfg_vgn,**cfg}\n        from gd.networks import get_network\n        self.nr_net = NeuralRayRenderer(self.cfg)\n        self.vgn_net = get_network(\"conv\")\n\n    def select(self, out, index):\n        qual_out, rot_out, width_out = out\n        batch_index = torch.arange(qual_out.shape[0])\n        label = qual_out[batch_index, :, index[:, 0], index[:, 1], index[:, 2]].squeeze()\n        rot = rot_out[batch_index, :, index[:, 0], index[:, 1], index[:, 2]]\n        width = width_out[batch_index, :, index[:, 0], index[:, 1], index[:, 2]].squeeze()\n        return (label, rot, width)\n\n    def forward(self, data):\n        # GraspNeRF subclasses nn.Module directly, so the renderer must be\n        # invoked through self.nr_net rather than super().forward\n        if data['step'] < self.cfg['nr_initial_training_steps']:\n            render_outputs = self.nr_net(data)\n            with torch.no_grad():\n                vgn_pred = self.vgn_net(render_outputs['volume'])\n        elif self.cfg['freeze_nr_after_init']:\n            with torch.no_grad():\n                render_outputs = self.nr_net(data)\n            vgn_pred = self.vgn_net(render_outputs['volume'])\n        else:\n            render_outputs = self.nr_net(data)\n            vgn_pred = self.vgn_net(render_outputs['volume'])\n\n        if 'full_vol' not in data: \n            render_outputs['vgn_pred'] = 
self.select(vgn_pred, data['grasp_info'][0])\n        else:\n            render_outputs['vgn_pred'] = vgn_pred\n        return render_outputs\n\nname2network={\n    'grasp_nerf': GraspNeRF,\n}\n"
  },
  {
    "path": "src/nr/network/vis_encoder.py",
    "content": "import torch.nn as nn\nimport torch\n\nfrom network.ops import conv3x3, ResidualBlock, conv1x1\n\nclass DefaultVisEncoder(nn.Module):\n    default_cfg={}\n    def __init__(self, cfg):\n        super().__init__()\n        self.cfg={**self.default_cfg,**cfg}\n        norm_layer = lambda dim: nn.InstanceNorm2d(dim,track_running_stats=False,affine=True)\n        self.out_conv=nn.Sequential(\n            conv3x3(64, 32),\n            ResidualBlock(32, 32, norm_layer=norm_layer),\n            ResidualBlock(32, 32, norm_layer=norm_layer),\n            conv1x1(32, 32),\n        )\n\n    def forward(self, ray_feats, imgs_feats):\n        feats = self.out_conv(torch.cat([imgs_feats, ray_feats],1))\n        return feats\n\nname2vis_encoder={\n    'default': DefaultVisEncoder,\n}"
  },
  {
    "path": "src/nr/run_training.py",
    "content": "import argparse\n\nfrom train.trainer import Trainer\nfrom utils.base_utils import load_cfg\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--cfg', type=str, default='configs/train/gen/neuray_gen_depth_train.yaml')\nflags = parser.parse_args()\n\ntrainer = Trainer(load_cfg(flags.cfg))\ntrainer.run()"
  },
  {
    "path": "src/nr/train/lr_common_manager.py",
    "content": "import abc\n\nclass LearningRateManager(abc.ABC):\n    @staticmethod\n    def set_lr_for_all(optimizer, lr):\n        for param_group in optimizer.param_groups:\n            param_group['lr'] = lr\n\n    def construct_optimizer(self, optimizer, network):\n        # may specify different lr for different parts\n        # use group to set learning rate\n        paras = network.parameters()\n        return optimizer(paras, lr=1e-3)\n\n    @abc.abstractmethod\n    def __call__(self, optimizer, step, *args, **kwargs):\n        pass\n\nclass ExpDecayLR(LearningRateManager):\n    def __init__(self,cfg):\n        self.lr_init=cfg['lr_init']\n        self.decay_step=cfg['decay_step']\n        self.decay_rate=cfg['decay_rate']\n        self.lr_min=1e-5\n\n    def __call__(self, optimizer, step, *args, **kwargs):\n        lr=max(self.lr_init*(self.decay_rate**(step//self.decay_step)),self.lr_min)\n        self.set_lr_for_all(optimizer,lr)\n        return lr\n\nclass ExpDecayLRRayFeats(ExpDecayLR):\n    def construct_optimizer(self, optimizer, network):\n        paras = network.parameters()\n        return optimizer([para for para in paras] + network.ray_feats, lr=1e-3)\n\nclass WarmUpExpDecayLR(LearningRateManager):\n    def __init__(self, cfg):\n        self.lr_warm=cfg['lr_warm']\n        self.warm_step=cfg['warm_step']\n        self.lr_init=cfg['lr_init']\n        self.decay_step=cfg['decay_step']\n        self.decay_rate=cfg['decay_rate']\n        self.lr_min=1e-5\n\n    def __call__(self, optimizer, step, *args, **kwargs):\n        if step<self.warm_step:\n            lr=self.lr_warm\n        else:\n            lr=max(self.lr_init*(self.decay_rate**((step-self.warm_step)//self.decay_step)),self.lr_min)\n        self.set_lr_for_all(optimizer,lr)\n        return lr\n\nname2lr_manager={\n    'exp_decay': ExpDecayLR,\n    'exp_decay_ray_feats': ExpDecayLRRayFeats,\n    'warm_up_exp_decay': WarmUpExpDecayLR,\n}"
  },
  {
    "path": "src/nr/train/train_tools.py",
    "content": "import datetime\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport numpy as np\nfrom torch.utils.tensorboard import SummaryWriter\nimport torch.nn as nn\n\n\ndef load_model(model, optim, model_dir, epoch=-1):\n    if not os.path.exists(model_dir):\n        return 0\n\n    pths = [int(pth.split('.')[0]) for pth in os.listdir(model_dir)]\n    if len(pths) == 0:\n        return 0\n    if epoch == -1:\n        pth = max(pths)\n    else:\n        pth = epoch\n\n    pretrained_model = torch.load(os.path.join(model_dir, '{}.pth'.format(pth)))\n    model.load_state_dict(pretrained_model['net'])\n    optim.load_state_dict(pretrained_model['optim'])\n    print('load {} epoch {}'.format(model_dir, pretrained_model['epoch'] + 1))\n    return pretrained_model['epoch'] + 1\n\ndef adjust_learning_rate(optimizer, epoch, lr_decay_rate, lr_decay_epoch, min_lr=1e-5):\n    if ((epoch + 1) % lr_decay_epoch) != 0:\n        return\n\n    for param_group in optimizer.param_groups:\n        # print(param_group)\n        lr_before = param_group['lr']\n        param_group['lr'] = param_group['lr'] * lr_decay_rate\n        param_group['lr'] = max(param_group['lr'], min_lr)\n\n    print('changing learning rate {:5f} to {:.5f}'.format(lr_before, max(param_group['lr'], min_lr)))\n\ndef reset_learning_rate(optimizer, lr):\n    for param_group in optimizer.param_groups:\n        # print(param_group)\n        # lr_before = param_group['lr']\n        param_group['lr'] = lr\n    # print('changing learning rate {:5f} to {:.5f}'.format(lr_before,lr))\n    return lr\n\ndef save_model(net, optim, epoch, model_dir):\n    os.system('mkdir -p {}'.format(model_dir))\n    torch.save({\n        'net': net.state_dict(),\n        'optim': optim.state_dict(),\n        'epoch': epoch\n    }, os.path.join(model_dir, '{}.pth'.format(epoch)))\n\nclass Recorder(object):\n    def __init__(self, rec_dir, rec_fn):\n        self.rec_dir = rec_dir\n        self.rec_fn = 
rec_fn\n        self.data = OrderedDict()\n        self.writer = SummaryWriter(log_dir=rec_dir)\n\n    def rec_loss(self, losses_batch, step, epoch, prefix='train', dump=False):\n        for k, v in losses_batch.items():\n            name = '{}/{}'.format(prefix, k)\n            if name in self.data:\n                self.data[name].append(v)\n            else:\n                self.data[name] = [v]\n\n        if dump:\n            if prefix == 'train':\n                msg = '{} epoch {} step {} '.format(prefix, epoch, step)\n            else:\n                msg = '{} epoch {} '.format(prefix, epoch)\n            for k, v in self.data.items():\n                if not k.startswith(prefix): continue\n                if len(v) > 0:\n                    msg += '{} {:.5f} '.format(k.split('/')[-1], np.mean(v))\n                    self.writer.add_scalar(k, np.mean(v), step)\n                self.data[k] = []\n\n            print(msg)\n            with open(self.rec_fn, 'a') as f:\n                f.write(msg + '\\n')\n\n    def rec_msg(self, msg):\n        print(msg)\n        with open(self.rec_fn, 'a') as f:\n            f.write(msg + '\\n')\n\n\nclass Logger:\n    def __init__(self, log_dir):\n        self.log_dir=log_dir\n        self.data = OrderedDict()\n        self.writer = SummaryWriter(log_dir=log_dir + \"/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n\n    def log(self,data, prefix='train',step=None,verbose=False):\n        msg=f'{prefix} '\n        for k, v in data.items():\n            msg += f'{k} {v:.5f} '\n            self.writer.add_scalar(f'{prefix}/{k}',v,step)\n\n        if verbose:\n            print(msg)\n        with open(os.path.join(self.log_dir,f'{prefix}.txt'), 'a') as f:\n            f.write(msg + '\\n')\n\ndef print_shape(obj):\n    if type(obj) == list or type(obj) == tuple:\n        shapes = [item.shape for item in obj]\n        print(shapes)\n    else:\n        print(obj.shape)\n\ndef overwrite_configs(cfg_base: dict, cfg: 
dict):\n    keysNotinBase = []\n    for key in cfg.keys():\n        if key in cfg_base.keys():\n            cfg_base[key] = cfg[key]\n        else:\n            keysNotinBase.append(key)\n            cfg_base.update({key: cfg[key]})\n    if len(keysNotinBase) != 0:\n        print('==== WARNING: These keys are not set in DEFAULT_BASE_CONFIG... ====')\n        print(keysNotinBase)\n    return cfg_base\n\ndef to_cuda(data):\n    if type(data)==list:\n        results = []\n        for i, item in enumerate(data):\n            results.append(to_cuda(item))\n        return results\n    elif type(data)==dict:\n        results={}\n        for k,v in data.items():\n            results[k]=to_cuda(v)\n        return results\n    elif type(data).__name__ == \"Tensor\":\n        return data.cuda()\n    else:\n        return data\n\ndef dim_extend(data_list):\n    results = []\n    for i, tensor in enumerate(data_list):\n        results.append(tensor[None,...])\n    return results\n\nclass MultiGPUWrapper(nn.Module):\n    def __init__(self,network,losses):\n        super().__init__()\n        self.network=network\n        self.losses=losses\n\n    def forward(self, data_gt):\n        results={}\n        data_pr=self.network(data_gt)\n        results.update(data_pr)\n        for loss in self.losses:\n            results.update(loss(data_pr,data_gt,data_gt['step']))\n        return results\n\nclass DummyLoss:\n    def __init__(self,losses):\n        self.keys=[]\n        for loss in losses:\n            self.keys+=loss.keys\n\n    def __call__(self, data_pr, data_gt, step):\n        return {key: data_pr[key] for key in self.keys}\n"
  },
  {
    "path": "src/nr/train/train_valid.py",
    "content": "import time\n\nimport torch\nimport numpy as np\nfrom tqdm import tqdm\n\nfrom network.metrics import name2key_metrics\nfrom train.train_tools import to_cuda\n\n\nclass ValidationEvaluator:\n    def __init__(self,cfg):\n        self.key_metric_name=cfg['key_metric_name']\n        self.key_metric=name2key_metrics[self.key_metric_name]\n\n    def __call__(self, model, losses, eval_dataset, step, model_name, val_set_name=None):\n        if val_set_name is not None: model_name=f'{model_name}-{val_set_name}'\n        model.eval()\n        eval_results={}\n        begin=time.time()\n        for data_i, data in tqdm(enumerate(eval_dataset)):\n            data = to_cuda(data)\n            data['eval']=True\n            data['step']=step\n            with torch.no_grad():\n                outputs=model(data)\n                for loss in losses:\n                    loss_results=loss(outputs, data, step, data_index=data_i, model_name=model_name, is_train=False)\n                    for k,v in loss_results.items():\n                        if type(v)==torch.Tensor:\n                            v=v.detach().cpu().numpy()\n\n                        if k in eval_results:\n                            eval_results[k].append(v)\n                        else:\n                            eval_results[k]=[v]\n\n        for k,v in eval_results.items():\n            eval_results[k]=np.concatenate(v,axis=0)\n\n        key_metric_val=self.key_metric(eval_results)\n        if key_metric_val != 1e6:\n            eval_results[self.key_metric_name + '_all'] = eval_results[self.key_metric_name]\n            eval_results[self.key_metric_name]=key_metric_val\n        print('eval cost {} s'.format(time.time()-begin))\n        return eval_results, key_metric_val\n"
  },
  {
    "path": "src/nr/train/trainer.py",
    "content": "import os\nimport random\nimport torch\nimport numpy as np\nfrom torch.nn import DataParallel\nfrom torch.optim import Adam, SGD\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nfrom dataset.name2dataset import name2dataset\nfrom network.loss import name2loss\nfrom network.renderer import name2network\nfrom train.lr_common_manager import name2lr_manager\nfrom network.metrics import name2metrics\nfrom train.train_tools import to_cuda, Logger, reset_learning_rate, MultiGPUWrapper, DummyLoss\nfrom train.train_valid import ValidationEvaluator\nfrom utils.dataset_utils import simple_collate_fn, dummy_collate_fn\nfrom asset import vgn_val_scene_names\n\nclass Trainer:\n    default_cfg={\n        \"optimizer_type\": 'adam',\n        \"multi_gpus\": False,\n        \"lr_type\": \"exp_decay\",\n        \"lr_cfg\":{\n            \"lr_init\": 1.0e-4,\n            \"decay_step\": 100000,\n            \"decay_rate\": 0.5,\n        },\n        \"total_step\": 300000,\n        \"train_log_step\": 20,\n        \"val_interval\": 10000,\n        \"save_interval\": 1000,\n        \"worker_num\": 8,\n        \"fix_seed\": False\n    }\n    def _init_dataset(self):\n        self.train_set=name2dataset[self.cfg['train_dataset_type']](self.cfg['train_dataset_cfg'], True)\n        self.train_set=DataLoader(self.train_set,1,True,num_workers=self.cfg['worker_num'],collate_fn=dummy_collate_fn)\n        print(f'train set len {len(self.train_set)}')\n        self.val_set_list, self.val_set_names = [], []\n        for val_set_cfg in self.cfg['val_set_list']:\n            name, val_type, val_cfg = val_set_cfg['name'], val_set_cfg['type'], val_set_cfg['cfg']\n            if 'val_scene_num' in val_set_cfg:\n                num = val_set_cfg['val_scene_num']\n                num = len(vgn_val_scene_names) if num == -1 else num\n                names, val_types = [name] * num, [val_type] * num\n                val_cfgs = []\n                for i in range(num):\n    
                val_cfgs.append({**val_cfg, **{'val_database_name': vgn_val_scene_names[i]}})\n            else:\n                names, val_types, val_cfgs = [name], [val_type], [val_cfg]\n            for name, val_type, val_cfg in zip(names, val_types, val_cfgs):\n                val_set = name2dataset[val_type](val_cfg, False)\n                val_set = DataLoader(val_set,1,False,num_workers=self.cfg['worker_num'],collate_fn=dummy_collate_fn)\n                self.val_set_list.append(val_set)\n                self.val_set_names.append(name)\n        print(f\"val set num: {len(self.val_set_list)}\")\n\n    def _init_network(self):\n        self.network=name2network[self.cfg['network']](self.cfg).cuda()\n\n        # loss\n        self.val_losses = []\n        for loss_name in self.cfg['loss']:\n            self.val_losses.append(name2loss[loss_name](self.cfg))\n        self.val_metrics = []\n\n        # metrics\n        for metric_name in self.cfg['val_metric']:\n            if metric_name in name2metrics:\n                self.val_metrics.append(name2metrics[metric_name](self.cfg))\n            else:\n                self.val_metrics.append(name2loss[metric_name](self.cfg))\n\n        # we do not support multi gpu training for NeuRay\n        if self.cfg['multi_gpus']:\n            raise NotImplementedError\n            # make multi gpu network\n            # self.train_network=DataParallel(MultiGPUWrapper(self.network,self.val_losses))\n            # self.train_losses=[DummyLoss(self.val_losses)]\n        else:\n            self.train_network=self.network\n            self.train_losses=self.val_losses\n\n        if self.cfg['optimizer_type']=='adam':\n            self.optimizer = Adam\n        elif self.cfg['optimizer_type']=='sgd':\n            self.optimizer = SGD\n        else:\n            raise NotImplementedError\n\n        self.val_evaluator=ValidationEvaluator(self.cfg)\n        self.lr_manager=name2lr_manager[self.cfg['lr_type']](self.cfg['lr_cfg'])\n        
self.optimizer=self.lr_manager.construct_optimizer(self.optimizer,self.network)\n\n    def __init__(self,cfg):\n        self.cfg={**self.default_cfg,**cfg}\n        if self.cfg['fix_seed']:\n            seed = 0\n            torch.manual_seed(seed)\n            np.random.seed(seed)\n            random.seed(seed)\n            torch.cuda.manual_seed_all(seed)\n            os.environ['PYTHONHASHSEED'] = str(seed)\n            print(\"fix seed\")\n        self.model_name=cfg['name']\n        self.model_dir=os.path.join('data/model',cfg['group_name'],cfg['name'])\n        if not os.path.exists(self.model_dir): os.makedirs(self.model_dir)\n        self.pth_fn=os.path.join(self.model_dir,'model.pth')\n        self.best_pth_fn=os.path.join(self.model_dir,'model_best.pth')\n        assert self.cfg[\"key_metric_prefer\"] in ['higher', 'lower']\n        self.better = lambda x, y: x > y if self.cfg[\"key_metric_prefer\"] == 'higher'  else x < y\n\n    def run(self):\n        self._init_dataset()\n        self._init_network()\n        self._init_logger()\n\n        best_para,start_step=self._load_model()\n        if self.cfg[\"key_metric_prefer\"] == 'lower' and start_step == 0:\n            best_para = 1e6\n        train_iter=iter(self.train_set)\n\n        pbar=tqdm(total=self.cfg['total_step'],bar_format='{r_bar}')\n        pbar.update(start_step)\n        for step in range(start_step,self.cfg['total_step']):\n            try:\n                train_data = next(train_iter)\n            except StopIteration:\n                self.train_set.dataset.reset()\n                train_iter = iter(self.train_set)\n                train_data = next(train_iter)\n            if not self.cfg['multi_gpus']:\n                train_data = to_cuda(train_data)\n            train_data['step']=step\n\n            self.train_network.train()\n            self.network.train()\n            lr = self.lr_manager(self.optimizer, step)\n\n            self.optimizer.zero_grad()\n            
self.train_network.zero_grad()\n\n            log_info={}\n            outputs=self.train_network(train_data)\n            for loss in self.train_losses:\n                loss_results = loss(outputs,train_data,step)\n                for k,v in loss_results.items():\n                    log_info[k]=v\n\n            loss=0\n            for k,v in log_info.items():\n                if k.startswith('loss'):\n                    loss=loss+torch.mean(v)\n\n            loss.backward()\n            self.optimizer.step()\n            if ((step+1) % self.cfg['train_log_step']) == 0:\n                self._log_data(log_info,step+1,'train')\n\n            if step==0 or (step+1)%self.cfg['val_interval']==0 or (step+1)==self.cfg['total_step']:\n                torch.cuda.empty_cache()\n                val_results={}\n                val_para = 0\n                for vi, val_set in enumerate(self.val_set_list):\n                    val_results_cur, val_para_cur = self.val_evaluator(\n                        self.network, self.val_losses + self.val_metrics, val_set, step,\n                        self.model_name, val_set_name=self.val_set_names[vi])\n                    for k,v in val_results_cur.items():\n                        key = f'{self.val_set_names[vi]}-{k}'\n                        if not key in val_results:\n                            val_results[key] = v\n                        else:\n                            val_results[key] += v\n                    val_para += val_para_cur\n\n                # average all items \n                for k,v in val_results.items():\n                    val_results[k] /= len(self.val_set_list)\n                val_para /= len(self.val_set_list)\n\n                if step and self.better(val_para, best_para): # do not save the first step\n                    print(f'New best model {self.cfg[\"key_metric_name\"]}: {val_para:.5f} previous {best_para:.5f}')\n                    best_para=val_para\n                    
self._save_model(step+1,best_para,self.best_pth_fn)\n                self._log_data(val_results,step+1,'val')\n                del val_results, val_para, val_para_cur, val_results_cur\n\n            if (step+1)%self.cfg['save_interval']==0:\n                self._save_model(step+1,best_para)\n\n            pbar.set_postfix(loss=float(loss.detach().cpu().numpy()),lr=lr)\n            pbar.update(1)\n            del loss, log_info\n\n        pbar.close()\n\n    def _load_model(self):\n        best_para,start_step=0,0\n        if os.path.exists(self.pth_fn):\n            checkpoint=torch.load(self.pth_fn)\n            best_para = checkpoint['best_para']\n            start_step = checkpoint['step']\n            self.network.load_state_dict(checkpoint['network_state_dict'])\n            self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\n            print(f'==> resuming from the latest {self.pth_fn} of step {start_step} with best metric {best_para}')\n\n        return best_para, start_step\n\n    def _save_model(self, step, best_para, save_fn=None):\n        save_fn = self.pth_fn if save_fn is None else save_fn\n        torch.save({\n            'step':step,\n            'best_para':best_para,\n            'network_state_dict': self.network.state_dict(),\n            'optimizer_state_dict': self.optimizer.state_dict(),\n        },save_fn)\n\n    def _init_logger(self):\n        self.logger = Logger(self.model_dir)\n\n    def _log_data(self,results,step,prefix='train',verbose=False):\n        log_results={}\n        for k, v in results.items():\n            if isinstance(v,float) or np.isscalar(v):\n                log_results[k] = v\n            elif type(v)==np.ndarray:\n                log_results[k]=np.mean(v)\n            else:\n                log_results[k]=np.mean(v.detach().cpu().numpy())\n        self.logger.log(log_results,prefix,step,verbose)\n"
  },
  {
    "path": "src/nr/utils/base_utils.py",
    "content": "import math\nimport os\n\nimport cv2\nimport h5py\nimport torch\n\nimport numpy as np\nimport pickle\n\nimport yaml\nfrom plyfile import PlyData\nfrom skimage.io import imread\n\n#######################io#########################################\nfrom transforms3d.axangles import mat2axangle\nfrom transforms3d.euler import euler2mat\n\n\ndef read_pickle(pkl_path):\n    with open(pkl_path, 'rb') as f:\n        return pickle.load(f)\n\n\ndef save_pickle(data, pkl_path):\n    os.system('mkdir -p {}'.format(os.path.dirname(pkl_path)))\n    with open(pkl_path, 'wb') as f:\n        pickle.dump(data, f)\n\n\n#####################depth and image###############################\n\ndef mask_zbuffer_to_pts(mask, zbuffer, K):\n    ys, xs = np.nonzero(mask)\n    zbuffer = zbuffer[ys, xs]\n    u, v, f = K[0, 2], K[1, 2], K[0, 0]\n    depth = zbuffer / np.sqrt((xs - u + 0.5) ** 2 + (ys - v + 0.5) ** 2 + f ** 2) * f\n\n    pts = np.asarray([xs, ys, depth], np.float32).transpose()\n    pts[:, :2] *= pts[:, 2:]\n    return np.dot(pts, np.linalg.inv(K).transpose())\n\n\ndef mask_depth_to_pts(mask, depth, K, rgb=None):\n    hs, ws = np.nonzero(mask)\n    depth = depth[hs, ws]\n    pts = np.asarray([ws, hs, depth], np.float32).transpose()\n    pts[:, :2] *= pts[:, 2:]\n    if rgb is not None:\n        return np.dot(pts, np.linalg.inv(K).transpose()), rgb[hs, ws]\n    else:\n        return np.dot(pts, np.linalg.inv(K).transpose())\n\n\ndef read_render_zbuffer(dpt_pth, max_depth, min_depth):\n    zbuffer = imread(dpt_pth)\n    mask = (zbuffer > 0) & (zbuffer < 5000)\n    zbuffer = zbuffer.astype(np.float64) / 2 ** 16 * (max_depth - min_depth) + min_depth\n    return mask, zbuffer\n\n\ndef zbuffer_to_depth(zbuffer, K):\n    u, v, f = K[0, 2], K[1, 2], K[0, 0]\n    x = np.arange(zbuffer.shape[1])\n    y = np.arange(zbuffer.shape[0])\n    x, y = np.meshgrid(x, y)\n    x = np.reshape(x, [-1, 1])\n    y = np.reshape(y, [-1, 1])\n    depth = np.reshape(zbuffer, [-1, 1])\n\n    
depth = depth / np.sqrt((x - u + 0.5) ** 2 + (y - v + 0.5) ** 2 + f ** 2) * f\n    return np.reshape(depth, zbuffer.shape)\n\n\ndef project_points(pts, RT, K):\n    pts = np.matmul(pts, RT[:, :3].transpose()) + RT[:, 3:].transpose()\n    pts = np.matmul(pts, K.transpose())\n    dpt = pts[:, 2]\n    # clamp near-zero depths away from zero before dividing\n    mask0 = (np.abs(dpt) < 1e-4) & (dpt > 0)\n    if np.sum(mask0) > 0: dpt[mask0] = 1e-4\n    mask1 = (np.abs(dpt) < 1e-4) & (dpt < 0)\n    if np.sum(mask1) > 0: dpt[mask1] = -1e-4\n    pts2d = pts[:, :2] / dpt[:, None]\n    return pts2d, dpt\n\n\n#######################image processing#############################\n\ndef grey_repeats(img_raw):\n    if len(img_raw.shape) == 2: img_raw = np.repeat(img_raw[:, :, None], 3, axis=2)\n    if img_raw.shape[2] > 3: img_raw = img_raw[:, :, :3]\n    return img_raw\n\n\ndef normalize_image(img, mask=None):\n    if mask is not None: img[np.logical_not(mask.astype(bool))] = 127\n    img = (img.transpose([2, 0, 1]).astype(np.float32) - 127.0) / 128.0\n    return torch.tensor(img, dtype=torch.float32)\n\n\ndef tensor_to_image(tensor):\n    return (tensor * 128 + 127).astype(np.uint8).transpose(1, 2, 0)\n\n\ndef equal_hist(img):\n    if len(img.shape) == 3:\n        img0 = cv2.equalizeHist(img[:, :, 0])\n        img1 = cv2.equalizeHist(img[:, :, 1])\n        img2 = cv2.equalizeHist(img[:, :, 2])\n        img = np.concatenate([img0[..., None], img1[..., None], img2[..., None]], 2)\n    else:\n        img = cv2.equalizeHist(img)\n    return img\n\n\ndef resize_large_image(img, resize_max):\n    h, w = img.shape[:2]\n    max_side = max(h, w)\n    if max_side > resize_max:\n        ratio = resize_max / max_side\n        if ratio <= 0.5: img = cv2.GaussianBlur(img, (5, 5), 1.5)\n        img = cv2.resize(img, (int(round(ratio * w)), int(round(ratio * h))), interpolation=cv2.INTER_LINEAR)\n        return img, ratio\n    else:\n        return img, 1.0\n\n\ndef downsample_gaussian_blur(img, ratio):\n    sigma = (1 / ratio) / 3\n 
   # ksize=np.ceil(2*sigma)\n    ksize = int(np.ceil(((sigma - 0.8) / 0.3 + 1) * 2 + 1))\n    ksize = ksize + 1 if ksize % 2 == 0 else ksize\n    img = cv2.GaussianBlur(img, (ksize, ksize), sigma, borderType=cv2.BORDER_REFLECT101)\n    return img\n\n\ndef resize_small_image(img, resize_min):\n    h, w = img.shape[:2]\n    min_side = min(h, w)\n    if min_side < resize_min:\n        ratio = resize_min / min_side\n        img = cv2.resize(img, (int(round(ratio * w)), int(round(ratio * h))), interpolation=cv2.INTER_LINEAR)\n        return img, ratio\n    else:\n        return img, 1.0\n\n\n############################geometry######################################\ndef round_coordinates(coord, h, w):\n    coord = np.round(coord).astype(np.int32)\n    coord[coord[:, 0] < 0, 0] = 0\n    coord[coord[:, 0] >= w, 0] = w - 1\n    coord[coord[:, 1] < 0, 1] = 0\n    coord[coord[:, 1] >= h, 1] = h - 1\n    return coord\n\n\ndef get_img_patch(img, pt, size):\n    if isinstance(size, list) or isinstance(size, tuple) or isinstance(size, np.ndarray):\n        size_h, size_w = size\n    else:\n        size_h, size_w = size, size\n    h, w = img.shape[:2]\n    x, y = pt.astype(np.int32)\n    xmin = max(0, x - size_w)\n    xmax = min(w - 1, x + size_w)\n    ymin = max(0, y - size_h)\n    ymax = min(h - 1, y + size_h)\n    patch = np.full([size_h * 2, size_w * 2, 3], 127, np.uint8)\n    patch[ymin - y + size_h:ymax - y + size_h, xmin - x + size_w:xmax - x + size_w] = img[ymin:ymax, xmin:xmax]\n    return patch\n\n\ndef perspective_transform(pts, H):\n    tpts = np.concatenate([pts, np.ones([pts.shape[0], 1])], 1) @ H.transpose()\n    tpts = tpts[:, :2] / np.abs(tpts[:, 2:])  # todo: why only abs? 
this one is correct\n    return tpts\n\n\ndef get_rot_m(angle):\n    return np.asarray([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]], np.float32)  # rn+1,3,3\n\n\ndef get_rot_m_batch(angle):\n    return np.asarray([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]], np.float32).transpose(\n        [2, 0, 1])\n\n\ndef compute_F(K1, K2, R, t):\n    \"\"\"\n\n    :param K1: [3,3]\n    :param K2: [3,3]\n    :param R:  [3,3]\n    :param t:  [3,1]\n    :return:\n    \"\"\"\n    A = K1 @ R.T @ t  # [3,1]\n    C = np.asarray([[0, -A[2, 0], A[1, 0]],\n                    [A[2, 0], 0, -A[0, 0]],\n                    [-A[1, 0], A[0, 0], 0]])\n    F = (np.linalg.inv(K2)).T @ R @ K1.T @ C\n    return F\n\n\ndef compute_relative_transformation(Rt0, Rt1):\n    \"\"\"\n    x1=Rx0+t\n    :param Rt0: x0=R0x+t0\n    :param Rt1: x1=R1x+t1\n    :return:\n        R1R0.T(x0-t0)+t1\n    \"\"\"\n    R = Rt1[:, :3] @ Rt0[:, :3].T\n    t = Rt1[:, 3] - R @ Rt0[:, 3]\n    return np.concatenate([R, t[:, None]], 1)\n\n\ndef compute_angle(rotation_diff):\n    trace = np.trace(rotation_diff)\n    trace = trace if trace <= 3 else 3\n    angular_distance = np.rad2deg(np.arccos((trace - 1.) 
/ 2.))\n    return angular_distance\n\n\ndef load_h5(filename):\n    dict_to_load = {}\n    with h5py.File(filename, 'r') as f:\n        keys = [key for key in f.keys()]\n        for key in keys:\n            dict_to_load[key] = f[key][()]  # .value\n    return dict_to_load\n\n\ndef save_h5(dict_to_save, filename):\n    with h5py.File(filename, 'w') as f:\n        for key in dict_to_save:\n            f.create_dataset(key, data=dict_to_save[key])\n\n\ndef pts_to_hpts(pts):\n    return np.concatenate([pts, np.ones([pts.shape[0], 1])], 1)\n\n\ndef hpts_to_pts(hpts):\n    return hpts[:, :-1] / hpts[:, -1:]\n\n\ndef np_skew_symmetric(v):\n    M = np.asarray([\n        [0, -v[2], v[1], ],\n        [v[2], 0, -v[0], ],\n        [-v[1], v[0], 0, ],\n    ])\n\n    return M\n\n\ndef point_line_dist(hpts, lines):\n    \"\"\"\n    :param hpts: n,3 or n,2\n    :param lines: n,3\n    :return:\n    \"\"\"\n    if hpts.shape[1] == 2:\n        hpts = np.concatenate([hpts, np.ones([hpts.shape[0], 1])], 1)\n    return np.abs(np.sum(hpts * lines, 1)) / np.linalg.norm(lines[:, :2], 2, 1)\n\n\ndef epipolar_distance(x0, x1, F):\n    \"\"\"\n\n    :param x0: [n,2]\n    :param x1: [n,2]\n    :param F:  [3,3]\n    :return:\n    \"\"\"\n\n    hkps0 = np.concatenate([x0, np.ones([x0.shape[0], 1])], 1)\n    hkps1 = np.concatenate([x1, np.ones([x1.shape[0], 1])], 1)\n\n    lines1 = hkps0 @ F.T\n    lines0 = hkps1 @ F\n\n    dist10 = point_line_dist(hkps0, lines0)\n    dist01 = point_line_dist(hkps1, lines1)\n\n    return dist10, dist01\n\n\ndef epipolar_distance_mean(x0, x1, F):\n    return np.mean(np.stack(epipolar_distance(x0, x1, F), 1), 1)\n\n\ndef compute_dR_dt(R0, t0, R1, t1):\n    # Compute dR, dt\n    dR = np.dot(R1, R0.T)\n    dt = t1 - np.dot(dR, t0)\n    return dR, dt\n\n\ndef compute_precision_recall_np(pr, gt, eps=1e-5):\n    tp = np.sum(gt & pr)\n    fp = np.sum((~gt) & pr)\n    fn = np.sum(gt & (~pr))\n    precision = (tp + eps) / (fp + tp + eps)\n    recall = (tp + eps) / (tp + 
fn + eps)\n    if precision < 1e-3 or recall < 1e-3:\n        f1 = 0.0\n    else:\n        f1 = (2 * precision * recall + eps) / (precision + recall + eps)\n\n    return precision, recall, f1\n\n\ndef load_cfg(path):\n    with open(path, 'r') as f:\n        return yaml.load(f, Loader=yaml.FullLoader)\n\n\ndef get_stem(path, suffix_len=5):\n    return os.path.basename(path)[:-suffix_len]\n\n\ndef load_component(component_func, component_cfg_fn):\n    component_cfg = load_cfg(component_cfg_fn)\n    return component_func[component_cfg['type']](component_cfg)\n\n\ndef interpolate_image_points(img, pts, interpolation=cv2.INTER_LINEAR):\n    # img [h,w,k] pts [n,2]\n    if len(pts) < 32767:\n        pts = pts.astype(np.float32)\n        return cv2.remap(img, pts[:, None, 0], pts[:, None, 1], borderMode=cv2.BORDER_CONSTANT, borderValue=0,\n                         interpolation=interpolation)[:, 0]\n        # pn=len(pts)\n        # sl=int(np.ceil(np.sqrt(pn)))\n        # tmp_img=np.zeros([sl*sl,2],np.float32)\n        # tmp_img[:pn]=pts\n        # tmp_img=tmp_img.reshape([sl,sl,2])\n        # tmp_img=cv2.remap(img,tmp_img[:,:,0],tmp_img[:,:,1],borderMode=cv2.BORDER_CONSTANT,borderValue=0,interpolation=interpolation)\n        # return tmp_img.flatten()[:pn]\n    else:\n        results = []\n        for k in range(0, len(pts), 30000):\n            results.append(interpolate_image_points(img, pts[k:k + 30000], interpolation))\n        return np.concatenate(results, 0)\n\n\ndef transform_points_Rt(pts, R, t):\n    t = t.flatten()\n    return pts @ R.T + t[None, :]\n\n\ndef transform_points_pose(pts, pose):\n    R, t = pose[:, :3], pose[:, 3]\n    return pts @ R.T + t[None, :]\n\n\ndef quaternion_from_matrix(matrix, isprecise=False):\n    '''Return quaternion from rotation matrix.\n\n    If isprecise is True, the input matrix is assumed to be a precise rotation\n    matrix and a faster algorithm is used.\n\n    >>> q = quaternion_from_matrix(numpy.identity(4), True)\n    >>> 
numpy.allclose(q, [1, 0, 0, 0])\n    True\n    >>> q = quaternion_from_matrix(numpy.diag([1, -1, -1, 1]))\n    >>> numpy.allclose(q, [0, 1, 0, 0]) or numpy.allclose(q, [0, -1, 0, 0])\n    True\n    >>> R = rotation_matrix(0.123, (1, 2, 3))\n    >>> q = quaternion_from_matrix(R, True)\n    >>> numpy.allclose(q, [0.9981095, 0.0164262, 0.0328524, 0.0492786])\n    True\n    >>> R = [[-0.545, 0.797, 0.260, 0], [0.733, 0.603, -0.313, 0],\n    ...      [-0.407, 0.021, -0.913, 0], [0, 0, 0, 1]]\n    >>> q = quaternion_from_matrix(R)\n    >>> numpy.allclose(q, [0.19069, 0.43736, 0.87485, -0.083611])\n    True\n    >>> R = [[0.395, 0.362, 0.843, 0], [-0.626, 0.796, -0.056, 0],\n    ...      [-0.677, -0.498, 0.529, 0], [0, 0, 0, 1]]\n    >>> q = quaternion_from_matrix(R)\n    >>> numpy.allclose(q, [0.82336615, -0.13610694, 0.46344705, -0.29792603])\n    True\n    >>> R = random_rotation_matrix()\n    >>> q = quaternion_from_matrix(R)\n    >>> is_same_transform(R, quaternion_matrix(q))\n    True\n    >>> R = euler_matrix(0.0, 0.0, numpy.pi/2.0)\n    >>> numpy.allclose(quaternion_from_matrix(R, isprecise=False),\n    ...                
quaternion_from_matrix(R, isprecise=True))\n    True\n\n    '''\n\n    M = np.array(matrix, dtype=np.float64, copy=False)[:4, :4]\n    if isprecise:\n        q = np.empty((4,))\n        t = np.trace(M)\n        if t > M[3, 3]:\n            q[0] = t\n            q[3] = M[1, 0] - M[0, 1]\n            q[2] = M[0, 2] - M[2, 0]\n            q[1] = M[2, 1] - M[1, 2]\n        else:\n            i, j, k = 1, 2, 3\n            if M[1, 1] > M[0, 0]:\n                i, j, k = 2, 3, 1\n            if M[2, 2] > M[i, i]:\n                i, j, k = 3, 1, 2\n            t = M[i, i] - (M[j, j] + M[k, k]) + M[3, 3]\n            q[i] = t\n            q[j] = M[i, j] + M[j, i]\n            q[k] = M[k, i] + M[i, k]\n            q[3] = M[k, j] - M[j, k]\n        q *= 0.5 / math.sqrt(t * M[3, 3])\n    else:\n        m00 = M[0, 0]\n        m01 = M[0, 1]\n        m02 = M[0, 2]\n        m10 = M[1, 0]\n        m11 = M[1, 1]\n        m12 = M[1, 2]\n        m20 = M[2, 0]\n        m21 = M[2, 1]\n        m22 = M[2, 2]\n\n        # symmetric matrix K\n        K = np.array([[m00 - m11 - m22, 0.0, 0.0, 0.0],\n                      [m01 + m10, m11 - m00 - m22, 0.0, 0.0],\n                      [m02 + m20, m12 + m21, m22 - m00 - m11, 0.0],\n                      [m21 - m12, m02 - m20, m10 - m01, m00 + m11 + m22]])\n        K /= 3.0\n\n        # quaternion is eigenvector of K that corresponds to largest eigenvalue\n        w, V = np.linalg.eigh(K)\n        q = V[[3, 0, 1, 2], np.argmax(w)]\n\n    if q[0] < 0.0:\n        np.negative(q, q)\n\n    return q\n\n\ndef compute_rotation_angle_diff(R_gt, R):\n    eps = 1e-15\n    q_gt = quaternion_from_matrix(R_gt)\n    q = quaternion_from_matrix(R)\n    q = q / (np.linalg.norm(q) + eps)\n    q_gt = q_gt / (np.linalg.norm(q_gt) + eps)\n    loss_q = np.maximum(eps, (1.0 - np.sum(q * q_gt) ** 2))\n    err_q = np.arccos(1 - 2 * loss_q)\n    return np.rad2deg(np.abs(err_q))\n\n\ndef compute_translation_angle_diff(t_gt, t):\n    eps = 1e-15\n    t = t / 
(np.linalg.norm(t) + eps)\n    t_gt = t_gt / (np.linalg.norm(t_gt) + eps)\n    loss_t = np.maximum(eps, (1.0 - np.sum(t * t_gt) ** 2))\n    err_t = np.arccos(np.sqrt(1 - loss_t))\n    return np.rad2deg(np.abs(err_t))\n\n\ndef bbox2corners(bbox):\n    return np.asarray([\n        [bbox[0], bbox[1]],\n        [bbox[0] + bbox[2], bbox[1]],\n        [bbox[0] + bbox[2], bbox[1] + bbox[3]],\n        [bbox[0], bbox[1] + bbox[3]],\n    ])\n\n\ndef get_identity_pose():\n    return np.concatenate([np.identity(3), np.zeros([3, 1])], 1).astype(np.float32)\n\n\ndef angular_difference(R0, R1):\n    return np.rad2deg(mat2axangle(R0 @ R1.T)[1])\n\n\ndef load_ply_model(model_path):\n    ply = PlyData.read(model_path)\n    data = ply.elements[0].data\n    x = data['x']\n    y = data['y']\n    z = data['z']\n    return np.stack([x, y, z], axis=-1)\n\n\ndef color_map_forward(rgb):\n    return rgb.astype(np.float32) / 255\n\n\ndef color_map_backward(rgb):\n    rgb = rgb * 255\n    rgb = np.clip(rgb, a_min=0, a_max=255).astype(np.uint8)\n    return rgb\n\n\ndef rotate_image(rot, pose, K, img, mask):\n    if isinstance(rot, np.ndarray):\n        R = rot\n    else:\n        R = np.array([[np.cos(rot), -np.sin(rot), 0.0],\n                      [np.sin(rot), np.cos(rot), 0.0],\n                      [0, 0, 1]], dtype=np.float32)\n\n    # adjust pose\n    pose_adj = np.copy(pose)\n    pose_adj[:, :3] = R @ pose_adj[:, :3]\n    pose_adj[:, 3:] = R @ pose_adj[:, 3:]\n\n    # adjust image\n    transform = K @ R @ np.linalg.inv(K)  # transform original\n    h, w, _ = img.shape\n\n    ys, xs = np.nonzero(mask)\n    coords = np.stack([xs, ys], -1).astype(np.float32)\n    coords_new = cv2.perspectiveTransform(coords[:, None, :], transform)[:, 0, :]\n    x_min, y_min = np.floor(np.min(coords_new, 0)).astype(np.int32)\n    x_max, y_max = np.ceil(np.max(coords_new, 0)).astype(np.int32)\n    th, tw = y_max - y_min, x_max - x_min\n    translation = np.identity(3)\n    translation[0, 2] = -x_min\n    
translation[1, 2] = -y_min\n    K = translation @ K\n\n    transform = translation @ transform\n    img = cv2.warpPerspective(img, transform, (tw, th), flags=cv2.INTER_LINEAR)\n    return img, pose_adj, K\n\n\ndef resize_img(img, ratio):\n    # if ratio>=1.0: return img\n    h, w, _ = img.shape\n    hn, wn = int(np.round(h * ratio)), int(np.round(w * ratio))\n    img_out = cv2.resize(downsample_gaussian_blur(img, ratio), (wn, hn), cv2.INTER_LINEAR)\n    return img_out\n\n\ndef pad_img(img, padding_interval=8):\n    h, w = img.shape[:2]\n    hp = (padding_interval - (h % padding_interval)) % padding_interval\n    wp = (padding_interval - (w % padding_interval)) % padding_interval\n    if hp != 0 or wp != 0:\n        img = np.pad(img, ((0, hp), (0, wp), (0, 0)), 'edge')\n    return img\n\n\ndef pad_img_end(img, th, tw, padding_mode='edge', constant_values=0):\n    h, w = img.shape[:2]\n    hp = th - h\n    wp = tw - w\n    if hp != 0 or wp != 0:\n        if padding_mode == 'constant':\n            img = np.pad(img, ((0, hp), (0, wp), (0, 0)), padding_mode, constant_values=constant_values)\n        else:\n            img = np.pad(img, ((0, hp), (0, wp), (0, 0)), padding_mode)\n    return img\n\n\ndef pad_img_target(img, th, tw, K=np.eye(3), background_color=0):\n    h, w = img.shape[:2]\n    hp = th - h\n    wp = tw - w\n    if hp != 0 or wp != 0:\n        if len(img.shape) == 3:\n            img = np.pad(img, ((hp // 2, hp - hp // 2), (wp // 2, wp - wp // 2), (0, 0)), 'constant',\n                         constant_values=background_color)\n        elif len(img.shape) == 2:\n            img = np.pad(img, ((hp // 2, hp - hp // 2), (wp // 2, wp - wp // 2)), 'constant',\n                         constant_values=background_color)\n        else:\n            print(f'image shape unknown {img.shape}')\n            raise NotImplementedError\n        translation = np.identity(3)\n        translation[0, 2] = wp // 2\n        translation[1, 2] = hp // 2\n        K = translation 
@ K\n    return img, K\n\n\ndef get_coords_mask(que_mask, train_ray_num, foreground_ratio):\n    min_pos_num = int(train_ray_num * foreground_ratio)\n    y0, x0 = np.nonzero(que_mask)\n    y1, x1 = np.nonzero(~que_mask)\n    xy0 = np.stack([x0, y0], 1).astype(np.float32)\n    xy1 = np.stack([x1, y1], 1).astype(np.float32)\n    idx = np.arange(xy0.shape[0])\n    np.random.shuffle(idx)\n    xy0 = xy0[idx]\n    coords0 = xy0[:min_pos_num]\n    # still remain pixels\n    if min_pos_num < train_ray_num:\n        xy1 = np.concatenate([xy1, xy0[min_pos_num:]], 0)\n        idx = np.arange(xy1.shape[0])\n        np.random.shuffle(idx)\n        coords1 = xy1[idx[:(train_ray_num - min_pos_num)]]\n        coords = np.concatenate([coords0, coords1], 0)\n    else:\n        coords = coords0\n    return coords\n\n\ndef get_inverse_depth(depth_range, depth_num):\n    near, far = depth_range\n    interval = (1 / far - 1 / near) / (depth_num - 1)\n    ticks = np.arange(1, depth_num - 1)\n    ticks = 1 / (1 / near + ticks * interval)\n    return np.concatenate([np.asarray([near]).reshape([1]), ticks, np.asarray(far).reshape([1])], 0)\n\n\ndef pose_inverse(pose):\n    R = pose[:, :3].T\n    t = - R @ pose[:, 3:]\n    return np.concatenate([R, t], -1)\n\n\ndef pose_compose(pose0, pose1):\n    \"\"\"\n    apply pose0 first, then pose1\n    :param pose0:\n    :param pose1:\n    :return:\n    \"\"\"\n    t = pose1[:, :3] @ pose0[:, 3:] + pose1[:, 3:]\n    R = pose1[:, :3] @ pose0[:, :3]\n    return np.concatenate([R, t], 1)\n\n\ndef make_dir(dir):\n    if not os.path.exists(dir):\n        os.system(f'mkdir -p {dir}')\n\n\ndef to_cuda(data):\n    if type(data) == list:\n        results = []\n        for i, item in enumerate(data):\n            results.append(to_cuda(item))\n        return results\n    elif type(data) == dict:\n        results = {}\n        for k, v in data.items():\n            results[k] = to_cuda(v)\n        return results\n    elif type(data).__name__ == \"Tensor\" or 
type(data).__name__==\"Parameter\":\n        return data.cuda()\n    else:\n        return data\n\n\ndef to_cpu_numpy(data):\n    if type(data) == list:\n        results = []\n        for i, item in enumerate(data):\n            results.append(to_cpu_numpy(item))\n        return results\n    elif type(data) == dict:\n        results = {}\n        for k, v in data.items():\n            results[k] = to_cpu_numpy(v)\n        return results\n    elif type(data).__name__ == \"Tensor\" or type(data).__name__==\"Parameter\":\n        return data.detach().cpu().numpy()\n    else:\n        return data\n\n\ndef sample_fps_points(points, sample_num, init_center=True, index_model=False, init_first=False, init_first_index=0,\n                      init_point=None):\n    sample_num = min(points.shape[0], sample_num)\n    output_index = []\n    if init_point is None:\n        if init_center:\n            init_point = np.mean(points, 0)\n        else:\n            if init_first:\n                init_index = init_first_index\n            else:\n                init_index = np.random.randint(0, points.shape[0])\n            init_point = points[init_index]\n            output_index.append(init_index)\n\n    output_points = [init_point]\n    cur_point = init_point\n    distance = np.full(points.shape[0], 1e8)\n    for k in range(sample_num - 1):\n        cur_distance = np.linalg.norm(cur_point[None, :] - points, 2, 1)\n        distance = np.min(np.stack([cur_distance, distance], 1), 1)\n        cur_index = np.argmax(distance)\n        cur_point = points[cur_index]\n        output_points.append(cur_point)\n        output_index.append(cur_index)\n\n    if index_model:\n        return np.asarray(output_index)\n    else:\n        return np.asarray(output_points)\n\n\ndef pnp(points_3d, points_2d, camera_matrix, method=cv2.SOLVEPNP_ITERATIVE):\n    dist_coeffs = np.zeros(shape=[8, 1], dtype='float64')\n\n    assert points_3d.shape[0] == points_2d.shape[0], 'points 3D and points 2D must 
have same number of vertices'\n    if method == cv2.SOLVEPNP_EPNP:\n        points_3d = np.expand_dims(points_3d, 0)\n        points_2d = np.expand_dims(points_2d, 0)\n\n    points_2d = np.ascontiguousarray(points_2d.astype(np.float64))\n    points_3d = np.ascontiguousarray(points_3d.astype(np.float64))\n    camera_matrix = camera_matrix.astype(np.float64)\n    _, R_exp, t = cv2.solvePnP(points_3d,\n                               points_2d,\n                               camera_matrix,\n                               dist_coeffs,\n                               flags=method)\n\n    R, _ = cv2.Rodrigues(R_exp)\n    return np.concatenate([R, t], axis=-1)\n\n\ndef triangulate(kps0, kps1, pose0, pose1, K0, K1):\n    kps0_ = hpts_to_pts(pts_to_hpts(kps0) @ np.linalg.inv(K0).T)\n    kps1_ = hpts_to_pts(pts_to_hpts(kps1) @ np.linalg.inv(K1).T)\n    pts3d = cv2.triangulatePoints(pose0.astype(np.float64), pose1.astype(np.float64),\n                                  kps0_.T.astype(np.float64), kps1_.T.astype(np.float64)).T\n    pts3d = pts3d[:, :3] / pts3d[:, 3:]\n    return pts3d\n\n\ndef transformation_compose_2d(trans0, trans1):\n    \"\"\"\n    @param trans0: [2,3]\n    @param trans1: [2,3]\n    @return: apply trans0 then trans1\n    \"\"\"\n    t1 = trans1[:, 2]\n    t0 = trans0[:, 2]\n    R1 = trans1[:, :2]\n    R0 = trans0[:, :2]\n    R = R1 @ R0\n    t = R1 @ t0 + t1\n    return np.concatenate([R, t[:, None]], 1)\n\n\ndef transformation_apply_2d(trans, points):\n    return points @ trans[:, :2].T + trans[:, 2:].T\n\n\ndef angle_to_rotation_2d(angle):\n    return np.asarray([[np.cos(angle), -np.sin(angle)],\n                       [np.sin(angle), np.cos(angle)]])\n\n\ndef transformation_offset_2d(x, y):\n    return np.concatenate([np.eye(2), np.asarray([x, y])[:, None]], 1).astype(np.float32)\n\n\ndef transformation_scale_2d(scale):\n    return np.concatenate([np.diag([scale, scale]), np.zeros([2, 1])], 1).astype(np.float32)\n\n\ndef 
transformation_rotation_2d(ang):\n    return np.concatenate([angle_to_rotation_2d(ang), np.zeros([2, 1])], 1).astype(np.float32)\n\n\ndef look_at_rotation(point):\n    \"\"\"\n    @param point: point in normalized image coordinate not in pixels\n    @return: R\n    R @ x_raw -> x_lookat\n    \"\"\"\n    x, y = point\n    R1 = euler2mat(-np.arctan2(x, 1), 0, 0, 'syxz')\n    R2 = euler2mat(np.arctan2(y, 1), 0, 0, 'sxyz')\n    return R2 @ R1\n\n\ndef save_depth(fn, depth, max_val=1000):\n    import png\n    depth = np.clip(depth, a_min=0, a_max=max_val) / max_val * 65535\n    depth = depth.astype(np.uint16)\n    with open(fn, 'wb') as f:\n        writer = png.Writer(width=depth.shape[1], height=depth.shape[0], bitdepth=16, greyscale=True)\n        zgray2list = depth.tolist()\n        writer.write(f, zgray2list)\n\ndef compute_geodesic_distance_from_two_matrices(m1, m2):\n    batch = m1.shape[0]\n    m = torch.bmm(m1, m2.transpose(1, 2))  # batch*3*3\n    cos = (m[:, 0, 0] + m[:, 1, 1] + m[:, 2, 2] - 1) / 2\n    cos = torch.min(cos, torch.autograd.Variable(torch.ones(batch, device=m1.device)))\n    cos = torch.max(cos, torch.autograd.Variable(torch.ones(batch, device=m1.device)) * -1)\n    theta = torch.acos(cos)\n    theta = torch.min(theta, 2*np.pi - theta)\n    theta *= 180 / np.pi\n    return theta\n\ndef compute_rotation_matrix_from_quaternion_xyzw(quaternion, n_flag=True):\n    def normalize_vector(v):\n        batch = v.shape[0]\n        v_mag = torch.sqrt(v.pow(2).sum(1))  # batch\n        v_mag = torch.max(v_mag, torch.autograd.Variable(torch.FloatTensor([1e-8]).to(v.device)))\n        v_mag = v_mag.view(batch, 1).expand(batch, v.shape[1])\n        v = v / v_mag\n        return v\n    batch = quaternion.shape[0]\n    if n_flag:\n        quat = normalize_vector(quaternion)\n    else:\n        quat = quaternion\n\n    qx = quat[..., 0].view(batch, 1)\n    qy = quat[..., 1].view(batch, 1)\n    qz = quat[..., 2].view(batch, 1)\n    qw = quat[..., 3].view(batch, 
1)\n\n    # Unit quaternion rotation matrices computation\n    xx = qx * qx\n    yy = qy * qy\n    zz = qz * qz\n    xy = qx * qy\n    xz = qx * qz\n    yz = qy * qz\n    xw = qx * qw\n    yw = qy * qw\n    zw = qz * qw\n\n    row0 = torch.cat((1 - 2 * yy - 2 * zz, 2 * xy - 2 * zw, 2 * xz + 2 * yw), 1)  # batch*3\n    row1 = torch.cat((2 * xy + 2 * zw, 1 - 2 * xx - 2 * zz, 2 * yz - 2 * xw), 1)  # batch*3\n    row2 = torch.cat((2 * xz - 2 * yw, 2 * yz + 2 * xw, 1 - 2 * xx - 2 * yy), 1)  # batch*3\n\n    matrix = torch.cat((row0.view(batch, 1, 3), row1.view(batch, 1, 3), row2.view(batch, 1, 3)), 1)  # batch*3*3\n\n    return matrix\n\ndef calc_rot_error_from_qxyzw(rotation_pred, rotations):\n    pred_rmat = compute_rotation_matrix_from_quaternion_xyzw(rotation_pred)\n    gt_rmat = compute_rotation_matrix_from_quaternion_xyzw(rotations[:, 0])\n    gt_rmat_z_180 = compute_rotation_matrix_from_quaternion_xyzw(rotations[:, 1])\n    gt_R_error = compute_geodesic_distance_from_two_matrices(gt_rmat, pred_rmat)\n    gt_R_z_180_error = compute_geodesic_distance_from_two_matrices(gt_rmat_z_180, pred_rmat)\n    error,_ = torch.min(torch.stack([gt_R_error, gt_R_z_180_error], dim=0), dim=0)\n    return error"
  },
  {
    "path": "src/nr/utils/dataset_utils.py",
    "content": "import numpy as np\nimport time\nimport random\nimport torch\n\ndef dummy_collate_fn(data_list):\n    return data_list[0]\n\ndef simple_collate_fn(data_list):\n    ks=data_list[0].keys()\n    outputs={k:[] for k in ks}\n    for k in ks:\n        for data in data_list:\n            outputs[k].append(data[k])\n        outputs[k]=torch.stack(outputs[k],0)\n    return outputs\n\ndef set_seed(index,is_train):\n    if is_train:\n        np.random.seed((index+int(time.time()))%(2**16))\n        random.seed((index+int(time.time()))%(2**16)+1)\n        torch.random.manual_seed((index+int(time.time()))%(2**16)+1)\n    else:\n        np.random.seed(index % (2 ** 16))\n        random.seed(index % (2 ** 16) + 1)\n        torch.random.manual_seed(index % (2 ** 16) + 1)"
  },
  {
    "path": "src/nr/utils/draw_utils.py",
    "content": "import matplotlib\nmatplotlib.use('Agg')\nimport sys\nsys.path.append(\"./src/nr\")\n\nfrom utils.base_utils import compute_relative_transformation, compute_F\nimport numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\nimport matplotlib.lines as mlines\nfrom matplotlib import cm\n\nfrom sklearn.decomposition import PCA\nfrom sklearn.manifold import TSNE\nimport open3d as o3d\n\ndef newline(p1, p2):\n    ax = plt.gca()\n    xmin, xmax = ax.get_xbound()\n\n    if p2[0] == p1[0]:\n        xmin = xmax = p1[0]\n        ymin, ymax = ax.get_ybound()\n    else:\n        ymax = p1[1]+(p2[1]-p1[1])/(p2[0]-p1[0])*(xmax-p1[0])\n        ymin = p1[1]+(p2[1]-p1[1])/(p2[0]-p1[0])*(xmin-p1[0])\n\n    l = mlines.Line2D([xmin,xmax], [ymin,ymax])\n    ax.add_line(l)\n    return l\n\ndef draw_correspondence(img0, img1, kps0, kps1, matches=None, colors=None, max_draw_line_num=None, kps_color=(0,0,255),vert=False):\n    if len(img0.shape)==2:\n        img0=np.repeat(img0[:,:,None],3,2)\n    if len(img1.shape)==2:\n        img1=np.repeat(img1[:,:,None],3,2)\n\n    h0, w0 = img0.shape[:2]\n    h1, w1 = img1.shape[:2]\n    if matches is None:\n        assert(kps0.shape[0]==kps1.shape[0])\n        matches=np.repeat(np.arange(kps0.shape[0])[:,None],2,1)\n\n    if vert:\n        w = max(w0, w1)\n        h = h0 + h1\n        out_img = np.zeros([h, w, 3], np.uint8)\n        out_img[:h0, :w0] = img0\n        out_img[h0:, :w1] = img1\n    else:\n        h = max(h0, h1)\n        w = w0 + w1\n        out_img = np.zeros([h, w, 3], np.uint8)\n        out_img[:h0, :w0] = img0\n        out_img[:h1, w0:] = img1\n\n    for pt in kps0:\n        pt = np.round(pt).astype(np.int32)\n        cv2.circle(out_img, tuple(pt), 1, kps_color, -1)\n\n    for pt in kps1:\n        pt = np.round(pt).astype(np.int32)\n        pt = pt.copy()\n        if vert:\n            pt[1] += h0\n        else:\n            pt[0] += w0\n        cv2.circle(out_img, tuple(pt), 1, kps_color, -1)\n\n    if 
max_draw_line_num is not None and matches.shape[0]>max_draw_line_num:\n        np.random.seed(6033)\n        idxs=np.arange(matches.shape[0])\n        np.random.shuffle(idxs)\n        idxs=idxs[:max_draw_line_num]\n        matches= matches[idxs]\n\n        if colors is not None and (type(colors)==list or type(colors)==np.ndarray):\n            colors=np.asarray(colors)\n            colors= colors[idxs]\n\n    for mi,m in enumerate(matches):\n        pt = np.round(kps0[m[0]]).astype(np.int32)\n        pr_pt = np.round(kps1[m[1]]).astype(np.int32)\n        if vert:\n            pr_pt[1] += h0\n        else:\n            pr_pt[0] += w0\n        if colors is None:\n            cv2.line(out_img, tuple(pt), tuple(pr_pt), (0, 255, 0), 1)\n        elif type(colors)==list or type(colors)==np.ndarray:\n            color=(int(c) for c in colors[mi])\n            cv2.line(out_img, tuple(pt), tuple(pr_pt), tuple(color), 1)\n        else:\n            color=(int(c) for c in colors)\n            cv2.line(out_img, tuple(pt), tuple(pr_pt), tuple(color), 1)\n\n    return out_img\n\ndef draw_keypoints(img,kps,colors=None,radius=2):\n    out_img=img.copy()\n    for pi, pt in enumerate(kps):\n        pt = np.round(pt).astype(np.int32)\n        if colors is not None:\n            color=[int(c) for c in colors[pi]]\n            cv2.circle(out_img, tuple(pt), radius, color, -1, cv2.FILLED)\n        else:\n            cv2.circle(out_img, tuple(pt), radius, (0,255,0), -1)\n    return out_img\n\ndef draw_epipolar_line(F, img0, img1, pt0, color):\n    h1,w1=img1.shape[:2]\n    hpt = np.asarray([pt0[0], pt0[1], 1], dtype=np.float32)[:, None]\n    l = F @ hpt\n    l = l[:, 0]\n    a, b, c = l[0], l[1], l[2]\n    pt1 = np.asarray([0, -c / b]).astype(np.int32)\n    pt2 = np.asarray([w1, (-a * w1 - c) / b]).astype(np.int32)\n\n    img0 = cv2.circle(img0, tuple(pt0.astype(np.int32)), 5, color, 2)\n    img1 = cv2.line(img1, tuple(pt1), tuple(pt2), color, 2)\n    return img0, img1\n\ndef 
draw_epipolar_lines(F, img0, img1,num=20):\n    img0,img1=img0.copy(),img1.copy()\n    h0, w0, _ = img0.shape\n    h1, w1, _ = img1.shape\n\n    for k in range(num):\n        color = np.random.randint(0, 255, [3], dtype=np.int32)\n        color = [int(c) for c in color]\n        pt = np.random.uniform(0, 1, 2)\n        pt[0] *= w0\n        pt[1] *= h0\n        pt = pt.astype(np.int32)\n        img0, img1 = draw_epipolar_line(F, img0, img1, pt, color)\n\n    return img0, img1\n\ndef gen_color_map(error, clip_max=12.0, clip_min=2.0, cmap_name='viridis'):\n    rectified_error=(error-clip_min)/(clip_max-clip_min)\n    rectified_error[rectified_error<0]=0\n    rectified_error[rectified_error>=1.0]=1.0\n    viridis=cm.get_cmap(cmap_name,256)\n    colors=[viridis(e) for e in rectified_error]\n    return np.asarray(np.asarray(colors)[:,:3]*255,np.uint8)\n\ndef scale_float_image(image):\n    max_val, min_val = np.max(image), np.min(image)\n    image = (image - min_val) / (max_val - min_val) * 255\n    return image.astype(np.uint8)\n\ndef concat_images(img0,img1,vert=False):\n    if not vert:\n        h0,h1=img0.shape[0],img1.shape[0],\n        if h0<h1: img0=cv2.copyMakeBorder(img0,0,h1-h0,0,0,borderType=cv2.BORDER_CONSTANT,value=0)\n        if h1<h0: img1=cv2.copyMakeBorder(img1,0,h0-h1,0,0,borderType=cv2.BORDER_CONSTANT,value=0)\n        img = np.concatenate([img0, img1], axis=1)\n    else:\n        w0,w1=img0.shape[1],img1.shape[1]\n        if w0<w1: img0=cv2.copyMakeBorder(img0,0,0,0,w1-w0,borderType=cv2.BORDER_CONSTANT,value=0)\n        if w1<w0: img1=cv2.copyMakeBorder(img1,0,0,0,w0-w1,borderType=cv2.BORDER_CONSTANT,value=0)\n        img = np.concatenate([img0, img1], axis=0)\n\n    return img\n\n\ndef concat_images_list(*args,vert=False):\n    if len(args)==1: return args[0]\n    img_out=args[0]\n    for img in args[1:]:\n        img_out=concat_images(img_out,img,vert)\n    return img_out\n\n\ndef get_colors_gt_pr(gt,pr=None):\n    if pr is None:\n        
pr=np.ones_like(gt)\n    colors=np.zeros([gt.shape[0],3],np.uint8)\n    colors[gt & pr]=np.asarray([0,255,0])[None,:]     # tp\n    colors[ (~gt) & pr]=np.asarray([255,0,0])[None,:] # fp\n    colors[ gt & (~pr)]=np.asarray([0,0,255])[None,:] # fn\n    return colors\n\n\ndef draw_hist(fn,vals,bins=100,hist_range=None,names=None):\n    if type(vals)==list:\n        val_num=len(vals)\n        if hist_range is None:\n            hist_range = (np.min(vals),np.max(vals))\n        if names is None:\n            names=[str(k) for k in range(val_num)]\n        for k in range(val_num):\n            plt.hist(vals[k], bins=bins, range=hist_range, alpha=0.5, label=names[k])\n        plt.legend()\n    else:\n        if hist_range is None:\n            hist_range = (np.min(vals),np.max(vals))\n        plt.hist(vals,bins=bins,range=hist_range)\n\n    plt.savefig(fn)\n    plt.close()\n\ndef draw_pr_curve(fn,gt_sort):\n    pos_num_all=np.sum(gt_sort)\n    pos_nums=np.cumsum(gt_sort)\n    sample_nums=np.arange(gt_sort.shape[0])+1\n    precisions=pos_nums.astype(np.float64)/sample_nums\n    recalls=pos_nums/pos_num_all\n\n    precisions=precisions[np.arange(0,gt_sort.shape[0],gt_sort.shape[0]//40)]\n    recalls=recalls[np.arange(0,gt_sort.shape[0],gt_sort.shape[0]//40)]\n    plt.plot(recalls,precisions,'r-')\n    plt.xlim(0,1)\n    plt.ylim(0,1)\n    plt.savefig(fn)\n    plt.close()\n\ndef draw_features_distribution(fn,feats,colors,ds_type='pca'):\n    n,d=feats.shape\n    if d>2:\n        if ds_type=='pca':\n            pca=PCA(2)\n            feats=pca.fit_transform(feats)\n        elif ds_type=='tsne':\n            tsne=TSNE(2)\n            feats=tsne.fit_transform(feats)\n        elif ds_type=='pca-tsne':\n            if d>50:\n                tsne=PCA(50)\n                feats=tsne.fit_transform(feats)\n            tsne=TSNE(2,100.0)\n            feats=tsne.fit_transform(feats)\n        else:\n            raise NotImplementedError\n\n    
colors=[np.array([c[0],c[1],c[2]],np.float64)/255.0 for c in colors]\n    feats_min=np.min(feats,0,keepdims=True)\n    feats_max=np.max(feats,0,keepdims=True)\n    feats=(feats-(feats_min+feats_max)/2)*10/(feats_max-feats_min)\n    plt.scatter(feats[:,0],feats[:,1],s=0.5,c=colors)\n    plt.savefig(fn)\n    plt.close()\n    return feats\n\ndef draw_points(img,points):\n    pts=np.round(points).astype(np.int32)\n    h,w,_=img.shape\n    pts[:,0]=np.clip(pts[:,0],a_min=0,a_max=w-1)\n    pts[:,1]=np.clip(pts[:,1],a_min=0,a_max=h-1)\n    img=img.copy()\n    img[pts[:,1],pts[:,0]]=255\n    # img[pts[:,1],pts[:,0]]+=np.asarray([127,0,0],np.uint8)[None,:]\n    return img\n\ndef draw_bbox(img,bbox,color=None):\n    if color is not None:\n        color=[int(c) for c in color]\n    else:\n        color=(0,255,0)\n    img=cv2.rectangle(img,(bbox[0],bbox[1]),(bbox[0]+bbox[2],bbox[1]+bbox[3]),color)\n    return img\n\ndef output_points(fn,pts,colors=None):\n    with open(fn, 'w') as f:\n        for pi, pt in enumerate(pts):\n            f.write(f'{pt[0]:.6f} {pt[1]:.6f} {pt[2]:.6f} ')\n            if colors is not None:\n                f.write(f'{int(colors[pi,0])} {int(colors[pi,1])} {int(colors[pi,2])}')\n            f.write('\\n')\n\ndef compute_axis_points(pose):\n    R=pose[:,:3] # 3,3\n    t=pose[:,3:] # 3,1\n    pts = np.concatenate([np.identity(3),np.zeros([3,1])],1) # 3,4\n    pts = R.T @ (pts - t)\n    colors = np.asarray([[255,0,0],[0,255,0,],[0,0,255],[0,0,0]],np.uint8)\n    return pts.T, colors\n\ndef draw_epipolar_lines_func(img0,img1,Rt0,Rt1,K0,K1):\n    Rt=compute_relative_transformation(Rt0,Rt1)\n    F=compute_F(K0,K1,Rt[:,:3],Rt[:,3:])\n    return concat_images_list(*draw_epipolar_lines(F,img0,img1))\n\ndef draw_axis(img, R, t, K, length=0.1, width=3, with_text=False, dist=None):\n    \"\"\"\n    Draw a 6dof axis (XYZ -> RGB) in the given rotation and translation\n    :param img - rgb numpy array (RGB format, not opencv BGR)\n    :rotation_vec - euler 
rotations, numpy array of length 3,\n                    use cv2.Rodrigues(R)[0] to convert from rotation matrix\n    :t - 3d translation vector, in meters (dtype must be float)\n    :K - intrinsic calibration matrix, 3x3\n    :length - factor to control the axis lengths\n    :dist - optional distortion coefficients, numpy array of length 4. If None distortion is ignored.\n    \"\"\"\n    rotation_vec = cv2.Rodrigues(R)[0]\n    img = img.astype(np.float32)\n    dist = np.zeros(4, dtype=float) if dist is None else dist\n    points = length * np.float32([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]).reshape(-1, 3)\n    axis_points, _ = cv2.projectPoints(points, rotation_vec, t, K, dist)\n    axis_points = axis_points.astype(np.int32)  # np.int was removed from NumPy; use a concrete dtype\n\n    if with_text:\n        for pt,txt in zip(axis_points, \"XYZO\"):\n            cv2.putText(img, txt, tuple(pt.ravel()), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)\n\n    img = cv2.line(img, tuple(axis_points[3].ravel()), tuple(axis_points[0].ravel()), (0, 0, 255), width)\n    img = cv2.line(img, tuple(axis_points[3].ravel()), tuple(axis_points[1].ravel()), (0, 255, 0), width)\n    img = cv2.line(img, tuple(axis_points[3].ravel()), tuple(axis_points[2].ravel()), (255, 0, 0), width)\n    return img\n\ndef draw_gripper(img, R, t, K, width, thickness=2, dist=None):\n    rotation_vec = cv2.Rodrigues(R)[0]\n    img = img.astype(np.float32)\n    dist = np.zeros(4, dtype=float) if dist is None else dist\n    points = np.float32([[0, -width/2, 0], [0, -width/2, 0.05], [0, width/2, 0], [0, width/2, 0.05], [0, 0, -0.05], [0, 0, 0]]).reshape(-1, 3)\n    axis_points, _ = cv2.projectPoints(points, rotation_vec, t, K, dist)\n    axis_points = axis_points.astype(np.int32)\n\n    img = cv2.line(img, tuple(axis_points[0].ravel()), tuple(axis_points[2].ravel()), (0, 255, 0), thickness)\n    img = cv2.line(img, tuple(axis_points[0].ravel()), tuple(axis_points[1].ravel()), (0, 255, 0), thickness)\n    img = cv2.line(img, 
tuple(axis_points[2].ravel()), tuple(axis_points[3].ravel()), (0, 255, 0), thickness)\n\n    img = cv2.line(img, tuple(axis_points[-2].ravel()), tuple(axis_points[-1].ravel()), (255, 0, 0), thickness)\n    return img\n\n\ndef draw_cube(img, R, t, K, length=0.3, bias=(-0.15, -0.15, 0.0), dist=None):\n    rotation_vec = cv2.Rodrigues(R)[0]\n    img = img.astype(np.float32)\n    dist = np.zeros(4, dtype=float) if dist is None else dist\n    points = length * np.float32([[0,0,0], [0,1,0], [1,1,0], [1,0,0],\n                                    [0,0,1], [0,1,1], [1,1,1], [1,0,1]])\n    points += np.float32(bias)\n    imgpts, _ = cv2.projectPoints(points, rotation_vec, t, K, dist)\n    imgpts = np.int32(imgpts).reshape(-1,2)\n    # draw ground floor in green\n    img = cv2.drawContours(img, [imgpts[:4]],-1,(0,255,0),3)\n    # draw pillars in blue color\n    for i,j in zip(range(4),range(4,8)):\n        img = cv2.line(img, tuple(imgpts[i]), tuple(imgpts[j]),(0,0,255),3)\n    # draw top layer in red color\n    img = cv2.drawContours(img, [imgpts[4:]],-1,(255,0,0),3)\n    return img\n\ndef draw_world_points(img, points, Rwc, twc, K, dist=None):\n    rotation_vec = cv2.Rodrigues(Rwc)[0]\n    img = img.astype(np.float32)\n    dist = np.zeros(4, dtype=float) if dist is None else dist\n    axis_points, _ = cv2.projectPoints(points, rotation_vec, twc, K, dist)\n    img = draw_keypoints(img, axis_points.squeeze(1))\n\n    return img\n\ndef extract_surface_points_from_volume(vol, rg, bound=(-1,1), color=(0,0,1), scale=0.3 / 40):\n    assert len(vol.shape) == 3\n    # the comparisons already yield boolean arrays; np.bool was removed from NumPy\n    ind = np.transpose(((vol > rg[0]) & (vol < rg[1])).nonzero())\n    pcd = o3d.geometry.PointCloud()\n    pcd.points = o3d.utility.Vector3dVector(ind)\n\n    if color is None:\n            values = vol[ind[:,0], ind[:,1], ind[:,2]]\n            a, b = bound\n            m = (a + b) / 2\n            r 
= np.where(values <= m, values - a, - values + b) \n            g = np.where(values <= m, 0, 1 - r)\n            b = np.where(values <= m, 1 - r, 0) \n            colors = np.stack([r, g, b], axis=-1)\n    else:\n            colors = np.array([color]).repeat(ind.shape[0],axis=0)\n\n    pcd.scale(scale, center=(0, 0, 0))\n    pcd.colors = o3d.utility.Vector3dVector(colors)\n\n    return pcd\n\ndef draw_volume_surface(logdir, vol, prefix='surface', rg=(-0.2,0.2)):    \n    pcd = extract_surface_points_from_volume(vol, rg)\n\n    o3d.io.write_point_cloud(f\"{logdir}/{prefix}.ply\", pcd)\n\n\ndef create_mesh_box(width, height, depth, dx=0, dy=0, dz=0):\n    ''' Author: chenxi-wang\n    Create box instance with mesh representation.\n    '''\n    box = o3d.geometry.TriangleMesh()\n    vertices = np.array([[0,0,0],\n                        [width,0,0],\n                        [0,0,depth],\n                        [width,0,depth],\n                        [0,height,0],\n                        [width,height,0],\n                        [0,height,depth],\n                        [width,height,depth]])\n    vertices[:,0] += dx\n    vertices[:,1] += dy\n    vertices[:,2] += dz\n    triangles = np.array([[4,7,5],[4,6,7],[0,2,4],[2,6,4],\n                        [0,1,2],[1,3,2],[1,5,7],[1,7,3],\n                        [2,3,7],[2,7,6],[0,4,1],[1,4,5]])\n    box.vertices = o3d.utility.Vector3dVector(vertices)\n    box.triangles = o3d.utility.Vector3iVector(triangles)\n    return box\n\ndef draw_gripper_o3d(R, t, width, score=1, color=None):\n    '''\n    Author: chenxi-wang\n\n    **Input:**\n\n    - center: numpy array of (3,), target point as gripper center\n\n    - R: numpy array of (3,3), rotation matrix of gripper\n\n    - width: float, gripper width\n\n    - score: float, grasp quality score\n\n    **Output:**\n\n    - open3d.geometry.TriangleMesh\n    '''\n    #x, y, z = center\n    center = t\n    #print(center)\n    height = 0.004\n    finger_width = 0.004\n    
tail_length = 0.04\n    depth = 0.05\n    depth_base = 0 # 0.02\n\n    if color is not None:\n        color_r, color_g, color_b = color\n    else:\n        color_r = score  # red for high score\n        color_g = 0\n        color_b = 1 - score  # blue for low score\n\n    left = create_mesh_box(depth + depth_base + finger_width, finger_width, height)\n    right = create_mesh_box(depth + depth_base + finger_width, finger_width, height)\n    bottom = create_mesh_box(finger_width, width, height)\n    tail = create_mesh_box(tail_length, finger_width, height)\n\n    left_points = np.array(left.vertices)\n    left_triangles = np.array(left.triangles)\n    left_points[:, 0] -= depth_base + finger_width\n    left_points[:, 1] -= width / 2 + finger_width\n    left_points[:, 2] -= height / 2\n\n    right_points = np.array(right.vertices)\n    right_triangles = np.array(right.triangles) + 8\n    right_points[:, 0] -= depth_base + finger_width\n    right_points[:, 1] += width / 2\n    right_points[:, 2] -= height / 2\n\n    bottom_points = np.array(bottom.vertices)\n    bottom_triangles = np.array(bottom.triangles) + 16\n    bottom_points[:, 0] -= finger_width + depth_base\n    bottom_points[:, 1] -= width / 2\n    bottom_points[:, 2] -= height / 2\n\n    tail_points = np.array(tail.vertices)\n    tail_triangles = np.array(tail.triangles) + 24\n    tail_points[:, 0] -= tail_length + finger_width + depth_base\n    tail_points[:, 1] -= finger_width / 2\n    tail_points[:, 2] -= height / 2\n\n    vertices = np.concatenate([left_points, right_points, bottom_points, tail_points], axis=0)\n    vertices = np.dot(R, vertices.T).T + center\n    triangles = np.concatenate([left_triangles, right_triangles, bottom_triangles, tail_triangles], axis=0)\n    colors = np.array([[color_r, color_g, color_b] for _ in range(len(vertices))])\n\n    gripper = o3d.geometry.TriangleMesh()\n    gripper.vertices = o3d.utility.Vector3dVector(vertices)\n    gripper.triangles = 
o3d.utility.Vector3iVector(triangles)\n    gripper.vertex_colors = o3d.utility.Vector3dVector(colors)\n    return gripper\n\ndef transform_points(points, trans):\n    '''\n    Author: chenxi-wang\n\n    **Input:**\n\n    - points: numpy array of (N,3), point cloud\n\n    - trans: numpy array of (4,4), transformation matrix\n\n    **Output:**\n\n    - numpy array of (N,3), transformed points.\n    '''\n    ones = np.ones([points.shape[0], 1], dtype=points.dtype)\n    points_ = np.concatenate([points, ones], axis=-1)\n    points_ = np.matmul(trans, points_.T).T\n    return points_[:, :3]\n"
  },
  {
    "path": "src/nr/utils/field_utils.py",
    "content": "import torch\nimport numpy as np\n\ndef generate_grid_points_old(bound_min, bound_max, resolution):\n    X = torch.linspace(bound_min[0], bound_max[0], resolution)\n    Y = torch.linspace(bound_min[1], bound_max[1], resolution)\n    Z = torch.linspace(bound_max[2], bound_min[2], resolution)  # top to bottom, matching the ordering of the training rays\n    XYZ = torch.stack(torch.meshgrid(X, Y, Z), dim=-1)  # default 'ij' indexing\n\n    return XYZ\n\nRESOLUTION = 40\nVOLUME_SIZE = 0.3\nVOXEL_SIZE = VOLUME_SIZE / RESOLUTION\nHALF_VOXEL_SIZE = VOXEL_SIZE / 2\n\ndef generate_grid_points():\n    points = []\n    for x in range(RESOLUTION):\n        for y in range(RESOLUTION):\n            for z in range(RESOLUTION):\n                points.append([x * VOXEL_SIZE + HALF_VOXEL_SIZE,\n                               y * VOXEL_SIZE + HALF_VOXEL_SIZE,\n                               z * VOXEL_SIZE + HALF_VOXEL_SIZE])\n    return np.array(points).astype(np.float32)\n\nTSDF_SAMPLE_POINTS = generate_grid_points()\n\nif __name__ == \"__main__\":\n    GT_POINTS = np.load('points.npy')\n    TSDF_VOLUME_MASK = np.zeros((1, 40, 40, 40), dtype=bool)  # np.bool8 is deprecated; plain bool works everywhere\n    idxs = []\n    for point in GT_POINTS:\n        i, j, k = np.floor(point / VOXEL_SIZE).astype(int)\n        TSDF_VOLUME_MASK[0, i, j, k] = True\n        idxs.append(i * (RESOLUTION * RESOLUTION) + j * RESOLUTION + k)\n    print(TSDF_SAMPLE_POINTS[idxs], GT_POINTS)\n    assert np.allclose(TSDF_SAMPLE_POINTS[idxs], GT_POINTS)\n"
  },
  {
    "path": "src/nr/utils/grasp_utils.py",
    "content": "from pathlib import Path\nimport numpy as np\nfrom scipy import ndimage\nimport sys\nsys.path.append(\"./src\")\nimport time\n\nfrom nr.utils.base_utils import color_map_forward\nfrom nr.utils.draw_utils import draw_cube, extract_surface_points_from_volume\nfrom gd.utils.transform import Transform, Rotation\nimport cv2\n\nclass Grasp(object):\n    \"\"\"Grasp parameterized as pose of a 2-finger robot hand.\n\n    TODO(mbreyer): clarify definition of grasp frame\n    \"\"\"\n\n    def __init__(self, pose, width):\n        self.pose = pose\n        self.width = width\n\n\ndef to_voxel_coordinates(grasp, voxel_size):\n    pose = grasp.pose\n    pose.translation /= voxel_size\n    width = grasp.width / voxel_size\n    return Grasp(pose, width)\n\n\ndef from_voxel_coordinates(grasp, voxel_size):\n    pose = grasp.pose\n    pose.translation *= voxel_size\n    width = grasp.width * voxel_size\n    return Grasp(pose, width)\n\n\ndef process(\n    tsdf_vol,\n    qual_vol,\n    rot_vol,\n    width_vol,\n    gaussian_filter_sigma=1.0,\n    min_width=0,\n    max_width=12,\n):\n    tsdf_vol = tsdf_vol.squeeze()\n    qual_vol = qual_vol.squeeze()\n    rot_vol = rot_vol.squeeze()\n    width_vol = width_vol.squeeze()\n    # smooth quality volume with a Gaussian\n    qual_vol = ndimage.gaussian_filter(\n        qual_vol, sigma=gaussian_filter_sigma, mode=\"nearest\"\n    )\n\n    # mask out voxels too far away from the surface\n    outside_voxels = tsdf_vol > 0.1\n    inside_voxels = np.logical_and(-1 < tsdf_vol, tsdf_vol < -0.1)\n    # scipy.ndimage.morphology is deprecated; call binary_dilation from ndimage directly\n    valid_voxels = ndimage.binary_dilation(\n        outside_voxels, iterations=2, mask=np.logical_not(inside_voxels)\n    )\n    qual_vol[~valid_voxels] = 0.0\n    # reject voxels with predicted widths that are too small or too large\n    qual_vol[np.logical_or(width_vol < min_width, width_vol > max_width)] = 0.0\n\n    return tsdf_vol, qual_vol, rot_vol, 
width_vol\n\ndef select_index(qual_vol, rot_vol, width_vol, index):\n    i, j, k = index\n    score = qual_vol[i, j, k]\n    ori = Rotation.from_quat(rot_vol[:, i, j, k])\n    pos = np.array([i, j, k], dtype=np.float64)\n    width = width_vol[i, j, k]\n    return Grasp(Transform(ori, pos), width), score\n\ndef select(qual_vol, rot_vol, width_vol, threshold=0.90, max_filter_size=4):\n    # threshold on grasp quality\n    qual_vol[qual_vol < threshold] = 0.0\n\n    # non maximum suppression\n    max_vol = ndimage.maximum_filter(qual_vol, size=max_filter_size)\n    qual_vol = np.where(qual_vol == max_vol, qual_vol, 0.0)\n    mask = np.where(qual_vol, 1.0, 0.0)\n\n    # construct grasps\n    grasps, scores = [], []\n    for index in np.argwhere(mask):\n        grasp, score = select_index(qual_vol, rot_vol, width_vol, index)\n        grasps.append(grasp)\n        scores.append(score)\n\n    return grasps, scores\n\n\ndef sim_grasp(database, alp_vol, qual_vol, rot_vol, width_vol, top_k=10):\n    # process() returns four volumes; the smoothed TSDF is not needed here\n    _, qual_vol, rot_vol, width_vol = process(alp_vol, qual_vol, rot_vol, width_vol)\n    grasps, scores = select(qual_vol.copy(), rot_vol, width_vol)\n    grasps, scores = np.asarray(grasps), np.asarray(scores)\n\n    img = None\n    if len(grasps) > 0:\n        p = np.argsort(scores)[::-1][:top_k]\n        grasps = [g for g in grasps[p]]\n        scores = scores[p]\n        pos = np.array([ g.pose.translation for g in grasps ])\n        rot = np.array([ g.pose.rotation.as_matrix() for g in grasps ])\n        width = np.array([ g.width for g in grasps ])\n\n        img = database.visualize_grasping(pos, rot, width)\n        database.visualize_grasping_3d(pos, rot, width, scores)\n\n    return grasps, scores, img\n\n\ndef run_real(run_id, model, images: list, extrinsics: list, intrinsic, save_img=True):\n    extrinsics = np.stack(extrinsics, 0)\n    intrinsics = np.repeat(np.expand_dims(intrinsic, 0), 
extrinsics.shape[0], axis=0)\n    depth_range = np.repeat(np.expand_dims(np.r_[0.2, 0.8], 0), extrinsics.shape[0], axis=0).astype(np.float32)\n    bbox3d = [[-0.15, -0.15, 0.00], [0.15, 0.15, 0.3]]\n\n    if save_img:\n        save_path = f'data/grasp_capture/{run_id}'\n        if not Path(save_path).exists():\n            Path(save_path).mkdir(parents=True)\n        for i, img in enumerate(images):\n            img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)\n            # img = draw_cube(img, extrinsics[i][:3,:3], extrinsics[i][:3,3], intrinsics[i], length=0.3, bias=bbox3d[0])\n            cv2.imwrite(f\"{save_path}/{i}.png\", img)\n\n    images = color_map_forward(np.stack(images, 0)).transpose([0, 3, 1, 2])\n    \n    t0 = time.time()\n    tsdf_vol, qual_vol, rot_vol, width_vol = model(images, extrinsics, intrinsics, depth_range=depth_range, bbox3d=bbox3d, que_id=3)\n    t = time.time() - t0\n\n    tsdf_vol, qual_vol, rot_vol, width_vol = process(tsdf_vol, qual_vol, rot_vol, width_vol)\n    grasps, scores = select(qual_vol.copy(), rot_vol, width_vol)\n    grasps, scores = np.asarray(grasps), np.asarray(scores)\n\n    if len(grasps) > 0:\n        p = np.random.permutation(len(grasps))\n        grasps = [from_voxel_coordinates(g, 0.3 / 40) for g in grasps[p]]\n        scores = scores[p]\n\n    pc = extract_surface_points_from_volume(tsdf_vol, (-0.2, 0.2))\n\n    return grasps, scores, tsdf_vol, pc, t"
  },
  {
    "path": "src/nr/utils/imgs_info.py",
    "content": "import numpy as np\nimport torch\n\nfrom utils.base_utils import color_map_forward, pad_img_end\n\ndef random_crop(ref_imgs_info, que_imgs_info, target_size):\n    imgs = ref_imgs_info['imgs']\n    n, _, h, w = imgs.shape\n    out_h, out_w = target_size[0], target_size[1]\n    if out_w >= w or out_h >= h:\n        return ref_imgs_info, que_imgs_info  # nothing to crop; keep the return shape consistent\n\n    center_h = np.random.randint(low=out_h // 2 + 1, high=h - out_h // 2 - 1)\n    center_w = np.random.randint(low=out_w // 2 + 1, high=w - out_w // 2 - 1)\n\n    def crop(tensor):\n        tensor = tensor[:, :, center_h - out_h // 2:center_h + out_h // 2,\n                              center_w - out_w // 2:center_w + out_w // 2]\n        return tensor\n\n    def crop_imgs_info(imgs_info):\n        imgs_info['imgs'] = crop(imgs_info['imgs'])\n        if 'depth' in imgs_info: imgs_info['depth'] = crop(imgs_info['depth'])\n        if 'true_depth' in imgs_info: imgs_info['true_depth'] = crop(imgs_info['true_depth'])\n        if 'masks' in imgs_info: imgs_info['masks'] = crop(imgs_info['masks'])\n\n        Ks = imgs_info['Ks'] # n, 3, 3\n        h_init = center_h - out_h // 2\n        w_init = center_w - out_w // 2\n        Ks[:,0,2]-=w_init\n        Ks[:,1,2]-=h_init\n        imgs_info['Ks']=Ks\n        return imgs_info\n\n    return crop_imgs_info(ref_imgs_info), crop_imgs_info(que_imgs_info)\n\ndef random_flip(ref_imgs_info,que_imgs_info):\n    def flip(tensor):\n        tensor = np.flip(tensor.transpose([0, 2, 3, 1]), 2)  # n,h,w,3\n        tensor = np.ascontiguousarray(tensor.transpose([0, 3, 1, 2]))\n        return tensor\n\n    def flip_imgs_info(imgs_info):\n        imgs_info['imgs'] = flip(imgs_info['imgs'])\n        if 'depth' in imgs_info: imgs_info['depth'] = flip(imgs_info['depth'])\n        if 'true_depth' in imgs_info: imgs_info['true_depth'] = flip(imgs_info['true_depth'])\n        if 'masks' in imgs_info: imgs_info['masks'] = flip(imgs_info['masks'])\n\n        Ks = imgs_info['Ks']  # n, 3, 3\n        Ks[:, 0, :] *= -1\n        w = imgs_info['imgs'].shape[-1]\n        Ks[:, 0, 2] += w - 1\n        imgs_info['Ks'] = Ks\n        return imgs_info\n\n    ref_imgs_info = flip_imgs_info(ref_imgs_info)\n    que_imgs_info = flip_imgs_info(que_imgs_info)\n    return ref_imgs_info, que_imgs_info\n\ndef pad_imgs_info(ref_imgs_info,pad_interval):\n    ref_imgs, ref_depths, ref_masks = ref_imgs_info['imgs'], ref_imgs_info['depth'], ref_imgs_info['masks']\n    ref_depth_gt = ref_imgs_info['true_depth'] if 'true_depth' in ref_imgs_info else None\n    rfn, _, h, w = ref_imgs.shape\n    ph = (pad_interval - (h % pad_interval)) % pad_interval\n    pw = (pad_interval - (w % pad_interval)) % pad_interval\n    if ph != 0 or pw != 0:\n        ref_imgs = np.pad(ref_imgs, ((0, 0), (0, 0), (0, ph), (0, pw)), 'reflect')\n        ref_depths = np.pad(ref_depths, ((0, 0), (0, 0), (0, ph), (0, pw)), 'reflect')\n        ref_masks = np.pad(ref_masks, ((0, 0), (0, 0), (0, ph), (0, pw)), 'reflect')\n        if ref_depth_gt is not None:\n            ref_depth_gt = np.pad(ref_depth_gt, ((0, 0), (0, 0), (0, ph), (0, pw)), 'reflect')\n    ref_imgs_info['imgs'], ref_imgs_info['depth'], ref_imgs_info['masks'] = ref_imgs, ref_depths, ref_masks\n    if ref_depth_gt is not None:\n        ref_imgs_info['true_depth'] = ref_depth_gt\n    return ref_imgs_info\n\ndef build_imgs_info(database, ref_ids, pad_interval=-1, is_aligned=True, align_depth_range=False, has_mask=True, has_depth=True, replace_none_depth=False):\n    if not is_aligned:\n        assert has_depth\n        rfn = len(ref_ids)\n        ref_imgs, ref_masks, ref_depths, shapes = [], [], [], []\n        for ref_id in ref_ids:\n            img = database.get_image(ref_id)\n            shapes.append([img.shape[0], img.shape[1]])\n            ref_imgs.append(img)\n            ref_masks.append(database.get_mask(ref_id))\n            ref_depths.append(database.get_depth(ref_id))\n\n        shapes = np.asarray(shapes)\n        th, tw = np.max(shapes, 
0)\n        for rfi in range(rfn):\n            ref_imgs[rfi] = pad_img_end(ref_imgs[rfi], th, tw, 'reflect')\n            ref_masks[rfi] = pad_img_end(ref_masks[rfi][:, :, None], th, tw, 'constant', 0)[..., 0]\n            ref_depths[rfi] = pad_img_end(ref_depths[rfi][:, :, None], th, tw, 'constant', 0)[..., 0]\n        ref_imgs = color_map_forward(np.stack(ref_imgs, 0)).transpose([0, 3, 1, 2])\n        ref_masks = np.stack(ref_masks, 0)[:, None, :, :]\n        ref_depths = np.stack(ref_depths, 0)[:, None, :, :]\n    else:\n        ref_imgs = color_map_forward(np.asarray([database.get_image(ref_id) for ref_id in ref_ids])).transpose([0, 3, 1, 2])\n        if has_mask:\n            ref_masks = np.asarray([database.get_mask(ref_id) for ref_id in ref_ids], dtype=np.float32)[:, None, :, :]\n        else:\n            b, _, h, w = ref_imgs.shape\n            ref_masks = np.ones([b, 1, h, w], dtype=np.float32)  # single-channel dummy mask, matching the has_mask branch\n        if has_depth:\n            ref_depths = [database.get_depth(ref_id) for ref_id in ref_ids]\n            if replace_none_depth:\n                b, _, h, w = ref_imgs.shape\n                for i, depth in enumerate(ref_depths):\n                    if depth is None: ref_depths[i] = np.zeros([h, w], dtype=np.float32)\n            ref_depths = np.asarray(ref_depths, dtype=np.float32)[:, None, :, :]\n        else: ref_depths = None\n\n    ref_poses = np.asarray([database.get_pose(ref_id) for ref_id in ref_ids], dtype=np.float32)\n    ref_Ks = np.asarray([database.get_K(ref_id) for ref_id in ref_ids], dtype=np.float32)\n    ref_depth_range = np.asarray([database.get_depth_range(ref_id) for ref_id in ref_ids], dtype=np.float32)\n    if align_depth_range:\n        ref_depth_range[:,0]=np.min(ref_depth_range[:,0])\n        ref_depth_range[:,1]=np.max(ref_depth_range[:,1])\n    ref_imgs_info = {'imgs': ref_imgs, 'poses': ref_poses, 'Ks': ref_Ks, 'depth_range': ref_depth_range, 'masks': ref_masks, 'bbox3d': database.get_bbox3d()}\n    if has_depth: 
ref_imgs_info['depth'] = ref_depths\n    if pad_interval!=-1:\n        ref_imgs_info = pad_imgs_info(ref_imgs_info, pad_interval)\n    return ref_imgs_info\n\ndef build_render_imgs_info(que_pose,que_K,que_shape,que_depth_range):\n    h, w = que_shape\n    h, w = int(h), int(w)\n    que_coords = np.stack(np.meshgrid(np.arange(w), np.arange(h)), -1)\n    que_coords = que_coords.reshape([1, -1, 2]).astype(np.float32)\n    return {'poses': que_pose.astype(np.float32)[None,:,:],  # 1,3,4\n            'Ks': que_K.astype(np.float32)[None,:,:],  # 1,3,3\n            'coords': que_coords,\n            'depth_range': np.asarray(que_depth_range, np.float32)[None, :],\n            'shape': (h,w)}\n\ndef build_canonical_info(bbox, resolution, que_pose, que_K):\n    x_min,x_max,y_min,y_max = bbox\n    print('bbox', bbox)\n    que_coords = np.stack(np.meshgrid(np.linspace(y_min, y_max, 2), np.linspace(x_min, x_max, 2)), -1)\n    print('que_coords', que_coords)\n    return {'poses': que_pose.astype(np.float32)[None,:,:],  # 1,3,4\n            'Ks': que_K.astype(np.float32)[None,:,:],  # 1,3,3\n            'coords': que_coords,\n            'depth_range': np.asarray([0.5, 0.8], np.float32)[None, :],\n            'shape': (resolution, resolution)}\n\ndef imgs_info_to_torch(imgs_info):\n    for k, v in imgs_info.items():\n        if isinstance(v,np.ndarray):\n            imgs_info[k] = torch.from_numpy(v).float()\n    return imgs_info\n\ndef grasp_info_to_torch(info):\n    torch_info = []\n    for item in info:\n        torch_info.append(torch.from_numpy(item))\n    return torch_info\ndef imgs_info_slice(imgs_info, indices):\n    imgs_info_out={}\n    imgs_info_out['bbox3d'] = imgs_info['bbox3d']\n    for k, v in imgs_info.items():\n        if k != 'bbox3d' and v is not None:\n            imgs_info_out[k] = v[indices]\n    return imgs_info_out\n"
  },
  {
    "path": "src/nr/utils/view_select.py",
    "content": "import numpy as np\n\nfrom dataset.database import BaseDatabase\n\ndef compute_nearest_camera_indices(database, que_ids, ref_ids=None):\n    if ref_ids is None: ref_ids = que_ids\n    ref_poses = [database.get_pose(ref_id) for ref_id in ref_ids]\n    ref_cam_pts = np.asarray([-pose[:, :3].T @ pose[:, 3] for pose in ref_poses])\n    que_poses = [database.get_pose(que_id) for que_id in que_ids]\n    que_cam_pts = np.asarray([-pose[:, :3].T @ pose[:, 3] for pose in que_poses])\n\n    dists = np.linalg.norm(ref_cam_pts[None, :, :] - que_cam_pts[:, None, :], 2, 2)\n    dists_idx = np.argsort(dists, 1)\n    return dists_idx\n\ndef select_working_views(ref_poses, que_poses, work_num, exclude_self=False):\n    ref_cam_pts = np.asarray([-pose[:, :3].T @ pose[:, 3] for pose in ref_poses])\n    render_cam_pts = np.asarray([-pose[:, :3].T @ pose[:, 3] for pose in que_poses])\n    dists = np.linalg.norm(ref_cam_pts[None, :, :] - render_cam_pts[:, None, :], 2, 2) # qn,rfn\n    ids = np.argsort(dists)\n    if exclude_self:\n        ids = ids[:, 1:work_num+1]\n    else:\n        ids = ids[:, :work_num]\n    return ids\n\ndef select_working_views_db(database: BaseDatabase, ref_ids, que_poses, work_num, exclude_self=False):\n    ref_ids = database.get_img_ids() if ref_ids is None else ref_ids\n    ref_poses = [database.get_pose(img_id) for img_id in ref_ids]\n\n    ref_ids = np.asarray(ref_ids)\n    ref_poses = np.asarray(ref_poses)\n    indices = select_working_views(ref_poses, que_poses, work_num, exclude_self)\n    return ref_ids[indices] # qn,wn"
  },
  {
    "path": "src/rd/modify_material.py",
    "content": "from mathutils import Vector\nimport bpy\nimport random\n\ndef modify_material(mat_links, mat_nodes, material_name, mat_randomize_mode, is_texture=False, orign_base_color=None,\n                    tex_node=None, is_transfer=True, is_arm=False):\n    mat_class = material_name.split(\"_\")[0]\n    if is_transfer:\n        if mat_class in (\"metal\", \"porcelain\", \"plasticsp\", \"paintsp\"):\n            tex_mix_prop = random.uniform(0.85, 0.98)\n        else:\n            tex_mix_prop = random.uniform(0.7, 0.95)\n        mix_prop = random.uniform(0.6, 0.9)\n\n        if mat_randomize_mode in (\"specular_texmix\", \"mixed\", \"specular_and_transparent\") or mat_class in (\"metal\", \"porcelain\", \"plasticsp\", \"paintsp\"):\n            transfer_rand = random.randint(0, 2)\n        else:\n            transfer_rand = 1\n\n        if transfer_rand == 1:\n            transfer_flag = True\n        else:\n            transfer_flag = False\n            tex_mix_prop = 1\n            mix_prop = 1\n            if not is_arm:\n                bs_color_rand = random.uniform(-0.2, 0.2)\n            else:\n                bs_color_rand = 0\n            r_rand = bs_color_rand\n            g_rand = bs_color_rand\n            b_rand = bs_color_rand\n\n    else:\n        tex_mix_prop = 1\n        mix_prop = 1\n        transfer_flag = False\n        if not is_arm:\n            bs_color_rand = random.uniform(-0.2, 0.2)\n        else:\n            bs_color_rand = 0\n        r_rand = bs_color_rand\n        g_rand = bs_color_rand\n        b_rand = bs_color_rand\n\n    bsdfnode_list = [n for n in mat_nodes if isinstance(n, bpy.types.ShaderNodeBsdfPrincipled)]\n    if bsdfnode_list:\n        for bsdfnode in bsdfnode_list:\n            if not bsdfnode.inputs[4].links:  # metallic\n                src_value = bsdfnode.inputs[4].default_value\n                if mat_class in (\"porcelain\", \"plasticsp\"):\n                    new_value = src_value + random.uniform(-0.05, 0.1)\n                else:  # metal and all remaining classes share the same range\n                    new_value = src_value + random.uniform(-0.05, 0.05)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[4].default_value = new_value\n            if not bsdfnode.inputs[5].links:  # specular\n                src_value = bsdfnode.inputs[5].default_value\n                new_value = src_value + random.uniform(0, 0.3)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[5].default_value = new_value\n            if not bsdfnode.inputs[6].links:  # specularTint\n                src_value = bsdfnode.inputs[6].default_value\n                new_value = src_value + random.uniform(-1, 1)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[6].default_value = new_value\n            if not bsdfnode.inputs[7].links:  # roughness\n                src_value = bsdfnode.inputs[7].default_value\n                if mat_class == \"metal\" or mat_class == \"porcelain\" 
or \\\n                        material_name.split(\"_\")[0] == \"plasticsp\" or material_name.split(\"_\")[0] == \"paintsp\":\n                    new_value = src_value + random.uniform(-0.2, 0.01)\n                else:\n                    new_value = src_value + random.uniform(-0.03, 0.1)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[7].default_value = new_value\n            if not bsdfnode.inputs[8].links:  # anisotropic\n                src_value = bsdfnode.inputs[8].default_value\n                new_value = src_value + random.uniform(-0.1, 0.1)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[8].default_value = new_value\n            if not bsdfnode.inputs[9].links:  # anisotropicRotation\n                src_value = bsdfnode.inputs[9].default_value\n                new_value = src_value + random.uniform(-0.3, 0.3)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[9].default_value = new_value\n            if not bsdfnode.inputs[10].links:  # sheen\n                src_value = bsdfnode.inputs[10].default_value\n                new_value = src_value + random.uniform(-0.1, 0.1)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[10].default_value = new_value\n            if not bsdfnode.inputs[11].links:  # sheenTint\n                src_value = bsdfnode.inputs[11].default_value\n                new_value = src_value + random.uniform(-0.2, 0.2)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif 
new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[11].default_value = new_value\n            if not bsdfnode.inputs[12].links:  # clearcoat\n                src_value = bsdfnode.inputs[12].default_value\n                new_value = src_value + random.uniform(-0.2, 0.2)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[12].default_value = new_value\n            if not bsdfnode.inputs[13].links:  # clearcoatGloss\n                src_value = bsdfnode.inputs[13].default_value\n                new_value = src_value + random.uniform(-0.2, 0.2)\n                if new_value > 1.0:\n                    new_value = 1.0\n                elif new_value < 0:\n                    new_value = 0.0\n                bsdfnode.inputs[13].default_value = new_value\n\n    ## metal\n    if material_name == \"metal_0\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.3, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 1.0)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.3, 1.0)         # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in 
enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Image Texture.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_1\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.9, 1.00)        # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.5, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[7].default_value = random.uniform(0.08, 0.25)  # roughness\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.04, 0.5)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.3, 0.7)         # anisotropicRotation\n        
# mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.8, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Tangent\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[22])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            # perturb the base color, clamped per channel to [0, 1]\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            new_bs_color_r = min(max(bs_color[0] + r_rand, 0.0), 1.0)\n            new_bs_color_g = min(max(bs_color[1] + g_rand, 0.0), 1.0)\n            new_bs_color_b = min(max(bs_color[2] + b_rand, 0.0), 1.0)\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = new_bs_color\n    elif material_name == \"metal_10\":\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.5)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.3, 0.7)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            bsdf_new.location = Vector((-800, 0))\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            mix_new.location = Vector((-800, 0))\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Image Texture\"].outputs[1], mat_nodes[\"Principled BSDF-new\"].inputs[19])\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[4])\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_11\":\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.8)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 0.8)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = 
input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[4])\n            mat_links.new(mat_nodes[\"Image Texture.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_12\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.8)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 0.8)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         
# clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Reroute.006\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            # perturb the base color, clamped per channel to [0, 1]\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            new_bs_color_r = min(max(bs_color[0] + r_rand, 0.0), 1.0)\n            new_bs_color_g = min(max(bs_color[1] + g_rand, 0.0), 1.0)\n            new_bs_color_b = min(max(bs_color[2] + b_rand, 0.0), 1.0)\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = new_bs_color\n    elif material_name == \"metal_13\":\n        # mat_nodes[\"Principled BSDF.001\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF.001\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF.001\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF.001\"].inputs[8].default_value = random.uniform(0.3, 0.7)  # anisotropic\n        # mat_nodes[\"Principled BSDF.001\"].inputs[9].default_value = random.uniform(0.0, 0.8)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF.001\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF.001\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.001\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Mix.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.001\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output.001\"].inputs[\"Surface\"])\n        else:\n            # perturb the base color, clamped per channel to [0, 1]\n            bs_color = mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value\n            new_bs_color_r = min(max(bs_color[0] + r_rand, 0.0), 1.0)\n            new_bs_color_g = min(max(bs_color[1] + g_rand, 0.0), 1.0)\n            new_bs_color_b = min(max(bs_color[2] + b_rand, 0.0), 1.0)\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = new_bs_color\n    elif material_name == \"metal_14\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # 
specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.5)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 0.5)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)         # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)         # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.85\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Group\"].outputs[1], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Group\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            # perturb the base color, clamped per channel to [0, 1]\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            new_bs_color_r = min(max(bs_color[0] + r_rand, 0.0), 1.0)\n            new_bs_color_g = min(max(bs_color[1] + g_rand, 0.0), 1.0)\n            new_bs_color_b = min(max(bs_color[2] + b_rand, 0.0), 1.0)\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = new_bs_color\n    elif material_name == \"metal_2\":\n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.5, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.95)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)        # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = 
mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Image Texture.003\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_3\":\n        ##  \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.5, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.0, 0.2)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 1.0)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 1.0)        # clearcoatGloss\n        
mat_nodes[\"Gamma\"].inputs[1].default_value = random.uniform(3.0, 4.0)\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Gamma\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_4\":\n        ##  \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.95, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.1, 0.5)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = 
random.uniform(0.0, 0.2)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 0.5)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.5)        # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"metal_5\":\n        ##  \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.98, 1.00)       
# metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.2, 0.4)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.6, 0.9)  # anisotropic\n        # mat_nodes[\"Principled BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.8, 1.0)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n\n        if transfer_flag:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Voronoi Texture\"].outputs[1], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Tangent\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[22])\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[21])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        else:\n            # perturb the base color, clamped per channel to [0, 1]\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            new_bs_color_r = min(max(bs_color[0] + r_rand, 0.0), 1.0)\n            new_bs_color_g = min(max(bs_color[1] + g_rand, 0.0), 1.0)\n            new_bs_color_b = min(max(bs_color[2] + b_rand, 0.0), 1.0)\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = new_bs_color\n    elif material_name == \"metal_6\":\n        ##  \n        # mat_nodes[\"BSDF guidé\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"BSDF guidé\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"BSDF guidé\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"BSDF guidé\"].inputs[8].default_value = random.uniform(0.0, 0.2)  # anisotropic\n        # mat_nodes[\"BSDF guidé\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"BSDF guidé\"].inputs[12].default_value = random.uniform(0.0, 0.3)        # clearcoat\n        # mat_nodes[\"BSDF guidé\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n        mat_nodes[\"Valeur\"].outputs[0].default_value = 
random.uniform(0.1, 0.3)\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            bsdf_new.location = Vector((-800, 0))\n            for key, input in enumerate(mat_nodes[\"BSDF guidé\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            mix_new.location = Vector((-800, 0))\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Mélanger.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"BSDF guidé\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Sortie de matériau\"].inputs[0])\n    elif material_name == \"metal_7\":\n        ##  \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[8].default_value = random.uniform(0.7, 0.9)  # anisotropic\n        # mat_nodes[\"Principled 
BSDF\"].inputs[9].default_value = random.uniform(0.0, 1.0)         # anisotropicRotation\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 0.3)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            # bsdf_new.location = Vector((-800, 0))\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            # mix_new.location = Vector((-800, 0))\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.7\n\n            mat_links.new(mat_nodes[\"Reroute.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Tangent\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[22])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n        
else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            # jitter each RGB channel of the base color, clamped to [0, 1]\n            new_bs_color = [\n                min(max(bs_color[0] + r_rand, 0), 1),\n                min(max(bs_color[1] + g_rand, 0), 1),\n                min(max(bs_color[2] + b_rand, 0), 1),\n                1,\n            ]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"metal_8\":\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n            bsdf_1_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_1_new.name = 'Principled BSDF-1-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.001\"].inputs):\n                bsdf_1_new.inputs[key].default_value = input.default_value\n            bsdf_2_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_2_new.name = 'Principled BSDF-2-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.002\"].inputs):\n                bsdf_2_new.inputs[key].default_value = input.default_value\n            bsdf_3_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_3_new.name = 'Principled BSDF-3-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF.003\"].inputs):\n           
     bsdf_3_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n            mix_1_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_1_new.name = 'Mix Shader-1-new'\n            mix_2_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_2_new.name = 'Mix Shader-2-new'\n            mix_3_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_3_new.name = 'Mix Shader-3-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-1-new\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-2-new\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-3-new\"].inputs[0])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.6\n                mat_nodes[\"Mix Shader-1-new\"].inputs[0].default_value = 0.6\n                mat_nodes[\"Mix Shader-2-new\"].inputs[0].default_value = 0.6\n                mat_nodes[\"Mix Shader-3-new\"].inputs[0].default_value = 0.6\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Principled BSDF-1-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Principled BSDF-2-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Principled BSDF-3-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.5\n                mat_nodes[\"Mix Shader-1-new\"].inputs[0].default_value = 0.5\n                mat_nodes[\"Mix Shader-2-new\"].inputs[0].default_value = 0.5\n                mat_nodes[\"Mix 
Shader-3-new\"].inputs[0].default_value = 0.5\n\n            mat_links.new(mat_nodes[\"ColorRamp\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n            mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-1-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Bump.001\"].outputs[0], mat_nodes[\"Principled BSDF-2-new\"].inputs[20])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.001\"].outputs[0], mat_nodes[\"Mix Shader-1-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-1-new\"].outputs[0], mat_nodes[\"Mix Shader-1-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-1-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[2])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.002\"].outputs[0], mat_nodes[\"Mix Shader-2-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-2-new\"].outputs[0], mat_nodes[\"Mix Shader-2-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-2-new\"].outputs[0], mat_nodes[\"Mix Shader.001\"].inputs[1])\n\n            mat_links.new(mat_nodes[\"Principled BSDF.003\"].outputs[0], mat_nodes[\"Mix Shader-3-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-3-new\"].outputs[0], mat_nodes[\"Mix Shader-3-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-3-new\"].outputs[0], mat_nodes[\"Mix Shader.001\"].inputs[2])\n    elif material_name == \"metal_9\":\n        ##  \n        # mat_nodes[\"Principled BSDF\"].inputs[4].default_value = random.uniform(0.98, 1.00)       # metallic\n        # mat_nodes[\"Principled BSDF\"].inputs[5].default_value = 
random.uniform(0.5, 1.0)         # specular\n        # mat_nodes[\"Principled BSDF\"].inputs[6].default_value = random.uniform(0.0, 1.0)         # specularTint\n        mat_nodes[\"Principled BSDF\"].inputs[7].default_value = random.uniform(0.01, 0.3)  # roughness\n        # mat_nodes[\"Principled BSDF\"].inputs[12].default_value = random.uniform(0.0, 0.3)        # clearcoat\n        # mat_nodes[\"Principled BSDF\"].inputs[13].default_value = random.uniform(0.0, 0.3)        # clearcoatGloss\n        mat_nodes[\"Anisotropic BSDF\"].inputs[1].default_value = random.uniform(0.11, 0.25)\n        mat_nodes[\"Anisotropic BSDF\"].inputs[2].default_value = random.uniform(0.4, 0.6)\n\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Anisotropic BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n                mat_nodes[\"Anisotropic BSDF\"].inputs[0].default_value = list(orign_base_color)\n\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            
mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n\n    ## porcelain\n    elif material_name == \"porcelain_0\":\n        if transfer_flag == True:\n            # if is_texture:\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n            # else:\n            #     mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9    \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = mix_prop  # 0.8    \n\n            mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"porcelain_1\":\n        if transfer_flag == True:\n            if is_texture:\n                
mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n            else:\n                mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Mix\"].inputs[1].default_value\n\n            # jitter each RGB channel; negative results are reset to 0.2, values above 1 are capped\n            new_bs_color = []\n            for channel in bs_color[:3]:\n                value = channel + random.uniform(-0.3, 0.3)\n                new_bs_color.append(0.2 if value < 0 else min(value, 1))\n            new_bs_color.append(1)\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(new_bs_color)\n    elif material_name == \"porcelain_2\":\n        if transfer_flag == True:\n            bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n            bsdf_new.name = 'Principled BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n                bsdf_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = tex_mix_prop  # 0.9  \n            else:\n                mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.8\n\n            
mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n            mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n            mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"porcelain_3\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix.001\"].inputs[1])\n            else:\n                mat_nodes[\"Mix.001\"].inputs[1].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Mix.001\"].inputs[1].default_value\n\n            # jitter each RGB channel; negative results are reset to 0.2, values above 1 are capped\n            new_bs_color = []\n            for channel in bs_color[:3]:\n                value = channel + random.uniform(-0.3, 0.3)\n                new_bs_color.append(0.2 if value < 0 else min(value, 1))\n            new_bs_color.append(1)\n            mat_nodes[\"Mix.001\"].inputs[1].default_value = list(new_bs_color)\n    elif material_name == \"porcelain_4\":\n        if transfer_flag == True:\n            # if is_texture:\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            #  
   mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n            # else:\n            #     mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n            #     mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Glossy BSDF\"].inputs[1].default_value = random.uniform(0.05, 0.15)\n\n            diff_new = mat_nodes.new(type='ShaderNodeBsdfDiffuse')\n            diff_new.name = 'Diffuse BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Diffuse BSDF\"].inputs):\n                diff_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF-new\"].inputs[0])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n            else:\n                mat_nodes[\"Diffuse BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n            mat_links.new(mat_nodes[\"Diffuse BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Diffuse BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n    elif material_name == \"porcelain_5\":\n        if transfer_flag == True:\n            # if is_texture:\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            #     mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n            # else:\n            #     mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n            #     mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(orign_base_color)\n    
        diff_new = mat_nodes.new(type='ShaderNodeBsdfDiffuse')\n            diff_new.name = 'Diffuse BSDF-new'\n            for key, input in enumerate(mat_nodes[\"Diffuse BSDF\"].inputs):\n                diff_new.inputs[key].default_value = input.default_value\n\n            mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n            mix_new.name = 'Mix Shader-new'\n\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF-new\"].inputs[0])\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n            else:\n                mat_nodes[\"Diffuse BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n            mat_links.new(mat_nodes[\"Diffuse BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n            mat_links.new(mat_nodes[\"Diffuse BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n            mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n    elif material_name == \"porcelain_6\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value\n\n            # jitter each RGB channel; negative results are reset to 0.2, values above 1 are capped\n            new_bs_color = []\n            for channel in bs_color[:3]:\n                value = channel + random.uniform(-0.3, 0.3)\n                new_bs_color.append(0.2 if value < 0 else min(value, 1))\n            new_bs_color.append(1)\n            mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(new_bs_color)\n\n    ## plastic\n    elif material_name == \"plastic_1\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_3\":\n        # \"值(明度)\" is the Chinese-localized name of the Value node saved in this .blend asset\n        mat_nodes[\"值(明度)\"].outputs[0].default_value = random.uniform(0.05, 0.25)\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Diffuse BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_5\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_6\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.012\"].inputs[0])\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.021\"].inputs[0])\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.022\"].inputs[0])\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.033\"].inputs[0])\n        else:\n            mat_nodes[\"RGB\"].outputs[0].default_value = 
list(orign_base_color)\n            mat_nodes[\"RGB.001\"].outputs[0].default_value = list(orign_base_color)\n            \"\"\"\n            mat_nodes[\"RGB.002\"].outputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"RGB.003\"].outputs[0].default_value = list(orign_base_color)\n            \"\"\"\n\n    ## rubber\n    elif material_name == \"rubber_0\":\n        diff_new = mat_nodes.new(type='ShaderNodeBsdfDiffuse')\n        diff_new.name = 'Diffuse BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Diffuse BSDF\"].inputs):\n            diff_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF-new\"].inputs[0])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Diffuse BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Diffuse BSDF\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Diffuse BSDF-new\"].outputs[0], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Mix Shader\"].inputs[1])\n    elif material_name == \"rubber_1\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix 
Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"RGB Curves\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"rubber_2\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"RGB Curves\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], 
mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"rubber_3\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"rubber_4\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Principled 
BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n\n    ## plastic_specular\n    elif material_name == \"plasticsp_0\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.001\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute\"].inputs[0])\n            else:\n                mat_nodes[\"RGB.001\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB.001\"].outputs[0].default_value\n\n            # jitter each RGB channel of the base color, clamped to [0, 1]\n            new_bs_color = [\n                min(max(bs_color[0] + r_rand, 0), 1),\n                min(max(bs_color[1] + g_rand, 0), 1),\n                min(max(bs_color[2] + b_rand, 0), 1),\n                1,\n            ]\n            mat_nodes[\"RGB.001\"].outputs[0].default_value = list(new_bs_color)\n    elif material_name == \"plasticsp_1\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n            else:\n                
mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            # jitter each RGB channel of the base color, clamped to [0, 1]\n            new_bs_color = [\n                min(max(bs_color[0] + r_rand, 0), 1),\n                min(max(bs_color[1] + g_rand, 0), 1),\n                min(max(bs_color[2] + b_rand, 0), 1),\n                1,\n            ]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n\n    ## paint_specular\n    elif material_name == \"paintsp_0\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Diffuse BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB\"].outputs[0].default_value\n\n            # jitter each RGB channel of the base color, clamped to [0, 1]\n            new_bs_color = [\n                min(max(bs_color[0] + r_rand, 0), 1),\n                min(max(bs_color[1] + g_rand, 0), 1),\n                min(max(bs_color[2] + b_rand, 0), 1),\n                1,\n            ]\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_1\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[2])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix.001\"].inputs[1])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Hue Saturation Value\"].inputs[4])\n            else:\n                mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB\"].outputs[0].default_value\n\n            # jitter each RGB channel of the base color, clamped to [0, 1]\n            new_bs_color = [\n                min(max(bs_color[0] + r_rand, 0), 1),\n                min(max(bs_color[1] + g_rand, 0), 1),\n                min(max(bs_color[2] + b_rand, 0), 1),\n                1,\n            ]\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_2\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Group\"].inputs[0])\n            else:\n                mat_nodes[\"Group\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Group\"].inputs[0].default_value\n\n            # jitter each RGB channel of the base color, clamped to [0, 1]\n            new_bs_color = [\n                min(max(bs_color[0] + r_rand, 0), 1),\n                min(max(bs_color[1] + g_rand, 0), 1),\n                min(max(bs_color[2] + b_rand, 0), 1),\n                1,\n            ]\n            mat_nodes[\"Group\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_3\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n            else:\n                mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n        else:\n            # bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n            new_bs_color = [random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1), 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_4\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF\"].inputs[0])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Glossy BSDF.001\"].inputs[0])\n            else:\n                mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(orign_base_color)\n                mat_nodes[\"Glossy BSDF.001\"].inputs[0].default_value = 
list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"Principled BSDF\"].inputs[0].default_value\n\n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n\n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(new_bs_color)\n            mat_nodes[\"Glossy BSDF\"].inputs[0].default_value = list(new_bs_color)\n            mat_nodes[\"Glossy BSDF.001\"].inputs[0].default_value = list(new_bs_color)\n    elif material_name == \"paintsp_5\":\n        if transfer_flag == True:\n            if is_texture:\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Invert\"].inputs[1])\n                mat_links.new(tex_node.outputs[0], mat_nodes[\"Reroute.002\"].inputs[0])\n            else:\n                mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n        else:\n            bs_color = mat_nodes[\"RGB\"].outputs[0].default_value\n\n            new_bs_color_r = bs_color[0] + r_rand\n            new_bs_color_g = bs_color[1] + g_rand\n            new_bs_color_b = bs_color[2] + b_rand\n            if new_bs_color_r < 0:\n                new_bs_color_r = 0\n            if new_bs_color_g < 0:\n                new_bs_color_g = 0\n            if new_bs_color_b < 0:\n                new_bs_color_b = 0\n\n            if new_bs_color_r > 1:\n                new_bs_color_r = 1\n            if new_bs_color_g > 
1:\n                new_bs_color_g = 1\n            if new_bs_color_b > 1:\n                new_bs_color_b = 1\n\n            new_bs_color = [new_bs_color_r, new_bs_color_g, new_bs_color_b, 1]\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(new_bs_color)\n\n    ## rubber\n    elif material_name == \"rubber_5\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 1.0\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Mix.005\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n\n    ## plastic\n    elif material_name == \"plastic_0\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF.001\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_4\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        
else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_7\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF.001\"].inputs[0])\n        else:\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_8\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Group\"].inputs[0])\n        else:\n            mat_nodes[\"Group\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_9\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_10\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_11\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_12\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[2])\n        else:\n            mat_nodes[\"RGB\"].outputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_13\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"plastic_14\":\n        
mat_nodes[\"Math.005\"].inputs[1].default_value = random.uniform(0.05, 0.3)\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n\n    ## paper\n    elif material_name == \"paper_0\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = 0.9\n\n        mat_links.new(mat_nodes[\"Bump\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"Mix.002\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"paper_1\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = 
mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.8, 0.95)\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.8, 0.9)\n\n        mat_links.new(mat_nodes[\"Normal Map\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"Image Texture.001\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n    elif material_name == \"paper_2\":\n        bsdf_new = mat_nodes.new(type='ShaderNodeBsdfPrincipled')\n        bsdf_new.name = 'Principled BSDF-new'\n        for key, input in enumerate(mat_nodes[\"Principled BSDF\"].inputs):\n            bsdf_new.inputs[key].default_value = input.default_value\n\n        mix_new = mat_nodes.new(type='ShaderNodeMixShader')\n        mix_new.name = 'Mix Shader-new'\n\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[\"Base Color\"])\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.9, 0.95)\n        else:\n            mat_nodes[\"Principled BSDF-new\"].inputs[0].default_value = list(orign_base_color)\n            mat_nodes[\"Mix Shader-new\"].inputs[0].default_value = random.uniform(0.9, 0.95)\n\n        mat_links.new(mat_nodes[\"Bump\"].outputs[0], 
mat_nodes[\"Principled BSDF-new\"].inputs[20])\n        mat_links.new(mat_nodes[\"Bright/Contrast\"].outputs[0], mat_nodes[\"Principled BSDF-new\"].inputs[7])\n\n        mat_links.new(mat_nodes[\"Principled BSDF\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[1])\n        mat_links.new(mat_nodes[\"Principled BSDF-new\"].outputs[\"BSDF\"], mat_nodes[\"Mix Shader-new\"].inputs[2])\n        mat_links.new(mat_nodes[\"Mix Shader-new\"].outputs[0], mat_nodes[\"Material Output\"].inputs[\"Surface\"])\n\n    ## leather\n    elif material_name == \"leather_0\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_1\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)\n    elif material_name == \"leather_3\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_4\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"leather_5\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"lether\"].inputs[0])\n        else:\n            
mat_nodes[\"lether\"].inputs[0].default_value = list(orign_base_color)\n\n    ## wood: left unchanged\n    elif material_name in [\"wood_0\", \"wood_1\", \"wood_2\", \"wood_3\", \"wood_4\", \"wood_5\", \"wood_6\", \"wood_7\", \"wood_8\", \"wood_9\"]:\n        pass\n\n    ## fabric\n    elif material_name in [\"fabric_0\", \"fabric_1\"]:\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name == \"fabric_2\":\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)\n\n    ## clay\n    elif material_name in [\"clay_0\", \"clay_1\", \"clay_2\", \"clay_4\"]:\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n        else:\n            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = list(orign_base_color)\n    elif material_name in [\"clay_3\", \"clay_5\"]:\n        if is_texture:\n            mat_links.new(tex_node.outputs[0], mat_nodes[\"Mix\"].inputs[1])\n        else:\n            mat_nodes[\"Mix\"].inputs[1].default_value = list(orign_base_color)\n\n    ## glass: left unchanged\n    elif material_name in [\"glass_0\", \"glass_4\", \"glass_5\", \"glass_14\"]:\n        pass\n    else:\n        print(material_name + \" no change\")\n\n\ndef set_modify_material(obj, material, obj_texture_img_list, mat_randomize_mode, is_transfer=True):\n    for mat_slot in obj.material_slots:\n        # print(\"-material_slots:\",mat_slot)\n        if mat_slot.material:\n            if mat_slot.material.node_tree:\n                # print(\"--material:\" + str(mat_slot.material.name))\n                srcmat = material\n                mat = srcmat.copy()\n                mat.name = mat_slot.material.name  # rename\n                mat_links = mat.node_tree.links\n                mat_nodes = mat.node_tree.nodes\n                bsdf_node = mat_slot.material.node_tree.nodes.get(\"Principled BSDF\", None)\n                if bsdf_node is not None:\n                    tex_node = 
mat_slot.material.node_tree.nodes.new(type='ShaderNodeTexImage')\n                    tex_node.name = 'objtexture_tex'\n                    tex_node.extension = 'EXTEND'\n                    # randomly pick an object texture image\n                    flag = random.randint(0, len(obj_texture_img_list) - 1)\n                    tex_node.image = obj_texture_img_list[flag]\n\n                    # check if texture node exists\n                    tex_node_orign = mat_slot.material.node_tree.nodes.get('objtexture_tex', None)\n                    if tex_node_orign is not None:\n                        # mat = mat_slot.material.copy()\n                        # Get the bl_idname to create a new node of the same type\n                        tex_node = mat_nodes.new(tex_node_orign.bl_idname)\n                        texture_img = bpy.data.images[tex_node_orign.image.name]\n                        # Assign the default values from the old node to the new node\n                        tex_node.image = texture_img\n                        tex_node.projection = 'SPHERE'\n                        # tex_node.location = Vector((-800, 0))\n                        ###\n                        mapping_node = mat_nodes.new(type='ShaderNodeMapping')\n                        mapping_node.name = 'objtexture_mapping'\n                        texcoord_node = mat_nodes.new(type='ShaderNodeTexCoord')\n                        texcoord_node.name = 'objtexture_texcoord'\n                        mat_links.new(mapping_node.outputs[0], tex_node.inputs[0])\n                        mat_links.new(texcoord_node.outputs[0], mapping_node.inputs[0])\n                        ###\n                        modify_material(mat_links, mat_nodes, srcmat.name, mat_randomize_mode, is_texture=True,\n                                        tex_node=tex_node, is_transfer=is_transfer)\n                    else:\n                        orign_base_color = mat_slot.material.node_tree.nodes[\"Principled 
BSDF\"].inputs[0].default_value\n                        if orign_base_color[0] == 0.0 and orign_base_color[1] == 0.0 and orign_base_color[2] == 0.0:\n                            orign_base_color = [0.05, 0.05, 0.05, 1]\n\n                        modify_material(mat_links, mat_nodes, srcmat.name, mat_randomize_mode, is_texture=False,\n                                        orign_base_color=orign_base_color, is_transfer=is_transfer)\n\n                # apply material\n                bpy.data.materials.remove(mat_slot.material)\n                mat_slot.material = mat\n\n\ndef set_modify_raw_material(obj):\n    for mat_slot in obj.material_slots:\n        if mat_slot.material:\n            if mat_slot.material.node_tree:\n                bsdf_node = mat_slot.material.node_tree.nodes.get(\"Principled BSDF\", None)\n                if bsdf_node is not None:\n                    tex_node_orign = mat_slot.material.node_tree.nodes.get(\"Image Texture\", None)\n                    if tex_node_orign is None:\n                        orign_base_color = mat_slot.material.node_tree.nodes[\"Principled BSDF\"].inputs[0].default_value\n                        if orign_base_color[0] == 0.0 and orign_base_color[1] == 0.0 and orign_base_color[2] == 0.0:\n                            mat = mat_slot.material.copy()\n                            mat.name = mat_slot.material.name  # rename\n                            mat_nodes = mat.node_tree.nodes\n                            mat_nodes[\"Principled BSDF\"].inputs[0].default_value = [0.05, 0.05, 0.05, 1]\n                            bpy.data.materials.remove(mat_slot.material)\n                            mat_slot.material = mat\n\n\ndef set_modify_table_material(obj, material, selected_realtable_img): ### realtable_img_list):\n    #print(\"--material:\" + str(mat_slot.material.name))\n    srcmat = material\n    #print(srcmat.name)\n    mat = srcmat.copy()\n    mat_links 
= mat.node_tree.links\n    mat_nodes = mat.node_tree.nodes\n\n    tex_node = mat_nodes.new(type='ShaderNodeTexImage')\n    tex_node.name = 'realtable_tex'\n    tex_node.extension = 'EXTEND'\n    ### flag = random.randint(0, len(realtable_img_list)-1)\n    tex_node.image = selected_realtable_img ### realtable_img_list[flag]\n    mapping_node = mat_nodes.new(type='ShaderNodeMapping')\n    mapping_node.name = 'realtable_mapping'\n    texcoord_node = mat_nodes.new(type='ShaderNodeTexCoord')\n    texcoord_node.name = 'realtable_texcoord'\n\n    mat_links.new(tex_node.outputs[0], mat_nodes[\"Principled BSDF\"].inputs[0])\n    mat_links.new(mapping_node.outputs[0], tex_node.inputs[0])\n    mat_links.new(texcoord_node.outputs[2], mapping_node.inputs[0])\n\n    obj.active_material = mat\n\n\ndef set_modify_floor_material(obj, material, selected_realfloor_img):### realfloor_img_list):\n    srcmat = material\n    mat = srcmat.copy()\n    mat_links = mat.node_tree.links\n    mat_nodes = mat.node_tree.nodes\n\n    bsdfnode_list = [n for n in mat_nodes if isinstance(n, bpy.types.ShaderNodeBsdfPrincipled)]\n    if bsdfnode_list == []:\n        obj.active_material = material\n    else:\n        for bsdfnode in bsdfnode_list:\n            tex_node = mat_nodes.new(type='ShaderNodeTexImage')\n            tex_node.name = 'realfloor_tex'\n            tex_node.extension = 'REPEAT'\n            ### flag = random.randint(0, len(realfloor_img_list)-1)\n            tex_node.image = selected_realfloor_img ### realfloor_img_list[flag]\n            mapping_node = mat_nodes.new(type='ShaderNodeMapping')\n            mapping_node.name = 'realfloor_mapping'\n            texcoord_node = mat_nodes.new(type='ShaderNodeTexCoord')\n            texcoord_node.name = 'realfloor_texcoord'\n\n            sc = random.uniform(1.00, 4.00)\n \n            mat_links.new(tex_node.outputs[0], bsdfnode.inputs[0]) #mat_nodes[bsdf_node_name].inputs[0])\n            mat_links.new(mapping_node.outputs[0], 
tex_node.inputs[0])\n            mat_links.new(texcoord_node.outputs[2], mapping_node.inputs[0])\n\n            obj.active_material = mat"
  },
  {
    "path": "src/rd/render.py",
    "content": "import os\nimport random\nimport bpy\nimport math\nimport numpy as np\nfrom rd.modify_material import  set_modify_table_material, set_modify_floor_material\nfrom rd.render_utils import *\n\ndef blender_init_scene(code_root, log_root_dir, obj_texture_image_root_path, scene_type, urdfs_and_poses_dict, round_idx, logdir, check_seen_scene, material_type, gpuid, output_modality_dict):\n    if scene_type == \"pile\":\n        seed = 1143+830+round_idx\n    elif scene_type == \"packed\":\n        seed = 111143+170+round_idx\n    else:\n        seed = 43+round_idx\n    np.random.seed(seed)\n    random.seed(seed)\n    os.environ['PYTHONHASHSEED'] = str(seed)\n\n    DEVICE_LIST = [int(gpuid)]\n\n    obj_texture_image_idxfile = \"test_paths.txt\"\n    asset_root = code_root\n    env_map_path = os.path.join(asset_root, \"envmap_lib_test\")\n    real_table_image_root_path = os.path.join(asset_root, \"realtable_test\")\n    real_floor_image_root_path = os.path.join(asset_root, \"realfloor_test\")\n\n    emitter_pattern_path = os.path.join(asset_root, \"pattern\", \"test_pattern.png\")\n    default_background_texture_path = os.path.join(asset_root, \"texture\", \"texture_0.jpg\")\n    table_CAD_model_path = os.path.join(asset_root, \"table_obj\", \"table.obj\")\n\n    output_root_path = os.path.join(log_root_dir, \"rendered_results/\" + str(logdir).split(\"/\")[-1])\n    \n    obj_uid_list = [str(uid) for uid in urdfs_and_poses_dict]\n    obj_scale_list = [value[0] for value in urdfs_and_poses_dict.values()]\n    obj_quat_list = [value[1][[3, 0, 1, 2]] for value in urdfs_and_poses_dict.values()]\n  \n    obj_trans_list = []\n    for value in urdfs_and_poses_dict.values():\n        T = value[2]\n        T = T + tsdf2blender_coord_T_shift\n        obj_trans_list.append(T)\n    \n    urdf_path_list = [os.path.join(value[3]) for value in urdfs_and_poses_dict.values()] #\"/\".join(code_root.split(\"/\")[:-2]), \n\n    max_instance_num = 20\n\n    if not 
os.path.exists(output_root_path):\n        os.makedirs(output_root_path)\n\n    # generate CAD model list\n    CAD_model_list = generate_CAD_model_list(scene_type, urdf_path_list, obj_uid_list)\n\n    renderer = BlenderRenderer(viewport_size_x=camera_width, viewport_size_y=camera_height, DEVICE_LIST=DEVICE_LIST)\n    renderer.loadImages(emitter_pattern_path, env_map_path, real_table_image_root_path, real_floor_image_root_path, obj_texture_image_root_path, obj_texture_image_idxfile, check_seen_scene)\n    renderer.addEnvMap()\n    renderer.addBackground(background_size, background_position, background_scale, default_background_texture_path)\n    renderer.addMaterialLib(material_class_instance_pairs)  ###\n    renderer.addMaskMaterial(max_instance_num)\n    renderer.addNOCSMaterial()\n    renderer.addNormalMaterial()\n\n    renderer.clearModel()\n    # set scene output path\n    path_scene = output_root_path #os.path.join(output_root_path, uuid.uuid4().hex, \"init\") ### \"scene_\"+str(SCENE_NUM).zfill(4))\n    if not os.path.exists(path_scene):\n        os.makedirs(path_scene)\n\n    # camera pose list, environment light list and background material list\n    quaternion_list = []\n    translation_list = []\n\n    # environment map list\n    env_map_id_list = []\n    rotation_elur_z_list = []\n\n    # background material list\n    background_material_list = []\n\n    # table material list\n    table_material_list = []\n\n    look_at = look_at_shift\n    quat_list, trans_list, rot_list = genCameraPosition(look_at)\n\n    rot_array = np.array(rot_list)  # (256, 3, 3)\n    trans_array = np.array(trans_list)  #  (256, 3, 1)\n    cam_RT = np.concatenate([rot_array, trans_array], 2)\n    zero_one = np.expand_dims([[0, 0, 0, 1]],0).repeat(rot_array.shape[0],axis=0)\n    cam_RT = np.concatenate([cam_RT, zero_one], 1)  # (256, 4, 4)\n\n    # generate camera pose list\n    for i in range(NUM_FRAME_PER_SCENE):\n        quaternion = quat_list[i] #### cam_pose_list[i][0]\n    
    translation = trans_list[i] #### cam_pose_list[i][1]\n        quaternion_list.append(quaternion)\n        translation_list.append(translation)\n\n    flag_env_map = random.randint(0, len(renderer.env_map) - 1)\n    flag_env_map_rot = random.uniform(-math.pi, math.pi)\n    flag_realfloor = random.randint(0, len(renderer.realfloor_img_list) - 1)\n    flag_realtable = random.randint(0, len(renderer.realtable_img_list) - 1)\n\n    # generate environment map list\n    env_map_id_list.append(flag_env_map)\n    rotation_elur_z_list.append(flag_env_map_rot)\n\n    \n    # generate background material list \n    if my_material_randomize_mode == 'raw':\n        background_material_list.append(renderer.my_material['default_background'])\n    else:\n        material_selected = random.sample(renderer.my_material['background'], 1)[0] ### renderer.my_material['background'][1] \n        background_material_list.append(material_selected)\n        material_selected = random.sample(renderer.my_material['table'], 1)[0] ### renderer.my_material['table'][0]  \n        table_material_list.append(material_selected)\n\n    # read objects from folder\n    meta_output = {}\n    select_model_list = []\n    select_model_list_other = []\n    select_model_list_transparent = []\n    select_model_list_dis = []\n    select_number = 1\n\n    for item in CAD_model_list:\n        if item in ['other']:\n            test = CAD_model_list[item]\n            for model in test:\n                select_model_list.append(model)\n        else:\n            raise ValueError(\"No such category!\")\n    \n    # table model\n    renderer.loadModel(table_CAD_model_path)\n    obj = bpy.data.objects['table']\n    # resize table, unit: m\n    class_scale = 0.001\n    obj.scale = (class_scale, class_scale, class_scale)\n    y_transform = np.array([[0,0,-1],[0,1,0],[1,0,0]])\n    transform = y_transform # z_transform @ y_transform\n    obj_world_pose_quat = quaternionFromRotMat(transform)\n    
obj_world_pose_T_shift = np.array([0,0,-0.0751])\n    obj_world_pose_T = obj_world_pose_T_shift ### np.array([cam_pose_T[0],cam_pose_T[1],0]) + obj_world_pose_T_shift\n    setModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n    obj_world_pose_T_shift = np.array([0,0,0])\n    obj_world_pose_T = obj_world_pose_T_shift ### np.array([cam_pose_T[0],cam_pose_T[1],0]) + obj_world_pose_T_shift\n    # setModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n    bpy.ops.mesh.primitive_plane_add(size=1., enter_editmode=False, align='WORLD', location=obj_world_pose_T)\n    bpy.ops.rigidbody.object_add()\n    bpy.context.object.rigid_body.type = 'PASSIVE'\n    bpy.context.object.rigid_body.collision_shape = 'BOX'\n    obj = bpy.data.objects['Plane']\n    obj.name = 'tableplane'\n    obj.data.name = 'tableplane'\n    obj.scale = (0.898, 1.3, 1.)\n    ###\n\n    instance_id = 1\n    # set object parameters\n    imported_obj_name_list = []\n    for model in select_model_list:\n        instance_path = model[0]\n        class_name = model[1]\n        instance_uid = model[2]\n        instance_folder = model[0].split('/')[-1][:-4] \n        instance_name = str(instance_id) + \"_\" + class_name + \"_\" + instance_folder + \"_\" + instance_uid ### class_folder + \"_\" + instance_folder\n\n        material_type_in_mixed_mode = generate_material_type(instance_name, class_material_pairs, instance_material_except_pairs, instance_material_include_pairs, material_class_instance_pairs, material_type)\n\n        # load CAD model and rename\n        renderer.loadModel(instance_path)\n        import_obj_name = instance_folder #bpy.data.objects.keys()[\"instance_folder\"] \n\n        obj = bpy.data.objects[import_obj_name]\n        obj.name = instance_name\n        obj.data.name = instance_name\n\n        obj_world_pose_T = obj_trans_list[instance_id-1] ### obj_pose_list[instance_id-1][:3,3]\n        obj_world_pose_quat = obj_quat_list[instance_id-1] ### 
quaternionFromRotMat(obj_world_pose_R)\n        setModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n\n        # set object as rigid body\n        setRigidBody(obj)\n\n        # set material\n        renderer.set_material_randomize_mode(class_material_pairs, my_material_randomize_mode, obj, material_type_in_mixed_mode)\n        \n        # generate meta file\n        class_scale = obj_scale_list[instance_id-1] ### random.uniform(g_synset_name_scale_pairs[class_name][0], g_synset_name_scale_pairs[class_name][1])\n        obj.scale = (class_scale, class_scale, class_scale)\n\n        # query material type\n        material_class_id = None\n        for key in material_class_instance_pairs:\n            if material_type_in_mixed_mode == 'raw':\n                material_class_id = material_class_id_dict[material_type_in_mixed_mode]\n                break\n            elif material_type_in_mixed_mode in material_class_instance_pairs[key]:\n                material_class_id = material_class_id_dict[key]\n                break\n        if material_class_id is None:\n            raise ValueError(\"material_class_id error!\")\n\n        meta_output[str(instance_id)] = [#str(g_synset_name_label_pairs[class_name]),\n                                         ### class_folder, \n                                         str(instance_folder), \n                                         ### str(class_scale),\n                                         str(material_class_id), ###str(material_name_label_pairs[material_type_in_mixed_mode])]\n                                         str(material_type_id_dict[material_type_in_mixed_mode])\n                                         ]\n\n        instance_id += 1\n\n    if output_modality_dict['IR'] or output_modality_dict['RGB']:\n        renderer.setEnvMap(env_map_id_list[0], rotation_elur_z_list[0])\n        # pick real floor image\n        selected_realfloor_img = renderer.realfloor_img_list[flag_realfloor]\n\n        # pick real 
table image\n        selected_realtable_img = renderer.realtable_img_list[flag_realtable]\n        for obj in bpy.data.objects:\n            if obj.type == \"MESH\" and obj.name.split('_')[0] == 'background':\n                if obj.name == 'background_0':\n                    set_modify_floor_material(obj, background_material_list[0], selected_realfloor_img) ### renderer.realfloor_img_list)\n                else:\n                    background_0_obj = bpy.data.objects['background_0']\n                    obj.active_material = background_0_obj.material_slots[0].material\n            elif obj.type == \"MESH\" and obj.name == 'table':\n                set_modify_table_material(obj, table_material_list[0], selected_realtable_img)### renderer.realtable_img_list)\n            elif obj.type == \"MESH\" and obj.name == 'tableplane':\n                    table_obj = bpy.data.objects['table']\n                    obj.active_material = table_obj.material_slots[0].material\n\n    return renderer, quaternion_list, translation_list, path_scene#, obj_trans_list, obj_quat_list\n\n\ndef blender_update_sceneobj(obj_name_list, obj_trans_list, obj_quat_list, obj_uid_list):\n    for obj_name in bpy.data.objects.keys():\n        obj = bpy.data.objects[obj_name]\n        if obj.type == 'MESH' and obj_name[0:10] != \"background\" and obj_name not in ['camera_l', 'camera_r', 'light_emitter', 'table', 'tableplane']:\n            obj_uid = obj_name.split(\"_\")[-1]\n            if obj_uid not in obj_uid_list:\n                print(\"[V]blender_update_sceneobj: obj_uid [not in] obj_uid_list: \", obj_name, obj_name_list, obj_uid, obj_uid_list)\n                obj.hide_render = True\n            else:\n                obj.hide_render = False\n                obj_world_pose_T = obj_trans_list[obj_uid_list.index(obj_uid)] + tsdf2blender_coord_T_shift\n                obj_world_pose_quat = obj_quat_list[obj_uid_list.index(obj_uid)]\n                print(\"[V]blender_update_sceneobj: obj_uid 
[in] obj_uid_list: \", obj_name, obj_name_list, obj_uid, obj_uid_list, obj_world_pose_T, obj_world_pose_quat)\n                setModelPosition(obj, obj_world_pose_T, obj_world_pose_quat)\n\n\ndef blender_render(renderer, quaternion_list, translation_list, path_scene, render_frame_list, output_modality_dict, camera_focal, is_init=False):\n    # set the key frame\n    scene = bpy.data.scenes['Scene']\n\n    camera_fov = 2 * math.atan(camera_width / (2 * camera_focal))\n\n    # render IR image and RGB image\n    if output_modality_dict['IR'] or output_modality_dict['RGB']:\n        if is_init:\n            renderer.src_energy_for_rgb_render = bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value\n\n        for i in render_frame_list:  \n            renderer.setCamera(quaternion_list[i], translation_list[i], camera_fov, baseline_distance)\n            renderer.setLighting()\n\n            # render RGB image\n            if output_modality_dict['RGB']:\n                rgb_dir_path = os.path.join(path_scene, 'rgb')\n                if os.path.exists(rgb_dir_path) == False:\n                    os.makedirs(rgb_dir_path)\n\n                renderer.render_mode = \"RGB\"\n                camera = bpy.data.objects['camera_l']\n                scene.camera = camera\n                save_path = rgb_dir_path\n                save_name = str(i).zfill(4)\n                renderer.render(save_name, save_path)\n\n            # render IR image\n            if output_modality_dict['IR']:\n                ir_l_dir_path = os.path.join(path_scene, 'ir_l')\n                if os.path.exists(ir_l_dir_path)==False:\n                    os.makedirs(ir_l_dir_path)\n                ir_r_dir_path = os.path.join(path_scene, 'ir_r')\n                if os.path.exists(ir_r_dir_path)==False:\n                    os.makedirs(ir_r_dir_path)\n\n                renderer.render_mode = \"IR\"\n                camera = bpy.data.objects['camera_l']\n                
scene.camera = camera\n                save_path = ir_l_dir_path\n                save_name = str(i).zfill(4)\n                renderer.render(save_name, save_path)\n\n                camera = bpy.data.objects['camera_r']\n                scene.camera = camera\n                save_path = ir_r_dir_path\n                save_name = str(i).zfill(4)\n                renderer.render(save_name, save_path)\n        \n    # render normal map and depth map\n    if output_modality_dict['Normal']:\n        # set normal as material\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH':\n                obj.data.materials.clear()\n                obj.active_material = renderer.my_material[\"normal\"]\n\n        # render normal map\n        for i in render_frame_list:\n            renderer.setCamera(quaternion_list[i], translation_list[i], camera_fov, baseline_distance)\n\n            normal_dir_path = os.path.join(path_scene, 'normal')\n            if os.path.exists(normal_dir_path)==False:\n                os.makedirs(normal_dir_path)\n            depth_dir_path = os.path.join(path_scene, 'depth')\n            if os.path.exists(depth_dir_path)==False:\n                os.makedirs(depth_dir_path)\n\n            renderer.render_mode = \"Normal\"\n            camera = bpy.data.objects['camera_l']\n            scene.camera = camera\n            save_path = normal_dir_path\n            save_name = str(i).zfill(4)\n            renderer.render(save_name, save_path)\n\n    context = bpy.context\n    for ob in context.selected_objects:\n        ob.animation_data_clear()\n"
  },
  {
    "path": "src/rd/render_utils.py",
    "content": "import os\nimport random\nimport bpy\nimport math\nimport numpy as np\nfrom mathutils import Vector, Matrix\nfrom bpy_extras.object_utils import world_to_camera_view\nfrom rd.modify_material import set_modify_material, set_modify_raw_material, set_modify_table_material, set_modify_floor_material\n\n# render parameter \nRENDERING_PATH = os.getcwd()\nLIGHT_EMITTER_ENERGY = 5\nLIGHT_ENV_MAP_ENERGY_IR = 0.035 \nLIGHT_ENV_MAP_ENERGY_RGB = 1.0 \nCYCLES_SAMPLE = 32\nCAMERA_TYPE = \"realsense\"\nNUM_FRAME_PER_SCENE = 24\n\n# view point parameter\nlook_at_shift = np.array([0,0,0])\nnum_point_ver = 6\nnum_point_hor = 4\nbeta_range = (15*math.pi/180, 45*math.pi/180)\nr = 0.5\ntsdf2blender_coord_T_shift = np.array([-0.15, -0.15, -0.0503])\nTABLE_CAD_MODEL_HEIGHT = 0.75\n\n# material randomization mode (transparent, specular, mixed, raw)\nmy_material_randomize_mode = 'mixed'\n\n# set depth sensor parameter\ncamera_width = 640\ncamera_height = 360\nbaseline_distance = 0.055\n\n# set background parameter\nbackground_size = 3.\nbackground_position = (0., 0., 0.)\nbackground_scale = (1., 1., 1.)\n\n\n# set camera randomize paramater\nstart_point_range = ((0.5, 0.95), (-0.6, 0.6, -0.6, 0.6))\nup_range = (-0.18, -0.18, -0.18, 0.18)\nlook_at_range = (background_position[0] - 0.05, background_position[0] + 0.05, \n                 background_position[1] - 0.05, background_position[1] + 0.05,\n                 background_position[2] - 0.05, background_position[2] + 0.05)\n\n\ng_syn_light_num_lowbound = 4\ng_syn_light_num_highbound = 6\ng_syn_light_dist_lowbound = 8\ng_syn_light_dist_highbound = 12\ng_syn_light_azimuth_degree_lowbound = 0\ng_syn_light_azimuth_degree_highbound = 360\ng_syn_light_elevation_degree_lowbound = 0\ng_syn_light_elevation_degree_highbound = 90\ng_syn_light_energy_mean = 3\ng_syn_light_energy_std = 0.5\ng_syn_light_environment_energy_lowbound = 0\ng_syn_light_environment_energy_highbound = 1\n\n\ng_shape_synset_name_pairs_all = {'02691156': 
'aeroplane',\n                                '02747177': 'ashtray',\n                                '02773838': 'backpack',\n                                '02801938': 'basket',\n                                '02808440': 'tub',  # bathtub\n                                '02818832': 'bed',\n                                '02828884': 'bench',\n                                '02834778': 'bicycle',\n                                '02843684': 'mailbox', # missing in objectnet3d, birdhouse, use view distribution of mailbox\n                                '02858304': 'boat',\n                                '02871439': 'bookshelf',\n                                '02876657': 'bottle',\n                                '02880940': 'bowl', # missing in objectnet3d, bowl, use view distribution of plate\n                                '02924116': 'bus',\n                                '02933112': 'cabinet',\n                                '02942699': 'camera',\n                                '02946921': 'can',\n                                '02954340': 'cap',\n                                '02958343': 'car',\n                                '02992529': 'cellphone',\n                                '03001627': 'chair',\n                                '03046257': 'clock',\n                                '03085013': 'keyboard',\n                                '03207941': 'dishwasher',\n                                '03211117': 'tvmonitor',\n                                '03261776': 'headphone',\n                                '03325088': 'faucet',\n                                '03337140': 'filing_cabinet',\n                                '03467517': 'guitar',\n                                '03513137': 'helmet',\n                                '03593526': 'jar',\n                                '03624134': 'knife',\n                                '03636649': 'lamp',\n                                '03642806': 'laptop',\n                          
      '03691459': 'speaker',\n                                '03710193': 'mailbox',\n                                '03759954': 'microphone',\n                                '03761084': 'microwave',\n                                '03790512': 'motorbike',\n                                '03797390': 'mug',  # missing in objectnet3d, mug, use view distribution of cup\n                                '03928116': 'piano',\n                                '03938244': 'pillow',\n                                '03948459': 'rifle',  # missing in objectnet3d, pistol, use view distribution of rifle\n                                '03991062': 'pot',\n                                '04004475': 'printer',\n                                '04074963': 'remote_control',\n                                '04090263': 'rifle',\n                                '04099429': 'road_pole',  # missing in objectnet3d, rocket, use view distribution of road_pole\n                                '04225987': 'skateboard',\n                                '04256520': 'sofa',\n                                '04330267': 'stove',\n                                '04379243': 'diningtable',  # use view distribution of dining_table\n                                '04401088': 'telephone',\n                                '04460130': 'road_pole',  # missing in objectnet3d, tower, use view distribution of road_pole\n                                '04468005': 'train',\n                                '04530566': 'washing_machine',\n                                '04554684': 'dishwasher'}  # washer, use view distribution of dishwasher\n\ng_synset_name_label_pairs = {#'aeroplane': 7,\n                                #'bottle': 1,\n                                #'bowl': 2,   \n                                #'camera': 3,\n                                #'can': 4,\n                                #'car': 5,\n                                #'mug': 6,    \n                                'other': 
0}   \n\nmaterial_class_instance_pairs = {'specular': ['metal', 'paintsp'],  # 'porcelain','plasticsp',\n                                    'transparent': ['glass'],\n                                    'diffuse': ['plastic','rubber','paper','leather','wood','clay','fabric'],\n                                    'background': ['background']}\n\n\n# material list\nclass_material_pairs = {'specular': ['other'],\n                        'transparent': ['other'],\n                        'diffuse': ['other']}\n\ninstance_material_except_pairs = {'metal': [],\n                                    'porcelain': [],\n                                    'plasticsp': [],\n                                    'paintsp':[],\n\n                                    'glass': [],#[8,9,18,19,20,24,25,26,27,28,29,30,31,32,34,43,59,72],\n                                    \n                                    'plastic': [],\n                                    'rubber': [],     \n                                    'leather': [],\n                                    'wood':[],\n                                    'paper':[],\n                                    'fabric':[],\n                                    'clay':[],   \n                                    }\ninstance_material_include_pairs = {\n                                    }\n\nmaterial_class_id_dict = {'raw': 0,\n                        'diffuse': 1,\n                        'transparent': 2,\n                        'specular': 3}\n\nmaterial_type_id_dict = {'raw': 0,\n                        'metal': 1,\n                        'porcelain': 2,\n                        'plasticsp': 3,\n                        'paintsp':4,\n                        'glass': 5, \n                        'plastic': 6,\n                        'rubber': 7,     \n                        'leather': 8,\n                        'wood':9,\n                        'paper':10,\n                        'fabric':11,\n                        'clay':12, 
              \n                        }\n\n\ndef obj_centered_camera_pos(dist, azimuth_deg, elevation_deg):\n    phi = float(elevation_deg) / 180 * math.pi\n    theta = float(azimuth_deg) / 180 * math.pi\n    x = (dist * math.cos(theta) * math.cos(phi))\n    y = (dist * math.sin(theta) * math.cos(phi))\n    z = (dist * math.sin(phi))\n    return (x, y, z)\n\ndef quaternionFromYawPitchRoll(yaw, pitch, roll):\n    c1 = math.cos(yaw / 2.0)\n    c2 = math.cos(pitch / 2.0)\n    c3 = math.cos(roll / 2.0)    \n    s1 = math.sin(yaw / 2.0)\n    s2 = math.sin(pitch / 2.0)\n    s3 = math.sin(roll / 2.0)    \n    q1 = c1 * c2 * c3 + s1 * s2 * s3\n    q2 = c1 * c2 * s3 - s1 * s2 * c3\n    q3 = c1 * s2 * c3 + s1 * c2 * s3\n    q4 = s1 * c2 * c3 - c1 * s2 * s3\n    return (q1, q2, q3, q4)\n\ndef camPosToQuaternion(cx, cy, cz):\n    q1a = 0\n    q1b = 0\n    q1c = math.sqrt(2) / 2\n    q1d = math.sqrt(2) / 2\n    camDist = math.sqrt(cx * cx + cy * cy + cz * cz)\n    cx = cx / camDist\n    cy = cy / camDist\n    cz = cz / camDist    \n    t = math.sqrt(cx * cx + cy * cy) \n    tx = cx / t\n    ty = cy / t\n    yaw = math.acos(ty)\n    if tx > 0:\n        yaw = 2 * math.pi - yaw\n    pitch = 0\n    tmp = min(max(tx*cx + ty*cy, -1),1)\n    #roll = math.acos(tx * cx + ty * cy)\n    roll = math.acos(tmp)\n    if cz < 0:\n        roll = -roll    \n    print(\"%f %f %f\" % (yaw, pitch, roll))\n    q2a, q2b, q2c, q2d = quaternionFromYawPitchRoll(yaw, pitch, roll)    \n    q1 = q1a * q2a - q1b * q2b - q1c * q2c - q1d * q2d\n    q2 = q1b * q2a + q1a * q2b + q1d * q2c - q1c * q2d\n    q3 = q1c * q2a - q1d * q2b + q1a * q2c + q1b * q2d\n    q4 = q1d * q2a + q1c * q2b - q1b * q2c + q1a * q2d\n    return (q1, q2, q3, q4)\n\ndef camRotQuaternion(cx, cy, cz, theta): \n    theta = theta / 180.0 * math.pi\n    camDist = math.sqrt(cx * cx + cy * cy + cz * cz)\n    cx = -cx / camDist\n    cy = -cy / camDist\n    cz = -cz / camDist\n    q1 = math.cos(theta * 0.5)\n    q2 = -cx * math.sin(theta * 
0.5)\n    q3 = -cy * math.sin(theta * 0.5)\n    q4 = -cz * math.sin(theta * 0.5)\n    return (q1, q2, q3, q4)\n\ndef quaternionProduct(qx, qy): \n    a = qx[0]\n    b = qx[1]\n    c = qx[2]\n    d = qx[3]\n    e = qy[0]\n    f = qy[1]\n    g = qy[2]\n    h = qy[3]\n    q1 = a * e - b * f - c * g - d * h\n    q2 = a * f + b * e + c * h - d * g\n    q3 = a * g - b * h + c * e + d * f\n    q4 = a * h + b * g - c * f + d * e    \n    return (q1, q2, q3, q4)\n\ndef quaternionToRotation(q):\n    w, x, y, z = q\n    r00 = 1 - 2 * y ** 2 - 2 * z ** 2\n    r01 = 2 * x * y + 2 * w * z\n    r02 = 2 * x * z - 2 * w * y\n\n    r10 = 2 * x * y - 2 * w * z\n    r11 = 1 - 2 * x ** 2 - 2 * z ** 2\n    r12 = 2 * y * z + 2 * w * x\n\n    r20 = 2 * x * z + 2 * w * y\n    r21 = 2 * y * z - 2 * w * x\n    r22 = 1 - 2 * x ** 2 - 2 * y ** 2\n    r = [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]]\n    return r\n\ndef quaternionToRotation_xyzw(q):\n    x, y, z, w = q\n    r00 = 1 - 2 * y ** 2 - 2 * z ** 2\n    r01 = 2 * x * y + 2 * w * z\n    r02 = 2 * x * z - 2 * w * y\n\n    r10 = 2 * x * y - 2 * w * z\n    r11 = 1 - 2 * x ** 2 - 2 * z ** 2\n    r12 = 2 * y * z + 2 * w * x\n\n    r20 = 2 * x * z + 2 * w * y\n    r21 = 2 * y * z - 2 * w * x\n    r22 = 1 - 2 * x ** 2 - 2 * y ** 2\n    r = [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]]\n    return r\n\ndef quaternionFromRotMat(rotation_matrix):\n    rotation_matrix = np.reshape(rotation_matrix, (1, 9))[0]\n    w = math.sqrt(rotation_matrix[0]+rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    x = math.sqrt(rotation_matrix[0]-rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    y = math.sqrt(-rotation_matrix[0]+rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    z = math.sqrt(-rotation_matrix[0]-rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    a = [w,x,y,z]\n    m = a.index(max(a))\n    if m == 0:\n        x = (rotation_matrix[7]-rotation_matrix[5])/(4*w)\n        y = (rotation_matrix[2]-rotation_matrix[6])/(4*w)\n    
    z = (rotation_matrix[3]-rotation_matrix[1])/(4*w)\n    if m == 1:\n        w = (rotation_matrix[7]-rotation_matrix[5])/(4*x)\n        y = (rotation_matrix[1]+rotation_matrix[3])/(4*x)\n        z = (rotation_matrix[6]+rotation_matrix[2])/(4*x)\n    if m == 2:\n        w = (rotation_matrix[2]-rotation_matrix[6])/(4*y)\n        x = (rotation_matrix[1]+rotation_matrix[3])/(4*y)\n        z = (rotation_matrix[5]+rotation_matrix[7])/(4*y)\n    if m == 3:\n        w = (rotation_matrix[3]-rotation_matrix[1])/(4*z)\n        x = (rotation_matrix[6]+rotation_matrix[2])/(4*z)\n        y = (rotation_matrix[5]+rotation_matrix[7])/(4*z)\n    quaternion = (w,x,y,z)\n    return quaternion\n\ndef quaternionFromRotMat_xyzw(rotation_matrix):\n    rotation_matrix = np.reshape(rotation_matrix, (1, 9))[0]\n    w = math.sqrt(rotation_matrix[0]+rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    x = math.sqrt(rotation_matrix[0]-rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    y = math.sqrt(-rotation_matrix[0]+rotation_matrix[4]-rotation_matrix[8]+1 + 1e-6)/2\n    z = math.sqrt(-rotation_matrix[0]-rotation_matrix[4]+rotation_matrix[8]+1 + 1e-6)/2\n    a = [x,y,z,w]\n    m = a.index(max(a))  # index of the largest component; divide by it for numerical stability\n    if m == 3:\n        x = (rotation_matrix[7]-rotation_matrix[5])/(4*w)\n        y = (rotation_matrix[2]-rotation_matrix[6])/(4*w)\n        z = (rotation_matrix[3]-rotation_matrix[1])/(4*w)\n    if m == 0:\n        w = (rotation_matrix[7]-rotation_matrix[5])/(4*x)\n        y = (rotation_matrix[1]+rotation_matrix[3])/(4*x)\n        z = (rotation_matrix[6]+rotation_matrix[2])/(4*x)\n    if m == 1:\n        w = (rotation_matrix[2]-rotation_matrix[6])/(4*y)\n        x = (rotation_matrix[1]+rotation_matrix[3])/(4*y)\n        z = (rotation_matrix[5]+rotation_matrix[7])/(4*y)\n    if m == 2:\n        w = (rotation_matrix[3]-rotation_matrix[1])/(4*z)\n        x = (rotation_matrix[6]+rotation_matrix[2])/(4*z)\n        y = (rotation_matrix[5]+rotation_matrix[7])/(4*z)\n    quaternion = (x,y,z,w)\n    
return quaternion\n\ndef rotVector(q, vector_ori):\n    r = quaternionToRotation(q)\n    x_ori = vector_ori[0]\n    y_ori = vector_ori[1]\n    z_ori = vector_ori[2]\n    x_rot = r[0][0] * x_ori + r[1][0] * y_ori + r[2][0] * z_ori\n    y_rot = r[0][1] * x_ori + r[1][1] * y_ori + r[2][1] * z_ori\n    z_rot = r[0][2] * x_ori + r[1][2] * y_ori + r[2][2] * z_ori\n    return (x_rot, y_rot, z_rot)\n\ndef cameraLPosToCameraRPos(q_l, pos_l, baseline_dis):\n    vector_camera_l_y = (1, 0, 0)\n    vector_rot = rotVector(q_l, vector_camera_l_y)\n    pos_r = (pos_l[0] + vector_rot[0] * baseline_dis,\n             pos_l[1] + vector_rot[1] * baseline_dis,\n             pos_l[2] + vector_rot[2] * baseline_dis)\n    return pos_r\n\ndef getRTFromAToB(pointCloudA, pointCloudB):\n    muA = np.mean(pointCloudA, axis=0)\n    muB = np.mean(pointCloudB, axis=0)\n\n    zeroMeanA = pointCloudA - muA\n    zeroMeanB = pointCloudB - muB\n\n    covMat = np.matmul(np.transpose(zeroMeanA), zeroMeanB)\n    U, S, Vt = np.linalg.svd(covMat)\n    R = np.matmul(Vt.T, U.T)\n\n    if np.linalg.det(R) < 0:\n        print(\"[V]getRTFromAToB: Reflection detected\")\n        Vt[2, :] *= -1\n        R = np.matmul(Vt.T, U.T)  # matrix product, not elementwise '*'\n    T = (-np.matmul(R, muA.T) + muB.T).reshape(3, 1)\n    return R, T\n\ndef cameraPositionRandomize(start_point_range, look_at_range, up_range):\n    r_range, vector_range = start_point_range\n    r_min, r_max = r_range\n    x_min, x_max, y_min, y_max = vector_range\n    r = random.uniform(r_min, r_max)\n    x = random.uniform(x_min, x_max)\n    y = random.uniform(y_min, y_max)\n    z = math.sqrt(1 - x**2 - y**2)\n    vector_camera_axis = np.array([x, y, z])\n\n    x_min, x_max, y_min, y_max = up_range\n    x = random.uniform(x_min, x_max)\n    y = random.uniform(y_min, y_max)    \n    z = math.sqrt(1 - x**2 - y**2)\n    up = np.array([x, y, z])\n\n    x_min, x_max, y_min, y_max, z_min, z_max = look_at_range\n    look_at = np.array([random.uniform(x_min, x_max),\n                        
random.uniform(y_min, y_max),\n                        random.uniform(z_min, z_max)])\n    position = look_at + r * vector_camera_axis\n\n    vectorZ = - (look_at - position)/np.linalg.norm(look_at - position)\n    vectorX = np.cross(up, vectorZ)/np.linalg.norm(np.cross(up, vectorZ))\n    vectorY = np.cross(vectorZ, vectorX)/np.linalg.norm(np.cross(vectorX, vectorZ))\n\n    # points in camera coordinates\n    pointSensor= np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])\n\n    # points in world coordinates \n    pointWorld = np.array([position,\n                            position + vectorX,\n                            position + vectorY * 2,\n                            position + vectorZ * 3])\n\n    resR, resT = getRTFromAToB(pointSensor, pointWorld)\n    resQ = quaternionFromRotMat(resR)\n    return resQ, resT    \n\n\ndef genCameraPosition(look_at):\n    quat_list = []\n    rot_list = []\n    trans_list = []\n    position_list = []\n    \n    # alpha: \n    alpha = 0\n    alpha_delta = (2 * math.pi) / num_point_ver\n    for i in range(num_point_ver):\n        alpha = alpha + alpha_delta\n        flag_x = 1\n        flag_y = 1\n        alpha1 = alpha\n        if alpha > math.pi/2 and alpha <= math.pi: \n            alpha1 = math.pi - alpha\n            flag_x = -1\n            flag_y = 1\n        elif alpha > math.pi and alpha <= math.pi*(3/2):\n            alpha1 = alpha - math.pi\n            flag_x = -1\n            flag_y = -1\n        elif alpha > math.pi*(3/2):\n            alpha1 = math.pi*2 - alpha\n            flag_x = 1\n            flag_y = -1\n    \n        beta = beta_range[0]\n        beta_delta = (beta_range[1]-beta_range[0])/(num_point_hor-1)\n        for j in range(num_point_hor):\n            if j != 0:\n                beta = beta + beta_delta\n\n            x = flag_x * (r * math.sin(beta)) * math.cos(alpha1)\n            y = flag_y * (r * math.sin(beta)) * math.sin(alpha1)\n            z = r * math.cos(beta)\n            
position = np.array([x, y, z]) + look_at\n            look_at = look_at\n            up = np.array([0, 0, 1])\n\n            vectorZ = - (look_at - position)/np.linalg.norm(look_at - position)\n            vectorX = np.cross(up, vectorZ)/np.linalg.norm(np.cross(up, vectorZ))\n            vectorY = np.cross(vectorZ, vectorX)/np.linalg.norm(np.cross(vectorX, vectorZ))\n\n            # points in camera coordinates\n            pointSensor= np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])\n\n            # points in world coordinates \n            pointWorld = np.array([position,\n                                   position + vectorX,\n                                   position + vectorY * 2,\n                                   position + vectorZ * 3])\n\n            resR, resT = getRTFromAToB(pointSensor, pointWorld)\n            resQ = quaternionFromRotMat(resR)\n\n            quat_list.append(resQ)\n            rot_list.append(resR)\n            trans_list.append(resT)\n            position_list.append(position)\n    return quat_list, trans_list, rot_list \n\n\ndef quanternion_mul(q1, q2):\n    s1 = q1[0]\n    v1 = np.array(q1[1:])\n    s2 = q2[0]\n    v2 = np.array(q2[1:])\n    s = s1 * s2 - np.dot(v1, v2)\n    v = s1 * v2 + s2 * v1 + np.cross(v1, v2)\n    return (s, v[0], v[1], v[2])\n\nclass BlenderRenderer(object):\n    def __init__(self, viewport_size_x=640, viewport_size_y=360, DEVICE_LIST=None):\n        '''\n        viewport_size_x, viewport_size_y: rendering viewport resolution\n        '''\n\n        self.DEVICE_LIST = DEVICE_LIST\n\n        # remove all objects, cameras and lights\n        for obj in bpy.data.meshes:\n            bpy.data.meshes.remove(obj)\n\n        for cam in bpy.data.cameras:\n            bpy.data.cameras.remove(cam)\n\n        for light in bpy.data.lights:\n            bpy.data.lights.remove(light)\n\n        for obj in bpy.data.objects:\n            bpy.data.objects.remove(obj, do_unlink=True)\n\n        
render_context = bpy.context.scene.render\n\n        # add left camera\n        camera_l_data = bpy.data.cameras.new(name=\"camera_l\")\n        camera_l_object = bpy.data.objects.new(name=\"camera_l\", object_data=camera_l_data)\n        bpy.context.collection.objects.link(camera_l_object)\n\n        # add right camera\n        camera_r_data = bpy.data.cameras.new(name=\"camera_r\")\n        camera_r_object = bpy.data.objects.new(name=\"camera_r\", object_data=camera_r_data)\n        bpy.context.collection.objects.link(camera_r_object)\n\n        camera_l = bpy.data.objects[\"camera_l\"]\n        camera_r = bpy.data.objects[\"camera_r\"]\n\n        # set the camera postion and orientation so that it is in\n        # the front of the object\n        camera_l.location = (1, 0, 0)\n        camera_r.location = (1, 0, 0)\n\n        # add emitter light\n        light_emitter_data = bpy.data.lights.new(name=\"light_emitter\", type='SPOT')\n        light_emitter_object = bpy.data.objects.new(name=\"light_emitter\", object_data=light_emitter_data)\n        bpy.context.collection.objects.link(light_emitter_object)\n\n        light_emitter = bpy.data.objects[\"light_emitter\"]\n        light_emitter.location = (1, 0, 0)\n        light_emitter.data.energy = LIGHT_EMITTER_ENERGY\n\n        # render setting\n        render_context.resolution_percentage = 100\n        self.render_context = render_context\n\n        self.camera_l = camera_l\n        self.camera_r = camera_r\n\n        self.light_emitter = light_emitter\n\n        self.model_loaded = False\n        self.background_added = None\n\n        self.render_context.resolution_x = viewport_size_x\n        self.render_context.resolution_y = viewport_size_y\n\n        self.my_material = {}\n        self.render_mode = 'IR'\n\n        # output setting \n        self.render_context.image_settings.file_format = 'PNG'\n        self.render_context.image_settings.compression = 0\n        
self.render_context.image_settings.color_mode = 'BW'\n        self.render_context.image_settings.color_depth = '8'\n\n        # cycles setting\n        self.render_context.engine = 'CYCLES'\n        bpy.context.scene.cycles.progressive = 'BRANCHED_PATH'\n        bpy.context.scene.cycles.use_denoising = True\n        bpy.context.scene.cycles.denoiser = 'NLM'\n        bpy.context.scene.cycles.film_exposure = 0.5\n\n        # self.render_context.use_antialiasing = False\n        ##########\n        bpy.context.scene.view_layers[\"View Layer\"].use_sky = True\n        ##########\n\n        # switch on nodes\n        bpy.context.scene.use_nodes = True\n        tree = bpy.context.scene.node_tree\n        links = tree.links\n  \n        # clear default nodes\n        for n in tree.nodes:\n            tree.nodes.remove(n)\n  \n        # create input render layer node\n        rl = tree.nodes.new('CompositorNodeRLayers')\n\n        # create output node\n        self.fileOutput = tree.nodes.new(type=\"CompositorNodeOutputFile\")\n        self.fileOutput.base_path = \"./new_data/0000\"\n        self.fileOutput.format.file_format = 'OPEN_EXR'\n        self.fileOutput.format.color_depth= '32'\n        self.fileOutput.file_slots[0].path = 'depth#'\n        links.new(rl.outputs[2], self.fileOutput.inputs[0])\n\n        # depth sensor pattern\n        self.pattern = []\n        # environment map\n        self.env_map = []\n        self.realtable_img_list = []\n        self.realfloor_img_list = []\n        self.obj_texture_img_list = []\n\n        self.src_energy_for_rgb_render = 0\n\n    def loadImages(self, pattern_path, env_map_path, real_table_image_root_path, real_floor_image_root_path, obj_texture_image_root_path, obj_texture_image_idxfile, check_seen_scene):\n        # load pattern image\n        self.pattern = bpy.data.images.load(filepath=pattern_path)\n        if check_seen_scene:\n            env_map_path_list = os.listdir(env_map_path)\n            
real_table_image_root_path_list = os.listdir(real_table_image_root_path)\n            real_floor_image_root_path_list = os.listdir(real_floor_image_root_path)\n        else:\n            env_map_path_list = sorted(os.listdir(env_map_path))\n            real_table_image_root_path_list = sorted(os.listdir(real_table_image_root_path))\n            real_floor_image_root_path_list = sorted(os.listdir(real_floor_image_root_path))\n        # load env map\n        for item in env_map_path_list:\n            if item.split('.')[-1] == 'hdr':\n                self.env_map.append(bpy.data.images.load(filepath=os.path.join(env_map_path, item)))\n        # load real table images\n        for item in real_table_image_root_path_list:\n            if item.split('.')[-1] == 'jpg':\n                self.realtable_img_list.append(bpy.data.images.load(filepath=os.path.join(real_table_image_root_path, item)))\n        # load real floor images\n        for item in real_floor_image_root_path_list:\n            if item.split('.')[-1] == 'jpg':\n                self.realfloor_img_list.append(bpy.data.images.load(filepath=os.path.join(real_floor_image_root_path, item)))\n        # load obj texture images\n        f_teximg_idx = open(os.path.join(obj_texture_image_root_path, obj_texture_image_idxfile),\"r\")\n        lines = f_teximg_idx.readlines() \n        for item in lines:\n            item = item[:-1]   \n            self.obj_texture_img_list.append(bpy.data.images.load(filepath=os.path.join(obj_texture_image_root_path, \"images\", item)))\n\n\n    def addEnvMap(self):\n        # Get the environment node tree of the current scene\n        node_tree = bpy.context.scene.world.node_tree\n        tree_nodes = node_tree.nodes\n\n        # Clear all nodes\n        tree_nodes.clear()\n\n        # Add Background node\n        node_background = tree_nodes.new(type='ShaderNodeBackground')\n\n        # Add Environment Texture node\n        node_environment = 
tree_nodes.new('ShaderNodeTexEnvironment')\n        # Load and assign the image to the node property\n        # node_environment.image = bpy.data.images.load(\"/Users/zhangjiyao/Desktop/test_addon/envmap_lib/autoshop_01_1k.hdr\") # Relative path\n        node_environment.location = -300,0\n\n        node_tex_coord = tree_nodes.new(type='ShaderNodeTexCoord')\n        node_tex_coord.location = -700,0\n\n        node_mapping = tree_nodes.new(type='ShaderNodeMapping')\n        node_mapping.location = -500,0\n\n        # Add Output node\n        node_output = tree_nodes.new(type='ShaderNodeOutputWorld')   \n        node_output.location = 200,0\n\n        # Link all nodes\n        links = node_tree.links\n        links.new(node_environment.outputs[\"Color\"], node_background.inputs[\"Color\"])\n        links.new(node_background.outputs[\"Background\"], node_output.inputs[\"Surface\"])\n        links.new(node_tex_coord.outputs[\"Generated\"], node_mapping.inputs[\"Vector\"])\n        links.new(node_mapping.outputs[\"Vector\"], node_environment.inputs[\"Vector\"])\n\n        #### bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = 1.0\n        random_energy = random.uniform(LIGHT_ENV_MAP_ENERGY_RGB * 0.8, LIGHT_ENV_MAP_ENERGY_RGB * 1.2)\n        bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = random_energy\n        ####\n\n\n    def setEnvMap(self, env_map_id, rotation_elur_z):\n        # Get the environment node tree of the current scene\n        node_tree = bpy.context.scene.world.node_tree\n\n        # Get Environment Texture node\n        node_environment = node_tree.nodes['Environment Texture']\n        # Load and assign the image to the node property\n        node_environment.image = self.env_map[env_map_id]\n\n        node_mapping = node_tree.nodes['Mapping']\n        node_mapping.inputs[2].default_value[2] = rotation_elur_z\n\n\n    def addMaskMaterial(self, num=20):\n        
background_material_name_list = [\"mask_background\", \"mask_table\", \"mask_tableplane\"]\n        for material_name in background_material_name_list:\n            material_class = (bpy.data.materials.get(material_name) or bpy.data.materials.new(material_name))         # test if material exists, if it does not exist, create it:\n\n            # enable 'Use nodes'\n            material_class.use_nodes = True\n            node_tree = material_class.node_tree\n\n            # remove default nodes\n            material_class.node_tree.nodes.clear()\n\n            # add new nodes  \n            node_1 = node_tree.nodes.new('ShaderNodeOutputMaterial')\n            node_2= node_tree.nodes.new('ShaderNodeBrightContrast')\n\n            # link nodes\n            node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n            node_2.inputs[0].default_value = (1, 1, 1, 1)\n            self.my_material[material_name] =  material_class\n\n\n        for i in range(num):\n            class_name = str(i + 1)\n            # set the material of background    \n            material_name = \"mask_\" + class_name\n\n            # test if material exists\n            # if it does not exist, create it:\n            material_class = (bpy.data.materials.get(material_name) or \n                bpy.data.materials.new(material_name))\n\n            # enable 'Use nodes'\n            material_class.use_nodes = True\n            node_tree = material_class.node_tree\n\n            # remove default nodes\n            material_class.node_tree.nodes.clear()\n\n            # add new nodes  \n            node_1 = node_tree.nodes.new('ShaderNodeOutputMaterial')\n            node_2= node_tree.nodes.new('ShaderNodeBrightContrast')\n\n            # link nodes\n            node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n\n            if class_name.split('_')[0] == 'background' or class_name.split('_')[0] == 'table' or class_name.split('_')[0] == 'tableplane':\n                
node_2.inputs[0].default_value = (1, 1, 1, 1)\n            else:\n                node_2.inputs[0].default_value = ((i + 1)/255., 0., 0., 1)\n\n            self.my_material[material_name] =  material_class\n\n\n    def addNOCSMaterial(self):\n        material_name = 'coord_color'\n        mat = (bpy.data.materials.get(material_name) or bpy.data.materials.new(material_name))\n\n        mat.use_nodes = True\n        node_tree = mat.node_tree\n        nodes = node_tree.nodes\n        nodes.clear()        \n\n        links = node_tree.links\n        links.clear()\n\n        vcol_R = nodes.new(type=\"ShaderNodeVertexColor\")\n        vcol_R.layer_name = \"Col_R\" # the vertex color layer name\n        vcol_G = nodes.new(type=\"ShaderNodeVertexColor\")\n        vcol_G.layer_name = \"Col_G\" # the vertex color layer name\n        vcol_B = nodes.new(type=\"ShaderNodeVertexColor\")\n        vcol_B.layer_name = \"Col_B\" # the vertex color layer name\n\n        node_Output = node_tree.nodes.new('ShaderNodeOutputMaterial')\n        node_Emission = node_tree.nodes.new('ShaderNodeEmission')\n        node_LightPath = node_tree.nodes.new('ShaderNodeLightPath')\n        node_Mix = node_tree.nodes.new('ShaderNodeMixShader')\n        node_Combine = node_tree.nodes.new(type=\"ShaderNodeCombineRGB\")\n\n\n        # make links\n        node_tree.links.new(vcol_R.outputs[1], node_Combine.inputs[0])\n        node_tree.links.new(vcol_G.outputs[1], node_Combine.inputs[1])\n        node_tree.links.new(vcol_B.outputs[1], node_Combine.inputs[2])\n        node_tree.links.new(node_Combine.outputs[0], node_Emission.inputs[0])\n\n        node_tree.links.new(node_LightPath.outputs[0], node_Mix.inputs[0])\n        node_tree.links.new(node_Emission.outputs[0], node_Mix.inputs[2])\n        node_tree.links.new(node_Mix.outputs[0], node_Output.inputs[0])\n\n        self.my_material[material_name] = mat\n\n\n    def addNormalMaterial(self):\n        material_name = 'normal'\n        mat = 
(bpy.data.materials.get(material_name) or bpy.data.materials.new(material_name))\n        mat.use_nodes = True\n        node_tree = mat.node_tree\n        nodes = node_tree.nodes\n        nodes.clear()\n            \n        links = node_tree.links\n        links.clear()\n            \n        # Nodes :\n        new_node = nodes.new(type='ShaderNodeMath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (151.59744262695312, 854.5482177734375)\n        new_node.name = 'Math'\n        new_node.operation = 'MULTIPLY'\n        new_node.select = False\n        new_node.use_clamp = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n        new_node.inputs[1].default_value = 1.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeLightPath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (602.9912719726562, 1046.660888671875)\n        new_node.name = 'Light Path'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.outputs[0].default_value = 0.0\n        new_node.outputs[1].default_value = 0.0\n        new_node.outputs[2].default_value = 0.0\n        new_node.outputs[3].default_value = 0.0\n        new_node.outputs[4].default_value = 0.0\n        new_node.outputs[5].default_value = 0.0\n        new_node.outputs[6].default_value = 0.0\n        new_node.outputs[7].default_value = 0.0\n        new_node.outputs[8].default_value = 0.0\n        new_node.outputs[9].default_value = 0.0\n        new_node.outputs[10].default_value = 0.0\n        new_node.outputs[11].default_value = 0.0\n        new_node.outputs[12].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeOutputMaterial')\n        
new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.is_active_output = True\n        new_node.location = (1168.93017578125, 701.84033203125)\n        new_node.name = 'Material Output'\n        new_node.select = False\n        new_node.target = 'ALL'\n        new_node.width = 140.0\n        new_node.inputs[2].default_value = [0.0, 0.0, 0.0]\n\n        new_node = nodes.new(type='ShaderNodeBsdfTransparent')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (731.72900390625, 721.4832763671875)\n        new_node.name = 'Transparent BSDF'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [1.0, 1.0, 1.0, 1.0]\n\n        new_node = nodes.new(type='ShaderNodeCombineXYZ')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (594.4229736328125, 602.9271240234375)\n        new_node.name = 'Combine XYZ'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.0\n        new_node.inputs[1].default_value = 0.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = [0.0, 0.0, 0.0]\n\n        new_node = nodes.new(type='ShaderNodeMixShader')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (992.7239990234375, 707.2142333984375)\n        new_node.name = 'Mix Shader'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n\n        new_node = nodes.new(type='ShaderNodeEmission')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 
0.6079999804496765, 0.6079999804496765)\n        new_node.location = (774.0802612304688, 608.2547607421875)\n        new_node.name = 'Emission'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [1.0, 1.0, 1.0, 1.0]\n        new_node.inputs[1].default_value = 1.0\n\n        new_node = nodes.new(type='ShaderNodeSeparateXYZ')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (-130.12167358398438, 558.1497802734375)\n        new_node.name = 'Separate XYZ'\n        new_node.select = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[0].default_value = 0.0\n        new_node.outputs[1].default_value = 0.0\n        new_node.outputs[2].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeMath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (162.43240356445312, 618.8094482421875)\n        new_node.name = 'Math.002'\n        new_node.operation = 'MULTIPLY'\n        new_node.select = False\n        new_node.use_clamp = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n        new_node.inputs[1].default_value = 1.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeMath')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (126.8158187866211, 364.5539855957031)\n        new_node.name = 'Math.001'\n        new_node.operation = 'MULTIPLY'\n        new_node.select = False\n        new_node.use_clamp = False\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = 0.5\n     
   new_node.inputs[1].default_value = -1.0\n        new_node.inputs[2].default_value = 0.0\n        new_node.outputs[0].default_value = 0.0\n\n        new_node = nodes.new(type='ShaderNodeVectorTransform')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.convert_from = 'WORLD'\n        new_node.convert_to = 'CAMERA'\n        new_node.location = (-397.0209045410156, 594.7037353515625)\n        new_node.name = 'Vector Transform'\n        new_node.select = False\n        new_node.vector_type = 'VECTOR'\n        new_node.width = 140.0\n        new_node.inputs[0].default_value = [0.5, 0.5, 0.5]\n        new_node.outputs[0].default_value = [0.0, 0.0, 0.0]\n\n        new_node = nodes.new(type='ShaderNodeNewGeometry')\n        new_node.active_preview = False\n        new_node.color = (0.6079999804496765, 0.6079999804496765, 0.6079999804496765)\n        new_node.location = (-651.8067016601562, 593.0455932617188)\n        new_node.name = 'Geometry'\n        new_node.width = 140.0\n        new_node.outputs[0].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[1].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[2].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[3].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[4].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[5].default_value = [0.0, 0.0, 0.0]\n        new_node.outputs[6].default_value = 0.0\n        new_node.outputs[7].default_value = 0.0\n        new_node.outputs[8].default_value = 0.0\n\n        # Links :\n\n        links.new(nodes[\"Light Path\"].outputs[0], nodes[\"Mix Shader\"].inputs[0])    \n        links.new(nodes[\"Separate XYZ\"].outputs[0], nodes[\"Math\"].inputs[0])    \n        links.new(nodes[\"Separate XYZ\"].outputs[1], nodes[\"Math.002\"].inputs[0])    \n        links.new(nodes[\"Separate XYZ\"].outputs[2], nodes[\"Math.001\"].inputs[0])    \n        
links.new(nodes[\"Vector Transform\"].outputs[0], nodes[\"Separate XYZ\"].inputs[0])    \n        links.new(nodes[\"Combine XYZ\"].outputs[0], nodes[\"Emission\"].inputs[0])    \n        links.new(nodes[\"Math\"].outputs[0], nodes[\"Combine XYZ\"].inputs[0])    \n        links.new(nodes[\"Math.002\"].outputs[0], nodes[\"Combine XYZ\"].inputs[1])    \n        links.new(nodes[\"Math.001\"].outputs[0], nodes[\"Combine XYZ\"].inputs[2])    \n        links.new(nodes[\"Transparent BSDF\"].outputs[0], nodes[\"Mix Shader\"].inputs[1])    \n        links.new(nodes[\"Emission\"].outputs[0], nodes[\"Mix Shader\"].inputs[2])    \n        links.new(nodes[\"Mix Shader\"].outputs[0], nodes[\"Material Output\"].inputs[0])    \n        links.new(nodes[\"Geometry\"].outputs[1], nodes[\"Vector Transform\"].inputs[0])    \n\n        self.my_material[material_name] = mat\n\n    def addMaterialLib(self, material_class_instance_pairs):\n        for mat in bpy.data.materials:\n            name = mat.name\n            name_class = str(name.split('_')[0])\n            if name_class != 'Dots Stroke' and name_class != 'default':   \n                if name_class not in self.my_material:\n                    self.my_material[name_class] = [mat]\n                else:\n                    self.my_material[name_class].append(mat)    # e.g. 
self.my_material['metal'] = [.....]\n    ###\n\n    def setCamera(self, quaternion, translation, fov, baseline_distance):\n        self.camera_l.data.angle = fov\n        self.camera_r.data.angle = self.camera_l.data.angle\n        cx = translation[0]\n        cy = translation[1]\n        cz = translation[2]\n\n        self.camera_l.location[0] = cx\n        self.camera_l.location[1] = cy \n        self.camera_l.location[2] = cz\n\n        self.camera_l.rotation_mode = 'QUATERNION'\n        self.camera_l.rotation_quaternion[0] = quaternion[0]\n        self.camera_l.rotation_quaternion[1] = quaternion[1]\n        self.camera_l.rotation_quaternion[2] = quaternion[2]\n        self.camera_l.rotation_quaternion[3] = quaternion[3]\n\n        self.camera_r.rotation_mode = 'QUATERNION'\n        self.camera_r.rotation_quaternion[0] = quaternion[0]\n        self.camera_r.rotation_quaternion[1] = quaternion[1]\n        self.camera_r.rotation_quaternion[2] = quaternion[2]\n        self.camera_r.rotation_quaternion[3] = quaternion[3]\n        cx, cy, cz = cameraLPosToCameraRPos(quaternion, (cx, cy, cz), baseline_distance)\n        self.camera_r.location[0] = cx\n        self.camera_r.location[1] = cy \n        self.camera_r.location[2] = cz\n\n\n    def setLighting(self):\n        # emitter        \n        #self.light_emitter.location = self.camera_r.location\n        self.light_emitter.location = self.camera_l.location + 0.51 * (self.camera_r.location - self.camera_l.location)\n        self.light_emitter.rotation_mode = 'QUATERNION'\n        self.light_emitter.rotation_quaternion = self.camera_r.rotation_quaternion\n\n        # emitter setting\n        bpy.context.view_layer.objects.active = None\n        # bpy.ops.object.select_all(action=\"DESELECT\")\n        self.render_context.engine = 'CYCLES'\n        self.light_emitter.select_set(True)\n        self.light_emitter.data.use_nodes = True\n        self.light_emitter.data.type = \"POINT\"\n        
self.light_emitter.data.shadow_soft_size = 0.001\n        random_energy = random.uniform(LIGHT_EMITTER_ENERGY * 0.9, LIGHT_EMITTER_ENERGY * 1.1)\n        self.light_emitter.data.energy = random_energy\n\n        # remove default node\n        light_emitter = bpy.data.objects[\"light_emitter\"].data\n        light_emitter.node_tree.nodes.clear()\n\n        # add new nodes\n        light_output = light_emitter.node_tree.nodes.new(\"ShaderNodeOutputLight\")\n        node_1 = light_emitter.node_tree.nodes.new(\"ShaderNodeEmission\")\n        node_2 = light_emitter.node_tree.nodes.new(\"ShaderNodeTexImage\")\n        node_3 = light_emitter.node_tree.nodes.new(\"ShaderNodeMapping\")\n        node_4 = light_emitter.node_tree.nodes.new(\"ShaderNodeVectorMath\")\n        node_5 = light_emitter.node_tree.nodes.new(\"ShaderNodeSeparateXYZ\")\n        node_6 = light_emitter.node_tree.nodes.new(\"ShaderNodeTexCoord\")\n\n        # link nodes\n        light_emitter.node_tree.links.new(light_output.inputs[0], node_1.outputs[0])\n        light_emitter.node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n        light_emitter.node_tree.links.new(node_2.inputs[0], node_3.outputs[0])\n        light_emitter.node_tree.links.new(node_3.inputs[0], node_4.outputs[0])\n        light_emitter.node_tree.links.new(node_4.inputs[0], node_6.outputs[1])\n        light_emitter.node_tree.links.new(node_4.inputs[1], node_5.outputs[2])\n        light_emitter.node_tree.links.new(node_5.inputs[0], node_6.outputs[1])\n\n        # set parameter of nodes\n        node_1.inputs[1].default_value = 1.0        # scale\n        node_2.extension = 'CLIP'\n        # node_2.interpolation = 'Cubic'\n\n        node_3.inputs[1].default_value[0] = 0.5\n        node_3.inputs[1].default_value[1] = 0.5\n        node_3.inputs[1].default_value[2] = 0\n        node_3.inputs[2].default_value[0] = 0\n        node_3.inputs[2].default_value[1] = 0\n        node_3.inputs[2].default_value[2] = 0.05\n\n        # scale of 
pattern\n        node_3.inputs[3].default_value[0] = 0.6\n        node_3.inputs[3].default_value[1] = 0.85\n        node_3.inputs[3].default_value[2] = 0\n        node_4.operation = 'DIVIDE'\n\n        # pattern path\n        node_2.image = self.pattern\n\n\n    def lightModeSelect(self, light_mode):\n        if light_mode == \"RGB\":\n            self.light_emitter.hide_render = True\n            ###\n            bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = self.src_energy_for_rgb_render\n\n        elif light_mode == \"IR\":\n            self.light_emitter.hide_render = False\n            # set the environment map energy\n            random_energy = random.uniform(LIGHT_ENV_MAP_ENERGY_IR * 0.8, LIGHT_ENV_MAP_ENERGY_IR * 1.2)\n            bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = random_energy\n        \n        elif light_mode == \"Mask\" or light_mode == \"NOCS\" or light_mode == \"Normal\":\n            self.light_emitter.hide_render = True\n            bpy.data.worlds[\"World\"].node_tree.nodes[\"Background\"].inputs[1].default_value = 0\n        else:\n            raise NotImplementedError   \n\n\n    def outputModeSelect(self, output_mode):\n        if output_mode == \"RGB\":\n            self.render_context.image_settings.file_format = 'PNG'\n            self.render_context.image_settings.compression = 0\n            self.render_context.image_settings.color_mode = 'RGB'\n            self.render_context.image_settings.color_depth = '8'\n            bpy.context.scene.view_settings.view_transform = 'Filmic'\n            bpy.context.scene.render.filter_size = 1.5\n            self.render_context.resolution_x = 640 ### 1280\n            self.render_context.resolution_y = 360 ### 720\n        elif output_mode == \"IR\":\n            self.render_context.image_settings.file_format = 'PNG'\n            self.render_context.image_settings.compression = 0\n            
self.render_context.image_settings.color_mode = 'BW'\n            self.render_context.image_settings.color_depth = '8'\n            bpy.context.scene.view_settings.view_transform = 'Filmic'\n            bpy.context.scene.render.filter_size = 1.5\n            self.render_context.resolution_x = 640 ### 1280\n            self.render_context.resolution_y = 360 ### 720\n        elif output_mode == \"Mask\":\n            self.render_context.image_settings.file_format = 'OPEN_EXR'\n            self.render_context.image_settings.color_mode = 'RGB'\n            bpy.context.scene.view_settings.view_transform = 'Raw'\n            bpy.context.scene.render.filter_size = 0\n            self.render_context.resolution_x = 640\n            self.render_context.resolution_y = 360\n        elif output_mode == \"NOCS\":\n            # self.render_context.image_settings.file_format = 'OPEN_EXR'\n            self.render_context.image_settings.file_format = 'PNG'            \n            self.render_context.image_settings.color_mode = 'RGB'\n            self.render_context.image_settings.color_depth = '8'\n            bpy.context.scene.view_settings.view_transform = 'Raw'\n            bpy.context.scene.render.filter_size = 0\n            self.render_context.resolution_x = 640\n            self.render_context.resolution_y = 360\n        elif output_mode == \"Normal\":\n            self.render_context.image_settings.file_format = 'OPEN_EXR'\n            self.render_context.image_settings.color_mode = 'RGB'\n            bpy.context.scene.view_settings.view_transform = 'Raw'\n            bpy.context.scene.render.filter_size = 1.5\n            self.render_context.resolution_x = 640\n            self.render_context.resolution_y = 360\n        else:\n            raise NotImplementedError\n\n    def renderEngineSelect(self, engine_mode):\n\n        if engine_mode == \"CYCLES\":\n            self.render_context.engine = 'CYCLES'\n            bpy.context.scene.cycles.progressive = 'BRANCHED_PATH'\n 
           bpy.context.scene.cycles.use_denoising = True
            bpy.context.scene.cycles.denoiser = 'NLM'
            bpy.context.scene.cycles.film_exposure = 1.0
            bpy.context.scene.cycles.aa_samples = CYCLES_SAMPLE

            # set the compute device type
            bpy.context.preferences.addons["cycles"].preferences.compute_device_type = "CUDA" # or "OPENCL"

            # call get_devices() so that Blender detects the GPU devices
            cuda_devices, _ = bpy.context.preferences.addons["cycles"].preferences.get_devices()
            for d in bpy.context.preferences.addons["cycles"].preferences.devices:
                d["use"] = 1 # enable all devices, including GPU and CPU
            # then keep only the GPUs listed in self.DEVICE_LIST active
            device_list = self.DEVICE_LIST
            activated_gpus = []
            for i, device in enumerate(cuda_devices):
                if i in device_list:
                    device.use = True
                    activated_gpus.append(device.name)
                else:
                    device.use = False

        elif engine_mode == "EEVEE":
            bpy.context.scene.render.engine = 'BLENDER_EEVEE'
        else:
            print("[W]renderEngineSelect: unsupported engine mode: %s" % engine_mode)


    def addBackground(self, size, position, scale, default_background_texture_path):
        # set the material of the background
        material_name = "default_background"

        # get the material if it exists, otherwise create it
        material_background = (bpy.data.materials.get(material_name) or 
            bpy.data.materials.new(material_name))

        # enable 'Use nodes'
        material_background.use_nodes = True
        node_tree = material_background.node_tree

        # remove default nodes
        material_background.node_tree.nodes.clear()

        # add new nodes
   
     node_1 = node_tree.nodes.new('ShaderNodeOutputMaterial')\n        node_2 = node_tree.nodes.new('ShaderNodeBsdfPrincipled')\n        node_3 = node_tree.nodes.new('ShaderNodeTexImage')\n\n        # link nodes\n        node_tree.links.new(node_1.inputs[0], node_2.outputs[0])\n        node_tree.links.new(node_2.inputs[0], node_3.outputs[0])\n\n        # add texture image\n        node_3.image = bpy.data.images.load(filepath=default_background_texture_path)\n        self.my_material['default_background'] = material_background\n\n        # add background plane\n        for i in range(-2, 3, 1):\n            for j in range(-2, 3, 1):\n                position_i_j = (i * size + position[0], j * size + position[1], position[2] - TABLE_CAD_MODEL_HEIGHT)\n                bpy.ops.mesh.primitive_plane_add(size=size, enter_editmode=False, align='WORLD', location=position_i_j, scale=scale)\n                bpy.ops.rigidbody.object_add()\n                bpy.context.object.rigid_body.type = 'PASSIVE'\n                bpy.context.object.rigid_body.collision_shape = 'BOX'\n        for i in range(-2, 3, 1):\n            for j in [-2, 2]:\n                position_i_j = (i * size + position[0], j * size + position[1], position[2] - 0.25)# - TABLE_CAD_MODEL_HEIGHT)\n                rotation_elur = (math.pi / 2., 0., 0.)\n                bpy.ops.mesh.primitive_plane_add(size=size, enter_editmode=False, align='WORLD', location=position_i_j, rotation = rotation_elur)\n                bpy.ops.rigidbody.object_add()\n                bpy.context.object.rigid_body.type = 'PASSIVE'\n                bpy.context.object.rigid_body.collision_shape = 'BOX'    \n        for j in range(-2, 3, 1):\n            for i in [-2, 2]:\n                position_i_j = (i * size + position[0], j * size + position[1], position[2] - 0.25)# - TABLE_CAD_MODEL_HEIGHT)\n                rotation_elur = (0, math.pi / 2, 0)\n                bpy.ops.mesh.primitive_plane_add(size=size, enter_editmode=False, 
align='WORLD', location=position_i_j, rotation = rotation_elur)\n                bpy.ops.rigidbody.object_add()\n                bpy.context.object.rigid_body.type = 'PASSIVE'\n                bpy.context.object.rigid_body.collision_shape = 'BOX'        \n        count = 0\n        for obj in bpy.data.objects:\n            if obj.type == \"MESH\":\n                obj.name = \"background_\" + str(count)\n                obj.data.name = \"background_\" + str(count)\n                obj.active_material = material_background\n                count += 1\n\n        self.background_added = True\n\n\n    def clearModel(self):\n        '''\n        # delete all meshes\n        for item in bpy.data.meshes:\n            bpy.data.meshes.remove(item)\n        for item in bpy.data.materials:\n            bpy.data.materials.remove(item)\n        '''\n\n        # remove all objects except background\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                bpy.data.meshes.remove(obj.data)\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                bpy.data.objects.remove(obj, do_unlink=True)\n\n        # remove all default material\n        for mat in bpy.data.materials:\n            name = mat.name.split('.')\n            if name[0] == 'Material':\n                bpy.data.materials.remove(mat)\n\n\n    def loadModel(self, file_path):\n        self.model_loaded = True\n        try:\n            if file_path.endswith('obj'):\n                bpy.ops.import_scene.obj(filepath=file_path)\n            elif file_path.endswith('3ds'):\n                bpy.ops.import_scene.autodesk_3ds(filepath=file_path)\n            elif file_path.endswith('dae'):\n                # Must install OpenCollada. 
Please read README.md\n                bpy.ops.wm.collada_import(filepath=file_path)\n            else:\n                self.model_loaded = False\n                raise Exception(\"Loading failed: %s\" % (file_path))\n        except Exception:\n            self.model_loaded = False\n\n\n    def render(self, image_name=\"tmp\", image_path=RENDERING_PATH):\n        # Render the object\n        if not self.model_loaded:\n            print(\"[W]render: Model not loaded.\")\n            return      \n\n        if self.render_mode == \"IR\":\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"IR\")\n            self.outputModeSelect(\"IR\")\n            self.renderEngineSelect(\"CYCLES\")\n\n        elif self.render_mode == 'RGB':\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"RGB\")\n            self.outputModeSelect(\"RGB\")\n            self.renderEngineSelect(\"CYCLES\")\n\n        elif self.render_mode == \"Mask\":\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"Mask\")\n            self.outputModeSelect(\"Mask\")\n            # self.renderEngineSelect(\"EEVEE\")\n            self.renderEngineSelect(\"CYCLES\")\n            bpy.context.scene.cycles.use_denoising = False\n            bpy.context.scene.cycles.aa_samples = 1\n\n        elif self.render_mode == \"NOCS\":\n            bpy.context.scene.use_nodes = False\n            # set light and render mode\n            self.lightModeSelect(\"NOCS\")\n            self.outputModeSelect(\"NOCS\")\n            # self.renderEngineSelect(\"EEVEE\")\n            self.renderEngineSelect(\"CYCLES\")\n            bpy.context.scene.cycles.use_denoising = False\n            bpy.context.scene.cycles.aa_samples = 1\n\n        elif self.render_mode == \"Normal\":\n            bpy.context.scene.use_nodes 
= True\n            self.fileOutput.base_path = image_path.replace(\"normal\",\"depth\")\n            self.fileOutput.file_slots[0].path = image_name[:4]+\"_#\"# + 'depth_#'\n\n            # set light and render mode\n            self.lightModeSelect(\"Normal\")\n            self.outputModeSelect(\"Normal\")\n            # self.renderEngineSelect(\"EEVEE\")\n            self.renderEngineSelect(\"CYCLES\")\n            bpy.context.scene.cycles.use_denoising = False\n            bpy.context.scene.cycles.aa_samples = 32\n\n        else:\n            print(\"[W]render: The render mode is not supported\")\n            return \n\n        bpy.context.scene.render.filepath = os.path.join(image_path, image_name)\n        bpy.ops.render.render(write_still=True)  # save straight to file\n\n\n    def set_material_randomize_mode(self, class_material_pairs, mat_randomize_mode, instance, material_type_in_mixed_mode):\n        if mat_randomize_mode in ['mixed','diffuse','transparent','specular_tex','specular_texmix','specular_and_transparent']:\n            if material_type_in_mixed_mode == 'raw':\n                print(\"[V]set_material_randomize_mode\", instance.name, 'material type: raw')\n                set_modify_raw_material(instance)\n            else:\n                material = random.sample(self.my_material[material_type_in_mixed_mode], 1)[0]\n                print(\"[V]set_material_randomize_mode\", instance.name, 'material type: ', material_type_in_mixed_mode)\n                ## graspnet\n                set_modify_material(instance, material, self.obj_texture_img_list, mat_randomize_mode=mat_randomize_mode)\n        elif mat_randomize_mode == 'specular':\n            material = random.sample(self.my_material[material_type_in_mixed_mode], 1)[0]\n            print(\"[V]set_material_randomize_mode\", instance.name, 'material type: ', material_type_in_mixed_mode)\n            set_modify_material(instance, material, self.obj_texture_img_list, 
mat_randomize_mode=mat_randomize_mode,\n                                is_transfer=False)\n        else:\n            raise NotImplementedError(\"No such mat_randomize_mode!\")\n\n\n    def get_instance_pose(self):\n        instance_pose = {}\n        bpy.context.view_layer.update()\n        cam = self.camera_l\n        mat_rot_x = Matrix.Rotation(math.radians(180.0), 4, 'X')\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                instance_id = obj.name.split('_')[0]\n                mat_rel = cam.matrix_world.inverted() @ obj.matrix_world\n                # location\n                relative_location = [mat_rel.translation[0],\n                                     - mat_rel.translation[1],\n                                     - mat_rel.translation[2]]\n                # rotation\n                # relative_rotation_euler = mat_rel.to_euler() # must be converted from radians to degrees\n                relative_rotation_quat = [mat_rel.to_quaternion()[0],\n                                          mat_rel.to_quaternion()[1],\n                                          mat_rel.to_quaternion()[2],\n                                          mat_rel.to_quaternion()[3]]\n                quat_x = [0, 1, 0, 0]\n                quat = quanternion_mul(quat_x, relative_rotation_quat)\n                quat = [quat[0], - quat[1], - quat[2], - quat[3]]\n                instance_pose[str(instance_id)] = [quat, relative_location]\n\n        return instance_pose\n\n\n    def check_visible(self, threshold=(0.1, 0.9, 0.1, 0.9)):\n        w_min, x_max, h_min, h_max = threshold\n        visible_objects_list = []\n        bpy.context.view_layer.update()\n        cs, ce = self.camera_l.data.clip_start, self.camera_l.data.clip_end\n        for obj in bpy.data.objects:\n            if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':\n                obj_center = 
obj.matrix_world.translation
                co_ndc = world_to_camera_view(scene, self.camera_l, obj_center)
                if (w_min < co_ndc.x < x_max and
                    h_min < co_ndc.y < h_max and
                    cs < co_ndc.z < ce):
                    obj.select_set(True)
                    visible_objects_list.append(obj)
                else:
                    obj.select_set(False)
        return visible_objects_list


def setModelPosition(instance, location, quaternion):
    instance.rotation_mode = 'QUATERNION'
    instance.rotation_quaternion[0] = quaternion[0]
    instance.rotation_quaternion[1] = quaternion[1]
    instance.rotation_quaternion[2] = quaternion[2]
    instance.rotation_quaternion[3] = quaternion[3]
    instance.location = location


def setRigidBody(instance):
    bpy.context.view_layer.objects.active = instance
    object_single = bpy.context.active_object

    # add rigid body physics to the object
    bpy.ops.rigidbody.object_add()
    bpy.context.object.rigid_body.mass = 1
    bpy.context.object.rigid_body.kinematic = True
    bpy.context.object.rigid_body.collision_shape = 'CONVEX_HULL'
    bpy.context.object.rigid_body.restitution = 0.01
    bpy.context.object.rigid_body.angular_damping = 0.8
    bpy.context.object.rigid_body.linear_damping = 0.99

    # start as non-kinematic and keyframe the state at frame 0
    bpy.context.object.rigid_body.kinematic = False
    object_single.keyframe_insert(data_path='rigid_body.kinematic', frame=0)


def set_visiable_objects(visible_objects_list):
    # hide every non-background mesh that is not in the visible list
    for obj in bpy.data.objects:
        if obj.type == 'MESH' and not obj.name.split('_')[0] == 'background':
            if obj in visible_objects_list:
                obj.hide_render = False
            else:
                obj.hide_render = True


def generate_CAD_model_list(scene_type, urdf_path_list, obj_uid_list):
    CAD_model_list = {}
    for idx in range(len(urdf_path_list)):
        urdf_path = urdf_path_list[idx]
        
obj_uid = obj_uid_list[idx]\n        class_name = 'other'\n        # build the visual mesh path from the .urdf path (strip the \".urdf\" suffix)\n        urdf_parts = str(urdf_path).replace(\"\\\\\", \"/\").split(\"/\")\n        if scene_type == \"blocks\":\n            instance_path = \"/\".join(urdf_parts[:-1]) + \"/\" + urdf_parts[-1][:-5] + \".obj\"\n        else:\n            instance_path = \"/\".join(urdf_parts[:-1]) + \"/\" + urdf_parts[-1][:-5] + \"_visual.obj\"\n        CAD_model_list.setdefault(class_name, []).append([instance_path, class_name, obj_uid])\n\n    return CAD_model_list\n\n\ndef generate_material_type(obj_name, class_material_pairs, instance_material_except_pairs, instance_material_include_pairs, material_class_instance_pairs, material_type):\n    # bucket the candidate materials by class\n    specular_type_for_ins_list = []\n    transparent_type_for_ins_list = []\n    diffuse_type_for_ins_list = []\n    for key in list(instance_material_except_pairs) + list(instance_material_include_pairs):\n        if key in material_class_instance_pairs['specular']:\n            specular_type_for_ins_list.append(key)\n        elif key in material_class_instance_pairs['transparent']:\n            transparent_type_for_ins_list.append(key)\n        elif key in material_class_instance_pairs['diffuse']:\n            diffuse_type_for_ins_list.append(key)\n\n    if material_type == \"transparent\":\n        return random.choice(transparent_type_for_ins_list)\n    elif material_type == \"diffuse\":\n        return random.choice(diffuse_type_for_ins_list)\n    elif material_type in (\"specular\", \"specular_tex\", \"specular_texmix\"):\n        return random.choice(specular_type_for_ins_list)\n    elif material_type == \"specular_and_transparent\":\n        # pick specular or transparent with S:T = 1:2\n        if random.randint(0, 2) == 0:\n            return random.choice(specular_type_for_ins_list)\n        else:\n            return random.choice(transparent_type_for_ins_list)\n    elif material_type == \"mixed\":\n        # uniformly pick one of the three classes (D:S:T = 1:1:1)\n        flag = random.randint(0, 2)\n        if flag == 0:\n            return random.choice(diffuse_type_for_ins_list)\n        elif flag == 1:\n            return random.choice(specular_type_for_ins_list)\n        else:\n            return random.choice(transparent_type_for_ins_list)\n    else:\n        raise ValueError(f\"Material type error: {material_type}\")"
  },
  {
    "path": "train.sh",
    "content": "cd src/nr\nCUDA_VISIBLE_DEVICES=$1 python run_training.py --cfg configs/nrvgn_sdf.yaml\ncd -"
  }
]