[
  {
    "path": ".github/issue_template.md",
    "content": "<!--\n\nThanks for filing an issue! If you are having a problem importing a file, please\ninclude a link to the file so we can test it.\n\n-->\n"
  },
  {
    "path": ".gitignore",
    "content": "# Automated test results\ntest/report.json\n\n## Generic ignores below here\n################################\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*,cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# IPython Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# dotenv\n.env\n\n# virtualenv\nvenv/\nENV/\n\n# Spyder project settings\n.spyderproject\n\n# Rope project settings\n.ropeproject\n"
  },
  {
    "path": ".gitmodules",
    "content": "[submodule \"test/glTF-Sample-Models\"]\n\tpath = test/glTF-Sample-Models\n\turl = https://github.com/KhronosGroup/glTF-Sample-Models.git\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: python\npython:\n  \"3.5\"\n\n# From michaeldegroot/cats-blender-plugin\nbefore_install:\n  - sudo apt-get update -qq\n  # install blender from official sources.\n  # This will most propably install an outdated blender version,\n  # but it will resolve all system dependencies blender has to be able to run.\n  - sudo apt-get install blender\n\ninstall:\n  # Then update blender\n  - mkdir tmp && cd tmp\n  - wget http://mirror.cs.umn.edu/blender.org/release/Blender2.79/blender-2.79-linux-glibc219-x86_64.tar.bz2\n  - tar jxf blender-2.79-linux-glibc219-x86_64.tar.bz2\n  - mv blender-2.79-linux-glibc219-x86_64 blender\n  - cd ..\n\nscript:\n  python test/test.py run --exe ./tmp/blender/blender\n\n#deploy:\n#  provider: pages\n#  skip_cleanup: true\n#  github_token: $GITHUB_TOKEN\n#  local_dir: ouput\n"
  },
  {
    "path": "INSTALL.md",
    "content": "See also the [Blender manual on installing\nadd-ons](https://docs.blender.org/manual/en/latest/preferences/addons.html).\n\n## Installing from a Release ZIP\n\nDownload the latest release from the\n[Releases](https://github.com/ksons/gltf-blender-importer/releases) page. It\nshould be a ZIP file with a name like `io_scene_gltf_ksons-X.Y.Z.zip`.\n\nOpen Blender and select **File > User Preferences** (or **Edit > user\nPreferences** if that doesn't exist). Change to the **Add-ons** tab and select\n**Install Add-on from File...** at the bottom of the screen (or **Install...**\nat the top of the screen if that doesn't exist). Pick the ZIP file you\ndownloaded. The add-on is now installed.\n\nYou still need to enable it. In the **Add-ons** tab, put 'gltf' in the search\nbox and tick the checkbox next to **Import-Export: KSons' glTF 2.0 Importer**.\n\n<img src=\"./doc/addon-install.png\"/>\n\n\n## Installing from Source\n\nObtain the source code, eg.\n\n    git clone https://github.com/ksons/gltf-blender-importer.git\n\nYou can create a ZIP to install with the method above by running the script\n`make_package.py`. A ZIP file `io_scene_gltf_ksons.zip` will be created in the\n`dist/` folder.\n\nOtherwise, find your Blender add-on directory. It is most commonly:\n\n* **On Windows**, `C:\\Users\\<YOUR USER NAME>\\AppData\\Roaming\\Blender\n  Foundation\\Blender\\<YOUR BLENDER VERSION>\\scripts\\addons\\`\n* **On Linux**, `/home/<YOUR USER NAME>/.config/blender/<YOUR BLENDER\n  VERSION>/scripts/addons/`\n* **On OSX**, `/Users/<YOUR USER NAME>/Library/Application\n  Support/Blender/<YOUR BLENDER VERSION>/scripts/addons/`\n\nAlternatively, open Blender, switch to the Python console, and enter\n`print(bpy.utils.user_resource('SCRIPTS', 'addons'))` to have it printed for\nyou.\n\nThen copy (or, for easier development, symbolically link) the `io_scene_gltf`\nfolder from the `addons` folder in this repo to your Blender add-on directory.\n\nFinally enable the add-on the same way as above.\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2017 Kristian Sons\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "## If you're looking for the official importer included with Blender, go [here](https://github.com/KhronosGroup/glTF-Blender-IO).\n\n<p align=\"center\">\n<img src=\"doc/hero.png\" alt=\"Fox model by PixelMannen, rigging by Tom Kranis\">\n</p>\n\n<h2 align=center>\ngltf-blender-importer\n<a href=\"https://travis-ci.org/ksons/gltf-blender-importer\"><img src=\"https://travis-ci.org/ksons/gltf-blender-importer.svg?branch=master\" alt=\"Build status\"/></a>\n</h1>\n\n<p align=center>Un-official Blender importer for glTF 2.0.</p>\n\n<p align=center>\n<a href=\"https://github.com/ksons/gltf-blender-importer/releases/download/v0.5.0/io_scene_gltf_ksons-0.5.0.zip\"><img src=\"./doc/download_button.png\"></a>\n</p>\n\n### Installation\nClick the \"Download Add-on\" button above to download the ZIP containing the\nadd-on. In Blender, navigate to **File > User Preferences... > Add-ons** (or\n**Edit > User Preferences... > Add-ons**) and install that ZIP with the\n**Install Add-on from File...** button (or **Install...** button). Then type\n'glTF' in the search bar and tick the checkbox next to **KSons' glTF 2.0\nImporter** to enable it.\n\nYou can now import glTFs with **File > Import > KSons' glTF 2.0 (.glb/.gltf)**.\n\n<p align=\"center\"><img src=\"doc/addon-install.png\"></p>\n\nSee [INSTALL.md](INSTALL.md) for further installation instructions.\n\n### Supported Extensions\n* KHR_materials_pbrSpecularGlossiness\n* KHR_lights_punctual\n* KHR_materials_unlit\n* KHR_texture_transform\n* MSFT_texture_dds\n* EXT_property_animation (extension abandoned upstream)\n\n### Unsupported Features\n* Inverse bind matrices are ignored\n\n### Samples Renderings\n![BoomBox](doc/boom-box.jpg)\n![Corset](doc/corset.jpg)\n![Lantern](doc/lantern.jpg)\n\n### See also\nOfficial Importer-Exporter: [glTF-Blender-IO](https://github.com/KhronosGroup/glTF-Blender-IO)\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/__init__.py",
    "content": "import json\nimport os\nimport struct\n\nimport bpy\nfrom bpy.props import StringProperty, BoolProperty, FloatProperty, EnumProperty\nfrom bpy_extras.io_utils import ImportHelper\n\nbl_info = {\n    'name': \"KSons' glTF 2.0 Importer\",\n    'author': 'Kristian Sons (ksons), scurest',\n    'blender': (2, 80, 0),\n    'version': (0, 5, 0),\n    'location': \"File > Import > KSons' glTF 2.0 (.glb/.gltf)\",\n    'description': 'Importer for the glTF 2.0 file format.',\n    'warning': '',\n    'wiki_url': 'https://github.com/ksons/gltf-blender-importer/blob/master/README.md',\n    'tracker_url': 'https://github.com/ksons/gltf-blender-importer/issues',\n    'category': 'Import-Export'\n}\n\n# Supported glTF version\nGLTF_VERSION = (2, 0)\n\n# Supported extensions\nEXTENSIONS = set((\n    'EXT_property_animation',  # tentative, only material properties supported\n    'KHR_lights_punctual',\n    'KHR_materials_pbrSpecularGlossiness',\n    'KHR_materials_unlit',\n    'KHR_texture_transform',\n    'MSFT_texture_dds',\n))\n\nfrom .importer import Importer\n\nclass ImportGLTF(bpy.types.Operator, ImportHelper):\n    \"\"\"Load a glTF 2.0 file.\"\"\"\n\n    bl_idname = 'import_scene.gltf_ksons'\n    bl_label = 'Import glTF'\n\n    filename_ext = '.gltf'\n    filter_glob = StringProperty(\n        default='*.gltf;*.glb',\n        options={'HIDDEN'},\n    )\n\n    global_scale = FloatProperty(\n        name='Global Scale',\n        description=(\n            'Scales all linear distances by the given factor. Use to change '\n            'units (glTF is in meters)'\n        ),\n        default=1.0,\n    )\n    axis_conversion = EnumProperty(\n        items=[\n            ('BLENDER_UP', 'Blender Up (+Z)', ''),\n            ('BLENDER_RIGHT', 'Blender Right (+Y)', ''),\n        ],\n        name='Up (+Y) to',\n        description=(\n            \"Choose whether to convert coordinates to Blender's up-axis convention \"\n            'or leave everything in the same order it is in the glTF'\n        ),\n        default='BLENDER_UP',\n    )\n    smooth_polys = BoolProperty(\n        name='Enable Polygon Smoothing',\n        description=(\n            'Enable smoothing for all polygons in imported meshes. Suggest '\n            'disabling for low-res models'\n        ),\n        default=True,\n    )\n    split_meshes = BoolProperty(\n        name='Split Meshes into Primitives',\n        description=(\n            'A glTF mesh is made of pieces called primitives. For example, each primitive '\n            'uses only one material. When this option is disabled, one glTF mesh makes '\n            'one Blender mesh. When it is enabled, each glTF primitive makes one Blender mesh. '\n            'Useful for examining the structure of glTF meshes'\n        ),\n        default=False,\n    )\n    bone_rotation_mode = EnumProperty(\n        items=[\n            ('NONE', \"Don't change\", ''),\n            ('POINT_TO_CHILDREN', 'Point to children', ''),\n        ],\n        name='Direction',\n        description=(\n            'Adjusts which direction bones will point towards by applying a rotation '\n            'to each bone. Point-to-children uses a heuristic that tries to make bones '\n            'point nicely'\n        ),\n        default='POINT_TO_CHILDREN',\n    )\n    import_animations = BoolProperty(\n        name='Import Animations',\n        description=(\n            'Whether to import animations. Look for them in the NLA editor'\n        ),\n        default=True,\n    )\n    framerate = FloatProperty(\n        name='Frames/second',\n        description=(\n            'The Blender animation frame corresponding to the glTF time is computed '\n            \"as framerate * t. Negative values or zero mean to use the current scene's \"\n            'framerate'\n        ),\n        default=0.0,\n    )\n    always_doublesided = BoolProperty(\n        name='Always Double-Sided',\n        description=(\n            'Make all materials double-sided, even if the glTF says they should be '\n            'single-sided.\\n'\n            'Single-sidedness (ie. backing culling enabled) is simulated in Blender '\n            'using alpha, which is a somewhat ugly hack'\n        ),\n        default=True,\n    )\n    add_root = BoolProperty(\n        name='Add Root Node',\n        description=(\n            'When enabled, everything in the glTF file will be placed under a new '\n            'root node with the name of the .gltf/.glb file'\n        ),\n        default=True,\n    )\n    import_scenes_as_collections = BoolProperty(\n        name='Import Scenes as Collections',\n        description=(\n            'When enabled, import glTF scenes as Blender collections (requires Blender '\n            '>= 2.8). When disabled, the glTF scenes are ignored.\\n\\n'\n            'Note that all objects are always placed in the current Blender scene'\n        ),\n        default=False,\n    )\n\n    def draw(self, context):\n        layout = self.layout\n\n        col = layout.box().column()\n        col.label(text='Units:', icon='EMPTY_DATA')\n        col.prop(self, 'axis_conversion')\n        col.prop(self, 'global_scale')\n\n        col = layout.box().column()\n        col.label(text='Mesh:', icon='MESH_DATA')\n        col.prop(self, 'smooth_polys')\n        col.prop(self, 'split_meshes')\n\n        col = layout.box().column()\n        col.label(text='Bones:', icon='BONE_DATA')\n        col.prop(self, 'bone_rotation_mode')\n\n        col = layout.box().column()\n        col.label(text='Animation:', icon='POSE_HLT')\n        col.prop(self, 'import_animations')\n        col.prop(self, 'framerate')\n\n        col = layout.box().column()\n        col.label(text='Materials:', icon='MATERIAL_DATA')\n        col.prop(self, 'always_doublesided')\n\n        col = layout.box().column()\n        col.label(text='Scene:', icon='SCENE_DATA')\n        col.prop(self, 'add_root')\n        col.prop(self, 'import_scenes_as_collections')\n\n    def execute(self, context):\n        imp = Importer(self.filepath, self.as_keywords())\n        imp.do_import()\n        return {'FINISHED'}\n\n\n# Add to a menu\ndef menu_func_import(self, context):\n    self.layout.operator(ImportGLTF.bl_idname, text=\"KSons' glTF 2.0 (.glb/.gltf)\")\n\n\ndef register():\n    if bpy.app.version >= (2, 80, 0):\n        bpy.utils.register_class(ImportGLTF)\n        bpy.types.TOPBAR_MT_file_import.append(menu_func_import)\n    else:\n        bpy.utils.register_module(__name__)\n        bpy.types.INFO_MT_file_import.append(menu_func_import)\n\n\ndef unregister():\n    if bpy.app.version >= (2, 80, 0):\n        bpy.types.TOPBAR_MT_file_import.remove(menu_func_import)\n        bpy.utils.unregister_class(ImportGLTF)\n    else:\n        bpy.utils.unregister_module(__name__)\n        bpy.types.INFO_MT_file_import.remove(menu_func_import)\n\n\nif __name__ == '__main__':\n    register()\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/animation/__init__.py",
    "content": "import json\nimport bpy\n\ndef quote(s):\n    \"\"\"Quote a string with double-quotes.\"\"\"\n    return json.dumps(s)\n\nfrom .precompute import animation_precomputation\nfrom .node_trs import add_node_trs_animation\nfrom .morph_weight import add_morph_weight_animation\nfrom .material import add_material_animation\n\ndef add_animations(op):\n    for anim_info in op.animation_info:\n        for node_id in anim_info.node_trs:\n            add_node_trs_animation(op, anim_info, node_id)\n\n        for node_id in anim_info.morph_weight:\n            add_morph_weight_animation(op, anim_info, node_id)\n\n        for material_id in anim_info.material:\n            add_material_animation(op, anim_info, material_id)\n\n    create_nla_tracks(op)\n\n\ndef create_nla_tracks(op):\n    \"\"\"\n    Put all the actions in NLA tracks, each animation one after the other in one\n    big timeline.\n    \"\"\"\n    def get_track(bl_thing, track_name):\n        if not bl_thing.animation_data:\n            bl_thing.animation_data_create()\n\n        if track_name not in bl_thing.animation_data.nla_tracks:\n            track = bl_thing.animation_data.nla_tracks.new()\n            track.name = track_name\n\n        return bl_thing.animation_data.nla_tracks[track_name]\n\n    t = 0.0  # Start time in the big timeline\n    padding = 5.0  # Padding time between animations\n\n    for anim_info in op.animation_info:\n        anim_id = anim_info.anim_id\n        anim_name = op.gltf['animations'][anim_id].get('name', 'animations[%d]' % anim_id)\n\n        for object_name, action in anim_info.trs_actions.items():\n            bl_object = bpy.data.objects[object_name]\n            track = get_track(bl_object, 'Position')\n            track.strips.new(anim_name, t, action)\n\n        for object_name, action in anim_info.morph_actions.items():\n            shape_keys = bpy.data.objects[object_name].data.shape_keys\n            track = get_track(shape_keys, 'Morph')\n            track.strips.new(anim_name, t, action)\n\n        for material_id, action in anim_info.material_actions.items():\n            node_tree = op.get('material', material_id).node_tree\n            track = get_track(node_tree, 'Material')\n            track.strips.new(anim_name, t, action)\n\n        t += anim_info.duration + padding\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/animation/curve.py",
    "content": "import bpy\nfrom mathutils import Vector, Quaternion, Matrix\n\n\nclass Curve:\n    @staticmethod\n    def for_sampler(op, sampler, num_targets=None):\n        c = Curve()\n\n        c.times = op.get('accessor', sampler['input'])\n        c.ords = op.get('accessor', sampler['output'])\n        c.interp = sampler.get('interpolation', 'LINEAR')\n        if c.interp not in ['LINEAR', 'STEP', 'CUBICSPLINE']:\n            print('unknown interpolation: %s', c.interp)\n            c.interp = 'LINEAR'\n\n        if num_targets != None:\n            # Group one frame's worth of morph weights together.\n            c.ords = [\n                c.ords[i: i + num_targets]\n                for i in range(0, len(c.ords), num_targets)\n            ]\n\n        if c.interp == 'CUBICSPLINE':\n            # Move the in-tangents and out-tangents into separate arrays.\n            c.ins, c.ords, c.outs = c.ords[::3], c.ords[1::3], c.ords[2::3]\n\n        assert(len(c.times) == len(c.ords))\n\n        return c\n\n    def num_components(self):\n        y = self.ords[0]\n        return 1 if type(y) in [float, int] else len(y)\n\n    def shorten_quaternion_paths(self):\n        if self.interp != 'LINEAR':\n            return\n\n        self.ords = [Vector(y) for y in self.ords]\n        for i in range(1, len(self.ords)):\n            if self.ords[i - 1].dot(self.ords[i]) < 0:\n                self.ords[i] = -self.ords[i]\n\n    def make_fcurves(self, op, action, data_path,\n                     transform=lambda x: x,\n                     tangent_transform=None\n                     ):\n        framerate = op.options['framerate']\n        if framerate <= 0:\n            framerate = bpy.context.scene.render.fps\n        times = self.times\n        ords = self.ords\n        interp = self.interp\n        bl_interp = {\n            'STEP': 'CONSTANT',\n            'LINEAR': 'LINEAR',\n            'CUBICSPLINE': 'BEZIER',\n        }[interp]\n\n        num_components = self.num_components()\n        if type(data_path) == list:\n            assert(len(data_path) == num_components)\n            fcurves = [\n                action.fcurves.new(data_path=path, index=index)\n                for path, index in data_path\n            ]\n        else:\n            fcurves = [\n                action.fcurves.new(data_path=data_path, index=i)\n                for i in range(0, num_components)\n            ]\n\n        for fcurve in fcurves:\n            fcurve.keyframe_points.add(len(times))\n\n        ords = [transform(y) for y in ords]\n\n        # tmp is an array laid out like\n        #\n        #   [frame, ordinate, frame, ordinate, ...]\n        #\n        # This let's us set all the keyframes points in one batch, which is fast.\n        tmp = [0] * (2 * len(times))\n        tmp[::2] = (framerate * t for t in times)\n        for i in range(0, num_components):\n            if num_components == 1:\n                tmp[1::2] = ords\n            else:\n                tmp[1::2] = (y[i] for y in ords)\n            fcurves[i].keyframe_points.foreach_set('co', tmp)\n\n        for fcurve in fcurves:\n            for pt in fcurve.keyframe_points:\n                pt.interpolation = bl_interp\n\n        if interp == 'CUBICSPLINE':\n            if not tangent_transform:\n                tangent_transform = transform\n\n            # Blender appears to do Hermite spline interpolation of the _graph_\n            # between the points (t1, y1) and (t2, y2), unlike glTF which does\n            # interpolation only of the _ordinates_ y1 and y2. So if this is the\n            # interval between two keyframes at times t1 and t2 with control\n            # points C1 and C2\n            #\n            #                               o C2: (ct2, cy2)\n            #    C1: (ct1, cy1) o            \\\n            #                  /              * P2: (t1, y1)\n            #                 /\n            #   P1: (t1, y1) *\n            #\n            # glTF gives us the right derivative at P1, b (= the slope of the\n            # line P1 C1) and the left derivative at P2, a (= the slope of the\n            # line P2 C2). So once we pick ct1 and ct2, cy1 and cy2 follow.\n            #\n            # We pick ct1 and ct2 so that spline interpolation in the\n            # t-direction reduces to just linear interpolation.\n\n            for k in range(0, len(times) - 1):\n                t1, t2 = times[k], times[k + 1]\n                b, a = self.outs[k], self.ins[k + 1]\n                a, b = tangent_transform(a), tangent_transform(b)\n                if num_components == 1:\n                    a, b = (a,), (b,)\n\n                ct1 = (2 * t1 + t2) / 3\n                ct2 = (t1 + 2 * t2) / 3\n\n                for i in range(0, num_components):\n                    pt1 = fcurves[i].keyframe_points[k]\n                    pt1.handle_right_type = 'FREE'\n                    pt1.handle_right = ct1 * framerate, pt1.co[1] + (ct1 - t1) * b[i]\n\n                    pt2 = fcurves[i].keyframe_points[k + 1]\n                    pt2.handle_left_type = 'FREE'\n                    pt2.handle_left = ct2 * framerate, pt2.co[1] + (ct2 - t2) * a[i]\n\n        for fcurve in fcurves:\n            fcurve.update()\n\n        return fcurves\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/animation/material.py",
    "content": "import bpy\nfrom . import quote\nfrom .curve import Curve\n\n\ndef add_material_animation(op, anim_info, material_id):\n    anim_id = anim_info.anim_id\n    data = anim_info.material[material_id]\n    animation = op.gltf['animations'][anim_id]\n    material = op.get('material', material_id)\n\n    name = '%s@%s (Material)' % (\n        animation.get('name', 'animations[%d]' % anim_id),\n        material.name,\n    )\n    action = bpy.data.actions.new(name)\n    anim_info.material_actions[material_id] = action\n\n    fcurves = []\n\n    for prop, sampler in data.get('properties', {}).items():\n        curve = Curve.for_sampler(op, sampler)\n        data_path = op.material_infos[material_id].paths.get(prop)\n        if not data_path:\n            print('no place to put animated property %s in material node tree' % prop)\n            continue\n        fcurves += curve.make_fcurves(op, action, data_path)\n\n    if fcurves:\n        group = action.groups.new('Material Property')\n        for fcurve in fcurves:\n            fcurve.group = group\n\n    for texture_type, samplers in data.get('texture_transform', {}).items():\n        base_path = op.material_infos[material_id].paths[texture_type + '-transform']\n\n        fcurves = []\n\n        if 'offset' in samplers:\n            curve = Curve.for_sampler(op, samplers['offset'])\n            data_path = base_path + '.translation'\n            fcurves += curve.make_fcurves(op, action, data_path)\n\n        if 'rotation' in samplers:\n            curve = Curve.for_sampler(op, samplers['rotation'])\n            data_path = [(base_path + '.rotation', 2)]  # animate rotation around Z-axis\n            fcurves += curve.make_fcurves(op, action, data_path, transform=lambda theta:-theta)\n\n        if 'scale' in samplers:\n            curve = Curve.for_sampler(op, samplers['scale'])\n            data_path = base_path + '.scale'\n            fcurves += curve.make_fcurves(op, action, data_path)\n\n        group_name = {\n            'normalTexture': 'Normal',\n            'occlusionTexture': 'Occlusion',\n            'emissiveTexture': 'Emissive',\n            'baseColorTexture': 'Base Color',\n            'metallicRoughnessTexture': 'Metallic-Roughness',\n            'diffuseTexture': 'Diffuse',\n            'specularGlossinessTexture': 'Specular-Glossiness',\n        }[texture_type] + ' Texture Transform'\n        group = action.groups.new(group_name)\n        for fcurve in fcurves:\n            fcurve.group = group\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/animation/morph_weight.py",
    "content": "import bpy\nfrom . import quote\nfrom .curve import Curve\n\n# Morph Weight Animations\n\n\ndef add_morph_weight_animation(op, anim_info, node_id):\n    anim_id = anim_info.anim_id\n    sampler = anim_info.morph_weight[node_id]\n    animation = op.gltf['animations'][anim_id]\n\n    vnodes = find_mesh_instances(op.node_id_to_vnode[node_id])\n    for vnode in vnodes:\n        blender_object = vnode.blender_object\n\n        if not blender_object.data.shape_keys:\n            # Can happen if the mesh has only non-POSITION morph targets so we\n            # didn't create a shape key\n            return\n\n        # Create action\n        name = '%s@%s (Morph)' % (\n            animation.get('name', 'animations[%d]' % anim_id),\n            blender_object.name,\n        )\n        action = bpy.data.actions.new(name)\n        action.id_root = 'KEY'\n        anim_info.morph_actions[blender_object.name] = action\n\n        # Find out the number of morph targets\n        mesh_id = op.gltf['nodes'][node_id]['mesh']\n        mesh = op.gltf['meshes'][mesh_id]\n        num_targets = len(mesh['primitives'][0]['targets'])\n\n        curve = Curve.for_sampler(op, sampler, num_targets=num_targets)\n        data_paths = [\n            ('key_blocks[%s].value' % quote('Morph %d' % i), 0)\n            for i in range(0, num_targets)\n        ]\n\n        curve.make_fcurves(op, action, data_paths)\n\n\ndef find_mesh_instances(vnode):\n    \"\"\"\n    A mesh instance at a vnode may be moved and split-up into multiple vnodes\n    during vtree creation. Find all the places it ended up.\n    \"\"\"\n    if vnode.mesh:\n        return [vnode]\n    else:\n        vnodes = []\n        for moved_to in vnode.mesh_moved_to:\n            vnodes += find_mesh_instances(moved_to)\n        return vnodes\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/animation/node_trs.py",
    "content": "from mathutils import Vector, Quaternion, Matrix\nimport bpy\nfrom . import quote\nfrom .curve import Curve\nfrom ..compat import mul\n\n# Handles animating TRS properties for glTF nodes. In Blender, this can be\n# either an object or a bone.\n\n\ndef add_node_trs_animation(op, anim_info, node_id):\n    if op.node_id_to_vnode[node_id].type == 'BONE':\n        bone_trs(op, anim_info, node_id)\n    else:\n        object_trs(op, anim_info, node_id)\n\n\ndef object_trs(op, anim_info, node_id):\n    animation_id = anim_info.anim_id\n    samplers = anim_info.node_trs[node_id]\n\n    # Create action\n    animation = op.gltf['animations'][animation_id]\n    blender_object = op.node_id_to_vnode[node_id].blender_object\n    name = '%s@%s' % (\n        animation.get('name', 'animations[%d]' % animation_id),\n        blender_object.name,\n    )\n    action = bpy.data.actions.new(name)\n    anim_info.trs_actions[blender_object.name] = action\n\n    if 'translation' in samplers:\n        curve = Curve.for_sampler(op, samplers['translation'])\n        fcurves = curve.make_fcurves(\n            op, action, 'location',\n            transform=op.convert_translation)\n\n        group = action.groups.new('Location')\n        for fcurve in fcurves:\n            fcurve.group = group\n\n    if 'rotation' in samplers:\n        curve = Curve.for_sampler(op, samplers['rotation'])\n        curve.shorten_quaternion_paths()\n        fcurves = curve.make_fcurves(\n            op, action, 'rotation_quaternion',\n            transform=op.convert_rotation)\n\n        group = action.groups.new('Rotation')\n        for fcurve in fcurves:\n            fcurve.group = group\n\n    if 'scale' in samplers:\n        curve = Curve.for_sampler(op, samplers['scale'])\n        fcurves = curve.make_fcurves(\n            op, action, 'scale',\n            transform=op.convert_scale)\n\n        group = action.groups.new('Scale')\n        for fcurve in fcurves:\n            fcurve.group = group\n\n\ndef bone_trs(op, anim_info, node_id):\n    anim_id = anim_info.anim_id\n    samplers = anim_info.node_trs[node_id]\n\n    # Unlike an object, a bone doesn't get its own action; there is one action\n    # for the whole armature. Look it up or create it if it doesn't exist yet.\n    bone_vnode = op.node_id_to_vnode[node_id]\n    armature_vnode = bone_vnode.armature_vnode\n    armature_object = armature_vnode.blender_object\n    if armature_object.name not in anim_info.trs_actions:\n        name = '%s@%s' % (\n            op.gltf['animations'][anim_id].get('name', 'animations[%d]' % anim_id),\n            armature_vnode.blender_armature.name,\n        )\n        action = bpy.data.actions.new(name)\n        anim_info.trs_actions[armature_object.name] = action\n\n    action = anim_info.trs_actions[armature_object.name]\n\n    # In glTF, the ordinates of an animation curve say what the final position\n    # of the node should be\n    #\n    #     T(b) = sample_gltf_curve()\n    #\n    # But in Blender, you animate the pose bone, and the final position is\n    # computed relative to the rest position as\n    #\n    #     P(b) = sample_blender_curve()\n    #\n    # and these are related as (see vnode.py for the notation used here)\n    #\n    #     T'(b) = C(pb)^{-1} T(b) C(b)\n    #           = E(b) P(b)\n    #\n    # Computing\n    #\n    #       P(b)\n    #     = E(b)^{-1} C(pb)^{-1} T(b) C(b)\n    #     = Rot[er^{-1}] Trans[-et]\n    #       Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]\n    #       Trans[t] Rot[r] Scale[s]\n    #       Rot[cr(b)] HomScale[cs(b)]\n    #\n    #     { float the Trans to the left }\n    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]\n    #       Rot[er^{-1}] Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]\n    #       Rot[r] Scale[s]\n    #       Rot[cr(b)] HomScale[cs(b)]\n    #\n    #     { combine scalings }\n    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]\n    #       Rot[er^{-1}] Rot[cr(pb)^{-1}]\n    #       Rot[r] Scale[s cs(b) / cs(pb)]\n    #       Rot[cr(b)]\n    #\n    #     { interchange the final Rot and Scale, permuting the scale\n    #       (see exchange_scale_rot_matrix) }\n    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]\n    #       Rot[er^{-1}] Rot[cr(pb)^{-1}]\n    #       Rot[r] Rot[cr(b)]\n    #       Scale[M s cs(b) / cs(pb)]\n    #\n    #     { combine rotations }\n    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]\n    #       Rot[er^{-1} cr(pb)^{-1} r cr(b)]\n    #       Scale[M s cs(b) / cs(pb)]\n    #     = Trans[pt] Rot[pr] Scale[ps]\n    #\n    # Note that pt depends only on t (and not r or s), and similarly for pr and\n    # ps.\n\n    et, er = bone_vnode.editbone_tr\n    cr_pb = bone_vnode.parent.correction_rotation\n    cs_pb = bone_vnode.parent.correction_homscale\n    cr = bone_vnode.correction_rotation\n    cs = bone_vnode.correction_homscale\n\n    er_inv = er.conjugated()\n    cr_pb_inv = cr_pb.conjugated()\n    cs_pb_inv = 1 / cs_pb\n\n    if 'translation' in samplers:\n        # pt = Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))\n        trans_mat = mul(\n            er_inv.to_matrix().to_4x4(),\n            mul(\n                Matrix.Translation(-et),\n                (cs_pb_inv * cr_pb_inv.to_matrix()).to_4x4()\n            )\n        )\n\n        convert_translation = op.convert_translation\n        def transform_translation(t): return mul(trans_mat, convert_translation(t))\n\n        # In order to transform the tangents for cubic interpolation, we need to\n        # know how the derivative transforms too. The other transforms are\n        # linear, so their derivatives change the same way they do, but\n        # transform_translation is affine, so its derivative changes by its\n        # underlying linear map.\n        lin_mat = trans_mat.to_3x3()\n        def transform_velocity(t): return mul(lin_mat, convert_translation(t))\n\n    if 'rotation' in samplers:\n        # pt = er^{-1} cr(pb)^{-1} r cr(b)\n        #    = left_r r cr(b)\n        left_r = mul(er_inv, cr_pb_inv)\n\n        convert_rotation = op.convert_rotation\n        def transform_rotation(r): return mul(mul(left_r, convert_rotation(r)), cr)\n\n    if 'scale' in samplers:\n        # ps = (M cs(b) / cs(pb)) s\n        # where M is the matrix from exchange_scale_rot_matrix\n        scale_mat = exchange_scale_rot_matrix(bone_vnode.correction_rotation)\n        scale_mat *= cs * cs_pb_inv\n\n        convert_scale = op.convert_scale\n        def transform_scale(s):\n            return mul(scale_mat, convert_scale(s))\n\n    bone_name = bone_vnode.blender_name\n    base_path = 'pose.bones[%s]' % quote(bone_name)\n\n    fcurves = []\n\n    if 'translation' in samplers:\n        curve = Curve.for_sampler(op, samplers['translation'])\n        fcurves += curve.make_fcurves(\n            op, action, base_path + '.location',\n            transform=transform_translation,\n            tangent_transform=transform_velocity)\n\n    if 'rotation' in samplers:\n        curve = Curve.for_sampler(op, samplers['rotation'])\n        # NOTE: it doesn't matter that we're shortening before we transform\n        # because transform_rotation preserves the dot product\n        curve.shorten_quaternion_paths()\n        fcurves += curve.make_fcurves(\n            op, action, base_path + '.rotation_quaternion',\n            transform=transform_rotation)\n\n    if 'scale' in samplers:\n        curve = Curve.for_sampler(op, samplers['scale'])\n        fcurves += curve.make_fcurves(\n            op, action, base_path + '.scale',\n            transform=transform_scale)\n\n    group = action.groups.new(bone_name)\n    for fcurve in fcurves:\n        fcurve.group = group\n\n\ndef exchange_scale_rot_matrix(r):\n    \"\"\"\n    Gives a matrix M, depending on quaternion r, with the property that\n\n        Scale[s] Rot[r] = Rot[r] Scale[Ms]\n\n    for all s.\n\n    In order for this to work, Rot[r] must be, up to sign, a permutation of the\n    basis vectors.\n    \"\"\"\n    # M should be the matrix for the inverse of the permutation effected by\n    # Rot[r] I think.\n    m = r.to_matrix()\n    # Drop all signs; after this, M should be a permutation matrix\n    for i in range(0, 3):\n        for j in range(0, 3):\n            m[i][j] = 0 if abs(m[i][j]) < 0.5 else 1\n    m.transpose()\n    return m\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/animation/precompute.py",
    "content": "import re\nimport bpy\n\nclass AnimationInfo:\n    def __init__(self, anim_id):\n        self.anim_id = anim_id\n\n        # These are for organizing the samplers by the object they affect.\n        # Filled out during precomputation.\n\n        # node_trs[node_idx]['translation'/'rotation'/'scale'] is the sampler\n        # for that node's TRS property\n        self.node_trs = {}\n        # morph_weight[node_idx] is the sampler for that node's morph weights\n        self.morph_weight = {}\n        # material[material_idx][property name] is the sampler for that\n        # materials' property\n        # material[material_idx]['texture_transform'][texture_type]['offset'/'rotation'/'scale']\n        # is the sampler for texture transform values\n        self.material = {}\n        # Duration of longest input sampler\n        self.duration = 0.0\n\n        # trs_actions[object_blender_name] records the TRS action on that object.\n        self.trs_actions = {}\n        # trs_actions[object_blender_name] records the morph weight (shape key)\n        # action on that object.\n        self.morph_actions = {}\n        # material_actions[material_id] records the action on that material.\n        self.material_actions = {}\n\n\ndef animation_precomputation(op):\n    \"\"\"Precompute AnimationInfo for each animation.\"\"\"\n    animations = op.gltf.get('animations', [])\n    op.animation_info = [\n        gather_animation(op, anim_id)\n        for anim_id in range(0, len(animations))\n    ]\n\n\ndef first_match(patterns, s):\n    for pattern in patterns:\n        match = re.match(pattern, s)\n        if match:\n            return match\n    return None\n\n\ndef gather_animation(op, anim_id):\n    anim = op.gltf['animations'][anim_id]\n    samplers = anim['samplers']\n\n    info = AnimationInfo(anim_id)\n\n    framerate = op.options['framerate']\n    if framerate <= 0:\n        framerate = bpy.context.scene.render.fps\n    def calc_duration(sampler):\n        acc = op.gltf['accessors'][sampler['input']]\n        max_time = framerate * acc['max'][0]\n        info.duration = max(info.duration, max_time)\n\n    # Normal glTF channels\n    channels = anim['channels']\n    for channel in channels:\n        sampler = samplers[channel['sampler']]\n        target = channel['target']\n        if 'node' not in target:\n            continue\n        node_id = target['node']\n        path = target['path']\n\n        if path in ['translation', 'rotation', 'scale']:\n            info.node_trs.setdefault(node_id, {})[path] = sampler\n            calc_duration(sampler)\n        elif path == 'weights':\n            info.morph_weight[node_id] = sampler\n            calc_duration(sampler)\n        else:\n            print('skipping animation curve, unknown path: %s' % path)\n            continue\n\n    # EXT_property_animation channels\n    channels = (\n        anim.get('extensions', {})\n        .get('EXT_property_animation', {})\n        .get('channels', [])\n    )\n    for channel in channels:\n        sampler = samplers[channel['sampler']]\n        target = channel['target']\n\n        # Node TRS properties\n        patterns = [\n            r'^/nodes/(\\d+)/(translation|rotation|scale)$',\n        ]\n        match = first_match(patterns, target)\n        if match:\n            node_id, path = match.groups()\n            info.node_trs.setdefault(int(node_id), {})[path] = sampler\n            calc_duration(sampler)\n            continue\n\n        # Simple material properties\n        patterns = [\n            r'^/materials/(\\d+)/(emissiveFactor|alphaCutoff)$',\n            r'^/materials/(\\d+)/(normalTexture/scale|occlusionTexture/strength)$',\n            r'^/materials/(\\d+)/pbrMetallicRoughness/(baseColorFactor|metallicFactor|roughnessFactor)$',\n            r'^/materials/(\\d+)/extensions/KHR_materials_pbrSpecularGlossiness/(diffuseFactor|specularFactor|glossinessFactor)$',\n        ]\n        match = first_match(patterns, target)\n        if match:\n            material_id, prop = match.groups()\n            (info.material\n                .setdefault(int(material_id), {})\n                .setdefault('properties', {})\n             )[prop] = sampler\n            calc_duration(sampler)\n\n            # Record that this property is live (so don't skip it during material creation)\n            op.material_infos[int(material_id)].liveness.add(prop)\n\n            continue\n\n        # Texture transform properties\n        patterns = [\n            r'^/materials/(\\d+)/(normalTexture|occlusionTexture|emissiveTexture)/extensions/KHR_texture_transform/(offset|rotation|scale)$',\n            r'^/materials/(\\d+)/pbrMetallicRoughness/(baseColorTexture|metallicRoughnessTexture)/extensions/KHR_texture_transform/(offset|rotation|scale)$',\n            r'^/materials/(\\d+)/extensions/KHR_materials_pbrSpecularGlossiness/(diffuseTexture|specularGlossinessTexture)/extensions/KHR_texture_transform/(offset|rotation|scale)$',\n        ]\n        match = first_match(patterns, target)\n        if match:\n            material_id, texture_type, path = match.groups()\n            (info.material\n                .setdefault(int(material_id), {})\n                .setdefault('texture_transform', {})\n                .setdefault(texture_type, {})\n             )[path] = sampler\n\n            # Record that this property is live (don't skip it during material creation)\n            op.material_infos[int(material_id)].liveness.add(texture_type + '-transform')\n\n            continue\n\n        print('skipping animation curve, target not supported: %s' % target)\n\n    return info\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/buffer.py",
    "content": "import base64\nimport os\nimport struct\n\n# This file handles creating buffers, buffer views, and accessors. It's pure\n# python and doesn't depend on Blender at all.\n#\n# Buffers and buffer views are represented with memoryviews so we can do\n# efficient slicing.\n\n\ndef create_buffer(op, idx):\n    \"\"\"Create a memoryview for buffers[idx].\"\"\"\n    buffer = op.gltf['buffers'][idx]\n\n    # Handle GLB buffer\n    if op.glb_buffer != None and idx == 0 and 'uri' not in buffer:\n        return op.glb_buffer\n\n    uri = buffer['uri']\n\n    # Try to decode base64 data URIs\n    if uri.startswith('data:'):\n        idx = uri.find(';base64,')\n        if idx != -1:\n            base64_data = uri[idx + len(';base64,'):]\n            return memoryview(base64.b64decode(base64_data))\n\n    # If we got here, assume it's a filepath\n    buffer_location = os.path.join(op.base_path, uri)  # TODO: absolute paths?\n    with open(buffer_location, 'rb') as fp:\n        return memoryview(fp.read())\n\n\ndef create_buffer_view(op, idx):\n    \"\"\"Create a pair for bufferViews[idx].\n\n    The pair contains a memoryview for the view and also its stride, which is\n    specified in the bufferView as well.\n    \"\"\"\n    buffer_view = op.gltf['bufferViews'][idx]\n    buffer = op.get('buffer', buffer_view['buffer'])\n    byte_offset = buffer_view.get('byteOffset', 0)\n    byte_length = buffer_view['byteLength']\n    stride = buffer_view.get('byteStride', None)\n\n    view = buffer[byte_offset:byte_offset + byte_length]\n    return (view, stride)\n\n\ndef create_accessor(op, idx):\n    \"\"\"Create an array holding the elements of accessors[idx].\n\n    If the accessor is of SCALAR type, each element is a number. Otherwise, each\n    element is a tuple holding the components for that element.\n    \"\"\"\n    accessor = op.gltf['accessors'][idx]\n    return create_accessor_from_properties(op, accessor)\n\n\ndef create_accessor_from_properties(op, accessor):\n    count = accessor['count']\n    fmt_char_lut = dict([\n        (5120, 'b'),  # BYTE\n        (5121, 'B'),  # UNSIGNED_BYTE\n        (5122, 'h'),  # SHORT\n        (5123, 'H'),  # UNSIGNED_SHORT\n        (5125, 'I'),  # UNSIGNED_INT\n        (5126, 'f')   # FLOAT\n    ])\n    fmt_char = fmt_char_lut[accessor['componentType']]\n    component_size = struct.calcsize(fmt_char)\n    num_components_lut = {\n        'SCALAR': 1,\n        'VEC2': 2,\n        'VEC3': 3,\n        'VEC4': 4,\n        'MAT2': 4,\n        'MAT3': 9,\n        'MAT4': 16\n    }\n    num_components = num_components_lut[accessor['type']]\n    fmt = '<' + (fmt_char * num_components)\n    default_stride = struct.calcsize(fmt)\n\n    # Special layouts for certain formats; see the section about\n    # data alignment in the glTF 2.0 spec.\n    if accessor['type'] == 'MAT2' and component_size == 1:\n        fmt = '<' + \\\n            (fmt_char * 2) + 'xx' + \\\n            (fmt_char * 2)\n        default_stride = 8\n    elif accessor['type'] == 'MAT3' and component_size == 1:\n        fmt = '<' + \\\n            (fmt_char * 3) + 'x' + \\\n            (fmt_char * 3) + 'x' + \\\n            (fmt_char * 3)\n        default_stride = 12\n    elif accessor['type'] == 'MAT3' and component_size == 2:\n        fmt = '<' + \\\n            (fmt_char * 3) + 'xx' + \\\n            (fmt_char * 3) + 'xx' + \\\n            (fmt_char * 3)\n        default_stride = 24\n\n    normalize = None\n    if accessor.get('normalized', False):\n        normalize_lut = dict([\n            (5120, lambda x: max(x / (2**7 - 1), -1)),   # BYTE\n            (5121, lambda x: x / (2**8 - 1)),            # UNSIGNED_BYTE\n            (5122, lambda x: max(x / (2**15 - 1), -1)),  # SHORT\n            (5123, lambda x: x / (2**16 - 1)),           # UNSIGNED_SHORT\n            (5125, lambda x: x / (2**32 - 1))            # UNSIGNED_INT\n        ])\n        normalize = normalize_lut[accessor['componentType']]\n\n    if 'bufferView' in accessor:\n        (buf, stride) = op.get('buffer_view', accessor['bufferView'])\n        stride = stride or default_stride\n    else:\n        stride = default_stride\n        buf = b'\\0' * (stride * count)\n\n    off = accessor.get('byteOffset', 0)\n\n    # Main decoding loop (this is hot, so try to make it fast)\n    # Interpret buf as elems seperated by padding for the stride\n    #    |elem|xx|elem|xx|elem|xx|elem|\n    # Read count-1 |elem|xx| blocks, followed by one |elem|\n    elem_byte_len = struct.calcsize(fmt)\n    assert(stride >= elem_byte_len)\n    padded_fmt = fmt + (stride - elem_byte_len) * 'x'\n    unpack_iter = struct.Struct(padded_fmt).iter_unpack(buf[off:off + (count - 1) * stride])\n    last = struct.unpack_from(fmt, buf, offset=off + (count - 1) * stride)\n    if normalize and num_components == 1:\n        result = [normalize(x[0]) for x in unpack_iter]\n        result.append(normalize(last[0]))\n    elif normalize:\n        result = [tuple(normalize(y) for y in x)  for x in unpack_iter]\n        result.append(tuple(normalize(y) for y in last))\n    elif num_components == 1:\n        result = [x[0] for x in unpack_iter]\n        result.append(last[0])\n    else:\n        result = list(unpack_iter)\n        result.append(last)\n\n    # A sparse property says \"change the elements at these indices to these\n    # values\" where \"these\" are given in an accessor-like way, so we find the\n    # list of indices and values by recursing into this function.\n    if 'sparse' in accessor:\n        sparse = accessor['sparse']\n        indices_props = {\n            'count': sparse['count'],\n            'bufferView': sparse['indices']['bufferView'],\n            'byteOffset': sparse['indices'].get('byteOffset', 0),\n            'componentType': sparse['indices']['componentType'],\n            'type': 'SCALAR',\n        }\n        indices = create_accessor_from_properties(op, indices_props)\n        values_props = {\n            'count': sparse['count'],\n            'bufferView': sparse['values']['bufferView'],\n            'byteOffset': sparse['values'].get('byteOffset', 0),\n            'componentType': accessor['componentType'],\n            'type': accessor['type'],\n            'normalized': accessor.get('normalized', False),\n        }\n        values = create_accessor_from_properties(op, values_props)\n\n        for (index, val) in zip(indices, values):\n            result[index] = val\n\n    return result\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/camera.py",
    "content": "import bpy\n\n\ndef create_camera(op, idx):\n    \"\"\"Create a Blender camera for the glTF cameras[idx].\"\"\"\n    data = op.gltf['cameras'][idx]\n    name = data.get('name', 'cameras[%d]' % idx)\n    camera = bpy.data.cameras.new(name)\n\n    if data['type'] == 'orthographic':\n        camera.type = 'ORTHO'\n        p = data['orthographic']\n        camera.clip_start = p['znear']\n        camera.clip_end = p['zfar']\n        # TODO: should we warn if xmag != ymag?\n        camera.ortho_scale = max(p['xmag'], p['ymag'])\n\n    elif data['type'] == 'perspective':\n        camera.type = 'PERSP'\n        p = data['perspective']\n        camera.clip_start = p['znear']\n        # according to the spec a missing zfar means \"infinite\"\n        HUGE = 3.40282e+38\n        camera.clip_end = p.get('zfar', HUGE)\n        camera.lens_unit = 'FOV'\n        camera.angle_y = p['yfov']\n\n        # TODO: aspect ratio\n\n    else:\n        print('unknown camera type: %s' % data['type'])\n\n    return camera\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/compat.py",
    "content": "import bpy\n\n# Compatiblity shims\n\n# Blender 2.8 changed matrix-matrix, matrix-vector, quaternion-quaternion, and\n# quaternion-vector multiplication from x * y to x @ y\nif bpy.app.version >= (2, 80, 0):\n    def mul(x, y): return x @ y\nelse:\n    def mul(x, y): return x * y\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/importer.py",
    "content": "from mathutils import Vector, Quaternion\nfrom . import buffer, mesh, camera, light, material, animation, load, vnode, node, scene\n\nclass Importer:\n    \"\"\"Manages all import state.\"\"\"\n\n    def __init__(self, filepath, options):\n        self.filepath = filepath\n        self.options = options\n        self.caches = {}\n\n    def do_import(self):\n        self.set_conversions()\n\n        load.load(self)\n\n        material.material_precomputation(self)\n        if self.options['import_animations']:\n            animation.animation_precomputation(self)\n\n        vnode.create_vtree(self)\n        node.realize_vtree(self)\n\n        if self.options['import_animations']:\n            animation.add_animations(self)\n\n        if self.options['import_scenes_as_collections']:\n            scene.import_scenes_as_collections(self)\n\n    def get(self, kind, id):\n        \"\"\"\n        Gets some kind of resource, eg. a decoded accessor, a mesh, etc. Kept in\n        a cache to enable sharing.\n        \"\"\"\n        cache = self.caches.setdefault(kind, {})\n        if id in cache:\n            return cache[id]\n        else:\n            CREATE_FNS = {\n                'buffer': buffer.create_buffer,\n                'buffer_view': buffer.create_buffer_view,\n                'accessor': buffer.create_accessor,\n                'image': material.create_image,\n                'material': material.create_material,\n                'node_group': material.create_group,\n                'mesh': mesh.create_mesh,\n                'camera': camera.create_camera,\n                'light': light.create_light,\n            }\n            result = CREATE_FNS[kind](self, id)\n            if type(result) == dict and result.get('do_not_cache_me', False):\n                # Callee is requesting we not cache it\n                result = result['result']\n            else:\n                cache[id] = result\n            return result\n\n    def set_conversions(self):\n        \"\"\"\n        Set the convert_{translation,rotation,scale} functions for converting\n        from glTF to Blender units. The user can configure this.\n        \"\"\"\n        global_scale = self.options['global_scale']\n        axis_conversion = self.options['axis_conversion']\n\n        if axis_conversion == 'BLENDER_UP':\n            def convert_translation(t):\n                return global_scale * Vector([t[0], -t[2], t[1]])\n\n            def convert_rotation(r):\n                return Quaternion([r[3], r[0], -r[2], r[1]])\n\n            def convert_scale(s):\n                return Vector([s[0], s[2], s[1]])\n\n        else:\n            def convert_translation(t):\n                return global_scale * Vector(t)\n\n            def convert_rotation(r):\n                return Quaternion([r[3], r[0], r[1], r[2]])\n\n            def convert_scale(s):\n                return Vector(s)\n\n        self.convert_translation = convert_translation\n        self.convert_rotation = convert_rotation\n        self.convert_scale = convert_scale\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/light.py",
    "content": "import math\nimport bpy\n\n\ndef create_light(op, idx):\n    light = op.gltf['extensions']['KHR_lights_punctual']['lights'][idx]\n    name = light.get('name', 'lights[%d]' % idx)\n\n    light_type = light['type']\n    color = light.get('color', [1, 1, 1])\n    intensity = light.get('intensity', 1)\n\n    bl_type = {\n        'directional': 'SUN',\n        'point': 'POINT',\n        'spot': 'SPOT',\n    }.get(light_type)\n    if not bl_type:\n        print('unknown light type:', type)\n        bl_type = 'POINT'\n\n    if bpy.app.version >= (2, 80, 0):\n        bl_light = bpy.data.lights.new(name, type=bl_type)\n    else:\n        bl_light = bpy.data.lamps.new(name, type=bl_type)\n    bl_light.use_nodes = True\n\n    emission = bl_light.node_tree.nodes['Emission']\n    emission.inputs['Color'].default_value = tuple(color) + (1,)\n\n    if light_type == 'directional':\n        watt = lux2W(intensity, ideal_555nm_source)\n        emission.inputs['Strength'].default_value = watt\n    elif light_type == 'point':\n        watt = cd2W(intensity, ideal_555nm_source, surface=4*math.pi)\n        emission.inputs['Strength'].default_value = watt\n    elif light_type == 'spot':\n        spot = light.get('spot', {})\n        inner = spot.get('innerConeAngle', 0)\n        outer = spot.get('outerConeAngle', math.pi/4)\n        bl_light.spot_size = outer\n        bl_light.spot_blend = inner / outer\n\n        # For the surface calc see:\n        # https://en.wikipedia.org/wiki/Solid_angle#Cone,_spherical_cap,_hemisphere\n        emission.inputs['Strength'].default_value = cd2W(\n            intensity,\n            ideal_555nm_source,\n            surface=2 * math.pi * (1 - math.cos(outer / 2)),\n        )\n    else:\n        assert(False)\n\n    return bl_light\n\n\n# Watt conversions\n\nincandescent_bulb = 0.0249\nideal_555nm_source = 1 / 683\n\n\ndef cd2W(intensity, efficiency, surface):\n    \"\"\"\n    intensity in candles\n    efficency is a factor\n    surface in steradians\n    \"\"\"\n    lumens = intensity * surface\n    return lumens / (efficiency * 683)\n\n\ndef lux2W(intensity, efficiency):\n    \"\"\"\n    intensity in lux (lm/m2)\n    efficency is a factor\n    \"\"\"\n    return intensity / (efficiency * 683)\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/load.py",
    "content": "import os\nimport json\nimport struct\nfrom . import GLTF_VERSION, EXTENSIONS\n\n\ndef load(op):\n    parse_file(op)\n    check_version(op)\n    check_extensions(op)\n\n\ndef parse_file(op):\n    op.glb_buffer = None\n\n    filename = op.filepath\n\n    # Remember this for resolving relative paths\n    op.base_path = os.path.dirname(filename)\n\n    with open(filename, 'rb') as f:\n        contents = f.read()\n\n    # Use magic number to detect GLB files.\n    is_glb = contents[:4] == b'glTF'\n    if is_glb:\n        parse_glb(op, contents)\n    else:\n        parse_gltf(op, contents)\n\n\ndef parse_gltf(op, contents):\n    op.gltf = json.loads(contents.decode('utf-8'))\n\n\ndef parse_glb(op, contents):\n    contents = memoryview(contents)\n\n    # Parse the header\n    header = struct.unpack_from('<4sII', contents)\n    glb_version = header[1]\n    if glb_version != 2:\n        raise Exception('GLB: version not supported: %d' % glb_version)\n\n    # Parse the chunks; we only want the JSON and BIN ones\n    offset = 12  # end of header\n    while offset < len(contents):\n        length, type = struct.unpack_from('<I4s', contents, offset=offset)\n        offset += 8\n        data = contents[offset: offset + length]\n        offset += length\n\n        # The first chunk must be JSON\n        if not hasattr(op, 'gltf'):\n            assert(type == b'JSON')\n            op.gltf = json.loads(\n                data.tobytes().decode('utf-8'),  # Need to decode for < 2.79.4 which comes with Python 3.5\n                encoding='utf-8'\n            )\n        else:\n            if type == b'BIN\\0':\n                op.glb_buffer = data\n                return\n    else:\n        raise Exception('empty GLB!')\n\n\ndef check_version(op):\n    def parse_version(s):\n        \"\"\"Parse a string like '1.1' to a tuple (1,1).\"\"\"\n        try:\n            version = tuple(int(x) for x in s.split('.'))\n            if len(version) >= 2:\n                return version\n        except Exception:\n            pass\n        raise Exception('unknown version format: %s' % s)\n\n    asset = op.gltf['asset']\n\n    if 'minVersion' in asset:\n        min_version = parse_version(asset['minVersion'])\n        supported = GLTF_VERSION >= min_version\n        if not supported:\n            raise Exception('unsupported minimum version: %s' % min_version)\n    else:\n        version = parse_version(asset['version'])\n        # Check only major version; we should be backwards- and forwards-compatible\n        supported = version[0] == GLTF_VERSION[0]\n        if not supported:\n            raise Exception('unsupported version: %s' % version)\n\n\ndef check_extensions(op):\n    required = set(op.gltf.get('extensionsRequired', []))\n    used = set(op.gltf.get('extensionsUsed', []))\n\n    unsupported_required = required.difference(EXTENSIONS)\n    for ext in unsupported_required:\n        raise Exception('unsupported extension was required: %s' % ext)\n\n    unsupported_used = list(used.difference(EXTENSIONS))\n    if unsupported_used:\n        print(\n            'Note that the following extensions are unsupported:',\n            *unsupported_used)\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/__init__.py",
    "content": "import json\nimport bpy\nfrom .block import Block\nfrom .texture import create_texture_block\nfrom . import image, node_groups, precompute\n\n# Re-exports\ncreate_image = image.create_image\ncreate_group = node_groups.create_group\nmaterial_precomputation = precompute.material_procomputation\n\n\ndef create_material(op, idx):\n    \"\"\"\n    Create a Blender material for the glTF materials[idx]. If idx is the\n    special value 'default_material', create a Blender material for the default\n    glTF material instead.\n    \"\"\"\n    mc = MaterialCreator()\n    mc.op = op\n    mc.idx = idx\n    mc.liveness = op.material_infos[idx].liveness\n\n    if idx == 'default_material':\n        mc.material = {}\n        material_name = 'glTF Default Material'\n    else:\n        mc.material = op.gltf['materials'][idx]\n        material_name = mc.material.get('name', 'materials[%d]' % idx)\n\n    if 'KHR_materials_unlit' in mc.material.get('extensions', {}):\n        mc.pbr = mc.material.get('pbrMetallicRoughness', {})\n        mc.type = 'unlit'\n    elif 'KHR_materials_pbrSpecularGlossiness' in mc.material.get('extensions', {}):\n        mc.pbr = mc.material['extensions']['KHR_materials_pbrSpecularGlossiness']\n        mc.type = 'specGloss'\n    else:\n        mc.pbr = mc.material.get('pbrMetallicRoughness', {})\n        mc.type = 'metalRough'\n\n    # Create a new Blender node-tree material and empty it\n    bl_material = bpy.data.materials.new(material_name)\n    bl_material.use_nodes = True\n    mc.tree = bl_material.node_tree\n    mc.links = mc.tree.links\n    while mc.tree.nodes:\n        mc.tree.nodes.remove(mc.tree.nodes[0])\n\n    create_node_tree(mc)\n\n    # Set the viewport alpha mode\n    alpha_mode = mc.material.get('alphaMode', 'OPAQUE')\n    double_sided = mc.material.get('doubleSided', False) or mc.op.options['always_doublesided']\n    if not double_sided and alpha_mode == 'OPAQUE':\n        # Since we use alpha to simulate backface culling\n        alpha_mode = 'MASK'\n\n    if alpha_mode not in ['OPAQUE', 'MASK', 'BLEND']:\n        print('unknown alpha mode %s' % alpha_mode)\n        alpha_mode = 'OPAQUE'\n\n    if getattr(bl_material, 'blend_method', None):\n        bl_material.blend_method = {\n            # glTF: Blender\n            'OPAQUE': 'OPAQUE',\n            'MASK': 'CLIP',\n            'BLEND': 'BLEND',\n        }[alpha_mode]\n    else:\n        bl_material.game_settings.alpha_blend = {\n            # glTF: Blender\n            'OPAQUE': 'OPAQUE',\n            'MASK': 'CLIP',\n            'BLEND': 'ALPHA',\n        }[alpha_mode]\n\n    # Set diffuse/specular color (for solid view)\n    if 'baseColorFactor' in mc.pbr:\n        diffuse_color = mc.pbr['baseColorFactor'][:len(bl_material.diffuse_color)]\n        bl_material.diffuse_color = diffuse_color\n    if 'diffuseFactor' in mc.pbr:\n        diffuse_color = mc.pbr['diffuseFactor'][:len(bl_material.diffuse_color)]\n        bl_material.diffuse_color = diffuse_color\n    if 'specularFactor' in mc.pbr:\n        specular_color = mc.pbr['specularFactor'][:len(bl_material.specular_color)]\n        bl_material.specular_color = specular_color\n\n    return bl_material\n\n\ndef create_node_tree(mc):\n    emissive_block = None\n    if mc.type != 'unlit':\n        emissive_block = create_emissive(mc)\n    shaded_block = create_shaded(mc)\n\n    if emissive_block:\n        block = mc.adjoin({\n            'node': 'AddShader',\n            'input.0': emissive_block,\n            'input.1': shaded_block,\n        })\n    else:\n        block = shaded_block\n\n    alpha_block = create_alpha_block(mc)\n    if alpha_block:\n        # Push things into a better position\n        # [block] ->               -> [mix]\n        #            [alpha block]\n        alpha_block.pad_top(600)\n        combined_block = Block.row_align_center([block, alpha_block])\n        combined_block.outputs = \\\n            [block.outputs[0], alpha_block.outputs[0], alpha_block.outputs[1]]\n        block = mc.adjoin({\n            'node': 'MixShader',\n            'output.0/input.2': combined_block,\n            'output.1/input.Fac': combined_block,\n            'output.2/input.1': combined_block,\n        })\n\n    mc.adjoin({\n        'node': 'OutputMaterial',\n        'input.Surface': block,\n    }).center_at_origin()\n\n\ndef create_emissive(mc):\n    if mc.type == 'unlit':\n        return None\n\n    block = None\n    if 'emissiveTexture' in mc.material:\n        block = create_texture_block(\n            mc,\n            'emissiveTexture',\n            mc.material['emissiveTexture']\n        )\n        block.img_node.label = 'EMISSIVE'\n\n    factor = mc.material.get('emissiveFactor', [0, 0, 0])\n\n    if factor != [1, 1, 1] or 'emissiveFactor' in mc.liveness:\n        if block:\n            block = mc.adjoin({\n                'node': 'MixRGB',\n                'prop.blend_type': 'MULTIPLY',\n                'input.Fac': Value(1),\n                'input.Color1': block,\n                'input.Color2': Value(factor + [1], record_to='emissiveFactor'),\n            })\n        else:\n            if factor == [0, 0, 0] and 'emissiveFactor' not in mc.liveness:\n                block = None\n            else:\n                block = Value(factor + [1], record_to='emissiveFactor')\n\n    if block:\n        block = mc.adjoin({\n            'node': 'Emission',\n            'input.Color': block,\n        })\n\n    return block\n\n\ndef create_alpha_block(mc):\n    alpha_mode = mc.material.get('alphaMode', 'OPAQUE')\n    double_sided = mc.material.get('doubleSided', False) or mc.op.options['always_doublesided']\n\n    if alpha_mode not in ['OPAQUE', 'MASK', 'BLEND']:\n        alpha_mode = 'OPAQUE'\n\n    # Create an empty block with the baseColor/diffuse texture's alpha\n    if alpha_mode != 'OPAQUE' and getattr(mc, 'img_node', None):\n        block = Block.empty(0, 0)\n        block.outputs = [mc.img_node.outputs[1]]\n    else:\n        block = None\n\n    # Alpha cutoff in MASK mode\n    if alpha_mode == 'MASK' and block:\n        alpha_cutoff = mc.material.get('alphaCutoff', 0.5)\n        block = mc.adjoin({\n            'node': 'Math',\n            'prop.operation': 'GREATER_THAN',\n            'input.0': block,\n            'input.1': Value(alpha_cutoff, record_to='alphaCutoff'),\n        })\n\n    # Handle doublesidedness\n    if not double_sided:\n        sided_block = mc.adjoin({\n            'node': 'NewGeometry',\n        })\n        sided_block = mc.adjoin({\n            'node': 'Math',\n            'prop.operation': 'SUBTRACT',\n            'input.0': Value(1),\n            'output.Backfacing/input.1': sided_block,\n        })\n        if block:\n            block = mc.adjoin({\n                'node': 'Math',\n                'prop.operation': 'MULTIPLY',\n                'input.1': block,\n                'input.0': sided_block,\n            })\n        else:\n            block = sided_block\n\n    if block:\n        transparent_block = mc.adjoin({\n            'node': 'BsdfTransparent',\n        })\n\n        alpha_block = Block.col_align_right([block, transparent_block])\n        alpha_block.outputs = [block.outputs[0], transparent_block.outputs[0]]\n        block = alpha_block\n\n    return block\n\n\ndef create_shaded(mc):\n    if mc.type == 'metalRough':\n        return create_metalRough_pbr(mc)\n    elif mc.type == 'specGloss':\n        return create_specGloss_pbr(mc)\n    elif mc.type == 'unlit':\n        return create_unlit(mc)\n    else:\n        assert(False)\n\n\ndef create_metalRough_pbr(mc):\n    params = {\n        'node': 'BsdfPrincipled',\n        'dim': (200, 540),\n    }\n\n    base_color_block = create_base_color(mc)\n    if base_color_block:\n        params['input.Base Color'] = base_color_block\n\n    metal_roughness_block = create_metal_roughness(mc)\n    if metal_roughness_block:\n        params['output.0/input.Metallic'] = metal_roughness_block\n        params['output.1/input.Roughness'] = metal_roughness_block\n\n    normal_block = create_normal_block(mc)\n    if normal_block:\n        params['input.Normal'] = normal_block\n\n    return mc.adjoin(params)\n\n\ndef create_specGloss_pbr(mc):\n    try:\n        bpy.context.scene.render.engine = 'BLENDER_EEVEE'\n        node = mc.tree.nodes.new('ShaderNodeEeveeSpecular')\n        mc.tree.nodes.remove(node)\n        has_specular_node = True\n    except Exception:\n        has_specular_node = False\n\n    if has_specular_node:\n        params = {\n            'node': 'EeveeSpecular',\n            'dim': (200, 540),\n        }\n    else:\n        params = {\n            'node': 'Group',\n            'group': 'pbrSpecularGlossiness',\n            'dim': (200, 540),\n        }\n\n    diffuse_block = create_diffuse(mc)\n    if diffuse_block:\n        params['input.Base Color'] = diffuse_block\n\n    spec_rough_block = create_spec_roughness(mc)\n    if spec_rough_block:\n        params['output.0/input.Specular'] = spec_rough_block\n        params['output.1/input.Roughness'] = spec_rough_block\n\n    normal_block = create_normal_block(mc)\n    if normal_block:\n        params['input.Normal'] = normal_block\n\n    if has_specular_node:\n        occlusion_block = create_occlusion_block(mc)\n        if occlusion_block:\n            params['output.0/input.Ambient Occlusion'] = occlusion_block\n\n    return mc.adjoin(params)\n\n\ndef create_unlit(mc):\n    params = {\n        # TODO: pick a better node?\n        'node': 'Emission',\n    }\n\n    base_color_block = create_base_color(mc)\n    if base_color_block:\n        params['input.Color'] = base_color_block\n\n    return mc.adjoin(params)\n\n\ndef create_base_color(mc):\n    block = None\n    if 'baseColorTexture' in mc.pbr:\n        block = create_texture_block(\n            mc,\n            'baseColorTexture',\n            mc.pbr['baseColorTexture'],\n        )\n        block.img_node.label = 'BASE COLOR'\n        # Remember for alpha value\n        mc.img_node = block.img_node\n\n    for color_set_num in range(0, mc.op.material_infos[mc.idx].num_color_sets):\n        vert_color_block = mc.adjoin({\n            'node': 'Attribute',\n            'prop.attribute_name': 'COLOR_%d' % color_set_num,\n        })\n        if block:\n            block = mc.adjoin({\n                'node': 'MixRGB',\n                'prop.blend_type': 'MULTIPLY',\n                'input.Fac': Value(1),\n                'input.Color1': block,\n                'input.Color2': vert_color_block,\n            })\n        else:\n            block = vert_color_block\n\n    factor = mc.pbr.get('baseColorFactor', [1, 1, 1, 1])\n    if factor != [1, 1, 1, 1] or 'baseColorFactor' in mc.liveness:\n        if block:\n            block = mc.adjoin({\n                'node': 'MixRGB',\n                'prop.blend_type': 'MULTIPLY',\n                'input.Fac': Value(1),\n                'input.Color1': block,\n                'input.Color2': Value(factor, record_to='baseColorFactor'),\n            })\n        else:\n            block = Value(factor, record_to='baseColorFactor')\n\n    return block\n\n\ndef create_diffuse(mc):\n    block = None\n    if 'diffuseTexture' in mc.pbr:\n        block = create_texture_block(\n            mc,\n            'diffuseTexture',\n            mc.pbr['diffuseTexture'],\n        )\n        block.img_node.label = 'DIFFUSE'\n        # Remember for alpha value\n        mc.img_node = block.img_node\n\n    for color_set_num in range(0, mc.op.material_infos[mc.idx].num_color_sets):\n        vert_color_block = mc.adjoin({\n            'node': 'Attribute',\n            'prop.attribute_name': 'COLOR_%d' % color_set_num,\n        })\n        if block:\n            block = mc.adjoin({\n                'node': 'MixRGB',\n                'prop.blend_type': 'MULTIPLY',\n                'input.Fac': Value(1),\n                'input.Color1': block,\n                'input.Color2': vert_color_block,\n            })\n        else:\n            block = vert_color_block\n\n    factor = mc.pbr.get('diffuseFactor', [1, 1, 1, 1])\n    if factor != [1, 1, 1, 1] or 'diffuseFactor' in mc.liveness:\n        if block:\n            block = mc.adjoin({\n                'node': 'MixRGB',\n                'prop.blend_type': 'MULTIPLY',\n                'input.Fac': Value(1),\n                'input.Color1': block,\n                'input.Color2': Value(factor, record_to='diffuseFactor'),\n            })\n        else:\n            block = Value(factor, record_to='diffuseFactor')\n\n    return block\n\n\ndef create_metal_roughness(mc):\n    block = None\n    if 'metallicRoughnessTexture' in mc.pbr:\n        tex_block = create_texture_block(\n            mc,\n            'metallicRoughnessTexture',\n            mc.pbr['metallicRoughnessTexture'],\n        )\n        tex_block.img_node.label = 'METALLIC ROUGHNESS'\n        tex_block.img_node.color_space = 'NONE'\n\n        block = mc.adjoin({\n            'node': 'SeparateRGB',\n            'input.Image': tex_block,\n        })\n        block.outputs = [block.outputs['B'], block.outputs['G']]\n\n    metal_factor = mc.pbr.get('metallicFactor', 1)\n    rough_factor = mc.pbr.get('roughnessFactor', 1)\n\n    if not block:\n        return [\n            Value(metal_factor, record_to='metallicFactor'),\n            Value(rough_factor, record_to='roughFactor'),\n        ]\n\n    if metal_factor != 1 or 'metallicFactor' in mc.liveness:\n        metal_factor_options = {\n            'node': 'Math',\n            'prop.operation': 'MULTIPLY',\n            'output.0/input.0': block,\n            'input.1': Value(metal_factor, record_to='metallicFactor'),\n        }\n    else:\n        metal_factor_options = {}\n    if rough_factor != 1 or 'roughnessFactor' in mc.liveness:\n        rough_factor_options = {\n            'node': 'Math',\n            'prop.operation': 'MULTIPLY',\n            'output.1/input.0': block,\n            'input.1': Value(rough_factor, record_to='roughnessFactor'),\n        }\n    else:\n        rough_factor_options = {}\n\n    return mc.adjoin_split(metal_factor_options, rough_factor_options, block)\n\n\ndef create_spec_roughness(mc):\n    block = None\n    if 'specularGlossinessTexture' in mc.pbr:\n        block = create_texture_block(\n            mc,\n            'specularGlossinessTexture',\n            mc.pbr['specularGlossinessTexture'],\n        )\n        block.img_node.label = 'SPECULAR GLOSSINESS'\n\n    spec_factor = mc.pbr.get('specularFactor', [1, 1, 1]) + [1]\n    gloss_factor = mc.pbr.get('glossinessFactor', 1)\n\n    if not block:\n        return [\n            Value(spec_factor, record_to='specularFactor'),\n            Value(gloss_factor, record_to='glossinessFactor'),\n        ]\n\n    if spec_factor != [1, 1, 1, 1] or 'specularFactor' in mc.liveness:\n        spec_factor_options = {\n            'node': 'MixRGB',\n            'prop.operation': 'MULTIPLY',\n            'input.Fac': Value(1),\n            'output.Color/input.Color1': block,\n            'input.Color2': Value(spec_factor, record_to='specularFactor'),\n        }\n    else:\n        spec_factor_options = {}\n    if gloss_factor != 1 or 'glossinessFactor' in mc.liveness:\n        gloss_factor_options = {\n            'node': 'Math',\n            'prop.operation': 'MULTIPLY',\n            'output.Alpha/input.0': block,\n            'input.1': Value(gloss_factor, record_to='glossinessFactor'),\n        }\n    else:\n        gloss_factor_options = {}\n\n    block = mc.adjoin_split(spec_factor_options, gloss_factor_options, block)\n\n    # Convert glossiness to roughness\n    return mc.adjoin_split(None, {\n        'node': 'Math',\n        'prop.operation': 'SUBTRACT',\n        'input.0': Value(1.0),\n        'output.1/input.1': block,\n    }, block)\n\n\ndef create_normal_block(mc):\n    if 'normalTexture' in mc.material:\n        tex_block = create_texture_block(\n            mc,\n            'normalTexture',\n            mc.material['normalTexture'],\n        )\n        tex_block.img_node.label = 'NORMAL'\n        tex_block.img_node.color_space = 'NONE'\n\n        return mc.adjoin({\n            'node': 'NormalMap',\n            'prop.uv_map': 'TEXCOORD_%d' % mc.material['normalTexture'].get('texCoord', 0),\n            'input.Strength': Value(mc.material['normalTexture'].get('scale', 1), record_to='normalTexture/scale'),\n            'input.Color': tex_block,\n        })\n    else:\n        return None\n\n\ndef create_occlusion_block(mc):\n    if 'occlusionTexture' in mc.material:\n        block = create_texture_block(\n            mc,\n            'occlusionTexture',\n            mc.material['occlusionTexture'],\n        )\n        block.img_node.label = 'OCCLUSION'\n        block.img_node.color_space = 'NONE'\n\n        block = block = mc.adjoin({\n            'node': 'SeparateRGB',\n            'input.Image': block,\n        })\n\n        strength = mc.material['occlusionTexture'].get('strength', 1)\n        if strength != 1 or 'occlusionTexture/strength' in mc.liveness:\n            block = block = mc.adjoin({\n                'node': 'Math',\n                'prop.operation': 'MULTIPLY',\n                'input.0': block,\n                'input.1': Value(strength, record_to='occlusionTexture/strength'),\n            })\n\n        return block\n    else:\n        return None\n\n\nclass MaterialCreator:\n    \"\"\"\n    Work-horse for creating nodes and automatically laying out blocks.\n    \"\"\"\n    def new_node(self, opts):\n        new_node = self.tree.nodes.new('ShaderNode' + opts['node'])\n        new_node.width = 140\n        new_node.height = 100\n\n        if 'group' in opts:\n            new_node.node_tree = self.op.get('node_group', opts['group'])\n\n        def str_or_int(x):\n            try:\n                return int(x)\n            except ValueError:\n                return x\n\n        input_blocks = []\n        for key, val in opts.items():\n            if key.startswith('input.'):\n                input_key = str_or_int(key[len('input.'):])\n                input_block = self.connect(val, 0, new_node, 'inputs', input_key)\n                if input_block and input_block not in input_blocks:\n                    input_blocks.append(input_block)\n\n            elif key.startswith('output.'):\n                if '/' in key:\n                    output_part, input_part = key.split('/')\n                    output_key = str_or_int(output_part[len('output.'):])\n                    input_key = str_or_int(input_part[len('input.'):])\n                    input_block = self.connect(val, output_key, new_node, 'inputs', input_key)\n                    if input_block and input_block not in input_blocks:\n                        input_blocks.append(input_block)\n\n                else:\n                    output_key = str_or_int(key[len('output.'):])\n                    input_block = self.connect(val, 0, new_node, 'outputs', output_key)\n                    if input_block and input_block not in input_blocks:\n                        input_blocks.append(input_block)\n\n            elif key.startswith('prop.'):\n                prop_name = key[len('prop.'):]\n                setattr(new_node, prop_name, val)\n\n            elif key == 'dim':\n                new_node.width, new_node.height = val\n\n        return new_node, input_blocks\n\n    def adjoin(self, opts):\n        \"\"\"\n        Adjoins a new node. All the blocks that are used as inputs to it are\n        laid out in a column to its left.\n\n        [input1] -> [new_node]\n        [input2] ->\n        ...      ->\n        \"\"\"\n        new_node, input_blocks = self.new_node(opts)\n\n        input_block = Block.col_align_right(input_blocks)\n        block = Block.row_align_center([input_block, new_node])\n        block.outputs = new_node.outputs\n\n        return block\n\n    def adjoin_split(self, opts1, opts2, left_block):\n        \"\"\"\n        Adjoins at-most-two new nodes (either or both can be missing). They are\n        laid out in a column with left_block to their left. Return a block with\n        two outputs; the first is the output of the first block, or the first\n        output of left_block if missing; the second is the first output of the\n        second block, or the second of left_block if missing.\n\n        [left_block] -> [block1] ->\n                     -> [block2] ->\n        \"\"\"\n        if not opts1 and not opts2:\n            return left_block\n\n        outputs = []\n        if opts1:\n            block1, __input_blocks = self.new_node(opts1)\n            outputs.append(block1.outputs[0])\n        else:\n            block1 = Block.empty()\n            outputs.append(left_block.outputs[0])\n        if opts2:\n            block2, __input_blocks = self.new_node(opts2)\n            outputs.append(block2.outputs[0])\n        else:\n            block2 = Block.empty()\n            outputs.append(left_block.outputs[1])\n\n        split_block = Block.col_align_right([block1, block2])\n        block = Block.row_align_center([left_block, split_block])\n        block.outputs = outputs\n\n        return block\n\n    def connect(self, connector, connector_key, node, socket_type, socket_key):\n        \"\"\"\n        Connect a connector, which may be either a socket or a Value (or\n        nothing) to a socket in the shader node tree.\n        \"\"\"\n        if connector is None:\n            return None\n\n        if type(connector) == Value:\n            connector = [connector]\n\n        if type(connector) == list:\n            self.connect_value(connector[connector_key], node, socket_type, socket_key)\n            return None\n\n        else:\n            assert(socket_type == 'inputs')\n            self.connect_block(connector, connector_key, node.inputs[socket_key])\n            return connector\n\n    def connect_value(self, value, node, socket_type, socket_key):\n        getattr(node, socket_type)[socket_key].default_value = value.value\n        # Record the data path to this socket in our material info so the\n        # animation creator can find it to animate\n        if value.record_to:\n            self.op.material_infos[self.idx].paths[value.record_to] = (\n                'nodes[' + json.dumps(node.name) + ']' +\n                '.' + socket_type + '[' + json.dumps(socket_key) + ']' +\n                '.default_value'\n            )\n\n    def connect_block(self, block, output_key, socket):\n        self.links.new(block.outputs[output_key], socket)\n\n\nclass Value:\n    \"\"\"\n    This is a helper class that tells the material creator to set the value of a\n    socket rather than connect it to another socket. The record_to property, if\n    present, is a key that the path to the socket should be remembered under.\n    Remembering the path to where a Value got written into the node tree is used\n    for animation importing (which needs to know where eg. the baseColorFactor\n    wound up; it could be in a Multiply node or directly in the color socket of\n    the Principled node, etc).\n    \"\"\"\n    def __init__(self, value, record_to=''):\n        self.value = value\n        self.record_to = record_to\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/block.py",
    "content": "from mathutils import Vector\n\n# A _block_ is either a shader node or a rectangular set of smaller blocks\n# represented by the Block class. We can line blocks up in rows, etc. So we use\n# them to make node trees look nice.\n\n\nclass Block:\n    def __init__(self, *blocks):\n        self.children = []\n        # Bounding box of children\n        self.top_left = Vector((0, 0))\n        self.bottom_right = Vector((0, 0))\n\n        for block in blocks:\n            self.add(block)\n\n    def add(self, child):\n        self.children.append(child)\n        if len(self.children) == 1:\n            self.top_left = top_left(child)\n            self.bottom_right = bottom_right(child)\n        else:\n            tl = top_left(child)\n            br = bottom_right(child)\n            self.top_left = Vector((\n                min(self.top_left[0], tl[0]),\n                max(self.top_left[1], tl[1]),\n            ))\n            self.bottom_right = Vector((\n                max(self.bottom_right[0], br[0]),\n                min(self.bottom_right[1], br[1]),\n            ))\n\n    def move_by(self, delta):\n        for child in self.children:\n            move_by(child, delta)\n        self.top_left += delta\n        self.bottom_right += delta\n\n    def pad_top(self, padding):\n        self.top_left = Vector((\n            self.top_left[0],\n            self.top_left[1] + padding,\n        ))\n\n    def center_at_origin(self):\n        center_at_origin(self)\n\n    @staticmethod\n    # Creates an empty block (used for spacing purposes)\n    def empty(width=100, height=140):\n        block = Block()\n        block.bottom_right = Vector((width, -height))\n        return block\n\n    @staticmethod\n    # Aligns the blocks in a center-aligned row. Returns a new Block containing\n    # the blocks.\n    #       .--.         .---.\n    #       |  | .-----. |   |\n    #     --|A |-|  B  |-| C |--\n    #       |  | '-----' |   |\n    #       '--'         '---'\n    def row_align_center(blocks, gutter=100):\n        x, y = 0, 0\n        max_height = max((height(block) for block in blocks), default=0)\n        for block in blocks:\n            w, h = width(block), height(block)\n            dh = (max_height - h) / 2\n            move_to(block, Vector((x, y - dh)))\n            if w != 0:\n                x += w + gutter\n\n        return Block(*blocks)\n\n    @staticmethod\n    # Aligns the blocks in a right-aligned column. Returns a new Block\n    # containing the blocks.\n    #        .--.\n    #        | A|\n    #        '--'\n    #     .-----.\n    #     |  B  |\n    #     '-----'\n    #       .---.\n    #       | C |\n    #       '---'\n    def col_align_right(blocks, gutter=100):\n        x, y = 0, 0\n        max_width = max((width(block) for block in blocks), default=0)\n        for block in blocks:\n            w, h = width(block), height(block)\n            dw = max_width - w\n            move_to(block, Vector((x + dw, y)))\n            if h != 0:\n                y -= h + gutter\n\n        return Block(*blocks)\n\n\ndef top_left(block):\n    if type(block) == Block:\n        return block.top_left\n    return Vector(block.location)\n\n\ndef bottom_right(block):\n    if type(block) == Block:\n        return Vector(block.bottom_right)\n    return block.location + Vector((block.width, -block.height))\n\n\ndef move_by(block, delta):\n    if type(block) == Block:\n        block.move_by(delta)\n    else:\n        block.location += delta\n\n\ndef width(block):\n    tl = top_left(block)\n    br = bottom_right(block)\n    return br[0] - tl[0]\n\n\ndef height(block):\n    tl = top_left(block)\n    br = bottom_right(block)\n    return tl[1] - br[1]\n\n\ndef move_to(block, pos):\n    delta = pos - top_left(block)\n    move_by(block, delta)\n\n\ndef center_at_origin(block):\n    w, h = width(block), height(block)\n    move_to(block, Vector((-w/2, h/2)))\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/groups.json",
    "content": "// !!AUTO-GENERATED!! See node_groups.py\n{\n\"Texcoord CLAMP\":{\"name\":\"Texcoord CLAMP\",\"inputs\":[{\"name\":\"Value\",\"idname\":\"NodeSocketFloat\",\"default_value\":0.5,\"min_value\":-10000.0,\"max_value\":10000.0}],\"outputs\":[{\"name\":\"Value\",\"idname\":\"NodeSocketFloat\",\"default_value\":0.0,\"min_value\":0.0,\"max_value\":0.0}],\"nodes\":[{\"name\":\"Group Input\",\"idname\":\"NodeGroupInput\",\"location\":[-439.2994689941406,-68.00346374511719],\"width\":140.0,\"height\":100.0,\"inputs\":[],\"outputs\":[null,null]},{\"name\":\"Group Output\",\"idname\":\"NodeGroupOutput\",\"location\":[185.09613037109375,-68.60009765625],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[]},{\"name\":\"Math\",\"idname\":\"ShaderNodeMath\",\"location\":[-124.9363784790039,-15.0498046875],\"width\":140.0,\"height\":100.0,\"inputs\":[0.0,null],\"outputs\":[null],\"operation\":\"ADD\",\"use_clamp\":true}],\"links\":[0,0,2,1,2,0,1,0]},\n\"Texcoord MIRRORED_REPEAT\":{\"name\":\"Texcoord MIRRORED_REPEAT\",\"inputs\":[{\"name\":\"Value\",\"idname\":\"NodeSocketFloat\",\"default_value\":0.5,\"min_value\":-10000.0,\"max_value\":10000.0}],\"outputs\":[{\"name\":\"Output\",\"idname\":\"NodeSocketFloat\",\"default_value\":0.0,\"min_value\":-3.4028234663852886e+38,\"max_value\":3.4028234663852886e+38}],\"nodes\":[{\"name\":\"Frame.001\",\"idname\":\"NodeFrame\",\"location\":[244.09178161621094,254.49673461914062],\"width\":557.14794921875,\"height\":380.4698486328125,\"inputs\":[],\"outputs\":[],\"label\":\"Lerp\"},{\"name\":\"Frame\",\"idname\":\"NodeFrame\",\"location\":[-701.92236328125,266.97216796875],\"width\":540.37060546875,\"height\":423.4593811035156,\"inputs\":[],\"outputs\":[],\"label\":\"x mod 2\"},{\"name\":\"Group Input\",\"idname\":\"NodeGroupInput\",\"location\":[-903.9764404296875,8.935855865478516],\"width\":140.0,\"height\":100.0,\"inputs\":[],\"outputs\":[null,null]},{\"name\":\"Math.002\",\"idname\":\"ShaderNodeMath\",\"location\":[136.47003173828125,-47.819976806640625],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[null],\"parent\":0,\"operation\":\"MULTIPLY\",\"use_clamp\":false},{\"name\":\"Math.006\",\"idname\":\"ShaderNodeMath\",\"location\":[-41.066375732421875,-47.826690673828125],\"width\":140.0,\"height\":100.0,\"inputs\":[1.0,null],\"outputs\":[null],\"parent\":0,\"operation\":\"SUBTRACT\",\"use_clamp\":false},{\"name\":\"Math.004\",\"idname\":\"ShaderNodeMath\",\"location\":[316.495361328125,-123.95169067382812],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[null],\"parent\":0,\"operation\":\"ADD\",\"use_clamp\":false},{\"name\":\"Group Output\",\"idname\":\"NodeGroupOutput\",\"location\":[801.2479248046875,68.79402160644531],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[]},{\"name\":\"Math.009\",\"idname\":\"ShaderNodeMath\",\"location\":[364.4091796875,-69.60244750976562],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[null],\"parent\":1,\"operation\":\"ADD\",\"use_clamp\":false},{\"name\":\"Math\",\"idname\":\"ShaderNodeMath\",\"location\":[85.581787109375,-45.44383239746094],\"width\":140.0,\"height\":100.0,\"inputs\":[null,2.0],\"outputs\":[null],\"parent\":1,\"operation\":\"MODULO\",\"use_clamp\":false},{\"name\":\"Math.007\",\"idname\":\"ShaderNodeMath\",\"location\":[23.624755859375,-261.1719970703125],\"width\":140.0,\"height\":100.0,\"inputs\":[null,0.0],\"outputs\":[null],\"parent\":1,\"operation\":\"LESS_THAN\",\"use_clamp\":false},{\"name\":\"Math.008\",\"idname\":\"ShaderNodeMath\",\"location\":[197.54718017578125,-261.172119140625],\"width\":140.0,\"height\":100.0,\"inputs\":[null,2.0],\"outputs\":[null],\"parent\":1,\"operation\":\"MULTIPLY\",\"use_clamp\":false},{\"name\":\"Math.001\",\"idname\":\"ShaderNodeMath\",\"location\":[-76.39251708984375,319.6142578125],\"width\":140.0,\"height\":100.0,\"inputs\":[1.0,null],\"outputs\":[null],\"operation\":\"GREATER_THAN\",\"use_clamp\":false},{\"name\":\"Math.005\",\"idname\":\"ShaderNodeMath\",\"location\":[-75.58465576171875,-64.29931640625],\"width\":140.0,\"height\":100.0,\"inputs\":[2.0,null],\"outputs\":[null],\"operation\":\"SUBTRACT\",\"use_clamp\":false},{\"name\":\"Math.003\",\"idname\":\"ShaderNodeMath\",\"location\":[134.81446838378906,-219.2749481201172],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[null],\"parent\":0,\"operation\":\"MULTIPLY\",\"use_clamp\":false}],\"links\":[2,0,8,0,5,0,6,0,3,0,5,0,13,0,5,1,11,0,4,1,11,0,13,0,4,0,3,0,8,0,7,0,12,0,3,1,2,0,9,0,9,0,10,0,10,0,7,1,7,0,11,1,7,0,12,1,7,0,13,1]},\n\"Texcoord REPEAT\":{\"name\":\"Texcoord REPEAT\",\"inputs\":[{\"name\":\"Value\",\"idname\":\"NodeSocketFloat\",\"default_value\":0.5,\"min_value\":-10000.0,\"max_value\":10000.0}],\"outputs\":[{\"name\":\"Value\",\"idname\":\"NodeSocketFloat\",\"default_value\":0.0,\"min_value\":0.0,\"max_value\":0.0}],\"nodes\":[{\"name\":\"Math.002\",\"idname\":\"ShaderNodeMath\",\"location\":[-111.34617614746094,-22.616287231445312],\"width\":140.0,\"height\":100.0,\"inputs\":[null,0.0],\"outputs\":[null],\"operation\":\"LESS_THAN\",\"use_clamp\":false},{\"name\":\"Math\",\"idname\":\"ShaderNodeMath\",\"location\":[-139.84437561035156,171.7362060546875],\"width\":140.0,\"height\":100.0,\"inputs\":[null,1.0],\"outputs\":[null],\"operation\":\"MODULO\",\"use_clamp\":false},{\"name\":\"Group Input\",\"idname\":\"NodeGroupInput\",\"location\":[-359.3721618652344,35.831207275390625],\"width\":140.0,\"height\":100.0,\"inputs\":[],\"outputs\":[null,null]},{\"name\":\"Math.001\",\"idname\":\"ShaderNodeMath\",\"location\":[85.65119934082031,104.58448791503906],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[null],\"operation\":\"ADD\",\"use_clamp\":false},{\"name\":\"Group Output\",\"idname\":\"NodeGroupOutput\",\"location\":[275.0805358886719,63.34889602661133],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[]}],\"links\":[2,0,1,0,1,0,3,0,3,0,4,0,2,0,0,0,0,0,3,1]},\n\"glTF <-> Blender UV\":{\"name\":\"glTF <-> Blender UV\",\"inputs\":[{\"name\":\"Vector\",\"idname\":\"NodeSocketVector\",\"default_value\":[0.0,0.0,0.0],\"min_value\":-1.0,\"max_value\":1.0}],\"outputs\":[{\"name\":\"Vector\",\"idname\":\"NodeSocketVector\",\"default_value\":[0.0,0.0,0.0],\"min_value\":0.0,\"max_value\":0.0}],\"nodes\":[{\"name\":\"Mapping\",\"idname\":\"ShaderNodeMapping\",\"location\":[0.0,0.0],\"width\":320.0,\"height\":100.0,\"inputs\":[null],\"outputs\":[null],\"translation\":[0.0,1.0,0.0],\"rotation\":[0.0,0.0,0.0],\"scale\":[1.0,-1.0,1.0]},{\"name\":\"Group Output\",\"idname\":\"NodeGroupOutput\",\"location\":[403.02301025390625,-113.90129089355469],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[]},{\"name\":\"Group Input\",\"idname\":\"NodeGroupInput\",\"location\":[-223.15174865722656,-78.30713653564453],\"width\":140.0,\"height\":100.0,\"inputs\":[],\"outputs\":[null,null]}],\"links\":[2,0,0,0,0,0,1,0]},\n\"pbrSpecularGlossiness\":{\"name\":\"pbrSpecularGlossiness\",\"inputs\":[{\"name\":\"Base Color\",\"idname\":\"NodeSocketColor\",\"default_value\":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{\"name\":\"Specular\",\"idname\":\"NodeSocketColor\",\"default_value\":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{\"name\":\"Roughness\",\"idname\":\"NodeSocketFloatFactor\",\"default_value\":0.5,\"min_value\":0.0,\"max_value\":1.0},{\"name\":\"Normal\",\"idname\":\"NodeSocketVector\",\"default_value\":[0.0,0.0,0.0],\"min_value\":-1.0,\"max_value\":1.0}],\"outputs\":[{\"name\":\"Shader\",\"idname\":\"NodeSocketShader\"}],\"nodes\":[{\"name\":\"Diffuse BSDF\",\"idname\":\"ShaderNodeBsdfDiffuse\",\"location\":[-195.1316680908203,203.0072784423828],\"width\":150.0,\"height\":100.0,\"inputs\":[null,null,null],\"outputs\":[null]},{\"name\":\"Group Output\",\"idname\":\"NodeGroupOutput\",\"location\":[408.60809326171875,-0.0],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[]},{\"name\":\"Group Input\",\"idname\":\"NodeGroupInput\",\"location\":[-658.364990234375,4.160030841827393],\"width\":140.0,\"height\":100.0,\"inputs\":[],\"outputs\":[null,null,null,null,null]},{\"name\":\"Add Shader\",\"idname\":\"ShaderNodeAddShader\",\"location\":[96.44002532958984,-22.353256225585938],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[null]},{\"name\":\"Glossy BSDF\",\"idname\":\"ShaderNodeBsdfGlossy\",\"location\":[-208.60809326171875,-203.0072784423828],\"width\":150.0,\"height\":100.0,\"inputs\":[null,null,null],\"outputs\":[null]}],\"links\":[2,0,0,0,2,1,4,0,2,3,0,2,0,0,3,0,4,0,3,1,3,0,1,0,2,2,0,1,2,2,4,1,2,3,4,2]},\n\"pbrSpecularGlossiness.001\":{\"name\":\"pbrSpecularGlossiness.001\",\"inputs\":[{\"name\":\"Diffuse\",\"idname\":\"NodeSocketColor\",\"default_value\":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{\"name\":\"Specular\",\"idname\":\"NodeSocketColor\",\"default_value\":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{\"name\":\"Glossiness\",\"idname\":\"NodeSocketFloatFactor\",\"default_value\":0.5,\"min_value\":0.0,\"max_value\":1.0},{\"name\":\"Normal\",\"idname\":\"NodeSocketVector\",\"default_value\":[0.0,0.0,0.0],\"min_value\":-1.0,\"max_value\":1.0}],\"outputs\":[{\"name\":\"Shader\",\"idname\":\"NodeSocketShader\"}],\"nodes\":[{\"name\":\"Diffuse BSDF\",\"idname\":\"ShaderNodeBsdfDiffuse\",\"location\":[-195.1316680908203,203.0072784423828],\"width\":150.0,\"height\":100.0,\"inputs\":[null,0.0,null],\"outputs\":[null]},{\"name\":\"Glossy BSDF\",\"idname\":\"ShaderNodeBsdfGlossy\",\"location\":[-208.60809326171875,-203.0072784423828],\"width\":150.0,\"height\":100.0,\"inputs\":[null,0.0,null],\"outputs\":[null]},{\"name\":\"Group Output\",\"idname\":\"NodeGroupOutput\",\"location\":[408.60809326171875,-0.0],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null],\"outputs\":[]},{\"name\":\"Group Input\",\"idname\":\"NodeGroupInput\",\"location\":[-658.364990234375,4.160030841827393],\"width\":140.0,\"height\":100.0,\"inputs\":[],\"outputs\":[null,null,null,null,null]},{\"name\":\"Mix Shader\",\"idname\":\"ShaderNodeMixShader\",\"location\":[76.44883728027344,-5.425174713134766],\"width\":140.0,\"height\":100.0,\"inputs\":[null,null,null],\"outputs\":[null]}],\"links\":[3,0,0,0,3,1,1,0,3,2,4,0,0,0,4,2,1,0,4,1,4,0,2,0,3,3,0,2,3,3,1,2]}\n}\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/image.py",
    "content": "import tempfile\nimport os\nimport base64\nimport bpy\nfrom bpy_extras.image_utils import load_image\n\n\ndef create_image(op, idx):\n    image = op.gltf['images'][idx]\n\n    name = image.get('name', 'image-%d' % idx)\n\n    img = None\n    if 'uri' in image:\n        uri = image['uri']\n        is_data_uri = uri[:5] == 'data:'\n        if is_data_uri:\n            found_at = uri.find(';base64,')\n            if found_at == -1:\n                print('error loading image: data URI not base64?')\n                return None\n            else:\n                buffer = base64.b64decode(uri[found_at + 8:])\n        else:\n            if name not in image:\n                name = os.path.basename(uri)\n            # Load the image from disk\n            image_location = os.path.join(op.base_path, uri)\n            img = load_image(image_location)\n            if not img:\n                print('error loading image')\n                return None\n    else:\n        buffer, _stride = op.get('buffer_view', image['bufferView'])\n\n    if not img:\n        # The image data is in buffer, but I don't know how to load an image\n        # from memory. We'll write it to a temp file and load it from there.\n        # Yes, this is a hack :)\n        with tempfile.TemporaryDirectory() as tmpdir:\n            img_path = os.path.join(tmpdir, 'image-%d' % idx)\n            with open(img_path, 'wb') as f:\n                f.write(buffer)\n            img = load_image(img_path)\n            img.pack()  # TODO: should we use as_png?\n\n    img.name = name\n\n    return img\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/node_groups.py",
    "content": "import json\nimport os\nimport bpy\n\n# This file creates the node groups that we use during material creation. Node\n# groups are serialized in groups.json. The data comes from\n# KhronosGroup/glTF-Blender-Exporter/pbr_node/glTF2.blend, plus some\n# modifications.\nthis_dir = os.path.dirname(os.path.abspath(__file__))\nnode_groups_path = os.path.join(this_dir, 'groups.json')\nwith open(node_groups_path, 'r') as f:\n    f.readline()  # throw away comment line\n    GROUP_DATA = json.load(f)\n\n\ndef create_group(op, name):\n    data = GROUP_DATA[name]\n\n    # Before we create a new one, if there is an existing group with the right\n    # name and whose inputs/outputs have the right names, (perhaps from a\n    # previous import), use that instead.\n    if name in bpy.data.node_groups:\n        g = bpy.data.node_groups[name]\n        in_names = [input.name for input in g.inputs]\n        out_names = [output.name for output in g.outputs]\n        matches = (\n            in_names == [y['name'] for y in data['inputs']] and\n            out_names == [y['name'] for y in data['outputs']]\n        )\n        if matches:\n            return g\n\n    g = bpy.data.node_groups.new(data['name'], 'ShaderNodeTree')\n    inputs = g.inputs\n    outputs = g.outputs\n    nodes = g.nodes\n    links = g.links\n\n    # New groups aren't empty; empty it\n    while nodes:\n        nodes.remove(nodes[0])\n\n    def deserialize_sockets(sockets, ys):\n        for y in ys:\n            s = sockets.new(y['idname'], y['name'])\n            if 'default_value' in y:\n                s.default_value = y['default_value']\n            if 'min_value' in y:\n                s.min_value = y['min_value']\n            if 'max_value' in y:\n                s.max_value = y['max_value']\n\n    deserialize_sockets(inputs, data['inputs'])\n    deserialize_sockets(outputs, data['outputs'])\n\n    for y in data['nodes']:\n        node = nodes.new(y['idname'])\n        node.name = y['name']\n        if 'node_tree' in y:\n            node.node_tree = op.get('node_group', y['node_tree'])\n        for attr in [\n            'label', 'operation', 'blend_type', 'use_clamp',\n            'translation', 'rotation', 'scale'\n        ]:\n            if attr in y:\n                setattr(node, attr, y[attr])\n\n        for i, v in enumerate(y['inputs']):\n            if v != None:\n                node.inputs[i].default_value = v\n        for i, v in enumerate(y['outputs']):\n            if v != None:\n                node.outputs[i].default_value = v\n\n    for i, y in enumerate(data['nodes']):\n        if 'parent' in y:\n            nodes[i].parent = nodes[y['parent']]\n\n    for i, y in enumerate(data['nodes']):\n        nodes[i].location = y['location']\n        nodes[i].width = y['width']\n        nodes[i].height = y['height']\n\n    for i in range(0, len(data['links']), 4):\n        a, b, c, d = data['links'][i:i+4]\n        links.new(nodes[a].outputs[b], nodes[c].inputs[d])\n\n    return g\n\n\n# The rest of this file isn't used in the importer but you can use it to edit\n# the serialized groups. First run load() to load all the groups, edit, and then\n# serialize them back to node_groups.json with serialize().\n\ndef load():\n    # Implements *just* enough of ImportGLTF to get create_group to work :)\n    class ProxyOp:\n        def __init__(self):\n            self.node_groups = {}\n\n        def get(self, type, name):\n            assert(type == 'node_group')\n            if name not in self.node_groups:\n                self.node_groups[name] = create_group(self, name)\n            return self.node_groups[name]\n\n    op = ProxyOp()\n    for name in GROUP_DATA.keys():\n        create_group(op, name)\n\n\ndef serialize_group(group):\n    def val(x):\n        if x == None:\n            return x\n        if type(x) in [int, float, bool, list, str]:\n            return x\n        if hasattr(x, '__len__'):\n            return list(x)\n        assert(False)\n\n    def serialize_sockets(sockets):\n        result = []\n        for s in sockets:\n            x = {\n                'name': s.name,\n                'idname': s.bl_socket_idname,\n            }\n            if hasattr(s, 'default_value'):\n                x['default_value'] = val(s.default_value)\n            if hasattr(s, 'min_value'):\n                x['min_value'] = val(s.min_value)\n            if hasattr(s, 'max_value'):\n                x['max_value'] = val(s.max_value)\n            result.append(x)\n        return result\n\n    inputs = serialize_sockets(group.inputs)\n    outputs = serialize_sockets(group.outputs)\n\n    node_to_idx = {}\n    for i, node in enumerate(group.nodes):\n        node_to_idx[node] = i\n\n    nodes = []\n    for node in group.nodes:\n        x = {\n            'name': node.name,\n            'idname': node.bl_idname,\n            'location': val(node.location),\n            'width': node.width,\n            'height': node.height,\n            'inputs': [],\n            'outputs': [],\n        }\n\n        if node.parent:\n            x['parent'] = node_to_idx[node.parent]\n        if hasattr(node, 'label') and node.label != '':\n            x['label'] = node.label\n        if hasattr(node, 'node_tree'):\n            x['node_tree'] = node.node_tree.name\n\n        for attr in [\n            'operation', 'blend_type', 'use_clamp',\n            'translation', 'rotation', 'scale',\n        ]:\n            if hasattr(node, attr):\n                x[attr] = val(getattr(node, attr))\n\n        for input in node.inputs:\n            if input.links or not hasattr(input, 'default_value'):\n                x['inputs'].append(None)\n            else:\n                x['inputs'].append(val(input.default_value))\n        for output in node.outputs:\n            if output.links or not hasattr(output, 'defaultvalue'):\n                x['outputs'].append(None)\n            else:\n                x['outputs'].append(val(output.default_value))\n\n        nodes.append(x)\n\n    links = []\n    for link in group.links:\n        from_node_id = node_to_idx[link.from_node]\n        from_socket_id = list(link.from_node.outputs).index(link.from_socket)\n        to_node_id = node_to_idx[link.to_node]\n        to_socket_id = list(link.to_node.inputs).index(link.to_socket)\n        links += [from_node_id, from_socket_id, to_node_id, to_socket_id]\n\n    return {\n        'name': group.name,\n        'inputs': inputs,\n        'outputs': outputs,\n        'nodes': nodes,\n        'links': links,\n    }\n\n\ndef serialize():\n    groups = {}\n    for group in bpy.data.node_groups:\n        groups[group.name] = serialize_group(group)\n\n    with open(node_groups_path, 'w') as f:\n        f.write('// !!AUTO-GENERATED!! See node_groups.py\\n')\n        f.write('{\\n')\n        keys = list(groups.keys())\n        keys.sort()\n        for k in keys:\n            json.dump(k, f)\n            f.write(':')\n            json.dump(groups[k], f, separators=(',', ':'))\n            if k != keys[-1]:\n                f.write(',')\n            f.write('\\n')\n        f.write('}\\n')\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/precompute.py",
    "content": "from ..mesh import MAX_NUM_COLOR_SETS\n\nclass MaterialInfo:\n    def __init__(self):\n        # The maximum number of color sets used by any primitive with this\n        # material, ie. the smallest n st. no primitive with this material has a\n        # COLOR_n attribute.\n        self.num_color_sets = 0\n        # The set of \"live\" material property names that have to correspond to\n        # some value in the Blender shader tree, because we're going to want to\n        # animate them.\n        self.liveness = set()\n        # Maps a property name to its Blender path suitable for animation. All\n        # live properties must get an entry here.\n        self.paths = {}\n\ndef material_procomputation(op):\n    op.material_infos = {\n        idx: MaterialInfo()\n        for idx, __material in enumerate(op.gltf.get('materials', []))\n    }\n    op.material_infos['default_material'] = MaterialInfo()\n\n    # Find out what vertex colors materials use\n    for mesh in op.gltf.get('meshes', []):\n        for primitive in mesh['primitives']:\n            i = 0\n            while 'COLOR_%d' % i in primitive['attributes']:\n                if i >= MAX_NUM_COLOR_SETS:\n                    break\n\n                mat = primitive.get('material', 'default_material')\n                if i >= op.material_infos[mat].num_color_sets:\n                    op.material_infos[mat].num_color_sets = i + 1\n                i += 1\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/material/texture.py",
    "content": "import json\nfrom . import block\nBlock = block.Block\n\n# Creates a texture block for the given material.\n#\n# The texture block reads the appropriate texcoord set, possibly transforms\n# the UVs for KHR_texture_transform, applies wrapping to the UVs, and\n# samples an image texture. In general, it looks like\n#\n#    [Texcoord] -> [UV Transform] -> [UV Wrap] -> [Img Texture] ->\n\n\ndef create_texture_block(mc, texture_type, info):\n    texture = mc.op.gltf['textures'][info['index']]\n\n    texcoord_set = info.get('texCoord', 0)\n    block = None\n    # We'll create the texcoord block lazily\n    def create_texcoord_block():\n        return mc.adjoin({\n            'node': 'UVMap',\n            'prop.uv_map': 'TEXCOORD_%d' % texcoord_set,\n        })\n\n    # The [UV Transform] block looks like\n    #\n    #    -> [gltf<->Blender] -> [Transform] -> [gltf<->Blender] ->\n    #\n    # the [gltf<->Blender] blocks are Group Nodes that convert between glTF and\n    # Blender UV conventions, ie. (u, v) -> (u, 1-v). [Transform] is a Mapping\n    # Node that applies the actual TRS transform.\n    needs_tex_transform = (\n        'KHR_texture_transform' in info.get('extensions', {}) or\n        # This is set if the texture transform is animated\n        (texture_type + '-transform') in mc.op.material_infos[mc.idx].liveness\n    )\n    if needs_tex_transform:\n        t = info.get('extensions', {}).get('KHR_texture_transform', {})\n\n        texcoord_set = t.get('texCoord', texcoord_set)\n        offset = t.get('offset', [0, 0])\n        rotation = t.get('rotation', 0)\n        scale = t.get('scale', [1, 1])\n\n        # Rotation is counter-clockwise, but in glTF's UV space where Y is down,\n        # which makes it a clockwise rotation in normal terms\n        rotation = -rotation\n\n        # [Texcoord] -> [gltf<->Blender]\n        if not block:\n            block = create_texcoord_block()\n        block = mc.adjoin({\n            'node': 'Group',\n            'group': 'glTF <-> Blender UV',\n            'input.0': block,\n        })\n\n        # -> [Transform]\n        block = mc.adjoin({\n            'node': 'Mapping',\n            'dim': (320, 275),\n            'prop.vector_type': 'POINT',\n            'input.0': block,\n        })\n        mapping_node = block.outputs[0].node\n        mapping_node.translation[0], mapping_node.translation[1] = offset\n        mapping_node.rotation[2] = rotation\n        mapping_node.scale[0], mapping_node.scale[1] = scale\n\n        mc.op.material_infos[mc.idx].paths[texture_type + '-transform'] = (\n            'nodes[' + json.dumps(mapping_node.name) + ']'\n        )\n\n        # -> [gltf<->Blender]\n        block = mc.adjoin({\n            'node': 'Group',\n            'group': 'glTF <-> Blender UV',\n            'input.0': block,\n        })\n\n    if 'sampler' in texture:\n        sampler = mc.op.gltf['samplers'][texture['sampler']]\n    else:\n        sampler = {}\n\n    # Handle the wrapping mode. The Image Texture Node can have a wrapping mode\n    # but it doesn't cover all possibilities in glTF.\n    CLAMP_TO_EDGE = 33071\n    MIRRORED_REPEAT = 33648\n    REPEAT = 10497\n\n    wrap_s = sampler.get('wrapS', REPEAT)\n    wrap_t = sampler.get('wrapT', REPEAT)\n    if wrap_s not in [CLAMP_TO_EDGE, MIRRORED_REPEAT, REPEAT]:\n        print('unknown wrapping mode:', wrap_s)\n        wrap_s = REPEAT\n    if wrap_t not in [CLAMP_TO_EDGE, MIRRORED_REPEAT, REPEAT]:\n        print('unknown wrapping mode:', wrap_t)\n        wrap_t = REPEAT\n\n    if (wrap_s, wrap_t) == (CLAMP_TO_EDGE, CLAMP_TO_EDGE):\n        extension = 'EXTEND'\n    elif (wrap_s, wrap_t) == (REPEAT, REPEAT):\n        extension = 'REPEAT'\n    else:\n        # Blender couldn't handle it. We have to insert the [UV Wrap] block. It\n        # looks like\n        #\n        #                      -> [wrap S] ->\n        #    -> [separate XYZ]                [combine XYZ] ->\n        #                      -> [wrap T] ->\n        #\n        # where the [wrap _] blocks are Group Nodes that compute\n        #\n        #     x -> x mod 1               for REPEAT\n        #\n        #     x -> / y       if y <= 1   for MIRRORED_REPEAT\n        #          \\ 2 - y   if y > 1\n        #            where y = x mod 2\n        #\n        # and where the [wrap _] block is omitted (ie. the value is passed\n        # through) for CLAMP_TO_EDGE because we set the wrapping mode on the\n        # Texture Node to do clamping (the artifacts produced when we use\n        # clamping for the actual wrapping mode are slightly better than if we\n        # used another mode).\n        extension = 'EXTEND'\n\n        if not block:\n            block = create_texcoord_block()\n\n        # -> [separate XYZ]\n        block = mc.adjoin({\n            'node': 'SeparateXYZ',\n            'input.0': block,\n        })\n\n        # -> [wrap S]\n        # -> [wrap T]\n        gltf_to_blender_wrap = dict([\n            (REPEAT, 'Texcoord REPEAT'),\n            (MIRRORED_REPEAT, 'Texcoord MIRRORED_REPEAT'),\n        ])\n        block = mc.adjoin_split(\n            {\n                'node': 'Group',\n                'dim': (230, 100),\n                'group': gltf_to_blender_wrap[wrap_s],\n                'input.0': block,\n            } if wrap_s != CLAMP_TO_EDGE else {},\n            {\n                'node': 'Group',\n                'dim': (230, 100),\n                'group': gltf_to_blender_wrap[wrap_t],\n                'output.1/input.0': block,\n            } if wrap_t != CLAMP_TO_EDGE else {},\n            block,\n        )\n\n        # -> [combine XYZ]\n        block = mc.adjoin({\n            'node': 'CombineXYZ',\n            'output.0/input.0': block,\n            'output.1/input.1': block,\n        })\n\n    # Determine interpolation.\n\n    NEAREST = 9728\n    LINEAR = 9729\n    NEAREST_MIPMAP_NEAREST = 9984\n    LINEAR_MIPMAP_NEAREST = 9985\n    NEAREST_MIPMAP_LINEAR = 9986\n    LINEAR_MIPMAP_LINEAR = 9987\n    AUTO_FILTER = LINEAR  # which one to use if unspecified\n\n    mag_filter = sampler.get('magFilter', AUTO_FILTER)\n    min_filter = sampler.get('minFilter', AUTO_FILTER)\n    if mag_filter not in [NEAREST, LINEAR]:\n        print('unknown texture mag filter:', mag_filter)\n        mag_filter = AUTO_FILTER\n    # Ignore mipmaps.\n    if min_filter in [NEAREST, NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR]:\n        min_filter = NEAREST\n    elif min_filter in [LINEAR, LINEAR_MIPMAP_NEAREST, LINEAR_MIPMAP_LINEAR]:\n        min_filter = LINEAR\n    else:\n        print('unknown texture min filter:', min_filter)\n        min_filter = AUTO_FILTER\n\n    # We can't set the min and mag and filters separately in Blender. Just\n    # prefer linear, unless both were nearest.\n    if (min_filter, mag_filter) == (NEAREST, NEAREST):\n        interpolation = 'Closest'\n    else:\n        interpolation = 'Linear'\n\n    # Find source\n    if 'MSFT_texture_dds' in info.get('extensions', {}):\n        image_id = texture['MSFT_texture_dds']['source']\n        image = mc.op.get('image', image_id)\n    elif 'source' not in texture:\n        image = None\n    else:\n        image_id = texture['source']\n        image = mc.op.get('image', image_id)\n\n    # -> [TexImage]\n    if not block and texcoord_set != 0:\n        block = create_texcoord_block()\n    block = mc.adjoin({\n        'node': 'TexImage',\n        'dim': (220, 250),\n        'prop.image': image,\n        'prop.interpolation': interpolation,\n        'prop.extension': extension,\n        'input.0': block,\n    })\n\n    block.img_node = block.outputs[0].node\n\n    return block\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/mesh.py",
    "content": "import bmesh\nimport bpy\nfrom mathutils import Vector\n\nMAX_NUM_COLOR_SETS = 8\nMAX_NUM_TEXCOORD_SETS = 8\n\ndef create_mesh(op, mesh_spec):\n    idx, primitive_idx = mesh_spec\n\n    mesh = op.gltf['meshes'][idx]\n    primitives = mesh['primitives']\n\n    # The caller can request we generate only one primitive instead of all of them\n    if primitive_idx is not None:\n        primitives = [primitives[primitive_idx]]\n\n    bme = bmesh.new()\n\n    # If any of the materials used in this mesh use COLOR_0 attributes, we need\n    # to pre-emptively create that layer, or else the Attribute node referencing\n    # COLOR_0 in those materials will produce a solid red color. See\n    # material.compute_materials_using_color0, which, note, must be called\n    # before this function.\n    needs_color0 = any(\n        op.material_infos[prim.get('material', 'default_material')].num_color_sets > 0\n        for prim in primitives\n    )\n    if needs_color0:\n        bme.loops.layers.color.new('COLOR_0')\n\n    # Make a list of all the materials this mesh will need; the material on a\n    # face is set by giving an index into this list.\n    materials = list(set(\n        op.get('material', primitive.get('material', 'default_material'))\n        for primitive in primitives\n    ))\n\n    # Add in all the primitives\n    for primitive in primitives:\n        material = op.get('material', primitive.get('material', 'default_material'))\n        material_idx = materials.index(material)\n\n        add_primitive_to_bmesh(op, bme, primitive, material_idx)\n\n    name = mesh_name(op, mesh_spec)\n    me = bpy.data.meshes.new(name)\n    bmesh_to_mesh(bme, me)\n    bme.free()\n\n    # Fill in the material list (we can't do me.materials = materials since this\n    # property is read-only).\n    for material in materials:\n        me.materials.append(material)\n\n    # Set polygon smoothing if the user requested it\n    if op.options['smooth_polys']:\n        for polygon in me.polygons:\n            polygon.use_smooth = True\n\n    me.update()\n\n    if not me.shape_keys:\n        return me\n    else:\n        # Tell op.get not to cache us if we have morph targets; this is because\n        # morph target weights are stored on the mesh instance in glTF, what\n        # would be on the object in Blender. But in Blender shape keys are part\n        # of the mesh. So when an object wants a mesh with morph targets, it\n        # always needs to get a new one. Ergo we lose sharing for meshes with\n        # morph targets.\n        return {\n            'result': me,\n            'do_not_cache_me': True,\n        }\n\n\ndef mesh_name(op, mesh_spec):\n    mesh_idx, primitive_idx = mesh_spec\n    name = op.gltf['meshes'][mesh_idx].get('name', 'meshes[%d]' % mesh_idx)\n    if primitive_idx is not None:\n        # Look for a name on the extras property\n        extras = op.gltf['meshes'][mesh_idx]['primitives'][primitive_idx].get('extras')\n        if type(extras) == dict and type(extras.get('name')) == str and extras['name']:\n            primitive_name = extras['name']\n        else:\n            primitive_name = 'primitives[%d]' % primitive_idx\n        name += '.' + primitive_name\n    return name\n\n\ndef bmesh_to_mesh(bme, me):\n    bme.to_mesh(me)\n\n    # to_mesh ignores normals?\n    normals = [v.normal for v in bme.verts]\n    me.use_auto_smooth = True\n    me.normals_split_custom_set_from_vertices(normals)\n\n    if len(bme.verts.layers.shape) != 0:\n        # to_mesh does NOT create shape keys so if there's shape data we'll have\n        # to do it by hand. The only way I could find to create a shape key was\n        # to temporarily parent me to an object and use obj.shape_key_add.\n        dummy_ob = None\n        try:\n            dummy_ob = bpy.data.objects.new('##dummy-object##', me)\n            dummy_ob.shape_key_add(name='Basis')\n            me.shape_keys.name = me.name\n            for layer_name in bme.verts.layers.shape.keys():\n                dummy_ob.shape_key_add(name=layer_name)\n                key_block = me.shape_keys.key_blocks[layer_name]\n                layer = bme.verts.layers.shape[layer_name]\n\n                for i, v in enumerate(bme.verts):\n                    key_block.data[i].co = v[layer]\n        finally:\n            if dummy_ob:\n                bpy.data.objects.remove(dummy_ob)\n\n\ndef get_layer(bme_layers, name):\n    \"\"\"Gets a layer from a BMLayerCollection, creating it if it does not exist.\"\"\"\n    if name not in bme_layers:\n        return bme_layers.new(name)\n    return bme_layers[name]\n\n\ndef add_primitive_to_bmesh(op, bme, primitive, material_index):\n    \"\"\"Adds a glTF primitive into a bmesh.\"\"\"\n    attributes = primitive['attributes']\n\n    # Early out if there's no POSITION data\n    if 'POSITION' not in attributes:\n        return\n\n    positions = op.get('accessor', attributes['POSITION'])\n\n    if 'indices' in primitive:\n        indices = op.get('accessor', primitive['indices'])\n    else:\n        indices = range(0, len(positions))\n\n    bme_verts = bme.verts\n    bme_edges = bme.edges\n    bme_faces = bme.faces\n\n    convert_coordinates = op.convert_translation\n    if op.options['axis_conversion'] == 'BLENDER_UP':\n        def convert_normal(n):\n            return Vector([n[0], -n[2], n[1]])\n    else:\n        def convert_normal(n):\n            return n\n\n    # The primitive stores vertex attributes in arrays and gives indices into\n    # those arrays\n    #\n    #     Attributes:\n    #       v0 v1 v2 v3 v4 ...\n    #     Indices:\n    #       1 2 4 ...\n    #\n    # We want to add **only those vertices that are used in an edge/tri** to the\n    # bmesh. Because of this and because the bmesh already has some vertices,\n    # when we add the new vertices their index in the bmesh will be different\n    # than their index in the primitive's vertex attribute arrays\n    #\n    #     Bmesh:\n    #       ...pre-existing vertices... v1 v2 v4 ...\n    #\n    # The index into the primitive's vertex attribute array is called the\n    # vertex's p-index (pidx) and the index into the bmesh is called its b-index\n    # (bidx). Remember to use the right index!\n\n    # The pidx of all the vertices that are actually used by the primitive\n    used_pidxs = set(indices)\n    # Contains a pair (bidx, pidx) for every vertex in the primitive\n    vert_idxs = []\n    # pidx_to_bidx[pidx] is the bidx of the vertex with pidx (or -1 if unused)\n    pidx_to_bidx = [-1] * len(positions)\n    bidx = len(bme_verts)\n    for pidx in range(0, len(positions)):\n        if pidx in used_pidxs:\n            bme_verts.new(convert_coordinates(positions[pidx]))\n            vert_idxs.append((bidx, pidx))\n            pidx_to_bidx[pidx] = bidx\n            bidx += 1\n    bme_verts.ensure_lookup_table()\n\n    # Add edges/faces to bmesh\n    mode = primitive.get('mode', 4)\n    edges, tris = edges_and_tris(indices, mode)\n    # NOTE: edges and vertices are in terms of pidxs\n    for edge in edges:\n        try:\n            bme_edges.new((\n                bme_verts[pidx_to_bidx[edge[0]]],\n                bme_verts[pidx_to_bidx[edge[1]]],\n            ))\n        except ValueError:\n            # Ignores dulicate/degenerate edges\n            pass\n    for tri in tris:\n        try:\n            tri = bme_faces.new((\n                bme_verts[pidx_to_bidx[tri[0]]],\n                bme_verts[pidx_to_bidx[tri[1]]],\n                bme_verts[pidx_to_bidx[tri[2]]],\n            ))\n            tri.material_index = material_index\n        except ValueError:\n            # Ignores dulicate/degenerate tris\n            pass\n\n    # Set normals\n    if 'NORMAL' in attributes:\n        normals = op.get('accessor', attributes['NORMAL'])\n        for bidx, pidx in vert_idxs:\n            bme_verts[bidx].normal = convert_normal(normals[pidx])\n\n    # Set vertex colors. Add them in the order COLOR_0, COLOR_1, etc.\n    set_num = 0\n    while 'COLOR_%d' % set_num in attributes:\n        if set_num >= MAX_NUM_COLOR_SETS:\n            print('more than %d COLOR_n attributes; dropping the rest on the floor',\n                MAX_NUM_COLOR_SETS\n            )\n            break\n\n        layer_name = 'COLOR_%d' % set_num\n        layer = get_layer(bme.loops.layers.color, layer_name)\n\n        colors = op.get('accessor', attributes[layer_name])\n\n        # Check whether Blender takes RGB or RGBA colors (old versions only take RGB)\n        num_components = len(colors[0])\n        blender_num_components = len(bme_verts[0].link_loops[0][layer])\n        if num_components == 3 and blender_num_components == 4:\n            # RGB -> RGBA\n            colors = [color+(1,) for color in colors]\n        if num_components == 4 and blender_num_components == 3:\n            # RGBA -> RGB\n            colors = [color[:3] for color in colors]\n            print('No RGBA vertex colors in your Blender version; dropping A component!')\n\n        for bidx, pidx in vert_idxs:\n            for loop in bme_verts[bidx].link_loops:\n                loop[layer] = colors[pidx]\n\n        set_num += 1\n\n    # Set texcoords\n    set_num = 0\n    while 'TEXCOORD_%d' % set_num in attributes:\n        if set_num >= MAX_NUM_TEXCOORD_SETS:\n            print('more than %d TEXCOORD_n attributes; dropping the rest on the floor',\n                MAX_NUM_TEXCOORD_SETS\n            )\n            break\n\n        layer_name = 'TEXCOORD_%d' % set_num\n        layer = get_layer(bme.loops.layers.uv, layer_name)\n\n        uvs = op.get('accessor', attributes[layer_name])\n\n        for bidx, pidx in vert_idxs:\n            # UV transform\n            u, v = uvs[pidx]\n            uv = (u, 1 - v)\n\n            for loop in bme_verts[bidx].link_loops:\n                loop[layer].uv = uv\n\n        set_num += 1\n\n    # Set joints/weights for skinning (multiple sets allow > 4 influences)\n    # TODO: multiple sets are untested!\n    joint_sets = []\n    weight_sets = []\n    set_num = 0\n    while 'JOINTS_%d' % set_num in attributes and 'WEIGHTS_%d' % set_num in attributes:\n        joint_sets.append(op.get('accessor', attributes['JOINTS_%d' % set_num]))\n        weight_sets.append(op.get('accessor', attributes['WEIGHTS_%d' % set_num]))\n        set_num += 1\n    if joint_sets:\n        layer = get_layer(bme.verts.layers.deform, 'Vertex Weights')\n\n        for joint_set, weight_set in zip(joint_sets, weight_sets):\n            for bidx, pidx in vert_idxs:\n                for j in range(0, 4):\n                    weight = weight_set[pidx][j]\n                    if weight != 0.0:\n                        joint = joint_set[pidx][j]\n                        bme_verts[bidx][layer][joint] = weight\n\n    # Set morph target positions (we don't handle normals/tangents)\n    for k, target in enumerate(primitive.get('targets', [])):\n        if 'POSITION' not in target:\n            continue\n\n        layer = get_layer(bme.verts.layers.shape, 'Morph %d' % k)\n\n        morph_positions = op.get('accessor', target['POSITION'])\n\n        for bidx, pidx in vert_idxs:\n            bme_verts[bidx][layer] = convert_coordinates(\n                Vector(positions[pidx]) +\n                Vector(morph_positions[pidx])\n            )\n\n\ndef edges_and_tris(indices, mode):\n    \"\"\"\n    Convert the indices for different primitive modes into a list of edges\n    (pairs of endpoints) and a list of tris (triples of vertices).\n    \"\"\"\n    edges = []\n    tris = []\n    # TODO: only mode TRIANGLES is tested!!\n    if mode == 0:\n        # POINTS\n        pass\n    elif mode == 1:\n        # LINES\n        #   1   3\n        #  /   /\n        # 0   2\n        edges = [tuple(indices[i:i+2]) for i in range(0, len(indices), 2)]\n    elif mode == 2:\n        # LINE LOOP\n        #   1---2\n        #  /     \\\n        # 0-------3\n        edges = [tuple(indices[i:i+2]) for i in range(0, len(indices) - 1)]\n        edges.append((indices[-1], indices[0]))\n    elif mode == 3:\n        # LINE STRIP\n        #   1---2\n        #  /     \\\n        # 0       3\n        edges = [tuple(indices[i:i+2]) for i in range(0, len(indices) - 1)]\n    elif mode == 4:\n        # TRIANGLES\n        #   2     3\n        #  / \\   / \\\n        # 0---1 4---5\n        tris = [tuple(indices[i:i+3]) for i in range(0, len(indices), 3)]\n    elif mode == 5:\n        # TRIANGLE STRIP\n        #   1---3---5\n        #  / \\ / \\ /\n        # 0---2---4\n        def alternate(i, xs):\n            ccw = i % 2 != 0\n            return xs if ccw else (xs[0], xs[2], xs[1])\n        tris = [\n            alternate(i, tuple(indices[i:i+3]))\n            for i in range(0, len(indices) - 2)\n        ]\n    elif mode == 6:\n        # TRIANGLE FAN\n        #   3---2\n        #  / \\ / \\\n        # 4---0---1\n        tris = [\n            (indices[0], indices[i], indices[i+1])\n            for i in range(1, len(indices) - 1)\n        ]\n    else:\n        raise Exception('primitive mode unimplemented: %d' % mode)\n\n    return edges, tris\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/node.py",
    "content": "import os\nimport bpy\nfrom mathutils import Vector, Matrix\nfrom .compat import mul\n\n\ndef realize_vtree(op):\n    \"\"\"Create actual Blender nodes for the vnodes.\"\"\"\n    # Fix for #16\n    try:\n        bpy.ops.object.mode_set(mode='OBJECT')\n    except Exception:\n        pass\n\n    # First pass: depth-first realization of the vnode graph\n    def realize_vnode(vnode):\n        if vnode.type == 'OBJECT':\n            realize_object(op, vnode)\n\n        elif vnode.type == 'ARMATURE':\n            realize_armature(op, vnode)\n\n        elif vnode.type == 'BONE':\n            realize_bone(op, vnode)\n\n        elif vnode.type == 'ROOT':\n            realize_root(op, vnode)\n\n        for child in vnode.children:\n            realize_vnode(child)\n\n        # We enter edit-mode when we realize an armature. On the way back up,\n        # we've finished creating edit bones and can go back to object mode.\n        if vnode.type == 'ARMATURE':\n            bpy.ops.object.mode_set(mode='OBJECT')\n\n            # Unlink it; we'll link this in the right place later on.\n            if bpy.app.version >= (2, 80, 0):\n                ob_collection = bpy.context.scene.collection.objects\n                if vnode.blender_object.name in ob_collection:\n                    ob_collection.unlink(vnode.blender_object)\n            else:\n                bpy.context.scene.objects.unlink(vnode.blender_object)\n\n\n    realize_vnode(op.root_vnode)\n\n    # Second pass for things that require we know the blender_object and\n    # blender_name of the vnodes.\n    def pass2(vnode):\n        if vnode.mesh and vnode.mesh['skin'] != None:\n            obj = vnode.blender_object\n\n            # Create vertex groups.\n            joints = op.gltf['skins'][vnode.mesh['skin']]['joints']\n            for node_id in joints:\n                bone_name = op.node_id_to_vnode[node_id].blender_name\n                obj.vertex_groups.new(name=bone_name)\n\n            # Create the skin modifier.\n            modifier = obj.modifiers.new('Skin', 'ARMATURE')\n            armature_vnode = op.node_id_to_vnode[joints[0]].armature_vnode\n            modifier.object = armature_vnode.blender_object\n            modifier.use_vertex_groups = True\n\n            # We need to constrain the mesh to its armature so that its world\n            # space position is affected only by the world space transform of\n            # the joints and not of the node where it is instantiated, see\n            # glTF/#1195.\n            constraint = obj.constraints.new(type='COPY_TRANSFORMS')\n            constraint.owner_space = 'LOCAL'\n            constraint.target_space = 'LOCAL'\n            constraint.target = armature_vnode.blender_object\n\n            # TODO: investigate this more\n\n        # Set pose for bones that had non-homogeneous scalings\n        if vnode.type == 'BONE' and vnode.posebone_s is not None:\n            blender_object = vnode.armature_vnode.blender_object\n            pose_bone = blender_object.pose.bones[vnode.blender_name]\n            pose_bone.scale = vnode.posebone_s\n\n        for child in vnode.children:\n            pass2(child)\n\n    pass2(op.root_vnode)\n\n    link_everything_into_scene(op)\n\n\ndef realize_object(op, vnode):\n    \"\"\"Create a real Object for an OBJECT vnode.\"\"\"\n    # Create the mesh/camera/light instance\n    data = None\n    if vnode.mesh:\n        data = op.get('mesh', (vnode.mesh['mesh'], vnode.mesh['primitive_idx']))\n\n        # Set instance's morph target weights\n        if vnode.mesh['weights'] and data.shape_keys:\n            keyblocks = data.shape_keys.key_blocks\n            for i, weight in enumerate(vnode.mesh['weights']):\n                if ('Morph %d' % i) in keyblocks:\n                    keyblocks['Morph %d' % i].value = weight\n\n    elif vnode.camera:\n        data = op.get('camera', vnode.camera['camera'])\n\n    elif vnode.light:\n        data = op.get('light', vnode.light['light'])\n\n    obj = bpy.data.objects.new(vnode.name, data)\n    vnode.blender_object = obj\n\n    # Set TRS\n    t, r, s = vnode.trs\n    obj.location = t\n    obj.rotation_mode = 'QUATERNION'\n    obj.rotation_quaternion = r\n    obj.scale = s\n\n    # Set our parent\n    if vnode.parent:\n        if vnode.parent.type == 'BONE':\n            obj.parent = vnode.parent.armature_vnode.blender_object\n            obj.parent_type = 'BONE'\n            obj.parent_bone = vnode.parent.blender_name\n        elif vnode.parent.blender_object:\n            obj.parent = vnode.parent.blender_object\n\n\ndef realize_armature(op, vnode):\n    \"\"\"Create a real Armature for an ARMATURE vnode.\"\"\"\n    # TODO: find a way to avoid using ops and having to change modes\n    bpy.ops.object.add(type='ARMATURE', enter_editmode=True)\n    obj = bpy.context.object\n\n    vnode.blender_object = obj\n    vnode.blender_armature = obj.data\n\n    # Clear our location (ops.object.add puts the new armature at the location\n    # of the 3D Cursor)\n    obj.location = [0, 0, 0]\n\n    if vnode.parent:\n        obj.parent = vnode.parent.blender_object\n\n\ndef realize_bone(op, vnode):\n    \"\"\"Create a real EditBone for a BONE vnode.\"\"\"\n    armature = vnode.armature_vnode.blender_armature\n    editbone = armature.edit_bones.new(vnode.name)\n\n    editbone.use_connect = False\n\n    # Bones transforms are given, not by giving their local-to-parent transform,\n    # but by giving their head, tail, and roll in armature space. So we need the\n    # local-to-armature transform.\n    m = vnode.editbone_local_to_armature\n    editbone.head = mul(m, Vector((0, 0, 0)))\n    editbone.tail = mul(m, Vector((0, vnode.bone_length, 0)))\n    editbone.align_roll(mul(m, Vector((0, 0, 1))) - editbone.head)\n\n    vnode.blender_name = editbone.name\n    # NOTE: can't access this after we leave edit mode\n    vnode.blender_editbone = editbone\n\n    # Set parent\n    if vnode.parent:\n        if getattr(vnode.parent, 'blender_editbone', None):\n            editbone.parent = vnode.parent.blender_editbone\n\n\ndef realize_root(op, vnode):\n    \"\"\"\n    Realize the ROOT if the user requested it (giving it the same filename as\n    the glTF).\n    \"\"\"\n    if not op.options['add_root']:\n        return\n\n    obj = bpy.data.objects.new(os.path.basename(op.filepath), None)\n    vnode.blender_object = obj\n\n\nif bpy.app.version >= (2, 80, 0):\n    def link_vnode_into_scene(vnode, scene):\n        if vnode.blender_object:\n            if vnode.blender_object.name not in scene.collection.objects:\n                scene.collection.objects.link(vnode.blender_object)\nelse:\n    def link_vnode_into_scene(vnode, scene):\n        if vnode.blender_object:\n            try:\n                scene.objects.link(vnode.blender_object)\n            except Exception:\n                # Ignore exception if its already linked\n                pass\n\n\ndef link_tree_into_scene(vnode, scene):\n    link_vnode_into_scene(vnode, scene)\n    for child in vnode.children:\n        link_tree_into_scene(child, scene)\n\n\ndef link_everything_into_scene(op):\n    link_tree_into_scene(op.root_vnode, bpy.context.scene)\n\n    # The renderer is also tied to the scene\n    if bpy.context.scene.render.engine == 'BLENDER_RENDER':\n        # Our materials won't work in BLENDER_RENDER\n        bpy.context.scene.render.engine = 'CYCLES'\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/scene.py",
    "content": "import os\nimport bpy\n\n\ndef link_vnode_into_collection(vnode, collection):\n    if vnode.blender_object:\n        if vnode.blender_object.name not in collection.objects:\n            collection.objects.link(vnode.blender_object)\n\n\ndef link_tree_into_collection(vnode, collection):\n    link_vnode_into_collection(vnode, collection)\n    for child in vnode.children:\n        link_tree_into_collection(child, collection)\n\n\ndef import_scenes_as_collections(op):\n    if getattr(bpy.data, 'collections', None) is None:\n        print(\n            \"Can't import scenes as collections; \"\n            'no collections in this Blender version!'\n        )\n        return\n\n    scenes = op.gltf.get('scenes', [])\n    if not scenes:\n        return\n\n    base_collection = bpy.data.collections.new(os.path.basename(op.filepath))\n\n    default_scene_idx = op.gltf.get('scene')\n    for scene_idx, scene in enumerate(op.gltf.get('scenes', [])):\n        name = scene.get('name', 'scenes[%d]' % scene_idx)\n        if scene_idx == default_scene_idx:\n            name += ' (Default)'\n\n        collection = bpy.data.collections.new(name)\n        base_collection.children.link(collection)\n\n        for node_idx in scene['nodes']:\n            vnode = op.node_id_to_vnode[node_idx]\n\n            # A root node might not be a root vnode (eg. because we inserted an\n            # armature above it). Find the real root.\n            while vnode.parent is not None and vnode.parent.parent is not None:\n                vnode = vnode.parent\n\n            link_tree_into_collection(vnode, collection)\n"
  },
  {
    "path": "addons/io_scene_gltf_ksons/vnode.py",
    "content": "from math import pi\nfrom mathutils import Matrix, Quaternion, Vector, Euler\nfrom .compat import mul\nfrom .mesh import mesh_name\n\n# The node graph in glTF needs to fixed up quite a bit before it will work for\n# Blender. We first create a graph of \"virtual nodes\" to match the graph in the\n# glTF file and then transform it in a bunch of passes to make it suitable for\n# Blender import.\n\nclass VNode:\n    def __init__(self):\n        # The ID of the glTF node this vnode was created from, or None if there\n        # wasn't one\n        self.node_id = None\n        # List of child vnodes\n        self.children = []\n        # Parent vnode, or None for the root\n        self.parent = None\n        # (Vector, Quaternion, Vector) triple of the local-to-parent TRS transform\n        self.trs = (Vector((0, 0, 0)), Quaternion((1, 0, 0, 0)), Vector((1, 1, 1)))\n\n        # What type of Blender object will be created for this vnode: one of\n        # OBJECT, ARMATURE, BONE, or ROOT (for the special vnode that we use the\n        # turn the forest into a tree to make things easier to process).\n        self.type = 'OBJECT'\n\n        # Dicts of instance data\n        self.mesh = None\n        self.camera = None\n        self.light = None\n        # If this node had an instance in glTF but we moved it to another node,\n        # we record where we put it here\n        self.mesh_moved_to = None\n        self.camera_moved_to = None\n        self.light_moved_to = None\n\n        # These will be filled out after realization with the Blender data\n        # created for this vnode.\n        self.blender_object = None\n        self.blender_armature = None\n        self.blender_editbone = None\n        self.blender_name = None\n\n        # The editbone's (Translation, Rotation)\n        self.editbone_tr = None\n        self.posebone_s = None\n        self.editbone_local_to_armature = Matrix.Identity(4)\n        self.bone_length = 0\n        # Correction to apply to the original TRS to get the editbone TR\n        self.correction_rotation = Quaternion((1, 0, 0, 0))\n        self.correction_homscale = 1\n\n\ndef create_vtree(op):\n    initial_vtree(op)\n    insert_armatures(op)\n    move_instances(op)\n    adjust_bones(op)\n\n\n# In the first pass, create the vgraph from the forest from the glTF file,\n# making one OBJECT for each node\n#\n#       OBJ\n#      /  \\\n#     OBJ  OBJ\n#         /  \\\n#       OBJ   OBJ\n#\n# (The ROOT is also added, but we won't draw it)\ndef initial_vtree(op):\n    nodes = op.gltf.get('nodes', [])\n\n    op.node_id_to_vnode = {}\n\n    # Create a vnode for each node\n    for node_id, node in enumerate(nodes):\n        vnode = VNode()\n        vnode.node_id = node_id\n        vnode.name = node.get('name', 'nodes[%d]' % node_id)\n        vnode.trs = get_node_trs(op, node)\n        vnode.type = 'OBJECT'\n\n        if 'mesh' in node:\n            vnode.mesh = {\n                'mesh': node['mesh'],\n                'primitive_idx': None, # use all primitives\n                'skin': node.get('skin'),\n                'weights': node.get('weights', op.gltf['meshes'][node['mesh']].get('weights')),\n            }\n        if 'camera' in node:\n            vnode.camera = {\n                'camera': node['camera'],\n            }\n        if 'KHR_lights_punctual' in node.get('extensions', {}):\n            vnode.light = {\n                'light': node['extensions']['KHR_lights_punctual']['light'],\n            }\n\n        op.node_id_to_vnode[node_id] = vnode\n\n    # Fill in the parent/child relationships\n    for node_id, node in enumerate(nodes):\n        vnode = op.node_id_to_vnode[node_id]\n        for child_id in node.get('children', []):\n            child_vnode = op.node_id_to_vnode[child_id]\n\n            # Prevent cycles\n            assert(child_vnode.parent == None)\n\n            child_vnode.parent = vnode\n            vnode.children.append(child_vnode)\n\n    # Add a root node to make the forest of vnodes into a tree.\n    op.root_vnode = VNode()\n    op.root_vnode.type = 'ROOT'\n\n    for vnode in op.node_id_to_vnode.values():\n        if vnode.parent == None:\n            vnode.parent = op.root_vnode\n            op.root_vnode.children.append(vnode)\n\n\n# There is no special kind of node used for skinning in glTF. Joints are just\n# regular nodes. But in Blender, only a bone can be used for skinning and bones\n# are descendants of armatures.\n#\n# In the second pass we insert enough ARMATURE vnodes into the vtree so that\n# every vnode which is the joint of a skin is a descendant of an ARMATURE. All\n# descendants of ARMATURES are then turned into bones.\n#\n#       OBJ\n#      /  \\\n#    OBJ  ARMA\n#          |\n#         BONE\n#         /  \\\n#      BONE   BONE\ndef insert_armatures(op):\n    # Insert an armature for every skin\n    skins = op.gltf.get('skins', [])\n    for skin_id, skin in enumerate(skins):\n        armature = VNode()\n        armature.name = skin.get('name', 'skins[%d]' % skin_id)\n        armature.type = 'ARMATURE'\n\n        # We're going to find a place to insert the armature. It must be above\n        # all of the joint nodes.\n        vnodes_below = [op.node_id_to_vnode[joint_id] for joint_id in skin['joints']]\n        # Add in the skeleton node too (which we hope is an ancestor of the joints).\n        if 'skeleton' in skin:\n            vnodes_below.append(op.node_id_to_vnode[skin['skeleton']])\n\n        ancestor = lowest_common_ancestor(vnodes_below)\n\n        ancestor_is_joint = ancestor.node_id in skin['joints']\n        if ancestor_is_joint:\n            insert_above(ancestor, armature)\n        else:\n            insert_below(ancestor, armature)\n\n    # Walk down the tree, marking all children of armatures as bones and\n    # deleting any armature which is a descendant of another.\n    def visit(vnode, armature_ancestor):\n        # Make a copy of this because we don't want it to change (when we delete\n        # a vnode) while we're in the middle of iterating it\n        children = list(vnode.children)\n\n        # If we are below an armature...\n        if armature_ancestor:\n            # Found an armature descended of another\n            if vnode.type == 'ARMATURE':\n                remove_vnode(vnode)\n\n            else:\n                vnode.type = 'BONE'\n                vnode.armature_vnode = armature_ancestor\n\n        else:\n            if vnode.type == 'ARMATURE':\n                armature_ancestor = vnode\n\n        for child in children:\n            visit(child, armature_ancestor)\n\n    visit(op.root_vnode, None)\n\n\n# Now we need to enforce Blender's rule that (1) and object may have only one\n# data instance (ie. only one of a mesh or a camera or a light), and (2) a bone\n# may not have a data instance at all. We also need to move all cameras/lights\n# to new children so that we have somewhere to hang the glTF->Blender axis\n# conversion they need.\n#\n#\n#             OBJ               Eg. if there was a mesh and camera on OBJ1\n#            /  \\               we will move the camera to a new child OBJ3\n#        OBJ1   ARMA            (leaving the mesh on OBJ1).\n#         /      |              And if there was a mesh on BONE2 we will move\n#     OBJ3      BONE            the mesh to OBJ4\n#               /  \\\n#            BONE   BONE2\n#                    |\n#                   OBJ4\ndef move_instances(op):\n    def move_instance_to_new_child(vnode, key):\n        inst = getattr(vnode, key)\n        setattr(vnode, key, None)\n\n        if key == 'mesh':\n            id = inst['mesh']\n            name = op.gltf['meshes'][id].get('name', 'meshes[%d]' % id)\n        elif key == 'camera':\n            id = inst['camera']\n            name = op.gltf['cameras'][id].get('name', 'cameras[%d]' % id)\n        elif key == 'light':\n            id = inst['light']\n            lights = op.gltf['extensions']['KHR_lights_punctual']['lights']\n            name = lights[id].get('name', 'lights[%d]' % id)\n        else:\n            assert(False)\n\n        new_child = VNode()\n        new_child.name = name\n        new_child.parent = vnode\n        vnode.children.append(new_child)\n        new_child.type = 'OBJECT'\n\n        setattr(new_child, key, inst)\n        setattr(vnode, key + '_moved_to', [new_child])\n\n        if key in ['camera', 'light']:\n            # Quarter-turn around the X-axis. Needed for cameras or lights that\n            # point along the -Z axis in Blender but glTF says should look along the\n            # -Y axis\n            new_child.trs = (\n                new_child.trs[0],\n                Quaternion((2**(-1/2), 2**(-1/2), 0, 0)),\n                new_child.trs[2]\n            )\n\n        return new_child\n\n\n    def visit(vnode):\n        # Make a copy of this so we don't re-process new children we just made\n        children = list(vnode.children)\n\n        # Always move a camera or light to a child because it needs the\n        # gltf->Blender axis conversion\n        if vnode.camera:\n            move_instance_to_new_child(vnode, 'camera')\n        if vnode.light:\n            move_instance_to_new_child(vnode, 'light')\n\n        if vnode.mesh and vnode.type == 'BONE':\n            move_instance_to_new_child(vnode, 'mesh')\n\n        for child in children:\n            visit(child)\n\n    visit(op.root_vnode)\n\n    # The user can request that meshes be split into their primitives, like this\n    #\n    #       OBJ      =>     OBJ\n    #      (mesh)         /  |  \\\n    #                  OBJ  OBJ  OBJ\n    #                (mesh)(mesh)(mesh)\n    if op.options['split_meshes']:\n        def visit(vnode):\n            children = list(vnode.children)\n\n            if vnode.mesh is not None:\n                num_prims = len(op.gltf['meshes'][vnode.mesh['mesh']]['primitives'])\n                if num_prims > 1:\n                    new_children = []\n                    for prim_idx in range(0, num_prims):\n                        child = VNode()\n                        child.name = mesh_name(op, (vnode.mesh['mesh'], prim_idx))\n                        child.type = 'OBJECT'\n                        child.parent = vnode\n                        child.mesh = {\n                            'mesh': vnode.mesh['mesh'],\n                            'skin': vnode.mesh['skin'],\n                            'weights': vnode.mesh['weights'],\n                            'primitive_idx': prim_idx,\n                        }\n                        new_children.append(child)\n                    vnode.mesh = None\n                    vnode.children += new_children\n                    vnode.mesh_moved_to = new_children\n\n            for child in children:\n                visit(child)\n\n        visit(op.root_vnode)\n\n# Here's the compilcated pass.\n#\n# Brief review: every bone in glTF has a local-to-parent transform T(b;pose).\n# Sometimes we suppress the dependence on the pose and just write T(b). The\n# composition with the parent's local-to-parent, and so on up the armature is\n# the local-to-armature transform\n#\n#     L(b) = T(root) ... T(ppb) T(pb) T(b)\n#\n# where pb is the parent of b, ppb is the grandparent, etc. In Blender the\n# local-to-armature is\n#\n#     LB(b) = E(root) P(root) ... E(ppb) P(ppb) E(pb) P(pb) E(b) P(b)\n#\n# where E(b) is a TR transform for the edit bone and P(b) is a TRS transform for\n# the pose bone.\n#\n# NOTE: I am note entirely sure of that formula.\n#\n# In the rest position P(b;rest) = 1 for all b, so we would like to just make\n# E(b) = T(b;rest), but we can't since T(b;rest) might have a scaling, and we\n# also want to try to rotate T(b) so we can pick which way the Blender\n# octahedorn points.\n#\n# So we're going to change T(b). For every bone b pick a rotation cr(b) and a\n# scalar cs(b) and define the correction matrix for b to be\n#\n#     C(b) = Rot[cr(b)] HomScale[cs(b)]\n#\n# and transform T(b) to\n#\n#     T'(b) = C(pb)^{-1} T(b) C(b)\n#\n# If we compute L'(b) using the T'(b), most of the C terms cancel out and we get\n#\n#     L'(b) = L(b) C(b)\n#\n# This is close enough; we'll be able to cancel off the extra C(b) later.\n#\n# How do we pick C(b)? Assume we've already computed C(pb) and calculate T'(b)\n#\n#       T'(b)\n#     = C(pb)^{-1} T(b) C(b)\n#     = Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]\n#       Trans[t] Rot[r] Scale[s]\n#       Rot[cr(b)] HomScale[cs(b)]\n#     { floating the Trans to the left, combining Rots }\n#     = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]\n#       Rot[cr(pb)^{-1} r] HomScale[1/cs(pb)] Scale[s]\n#       Rot[cr(b)] HomScale[cs(b)]\n#\n# Now assume Scale[s] = HomScale[s] (and s is not 0), ie. the bone has a\n# homogeneous scaling. Then we can rearrange this and get\n#\n#       Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]\n#       Rot[cr(pb)^{-1} r cr(b)]\n#       HomScale[s cs(b) / cs(pb)]\n#\n# Now if we want the rotation to be R we can pick cr(b) = r^{-1} cr(pb) R. We\n# also want the scale to be 1, because again, E(b) has a scaling of 1 in Blender\n# always, so we pick cs(b) = cs(pb) / s.\n#\n# Okay, cool, so this is now a TR matrix and we can identify it with E(b).\n#\n# But what if Scale[s] **isn't** homogeneous? We appear to have no choice but to\n# put it on P(b;loadtime) for some non-rest pose we'll set at load time. This is\n# unfortunate because the rest pose in Blender won't be the same as the rest\n# pose in glTF (and there's inverse bind matrix fallout too).\n#\n# So in that case we'll take C(b) = 1, and set\n#\n#     E(b) = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ] Rot[cr(pb)^{-1} r]\n#     P(b;loadtime) = Scale[s / cs(pb)]\n#\n# So in both cases we now have LB(b) = L'(b).\n#\n# TODO: we can still pick a rotation when the scaling is heterogeneous\n\n# Maps an axis into a rotation carrying that axis into +Y\nAXIS_TO_PLUS_Y = {\n    '-X': Euler([0, 0, -pi/2]).to_quaternion(),\n    '+X': Euler([0, 0, pi/2]).to_quaternion(),\n    '-Y': Euler([pi, 0, 0]).to_quaternion(),\n    '+Y': Euler([0, 0, 0]).to_quaternion(),\n    '-Z': Euler([pi/2, 0, 0]).to_quaternion(),\n    '+Z': Euler([-pi/2, 0, 0]).to_quaternion(),\n}\ndef adjust_bones(op):\n    # List of distances between bone heads (used for computing bone lengths)\n    interbone_dists = []\n\n    def visit_bone(vnode):\n        t, r, s = vnode.trs\n\n        cr_pb_inv = vnode.parent.correction_rotation.conjugated()\n        cs_pb = vnode.parent.correction_homscale\n\n        # Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]\n        editbone_t = mul(cr_pb_inv, t) / cs_pb\n\n        if is_non_degenerate_homscale(s):\n            # s is a homogeneous scaling (ie. scalar mutliplication)\n            s = s[0]\n\n            # cs(b) = cs(pb) / s\n            vnode.correction_homscale = cs_pb / s\n\n            if op.options['bone_rotation_mode'] == 'POINT_TO_CHILDREN':\n                # We always pick a rotation for cr(b) that is, up to sign, a permutation of\n                # the basis vectors. This is necessary for some of the algebra to work out\n                # in animtion importing.\n\n                # General idea: assume we have one child. We want to rotate so\n                # that our tail comes close to the child's head. Out tail lies\n                # on our +Y axis. The child head is going to be Rot[cr(b)^{-1}]\n                # child_t / cs(b) where b is us and child_t is the child's\n                # trs[0]. So we want to choose cr(b) so that this is as close as\n                # possible to +Y, ie. we want to rotate it so that its largest\n                # component is along the +Y axis. Note that only the sign of\n                # cs(b) affects this, not its magnitude (since the largest\n                # component of v, 2v, 3v, etc. are all the same).\n\n                # Pick the targest to rotate towards. If we have one child, use\n                # that.\n                if len(vnode.children) == 1:\n                    target = vnode.children[0].trs[0]\n                elif len(vnode.children) == 0:\n                    # As though we had a child displaced the same way we were\n                    # from our parent.\n                    target = vnode.trs[0]\n                else:\n                    # Mean of all our children.\n                    center = Vector((0, 0, 0))\n                    for child in vnode.children:\n                        center += child.trs[0]\n                    center /= len(vnode.children)\n                    target = center\n                if cs_pb / s < 0:\n                    target = -target\n\n                x, y, z = abs(target[0]), abs(target[1]), abs(target[2])\n                if x > y and x > z:\n                    axis = '-X' if target[0] < 0 else '+X'\n                elif z > x and z > y:\n                    axis = '-Z' if target[2] < 0 else '+Z'\n                else:\n                    axis = '-Y' if target[1] < 0 else '+Y'\n\n                cr_inv = AXIS_TO_PLUS_Y[axis]\n                cr = cr_inv.conjugated()\n\n            elif op.options['bone_rotation_mode'] == 'NONE':\n                cr = Quaternion((1, 0, 0, 0))\n\n            else:\n                assert(False)\n\n            vnode.correction_rotation = cr\n\n            # cr(pb)^{-1} r cr(b)\n            editbone_r = mul(mul(cr_pb_inv, r), cr)\n\n        else:\n            # TODO: we could still use a rotation here.\n            # C(b) = 1\n            vnode.correction_rotation = Quaternion((1, 0, 0, 0))\n            vnode.correction_homscale = 1\n            # E(b) = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ] Rot[cr(pb)^{-1} r]\n            # P(b;loadtime) = Scale[s / cs(pb)]\n            editbone_r = mul(cr_pb_inv, r)\n            vnode.pose_s = s / cs_pb\n\n        vnode.editbone_tr = editbone_t, editbone_r\n        vnode.editbone_local_to_armature = mul(\n            vnode.parent.editbone_local_to_armature,\n            mul(Matrix.Translation(editbone_t), editbone_r.to_matrix().to_4x4())\n        )\n\n        interbone_dists.append(editbone_t.length)\n\n        # Try getting a bone length for our parent. The length that makes its\n        # tail meet our head is considered best. Since the tail always lies\n        # along the +Y ray, the closer we are to the this ray the better our\n        # length will be compared to the legnths chosen by our siblings. This is\n        # measured by the \"goodness\". Amoung siblings with equal goodness, we\n        # pick the smaller length, so the parent's tail will meet the nearest\n        # child.\n        vnode.bone_length_goodness = -99999\n        if vnode.parent.type == 'BONE':\n            t_len = editbone_t.length\n            if t_len > 0.0005:\n                goodness = editbone_t.dot(Vector((0, 1, 0))) / t_len\n                if goodness > vnode.parent.bone_length_goodness:\n                    if vnode.parent.bone_length == 0 or vnode.parent.bone_length > t_len:\n                        vnode.parent.bone_length = t_len\n                    vnode.parent.bone_length_goodness = goodness\n\n        # Recurse\n        for child in vnode.children:\n            if child.type == 'BONE':\n                visit_bone(child)\n\n        # We're on the way back up. Last chance to set our bone length if none\n        # of our children did. Use our parent's, if it has one. Otherwise, use\n        # the average inter-bone distance, if its not 0. Otherwise, just use 1\n        # -_-\n        if not vnode.bone_length:\n            if vnode.parent.bone_length:\n                vnode.bone_length = vnode.parent.bone_length\n            else:\n                avg = sum(interbone_dists) / max(1, len(interbone_dists))\n                if avg > 0.0005:\n                    vnode.bone_length = avg\n                else:\n                    vnode.bone_length = 1\n\n    def visit(vnode):\n        if vnode.type == 'ARMATURE':\n            for child in vnode.children:\n                visit_bone(child)\n        else:\n            for child in vnode.children:\n                visit(child)\n\n    visit(op.root_vnode)\n\n    # Remember that L'(b) = L(b) C(b)? Remember that we had to move any\n    # mesh/camera/light on a bone to an object? That's the perfect place to put\n    # a transform of C(b)^{-1} to cancel out that extra factor!\n    def visit_object_child_of_bone(vnode):\n        t, r, s = vnode.trs\n\n        # This moves us back along the bone, because for some reason Blender\n        # puts us at the tail of the bone, not the head\n        t -= Vector((0, vnode.parent.bone_length, 0))\n\n        #   Rot[cr^{-1}] HomScale[1/cs] Trans[t] Rot[r] Scale[s]\n        # = Trans[ Rot[cr^{-1}] t / cs] Rot[cr^{-1} r] Scale[s / cs]\n        cr_inv = vnode.parent.correction_rotation.conjugated()\n        cs = vnode.parent.correction_homscale\n        t = mul(cr_inv, t) / cs\n        r = mul(cr_inv, r)\n        s /= cs\n\n        vnode.trs = t, r, s\n\n    def visit(vnode):\n        if vnode.type == 'OBJECT' and vnode.parent.type == 'BONE':\n            visit_object_child_of_bone(vnode)\n        for child in vnode.children:\n            visit(child)\n\n    visit(op.root_vnode)\n\n\n# Helper functions below here:\n\ndef get_node_trs(op, node):\n    \"\"\"Gets the TRS proerties from a glTF node JSON object.\"\"\"\n    if 'matrix' in node:\n        m = node['matrix']\n        # column-major to row-major\n        m = Matrix([m[0:4], m[4:8], m[8:12], m[12:16]])\n        m.transpose()\n        loc, rot, sca = m.decompose()\n        # wxyz -> xyzw\n        # convert_rotation will switch back\n        rot = [rot[1], rot[2], rot[3], rot[0]]\n\n    else:\n        sca = node.get('scale', [1.0, 1.0, 1.0])\n        rot = node.get('rotation', [0.0, 0.0, 0.0, 1.0])\n        loc = node.get('translation', [0.0, 0.0, 0.0])\n\n    # Switch glTF coordinates to Blender coordinates\n    sca = op.convert_scale(sca)\n    rot = op.convert_rotation(rot)\n    loc = op.convert_translation(loc)\n\n    return [Vector(loc), Quaternion(rot), Vector(sca)]\n\n\ndef lowest_common_ancestor(vnodes):\n    \"\"\"\n    Compute the lowest common ancestors of vnodes, ie. the lowest node of which\n    all the given vnodes are (possibly impromper) descendants.\n    \"\"\"\n    assert(vnodes)\n\n    def ancestor_list(vnode):\n        \"\"\"\n        Computes the ancestor-list of vnode: the list of all its ancestors\n        starting at the root and ending at vnode itself.\n        \"\"\"\n        chain = []\n        while vnode:\n            chain.append(vnode)\n            vnode = vnode.parent\n        chain.reverse()\n        return chain\n\n    def first_difference(l1, l2):\n        \"\"\"\n        Returns the index of the first difference in two lists, or None if one is\n        a prefix of the other.\n        \"\"\"\n        i = 0\n        while True:\n            if i == len(l1) or i == len(l2):\n                return None\n            if l1[i] != l2[i]:\n                return i\n            i += 1\n\n    # Ancestor list for the lowest common ancestor so far\n    lowest_ancestor_list = ancestor_list(vnodes[0])\n\n    for vnode in vnodes[1:]:\n        cur_ancestor_list = ancestor_list(vnode)\n        d = first_difference(lowest_ancestor_list, cur_ancestor_list)\n        if d is None:\n            if len(cur_ancestor_list) < len(lowest_ancestor_list):\n                lowest_ancestor_list = cur_ancestor_list\n        else:\n            lowest_ancestor_list = lowest_ancestor_list[:d]\n\n    return lowest_ancestor_list[-1]\n\n\ndef insert_above(vnode, new_parent):\n    \"\"\"\n    Inserts new_parent between vnode and its parent. That is, turn\n\n        parent -> sister              parent -> sister\n               -> vnode      into            -> new_parent -> vnode\n               -> sister                     -> sister\n    \"\"\"\n    if not vnode.parent:\n        vnode.parent = new_parent\n        new_parent.parent = None\n        new_parent.children = [vnode]\n    else:\n        parent = vnode.parent\n        i = parent.children.index(vnode)\n        parent.children[i] = new_parent\n        new_parent.parent = parent\n        new_parent.children = [vnode]\n        vnode.parent = new_parent\n\n\ndef insert_below(vnode, new_child):\n    \"\"\"\n    Insert new_child between vnode and its children. That is, turn\n\n        vnode -> child              vnode -> new_child -> child\n              -> child     into                        -> child\n              -> child                                 -> child\n    \"\"\"\n    children = vnode.children\n    vnode.children = [new_child]\n    new_child.parent = vnode\n    new_child.children = children\n    for child in children:\n        child.parent = new_child\n\n\ndef remove_vnode(vnode):\n    \"\"\"\n    Remove vnode from the tree, replacing it with its children. That is, turn\n\n        parent -> sister                  parent -> sister\n               -> vnode -> child   into          -> child\n               -> sister                         -> sister\n    \"\"\"\n    assert(vnode.parent) # will never be called on the root\n\n    parent = vnode.parent\n    children = vnode.children\n\n    i = parent.children.index(vnode)\n    parent.children = (\n        parent.children[:i] +\n        children +\n        parent.children[i+1:]\n    )\n    for child in children:\n        child.parent = parent\n\n    vnode.parent = None\n    vnode.children = []\n\n\ndef is_non_degenerate_homscale(s):\n    \"\"\"Returns true if Scale[s] is multiplication by a non-zero scalar.\"\"\"\n    largest = max(abs(x) for x in s)\n    smallest = min(abs(x) for x in s)\n\n    if smallest < 1e-5:\n        # Too small; consider it zero\n        return False\n    return largest - smallest < largest * 0.001\n"
  },
  {
    "path": "deploy.py",
    "content": "import argparse\nimport os\nimport re\nimport subprocess\n\nimport make_package\n\n\ndef replace_in_file(file, expr, new_substr):\n    lines = []\n    regex = re.compile(expr, re.IGNORECASE)\n    with open(file) as infile:\n        for line in infile:\n            line = regex.sub(new_substr, line)\n            lines.append(line)\n    with open(file, 'w') as outfile:\n        for line in lines:\n            outfile.write(line)\n\n\nthis_dir = os.path.dirname(os.path.abspath(__file__))\n\nparser = argparse.ArgumentParser()\nparser.add_argument('version')\nargs = parser.parse_args()\n\nversion = args.version.split('.')\nversion_string = '.'.join(version)\nversion_tuple = '(%s)' % ', '.join(version)\n\nmain_file = os.path.join(this_dir, 'addons', 'io_scene_gltf_ksons', '__init__.py')\nreadme_file = os.path.join(this_dir, 'README.md')\n\nreplace_in_file(main_file,\n                r\"'version': \\([0-9\\, ]+\\)\",\n                \"'version': {}\".format(version_tuple))\n\nreplace_in_file(readme_file,\n                r'download/v[0-9\\.]+/io_scene_gltf_ksons-[0-9\\.]+.zip',\n                'download/v{}/io_scene_gltf_ksons-{}.zip'.format(version_string, version_string))\n\nos.chdir(this_dir)\nsubprocess.call(['git', 'add', main_file, readme_file])\nsubprocess.call(['git', 'commit', '-m', 'Bump version number to {}'.format(version_string)])\nsubprocess.call(['git', 'tag', 'v{}'.format(version_string)])\n\nmake_package.make_package(suffix=version_string)\n"
  },
  {
    "path": "make_package.py",
    "content": "import os\nimport shutil\nimport tempfile\n\n\ndef make_package(suffix=None):\n    this_dir = os.path.dirname(os.path.abspath(__file__))\n    dist_dir = os.path.join(this_dir, 'dist')\n\n    if not os.path.exists(dist_dir):\n        os.makedirs(dist_dir)\n\n    with tempfile.TemporaryDirectory() as tmpdir:\n        shutil.copytree(\n            os.path.join(this_dir, 'addons', 'io_scene_gltf_ksons'),\n            os.path.join(tmpdir, 'io_scene_gltf_ksons'),\n            ignore=shutil.ignore_patterns('__pycache__'))\n\n        zip_name = 'io_scene_gltf_ksons'\n        if suffix:\n            zip_name += '-' + suffix\n\n        shutil.make_archive(\n            os.path.join('dist', zip_name),\n            'zip',\n            tmpdir)\n\n\nif __name__ == '__main__':\n    make_package()\n"
  },
  {
    "path": "setup.cfg",
    "content": "[flake8]\nmax-line-length = 120"
  },
  {
    "path": "test/README.md",
    "content": "## Testing\n\nThe [glTF Sample Models](https://github.com/KhronosGroup/glTF-Sample-Models) are\nused for automated testing of the importer. A model file is considered to pass\nif importing it doesn't raise an exception.\n\n\n### Instructions\n\nTo run tests. This will fetch the sample models on its first run (be warned,\nthis is a big download). The optional `--exe` argument is to allow you to test\nmultiple Blender versions.\n\n    ./test.py run [--exe BLENDER-EXE-PATH]\n\nTo display the results of the last test run. These are stored in `report.json`\nin this directory\n\n    ./test.py report\n\nTo display the import times from the last test run\n\n    ./test.py report-times\n\nYou can use the exit code from `run` and `report` (success=0) to determine if\nthe tests passed programatically.\n"
  },
  {
    "path": "test/bl_generate_report.py",
    "content": "\"\"\"\nRuns tests and writes the results to the report.json file.\n\nThis should be executed inside Blender, not from normal Python!\n\"\"\"\n\nimport glob\nimport json\nimport os\nfrom timeit import default_timer as timer\nimport sys\n\nimport bpy\n\nprint('bpy.app.version:', bpy.app.version)\nprint('python sys.version:', sys.version)\n\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nsamples_path = os.path.join(base_dir, 'glTF-Sample-Models', '2.0')\nsite_local_path = os.path.join(base_dir, 'site_local')\nreport_path = os.path.join(base_dir, 'report.json')\n\ntests = []\n\nfiles = (\n    glob.glob(samples_path + '/**/*.gltf', recursive=True) +\n    glob.glob(samples_path + '/**/*.glb', recursive=True) +\n    glob.glob(site_local_path + '/**/*.glb', recursive=True) +\n    glob.glob(site_local_path + '/**/*.glb', recursive=True)\n)\n\n# Skip Draco encoded files for now\nfiles = [fn for fn in files if 'Draco' not in fn]\n\nfor filename in files:\n    short_name = os.path.relpath(filename, samples_path)\n    print('\\nTrying ', short_name, '...')\n\n    bpy.ops.wm.read_factory_settings()\n\n    try:\n        start_time = timer()\n        bpy.ops.import_scene.gltf_ksons(filepath=filename)\n        end_time = timer()\n        print('[PASSED]\\n')\n        test = {\n            'filename': short_name,\n            'result': 'PASSED',\n            'timeElapsed': end_time - start_time,\n        }\n\n    except Exception as e:\n        print('[FAILED]\\n')\n        test = {\n            'filename': filename,\n            'result': 'FAILED',\n            'error': str(e),\n        }\n\n    tests.append(test)\n\nreport = {\n    'blenderVersion': list(bpy.app.version),\n    'tests': tests,\n}\n\nwith open(report_path, 'w+') as f:\n    json.dump(report, f, indent=4)\n"
  },
  {
    "path": "test/site_local/.gitignore",
    "content": "*\n!.gitignore\n!README.md\n"
  },
  {
    "path": "test/site_local/README.md",
    "content": "Add your own test files here. They won't be tracked by git.\n"
  },
  {
    "path": "test/test.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nRun and report on automated tests for the importer.\n\nYou can read the test results programmatically (eg. for CI) from the\nreport.json file or by examining the exit code of this script. Possible\nvalues are:\n\n0 - All tests passed\n1 - Some kind of error occurred (as distinct from \"some test failed\")\n3 - At least one test failed\n\"\"\"\n\nimport argparse\nimport json\nimport os\nimport subprocess\nimport sys\n\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nsamples_path = os.path.join(base_dir, 'glTF-Sample-Models', '2.0')\nreport_path = os.path.join(base_dir, 'report.json')\ntest_script = os.path.join(base_dir, 'bl_generate_report.py')\nscripts_dir = os.path.join(base_dir, os.pardir)\n\ndef cmd_get(args=None):\n    \"\"\"Get sample files by initializing git submodules.\"\"\"\n    try:\n        print(\"Checking if we're in a git repo...\")\n        subprocess.run(\n            ['git', 'rev-parse'],\n            cwd=base_dir,\n            check=True\n        )\n    except BaseException:\n        print('Is git installed?')\n        print('Did you get this repo through git (as opposed to eg. a zip)?')\n        raise\n\n    try:\n        print(\"Fetching submodules (WARNING: large download)...\")\n        subprocess.run(\n            ['git', 'submodule', 'update', '--init', '--recursive'],\n            cwd=base_dir,\n            check=True\n        )\n    except BaseException:\n        print(\"Couldn't init submodules. Aborting\")\n        raise\n\n    if not os.path.isdir(samples_path):\n        print(\"Samples still aren't there! Aborting\")\n        raise Exception('no samples after initializing submodules')\n\n    print('Good to go!')\n\n\ndef cmd_run(args):\n    \"\"\"Calls Blender to generate report.json file.\"\"\"\n    if not os.path.isdir(samples_path):\n        print(\"Couldn't find glTF-Sample-Models/2.0/\")\n        print(\"I'll try to fetch it for you...\")\n        cmd_get()\n        print('This step should only happen once.\\n\\n')\n\n    exe = args.exe\n\n    # Print Blender version for debugging\n    try:\n        subprocess.run([exe, '--version'], check=True)\n    except BaseException:\n        print(\"Couldn't run %s\" % exe)\n        print('Check that Blender is installed!')\n        raise\n\n    print()\n\n    # We're going to try to run Blender in a clean-ish environment for testing.\n    # we want to be sure we're using the current state of 'io_scene_gltf_ksons'.\n    # The user scripts variable expects an addons/plugin directory structure\n    # which we have in the projects root directory\n    env = os.environ.copy()\n    env['BLENDER_USER_SCRIPTS'] = scripts_dir\n    subprocess.run(\n        [\n            exe,\n            '-noaudio',  # sound ssystem to None (less output on stdout)\n            '--background',  # run UI-less\n            '--factory-startup',  # factory settings\n            '--addons', 'io_scene_gltf_ksons',  # enable the addon\n            '--python', test_script  # run the test script\n        ],\n        env=env,\n        check=True\n    )\n\n    return cmd_report()\n\n\ndef cmd_report(args=None):\n    \"\"\"Print report from report.json file.\"\"\"\n    with open(report_path) as f:\n        report = json.load(f)\n\n    tests = report['tests']\n\n    num_passed = 0\n    num_failed = 0\n    failures = []\n    ok = '\\033[32m' + 'ok' + '\\033[0m'  # green 'ok'\n    failed = '\\033[31m' + 'FAILED' + '\\033[0m'  # red 'FAILED'\n\n    for test in tests:\n        print('import', test['filename'], '... ', end='')\n        if test['result'] == 'PASSED':\n            print(ok, \"(%.4f s)\" % test['timeElapsed'])\n            num_passed += 1\n        else:\n            print(failed)\n            print(test['error'])\n            num_failed += 1\n            failures.append(test['filename'])\n\n    if failures:\n        print('\\nfailures:')\n        for name in failures:\n            print('   ', name)\n\n    result = ok if num_failed == 0 else failed\n    print(\n        '\\ntest result: %s. %d passed; %d failed\\n' %\n        (result, num_passed, num_failed)\n    )\n\n    exit_code = 0 if num_failed == 0 else 3\n    return exit_code\n\n\ndef cmd_report_times(args=None):\n    \"\"\"Prints the tests sorted by import time.\"\"\"\n    with open(report_path) as f:\n        report = json.load(f)\n\n    test_passed = lambda test: test['result'] == 'PASSED'\n    tests = list(filter(test_passed, report['tests']))\n    tests.sort(key=lambda test: test['timeElapsed'], reverse=True)\n\n    for (num, test) in enumerate(tests, start=1):\n        print('( #%-3d )  % 2.4fs   %s' % (num, test['timeElapsed'], test['filename']))\n\n\np = argparse.ArgumentParser(description='glTF importer tests')\nsubs = p.add_subparsers(title='subcommands')\n\nrun = subs.add_parser('run', help='Run tests and generate report')\nrun.add_argument('--exe', default='blender', help='Blender executable')\nrun.set_defaults(func=cmd_run)\n\nget = subs.add_parser('get-samples', help='Fetch or update samples')\nget.set_defaults(func=cmd_get)\n\nreport = subs.add_parser('report', help='Print last report')\nreport.set_defaults(func=cmd_report)\n\nreport_times = subs.add_parser('report-times', help='Print import times for last report')\nreport_times.set_defaults(func=cmd_report_times)\n\nargv = sys.argv\nif len(argv) == 1:\n    print('assuming you wanted to run the tests\\n')\n    argv.append('run')\nargs = p.parse_args(argv[1:])\nresult = args.func(args)\nif type(result) == int:\n    sys.exit(result)\n"
  }
]