Full Code of ksons/gltf-blender-importer for AI

Repository: ksons/gltf-blender-importer
Branch: master
Commit: db67e6cd832f
Files: 41
Total size: 55.1 MB

Directory structure:
gitextract_6jqv601m/

├── .github/
│   └── issue_template.md
├── .gitignore
├── .gitmodules
├── .travis.yml
├── INSTALL.md
├── LICENSE
├── README.md
├── addons/
│   └── io_scene_gltf_ksons/
│       ├── __init__.py
│       ├── animation/
│       │   ├── __init__.py
│       │   ├── curve.py
│       │   ├── material.py
│       │   ├── morph_weight.py
│       │   ├── node_trs.py
│       │   └── precompute.py
│       ├── buffer.py
│       ├── camera.py
│       ├── compat.py
│       ├── importer.py
│       ├── light.py
│       ├── load.py
│       ├── material/
│       │   ├── __init__.py
│       │   ├── block.py
│       │   ├── groups.json
│       │   ├── image.py
│       │   ├── node_groups.py
│       │   ├── precompute.py
│       │   └── texture.py
│       ├── mesh.py
│       ├── node.py
│       ├── scene.py
│       └── vnode.py
├── deploy.py
├── make_package.py
├── setup.cfg
└── test/
    ├── README.md
    ├── bl_generate_report.py
    ├── data/
    │   ├── fin4_Ref.exr
    │   └── renderScene.blend
    ├── site_local/
    │   ├── .gitignore
    │   └── README.md
    └── test.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/issue_template.md
================================================
<!--

Thanks for filing an issue! If you are having a problem importing a file, please
include a link to the file so we can test it.

-->


================================================
FILE: .gitignore
================================================
# Automated test results
test/report.json

## Generic ignores below here
################################
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# IPython Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# dotenv
.env

# virtualenv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject


================================================
FILE: .gitmodules
================================================
[submodule "test/glTF-Sample-Models"]
	path = test/glTF-Sample-Models
	url = https://github.com/KhronosGroup/glTF-Sample-Models.git


================================================
FILE: .travis.yml
================================================
language: python
python:
  "3.5"

# From michaeldegroot/cats-blender-plugin
before_install:
  - sudo apt-get update -qq
  # install blender from official sources.
  # This will most probably install an outdated Blender version,
  # but it will resolve all the system dependencies Blender needs to run.
  - sudo apt-get install blender

install:
  # Then update blender
  - mkdir tmp && cd tmp
  - wget http://mirror.cs.umn.edu/blender.org/release/Blender2.79/blender-2.79-linux-glibc219-x86_64.tar.bz2
  - tar jxf blender-2.79-linux-glibc219-x86_64.tar.bz2
  - mv blender-2.79-linux-glibc219-x86_64 blender
  - cd ..

script:
  python test/test.py run --exe ./tmp/blender/blender

#deploy:
#  provider: pages
#  skip_cleanup: true
#  github_token: $GITHUB_TOKEN
#  local_dir: output


================================================
FILE: INSTALL.md
================================================
See also the [Blender manual on installing
add-ons](https://docs.blender.org/manual/en/latest/preferences/addons.html).

## Installing from a Release ZIP

Download the latest release from the
[Releases](https://github.com/ksons/gltf-blender-importer/releases) page. It
should be a ZIP file with a name like `io_scene_gltf_ksons-X.Y.Z.zip`.

Open Blender and select **File > User Preferences** (or **Edit > User
Preferences** if that doesn't exist). Change to the **Add-ons** tab and select
**Install Add-on from File...** at the bottom of the screen (or **Install...**
at the top of the screen if that doesn't exist). Pick the ZIP file you
downloaded. The add-on is now installed.

You still need to enable it. In the **Add-ons** tab, put 'gltf' in the search
box and tick the checkbox next to **Import-Export: KSons' glTF 2.0 Importer**.

<img src="./doc/addon-install.png"/>


## Installing from Source

Obtain the source code, e.g.

    git clone https://github.com/ksons/gltf-blender-importer.git

You can create a ZIP to install with the method above by running the script
`make_package.py`. A ZIP file `io_scene_gltf_ksons.zip` will be created in the
`dist/` folder.

Otherwise, find your Blender add-on directory. It is most commonly:

* **On Windows**, `C:\Users\<YOUR USER NAME>\AppData\Roaming\Blender
  Foundation\Blender\<YOUR BLENDER VERSION>\scripts\addons\`
* **On Linux**, `/home/<YOUR USER NAME>/.config/blender/<YOUR BLENDER
  VERSION>/scripts/addons/`
* **On OSX**, `/Users/<YOUR USER NAME>/Library/Application
  Support/Blender/<YOUR BLENDER VERSION>/scripts/addons/`

Alternatively, open Blender, switch to the Python console, and enter
`print(bpy.utils.user_resource('SCRIPTS', 'addons'))` to have it printed for
you.
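For scripted setups, the per-platform paths above can be approximated without opening Blender. A minimal sketch (the function name, the default version string, and the `home`/`platform` parameters are illustrative, not part of the add-on):

```python
import sys
from pathlib import Path

def addon_dir(blender_version='2.79', platform=None, home=None):
    """Best-guess Blender add-on directory for the platforms listed above."""
    platform = platform or sys.platform
    home = Path(home) if home is not None else Path.home()
    if platform.startswith('win'):
        return (home / 'AppData' / 'Roaming' / 'Blender Foundation' /
                'Blender' / blender_version / 'scripts' / 'addons')
    if platform == 'darwin':
        return (home / 'Library' / 'Application Support' / 'Blender' /
                blender_version / 'scripts' / 'addons')
    # Linux and other Unix-likes
    return home / '.config' / 'blender' / blender_version / 'scripts' / 'addons'
```

When in doubt, the `bpy.utils.user_resource` query above is authoritative.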

Then copy (or, for easier development, symbolically link) the
`io_scene_gltf_ksons` folder from the `addons` folder in this repo to your
Blender add-on directory.

Finally, enable the add-on the same way as above.


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2017 Kristian Sons

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
## If you're looking for the official importer included with Blender, go [here](https://github.com/KhronosGroup/glTF-Blender-IO).

<p align="center">
<img src="doc/hero.png" alt="Fox model by PixelMannen, rigging by Tom Kranis">
</p>

<h2 align=center>
gltf-blender-importer
<a href="https://travis-ci.org/ksons/gltf-blender-importer"><img src="https://travis-ci.org/ksons/gltf-blender-importer.svg?branch=master" alt="Build status"/></a>
</h2>

<p align=center>Unofficial Blender importer for glTF 2.0.</p>

<p align=center>
<a href="https://github.com/ksons/gltf-blender-importer/releases/download/v0.5.0/io_scene_gltf_ksons-0.5.0.zip"><img src="./doc/download_button.png"></a>
</p>

### Installation
Click the "Download Add-on" button above to download the ZIP containing the
add-on. In Blender, navigate to **File > User Preferences... > Add-ons** (or
**Edit > User Preferences... > Add-ons**) and install that ZIP with the
**Install Add-on from File...** button (or **Install...** button). Then type
'glTF' in the search bar and tick the checkbox next to **KSons' glTF 2.0
Importer** to enable it.

You can now import glTFs with **File > Import > KSons' glTF 2.0 (.glb/.gltf)**.

<p align="center"><img src="doc/addon-install.png"></p>

See [INSTALL.md](INSTALL.md) for further installation instructions.

### Supported Extensions
* KHR_materials_pbrSpecularGlossiness
* KHR_lights_punctual
* KHR_materials_unlit
* KHR_texture_transform
* MSFT_texture_dds
* EXT_property_animation (extension abandoned upstream)

### Unsupported Features
* Inverse bind matrices are ignored

### Sample Renderings
![BoomBox](doc/boom-box.jpg)
![Corset](doc/corset.jpg)
![Lantern](doc/lantern.jpg)

### See also
Official Importer-Exporter: [glTF-Blender-IO](https://github.com/KhronosGroup/glTF-Blender-IO)


================================================
FILE: addons/io_scene_gltf_ksons/__init__.py
================================================
import json
import os
import struct

import bpy
from bpy.props import StringProperty, BoolProperty, FloatProperty, EnumProperty
from bpy_extras.io_utils import ImportHelper

bl_info = {
    'name': "KSons' glTF 2.0 Importer",
    'author': 'Kristian Sons (ksons), scurest',
    'blender': (2, 80, 0),
    'version': (0, 5, 0),
    'location': "File > Import > KSons' glTF 2.0 (.glb/.gltf)",
    'description': 'Importer for the glTF 2.0 file format.',
    'warning': '',
    'wiki_url': 'https://github.com/ksons/gltf-blender-importer/blob/master/README.md',
    'tracker_url': 'https://github.com/ksons/gltf-blender-importer/issues',
    'category': 'Import-Export'
}

# Supported glTF version
GLTF_VERSION = (2, 0)

# Supported extensions
EXTENSIONS = set((
    'EXT_property_animation',  # tentative, only material properties supported
    'KHR_lights_punctual',
    'KHR_materials_pbrSpecularGlossiness',
    'KHR_materials_unlit',
    'KHR_texture_transform',
    'MSFT_texture_dds',
))

from .importer import Importer

class ImportGLTF(bpy.types.Operator, ImportHelper):
    """Load a glTF 2.0 file."""

    bl_idname = 'import_scene.gltf_ksons'
    bl_label = 'Import glTF'

    filename_ext = '.gltf'
    filter_glob = StringProperty(
        default='*.gltf;*.glb',
        options={'HIDDEN'},
    )

    global_scale = FloatProperty(
        name='Global Scale',
        description=(
            'Scales all linear distances by the given factor. Use to change '
            'units (glTF is in meters)'
        ),
        default=1.0,
    )
    axis_conversion = EnumProperty(
        items=[
            ('BLENDER_UP', 'Blender Up (+Z)', ''),
            ('BLENDER_RIGHT', 'Blender Right (+Y)', ''),
        ],
        name='Up (+Y) to',
        description=(
            "Choose whether to convert coordinates to Blender's up-axis convention "
            'or leave everything in the same order it is in the glTF'
        ),
        default='BLENDER_UP',
    )
    smooth_polys = BoolProperty(
        name='Enable Polygon Smoothing',
        description=(
            'Enable smoothing for all polygons in imported meshes. Suggest '
            'disabling for low-res models'
        ),
        default=True,
    )
    split_meshes = BoolProperty(
        name='Split Meshes into Primitives',
        description=(
            'A glTF mesh is made of pieces called primitives. For example, each primitive '
            'uses only one material. When this option is disabled, one glTF mesh makes '
            'one Blender mesh. When it is enabled, each glTF primitive makes one Blender mesh. '
            'Useful for examining the structure of glTF meshes'
        ),
        default=False,
    )
    bone_rotation_mode = EnumProperty(
        items=[
            ('NONE', "Don't change", ''),
            ('POINT_TO_CHILDREN', 'Point to children', ''),
        ],
        name='Direction',
        description=(
            'Adjusts which direction bones will point towards by applying a rotation '
            'to each bone. Point-to-children uses a heuristic that tries to make bones '
            'point nicely'
        ),
        default='POINT_TO_CHILDREN',
    )
    import_animations = BoolProperty(
        name='Import Animations',
        description=(
            'Whether to import animations. Look for them in the NLA editor'
        ),
        default=True,
    )
    framerate = FloatProperty(
        name='Frames/second',
        description=(
            'The Blender animation frame corresponding to the glTF time is computed '
            "as framerate * t. Negative values or zero mean to use the current scene's "
            'framerate'
        ),
        default=0.0,
    )
    always_doublesided = BoolProperty(
        name='Always Double-Sided',
        description=(
            'Make all materials double-sided, even if the glTF says they should be '
            'single-sided.\n'
            'Single-sidedness (i.e. backface culling enabled) is simulated in Blender '
            'using alpha, which is a somewhat ugly hack'
        ),
        default=True,
    )
    add_root = BoolProperty(
        name='Add Root Node',
        description=(
            'When enabled, everything in the glTF file will be placed under a new '
            'root node with the name of the .gltf/.glb file'
        ),
        default=True,
    )
    import_scenes_as_collections = BoolProperty(
        name='Import Scenes as Collections',
        description=(
            'When enabled, import glTF scenes as Blender collections (requires Blender '
            '>= 2.8). When disabled, the glTF scenes are ignored.\n\n'
            'Note that all objects are always placed in the current Blender scene'
        ),
        default=False,
    )

    def draw(self, context):
        layout = self.layout

        col = layout.box().column()
        col.label(text='Units:', icon='EMPTY_DATA')
        col.prop(self, 'axis_conversion')
        col.prop(self, 'global_scale')

        col = layout.box().column()
        col.label(text='Mesh:', icon='MESH_DATA')
        col.prop(self, 'smooth_polys')
        col.prop(self, 'split_meshes')

        col = layout.box().column()
        col.label(text='Bones:', icon='BONE_DATA')
        col.prop(self, 'bone_rotation_mode')

        col = layout.box().column()
        col.label(text='Animation:', icon='POSE_HLT')
        col.prop(self, 'import_animations')
        col.prop(self, 'framerate')

        col = layout.box().column()
        col.label(text='Materials:', icon='MATERIAL_DATA')
        col.prop(self, 'always_doublesided')

        col = layout.box().column()
        col.label(text='Scene:', icon='SCENE_DATA')
        col.prop(self, 'add_root')
        col.prop(self, 'import_scenes_as_collections')

    def execute(self, context):
        imp = Importer(self.filepath, self.as_keywords())
        imp.do_import()
        return {'FINISHED'}


# Add to a menu
def menu_func_import(self, context):
    self.layout.operator(ImportGLTF.bl_idname, text="KSons' glTF 2.0 (.glb/.gltf)")


def register():
    if bpy.app.version >= (2, 80, 0):
        bpy.utils.register_class(ImportGLTF)
        bpy.types.TOPBAR_MT_file_import.append(menu_func_import)
    else:
        bpy.utils.register_module(__name__)
        bpy.types.INFO_MT_file_import.append(menu_func_import)


def unregister():
    if bpy.app.version >= (2, 80, 0):
        bpy.types.TOPBAR_MT_file_import.remove(menu_func_import)
        bpy.utils.unregister_class(ImportGLTF)
    else:
        bpy.utils.unregister_module(__name__)
        bpy.types.INFO_MT_file_import.remove(menu_func_import)


if __name__ == '__main__':
    register()
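The `framerate` option above maps glTF times (seconds) to Blender frames as frame = framerate * t, with non-positive values falling back to the scene framerate. A standalone sketch of just that mapping (the function name and `scene_fps` parameter are illustrative; `scene_fps` stands in for `bpy.context.scene.render.fps`):

```python
def gltf_time_to_frame(t, framerate, scene_fps=24.0):
    """Map a glTF animation time in seconds to a Blender frame number.

    A non-positive framerate falls back to the scene framerate, mirroring
    the behaviour described by the importer's 'Frames/second' option.
    """
    if framerate <= 0:
        framerate = scene_fps
    return framerate * t
```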


================================================
FILE: addons/io_scene_gltf_ksons/animation/__init__.py
================================================
import json
import bpy

def quote(s):
    """Quote a string with double-quotes."""
    return json.dumps(s)

from .precompute import animation_precomputation
from .node_trs import add_node_trs_animation
from .morph_weight import add_morph_weight_animation
from .material import add_material_animation

def add_animations(op):
    for anim_info in op.animation_info:
        for node_id in anim_info.node_trs:
            add_node_trs_animation(op, anim_info, node_id)

        for node_id in anim_info.morph_weight:
            add_morph_weight_animation(op, anim_info, node_id)

        for material_id in anim_info.material:
            add_material_animation(op, anim_info, material_id)

    create_nla_tracks(op)


def create_nla_tracks(op):
    """
    Put all the actions in NLA tracks, each animation one after the other in one
    big timeline.
    """
    def get_track(bl_thing, track_name):
        if not bl_thing.animation_data:
            bl_thing.animation_data_create()

        if track_name not in bl_thing.animation_data.nla_tracks:
            track = bl_thing.animation_data.nla_tracks.new()
            track.name = track_name

        return bl_thing.animation_data.nla_tracks[track_name]

    t = 0.0  # Start time in the big timeline
    padding = 5.0  # Padding time between animations

    for anim_info in op.animation_info:
        anim_id = anim_info.anim_id
        anim_name = op.gltf['animations'][anim_id].get('name', 'animations[%d]' % anim_id)

        for object_name, action in anim_info.trs_actions.items():
            bl_object = bpy.data.objects[object_name]
            track = get_track(bl_object, 'Position')
            track.strips.new(anim_name, t, action)

        for object_name, action in anim_info.morph_actions.items():
            shape_keys = bpy.data.objects[object_name].data.shape_keys
            track = get_track(shape_keys, 'Morph')
            track.strips.new(anim_name, t, action)

        for material_id, action in anim_info.material_actions.items():
            node_tree = op.get('material', material_id).node_tree
            track = get_track(node_tree, 'Material')
            track.strips.new(anim_name, t, action)

        t += anim_info.duration + padding
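The loop above is a running start-time accumulator: each animation's strips begin where the previous animation ended, plus padding. A standalone sketch of just that layout (the function name is illustrative):

```python
def layout_strips(durations, padding=5.0):
    """Return the start time of each animation strip when the animations
    are laid end to end on one timeline with padding between them."""
    starts = []
    t = 0.0
    for duration in durations:
        starts.append(t)
        t += duration + padding
    return starts
```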


================================================
FILE: addons/io_scene_gltf_ksons/animation/curve.py
================================================
import bpy
from mathutils import Vector, Quaternion, Matrix


class Curve:
    @staticmethod
    def for_sampler(op, sampler, num_targets=None):
        c = Curve()

        c.times = op.get('accessor', sampler['input'])
        c.ords = op.get('accessor', sampler['output'])
        c.interp = sampler.get('interpolation', 'LINEAR')
        if c.interp not in ['LINEAR', 'STEP', 'CUBICSPLINE']:
            print('unknown interpolation: %s' % c.interp)
            c.interp = 'LINEAR'

        if num_targets is not None:
            # Group one frame's worth of morph weights together.
            c.ords = [
                c.ords[i: i + num_targets]
                for i in range(0, len(c.ords), num_targets)
            ]

        if c.interp == 'CUBICSPLINE':
            # Move the in-tangents and out-tangents into separate arrays.
            c.ins, c.ords, c.outs = c.ords[::3], c.ords[1::3], c.ords[2::3]

        assert(len(c.times) == len(c.ords))

        return c

    def num_components(self):
        y = self.ords[0]
        return 1 if type(y) in [float, int] else len(y)

    def shorten_quaternion_paths(self):
        if self.interp != 'LINEAR':
            return

        self.ords = [Vector(y) for y in self.ords]
        for i in range(1, len(self.ords)):
            if self.ords[i - 1].dot(self.ords[i]) < 0:
                self.ords[i] = -self.ords[i]

    def make_fcurves(self, op, action, data_path,
                     transform=lambda x: x,
                     tangent_transform=None
                     ):
        framerate = op.options['framerate']
        if framerate <= 0:
            framerate = bpy.context.scene.render.fps
        times = self.times
        ords = self.ords
        interp = self.interp
        bl_interp = {
            'STEP': 'CONSTANT',
            'LINEAR': 'LINEAR',
            'CUBICSPLINE': 'BEZIER',
        }[interp]

        num_components = self.num_components()
        if type(data_path) == list:
            assert(len(data_path) == num_components)
            fcurves = [
                action.fcurves.new(data_path=path, index=index)
                for path, index in data_path
            ]
        else:
            fcurves = [
                action.fcurves.new(data_path=data_path, index=i)
                for i in range(0, num_components)
            ]

        for fcurve in fcurves:
            fcurve.keyframe_points.add(len(times))

        ords = [transform(y) for y in ords]

        # tmp is an array laid out like
        #
        #   [frame, ordinate, frame, ordinate, ...]
        #
        # This lets us set all the keyframe points in one batch, which is fast.
        tmp = [0] * (2 * len(times))
        tmp[::2] = (framerate * t for t in times)
        for i in range(0, num_components):
            if num_components == 1:
                tmp[1::2] = ords
            else:
                tmp[1::2] = (y[i] for y in ords)
            fcurves[i].keyframe_points.foreach_set('co', tmp)

        for fcurve in fcurves:
            for pt in fcurve.keyframe_points:
                pt.interpolation = bl_interp

        if interp == 'CUBICSPLINE':
            if not tangent_transform:
                tangent_transform = transform

            # Blender appears to do Hermite spline interpolation of the _graph_
            # between the points (t1, y1) and (t2, y2), unlike glTF which does
            # interpolation only of the _ordinates_ y1 and y2. So if this is the
            # interval between two keyframes at times t1 and t2 with control
            # points C1 and C2
            #
            #                               o C2: (ct2, cy2)
            #    C1: (ct1, cy1) o            \
            #                  /              * P2: (t2, y2)
            #                 /
            #   P1: (t1, y1) *
            #
            # glTF gives us the right derivative at P1, b (= the slope of the
            # line P1 C1) and the left derivative at P2, a (= the slope of the
            # line P2 C2). So once we pick ct1 and ct2, cy1 and cy2 follow.
            #
            # We pick ct1 and ct2 so that spline interpolation in the
            # t-direction reduces to just linear interpolation.

            for k in range(0, len(times) - 1):
                t1, t2 = times[k], times[k + 1]
                b, a = self.outs[k], self.ins[k + 1]
                a, b = tangent_transform(a), tangent_transform(b)
                if num_components == 1:
                    a, b = (a,), (b,)

                ct1 = (2 * t1 + t2) / 3
                ct2 = (t1 + 2 * t2) / 3

                for i in range(0, num_components):
                    pt1 = fcurves[i].keyframe_points[k]
                    pt1.handle_right_type = 'FREE'
                    pt1.handle_right = ct1 * framerate, pt1.co[1] + (ct1 - t1) * b[i]

                    pt2 = fcurves[i].keyframe_points[k + 1]
                    pt2.handle_left_type = 'FREE'
                    pt2.handle_left = ct2 * framerate, pt2.co[1] + (ct2 - t2) * a[i]

        for fcurve in fcurves:
            fcurve.update()

        return fcurves
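The CUBICSPLINE handle placement above can be checked in isolation: putting the control abscissae at the thirds of the interval (ct1 = (2*t1 + t2)/3, ct2 = (t1 + 2*t2)/3) makes the Hermite interpolation in the t-direction linear, so only the tangent slopes matter. A sketch under the same convention (names are illustrative, not the importer's API):

```python
def hermite_handles(t1, y1, t2, y2, out_tangent, in_tangent):
    """Bezier handle positions for one keyframe interval, mirroring the
    scheme in Curve.make_fcurves: abscissae at the thirds of [t1, t2],
    ordinates chosen so the right slope at t1 is out_tangent and the
    left slope at t2 is in_tangent."""
    ct1 = (2 * t1 + t2) / 3
    ct2 = (t1 + 2 * t2) / 3
    handle_right = (ct1, y1 + (ct1 - t1) * out_tangent)  # leaves keyframe 1
    handle_left = (ct2, y2 + (ct2 - t2) * in_tangent)    # enters keyframe 2
    return handle_right, handle_left
```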


================================================
FILE: addons/io_scene_gltf_ksons/animation/material.py
================================================
import bpy
from . import quote
from .curve import Curve


def add_material_animation(op, anim_info, material_id):
    anim_id = anim_info.anim_id
    data = anim_info.material[material_id]
    animation = op.gltf['animations'][anim_id]
    material = op.get('material', material_id)

    name = '%s@%s (Material)' % (
        animation.get('name', 'animations[%d]' % anim_id),
        material.name,
    )
    action = bpy.data.actions.new(name)
    anim_info.material_actions[material_id] = action

    fcurves = []

    for prop, sampler in data.get('properties', {}).items():
        curve = Curve.for_sampler(op, sampler)
        data_path = op.material_infos[material_id].paths.get(prop)
        if not data_path:
            print('no place to put animated property %s in material node tree' % prop)
            continue
        fcurves += curve.make_fcurves(op, action, data_path)

    if fcurves:
        group = action.groups.new('Material Property')
        for fcurve in fcurves:
            fcurve.group = group

    for texture_type, samplers in data.get('texture_transform', {}).items():
        base_path = op.material_infos[material_id].paths[texture_type + '-transform']

        fcurves = []

        if 'offset' in samplers:
            curve = Curve.for_sampler(op, samplers['offset'])
            data_path = base_path + '.translation'
            fcurves += curve.make_fcurves(op, action, data_path)

        if 'rotation' in samplers:
            curve = Curve.for_sampler(op, samplers['rotation'])
            data_path = [(base_path + '.rotation', 2)]  # animate rotation around Z-axis
            fcurves += curve.make_fcurves(op, action, data_path, transform=lambda theta:-theta)

        if 'scale' in samplers:
            curve = Curve.for_sampler(op, samplers['scale'])
            data_path = base_path + '.scale'
            fcurves += curve.make_fcurves(op, action, data_path)

        group_name = {
            'normalTexture': 'Normal',
            'occlusionTexture': 'Occlusion',
            'emissiveTexture': 'Emissive',
            'baseColorTexture': 'Base Color',
            'metallicRoughnessTexture': 'Metallic-Roughness',
            'diffuseTexture': 'Diffuse',
            'specularGlossinessTexture': 'Specular-Glossiness',
        }[texture_type] + ' Texture Transform'
        group = action.groups.new(group_name)
        for fcurve in fcurves:
            fcurve.group = group


================================================
FILE: addons/io_scene_gltf_ksons/animation/morph_weight.py
================================================
import bpy
from . import quote
from .curve import Curve

# Morph Weight Animations


def add_morph_weight_animation(op, anim_info, node_id):
    anim_id = anim_info.anim_id
    sampler = anim_info.morph_weight[node_id]
    animation = op.gltf['animations'][anim_id]

    vnodes = find_mesh_instances(op.node_id_to_vnode[node_id])
    for vnode in vnodes:
        blender_object = vnode.blender_object

        if not blender_object.data.shape_keys:
            # Can happen if the mesh has only non-POSITION morph targets so we
            # didn't create a shape key
            return

        # Create action
        name = '%s@%s (Morph)' % (
            animation.get('name', 'animations[%d]' % anim_id),
            blender_object.name,
        )
        action = bpy.data.actions.new(name)
        action.id_root = 'KEY'
        anim_info.morph_actions[blender_object.name] = action

        # Find out the number of morph targets
        mesh_id = op.gltf['nodes'][node_id]['mesh']
        mesh = op.gltf['meshes'][mesh_id]
        num_targets = len(mesh['primitives'][0]['targets'])

        curve = Curve.for_sampler(op, sampler, num_targets=num_targets)
        data_paths = [
            ('key_blocks[%s].value' % quote('Morph %d' % i), 0)
            for i in range(0, num_targets)
        ]

        curve.make_fcurves(op, action, data_paths)


def find_mesh_instances(vnode):
    """
    A mesh instance at a vnode may be moved and split-up into multiple vnodes
    during vtree creation. Find all the places it ended up.
    """
    if vnode.mesh:
        return [vnode]
    else:
        vnodes = []
        for moved_to in vnode.mesh_moved_to:
            vnodes += find_mesh_instances(moved_to)
        return vnodes


================================================
FILE: addons/io_scene_gltf_ksons/animation/node_trs.py
================================================
from mathutils import Vector, Quaternion, Matrix
import bpy
from . import quote
from .curve import Curve
from ..compat import mul

# Handles animating TRS properties for glTF nodes. In Blender, this can be
# either an object or a bone.


def add_node_trs_animation(op, anim_info, node_id):
    if op.node_id_to_vnode[node_id].type == 'BONE':
        bone_trs(op, anim_info, node_id)
    else:
        object_trs(op, anim_info, node_id)


def object_trs(op, anim_info, node_id):
    animation_id = anim_info.anim_id
    samplers = anim_info.node_trs[node_id]

    # Create action
    animation = op.gltf['animations'][animation_id]
    blender_object = op.node_id_to_vnode[node_id].blender_object
    name = '%s@%s' % (
        animation.get('name', 'animations[%d]' % animation_id),
        blender_object.name,
    )
    action = bpy.data.actions.new(name)
    anim_info.trs_actions[blender_object.name] = action

    if 'translation' in samplers:
        curve = Curve.for_sampler(op, samplers['translation'])
        fcurves = curve.make_fcurves(
            op, action, 'location',
            transform=op.convert_translation)

        group = action.groups.new('Location')
        for fcurve in fcurves:
            fcurve.group = group

    if 'rotation' in samplers:
        curve = Curve.for_sampler(op, samplers['rotation'])
        curve.shorten_quaternion_paths()
        fcurves = curve.make_fcurves(
            op, action, 'rotation_quaternion',
            transform=op.convert_rotation)

        group = action.groups.new('Rotation')
        for fcurve in fcurves:
            fcurve.group = group

    if 'scale' in samplers:
        curve = Curve.for_sampler(op, samplers['scale'])
        fcurves = curve.make_fcurves(
            op, action, 'scale',
            transform=op.convert_scale)

        group = action.groups.new('Scale')
        for fcurve in fcurves:
            fcurve.group = group


def bone_trs(op, anim_info, node_id):
    anim_id = anim_info.anim_id
    samplers = anim_info.node_trs[node_id]

    # Unlike an object, a bone doesn't get its own action; there is one action
    # for the whole armature. Look it up or create it if it doesn't exist yet.
    bone_vnode = op.node_id_to_vnode[node_id]
    armature_vnode = bone_vnode.armature_vnode
    armature_object = armature_vnode.blender_object
    if armature_object.name not in anim_info.trs_actions:
        name = '%s@%s' % (
            op.gltf['animations'][anim_id].get('name', 'animations[%d]' % anim_id),
            armature_vnode.blender_armature.name,
        )
        action = bpy.data.actions.new(name)
        anim_info.trs_actions[armature_object.name] = action

    action = anim_info.trs_actions[armature_object.name]

    # In glTF, the ordinates of an animation curve say what the final position
    # of the node should be
    #
    #     T(b) = sample_gltf_curve()
    #
    # But in Blender, you animate the pose bone, and the final position is
    # computed relative to the rest position as
    #
    #     P(b) = sample_blender_curve()
    #
    # and these are related as (see vnode.py for the notation used here)
    #
    #     T'(b) = C(pb)^{-1} T(b) C(b)
    #           = E(b) P(b)
    #
    # Computing
    #
    #       P(b)
    #     = E(b)^{-1} C(pb)^{-1} T(b) C(b)
    #     = Rot[er^{-1}] Trans[-et]
    #       Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]
    #       Trans[t] Rot[r] Scale[s]
    #       Rot[cr(b)] HomScale[cs(b)]
    #
    #     { float the Trans to the left }
    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]
    #       Rot[er^{-1}] Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]
    #       Rot[r] Scale[s]
    #       Rot[cr(b)] HomScale[cs(b)]
    #
    #     { combine scalings }
    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]
    #       Rot[er^{-1}] Rot[cr(pb)^{-1}]
    #       Rot[r] Scale[s cs(b) / cs(pb)]
    #       Rot[cr(b)]
    #
    #     { interchange the final Rot and Scale, permuting the scale
    #       (see exchange_scale_rot_matrix) }
    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]
    #       Rot[er^{-1}] Rot[cr(pb)^{-1}]
    #       Rot[r] Rot[cr(b)]
    #       Scale[M s cs(b) / cs(pb)]
    #
    #     { combine rotations }
    #     = Trans[Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))]
    #       Rot[er^{-1} cr(pb)^{-1} r cr(b)]
    #       Scale[M s cs(b) / cs(pb)]
    #     = Trans[pt] Rot[pr] Scale[ps]
    #
    # Note that pt depends only on t (and not r or s), and similarly for pr and
    # ps.

    et, er = bone_vnode.editbone_tr
    cr_pb = bone_vnode.parent.correction_rotation
    cs_pb = bone_vnode.parent.correction_homscale
    cr = bone_vnode.correction_rotation
    cs = bone_vnode.correction_homscale

    er_inv = er.conjugated()
    cr_pb_inv = cr_pb.conjugated()
    cs_pb_inv = 1 / cs_pb

    if 'translation' in samplers:
        # pt = Rot[er^{-1}](-et + Rot[cr(pb)^{-1}] t / cs(pb))
        trans_mat = mul(
            er_inv.to_matrix().to_4x4(),
            mul(
                Matrix.Translation(-et),
                (cs_pb_inv * cr_pb_inv.to_matrix()).to_4x4()
            )
        )

        convert_translation = op.convert_translation
        def transform_translation(t): return mul(trans_mat, convert_translation(t))

        # In order to transform the tangents for cubic interpolation, we need to
        # know how the derivative transforms too. The other transforms are
        # linear, so their derivatives change the same way they do, but
        # transform_translation is affine, so its derivative changes by its
        # underlying linear map.
        lin_mat = trans_mat.to_3x3()
        def transform_velocity(t): return mul(lin_mat, convert_translation(t))

    if 'rotation' in samplers:
        # pr = er^{-1} cr(pb)^{-1} r cr(b)
        #    = left_r r cr(b)
        left_r = mul(er_inv, cr_pb_inv)

        convert_rotation = op.convert_rotation
        def transform_rotation(r): return mul(mul(left_r, convert_rotation(r)), cr)

    if 'scale' in samplers:
        # ps = (M cs(b) / cs(pb)) s
        # where M is the matrix from exchange_scale_rot_matrix
        scale_mat = exchange_scale_rot_matrix(bone_vnode.correction_rotation)
        scale_mat *= cs * cs_pb_inv

        convert_scale = op.convert_scale
        def transform_scale(s):
            return mul(scale_mat, convert_scale(s))

    bone_name = bone_vnode.blender_name
    base_path = 'pose.bones[%s]' % quote(bone_name)

    fcurves = []

    if 'translation' in samplers:
        curve = Curve.for_sampler(op, samplers['translation'])
        fcurves += curve.make_fcurves(
            op, action, base_path + '.location',
            transform=transform_translation,
            tangent_transform=transform_velocity)

    if 'rotation' in samplers:
        curve = Curve.for_sampler(op, samplers['rotation'])
        # NOTE: it doesn't matter that we're shortening before we transform
        # because transform_rotation preserves the dot product
        curve.shorten_quaternion_paths()
        fcurves += curve.make_fcurves(
            op, action, base_path + '.rotation_quaternion',
            transform=transform_rotation)

    if 'scale' in samplers:
        curve = Curve.for_sampler(op, samplers['scale'])
        fcurves += curve.make_fcurves(
            op, action, base_path + '.scale',
            transform=transform_scale)

    group = action.groups.new(bone_name)
    for fcurve in fcurves:
        fcurve.group = group


def exchange_scale_rot_matrix(r):
    """
    Gives a matrix M, depending on quaternion r, with the property that

        Scale[s] Rot[r] = Rot[r] Scale[Ms]

    for all s.

    In order for this to work, Rot[r] must be, up to sign, a permutation of the
    basis vectors.
    """
    # M should be the matrix for the inverse of the permutation effected by
    # Rot[r] I think.
    m = r.to_matrix()
    # Drop all signs; after this, M should be a permutation matrix
    for i in range(0, 3):
        for j in range(0, 3):
            m[i][j] = 0 if abs(m[i][j]) < 0.5 else 1
    m.transpose()
    return m
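To see the property this relies on, here is a minimal pure-Python sketch (plain nested lists instead of mathutils, so it runs outside Blender) for a 90-degree rotation about Z, mirroring how exchange_scale_rot_matrix builds M by dropping signs and transposing:

```python
# Sketch: verify that for an axis-permuting rotation R there is a
# permutation matrix M with  Scale[s] @ R == R @ Scale[M s].

def matmul(a, b):
    # Plain 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def diag(v):
    # Scale[v] as a diagonal matrix
    return [[v[0], 0, 0], [0, v[1], 0], [0, 0, v[2]]]

# 90-degree rotation about Z: an axis permutation, up to sign
R = [[0, -1, 0],
     [1, 0, 0],
     [0, 0, 1]]

# M: drop the signs of R's entries, then transpose (the inverse permutation)
M = [[1 if abs(R[j][i]) >= 0.5 else 0 for j in range(3)] for i in range(3)]

s = [2.0, 3.0, 5.0]
Ms = [sum(M[i][j] * s[j] for j in range(3)) for i in range(3)]

# The exchange identity holds exactly for this permutation rotation
assert matmul(diag(s), R) == matmul(R, diag(Ms))
```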


================================================
FILE: addons/io_scene_gltf_ksons/animation/precompute.py
================================================
import re
import bpy

class AnimationInfo:
    def __init__(self, anim_id):
        self.anim_id = anim_id

        # These are for organizing the samplers by the object they affect.
        # Filled out during precomputation.

        # node_trs[node_idx]['translation'/'rotation'/'scale'] is the sampler
        # for that node's TRS property
        self.node_trs = {}
        # morph_weight[node_idx] is the sampler for that node's morph weights
        self.morph_weight = {}
        # material[material_idx][property name] is the sampler for that
        # material's property
        # material[material_idx]['texture_transform'][texture_type]['offset'/'rotation'/'scale']
        # is the sampler for texture transform values
        self.material = {}
        # Duration of longest input sampler
        self.duration = 0.0

        # trs_actions[object_blender_name] records the TRS action on that object.
        self.trs_actions = {}
        # morph_actions[object_blender_name] records the morph weight (shape
        # key) action on that object.
        self.morph_actions = {}
        # material_actions[material_id] records the action on that material.
        self.material_actions = {}


def animation_precomputation(op):
    """Precompute AnimationInfo for each animation."""
    animations = op.gltf.get('animations', [])
    op.animation_info = [
        gather_animation(op, anim_id)
        for anim_id in range(0, len(animations))
    ]


def first_match(patterns, s):
    for pattern in patterns:
        match = re.match(pattern, s)
        if match:
            return match
    return None
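For illustration, one dispatch step in isolation, using a hypothetical target string (plain `re`, no importer state): an EXT_property_animation target is a JSON-pointer-like path, and the regexes below pull out the array index and the property name.

```python
import re

# Hypothetical EXT_property_animation target path
target = '/materials/7/alphaCutoff'
m = re.match(r'^/materials/(\d+)/(emissiveFactor|alphaCutoff)$', target)
# The index group is a string and still needs int()
material_id, prop = int(m.group(1)), m.group(2)
assert (material_id, prop) == (7, 'alphaCutoff')
```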


def gather_animation(op, anim_id):
    anim = op.gltf['animations'][anim_id]
    samplers = anim['samplers']

    info = AnimationInfo(anim_id)

    framerate = op.options['framerate']
    if framerate <= 0:
        framerate = bpy.context.scene.render.fps
    def calc_duration(sampler):
        acc = op.gltf['accessors'][sampler['input']]
        max_time = framerate * acc['max'][0]
        info.duration = max(info.duration, max_time)

    # Normal glTF channels
    channels = anim['channels']
    for channel in channels:
        sampler = samplers[channel['sampler']]
        target = channel['target']
        if 'node' not in target:
            continue
        node_id = target['node']
        path = target['path']

        if path in ['translation', 'rotation', 'scale']:
            info.node_trs.setdefault(node_id, {})[path] = sampler
            calc_duration(sampler)
        elif path == 'weights':
            info.morph_weight[node_id] = sampler
            calc_duration(sampler)
        else:
            print('skipping animation curve, unknown path: %s' % path)
            continue

    # EXT_property_animation channels
    channels = (
        anim.get('extensions', {})
        .get('EXT_property_animation', {})
        .get('channels', [])
    )
    for channel in channels:
        sampler = samplers[channel['sampler']]
        target = channel['target']

        # Node TRS properties
        patterns = [
            r'^/nodes/(\d+)/(translation|rotation|scale)$',
        ]
        match = first_match(patterns, target)
        if match:
            node_id, path = match.groups()
            info.node_trs.setdefault(int(node_id), {})[path] = sampler
            calc_duration(sampler)
            continue

        # Simple material properties
        patterns = [
            r'^/materials/(\d+)/(emissiveFactor|alphaCutoff)$',
            r'^/materials/(\d+)/(normalTexture/scale|occlusionTexture/strength)$',
            r'^/materials/(\d+)/pbrMetallicRoughness/(baseColorFactor|metallicFactor|roughnessFactor)$',
            r'^/materials/(\d+)/extensions/KHR_materials_pbrSpecularGlossiness/(diffuseFactor|specularFactor|glossinessFactor)$',
        ]
        match = first_match(patterns, target)
        if match:
            material_id, prop = match.groups()
            (info.material
                .setdefault(int(material_id), {})
                .setdefault('properties', {})
             )[prop] = sampler
            calc_duration(sampler)

            # Record that this property is live (so don't skip it during material creation)
            op.material_infos[int(material_id)].liveness.add(prop)

            continue

        # Texture transform properties
        patterns = [
            r'^/materials/(\d+)/(normalTexture|occlusionTexture|emissiveTexture)/extensions/KHR_texture_transform/(offset|rotation|scale)$',
            r'^/materials/(\d+)/pbrMetallicRoughness/(baseColorTexture|metallicRoughnessTexture)/extensions/KHR_texture_transform/(offset|rotation|scale)$',
            r'^/materials/(\d+)/extensions/KHR_materials_pbrSpecularGlossiness/(diffuseTexture|specularGlossinessTexture)/extensions/KHR_texture_transform/(offset|rotation|scale)$',
        ]
        match = first_match(patterns, target)
        if match:
            material_id, texture_type, path = match.groups()
            (info.material
                .setdefault(int(material_id), {})
                .setdefault('texture_transform', {})
                .setdefault(texture_type, {})
             )[path] = sampler

            # Record that this property is live (don't skip it during material creation)
            op.material_infos[int(material_id)].liveness.add(texture_type + '-transform')

            continue

        print('skipping animation curve, target not supported: %s' % target)

    return info


================================================
FILE: addons/io_scene_gltf_ksons/buffer.py
================================================
import base64
import os
import struct

# This file handles creating buffers, buffer views, and accessors. It's pure
# python and doesn't depend on Blender at all.
#
# Buffers and buffer views are represented with memoryviews so we can do
# efficient slicing.


def create_buffer(op, idx):
    """Create a memoryview for buffers[idx]."""
    buffer = op.gltf['buffers'][idx]

    # Handle GLB buffer
    if op.glb_buffer is not None and idx == 0 and 'uri' not in buffer:
        return op.glb_buffer

    uri = buffer['uri']

    # Try to decode base64 data URIs
    if uri.startswith('data:'):
        idx = uri.find(';base64,')
        if idx != -1:
            base64_data = uri[idx + len(';base64,'):]
            return memoryview(base64.b64decode(base64_data))

    # If we got here, assume it's a filepath
    buffer_location = os.path.join(op.base_path, uri)  # TODO: absolute paths?
    with open(buffer_location, 'rb') as fp:
        return memoryview(fp.read())
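A standalone sketch of the data-URI branch above: the payload is everything after ';base64,', whatever media type precedes it.

```python
import base64

# Build a small hypothetical data URI and decode it the same way
uri = ('data:application/octet-stream;base64,'
       + base64.b64encode(b'\x01\x02\x03').decode('ascii'))
marker = ';base64,'
payload = uri[uri.find(marker) + len(marker):]
data = memoryview(base64.b64decode(payload))
assert data.tobytes() == b'\x01\x02\x03'
```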


def create_buffer_view(op, idx):
    """Create a pair for bufferViews[idx].

    The pair contains a memoryview for the view and also its stride, which is
    specified in the bufferView as well.
    """
    buffer_view = op.gltf['bufferViews'][idx]
    buffer = op.get('buffer', buffer_view['buffer'])
    byte_offset = buffer_view.get('byteOffset', 0)
    byte_length = buffer_view['byteLength']
    stride = buffer_view.get('byteStride', None)

    view = buffer[byte_offset:byte_offset + byte_length]
    return (view, stride)


def create_accessor(op, idx):
    """Create an array holding the elements of accessors[idx].

    If the accessor is of SCALAR type, each element is a number. Otherwise, each
    element is a tuple holding the components for that element.
    """
    accessor = op.gltf['accessors'][idx]
    return create_accessor_from_properties(op, accessor)


def create_accessor_from_properties(op, accessor):
    count = accessor['count']
    fmt_char_lut = dict([
        (5120, 'b'),  # BYTE
        (5121, 'B'),  # UNSIGNED_BYTE
        (5122, 'h'),  # SHORT
        (5123, 'H'),  # UNSIGNED_SHORT
        (5125, 'I'),  # UNSIGNED_INT
        (5126, 'f')   # FLOAT
    ])
    fmt_char = fmt_char_lut[accessor['componentType']]
    component_size = struct.calcsize(fmt_char)
    num_components_lut = {
        'SCALAR': 1,
        'VEC2': 2,
        'VEC3': 3,
        'VEC4': 4,
        'MAT2': 4,
        'MAT3': 9,
        'MAT4': 16
    }
    num_components = num_components_lut[accessor['type']]
    fmt = '<' + (fmt_char * num_components)
    default_stride = struct.calcsize(fmt)

    # Special layouts for certain formats; see the section about
    # data alignment in the glTF 2.0 spec.
    if accessor['type'] == 'MAT2' and component_size == 1:
        fmt = '<' + \
            (fmt_char * 2) + 'xx' + \
            (fmt_char * 2)
        default_stride = 8
    elif accessor['type'] == 'MAT3' and component_size == 1:
        fmt = '<' + \
            (fmt_char * 3) + 'x' + \
            (fmt_char * 3) + 'x' + \
            (fmt_char * 3)
        default_stride = 12
    elif accessor['type'] == 'MAT3' and component_size == 2:
        fmt = '<' + \
            (fmt_char * 3) + 'xx' + \
            (fmt_char * 3) + 'xx' + \
            (fmt_char * 3)
        default_stride = 24

    normalize = None
    if accessor.get('normalized', False):
        normalize_lut = dict([
            (5120, lambda x: max(x / (2**7 - 1), -1)),   # BYTE
            (5121, lambda x: x / (2**8 - 1)),            # UNSIGNED_BYTE
            (5122, lambda x: max(x / (2**15 - 1), -1)),  # SHORT
            (5123, lambda x: x / (2**16 - 1)),           # UNSIGNED_SHORT
            (5125, lambda x: x / (2**32 - 1))            # UNSIGNED_INT
        ])
        normalize = normalize_lut[accessor['componentType']]

    if 'bufferView' in accessor:
        (buf, stride) = op.get('buffer_view', accessor['bufferView'])
        stride = stride or default_stride
    else:
        stride = default_stride
        buf = b'\0' * (stride * count)

    off = accessor.get('byteOffset', 0)

    # Main decoding loop (this is hot, so try to make it fast)
    # Interpret buf as elems separated by padding for the stride
    #    |elem|xx|elem|xx|elem|xx|elem|
    # Read count-1 |elem|xx| blocks, followed by one |elem|
    elem_byte_len = struct.calcsize(fmt)
    assert(stride >= elem_byte_len)
    padded_fmt = fmt + (stride - elem_byte_len) * 'x'
    unpack_iter = struct.Struct(padded_fmt).iter_unpack(buf[off:off + (count - 1) * stride])
    last = struct.unpack_from(fmt, buf, offset=off + (count - 1) * stride)
    if normalize and num_components == 1:
        result = [normalize(x[0]) for x in unpack_iter]
        result.append(normalize(last[0]))
    elif normalize:
        result = [tuple(normalize(y) for y in x) for x in unpack_iter]
        result.append(tuple(normalize(y) for y in last))
    elif num_components == 1:
        result = [x[0] for x in unpack_iter]
        result.append(last[0])
    else:
        result = list(unpack_iter)
        result.append(last)

    # A sparse property says "change the elements at these indices to these
    # values" where "these" are given in an accessor-like way, so we find the
    # list of indices and values by recursing into this function.
    if 'sparse' in accessor:
        sparse = accessor['sparse']
        indices_props = {
            'count': sparse['count'],
            'bufferView': sparse['indices']['bufferView'],
            'byteOffset': sparse['indices'].get('byteOffset', 0),
            'componentType': sparse['indices']['componentType'],
            'type': 'SCALAR',
        }
        indices = create_accessor_from_properties(op, indices_props)
        values_props = {
            'count': sparse['count'],
            'bufferView': sparse['values']['bufferView'],
            'byteOffset': sparse['values'].get('byteOffset', 0),
            'componentType': accessor['componentType'],
            'type': accessor['type'],
            'normalized': accessor.get('normalized', False),
        }
        values = create_accessor_from_properties(op, values_props)

        for (index, val) in zip(indices, values):
            result[index] = val

    return result
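The stride handling in the hot loop is easier to see on a toy buffer. A minimal sketch, assuming interleaved VEC2 floats with 4 bytes of padding per element (12-byte stride), plus the normalization rule for UNSIGNED_BYTE:

```python
import struct

count = 3
stride, fmt = 12, '<ff'
elem_len = struct.calcsize(fmt)  # 8 bytes of payload per element

# Build an interleaved buffer: each element is two floats plus 4 junk bytes
buf = b''.join(struct.pack('<ffI', float(i), float(i) + 0.5, 0xDEAD)
               for i in range(count))

# Read count-1 padded elements, then the final one with unpack_from
# (in a real buffer view the last element has no trailing padding)
padded = fmt + 'x' * (stride - elem_len)
result = list(struct.Struct(padded).iter_unpack(buf[:(count - 1) * stride]))
result.append(struct.unpack_from(fmt, buf, offset=(count - 1) * stride))
assert result == [(0.0, 0.5), (1.0, 1.5), (2.0, 2.5)]

# Normalized UNSIGNED_BYTE: x / 255, so 255 decodes to exactly 1.0
assert 255 / (2**8 - 1) == 1.0
```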


================================================
FILE: addons/io_scene_gltf_ksons/camera.py
================================================
import bpy


def create_camera(op, idx):
    """Create a Blender camera for the glTF cameras[idx]."""
    data = op.gltf['cameras'][idx]
    name = data.get('name', 'cameras[%d]' % idx)
    camera = bpy.data.cameras.new(name)

    if data['type'] == 'orthographic':
        camera.type = 'ORTHO'
        p = data['orthographic']
        camera.clip_start = p['znear']
        camera.clip_end = p['zfar']
        # TODO: should we warn if xmag != ymag?
        camera.ortho_scale = max(p['xmag'], p['ymag'])

    elif data['type'] == 'perspective':
        camera.type = 'PERSP'
        p = data['perspective']
        camera.clip_start = p['znear']
        # according to the spec a missing zfar means "infinite"
        HUGE = 3.40282e+38
        camera.clip_end = p.get('zfar', HUGE)
        camera.lens_unit = 'FOV'
        camera.angle_y = p['yfov']

        # TODO: aspect ratio

    else:
        print('unknown camera type: %s' % data['type'])

    return camera


================================================
FILE: addons/io_scene_gltf_ksons/compat.py
================================================
import bpy

# Compatibility shims

# Blender 2.8 changed matrix-matrix, matrix-vector, quaternion-quaternion, and
# quaternion-vector multiplication from x * y to x @ y
if bpy.app.version >= (2, 80, 0):
    def mul(x, y): return x @ y
else:
    def mul(x, y): return x * y


================================================
FILE: addons/io_scene_gltf_ksons/importer.py
================================================
from mathutils import Vector, Quaternion
from . import buffer, mesh, camera, light, material, animation, load, vnode, node, scene

class Importer:
    """Manages all import state."""

    def __init__(self, filepath, options):
        self.filepath = filepath
        self.options = options
        self.caches = {}

    def do_import(self):
        self.set_conversions()

        load.load(self)

        material.material_precomputation(self)
        if self.options['import_animations']:
            animation.animation_precomputation(self)

        vnode.create_vtree(self)
        node.realize_vtree(self)

        if self.options['import_animations']:
            animation.add_animations(self)

        if self.options['import_scenes_as_collections']:
            scene.import_scenes_as_collections(self)

    def get(self, kind, id):
        """
        Gets some kind of resource, eg. a decoded accessor, a mesh, etc. Kept in
        a cache to enable sharing.
        """
        cache = self.caches.setdefault(kind, {})
        if id in cache:
            return cache[id]
        else:
            CREATE_FNS = {
                'buffer': buffer.create_buffer,
                'buffer_view': buffer.create_buffer_view,
                'accessor': buffer.create_accessor,
                'image': material.create_image,
                'material': material.create_material,
                'node_group': material.create_group,
                'mesh': mesh.create_mesh,
                'camera': camera.create_camera,
                'light': light.create_light,
            }
            result = CREATE_FNS[kind](self, id)
            if type(result) == dict and result.get('do_not_cache_me', False):
                # Callee is requesting we not cache it
                result = result['result']
            else:
                cache[id] = result
            return result

    def set_conversions(self):
        """
        Set the convert_{translation,rotation,scale} functions for converting
        from glTF to Blender units. The user can configure this.
        """
        global_scale = self.options['global_scale']
        axis_conversion = self.options['axis_conversion']

        if axis_conversion == 'BLENDER_UP':
            def convert_translation(t):
                return global_scale * Vector([t[0], -t[2], t[1]])

            def convert_rotation(r):
                return Quaternion([r[3], r[0], -r[2], r[1]])

            def convert_scale(s):
                return Vector([s[0], s[2], s[1]])

        else:
            def convert_translation(t):
                return global_scale * Vector(t)

            def convert_rotation(r):
                return Quaternion([r[3], r[0], r[1], r[2]])

            def convert_scale(s):
                return Vector(s)

        self.convert_translation = convert_translation
        self.convert_rotation = convert_rotation
        self.convert_scale = convert_scale
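A standalone sketch of the BLENDER_UP case with plain tuples instead of mathutils types (the `global_scale` value here is hypothetical): glTF is Y-up, Blender is Z-up, so (x, y, z) maps to (x, -z, y), and a glTF quaternion [x, y, z, w] becomes Blender's [w, x, -z, y].

```python
global_scale = 2.0  # hypothetical user option

def convert_translation(t):
    # (x, y, z) -> (x, -z, y), scaled
    return tuple(global_scale * c for c in (t[0], -t[2], t[1]))

def convert_rotation(r):
    # glTF stores [x, y, z, w]; Blender wants w first
    x, y, z, w = r
    return (w, x, -z, y)

assert convert_translation((1.0, 2.0, 3.0)) == (2.0, -6.0, 4.0)
# The identity quaternion is unchanged by the axis swap
assert convert_rotation((0.0, 0.0, 0.0, 1.0)) == (1.0, 0.0, 0.0, 0.0)
```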


================================================
FILE: addons/io_scene_gltf_ksons/light.py
================================================
import math
import bpy


def create_light(op, idx):
    light = op.gltf['extensions']['KHR_lights_punctual']['lights'][idx]
    name = light.get('name', 'lights[%d]' % idx)

    light_type = light['type']
    color = light.get('color', [1, 1, 1])
    intensity = light.get('intensity', 1)

    bl_type = {
        'directional': 'SUN',
        'point': 'POINT',
        'spot': 'SPOT',
    }.get(light_type)
    if not bl_type:
        print('unknown light type:', light_type)
        bl_type = 'POINT'

    if bpy.app.version >= (2, 80, 0):
        bl_light = bpy.data.lights.new(name, type=bl_type)
    else:
        bl_light = bpy.data.lamps.new(name, type=bl_type)
    bl_light.use_nodes = True

    emission = bl_light.node_tree.nodes['Emission']
    emission.inputs['Color'].default_value = tuple(color) + (1,)

    if light_type == 'directional':
        watt = lux2W(intensity, ideal_555nm_source)
        emission.inputs['Strength'].default_value = watt
    elif light_type == 'point':
        watt = cd2W(intensity, ideal_555nm_source, surface=4*math.pi)
        emission.inputs['Strength'].default_value = watt
    elif light_type == 'spot':
        spot = light.get('spot', {})
        inner = spot.get('innerConeAngle', 0)
        outer = spot.get('outerConeAngle', math.pi/4)
        # glTF cone angles are half-angles measured from the spot direction;
        # Blender's spot_size is the full aperture angle.
        bl_light.spot_size = 2 * outer
        bl_light.spot_blend = 1 - inner / outer

        # For the surface calc see:
        # https://en.wikipedia.org/wiki/Solid_angle#Cone,_spherical_cap,_hemisphere
        # (outer is already the cone's half-angle)
        emission.inputs['Strength'].default_value = cd2W(
            intensity,
            ideal_555nm_source,
            surface=2 * math.pi * (1 - math.cos(outer)),
        )
    else:
        assert(False)

    return bl_light


# Watt conversions

incandescent_bulb = 0.0249
ideal_555nm_source = 1 / 683


def cd2W(intensity, efficiency, surface):
    """
    intensity in candelas (cd)
    efficiency is a factor
    surface in steradians
    """
    lumens = intensity * surface
    return lumens / (efficiency * 683)


def lux2W(intensity, efficiency):
    """
    intensity in lux (lm/m2)
    efficiency is a factor
    """
    return intensity / (efficiency * 683)
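As a worked example of the conversions above (the 100 cd figure is hypothetical; the 0.0249 factor is `incandescent_bulb`, i.e. an efficacy of roughly 17 lm/W):

```python
import math

intensity = 100               # candelas (hypothetical source)
efficiency = 0.0249           # incandescent_bulb above
surface = 4 * math.pi         # full sphere, in steradians

# lumens = cd * sr, then divide by efficacy (efficiency * 683 lm/W)
lumens = intensity * surface          # ~1256.6 lm
watts = lumens / (efficiency * 683)   # ~73.9 W
assert abs(watts - 73.89) < 0.05
```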


================================================
FILE: addons/io_scene_gltf_ksons/load.py
================================================
import os
import json
import struct
from . import GLTF_VERSION, EXTENSIONS


def load(op):
    parse_file(op)
    check_version(op)
    check_extensions(op)


def parse_file(op):
    op.glb_buffer = None

    filename = op.filepath

    # Remember this for resolving relative paths
    op.base_path = os.path.dirname(filename)

    with open(filename, 'rb') as f:
        contents = f.read()

    # Use magic number to detect GLB files.
    is_glb = contents[:4] == b'glTF'
    if is_glb:
        parse_glb(op, contents)
    else:
        parse_gltf(op, contents)


def parse_gltf(op, contents):
    op.gltf = json.loads(contents.decode('utf-8'))


def parse_glb(op, contents):
    contents = memoryview(contents)

    # Parse the header
    header = struct.unpack_from('<4sII', contents)
    glb_version = header[1]
    if glb_version != 2:
        raise Exception('GLB: version not supported: %d' % glb_version)

    # Parse the chunks; we only want the JSON and BIN ones
    offset = 12  # end of header
    while offset < len(contents):
        length, type = struct.unpack_from('<I4s', contents, offset=offset)
        offset += 8
        data = contents[offset: offset + length]
        offset += length

        # The first chunk must be JSON
        if not hasattr(op, 'gltf'):
            assert(type == b'JSON')
            op.gltf = json.loads(
                data.tobytes().decode('utf-8'),  # Need to decode for < 2.79.4 which comes with Python 3.5
                encoding='utf-8'
            )
        else:
            if type == b'BIN\0':
                op.glb_buffer = data
                return
    if not hasattr(op, 'gltf'):
        raise Exception('GLB: no JSON chunk found')
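A standalone sketch of the container layout parse_glb walks: a 12-byte header (magic, version, total length) followed by (length, type, data) chunks. This builds a minimal JSON-only GLB and reads it back:

```python
import json
import struct

# Build the JSON chunk payload, padded to 4-byte alignment
json_payload = json.dumps({'asset': {'version': '2.0'}}).encode('utf-8')
json_payload += b' ' * (-len(json_payload) % 4)

# 12-byte header + one chunk header + payload
glb = struct.pack('<4sII', b'glTF', 2, 12 + 8 + len(json_payload))
glb += struct.pack('<I4s', len(json_payload), b'JSON') + json_payload

# Parse it back the same way as above
magic, version, total_len = struct.unpack_from('<4sII', glb)
assert (magic, version, total_len) == (b'glTF', 2, len(glb))
length, ctype = struct.unpack_from('<I4s', glb, offset=12)
data = glb[20:20 + length]
assert ctype == b'JSON'
assert json.loads(data.decode('utf-8'))['asset']['version'] == '2.0'
```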


def check_version(op):
    def parse_version(s):
        """Parse a string like '1.1' to a tuple (1,1)."""
        try:
            version = tuple(int(x) for x in s.split('.'))
            if len(version) >= 2:
                return version
        except Exception:
            pass
        raise Exception('unknown version format: %s' % s)

    asset = op.gltf['asset']

    if 'minVersion' in asset:
        min_version = parse_version(asset['minVersion'])
        supported = GLTF_VERSION >= min_version
        if not supported:
            raise Exception('unsupported minimum version: %s' % min_version)
    else:
        version = parse_version(asset['version'])
        # Check only major version; we should be backwards- and forwards-compatible
        supported = version[0] == GLTF_VERSION[0]
        if not supported:
            raise Exception('unsupported version: %s' % version)


def check_extensions(op):
    required = set(op.gltf.get('extensionsRequired', []))
    used = set(op.gltf.get('extensionsUsed', []))

    unsupported_required = required.difference(EXTENSIONS)
    for ext in unsupported_required:
        raise Exception('unsupported extension was required: %s' % ext)

    unsupported_used = list(used.difference(EXTENSIONS))
    if unsupported_used:
        print(
            'Note that the following extensions are unsupported:',
            *unsupported_used)


================================================
FILE: addons/io_scene_gltf_ksons/material/__init__.py
================================================
import json
import bpy
from .block import Block
from .texture import create_texture_block
from . import image, node_groups, precompute

# Re-exports
create_image = image.create_image
create_group = node_groups.create_group
material_precomputation = precompute.material_procomputation


def create_material(op, idx):
    """
    Create a Blender material for the glTF materials[idx]. If idx is the
    special value 'default_material', create a Blender material for the default
    glTF material instead.
    """
    mc = MaterialCreator()
    mc.op = op
    mc.idx = idx
    mc.liveness = op.material_infos[idx].liveness

    if idx == 'default_material':
        mc.material = {}
        material_name = 'glTF Default Material'
    else:
        mc.material = op.gltf['materials'][idx]
        material_name = mc.material.get('name', 'materials[%d]' % idx)

    if 'KHR_materials_unlit' in mc.material.get('extensions', {}):
        mc.pbr = mc.material.get('pbrMetallicRoughness', {})
        mc.type = 'unlit'
    elif 'KHR_materials_pbrSpecularGlossiness' in mc.material.get('extensions', {}):
        mc.pbr = mc.material['extensions']['KHR_materials_pbrSpecularGlossiness']
        mc.type = 'specGloss'
    else:
        mc.pbr = mc.material.get('pbrMetallicRoughness', {})
        mc.type = 'metalRough'

    # Create a new Blender node-tree material and empty it
    bl_material = bpy.data.materials.new(material_name)
    bl_material.use_nodes = True
    mc.tree = bl_material.node_tree
    mc.links = mc.tree.links
    while mc.tree.nodes:
        mc.tree.nodes.remove(mc.tree.nodes[0])

    create_node_tree(mc)

    # Set the viewport alpha mode
    alpha_mode = mc.material.get('alphaMode', 'OPAQUE')
    double_sided = mc.material.get('doubleSided', False) or mc.op.options['always_doublesided']
    if not double_sided and alpha_mode == 'OPAQUE':
        # Since we use alpha to simulate backface culling
        alpha_mode = 'MASK'

    if alpha_mode not in ['OPAQUE', 'MASK', 'BLEND']:
        print('unknown alpha mode %s' % alpha_mode)
        alpha_mode = 'OPAQUE'

    if getattr(bl_material, 'blend_method', None):
        bl_material.blend_method = {
            # glTF: Blender
            'OPAQUE': 'OPAQUE',
            'MASK': 'CLIP',
            'BLEND': 'BLEND',
        }[alpha_mode]
    else:
        bl_material.game_settings.alpha_blend = {
            # glTF: Blender
            'OPAQUE': 'OPAQUE',
            'MASK': 'CLIP',
            'BLEND': 'ALPHA',
        }[alpha_mode]

    # Set diffuse/specular color (for solid view)
    if 'baseColorFactor' in mc.pbr:
        diffuse_color = mc.pbr['baseColorFactor'][:len(bl_material.diffuse_color)]
        bl_material.diffuse_color = diffuse_color
    if 'diffuseFactor' in mc.pbr:
        diffuse_color = mc.pbr['diffuseFactor'][:len(bl_material.diffuse_color)]
        bl_material.diffuse_color = diffuse_color
    if 'specularFactor' in mc.pbr:
        specular_color = mc.pbr['specularFactor'][:len(bl_material.specular_color)]
        bl_material.specular_color = specular_color

    return bl_material


def create_node_tree(mc):
    emissive_block = None
    if mc.type != 'unlit':
        emissive_block = create_emissive(mc)
    shaded_block = create_shaded(mc)

    if emissive_block:
        block = mc.adjoin({
            'node': 'AddShader',
            'input.0': emissive_block,
            'input.1': shaded_block,
        })
    else:
        block = shaded_block

    alpha_block = create_alpha_block(mc)
    if alpha_block:
        # Push things into a better position
        # [block] ->               -> [mix]
        #            [alpha block]
        alpha_block.pad_top(600)
        combined_block = Block.row_align_center([block, alpha_block])
        combined_block.outputs = \
            [block.outputs[0], alpha_block.outputs[0], alpha_block.outputs[1]]
        block = mc.adjoin({
            'node': 'MixShader',
            'output.0/input.2': combined_block,
            'output.1/input.Fac': combined_block,
            'output.2/input.1': combined_block,
        })

    mc.adjoin({
        'node': 'OutputMaterial',
        'input.Surface': block,
    }).center_at_origin()


def create_emissive(mc):
    if mc.type == 'unlit':
        return None

    block = None
    if 'emissiveTexture' in mc.material:
        block = create_texture_block(
            mc,
            'emissiveTexture',
            mc.material['emissiveTexture']
        )
        block.img_node.label = 'EMISSIVE'

    factor = mc.material.get('emissiveFactor', [0, 0, 0])

    if factor != [1, 1, 1] or 'emissiveFactor' in mc.liveness:
        if block:
            block = mc.adjoin({
                'node': 'MixRGB',
                'prop.blend_type': 'MULTIPLY',
                'input.Fac': Value(1),
                'input.Color1': block,
                'input.Color2': Value(factor + [1], record_to='emissiveFactor'),
            })
        else:
            if factor == [0, 0, 0] and 'emissiveFactor' not in mc.liveness:
                block = None
            else:
                block = Value(factor + [1], record_to='emissiveFactor')

    if block:
        block = mc.adjoin({
            'node': 'Emission',
            'input.Color': block,
        })

    return block


def create_alpha_block(mc):
    alpha_mode = mc.material.get('alphaMode', 'OPAQUE')
    double_sided = mc.material.get('doubleSided', False) or mc.op.options['always_doublesided']

    if alpha_mode not in ['OPAQUE', 'MASK', 'BLEND']:
        alpha_mode = 'OPAQUE'

    # Create an empty block with the baseColor/diffuse texture's alpha
    if alpha_mode != 'OPAQUE' and getattr(mc, 'img_node', None):
        block = Block.empty(0, 0)
        block.outputs = [mc.img_node.outputs[1]]
    else:
        block = None

    # Alpha cutoff in MASK mode
    if alpha_mode == 'MASK' and block:
        alpha_cutoff = mc.material.get('alphaCutoff', 0.5)
        block = mc.adjoin({
            'node': 'Math',
            'prop.operation': 'GREATER_THAN',
            'input.0': block,
            'input.1': Value(alpha_cutoff, record_to='alphaCutoff'),
        })

    # Handle doublesidedness
    if not double_sided:
        sided_block = mc.adjoin({
            'node': 'NewGeometry',
        })
        sided_block = mc.adjoin({
            'node': 'Math',
            'prop.operation': 'SUBTRACT',
            'input.0': Value(1),
            'output.Backfacing/input.1': sided_block,
        })
        if block:
            block = mc.adjoin({
                'node': 'Math',
                'prop.operation': 'MULTIPLY',
                'input.1': block,
                'input.0': sided_block,
            })
        else:
            block = sided_block

    if block:
        transparent_block = mc.adjoin({
            'node': 'BsdfTransparent',
        })

        alpha_block = Block.col_align_right([block, transparent_block])
        alpha_block.outputs = [block.outputs[0], transparent_block.outputs[0]]
        block = alpha_block

    return block


def create_shaded(mc):
    if mc.type == 'metalRough':
        return create_metalRough_pbr(mc)
    elif mc.type == 'specGloss':
        return create_specGloss_pbr(mc)
    elif mc.type == 'unlit':
        return create_unlit(mc)
    else:
        assert False, 'unexpected material type: ' + mc.type


def create_metalRough_pbr(mc):
    params = {
        'node': 'BsdfPrincipled',
        'dim': (200, 540),
    }

    base_color_block = create_base_color(mc)
    if base_color_block:
        params['input.Base Color'] = base_color_block

    metal_roughness_block = create_metal_roughness(mc)
    if metal_roughness_block:
        params['output.0/input.Metallic'] = metal_roughness_block
        params['output.1/input.Roughness'] = metal_roughness_block

    normal_block = create_normal_block(mc)
    if normal_block:
        params['input.Normal'] = normal_block

    return mc.adjoin(params)


def create_specGloss_pbr(mc):
    # Probe whether this Blender has the 'Eevee Specular' shader node; if it
    # doesn't, fall back to our pbrSpecularGlossiness node group instead.
    try:
        bpy.context.scene.render.engine = 'BLENDER_EEVEE'
        node = mc.tree.nodes.new('ShaderNodeEeveeSpecular')
        mc.tree.nodes.remove(node)
        has_specular_node = True
    except Exception:
        has_specular_node = False

    if has_specular_node:
        params = {
            'node': 'EeveeSpecular',
            'dim': (200, 540),
        }
    else:
        params = {
            'node': 'Group',
            'group': 'pbrSpecularGlossiness',
            'dim': (200, 540),
        }

    diffuse_block = create_diffuse(mc)
    if diffuse_block:
        params['input.Base Color'] = diffuse_block

    spec_rough_block = create_spec_roughness(mc)
    if spec_rough_block:
        params['output.0/input.Specular'] = spec_rough_block
        params['output.1/input.Roughness'] = spec_rough_block

    normal_block = create_normal_block(mc)
    if normal_block:
        params['input.Normal'] = normal_block

    if has_specular_node:
        occlusion_block = create_occlusion_block(mc)
        if occlusion_block:
            params['output.0/input.Ambient Occlusion'] = occlusion_block

    return mc.adjoin(params)


def create_unlit(mc):
    params = {
        # TODO: pick a better node?
        'node': 'Emission',
    }

    base_color_block = create_base_color(mc)
    if base_color_block:
        params['input.Color'] = base_color_block

    return mc.adjoin(params)


def create_base_color(mc):
    block = None
    if 'baseColorTexture' in mc.pbr:
        block = create_texture_block(
            mc,
            'baseColorTexture',
            mc.pbr['baseColorTexture'],
        )
        block.img_node.label = 'BASE COLOR'
        # Remember for alpha value
        mc.img_node = block.img_node

    for color_set_num in range(0, mc.op.material_infos[mc.idx].num_color_sets):
        vert_color_block = mc.adjoin({
            'node': 'Attribute',
            'prop.attribute_name': 'COLOR_%d' % color_set_num,
        })
        if block:
            block = mc.adjoin({
                'node': 'MixRGB',
                'prop.blend_type': 'MULTIPLY',
                'input.Fac': Value(1),
                'input.Color1': block,
                'input.Color2': vert_color_block,
            })
        else:
            block = vert_color_block

    factor = mc.pbr.get('baseColorFactor', [1, 1, 1, 1])
    if factor != [1, 1, 1, 1] or 'baseColorFactor' in mc.liveness:
        if block:
            block = mc.adjoin({
                'node': 'MixRGB',
                'prop.blend_type': 'MULTIPLY',
                'input.Fac': Value(1),
                'input.Color1': block,
                'input.Color2': Value(factor, record_to='baseColorFactor'),
            })
        else:
            block = Value(factor, record_to='baseColorFactor')

    return block


def create_diffuse(mc):
    block = None
    if 'diffuseTexture' in mc.pbr:
        block = create_texture_block(
            mc,
            'diffuseTexture',
            mc.pbr['diffuseTexture'],
        )
        block.img_node.label = 'DIFFUSE'
        # Remember for alpha value
        mc.img_node = block.img_node

    for color_set_num in range(0, mc.op.material_infos[mc.idx].num_color_sets):
        vert_color_block = mc.adjoin({
            'node': 'Attribute',
            'prop.attribute_name': 'COLOR_%d' % color_set_num,
        })
        if block:
            block = mc.adjoin({
                'node': 'MixRGB',
                'prop.blend_type': 'MULTIPLY',
                'input.Fac': Value(1),
                'input.Color1': block,
                'input.Color2': vert_color_block,
            })
        else:
            block = vert_color_block

    factor = mc.pbr.get('diffuseFactor', [1, 1, 1, 1])
    if factor != [1, 1, 1, 1] or 'diffuseFactor' in mc.liveness:
        if block:
            block = mc.adjoin({
                'node': 'MixRGB',
                'prop.blend_type': 'MULTIPLY',
                'input.Fac': Value(1),
                'input.Color1': block,
                'input.Color2': Value(factor, record_to='diffuseFactor'),
            })
        else:
            block = Value(factor, record_to='diffuseFactor')

    return block


def create_metal_roughness(mc):
    block = None
    if 'metallicRoughnessTexture' in mc.pbr:
        tex_block = create_texture_block(
            mc,
            'metallicRoughnessTexture',
            mc.pbr['metallicRoughnessTexture'],
        )
        tex_block.img_node.label = 'METALLIC ROUGHNESS'
        tex_block.img_node.color_space = 'NONE'

        block = mc.adjoin({
            'node': 'SeparateRGB',
            'input.Image': tex_block,
        })
        block.outputs = [block.outputs['B'], block.outputs['G']]

    metal_factor = mc.pbr.get('metallicFactor', 1)
    rough_factor = mc.pbr.get('roughnessFactor', 1)

    if not block:
        return [
            Value(metal_factor, record_to='metallicFactor'),
            Value(rough_factor, record_to='roughnessFactor'),
        ]

    if metal_factor != 1 or 'metallicFactor' in mc.liveness:
        metal_factor_options = {
            'node': 'Math',
            'prop.operation': 'MULTIPLY',
            'output.0/input.0': block,
            'input.1': Value(metal_factor, record_to='metallicFactor'),
        }
    else:
        metal_factor_options = {}
    if rough_factor != 1 or 'roughnessFactor' in mc.liveness:
        rough_factor_options = {
            'node': 'Math',
            'prop.operation': 'MULTIPLY',
            'output.1/input.0': block,
            'input.1': Value(rough_factor, record_to='roughnessFactor'),
        }
    else:
        rough_factor_options = {}

    return mc.adjoin_split(metal_factor_options, rough_factor_options, block)


def create_spec_roughness(mc):
    block = None
    if 'specularGlossinessTexture' in mc.pbr:
        block = create_texture_block(
            mc,
            'specularGlossinessTexture',
            mc.pbr['specularGlossinessTexture'],
        )
        block.img_node.label = 'SPECULAR GLOSSINESS'

    spec_factor = mc.pbr.get('specularFactor', [1, 1, 1]) + [1]
    gloss_factor = mc.pbr.get('glossinessFactor', 1)

    if not block:
        return [
            Value(spec_factor, record_to='specularFactor'),
            Value(gloss_factor, record_to='glossinessFactor'),
        ]

    if spec_factor != [1, 1, 1, 1] or 'specularFactor' in mc.liveness:
        spec_factor_options = {
            'node': 'MixRGB',
            'prop.blend_type': 'MULTIPLY',
            'input.Fac': Value(1),
            'output.Color/input.Color1': block,
            'input.Color2': Value(spec_factor, record_to='specularFactor'),
        }
    else:
        spec_factor_options = {}
    if gloss_factor != 1 or 'glossinessFactor' in mc.liveness:
        gloss_factor_options = {
            'node': 'Math',
            'prop.operation': 'MULTIPLY',
            'output.Alpha/input.0': block,
            'input.1': Value(gloss_factor, record_to='glossinessFactor'),
        }
    else:
        gloss_factor_options = {}

    block = mc.adjoin_split(spec_factor_options, gloss_factor_options, block)

    # Convert glossiness to roughness
    return mc.adjoin_split(None, {
        'node': 'Math',
        'prop.operation': 'SUBTRACT',
        'input.0': Value(1.0),
        'output.1/input.1': block,
    }, block)


def create_normal_block(mc):
    if 'normalTexture' in mc.material:
        tex_block = create_texture_block(
            mc,
            'normalTexture',
            mc.material['normalTexture'],
        )
        tex_block.img_node.label = 'NORMAL'
        tex_block.img_node.color_space = 'NONE'

        return mc.adjoin({
            'node': 'NormalMap',
            'prop.uv_map': 'TEXCOORD_%d' % mc.material['normalTexture'].get('texCoord', 0),
            'input.Strength': Value(mc.material['normalTexture'].get('scale', 1), record_to='normalTexture/scale'),
            'input.Color': tex_block,
        })
    else:
        return None


def create_occlusion_block(mc):
    if 'occlusionTexture' in mc.material:
        block = create_texture_block(
            mc,
            'occlusionTexture',
            mc.material['occlusionTexture'],
        )
        block.img_node.label = 'OCCLUSION'
        block.img_node.color_space = 'NONE'

        block = mc.adjoin({
            'node': 'SeparateRGB',
            'input.Image': block,
        })

        strength = mc.material['occlusionTexture'].get('strength', 1)
        if strength != 1 or 'occlusionTexture/strength' in mc.liveness:
            block = mc.adjoin({
                'node': 'Math',
                'prop.operation': 'MULTIPLY',
                'input.0': block,
                'input.1': Value(strength, record_to='occlusionTexture/strength'),
            })

        return block
    else:
        return None


class MaterialCreator:
    """
    Workhorse for creating shader nodes and automatically laying out blocks.
    """
    def new_node(self, opts):
        new_node = self.tree.nodes.new('ShaderNode' + opts['node'])
        new_node.width = 140
        new_node.height = 100

        if 'group' in opts:
            new_node.node_tree = self.op.get('node_group', opts['group'])

        def str_or_int(x):
            try:
                return int(x)
            except ValueError:
                return x

        input_blocks = []
        for key, val in opts.items():
            if key.startswith('input.'):
                input_key = str_or_int(key[len('input.'):])
                input_block = self.connect(val, 0, new_node, 'inputs', input_key)
                if input_block and input_block not in input_blocks:
                    input_blocks.append(input_block)

            elif key.startswith('output.'):
                if '/' in key:
                    output_part, input_part = key.split('/')
                    output_key = str_or_int(output_part[len('output.'):])
                    input_key = str_or_int(input_part[len('input.'):])
                    input_block = self.connect(val, output_key, new_node, 'inputs', input_key)
                    if input_block and input_block not in input_blocks:
                        input_blocks.append(input_block)

                else:
                    output_key = str_or_int(key[len('output.'):])
                    input_block = self.connect(val, 0, new_node, 'outputs', output_key)
                    if input_block and input_block not in input_blocks:
                        input_blocks.append(input_block)

            elif key.startswith('prop.'):
                prop_name = key[len('prop.'):]
                setattr(new_node, prop_name, val)

            elif key == 'dim':
                new_node.width, new_node.height = val

        return new_node, input_blocks

    def adjoin(self, opts):
        """
        Adjoins a new node. All the blocks that are used as inputs to it are
        laid out in a column to its left.

        [input1] -> [new_node]
        [input2] ->
        ...      ->
        """
        new_node, input_blocks = self.new_node(opts)

        input_block = Block.col_align_right(input_blocks)
        block = Block.row_align_center([input_block, new_node])
        block.outputs = new_node.outputs

        return block

    def adjoin_split(self, opts1, opts2, left_block):
        """
        Adjoins at-most-two new nodes (either or both can be missing). They are
        laid out in a column with left_block to their left. Return a block with
        two outputs; the first is the output of the first block, or the first
        output of left_block if missing; the second is the first output of the
        second block, or the second of left_block if missing.

        [left_block] -> [block1] ->
                     -> [block2] ->
        """
        if not opts1 and not opts2:
            return left_block

        outputs = []
        if opts1:
            block1, __input_blocks = self.new_node(opts1)
            outputs.append(block1.outputs[0])
        else:
            block1 = Block.empty()
            outputs.append(left_block.outputs[0])
        if opts2:
            block2, __input_blocks = self.new_node(opts2)
            outputs.append(block2.outputs[0])
        else:
            block2 = Block.empty()
            outputs.append(left_block.outputs[1])

        split_block = Block.col_align_right([block1, block2])
        block = Block.row_align_center([left_block, split_block])
        block.outputs = outputs

        return block

    def connect(self, connector, connector_key, node, socket_type, socket_key):
        """
        Connect a connector, which may be either a socket or a Value (or
        nothing) to a socket in the shader node tree.
        """
        if connector is None:
            return None

        if type(connector) == Value:
            connector = [connector]

        if type(connector) == list:
            self.connect_value(connector[connector_key], node, socket_type, socket_key)
            return None

        else:
            assert(socket_type == 'inputs')
            self.connect_block(connector, connector_key, node.inputs[socket_key])
            return connector

    def connect_value(self, value, node, socket_type, socket_key):
        getattr(node, socket_type)[socket_key].default_value = value.value
        # Record the data path to this socket in our material info so the
        # animation creator can find it to animate
        if value.record_to:
            self.op.material_infos[self.idx].paths[value.record_to] = (
                'nodes[' + json.dumps(node.name) + ']' +
                '.' + socket_type + '[' + json.dumps(socket_key) + ']' +
                '.default_value'
            )

    def connect_block(self, block, output_key, socket):
        self.links.new(block.outputs[output_key], socket)


class Value:
    """
    This is a helper class that tells the material creator to set the value of a
    socket rather than connect it to another socket. The record_to property, if
    present, is a key that the path to the socket should be remembered under.
    Remembering the path to where a Value got written into the node tree is used
    for animation importing (which needs to know where eg. the baseColorFactor
    wound up; it could be in a Multiply node or directly in the color socket of
    the Principled node, etc).
    """
    def __init__(self, value, record_to=''):
        self.value = value
        self.record_to = record_to
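
The path strings recorded for animated Values can be sketched without Blender. Below, `socket_data_path` is a hypothetical helper (not part of the importer) that mirrors how `MaterialCreator.connect_value` builds the string stored under a Value's `record_to` key:

```python
import json

def socket_data_path(node_name, socket_type, socket_key):
    # Mirrors MaterialCreator.connect_value: json.dumps quotes string socket
    # keys and leaves integer indices bare, matching Blender data-path syntax.
    return (
        'nodes[' + json.dumps(node_name) + ']' +
        '.' + socket_type + '[' + json.dumps(socket_key) + ']' +
        '.default_value'
    )

paths = {}
# e.g. record_to='baseColorFactor' on a Value fed into a MixRGB's Color2 socket
paths['baseColorFactor'] = socket_data_path('Mix', 'inputs', 'Color2')
# integer socket keys are emitted without quotes
paths['alphaCutoff'] = socket_data_path('Math', 'inputs', 1)

print(paths['baseColorFactor'])  # nodes["Mix"].inputs["Color2"].default_value
print(paths['alphaCutoff'])      # nodes["Math"].inputs[1].default_value
```

The animation code can later look these paths up by key to find where e.g. the baseColorFactor ended up in the node tree.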


================================================
FILE: addons/io_scene_gltf_ksons/material/block.py
================================================
from mathutils import Vector

# A _block_ is either a shader node or a rectangular set of smaller blocks
# represented by the Block class. We can line blocks up in rows, etc. So we use
# them to make node trees look nice.


class Block:
    def __init__(self, *blocks):
        self.children = []
        # Bounding box of children
        self.top_left = Vector((0, 0))
        self.bottom_right = Vector((0, 0))

        for block in blocks:
            self.add(block)

    def add(self, child):
        self.children.append(child)
        if len(self.children) == 1:
            self.top_left = top_left(child)
            self.bottom_right = bottom_right(child)
        else:
            tl = top_left(child)
            br = bottom_right(child)
            self.top_left = Vector((
                min(self.top_left[0], tl[0]),
                max(self.top_left[1], tl[1]),
            ))
            self.bottom_right = Vector((
                max(self.bottom_right[0], br[0]),
                min(self.bottom_right[1], br[1]),
            ))

    def move_by(self, delta):
        for child in self.children:
            move_by(child, delta)
        self.top_left += delta
        self.bottom_right += delta

    def pad_top(self, padding):
        self.top_left = Vector((
            self.top_left[0],
            self.top_left[1] + padding,
        ))

    def center_at_origin(self):
        center_at_origin(self)

    @staticmethod
    def empty(width=100, height=140):
        """Creates an empty block (used for spacing purposes)."""
        block = Block()
        block.bottom_right = Vector((width, -height))
        return block

    @staticmethod
    def row_align_center(blocks, gutter=100):
        """
        Aligns the blocks in a center-aligned row. Returns a new Block
        containing the blocks.

              .--.         .---.
              |  | .-----. |   |
            --|A |-|  B  |-| C |--
              |  | '-----' |   |
              '--'         '---'
        """
        x, y = 0, 0
        max_height = max((height(block) for block in blocks), default=0)
        for block in blocks:
            w, h = width(block), height(block)
            dh = (max_height - h) / 2
            move_to(block, Vector((x, y - dh)))
            if w != 0:
                x += w + gutter

        return Block(*blocks)

    @staticmethod
    def col_align_right(blocks, gutter=100):
        """
        Aligns the blocks in a right-aligned column. Returns a new Block
        containing the blocks.

               .--.
               | A|
               '--'
            .-----.
            |  B  |
            '-----'
              .---.
              | C |
              '---'
        """
        x, y = 0, 0
        max_width = max((width(block) for block in blocks), default=0)
        for block in blocks:
            w, h = width(block), height(block)
            dw = max_width - w
            move_to(block, Vector((x + dw, y)))
            if h != 0:
                y -= h + gutter

        return Block(*blocks)


def top_left(block):
    if type(block) == Block:
        return block.top_left
    return Vector(block.location)


def bottom_right(block):
    if type(block) == Block:
        return Vector(block.bottom_right)
    return block.location + Vector((block.width, -block.height))


def move_by(block, delta):
    if type(block) == Block:
        block.move_by(delta)
    else:
        block.location += delta


def width(block):
    tl = top_left(block)
    br = bottom_right(block)
    return br[0] - tl[0]


def height(block):
    tl = top_left(block)
    br = bottom_right(block)
    return tl[1] - br[1]


def move_to(block, pos):
    delta = pos - top_left(block)
    move_by(block, delta)


def center_at_origin(block):
    w, h = width(block), height(block)
    move_to(block, Vector((-w/2, h/2)))
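
The row-layout arithmetic above can be exercised without mathutils. This is a simplified sketch (the function name and the plain `(width, height)` tuple representation are illustrative, not part of the importer) that returns the top-left position each block would be moved to by `row_align_center`:

```python
def row_positions(sizes, gutter=100):
    # Mirror Block.row_align_center: center each box vertically against the
    # tallest one, and advance x by width + gutter (skipping zero-width boxes).
    max_height = max((h for _, h in sizes), default=0)
    x = 0.0
    positions = []
    for w, h in sizes:
        dh = (max_height - h) / 2   # vertical offset to center this box
        positions.append((x, -dh))  # node space: y decreases downward
        if w != 0:
            x += w + gutter
    return positions
```

For boxes of heights 300/100/200, the middle box is pushed down by 100 so its center lines up with the tallest one.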


================================================
FILE: addons/io_scene_gltf_ksons/material/groups.json
================================================
// !!AUTO-GENERATED!! See node_groups.py
{
"Texcoord CLAMP":{"name":"Texcoord CLAMP","inputs":[{"name":"Value","idname":"NodeSocketFloat","default_value":0.5,"min_value":-10000.0,"max_value":10000.0}],"outputs":[{"name":"Value","idname":"NodeSocketFloat","default_value":0.0,"min_value":0.0,"max_value":0.0}],"nodes":[{"name":"Group Input","idname":"NodeGroupInput","location":[-439.2994689941406,-68.00346374511719],"width":140.0,"height":100.0,"inputs":[],"outputs":[null,null]},{"name":"Group Output","idname":"NodeGroupOutput","location":[185.09613037109375,-68.60009765625],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[]},{"name":"Math","idname":"ShaderNodeMath","location":[-124.9363784790039,-15.0498046875],"width":140.0,"height":100.0,"inputs":[0.0,null],"outputs":[null],"operation":"ADD","use_clamp":true}],"links":[0,0,2,1,2,0,1,0]},
"Texcoord MIRRORED_REPEAT":{"name":"Texcoord MIRRORED_REPEAT","inputs":[{"name":"Value","idname":"NodeSocketFloat","default_value":0.5,"min_value":-10000.0,"max_value":10000.0}],"outputs":[{"name":"Output","idname":"NodeSocketFloat","default_value":0.0,"min_value":-3.4028234663852886e+38,"max_value":3.4028234663852886e+38}],"nodes":[{"name":"Frame.001","idname":"NodeFrame","location":[244.09178161621094,254.49673461914062],"width":557.14794921875,"height":380.4698486328125,"inputs":[],"outputs":[],"label":"Lerp"},{"name":"Frame","idname":"NodeFrame","location":[-701.92236328125,266.97216796875],"width":540.37060546875,"height":423.4593811035156,"inputs":[],"outputs":[],"label":"x mod 2"},{"name":"Group Input","idname":"NodeGroupInput","location":[-903.9764404296875,8.935855865478516],"width":140.0,"height":100.0,"inputs":[],"outputs":[null,null]},{"name":"Math.002","idname":"ShaderNodeMath","location":[136.47003173828125,-47.819976806640625],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[null],"parent":0,"operation":"MULTIPLY","use_clamp":false},{"name":"Math.006","idname":"ShaderNodeMath","location":[-41.066375732421875,-47.826690673828125],"width":140.0,"height":100.0,"inputs":[1.0,null],"outputs":[null],"parent":0,"operation":"SUBTRACT","use_clamp":false},{"name":"Math.004","idname":"ShaderNodeMath","location":[316.495361328125,-123.95169067382812],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[null],"parent":0,"operation":"ADD","use_clamp":false},{"name":"Group 
Output","idname":"NodeGroupOutput","location":[801.2479248046875,68.79402160644531],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[]},{"name":"Math.009","idname":"ShaderNodeMath","location":[364.4091796875,-69.60244750976562],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[null],"parent":1,"operation":"ADD","use_clamp":false},{"name":"Math","idname":"ShaderNodeMath","location":[85.581787109375,-45.44383239746094],"width":140.0,"height":100.0,"inputs":[null,2.0],"outputs":[null],"parent":1,"operation":"MODULO","use_clamp":false},{"name":"Math.007","idname":"ShaderNodeMath","location":[23.624755859375,-261.1719970703125],"width":140.0,"height":100.0,"inputs":[null,0.0],"outputs":[null],"parent":1,"operation":"LESS_THAN","use_clamp":false},{"name":"Math.008","idname":"ShaderNodeMath","location":[197.54718017578125,-261.172119140625],"width":140.0,"height":100.0,"inputs":[null,2.0],"outputs":[null],"parent":1,"operation":"MULTIPLY","use_clamp":false},{"name":"Math.001","idname":"ShaderNodeMath","location":[-76.39251708984375,319.6142578125],"width":140.0,"height":100.0,"inputs":[1.0,null],"outputs":[null],"operation":"GREATER_THAN","use_clamp":false},{"name":"Math.005","idname":"ShaderNodeMath","location":[-75.58465576171875,-64.29931640625],"width":140.0,"height":100.0,"inputs":[2.0,null],"outputs":[null],"operation":"SUBTRACT","use_clamp":false},{"name":"Math.003","idname":"ShaderNodeMath","location":[134.81446838378906,-219.2749481201172],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[null],"parent":0,"operation":"MULTIPLY","use_clamp":false}],"links":[2,0,8,0,5,0,6,0,3,0,5,0,13,0,5,1,11,0,4,1,11,0,13,0,4,0,3,0,8,0,7,0,12,0,3,1,2,0,9,0,9,0,10,0,10,0,7,1,7,0,11,1,7,0,12,1,7,0,13,1]},
"Texcoord REPEAT":{"name":"Texcoord REPEAT","inputs":[{"name":"Value","idname":"NodeSocketFloat","default_value":0.5,"min_value":-10000.0,"max_value":10000.0}],"outputs":[{"name":"Value","idname":"NodeSocketFloat","default_value":0.0,"min_value":0.0,"max_value":0.0}],"nodes":[{"name":"Math.002","idname":"ShaderNodeMath","location":[-111.34617614746094,-22.616287231445312],"width":140.0,"height":100.0,"inputs":[null,0.0],"outputs":[null],"operation":"LESS_THAN","use_clamp":false},{"name":"Math","idname":"ShaderNodeMath","location":[-139.84437561035156,171.7362060546875],"width":140.0,"height":100.0,"inputs":[null,1.0],"outputs":[null],"operation":"MODULO","use_clamp":false},{"name":"Group Input","idname":"NodeGroupInput","location":[-359.3721618652344,35.831207275390625],"width":140.0,"height":100.0,"inputs":[],"outputs":[null,null]},{"name":"Math.001","idname":"ShaderNodeMath","location":[85.65119934082031,104.58448791503906],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[null],"operation":"ADD","use_clamp":false},{"name":"Group Output","idname":"NodeGroupOutput","location":[275.0805358886719,63.34889602661133],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[]}],"links":[2,0,1,0,1,0,3,0,3,0,4,0,2,0,0,0,0,0,3,1]},
"glTF <-> Blender UV":{"name":"glTF <-> Blender UV","inputs":[{"name":"Vector","idname":"NodeSocketVector","default_value":[0.0,0.0,0.0],"min_value":-1.0,"max_value":1.0}],"outputs":[{"name":"Vector","idname":"NodeSocketVector","default_value":[0.0,0.0,0.0],"min_value":0.0,"max_value":0.0}],"nodes":[{"name":"Mapping","idname":"ShaderNodeMapping","location":[0.0,0.0],"width":320.0,"height":100.0,"inputs":[null],"outputs":[null],"translation":[0.0,1.0,0.0],"rotation":[0.0,0.0,0.0],"scale":[1.0,-1.0,1.0]},{"name":"Group Output","idname":"NodeGroupOutput","location":[403.02301025390625,-113.90129089355469],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[]},{"name":"Group Input","idname":"NodeGroupInput","location":[-223.15174865722656,-78.30713653564453],"width":140.0,"height":100.0,"inputs":[],"outputs":[null,null]}],"links":[2,0,0,0,0,0,1,0]},
"pbrSpecularGlossiness":{"name":"pbrSpecularGlossiness","inputs":[{"name":"Base Color","idname":"NodeSocketColor","default_value":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{"name":"Specular","idname":"NodeSocketColor","default_value":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{"name":"Roughness","idname":"NodeSocketFloatFactor","default_value":0.5,"min_value":0.0,"max_value":1.0},{"name":"Normal","idname":"NodeSocketVector","default_value":[0.0,0.0,0.0],"min_value":-1.0,"max_value":1.0}],"outputs":[{"name":"Shader","idname":"NodeSocketShader"}],"nodes":[{"name":"Diffuse BSDF","idname":"ShaderNodeBsdfDiffuse","location":[-195.1316680908203,203.0072784423828],"width":150.0,"height":100.0,"inputs":[null,null,null],"outputs":[null]},{"name":"Group Output","idname":"NodeGroupOutput","location":[408.60809326171875,-0.0],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[]},{"name":"Group Input","idname":"NodeGroupInput","location":[-658.364990234375,4.160030841827393],"width":140.0,"height":100.0,"inputs":[],"outputs":[null,null,null,null,null]},{"name":"Add Shader","idname":"ShaderNodeAddShader","location":[96.44002532958984,-22.353256225585938],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[null]},{"name":"Glossy BSDF","idname":"ShaderNodeBsdfGlossy","location":[-208.60809326171875,-203.0072784423828],"width":150.0,"height":100.0,"inputs":[null,null,null],"outputs":[null]}],"links":[2,0,0,0,2,1,4,0,2,3,0,2,0,0,3,0,4,0,3,1,3,0,1,0,2,2,0,1,2,2,4,1,2,3,4,2]},
"pbrSpecularGlossiness.001":{"name":"pbrSpecularGlossiness.001","inputs":[{"name":"Diffuse","idname":"NodeSocketColor","default_value":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{"name":"Specular","idname":"NodeSocketColor","default_value":[0.800000011920929,0.800000011920929,0.800000011920929,1.0]},{"name":"Glossiness","idname":"NodeSocketFloatFactor","default_value":0.5,"min_value":0.0,"max_value":1.0},{"name":"Normal","idname":"NodeSocketVector","default_value":[0.0,0.0,0.0],"min_value":-1.0,"max_value":1.0}],"outputs":[{"name":"Shader","idname":"NodeSocketShader"}],"nodes":[{"name":"Diffuse BSDF","idname":"ShaderNodeBsdfDiffuse","location":[-195.1316680908203,203.0072784423828],"width":150.0,"height":100.0,"inputs":[null,0.0,null],"outputs":[null]},{"name":"Glossy BSDF","idname":"ShaderNodeBsdfGlossy","location":[-208.60809326171875,-203.0072784423828],"width":150.0,"height":100.0,"inputs":[null,0.0,null],"outputs":[null]},{"name":"Group Output","idname":"NodeGroupOutput","location":[408.60809326171875,-0.0],"width":140.0,"height":100.0,"inputs":[null,null],"outputs":[]},{"name":"Group Input","idname":"NodeGroupInput","location":[-658.364990234375,4.160030841827393],"width":140.0,"height":100.0,"inputs":[],"outputs":[null,null,null,null,null]},{"name":"Mix Shader","idname":"ShaderNodeMixShader","location":[76.44883728027344,-5.425174713134766],"width":140.0,"height":100.0,"inputs":[null,null,null],"outputs":[null]}],"links":[3,0,0,0,3,1,1,0,3,2,4,0,0,0,4,2,1,0,4,1,4,0,2,0,3,3,0,2,3,3,1,2]}
}
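
groups.json starts with a `//` comment line, which strict JSON parsers reject; node_groups.py simply consumes that first line before parsing the rest. A minimal standalone sketch of that pattern (the `raw` string is a tiny stand-in for the real file's contents):

```python
import io
import json

# A stand-in for the first two lines of groups.json (the real file is much
# larger): a "//" comment line followed by the JSON body.
raw = '// !!AUTO-GENERATED!!\n{"Texcoord CLAMP": {"inputs": [], "outputs": []}}\n'

f = io.StringIO(raw)
f.readline()               # throw away the comment line, as node_groups.py does
group_data = json.load(f)  # the remainder is plain JSON

print(sorted(group_data))  # ['Texcoord CLAMP']
```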


================================================
FILE: addons/io_scene_gltf_ksons/material/image.py
================================================
import tempfile
import os
import base64
import bpy
from bpy_extras.image_utils import load_image


def create_image(op, idx):
    image = op.gltf['images'][idx]

    name = image.get('name', 'image-%d' % idx)

    img = None
    if 'uri' in image:
        uri = image['uri']
        is_data_uri = uri[:5] == 'data:'
        if is_data_uri:
            found_at = uri.find(';base64,')
            if found_at == -1:
                print('error loading image: data URI not base64?')
                return None
            else:
                buffer = base64.b64decode(uri[found_at + 8:])
        else:
            if 'name' not in image:
                name = os.path.basename(uri)
            # Load the image from disk
            image_location = os.path.join(op.base_path, uri)
            img = load_image(image_location)
            if not img:
                print('error loading image')
                return None
    else:
        buffer, _stride = op.get('buffer_view', image['bufferView'])

    if not img:
        # The image data is in buffer, but I don't know how to load an image
        # from memory. We'll write it to a temp file and load it from there.
        # Yes, this is a hack :)
        with tempfile.TemporaryDirectory() as tmpdir:
            img_path = os.path.join(tmpdir, 'image-%d' % idx)
            with open(img_path, 'wb') as f:
                f.write(buffer)
            img = load_image(img_path)
            img.pack()  # TODO: should we use as_png?

    img.name = name

    return img
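
The data-URI branch above can be exercised in isolation. A minimal sketch of the same parsing logic (the helper name `decode_data_uri` is made up for illustration):

```python
import base64

def decode_data_uri(uri):
    # Mirrors the parsing in create_image: find the ';base64,' marker
    # and decode everything after it.
    marker = ';base64,'
    found_at = uri.find(marker)
    if found_at == -1:
        return None  # not a base64 data URI
    return base64.b64decode(uri[found_at + len(marker):])

payload = base64.b64encode(b'hello').decode('ascii')
assert decode_data_uri('data:image/png;base64,' + payload) == b'hello'
assert decode_data_uri('data:text/plain,hi') is None
```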


================================================
FILE: addons/io_scene_gltf_ksons/material/node_groups.py
================================================
import json
import os
import bpy

# This file creates the node groups that we use during material creation. Node
# groups are serialized in groups.json. The data comes from
# KhronosGroup/glTF-Blender-Exporter/pbr_node/glTF2.blend, plus some
# modifications.
this_dir = os.path.dirname(os.path.abspath(__file__))
node_groups_path = os.path.join(this_dir, 'groups.json')
with open(node_groups_path, 'r') as f:
    f.readline()  # throw away comment line
    GROUP_DATA = json.load(f)


def create_group(op, name):
    data = GROUP_DATA[name]

    # Before we create a new one, check whether an existing group with the
    # right name and the right input/output names (perhaps from a previous
    # import) already exists; if so, reuse it.
    if name in bpy.data.node_groups:
        g = bpy.data.node_groups[name]
        in_names = [input.name for input in g.inputs]
        out_names = [output.name for output in g.outputs]
        matches = (
            in_names == [y['name'] for y in data['inputs']] and
            out_names == [y['name'] for y in data['outputs']]
        )
        if matches:
            return g

    g = bpy.data.node_groups.new(data['name'], 'ShaderNodeTree')
    inputs = g.inputs
    outputs = g.outputs
    nodes = g.nodes
    links = g.links

    # New groups aren't created empty; clear out the default nodes
    while nodes:
        nodes.remove(nodes[0])

    def deserialize_sockets(sockets, ys):
        for y in ys:
            s = sockets.new(y['idname'], y['name'])
            if 'default_value' in y:
                s.default_value = y['default_value']
            if 'min_value' in y:
                s.min_value = y['min_value']
            if 'max_value' in y:
                s.max_value = y['max_value']

    deserialize_sockets(inputs, data['inputs'])
    deserialize_sockets(outputs, data['outputs'])

    for y in data['nodes']:
        node = nodes.new(y['idname'])
        node.name = y['name']
        if 'node_tree' in y:
            node.node_tree = op.get('node_group', y['node_tree'])
        for attr in [
            'label', 'operation', 'blend_type', 'use_clamp',
            'translation', 'rotation', 'scale'
        ]:
            if attr in y:
                setattr(node, attr, y[attr])

        for i, v in enumerate(y['inputs']):
            if v is not None:
                node.inputs[i].default_value = v
        for i, v in enumerate(y['outputs']):
            if v is not None:
                node.outputs[i].default_value = v

    for i, y in enumerate(data['nodes']):
        if 'parent' in y:
            nodes[i].parent = nodes[y['parent']]

    for i, y in enumerate(data['nodes']):
        nodes[i].location = y['location']
        nodes[i].width = y['width']
        nodes[i].height = y['height']

    for i in range(0, len(data['links']), 4):
        a, b, c, d = data['links'][i:i+4]
        links.new(nodes[a].outputs[b], nodes[c].inputs[d])

    return g
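
The flat `links` array consumed in the loop above packs each link as four consecutive integers: (from_node, from_socket, to_node, to_socket). A small sketch of unpacking that layout (the helper name `unpack_links` is made up for illustration):

```python
def unpack_links(flat):
    # Each link occupies 4 slots, matching the stride-4 loop in create_group.
    assert len(flat) % 4 == 0
    return [tuple(flat[i:i+4]) for i in range(0, len(flat), 4)]

# First two links of the pbrSpecularGlossiness group in groups.json
links = [3, 0, 0, 0, 3, 1, 1, 0]
assert unpack_links(links) == [(3, 0, 0, 0), (3, 1, 1, 0)]
```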


# The rest of this file isn't used in the importer, but you can use it to edit
# the serialized groups. First run load() to load all the groups, edit them,
# and then serialize them back to groups.json with serialize().

def load():
    # Implements *just* enough of ImportGLTF to get create_group to work :)
    class ProxyOp:
        def __init__(self):
            self.node_groups = {}

        def get(self, type, name):
            assert(type == 'node_group')
            if name not in self.node_groups:
                self.node_groups[name] = create_group(self, name)
            return self.node_groups[name]

    op = ProxyOp()
    for name in GROUP_DATA.keys():
        create_group(op, name)


def serialize_group(group):
    def val(x):
        if x is None:
            return x
        if type(x) in [int, float, bool, list, str]:
            return x
        if hasattr(x, '__len__'):
            return list(x)
        assert(False)

    def serialize_sockets(sockets):
        result = []
        for s in sockets:
            x = {
                'name': s.name,
                'idname': s.bl_socket_idname,
            }
            if hasattr(s, 'default_value'):
                x['default_value'] = val(s.default_value)
            if hasattr(s, 'min_value'):
                x['min_value'] = val(s.min_value)
            if hasattr(s, 'max_value'):
                x['max_value'] = val(s.max_value)
            result.append(x)
        return result

    inputs = serialize_sockets(group.inputs)
    outputs = serialize_sockets(group.outputs)

    node_to_idx = {}
    for i, node in enumerate(group.nodes):
        node_to_idx[node] = i

    nodes = []
    for node in group.nodes:
        x = {
            'name': node.name,
            'idname': node.bl_idname,
            'location': val(node.location),
            'width': node.width,
            'height': node.height,
            'inputs': [],
            'outputs': [],
        }

        if node.parent:
            x['parent'] = node_to_idx[node.parent]
        if hasattr(node, 'label') and node.label != '':
            x['label'] = node.label
        if hasattr(node, 'node_tree'):
            x['node_tree'] = node.node_tree.name

        for attr in [
            'operation', 'blend_type', 'use_clamp',
            'translation', 'rotation', 'scale',
        ]:
            if hasattr(node, attr):
                x[attr] = val(getattr(node, attr))

        for input in node.inputs:
            if input.links or not hasattr(input, 'default_value'):
                x['inputs'].append(None)
            else:
                x['inputs'].append(val(input.default_value))
        for output in node.outputs:
            if output.links or not hasattr(output, 'default_value'):
                x['outputs'].append(None)
            else:
                x['outputs'].append(val(output.default_value))

        nodes.append(x)

    links = []
    for link in group.links:
        from_node_id = node_to_idx[link.from_node]
        from_socket_id = list(link.from_node.outputs).index(link.from_socket)
        to_node_id = node_to_idx[link.to_node]
        to_socket_id = list(link.to_node.inputs).index(link.to_socket)
        links += [from_node_id, from_socket_id, to_node_id, to_socket_id]

    return {
        'name': group.name,
        'inputs': inputs,
        'outputs': outputs,
        'nodes': nodes,
        'links': links,
    }


def serialize():
    groups = {}
    for group in bpy.data.node_groups:
        groups[group.name] = serialize_group(group)

    with open(node_groups_path, 'w') as f:
        f.write('// !!AUTO-GENERATED!! See node_groups.py\n')
        f.write('{\n')
        keys = list(groups.keys())
        keys.sort()
        for k in keys:
            json.dump(k, f)
            f.write(':')
            json.dump(groups[k], f, separators=(',', ':'))
            if k != keys[-1]:
                f.write(',')
            f.write('\n')
        f.write('}\n')


================================================
FILE: addons/io_scene_gltf_ksons/material/precompute.py
================================================
from ..mesh import MAX_NUM_COLOR_SETS

class MaterialInfo:
    def __init__(self):
        # The maximum number of color sets used by any primitive with this
        # material, i.e. the smallest n such that no primitive with this
        # material has a COLOR_n attribute.
        self.num_color_sets = 0
        # The set of "live" material property names that have to correspond to
        # some value in the Blender shader tree, because we're going to want to
        # animate them.
        self.liveness = set()
        # Maps a property name to its Blender path suitable for animation. All
        # live properties must get an entry here.
        self.paths = {}

def material_procomputation(op):
    op.material_infos = {
        idx: MaterialInfo()
        for idx, __material in enumerate(op.gltf.get('materials', []))
    }
    op.material_infos['default_material'] = MaterialInfo()

    # Find out what vertex colors materials use
    for mesh in op.gltf.get('meshes', []):
        for primitive in mesh['primitives']:
            i = 0
            while 'COLOR_%d' % i in primitive['attributes']:
                if i >= MAX_NUM_COLOR_SETS:
                    break

                mat = primitive.get('material', 'default_material')
                if i >= op.material_infos[mat].num_color_sets:
                    op.material_infos[mat].num_color_sets = i + 1
                i += 1
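
The counting loop above finds, per primitive, the smallest n such that COLOR_n is absent (capped at MAX_NUM_COLOR_SETS). A standalone restatement of that per-primitive count, with the helper name `count_color_sets` made up for illustration:

```python
MAX_NUM_COLOR_SETS = 8

def count_color_sets(attributes):
    # Smallest n such that COLOR_n is missing, capped at MAX_NUM_COLOR_SETS.
    i = 0
    while 'COLOR_%d' % i in attributes and i < MAX_NUM_COLOR_SETS:
        i += 1
    return i

assert count_color_sets({'POSITION': 0}) == 0
assert count_color_sets({'COLOR_0': 1, 'COLOR_1': 2}) == 2
```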


================================================
FILE: addons/io_scene_gltf_ksons/material/texture.py
================================================
import json
from . import block
Block = block.Block

# Creates a texture block for the given material.
#
# The texture block reads the appropriate texcoord set, possibly transforms
# the UVs for KHR_texture_transform, applies wrapping to the UVs, and
# samples an image texture. In general, it looks like
#
#    [Texcoord] -> [UV Transform] -> [UV Wrap] -> [Img Texture] ->


def create_texture_block(mc, texture_type, info):
    texture = mc.op.gltf['textures'][info['index']]

    texcoord_set = info.get('texCoord', 0)
    block = None
    # We'll create the texcoord block lazily
    def create_texcoord_block():
        return mc.adjoin({
            'node': 'UVMap',
            'prop.uv_map': 'TEXCOORD_%d' % texcoord_set,
        })

    # The [UV Transform] block looks like
    #
    #    -> [gltf<->Blender] -> [Transform] -> [gltf<->Blender] ->
    #
    # the [gltf<->Blender] blocks are Group Nodes that convert between glTF and
    # Blender UV conventions, i.e. (u, v) -> (u, 1-v). [Transform] is a Mapping
    # Node that applies the actual TRS transform.
    needs_tex_transform = (
        'KHR_texture_transform' in info.get('extensions', {}) or
        # This is set if the texture transform is animated
        (texture_type + '-transform') in mc.op.material_infos[mc.idx].liveness
    )
    if needs_tex_transform:
        t = info.get('extensions', {}).get('KHR_texture_transform', {})

        texcoord_set = t.get('texCoord', texcoord_set)
        offset = t.get('offset', [0, 0])
        rotation = t.get('rotation', 0)
        scale = t.get('scale', [1, 1])

        # Rotation is counter-clockwise, but in glTF's UV space where Y is down,
        # which makes it a clockwise rotation in normal terms
        rotation = -rotation

        # [Texcoord] -> [gltf<->Blender]
        if not block:
            block = create_texcoord_block()
        block = mc.adjoin({
            'node': 'Group',
            'group': 'glTF <-> Blender UV',
            'input.0': block,
        })

        # -> [Transform]
        block = mc.adjoin({
            'node': 'Mapping',
            'dim': (320, 275),
            'prop.vector_type': 'POINT',
            'input.0': block,
        })
        mapping_node = block.outputs[0].node
        mapping_node.translation[0], mapping_node.translation[1] = offset
        mapping_node.rotation[2] = rotation
        mapping_node.scale[0], mapping_node.scale[1] = scale

        mc.op.material_infos[mc.idx].paths[texture_type + '-transform'] = (
            'nodes[' + json.dumps(mapping_node.name) + ']'
        )

        # -> [gltf<->Blender]
        block = mc.adjoin({
            'node': 'Group',
            'group': 'glTF <-> Blender UV',
            'input.0': block,
        })

    if 'sampler' in texture:
        sampler = mc.op.gltf['samplers'][texture['sampler']]
    else:
        sampler = {}

    # Handle the wrapping mode. The Image Texture Node can have a wrapping mode
    # but it doesn't cover all possibilities in glTF.
    CLAMP_TO_EDGE = 33071
    MIRRORED_REPEAT = 33648
    REPEAT = 10497

    wrap_s = sampler.get('wrapS', REPEAT)
    wrap_t = sampler.get('wrapT', REPEAT)
    if wrap_s not in [CLAMP_TO_EDGE, MIRRORED_REPEAT, REPEAT]:
        print('unknown wrapping mode:', wrap_s)
        wrap_s = REPEAT
    if wrap_t not in [CLAMP_TO_EDGE, MIRRORED_REPEAT, REPEAT]:
        print('unknown wrapping mode:', wrap_t)
        wrap_t = REPEAT

    if (wrap_s, wrap_t) == (CLAMP_TO_EDGE, CLAMP_TO_EDGE):
        extension = 'EXTEND'
    elif (wrap_s, wrap_t) == (REPEAT, REPEAT):
        extension = 'REPEAT'
    else:
        # Blender can't handle this combination, so we have to insert the
        # [UV Wrap] block. It looks like
        #
        #                      -> [wrap S] ->
        #    -> [separate XYZ]                [combine XYZ] ->
        #                      -> [wrap T] ->
        #
        # where the [wrap _] blocks are Group Nodes that compute
        #
        #     x -> x mod 1               for REPEAT
        #
        #     x -> / y       if y <= 1   for MIRRORED_REPEAT
        #          \ 2 - y   if y > 1
        #            where y = x mod 2
        #
        # and where the [wrap _] block is omitted (i.e. the value is passed
        # through) for CLAMP_TO_EDGE, because we set the wrapping mode on the
        # Texture Node to do clamping (the artifacts produced when we use
        # clamping for the actual wrapping mode are slightly better than if we
        # used another mode).
        extension = 'EXTEND'

        if not block:
            block = create_texcoord_block()

        # -> [separate XYZ]
        block = mc.adjoin({
            'node': 'SeparateXYZ',
            'input.0': block,
        })

        # -> [wrap S]
        # -> [wrap T]
        gltf_to_blender_wrap = dict([
            (REPEAT, 'Texcoord REPEAT'),
            (MIRRORED_REPEAT, 'Texcoord MIRRORED_REPEAT'),
        ])
        block = mc.adjoin_split(
            {
                'node': 'Group',
                'dim': (230, 100),
                'group': gltf_to_blender_wrap[wrap_s],
                'input.0': block,
            } if wrap_s != CLAMP_TO_EDGE else {},
            {
                'node': 'Group',
                'dim': (230, 100),
                'group': gltf_to_blender_wrap[wrap_t],
                'output.1/input.0': block,
            } if wrap_t != CLAMP_TO_EDGE else {},
            block,
        )

        # -> [combine XYZ]
        block = mc.adjoin({
            'node': 'CombineXYZ',
            'output.0/input.0': block,
            'output.1/input.1': block,
        })

    # Determine interpolation.

    NEAREST = 9728
    LINEAR = 9729
    NEAREST_MIPMAP_NEAREST = 9984
    LINEAR_MIPMAP_NEAREST = 9985
    NEAREST_MIPMAP_LINEAR = 9986
    LINEAR_MIPMAP_LINEAR = 9987
    AUTO_FILTER = LINEAR  # which one to use if unspecified

    mag_filter = sampler.get('magFilter', AUTO_FILTER)
    min_filter = sampler.get('minFilter', AUTO_FILTER)
    if mag_filter not in [NEAREST, LINEAR]:
        print('unknown texture mag filter:', mag_filter)
        mag_filter = AUTO_FILTER
    # Ignore mipmaps.
    if min_filter in [NEAREST, NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR]:
        min_filter = NEAREST
    elif min_filter in [LINEAR, LINEAR_MIPMAP_NEAREST, LINEAR_MIPMAP_LINEAR]:
        min_filter = LINEAR
    else:
        print('unknown texture min filter:', min_filter)
        min_filter = AUTO_FILTER

    # We can't set the min and mag filters separately in Blender. Just prefer
    # linear, unless both were nearest.
    if (min_filter, mag_filter) == (NEAREST, NEAREST):
        interpolation = 'Closest'
    else:
        interpolation = 'Linear'

    # Find source
    if 'MSFT_texture_dds' in texture.get('extensions', {}):
        image_id = texture['extensions']['MSFT_texture_dds']['source']
        image = mc.op.get('image', image_id)
    elif 'source' not in texture:
        image = None
    else:
        image_id = texture['source']
        image = mc.op.get('image', image_id)

    # -> [TexImage]
    if not block and texcoord_set != 0:
        block = create_texcoord_block()
    block = mc.adjoin({
        'node': 'TexImage',
        'dim': (220, 250),
        'prop.image': image,
        'prop.interpolation': interpolation,
        'prop.extension': extension,
        'input.0': block,
    })

    block.img_node = block.outputs[0].node

    return block
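
The wrap formulas implemented by the 'Texcoord REPEAT' and 'Texcoord MIRRORED_REPEAT' node groups (described in the [UV Wrap] comment above) can be checked numerically. A sketch in plain Python, with the function names made up for illustration:

```python
def wrap_repeat(x):
    # REPEAT: x -> x mod 1
    return x % 1.0

def wrap_mirrored_repeat(x):
    # MIRRORED_REPEAT: fold y = x mod 2 back into [0, 1]
    y = x % 2.0
    return y if y <= 1.0 else 2.0 - y

assert wrap_repeat(1.25) == 0.25
assert wrap_mirrored_repeat(1.25) == 0.75   # mirrored on the second period
assert wrap_mirrored_repeat(0.25) == 0.25   # unchanged on the first period
```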


================================================
FILE: addons/io_scene_gltf_ksons/mesh.py
================================================
import bmesh
import bpy
from mathutils import Vector

MAX_NUM_COLOR_SETS = 8
MAX_NUM_TEXCOORD_SETS = 8

def create_mesh(op, mesh_spec):
    idx, primitive_idx = mesh_spec

    mesh = op.gltf['meshes'][idx]
    primitives = mesh['primitives']

    # The caller can request we generate only one primitive instead of all of them
    if primitive_idx is not None:
        primitives = [primitives[primitive_idx]]

    bme = bmesh.new()

    # If any of the materials used in this mesh use COLOR_0 attributes, we need
    # to pre-emptively create that layer, or else the Attribute node referencing
    # COLOR_0 in those materials will produce a solid red color. Note that the
    # material precomputation pass that fills in op.material_infos must run
    # before this function.
    needs_color0 = any(
        op.material_infos[prim.get('material', 'default_material')].num_color_sets > 0
        for prim in primitives
    )
    if needs_color0:
        bme.loops.layers.color.new('COLOR_0')

    # Make a list of all the materials this mesh will need; the material on a
    # face is set by giving an index into this list.
    materials = list(set(
        op.get('material', primitive.get('material', 'default_material'))
        for primitive in primitives
    ))

    # Add in all the primitives
    for primitive in primitives:
        material = op.get('material', primitive.get('material', 'default_material'))
        material_idx = materials.index(material)

        add_primitive_to_bmesh(op, bme, primitive, material_idx)

    name = mesh_name(op, mesh_spec)
    me = bpy.data.meshes.new(name)
    bmesh_to_mesh(bme, me)
    bme.free()

    # Fill in the material list (we can't do me.materials = materials since this
    # property is read-only).
    for material in materials:
        me.materials.append(material)

    # Set polygon smoothing if the user requested it
    if op.options['smooth_polys']:
        for polygon in me.polygons:
            polygon.use_smooth = True

    me.update()

    if not me.shape_keys:
        return me
    else:
        # Tell op.get not to cache us if we have morph targets. Morph target
        # weights are stored on the mesh instance in glTF, which in Blender
        # corresponds to the object, but shape keys are part of the mesh. So
        # when an object wants a mesh with morph targets, it always needs a
        # fresh copy; we lose sharing for meshes with morph targets.
        return {
            'result': me,
            'do_not_cache_me': True,
        }


def mesh_name(op, mesh_spec):
    mesh_idx, primitive_idx = mesh_spec
    name = op.gltf['meshes'][mesh_idx].get('name', 'meshes[%d]' % mesh_idx)
    if primitive_idx is not None:
        # Look for a name on the extras property
        extras = op.gltf['meshes'][mesh_idx]['primitives'][primitive_idx].get('extras')
        if type(extras) == dict and type(extras.get('name')) == str and extras['name']:
            primitive_name = extras['name']
        else:
            primitive_name = 'primitives[%d]' % primitive_idx
        name += '.' + primitive_name
    return name


def bmesh_to_mesh(bme, me):
    bme.to_mesh(me)

    # to_mesh does not preserve vertex normals, so set them as custom split normals
    normals = [v.normal for v in bme.verts]
    me.use_auto_smooth = True
    me.normals_split_custom_set_from_vertices(normals)

    if len(bme.verts.layers.shape) != 0:
        # to_mesh does NOT create shape keys so if there's shape data we'll have
        # to do it by hand. The only way I could find to create a shape key was
        # to temporarily parent me to an object and use obj.shape_key_add.
        dummy_ob = None
        try:
            dummy_ob = bpy.data.objects.new('##dummy-object##', me)
            dummy_ob.shape_key_add(name='Basis')
            me.shape_keys.name = me.name
            for layer_name in bme.verts.layers.shape.keys():
                dummy_ob.shape_key_add(name=layer_name)
                key_block = me.shape_keys.key_blocks[layer_name]
                layer = bme.verts.layers.shape[layer_name]

                for i, v in enumerate(bme.verts):
                    key_block.data[i].co = v[layer]
        finally:
            if dummy_ob:
                bpy.data.objects.remove(dummy_ob)


def get_layer(bme_layers, name):
    """Gets a layer from a BMLayerCollection, creating it if it does not exist."""
    if name not in bme_layers:
        return bme_layers.new(name)
    return bme_layers[name]


def add_primitive_to_bmesh(op, bme, primitive, material_index):
    """Adds a glTF primitive into a bmesh."""
    attributes = primitive['attributes']

    # Early out if there's no POSITION data
    if 'POSITION' not in attributes:
        return

    positions = op.get('accessor', attributes['POSITION'])

    if 'indices' in primitive:
        indices = op.get('accessor', primitive['indices'])
    else:
        indices = range(0, len(positions))

    bme_verts = bme.verts
    bme_edges = bme.edges
    bme_faces = bme.faces

    convert_coordinates = op.convert_translation
    if op.options['axis_conversion'] == 'BLENDER_UP':
        def convert_normal(n):
            return Vector([n[0], -n[2], n[1]])
    else:
        def convert_normal(n):
            return n

    # The primitive stores vertex attributes in arrays and gives indices into
    # those arrays
    #
    #     Attributes:
    #       v0 v1 v2 v3 v4 ...
    #     Indices:
    #       1 2 4 ...
    #
    # We want to add **only those vertices that are used in an edge/tri** to the
    # bmesh. Because of this and because the bmesh already has some vertices,
    # when we add the new vertices their index in the bmesh will be different
    # than their index in the primitive's vertex attribute arrays
    #
    #     Bmesh:
    #       ...pre-existing vertices... v1 v2 v4 ...
    #
    # The index into the primitive's vertex attribute array is called the
    # vertex's p-index (pidx) and the index into the bmesh is called its b-index
    # (bidx). Remember to use the right index!

    # The pidx of all the vertices that are actually used by the primitive
    used_pidxs = set(indices)
    # Contains a pair (bidx, pidx) for every vertex in the primitive
    vert_idxs = []
    # pidx_to_bidx[pidx] is the bidx of the vertex with pidx (or -1 if unused)
    pidx_to_bidx = [-1] * len(positions)
    bidx = len(bme_verts)
    for pidx in range(0, len(positions)):
        if pidx in used_pidxs:
            bme_verts.new(convert_coordinates(positions[pidx]))
            vert_idxs.append((bidx, pidx))
            pidx_to_bidx[pidx] = bidx
            bidx += 1
    bme_verts.ensure_lookup_table()

    # Add edges/faces to bmesh
    mode = primitive.get('mode', 4)
    edges, tris = edges_and_tris(indices, mode)
    # NOTE: edges and vertices are in terms of pidxs
    for edge in edges:
        try:
            bme_edges.new((
                bme_verts[pidx_to_bidx[edge[0]]],
                bme_verts[pidx_to_bidx[edge[1]]],
            ))
        except ValueError:
            # Ignore duplicate/degenerate edges
            pass
    for tri in tris:
        try:
            tri = bme_faces.new((
                bme_verts[pidx_to_bidx[tri[0]]],
                bme_verts[pidx_to_bidx[tri[1]]],
                bme_verts[pidx_to_bidx[tri[2]]],
            ))
            tri.material_index = material_index
        except ValueError:
            # Ignore duplicate/degenerate tris
            pass

    # Set normals
    if 'NORMAL' in attributes:
        normals = op.get('accessor', attributes['NORMAL'])
        for bidx, pidx in vert_idxs:
            bme_verts[bidx].normal = convert_normal(normals[pidx])

    # Set vertex colors. Add them in the order COLOR_0, COLOR_1, etc.
    set_num = 0
    while 'COLOR_%d' % set_num in attributes:
        if set_num >= MAX_NUM_COLOR_SETS:
            print('more than %d COLOR_n attributes; dropping the rest on the floor'
                  % MAX_NUM_COLOR_SETS)
            break

        layer_name = 'COLOR_%d' % set_num
        layer = get_layer(bme.loops.layers.color, layer_name)

        colors = op.get('accessor', attributes[layer_name])

        # Check whether Blender takes RGB or RGBA colors (old versions only take RGB)
        num_components = len(colors[0])
        blender_num_components = len(bme_verts[0].link_loops[0][layer])
        if num_components == 3 and blender_num_components == 4:
            # RGB -> RGBA
            colors = [color+(1,) for color in colors]
        if num_components == 4 and blender_num_components == 3:
            # RGBA -> RGB
            colors = [color[:3] for color in colors]
            print('No RGBA vertex colors in your Blender version; dropping A component!')

        for bidx, pidx in vert_idxs:
            for loop in bme_verts[bidx].link_loops:
                loop[layer] = colors[pidx]

        set_num += 1

    # Set texcoords
    set_num = 0
    while 'TEXCOORD_%d' % set_num in attributes:
        if set_num >= MAX_NUM_TEXCOORD_SETS:
            print('more than %d TEXCOORD_n attributes; dropping the rest on the floor'
                  % MAX_NUM_TEXCOORD_SETS)
            break

        layer_name = 'TEXCOORD_%d' % set_num
        layer = get_layer(bme.loops.layers.uv, layer_name)

        uvs = op.get('accessor', attributes[layer_name])

        for bidx, pidx in vert_idxs:
            # UV transform
            u, v = uvs[pidx]
            uv = (u, 1 - v)

            for loop in bme_verts[bidx].link_loops:
                loop[layer].uv = uv

        set_num += 1

    # Set joints/weights for skinning (multiple sets allow > 4 influences)
    # TODO: multiple sets are untested!
    joint_sets = []
    weight_sets = []
    set_num = 0
    while 'JOINTS_%d' % set_num in attributes and 'WEIGHTS_%d' % set_num in attributes:
        joint_sets.append(op.get('accessor', attributes['JOINTS_%d' % set_num]))
        weight_sets.append(op.get('accessor', attributes['WEIGHTS_%d' % set_num]))
        set_num += 1
    if joint_sets:
        layer = get_layer(bme.verts.layers.deform, 'Vertex Weights')

        for joint_set, weight_set in zip(joint_sets, weight_sets):
            for bidx, pidx in vert_idxs:
                for j in range(0, 4):
                    weight = weight_set[pidx][j]
                    if weight != 0.0:
                        joint = joint_set[pidx][j]
                        bme_verts[bidx][layer][joint] = weight

    # Set morph target positions (we don't handle normals/tangents)
    for k, target in enumerate(primitive.get('targets', [])):
        if 'POSITION' not in target:
            continue

        layer = get_layer(bme.verts.layers.shape, 'Morph %d' % k)

        morph_positions = op.get('accessor', target['POSITION'])

        for bidx, pidx in vert_idxs:
            bme_verts[bidx][layer] = convert_coordinates(
                Vector(positions[pidx]) +
                Vector(morph_positions[pidx])
            )


def edges_and_tris(indices, mode):
    """
    Convert the indices for different primitive modes into a list of edges
    (pairs of endpoints) and a list of tris (triples of vertices).
    """
    edges = []
    tris = []
    # TODO: only mode TRIANGLES is tested!!
    if mode == 0:
        # POINTS
        pass
    elif mode == 1:
        # LINES
        #   1   3
        #  /   /
        # 0   2
        edges = [tuple(indices[i:i+2]) for i in range(0, len(indices), 2)]
    elif mode == 2:
        # LINE LOOP
        #   1---2
        #  /     \
        # 0-------3
        edges = [tuple(indices[i:i+2]) for i in range(0, len(indices) - 1)]
        edges.append((indices[-1], indices[0]))
    elif mode == 3:
        # LINE STRIP
        #   1---2
        #  /     \
        # 0       3
        edges = [tuple(indices[i:i+2]) for i in range(0, len(indices) - 1)]
    elif mode == 4:
        # TRIANGLES
        #   2     3
        #  / \   / \
        # 0---1 4---5
        tris = [tuple(indices[i:i+3]) for i in range(0, len(indices), 3)]
    elif mode == 5:
        # TRIANGLE STRIP
        #   1---3---5
        #  / \ / \ /
        # 0---2---4
        def alternate(i, xs):
            ccw = i % 2 != 0
            return xs if ccw else (xs[0], xs[2], xs[1])
        tris = [
            alternate(i, tuple(indices[i:i+3]))
            for i in range(0, len(indices) - 2)
        ]
    elif mode == 6:
        # TRIANGLE FAN
        #   3---2
        #  / \ / \
        # 4---0---1
        tris = [
            (indices[0], indices[i], indices[i+1])
            for i in range(1, len(indices) - 1)
        ]
    else:
        raise Exception('primitive mode unimplemented: %d' % mode)

    return edges, tris
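
As a sanity check of the index expansion above, the TRIANGLES and TRIANGLE FAN cases can be restated inline (a sketch; the sample index lists are made up):

```python
indices = [0, 1, 2, 3, 4, 5]
# TRIANGLES (mode 4): consecutive index triples
tris = [tuple(indices[i:i+3]) for i in range(0, len(indices), 3)]
assert tris == [(0, 1, 2), (3, 4, 5)]

# TRIANGLE FAN (mode 6): the first index is shared by every triangle
fan = [0, 1, 2, 3, 4]
fan_tris = [(fan[0], fan[i], fan[i+1]) for i in range(1, len(fan) - 1)]
assert fan_tris == [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
```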


================================================
FILE: addons/io_scene_gltf_ksons/node.py
================================================
import os
import bpy
from mathutils import Vector, Matrix
from .compat import mul


def realize_vtree(op):
    """Create actual Blender nodes for the vnodes."""
    # Fix for #16
    try:
        bpy.ops.object.mode_set(mode='OBJECT')
    except Exception:
        pass

    # First pass: depth-first realization of the vnode graph
    def realize_vnode(vnode):
        if vnode.type == 'OBJECT':
            realize_object(op, vnode)

        elif vnode.type == 'ARMATURE':
            realize_armature(op, vnode)

        elif vnode.type == 'BONE':
            realize_bone(op, vnode)

        elif vnode.type == 'ROOT':
            realize_root(op, vnode)

        for child in vnode.children:
            realize_vnode(child)

        # We enter edit-mode when we realize an armature. On the way back up,
        # we've finished creating edit bones and can go back to object mode.
        if vnode.type == 'ARMATURE':
            bpy.ops.object.mode_set(mode='OBJECT')

            # Unlink it; we'll link this in the right place later on.
            if bpy.app.version >= (2, 80, 0):
                ob_collection = bpy.context.scene.collection.objects
                if vnode.blender_object.name in ob_collection:
                    ob_collection.unlink(vnode.blender_object)
            else:
                bpy.context.scene.objects.unlink(vnode.blender_object)


    realize_vnode(op.root_vnode)

    # Second pass for things that require we know the blender_object and
    # blender_name of the vnodes.
    def pass2(vnode):
        if vnode.mesh and vnode.mesh['skin'] is not None:
            obj = vnode.blender_object

            # Create vertex groups.
            joints = op.gltf['skins'][vnode.mesh['skin']]['joints']
            for node_id in joints:
                bone_name = op.node_id_to_vnode[node_id].blender_name
                obj.vertex_groups.new(name=bone_name)

            # Create the skin modifier.
            modifier = obj.modifiers.new('Skin', 'ARMATURE')
            armature_vnode = op.node_id_to_vnode[joints[0]].armature_vnode
            modifier.object = armature_vnode.blender_object
            modifier.use_vertex_groups = True

            # We need to constrain the mesh to its armature so that its world
            # space position is affected only by the world space transform of
            # the joints and not of the node where it is instantiated, see
            # glTF/#1195.
            constraint = obj.constraints.new(type='COPY_TRANSFORMS')
            constraint.owner_space = 'LOCAL'
            constraint.target_space = 'LOCAL'
            constraint.target = armature_vnode.blender_object

            # TODO: investigate this more

        # Set pose for bones that had non-homogeneous scalings
        if vnode.type == 'BONE' and vnode.posebone_s is not None:
            blender_object = vnode.armature_vnode.blender_object
            pose_bone = blender_object.pose.bones[vnode.blender_name]
            pose_bone.scale = vnode.posebone_s

        for child in vnode.children:
            pass2(child)

    pass2(op.root_vnode)

    link_everything_into_scene(op)


def realize_object(op, vnode):
    """Create a real Object for an OBJECT vnode."""
    # Create the mesh/camera/light instance
    data = None
    if vnode.mesh:
        data = op.get('mesh', (vnode.mesh['mesh'], vnode.mesh['primitive_idx']))

        # Set instance's morph target weights
        if vnode.mesh['weights'] and data.shape_keys:
            keyblocks = data.shape_keys.key_blocks
            for i, weight in enumerate(vnode.mesh['weights']):
                if ('Morph %d' % i) in keyblocks:
                    keyblocks['Morph %d' % i].value = weight

    elif vnode.camera:
        data = op.get('camera', vnode.camera['camera'])

    elif vnode.light:
        data = op.get('light', vnode.light['light'])

    obj = bpy.data.objects.new(vnode.name, data)
    vnode.blender_object = obj

    # Set TRS
    t, r, s = vnode.trs
    obj.location = t
    obj.rotation_mode = 'QUATERNION'
    obj.rotation_quaternion = r
    obj.scale = s

    # Set our parent
    if vnode.parent:
        if vnode.parent.type == 'BONE':
            obj.parent = vnode.parent.armature_vnode.blender_object
            obj.parent_type = 'BONE'
            obj.parent_bone = vnode.parent.blender_name
        elif vnode.parent.blender_object:
            obj.parent = vnode.parent.blender_object


def realize_armature(op, vnode):
    """Create a real Armature for an ARMATURE vnode."""
    # TODO: find a way to avoid using ops and having to change modes
    bpy.ops.object.add(type='ARMATURE', enter_editmode=True)
    obj = bpy.context.object

    vnode.blender_object = obj
    vnode.blender_armature = obj.data

    # Clear our location (ops.object.add puts the new armature at the location
    # of the 3D Cursor)
    obj.location = [0, 0, 0]

    if vnode.parent:
        obj.parent = vnode.parent.blender_object


def realize_bone(op, vnode):
    """Create a real EditBone for a BONE vnode."""
    armature = vnode.armature_vnode.blender_armature
    editbone = armature.edit_bones.new(vnode.name)

    editbone.use_connect = False

    # Bone transforms are given not by their local-to-parent transform, but by
    # their head, tail, and roll in armature space. So we need the
    # local-to-armature transform.
    m = vnode.editbone_local_to_armature
    editbone.head = mul(m, Vector((0, 0, 0)))
    editbone.tail = mul(m, Vector((0, vnode.bone_length, 0)))
    editbone.align_roll(mul(m, Vector((0, 0, 1))) - editbone.head)

    vnode.blender_name = editbone.name
    # NOTE: can't access this after we leave edit mode
    vnode.blender_editbone = editbone

    # Set parent
    if vnode.parent:
        if getattr(vnode.parent, 'blender_editbone', None):
            editbone.parent = vnode.parent.blender_editbone


def realize_root(op, vnode):
    """
    Realize the ROOT if the user requested it (giving it the same filename as
    the glTF).
    """
    if not op.options['add_root']:
        return

    obj = bpy.data.objects.new(os.path.basename(op.filepath), None)
    vnode.blender_object = obj


if bpy.app.version >= (2, 80, 0):
    def link_vnode_into_scene(vnode, scene):
        if vnode.blender_object:
            if vnode.blender_object.name not in scene.collection.objects:
                scene.collection.objects.link(vnode.blender_object)
else:
    def link_vnode_into_scene(vnode, scene):
        if vnode.blender_object:
            try:
                scene.objects.link(vnode.blender_object)
            except Exception:
                # Ignore the exception if it's already linked
                pass


def link_tree_into_scene(vnode, scene):
    link_vnode_into_scene(vnode, scene)
    for child in vnode.children:
        link_tree_into_scene(child, scene)


def link_everything_into_scene(op):
    link_tree_into_scene(op.root_vnode, bpy.context.scene)

    # The renderer is also tied to the scene
    if bpy.context.scene.render.engine == 'BLENDER_RENDER':
        # Our materials won't work in BLENDER_RENDER
        bpy.context.scene.render.engine = 'CYCLES'


================================================
FILE: addons/io_scene_gltf_ksons/scene.py
================================================
import os
import bpy


def link_vnode_into_collection(vnode, collection):
    if vnode.blender_object:
        if vnode.blender_object.name not in collection.objects:
            collection.objects.link(vnode.blender_object)


def link_tree_into_collection(vnode, collection):
    link_vnode_into_collection(vnode, collection)
    for child in vnode.children:
        link_tree_into_collection(child, collection)


def import_scenes_as_collections(op):
    if getattr(bpy.data, 'collections', None) is None:
        print(
            "Can't import scenes as collections; "
            'no collections in this Blender version!'
        )
        return

    scenes = op.gltf.get('scenes', [])
    if not scenes:
        return

    base_collection = bpy.data.collections.new(os.path.basename(op.filepath))

    default_scene_idx = op.gltf.get('scene')
    for scene_idx, scene in enumerate(op.gltf.get('scenes', [])):
        name = scene.get('name', 'scenes[%d]' % scene_idx)
        if scene_idx == default_scene_idx:
            name += ' (Default)'

        collection = bpy.data.collections.new(name)
        base_collection.children.link(collection)

        for node_idx in scene['nodes']:
            vnode = op.node_id_to_vnode[node_idx]

            # A root node might not be a root vnode (eg. because we inserted an
            # armature above it). Find the real root.
            while vnode.parent is not None and vnode.parent.parent is not None:
                vnode = vnode.parent

            link_tree_into_collection(vnode, collection)


================================================
FILE: addons/io_scene_gltf_ksons/vnode.py
================================================
from math import pi
from mathutils import Matrix, Quaternion, Vector, Euler
from .compat import mul
from .mesh import mesh_name

# The node graph in glTF needs to be fixed up quite a bit before it will work
# for Blender. We first create a graph of "virtual nodes" to match the graph in
# the glTF file and then transform it in a bunch of passes to make it suitable
# for Blender import.

class VNode:
    def __init__(self):
        # The ID of the glTF node this vnode was created from, or None if there
        # wasn't one
        self.node_id = None
        # List of child vnodes
        self.children = []
        # Parent vnode, or None for the root
        self.parent = None
        # (Vector, Quaternion, Vector) triple of the local-to-parent TRS transform
        self.trs = (Vector((0, 0, 0)), Quaternion((1, 0, 0, 0)), Vector((1, 1, 1)))

        # What type of Blender object will be created for this vnode: one of
        # OBJECT, ARMATURE, BONE, or ROOT (for the special vnode that we use to
        # turn the forest into a tree to make things easier to process).
        self.type = 'OBJECT'

        # Dicts of instance data
        self.mesh = None
        self.camera = None
        self.light = None
        # If this node had an instance in glTF but we moved it to another node,
        # we record where we put it here
        self.mesh_moved_to = None
        self.camera_moved_to = None
        self.light_moved_to = None

        # These will be filled out after realization with the Blender data
        # created for this vnode.
        self.blender_object = None
        self.blender_armature = None
        self.blender_editbone = None
        self.blender_name = None

        # The editbone's (Translation, Rotation)
        self.editbone_tr = None
        self.posebone_s = None
        self.editbone_local_to_armature = Matrix.Identity(4)
        self.bone_length = 0
        # Correction to apply to the original TRS to get the editbone TR
        self.correction_rotation = Quaternion((1, 0, 0, 0))
        self.correction_homscale = 1


def create_vtree(op):
    initial_vtree(op)
    insert_armatures(op)
    move_instances(op)
    adjust_bones(op)


# In the first pass, create the vtree from the forest in the glTF file,
# making one OBJECT for each node
#
#       OBJ
#      /  \
#     OBJ  OBJ
#         /  \
#       OBJ   OBJ
#
# (The ROOT is also added, but we won't draw it)
def initial_vtree(op):
    nodes = op.gltf.get('nodes', [])

    op.node_id_to_vnode = {}

    # Create a vnode for each node
    for node_id, node in enumerate(nodes):
        vnode = VNode()
        vnode.node_id = node_id
        vnode.name = node.get('name', 'nodes[%d]' % node_id)
        vnode.trs = get_node_trs(op, node)
        vnode.type = 'OBJECT'

        if 'mesh' in node:
            vnode.mesh = {
                'mesh': node['mesh'],
                'primitive_idx': None, # use all primitives
                'skin': node.get('skin'),
                'weights': node.get('weights', op.gltf['meshes'][node['mesh']].get('weights')),
            }
        if 'camera' in node:
            vnode.camera = {
                'camera': node['camera'],
            }
        if 'KHR_lights_punctual' in node.get('extensions', {}):
            vnode.light = {
                'light': node['extensions']['KHR_lights_punctual']['light'],
            }

        op.node_id_to_vnode[node_id] = vnode

    # Fill in the parent/child relationships
    for node_id, node in enumerate(nodes):
        vnode = op.node_id_to_vnode[node_id]
        for child_id in node.get('children', []):
            child_vnode = op.node_id_to_vnode[child_id]

            # Prevent cycles
            assert child_vnode.parent is None

            child_vnode.parent = vnode
            vnode.children.append(child_vnode)

    # Add a root node to make the forest of vnodes into a tree.
    op.root_vnode = VNode()
    op.root_vnode.type = 'ROOT'

    for vnode in op.node_id_to_vnode.values():
        if vnode.parent is None:
            vnode.parent = op.root_vnode
            op.root_vnode.children.append(vnode)


# There is no special kind of node used for skinning in glTF. Joints are just
# regular nodes. But in Blender, only a bone can be used for skinning and bones
# are descendants of armatures.
#
# In the second pass we insert enough ARMATURE vnodes into the vtree so that
# every vnode which is the joint of a skin is a descendant of an ARMATURE. All
# descendants of ARMATURES are then turned into bones.
#
#       OBJ
#      /  \
#    OBJ  ARMA
#          |
#         BONE
#         /  \
#      BONE   BONE
def insert_armatures(op):
    # Insert an armature for every skin
    skins = op.gltf.get('skins', [])
    for skin_id, skin in enumerate(skins):
        armature = VNode()
        armature.name = skin.get('name', 'skins[%d]' % skin_id)
        armature.type = 'ARMATURE'

        # We're going to find a place to insert the armature. It must be above
        # all of the joint nodes.
        vnodes_below = [op.node_id_to_vnode[joint_id] for joint_id in skin['joints']]
        # Add in the skeleton node too (which we hope is an ancestor of the joints).
        if 'skeleton' in skin:
            vnodes_below.append(op.node_id_to_vnode[skin['skeleton']])

        ancestor = lowest_common_ancestor(vnodes_below)

        ancestor_is_joint = ancestor.node_id in skin['joints']
        if ancestor_is_joint:
            insert_above(ancestor, armature)
        else:
            insert_below(ancestor, armature)

    # Walk down the tree, marking all children of armatures as bones and
    # deleting any armature which is a descendant of another.
    def visit(vnode, armature_ancestor):
        # Make a copy of this because we don't want it to change (when we delete
        # a vnode) while we're in the middle of iterating it
        children = list(vnode.children)

        # If we are below an armature...
        if armature_ancestor:
            # Found an armature that is a descendant of another
            if vnode.type == 'ARMATURE':
                remove_vnode(vnode)

            else:
                vnode.type = 'BONE'
                vnode.armature_vnode = armature_ancestor

        else:
            if vnode.type == 'ARMATURE':
                armature_ancestor = vnode

        for child in children:
            visit(child, armature_ancestor)

    visit(op.root_vnode, None)


# Now we need to enforce Blender's rule that (1) an object may have only one
# data instance (ie. only one of a mesh or a camera or a light), and (2) a bone
# may not have a data instance at all. We also need to move all cameras/lights
# to new children so that we have somewhere to hang the glTF->Blender axis
# conversion they need.
#
#
#             OBJ               Eg. if there was a mesh and camera on OBJ1
#            /  \               we will move the camera to a new child OBJ3
#        OBJ1   ARMA            (leaving the mesh on OBJ1).
#         /      |              And if there was a mesh on BONE2 we will move
#     OBJ3      BONE            the mesh to OBJ4
#               /  \
#            BONE   BONE2
#                    |
#                   OBJ4
def move_instances(op):
    def move_instance_to_new_child(vnode, key):
        inst = getattr(vnode, key)
        setattr(vnode, key, None)

        if key == 'mesh':
            id = inst['mesh']
            name = op.gltf['meshes'][id].get('name', 'meshes[%d]' % id)
        elif key == 'camera':
            id = inst['camera']
            name = op.gltf['cameras'][id].get('name', 'cameras[%d]' % id)
        elif key == 'light':
            id = inst['light']
            lights = op.gltf['extensions']['KHR_lights_punctual']['lights']
            name = lights[id].get('name', 'lights[%d]' % id)
        else:
            assert(False)

        new_child = VNode()
        new_child.name = name
        new_child.parent = vnode
        vnode.children.append(new_child)
        new_child.type = 'OBJECT'

        setattr(new_child, key, inst)
        setattr(vnode, key + '_moved_to', [new_child])

        if key in ['camera', 'light']:
            # Quarter-turn around the X-axis. Needed for cameras and lights,
            # which point along the -Z axis in Blender but which glTF says
            # should look along the -Y axis
            new_child.trs = (
                new_child.trs[0],
                Quaternion((2**(-1/2), 2**(-1/2), 0, 0)),
                new_child.trs[2]
            )

        return new_child


    def visit(vnode):
        # Make a copy of this so we don't re-process new children we just made
        children = list(vnode.children)

        # Always move a camera or light to a child because it needs the
        # gltf->Blender axis conversion
        if vnode.camera:
            move_instance_to_new_child(vnode, 'camera')
        if vnode.light:
            move_instance_to_new_child(vnode, 'light')

        if vnode.mesh and vnode.type == 'BONE':
            move_instance_to_new_child(vnode, 'mesh')

        for child in children:
            visit(child)

    visit(op.root_vnode)

    # The user can request that meshes be split into their primitives, like this
    #
    #       OBJ      =>     OBJ
    #      (mesh)         /  |  \
    #                  OBJ  OBJ  OBJ
    #                (mesh)(mesh)(mesh)
    if op.options['split_meshes']:
        def visit(vnode):
            children = list(vnode.children)

            if vnode.mesh is not None:
                num_prims = len(op.gltf['meshes'][vnode.mesh['mesh']]['primitives'])
                if num_prims > 1:
                    new_children = []
                    for prim_idx in range(0, num_prims):
                        child = VNode()
                        child.name = mesh_name(op, (vnode.mesh['mesh'], prim_idx))
                        child.type = 'OBJECT'
                        child.parent = vnode
                        child.mesh = {
                            'mesh': vnode.mesh['mesh'],
                            'skin': vnode.mesh['skin'],
                            'weights': vnode.mesh['weights'],
                            'primitive_idx': prim_idx,
                        }
                        new_children.append(child)
                    vnode.mesh = None
                    vnode.children += new_children
                    vnode.mesh_moved_to = new_children

            for child in children:
                visit(child)

        visit(op.root_vnode)

# Here's the complicated pass.
#
# Brief review: every bone in glTF has a local-to-parent transform T(b;pose).
# Sometimes we suppress the dependence on the pose and just write T(b). The
# composition with the parent's local-to-parent, and so on up the armature is
# the local-to-armature transform
#
#     L(b) = T(root) ... T(ppb) T(pb) T(b)
#
# where pb is the parent of b, ppb is the grandparent, etc. In Blender the
# local-to-armature is
#
#     LB(b) = E(root) P(root) ... E(ppb) P(ppb) E(pb) P(pb) E(b) P(b)
#
# where E(b) is a TR transform for the edit bone and P(b) is a TRS transform for
# the pose bone.
#
# NOTE: I am not entirely sure of that formula.
#
# In the rest position P(b;rest) = 1 for all b, so we would like to just make
# E(b) = T(b;rest), but we can't since T(b;rest) might have a scaling, and we
# also want to try to rotate T(b) so we can pick which way the Blender
# octahedron points.
#
# So we're going to change T(b). For every bone b pick a rotation cr(b) and a
# scalar cs(b) and define the correction matrix for b to be
#
#     C(b) = Rot[cr(b)] HomScale[cs(b)]
#
# and transform T(b) to
#
#     T'(b) = C(pb)^{-1} T(b) C(b)
#
# If we compute L'(b) using the T'(b), most of the C terms cancel out and we get
#
#     L'(b) = L(b) C(b)
#
# This is close enough; we'll be able to cancel off the extra C(b) later.
#
# How do we pick C(b)? Assume we've already computed C(pb) and calculate T'(b)
#
#       T'(b)
#     = C(pb)^{-1} T(b) C(b)
#     = Rot[cr(pb)^{-1}] HomScale[1/cs(pb)]
#       Trans[t] Rot[r] Scale[s]
#       Rot[cr(b)] HomScale[cs(b)]
#     { floating the Trans to the left, combining Rots }
#     = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]
#       Rot[cr(pb)^{-1} r] HomScale[1/cs(pb)] Scale[s]
#       Rot[cr(b)] HomScale[cs(b)]
#
# Now assume Scale[s] = HomScale[s] (and s is not 0), ie. the bone has a
# homogeneous scaling. Then we can rearrange this and get
#
#       Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]
#       Rot[cr(pb)^{-1} r cr(b)]
#       HomScale[s cs(b) / cs(pb)]
#
# Now if we want the rotation to be R we can pick cr(b) = r^{-1} cr(pb) R. We
# also want the scale to be 1, because again, E(b) has a scaling of 1 in Blender
# always, so we pick cs(b) = cs(pb) / s.
#
# Okay, cool, so this is now a TR matrix and we can identify it with E(b).
#
# But what if Scale[s] **isn't** homogeneous? We appear to have no choice but to
# put it on P(b;loadtime) for some non-rest pose we'll set at load time. This is
# unfortunate because the rest pose in Blender won't be the same as the rest
# pose in glTF (and there's inverse bind matrix fallout too).
#
# So in that case we'll take C(b) = 1, and set
#
#     E(b) = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ] Rot[cr(pb)^{-1} r]
#     P(b;loadtime) = Scale[s / cs(pb)]
#
# So in both cases we now have LB(b) = L'(b).
#
# TODO: we can still pick a rotation when the scaling is heterogeneous
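
# The telescoping cancellation claimed above (T'(b) = C(pb)^{-1} T(b) C(b)
# composing to L'(b) = L(b) C(b)) can be checked numerically. A minimal sketch
# with plain 4x4 row-major matrices; this is an illustration only, since
# mathutils is available only inside Blender:
#
# ```python
# import math
#
# def matmul(a, b):
#     # Row-major 4x4 matrix product
#     return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
#             for i in range(4)]
#
# def rot_z(angle):
#     c, s = math.cos(angle), math.sin(angle)
#     return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
#
# def trans(x, y, z):
#     return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]
#
# def homscale(s):
#     return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]
#
# def correction(angle, s):
#     # C(b) = Rot[cr(b)] HomScale[cs(b)]
#     return matmul(rot_z(angle), homscale(s))
#
# def inv_correction(angle, s):
#     # C(b)^{-1} = HomScale[1/cs(b)] Rot[cr(b)^{-1}]
#     return matmul(homscale(1.0 / s), rot_z(-angle))
#
# # A three-bone chain with arbitrary locals T(b) and corrections C(b)
# T = [matmul(trans(1, 2, 3), rot_z(0.3)),
#      matmul(trans(-1, 0, 4), rot_z(-1.1)),
#      matmul(trans(0, 5, 0), rot_z(2.0))]
# C = [(0.7, 2.0), (-0.4, 0.5), (1.3, 3.0)]
#
# # T'(b) = C(pb)^{-1} T(b) C(b); the armature's correction is the identity
# prev = (0.0, 1.0)
# T_prime = []
# for t, c in zip(T, C):
#     T_prime.append(matmul(matmul(inv_correction(*prev), t), correction(*c)))
#     prev = c
#
# def compose(ms):
#     out = ms[0]
#     for m in ms[1:]:
#         out = matmul(out, m)
#     return out
#
# # The C terms telescope: L'(b) = L(b) C(b) for the last bone b
# lhs = compose(T_prime)
# rhs = matmul(compose(T), correction(*C[-1]))
# assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
#            for i in range(4) for j in range(4))
# ```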

# Maps an axis into a rotation carrying that axis into +Y
AXIS_TO_PLUS_Y = {
    '-X': Euler([0, 0, -pi/2]).to_quaternion(),
    '+X': Euler([0, 0, pi/2]).to_quaternion(),
    '-Y': Euler([pi, 0, 0]).to_quaternion(),
    '+Y': Euler([0, 0, 0]).to_quaternion(),
    '-Z': Euler([pi/2, 0, 0]).to_quaternion(),
    '+Z': Euler([-pi/2, 0, 0]).to_quaternion(),
}
def adjust_bones(op):
    # List of distances between bone heads (used for computing bone lengths)
    interbone_dists = []

    def visit_bone(vnode):
        t, r, s = vnode.trs

        cr_pb_inv = vnode.parent.correction_rotation.conjugated()
        cs_pb = vnode.parent.correction_homscale

        # Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ]
        editbone_t = mul(cr_pb_inv, t) / cs_pb

        if is_non_degenerate_homscale(s):
            # s is a homogeneous scaling (ie. scalar multiplication)
            s = s[0]

            # cs(b) = cs(pb) / s
            vnode.correction_homscale = cs_pb / s

            if op.options['bone_rotation_mode'] == 'POINT_TO_CHILDREN':
                # We always pick a rotation for cr(b) that is, up to sign, a permutation of
                # the basis vectors. This is necessary for some of the algebra to work out
                # in animation importing.

                # General idea: assume we have one child. We want to rotate so
                # that our tail comes close to the child's head. Our tail lies
                # on our +Y axis. The child's head is going to be Rot[cr(b)^{-1}]
                # child_t / cs(b) where b is us and child_t is the child's
                # trs[0]. So we want to choose cr(b) so that this is as close as
                # possible to +Y, ie. we want to rotate it so that its largest
                # component is along the +Y axis. Note that only the sign of
                # cs(b) affects this, not its magnitude (since the largest
                # component of v, 2v, 3v, etc. is always the same one).

                # Pick the target to rotate towards. If we have one child, use
                # that.
                if len(vnode.children) == 1:
                    target = vnode.children[0].trs[0]
                elif len(vnode.children) == 0:
                    # As though we had a child displaced the same way we were
                    # from our parent.
                    target = vnode.trs[0]
                else:
                    # Mean of all our children.
                    center = Vector((0, 0, 0))
                    for child in vnode.children:
                        center += child.trs[0]
                    center /= len(vnode.children)
                    target = center
                if cs_pb / s < 0:
                    target = -target

                x, y, z = abs(target[0]), abs(target[1]), abs(target[2])
                if x > y and x > z:
                    axis = '-X' if target[0] < 0 else '+X'
                elif z > x and z > y:
                    axis = '-Z' if target[2] < 0 else '+Z'
                else:
                    axis = '-Y' if target[1] < 0 else '+Y'

                cr_inv = AXIS_TO_PLUS_Y[axis]
                cr = cr_inv.conjugated()

            elif op.options['bone_rotation_mode'] == 'NONE':
                cr = Quaternion((1, 0, 0, 0))

            else:
                assert(False)

            vnode.correction_rotation = cr

            # cr(pb)^{-1} r cr(b)
            editbone_r = mul(mul(cr_pb_inv, r), cr)

        else:
            # TODO: we could still use a rotation here.
            # C(b) = 1
            vnode.correction_rotation = Quaternion((1, 0, 0, 0))
            vnode.correction_homscale = 1
            # E(b) = Trans[ Rot[cr(pb)^{-1}] t / cs(pb) ] Rot[cr(pb)^{-1} r]
            # P(b;loadtime) = Scale[s / cs(pb)]
            editbone_r = mul(cr_pb_inv, r)
            vnode.posebone_s = s / cs_pb

        vnode.editbone_tr = editbone_t, editbone_r
        vnode.editbone_local_to_armature = mul(
            vnode.parent.editbone_local_to_armature,
            mul(Matrix.Translation(editbone_t), editbone_r.to_matrix().to_4x4())
        )

        interbone_dists.append(editbone_t.length)

        # Try getting a bone length for our parent. The length that makes its
        # tail meet our head is considered best. Since the tail always lies
        # along the +Y ray, the closer we are to this ray the better our
        # length will be compared to the lengths chosen by our siblings. This
        # is measured by the "goodness". Among siblings with equal goodness,
        # we pick the smaller length, so the parent's tail will meet the
        # nearest child.
        vnode.bone_length_goodness = -99999
        if vnode.parent.type == 'BONE':
            t_len = editbone_t.length
            if t_len > 0.0005:
                goodness = editbone_t.dot(Vector((0, 1, 0))) / t_len
                if goodness > vnode.parent.bone_length_goodness:
                    if vnode.parent.bone_length == 0 or vnode.parent.bone_length > t_len:
                        vnode.parent.bone_length = t_len
                    vnode.parent.bone_length_goodness = goodness

        # Recurse
        for child in vnode.children:
            if child.type == 'BONE':
                visit_bone(child)

        # We're on the way back up. Last chance to set our bone length if none
        # of our children did. Use our parent's, if it has one. Otherwise, use
        # the average inter-bone distance, if it's not 0. Otherwise, just use 1
        # -_-
        if not vnode.bone_length:
            if vnode.parent.bone_length:
                vnode.bone_length = vnode.parent.bone_length
            else:
                avg = sum(interbone_dists) / max(1, len(interbone_dists))
                if avg > 0.0005:
                    vnode.bone_length = avg
                else:
                    vnode.bone_length = 1

    def visit(vnode):
        if vnode.type == 'ARMATURE':
            for child in vnode.children:
                visit_bone(child)
        else:
            for child in vnode.children:
                visit(child)

    visit(op.root_vnode)

    # Remember that L'(b) = L(b) C(b)? Remember that we had to move any
    # mesh/camera/light on a bone to an object? That's the perfect place to put
    # a transform of C(b)^{-1} to cancel out that extra factor!
    def visit_object_child_of_bone(vnode):
        t, r, s = vnode.trs

        # This moves us back along the bone, because for some reason Blender
        # puts us at the tail of the bone, not the head
        t -= Vector((0, vnode.parent.bone_length, 0))

        #   Rot[cr^{-1}] HomScale[1/cs] Trans[t] Rot[r] Scale[s]
        # = Trans[ Rot[cr^{-1}] t / cs] Rot[cr^{-1} r] Scale[s / cs]
        cr_inv = vnode.parent.correction_rotation.conjugated()
        cs = vnode.parent.correction_homscale
        t = mul(cr_inv, t) / cs
        r = mul(cr_inv, r)
        s /= cs

        vnode.trs = t, r, s

    def visit(vnode):
        if vnode.type == 'OBJECT' and vnode.parent.type == 'BONE':
            visit_object_child_of_bone(vnode)
        for child in vnode.children:
            visit(child)

    visit(op.root_vnode)


# Helper functions below here:

def get_node_trs(op, node):
    """Gets the TRS proerties from a glTF node JSON object."""
    if 'matrix' in node:
        m = node['matrix']
        # column-major to row-major
        m = Matrix([m[0:4], m[4:8], m[8:12], m[12:16]])
        m.transpose()
        loc, rot, sca = m.decompose()
        # wxyz -> xyzw
        # convert_rotation will switch back
        rot = [rot[1], rot[2], rot[3], rot[0]]

    else:
        sca = node.get('scale', [1.0, 1.0, 1.0])
        rot = node.get('rotation', [0.0, 0.0, 0.0, 1.0])
        loc = node.get('translation', [0.0, 0.0, 0.0])

    # Switch glTF coordinates to Blender coordinates
    sca = op.convert_scale(sca)
    rot = op.convert_rotation(rot)
    loc = op.convert_translation(loc)

    return [Vector(loc), Quaternion(rot), Vector(sca)]
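
# The column-major handling above is easy to get wrong. A standalone sketch of
# the same reshape-then-transpose step, using plain lists in place of
# mathutils.Matrix (which exists only inside Blender):
#
# ```python
# # glTF stores node matrices as 16 floats in column-major order, so a
# # translation by (2, 3, 4) puts the offset in elements 12..14.
# flat = [1, 0, 0, 0,
#         0, 1, 0, 0,
#         0, 0, 1, 0,
#         2, 3, 4, 1]
#
# # Reading four floats at a time yields the matrix's *columns* as rows...
# cols_as_rows = [flat[0:4], flat[4:8], flat[8:12], flat[12:16]]
# # ...so transposing recovers the row-major matrix, as m.transpose() does.
# m = [list(row) for row in zip(*cols_as_rows)]
#
# # The translation now sits in the last column, as expected for row-major.
# assert [m[0][3], m[1][3], m[2][3]] == [2, 3, 4]
# ```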


def lowest_common_ancestor(vnodes):
    """
    Compute the lowest common ancestors of vnodes, ie. the lowest node of which
    all the given vnodes are (possibly impromper) descendants.
    """
    assert(vnodes)

    def ancestor_list(vnode):
        """
        Computes the ancestor-list of vnode: the list of all its ancestors
        starting at the root and ending at vnode itself.
        """
        chain = []
        while vnode:
            chain.append(vnode)
            vnode = vnode.parent
        chain.reverse()
        return chain

    def first_difference(l1, l2):
        """
        Returns the index of the first difference in two lists, or None if one is
        a prefix of the other.
        """
        i = 0
        while True:
            if i == len(l1) or i == len(l2):
                return None
            if l1[i] != l2[i]:
                return i
            i += 1

    # Ancestor list for the lowest common ancestor so far
    lowest_ancestor_list = ancestor_list(vnodes[0])

    for vnode in vnodes[1:]:
        cur_ancestor_list = ancestor_list(vnode)
        d = first_difference(lowest_ancestor_list, cur_ancestor_list)
        if d is None:
            if len(cur_ancestor_list) < len(lowest_ancestor_list):
                lowest_ancestor_list = cur_ancestor_list
        else:
            lowest_ancestor_list = lowest_ancestor_list[:d]

    return lowest_ancestor_list[-1]
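
# A quick standalone check of the ancestor-list approach. This is a compressed
# variant of the same common-prefix logic, with a hypothetical minimal Node
# class standing in for VNode:
#
# ```python
# class Node:
#     def __init__(self, parent=None):
#         self.parent = parent
#
# def ancestor_list(node):
#     chain = []
#     while node:
#         chain.append(node)
#         node = node.parent
#     chain.reverse()
#     return chain
#
# def lca(nodes):
#     lowest = ancestor_list(nodes[0])
#     for node in nodes[1:]:
#         cur = ancestor_list(node)
#         # Keep only the common prefix of the two ancestor lists
#         i = 0
#         while i < len(lowest) and i < len(cur) and lowest[i] == cur[i]:
#             i += 1
#         lowest = lowest[:i]
#     return lowest[-1]
#
# #      root
# #      /  \
# #     a    b
# #    / \
# #   c   d
# root = Node()
# a, b = Node(root), Node(root)
# c, d = Node(a), Node(a)
#
# assert lca([c, d]) is a
# assert lca([c, b]) is root
# assert lca([c, a]) is a   # a node is an (improper) ancestor of itself
# ```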


def insert_above(vnode, new_parent):
    """
    Inserts new_parent between vnode and its parent. That is, turn

        parent -> sister              parent -> sister
               -> vnode      into            -> new_parent -> vnode
               -> sister                     -> sister
    """
    if not vnode.parent:
        vnode.parent = new_parent
        new_parent.parent = None
        new_parent.children = [vnode]
    else:
        parent = vnode.parent
        i = parent.children.index(vnode)
        parent.children[i] = new_parent
        new_parent.parent = parent
        new_parent.children = [vnode]
        vnode.parent = new_parent


def insert_below(vnode, new_child):
    """
    Insert new_child between vnode and its children. That is, turn

        vnode -> child              vnode -> new_child -> child
              -> child     into                        -> child
              -> child                                 -> child
    """
    children = vnode.children
    vnode.children = [new_child]
    new_child.parent = vnode
    new_child.children = children
    for child in children:
        child.parent = new_child


def remove_vnode(vnode):
    """
    Remove vnode from the tree, replacing it with its children. That is, turn

        parent -> sister                  parent -> sister
               -> vnode -> child   into          -> child
               -> sister                         -> sister
    """
    assert(vnode.parent) # will never be called on the root

    parent = vnode.parent
    children = vnode.children

    i = parent.children.index(vnode)
    parent.children = (
        parent.children[:i] +
        children +
        parent.children[i+1:]
    )
    for child in children:
        child.parent = parent

    vnode.parent = None
    vnode.children = []
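
# The surgery helpers above maintain a simple parent/child invariant that can
# be exercised with a hypothetical stub class (just .parent and .children,
# standing in for VNode); the logic below mirrors insert_above/remove_vnode:
#
# ```python
# class N:
#     def __init__(self, name):
#         self.name = name
#         self.parent = None
#         self.children = []
#
# def link(parent, child):
#     child.parent = parent
#     parent.children.append(child)
#
# def insert_above(vnode, new_parent):
#     # Splice new_parent between vnode and its parent, keeping sibling order
#     parent = vnode.parent
#     i = parent.children.index(vnode)
#     parent.children[i] = new_parent
#     new_parent.parent = parent
#     new_parent.children = [vnode]
#     vnode.parent = new_parent
#
# def remove_vnode(vnode):
#     # Replace vnode with its children in its parent's child list
#     parent = vnode.parent
#     i = parent.children.index(vnode)
#     parent.children[i:i + 1] = vnode.children
#     for child in vnode.children:
#         child.parent = parent
#     vnode.parent = None
#     vnode.children = []
#
# root, a, b = N('root'), N('a'), N('b')
# link(root, a)
# link(root, b)
#
# arma = N('arma')
# insert_above(a, arma)
# assert [c.name for c in root.children] == ['arma', 'b']
# assert arma.children == [a] and a.parent is arma
#
# # remove_vnode undoes the insertion, restoring the original shape
# remove_vnode(arma)
# assert [c.name for c in root.children] == ['a', 'b']
# assert a.parent is root
# ```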


def is_non_degenerate_homscale(s):
    """Returns true if Scale[s] is multiplication by a non-zero scalar."""
    largest = max(abs(x) for x in s)
    smallest = min(abs(x) for x in s)

    if smallest < 1e-5:
        # Too small; consider it zero
        return False
    return largest - smallest < largest * 0.001
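
# The check combines an absolute floor (anything under 1e-5 counts as zero)
# with a 0.1% relative-spread test. A few illustrative cases (the sample
# values are arbitrary):
#
# ```python
# def is_non_degenerate_homscale(s):
#     largest = max(abs(x) for x in s)
#     smallest = min(abs(x) for x in s)
#     if smallest < 1e-5:
#         # Too small; consider it zero
#         return False
#     return largest - smallest < largest * 0.001
#
# assert is_non_degenerate_homscale((2.0, 2.0, 2.0))         # uniform
# assert is_non_degenerate_homscale((-2.0, -2.0, -2.0))      # negative scalar
# assert is_non_degenerate_homscale((2.0, 2.0005, 2.0))      # within 0.1%
# assert not is_non_degenerate_homscale((1.0, 2.0, 1.0))     # heterogeneous
# assert not is_non_degenerate_homscale((1e-7, 1e-7, 1e-7))  # effectively zero
# ```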


================================================
FILE: deploy.py
================================================
import argparse
import os
import re
import subprocess

import make_package


def replace_in_file(file, expr, new_substr):
    lines = []
    regex = re.compile(expr, re.IGNORECASE)
    with open(file) as infile:
        for line in infile:
            line = regex.sub(new_substr, line)
            lines.append(line)
    with open(file, 'w') as outfile:
        for line in lines:
            outfile.write(line)
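
# For instance, the version-bump pattern used below rewrites a bl_info line
# like this (a sketch of the same re.sub call on a single string rather than
# a file; the sample version numbers are made up):
#
# ```python
# import re
#
# # Same pattern deploy.py applies to addons/io_scene_gltf_ksons/__init__.py
# regex = re.compile(r"'version': \([0-9\, ]+\)", re.IGNORECASE)
#
# line = "    'version': (0, 4, 1),"
# bumped = regex.sub("'version': (0, 5, 0)", line)
# assert bumped == "    'version': (0, 5, 0),"
# ```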


this_dir = os.path.dirname(os.path.abspath(__file__))

parser = argparse.ArgumentParser()
parser.add_argument('version')
args = parser.parse_args()

version = args.version.split('.')
version_string = '.'.join(version)
version_tuple = '(%s)' % ', '.join(version)

main_file = os.path.join(this_dir, 'addons', 'io_scene_gltf_ksons', '__init__.py')
readme_file = os.path.join(this_dir, 'README.md')

replace_in_file(main_file,
                r"'version': \([0-9\, ]+\)",
                "'version': {}".format(version_tuple))

replace_in_file(readme_file,
                r'download/v[0-9\.]+/io_scene_gltf_ksons-[0-9\.]+.zip',
                'download/v{}/io_scene_gltf_ksons-{}.zip'.format(version_string, version_string))

os.chdir(this_dir)
subprocess.call(['git', 'add', main_file, readme_file])
subprocess.call(['git', 'commit', '-m', 'Bump version number to {}'.format(version_string)])
subprocess.call(['git', 'tag', 'v{}'.format(version_string)])

make_package.make_package(suffix=version_string)
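For illustration, the substitution `replace_in_file` performs on the add-on's `bl_info` version line can be sketched as follows (the sample line and the new version number are hypothetical; the pattern is the one used above):

```python
import re

# A line as it might appear in addons/io_scene_gltf_ksons/__init__.py.
line = "    'version': (0, 3, 1),"

# Same pattern deploy.py uses to find the version tuple.
regex = re.compile(r"'version': \([0-9\, ]+\)", re.IGNORECASE)

# Substitute in a new (hypothetical) version tuple.
bumped = regex.sub("'version': (0, 4, 0)", line)
assert bumped == "    'version': (0, 4, 0),"
```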


================================================
FILE: make_package.py
================================================
import os
import shutil
import tempfile


def make_package(suffix=None):
    this_dir = os.path.dirname(os.path.abspath(__file__))
    dist_dir = os.path.join(this_dir, 'dist')

    if not os.path.exists(dist_dir):
        os.makedirs(dist_dir)

    with tempfile.TemporaryDirectory() as tmpdir:
        shutil.copytree(
            os.path.join(this_dir, 'addons', 'io_scene_gltf_ksons'),
            os.path.join(tmpdir, 'io_scene_gltf_ksons'),
            ignore=shutil.ignore_patterns('__pycache__'))

        zip_name = 'io_scene_gltf_ksons'
        if suffix:
            zip_name += '-' + suffix

        shutil.make_archive(
            os.path.join(dist_dir, zip_name),
            'zip',
            tmpdir)


if __name__ == '__main__':
    make_package()
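A hedged sketch of what `make_package()` produces: a zip whose top-level entry is the add-on directory, with `__pycache__` pruned by `ignore_patterns`. The file names below are hypothetical stand-ins:

```python
import os
import shutil
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as out:
    # Fake add-on tree containing a __pycache__ directory to be dropped.
    pkg = os.path.join(src, 'io_scene_gltf_ksons')
    os.makedirs(os.path.join(pkg, '__pycache__'))
    open(os.path.join(pkg, '__init__.py'), 'w').close()
    open(os.path.join(pkg, '__pycache__', 'junk.pyc'), 'w').close()

    # Stage a pruned copy, then zip the staging directory.
    staging = os.path.join(out, 'staging')
    shutil.copytree(pkg, os.path.join(staging, 'io_scene_gltf_ksons'),
                    ignore=shutil.ignore_patterns('__pycache__'))
    archive = shutil.make_archive(os.path.join(out, 'io_scene_gltf_ksons'),
                                  'zip', staging)
    names = zipfile.ZipFile(archive).namelist()

assert 'io_scene_gltf_ksons/__init__.py' in names
assert not any('__pycache__' in n for n in names)
```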


================================================
FILE: setup.cfg
================================================
[flake8]
max-line-length = 120

================================================
FILE: test/README.md
================================================
## Testing

The [glTF Sample Models](https://github.com/KhronosGroup/glTF-Sample-Models) are
used for automated testing of the importer. A model file is considered to pass
if importing it doesn't raise an exception.


### Instructions

To run the tests, use the command below. The first run fetches the sample
models (be warned: this is a big download). The optional `--exe` argument lets
you test against multiple Blender versions.

    ./test.py run [--exe BLENDER-EXE-PATH]

To display the results of the last test run, which are stored in `report.json`
in this directory

    ./test.py report

To display the import times from the last test run

    ./test.py report-times

You can use the exit code from `run` and `report` (success = 0) to determine
programmatically whether the tests passed.


================================================
FILE: test/bl_generate_report.py
================================================
"""
Runs tests and writes the results to the report.json file.

This should be executed inside Blender, not from normal Python!
"""

import glob
import json
import os
from timeit import default_timer as timer
import sys

import bpy

print('bpy.app.version:', bpy.app.version)
print('python sys.version:', sys.version)

base_dir = os.path.dirname(os.path.abspath(__file__))
samples_path = os.path.join(base_dir, 'glTF-Sample-Models', '2.0')
site_local_path = os.path.join(base_dir, 'site_local')
report_path = os.path.join(base_dir, 'report.json')

tests = []

files = (
    glob.glob(samples_path + '/**/*.gltf', recursive=True) +
    glob.glob(samples_path + '/**/*.glb', recursive=True) +
    glob.glob(site_local_path + '/**/*.gltf', recursive=True) +
    glob.glob(site_local_path + '/**/*.glb', recursive=True)
)

# Skip Draco encoded files for now
files = [fn for fn in files if 'Draco' not in fn]

for filename in files:
    short_name = os.path.relpath(filename, samples_path)
    print('\nTrying', short_name, '...')

    bpy.ops.wm.read_factory_settings()

    try:
        start_time = timer()
        bpy.ops.import_scene.gltf_ksons(filepath=filename)
        end_time = timer()
        print('[PASSED]\n')
        test = {
            'filename': short_name,
            'result': 'PASSED',
            'timeElapsed': end_time - start_time,
        }

    except Exception as e:
        print('[FAILED]\n')
        test = {
            'filename': short_name,
            'result': 'FAILED',
            'error': str(e),
        }

    tests.append(test)

report = {
    'blenderVersion': list(bpy.app.version),
    'tests': tests,
}

with open(report_path, 'w+') as f:
    json.dump(report, f, indent=4)


================================================
FILE: test/data/fin4_Ref.exr
================================================
[File too large to display: 15.6 MB]

================================================
FILE: test/data/renderScene.blend
================================================
[File too large to display: 39.4 MB]

================================================
FILE: test/site_local/.gitignore
================================================
*
!.gitignore
!README.md


================================================
FILE: test/site_local/README.md
================================================
Add your own test files here. They won't be tracked by git.


================================================
FILE: test/test.py
================================================
#!/usr/bin/env python3
"""
Run and report on automated tests for the importer.

You can read the test results programmatically (eg. for CI) from the
report.json file or by examining the exit code of this script. Possible
values are:

0 - All tests passed
1 - Some kind of error occurred (as distinct from "some test failed")
3 - At least one test failed
"""

import argparse
import json
import os
import subprocess
import sys

base_dir = os.path.dirname(os.path.abspath(__file__))
samples_path = os.path.join(base_dir, 'glTF-Sample-Models', '2.0')
report_path = os.path.join(base_dir, 'report.json')
test_script = os.path.join(base_dir, 'bl_generate_report.py')
scripts_dir = os.path.join(base_dir, os.pardir)

def cmd_get(args=None):
    """Get sample files by initializing git submodules."""
    try:
        print("Checking if we're in a git repo...")
        subprocess.run(
            ['git', 'rev-parse'],
            cwd=base_dir,
            check=True
        )
    except BaseException:
        print('Is git installed?')
        print('Did you get this repo through git (as opposed to eg. a zip)?')
        raise

    try:
        print("Fetching submodules (WARNING: large download)...")
        subprocess.run(
            ['git', 'submodule', 'update', '--init', '--recursive'],
            cwd=base_dir,
            check=True
        )
    except BaseException:
        print("Couldn't init submodules. Aborting")
        raise

    if not os.path.isdir(samples_path):
        print("Samples still aren't there! Aborting")
        raise Exception('no samples after initializing submodules')

    print('Good to go!')


def cmd_run(args):
    """Calls Blender to generate report.json file."""
    if not os.path.isdir(samples_path):
        print("Couldn't find glTF-Sample-Models/2.0/")
        print("I'll try to fetch it for you...")
        cmd_get()
        print('This step should only happen once.\n\n')

    exe = args.exe

    # Print Blender version for debugging
    try:
        subprocess.run([exe, '--version'], check=True)
    except BaseException:
        print("Couldn't run %s" % exe)
        print('Check that Blender is installed!')
        raise

    print()

    # We're going to try to run Blender in a clean-ish environment for testing.
    # We want to be sure we're using the current state of 'io_scene_gltf_ksons'.
    # The BLENDER_USER_SCRIPTS variable expects an addons/ directory structure,
    # which we have in the project's root directory.
    env = os.environ.copy()
    env['BLENDER_USER_SCRIPTS'] = scripts_dir
    subprocess.run(
        [
            exe,
            '-noaudio',  # set sound system to None (less output on stdout)
            '--background',  # run UI-less
            '--factory-startup',  # factory settings
            '--addons', 'io_scene_gltf_ksons',  # enable the addon
            '--python', test_script  # run the test script
        ],
        env=env,
        check=True
    )

    return cmd_report()


def cmd_report(args=None):
    """Print report from report.json file."""
    with open(report_path) as f:
        report = json.load(f)

    tests = report['tests']

    num_passed = 0
    num_failed = 0
    failures = []
    ok = '\033[32m' + 'ok' + '\033[0m'  # green 'ok'
    failed = '\033[31m' + 'FAILED' + '\033[0m'  # red 'FAILED'

    for test in tests:
        print('import', test['filename'], '... ', end='')
        if test['result'] == 'PASSED':
            print(ok, "(%.4f s)" % test['timeElapsed'])
            num_passed += 1
        else:
            print(failed)
            print(test['error'])
            num_failed += 1
            failures.append(test['filename'])

    if failures:
        print('\nfailures:')
        for name in failures:
            print('   ', name)

    result = ok if num_failed == 0 else failed
    print(
        '\ntest result: %s. %d passed; %d failed\n' %
        (result, num_passed, num_failed)
    )

    exit_code = 0 if num_failed == 0 else 3
    return exit_code


def cmd_report_times(args=None):
    """Prints the tests sorted by import time."""
    with open(report_path) as f:
        report = json.load(f)

    tests = [test for test in report['tests'] if test['result'] == 'PASSED']
    tests.sort(key=lambda test: test['timeElapsed'], reverse=True)

    for (num, test) in enumerate(tests, start=1):
        print('( #%-3d )  % 2.4fs   %s' % (num, test['timeElapsed'], test['filename']))


p = argparse.ArgumentParser(description='glTF importer tests')
subs = p.add_subparsers(title='subcommands')

run = subs.add_parser('run', help='Run tests and generate report')
run.add_argument('--exe', default='blender', help='Blender executable')
run.set_defaults(func=cmd_run)

get = subs.add_parser('get-samples', help='Fetch or update samples')
get.set_defaults(func=cmd_get)

report = subs.add_parser('report', help='Print last report')
report.set_defaults(func=cmd_report)

report_times = subs.add_parser('report-times', help='Print import times for last report')
report_times.set_defaults(func=cmd_report_times)

argv = sys.argv
if len(argv) == 1:
    print('assuming you wanted to run the tests\n')
    argv.append('run')
args = p.parse_args(argv[1:])
result = args.func(args)
if isinstance(result, int):
    sys.exit(result)
SYMBOL INDEX (132 symbols across 26 files)

FILE: addons/io_scene_gltf_ksons/__init__.py
  class ImportGLTF (line 37) | class ImportGLTF(bpy.types.Operator, ImportHelper):
    method draw (line 144) | def draw(self, context):
    method execute (line 175) | def execute(self, context):
  function menu_func_import (line 182) | def menu_func_import(self, context):
  function register (line 186) | def register():
  function unregister (line 195) | def unregister():

FILE: addons/io_scene_gltf_ksons/animation/__init__.py
  function quote (line 4) | def quote(s):
  function add_animations (line 13) | def add_animations(op):
  function create_nla_tracks (line 27) | def create_nla_tracks(op):

FILE: addons/io_scene_gltf_ksons/animation/curve.py
  class Curve (line 5) | class Curve:
    method for_sampler (line 7) | def for_sampler(op, sampler, num_targets=None):
    method num_components (line 32) | def num_components(self):
    method shorten_quaternion_paths (line 36) | def shorten_quaternion_paths(self):
    method make_fcurves (line 45) | def make_fcurves(self, op, action, data_path,

FILE: addons/io_scene_gltf_ksons/animation/material.py
  function add_material_animation (line 6) | def add_material_animation(op, anim_info, material_id):

FILE: addons/io_scene_gltf_ksons/animation/morph_weight.py
  function add_morph_weight_animation (line 8) | def add_morph_weight_animation(op, anim_info, node_id):
  function find_mesh_instances (line 45) | def find_mesh_instances(vnode):

FILE: addons/io_scene_gltf_ksons/animation/node_trs.py
  function add_node_trs_animation (line 11) | def add_node_trs_animation(op, anim_info, node_id):
  function object_trs (line 18) | def object_trs(op, anim_info, node_id):
  function bone_trs (line 64) | def bone_trs(op, anim_info, node_id):
  function exchange_scale_rot_matrix (line 216) | def exchange_scale_rot_matrix(r):

FILE: addons/io_scene_gltf_ksons/animation/precompute.py
  class AnimationInfo (line 4) | class AnimationInfo:
    method __init__ (line 5) | def __init__(self, anim_id):
  function animation_precomputation (line 33) | def animation_precomputation(op):
  function first_match (line 42) | def first_match(patterns, s):
  function gather_animation (line 50) | def gather_animation(op, anim_id):

FILE: addons/io_scene_gltf_ksons/buffer.py
  function create_buffer (line 12) | def create_buffer(op, idx):
  function create_buffer_view (line 35) | def create_buffer_view(op, idx):
  function create_accessor (line 51) | def create_accessor(op, idx):
  function create_accessor_from_properties (line 61) | def create_accessor_from_properties(op, accessor):

FILE: addons/io_scene_gltf_ksons/camera.py
  function create_camera (line 4) | def create_camera(op, idx):

FILE: addons/io_scene_gltf_ksons/compat.py
  function mul (line 8) | def mul(x, y): return x @ y
  function mul (line 10) | def mul(x, y): return x * y

FILE: addons/io_scene_gltf_ksons/importer.py
  class Importer (line 4) | class Importer:
    method __init__ (line 7) | def __init__(self, filepath, options):
    method do_import (line 12) | def do_import(self):
    method get (line 30) | def get(self, kind, id):
    method set_conversions (line 58) | def set_conversions(self):

FILE: addons/io_scene_gltf_ksons/light.py
  function create_light (line 5) | def create_light(op, idx):
  function cd2W (line 63) | def cd2W(intensity, efficiency, surface):
  function lux2W (line 73) | def lux2W(intensity, efficiency):

FILE: addons/io_scene_gltf_ksons/load.py
  function load (line 7) | def load(op):
  function parse_file (line 13) | def parse_file(op):
  function parse_gltf (line 32) | def parse_gltf(op, contents):
  function parse_glb (line 36) | def parse_glb(op, contents):
  function check_version (line 68) | def check_version(op):
  function check_extensions (line 94) | def check_extensions(op):

FILE: addons/io_scene_gltf_ksons/material/__init__.py
  function create_material (line 13) | def create_material(op, idx):
  function create_node_tree (line 91) | def create_node_tree(mc):
  function create_emissive (line 128) | def create_emissive(mc):
  function create_alpha_block (line 167) | def create_alpha_block(mc):
  function create_shaded (line 224) | def create_shaded(mc):
  function create_metalRough_pbr (line 235) | def create_metalRough_pbr(mc):
  function create_specGloss_pbr (line 257) | def create_specGloss_pbr(mc):
  function create_unlit (line 299) | def create_unlit(mc):
  function create_base_color (line 312) | def create_base_color(mc):
  function create_diffuse (line 356) | def create_diffuse(mc):
  function create_metal_roughness (line 400) | def create_metal_roughness(mc):
  function create_spec_roughness (line 448) | def create_spec_roughness(mc):
  function create_normal_block (line 498) | def create_normal_block(mc):
  function create_occlusion_block (line 518) | def create_occlusion_block(mc):
  class MaterialCreator (line 547) | class MaterialCreator:
    method new_node (line 551) | def new_node(self, opts):
    method adjoin (line 597) | def adjoin(self, opts):
    method adjoin_split (line 614) | def adjoin_split(self, opts1, opts2, left_block):
    method connect (line 648) | def connect(self, connector, connector_key, node, socket_type, socket_...
    method connect_value (line 668) | def connect_value(self, value, node, socket_type, socket_key):
    method connect_block (line 679) | def connect_block(self, block, output_key, socket):
  class Value (line 683) | class Value:
    method __init__ (line 693) | def __init__(self, value, record_to=''):

FILE: addons/io_scene_gltf_ksons/material/block.py
  class Block (line 8) | class Block:
    method __init__ (line 9) | def __init__(self, *blocks):
    method add (line 18) | def add(self, child):
    method move_by (line 35) | def move_by(self, delta):
    method pad_top (line 41) | def pad_top(self, padding):
    method center_at_origin (line 47) | def center_at_origin(self):
    method empty (line 52) | def empty(width=100, height=140):
    method row_align_center (line 65) | def row_align_center(blocks, gutter=100):
    method col_align_right (line 89) | def col_align_right(blocks, gutter=100):
  function top_left (line 102) | def top_left(block):
  function bottom_right (line 108) | def bottom_right(block):
  function move_by (line 114) | def move_by(block, delta):
  function width (line 121) | def width(block):
  function height (line 127) | def height(block):
  function move_to (line 133) | def move_to(block, pos):
  function center_at_origin (line 138) | def center_at_origin(block):

FILE: addons/io_scene_gltf_ksons/material/image.py
  function create_image (line 8) | def create_image(op, idx):

FILE: addons/io_scene_gltf_ksons/material/node_groups.py
  function create_group (line 16) | def create_group(op, name):
  function load (line 95) | def load():
  function serialize_group (line 112) | def serialize_group(group):
  function serialize (line 201) | def serialize():

FILE: addons/io_scene_gltf_ksons/material/precompute.py
  class MaterialInfo (line 3) | class MaterialInfo:
    method __init__ (line 4) | def __init__(self):
  function material_procomputation (line 17) | def material_procomputation(op):

FILE: addons/io_scene_gltf_ksons/material/texture.py
  function create_texture_block (line 14) | def create_texture_block(mc, texture_type, info):

FILE: addons/io_scene_gltf_ksons/mesh.py
  function create_mesh (line 8) | def create_mesh(op, mesh_spec):
  function mesh_name (line 78) | def mesh_name(op, mesh_spec):
  function bmesh_to_mesh (line 92) | def bmesh_to_mesh(bme, me):
  function get_layer (line 121) | def get_layer(bme_layers, name):
  function add_primitive_to_bmesh (line 128) | def add_primitive_to_bmesh(op, bme, primitive, material_index):
  function edges_and_tris (line 312) | def edges_and_tris(indices, mode):

FILE: addons/io_scene_gltf_ksons/node.py
  function realize_vtree (line 7) | def realize_vtree(op):
  function realize_object (line 91) | def realize_object(op, vnode):
  function realize_armature (line 131) | def realize_armature(op, vnode):
  function realize_bone (line 148) | def realize_bone(op, vnode):
  function realize_root (line 173) | def realize_root(op, vnode):
  function link_vnode_into_scene (line 186) | def link_vnode_into_scene(vnode, scene):
  function link_vnode_into_scene (line 191) | def link_vnode_into_scene(vnode, scene):
  function link_tree_into_scene (line 200) | def link_tree_into_scene(vnode, scene):
  function link_everything_into_scene (line 206) | def link_everything_into_scene(op):

FILE: addons/io_scene_gltf_ksons/scene.py
  function link_vnode_into_collection (line 5) | def link_vnode_into_collection(vnode, collection):
  function link_tree_into_collection (line 11) | def link_tree_into_collection(vnode, collection):
  function import_scenes_as_collections (line 17) | def import_scenes_as_collections(op):

FILE: addons/io_scene_gltf_ksons/vnode.py
  class VNode (line 11) | class VNode:
    method __init__ (line 12) | def __init__(self):
  function create_vtree (line 55) | def create_vtree(op):
  function initial_vtree (line 72) | def initial_vtree(op):
  function insert_armatures (line 140) | def insert_armatures(op):
  function move_instances (line 206) | def move_instances(op):
  function adjust_bones (line 387) | def adjust_bones(op):
  function get_node_trs (line 557) | def get_node_trs(op, node):
  function lowest_common_ancestor (line 582) | def lowest_common_ancestor(vnodes):
  function insert_above (line 629) | def insert_above(vnode, new_parent):
  function insert_below (line 650) | def insert_below(vnode, new_child):
  function remove_vnode (line 666) | def remove_vnode(vnode):
  function is_non_degenerate_homscale (line 692) | def is_non_degenerate_homscale(s):

FILE: deploy.py
  function replace_in_file (line 9) | def replace_in_file(file, expr, new_substr):

FILE: make_package.py
  function make_package (line 6) | def make_package(suffix=None):

FILE: test/test.py
  function cmd_get (line 26) | def cmd_get(args=None):
  function cmd_run (line 58) | def cmd_run(args):
  function cmd_report (line 100) | def cmd_report(args=None):
  function cmd_report_times (line 139) | def cmd_report_times(args=None):

About this extraction

This page contains the full source code of the ksons/gltf-blender-importer GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 41 files (55.1 MB), approximately 42.4k tokens, and a symbol index with 132 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
