[
  {
    "path": ".appveyor.yml",
    "content": "environment:\n\n  matrix:\n\n    # For Python versions available on Appveyor, see\n    # http://www.appveyor.com/docs/installed-software#python\n\n    - PYTHON: \"C:\\\\Python27\"\n    - PYTHON: \"C:\\\\Python35\"\n    - PYTHON: \"C:\\\\Python27-x64\"\n    - PYTHON: \"C:\\\\Python35-x64\"\n    - PYTHON: \"C:\\\\Python36-x64\"\n\ninstall:\n  # We need wheel installed to build wheels\n  - \"%PYTHON%\\\\python.exe -m pip install wheel\"\n  - \"%PYTHON%\\\\python.exe -m pip install cython\"\n  - \"%PYTHON%\\\\python.exe -m pip install -r requirements.txt\"\n  - \"%PYTHON%\\\\python.exe -m pip install -e .\"\n\nbuild: off\n\ntest_script:\n  # Note that you must use the environment variable %PYTHON% to refer to\n  # the interpreter you're using - Appveyor does not do anything special\n  # to put the Python version you want to use on PATH.\n  - \"%PYTHON%\\\\python.exe -m pytest tests/\"\n\nafter_test:\n  # This step builds your wheels. Again, use %PYTHON% to get the correct\n  # interpreter.\n  - \"%PYTHON%\\\\python.exe setup.py bdist_wheel\"\n\nartifacts:\n  # bdist_wheel puts your built wheel in the dist directory\n  - path: dist\\*\n\n#on_success:\n#  You can use this step to upload your artifacts to a public website.\n#  See Appveyor's documentation for more details. Or you can simply\n#  access your wheels from the Appveyor \"artifacts\" tab for your build.\n"
  },
  {
    "path": ".gitignore",
    "content": "*.weights\n\n# Cython / C extensions\ncythonize.json\nspacy/*.html\n*.cpp\n*.so\n\n# Vim / VSCode / editors\n*.swp\n*.sw*\nProfile.prof\n.vscode\n.sass-cache\n\n# Python\n.Python\n.python-version\n__pycache__/\n*.py[cod]\n.env/\n.env*\n.~env/\n.venv\nvenv/\n.dev\n.denv\n.pypyenv\n\n# Distribution / packaging\nenv/\nbuild/\ndevelop-eggs/\ndist/\neggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\n*.egg-info/\n.installed.cfg\n*.egg\n.eggs\nMANIFEST\n\n# Temporary files\n*.~*\ntmp/\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.cache\nnosetests.xml\ncoverage.xml\n\n# Translations\n*.mo\n\n# Mr Developer\n.mr.developer.cfg\n.project\n.pydevproject\n\n# Rope\n.ropeproject\n\n# Django stuff:\n*.log\n*.pot\n\n# Windows\n*.bat\nThumbs.db\nDesktop.ini\n\n# Mac OS X\n*.DS_Store\n\n# Komodo project files\n*.komodoproject\n\n# Other\n*.tgz\n\n# Pycharm project files\n*.idea\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: python\n\npython:\n  - \"2.7\"\n  - \"3.5\"\n  - \"3.6\"\n\ninstall:\n  - if [ \"$TRAVIS_OS_NAME\" == \"linux\" ] ; then sudo apt-get install libopenblas-dev ; fi\n  - pip install -r requirements.txt\n  - pip install cython\n  - python setup.py build_ext --inplace\n  - pip install -e .\n  - export PYTHONPATH=`pwd`\n  - python -m lightnet download tiny-yolo\n  - pip install pytest\n\nscript:\n  - python -m pytest tests\n\n\nnotifications:\n  email: false\n  slack:\n    secure: VSqtxg7u4NTZRfoZqjxPRPVS92KTy/mp62egfDZ9ujTP4VPxNe15QZuTB6r/ICPgEYqBtdhLc/aetuBcemt0bHfentV0F7bz7iDY/AFQC1h1i4G0D0wKMufuqOJFw9MOp2tSpuvCVzhCxR+Ymx/F9SaeYBAiwBawce4wu+qu3lA=\n"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (C) 2017 ExplosionAI UG (haftungsbeschränkt), 2014-2017 Joseph Redmon\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "MANIFEST.in",
    "content": "include LICENSE\ninclude README.rst\ninclude bin/cythonize.py\ninclude lightnet/_darknet/Makefile\nrecursive-include lightnet/_darknet *.c\nrecursive-include lightnet/_darknet *.cu\nrecursive-include lightnet/_darknet *.h\nrecursive-include lightnet/data *.cfg\nrecursive-include lightnet/data *.data\nrecursive-include lightnet/data *.names\n"
  },
  {
    "path": "README.rst",
    "content": "LightNet: Bringing pjreddie's DarkNet out of the shadows\n********************************************************\n\nLightNet provides a simple and efficient Python interface to\n`DarkNet <https://github.com/pjreddie/darknet>`_, a neural network library\nwritten by Joseph Redmon that's well known for its state-of-the-art object\ndetection models, `YOLO and YOLOv2 <https://pjreddie.com/darknet/yolo/>`_.\nLightNet's main purpose for now is to power `Prodigy <https://prodi.gy>`_'s\nupcoming object detection and image segmentation features. However, it may be\nuseful to anyone interested in the DarkNet library.\n\n.. image:: https://img.shields.io/travis/explosion/lightnet/master.svg?style=flat-square\n    :target: https://travis-ci.org/explosion/lightnet\n    :alt: Build Status\n\n.. image:: https://img.shields.io/github/release/explosion/lightnet.svg?style=flat-square\n    :target: https://github.com/explosion/lightnet/releases\n    :alt: Current Release Version\n\n.. image:: https://img.shields.io/pypi/v/lightnet.svg?style=flat-square\n    :target: https://pypi.python.org/pypi/lightnet\n    :alt: pypi Version\n\n.. image:: https://img.shields.io/twitter/follow/explosion_ai.svg?style=social&label=Follow\n    :target: https://twitter.com/explosion_ai\n    :alt: Explosion AI on Twitter\n\n----\n\nLightNet's features include:\n\n* **State-of-the-art object detection**: YOLOv2 offers unmatched speed/accuracy trade-offs.\n* **Easy-to-use via Python**: Pass in byte strings, get back numpy arrays with bounding boxes.\n* **Lightweight and self-contained**: No dependency on large frameworks like TensorFlow, PyTorch etc.
The DarkNet source is provided in the package.\n* **Easy to install**: Just ``pip install lightnet`` and ``python -m lightnet download yolo``.\n* **Cross-platform**: Works on macOS and Linux, on Python 2.7, 3.5 and 3.6.\n* **10x faster on CPU**: Uses BLAS for its matrix multiplication routines.\n* **Not named DarkNet**: Avoids some potentially awkward misunderstandings.\n\n.. image:: https://user-images.githubusercontent.com/13643239/33104476-a31678ce-cf28-11e7-993f-872f3234f4b5.png\n    :alt: LightNet \"logo\"\n\n🌓 Installation\n===============\n\n==================== ===\n**Operating system** macOS / OS X, Linux (Windows coming soon)\n**Python version**   CPython 2.7, 3.5, 3.6. Only 64 bit.\n**Package managers** pip (source packages only)\n==================== ===\n\nLightNet requires an installation of `OpenBLAS <https://www.openblas.net/>`_:\n\n.. code:: bash\n\n    sudo apt-get install libopenblas-dev\n\nLightNet can be installed via pip:\n\n.. code:: bash\n\n    pip install lightnet\n\nOnce you've installed LightNet, you can install a model using the\n``lightnet download`` command. This will save the models in the\n``lightnet/data`` directory. If you've installed LightNet system-wide, make\nsure to run the command as administrator.\n\n.. code:: bash\n\n    python -m lightnet download tiny-yolo\n    python -m lightnet download yolo\n\nThe following models are currently available via the ``download`` command:\n\n===================== ======= ===\n``yolo.weights``      258 MB  `Direct download`__\n``tiny-yolo.weights`` 44.9 MB `Direct download`__\n===================== ======= ===\n\n__ https://pjreddie.com/media/files/yolo.weights\n__ https://pjreddie.com/media/files/tiny-yolo.weights\n\n🌓 Usage\n========\n\nAn object detection system predicts labelled bounding boxes on an image. The\nlabel scheme comes from the training data, so different models will have\ndifferent label sets.
`YOLOv2 <https://pjreddie.com/darknet/yolo/>`_ can detect\nobjects in images of any resolution. Smaller images will be faster to predict,\nwhile high resolution images will give you better object detection accuracy.\n\nImages can be loaded by file-path, by JPEG-encoded byte-string, or by numpy\narray. If passing in a numpy array, it should be of dtype float32, and shape\n``(width, height, colors)``.\n\n.. code:: python\n\n    import lightnet\n\n    model = lightnet.load('tiny-yolo')\n    image = lightnet.Image.from_bytes(open('eagle.jpg', 'rb').read())\n    boxes = model(image)\n\n``METHOD`` lightnet.load\n------------------------\n\nLoad a pre-trained model. If a ``path`` is provided, it should be a directory\ncontaining two files, named ``{name}.weights`` and ``{name}.cfg``. If a\n``path`` is not provided, the built-in data directory is used, which is\nlocated within the LightNet package.\n\n.. code:: python\n\n    model = lightnet.load('tiny-yolo')\n    model = lightnet.load(path='/path/to/yolo')\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``name``    unicode     Name of the model located in the data directory, e.g. ``tiny-yolo``.\n``path``    unicode     Optional path to a model data directory.\n**RETURNS** ``Network`` The loaded model.\n=========== =========== ===========\n\n----\n\n🌓 Network\n==========\n\nThe neural network object. Wraps DarkNet's ``network`` struct.\n\n``CLASSMETHOD`` Network.load\n----------------------------\n\nLoad a pre-trained model. Identical to ``lightnet.load()``.\n\n``METHOD`` Network.__call__\n---------------------------\n\nDetect bounding boxes given an ``Image`` object. The bounding boxes are\nprovided as a list, with each entry\n``(class_id, class_name, prob, [(x, y, width, height)])``, where ``x`` and\n``y`` are the pixel coordinates of the centre of the box, and ``width`` and\n``height`` describe its dimensions.
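For drawing or evaluation, the centre-based ``(x, y, width, height)`` format can be converted to corner coordinates with a small helper. This is a sketch only; ``xywh_to_corners`` is a hypothetical name, not part of the LightNet API:

```python
def xywh_to_corners(x, y, width, height):
    # (x, y) is the centre of the box in pixels; width and height are
    # its dimensions. Returns (x_min, y_min, x_max, y_max).
    half_w = width / 2.0
    half_h = height / 2.0
    return (x - half_w, y - half_h, x + half_w, y + half_h)
```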
``class_id`` is the integer\nindex of the object type, ``class_name`` is a string with the object type, and\n``prob`` is a float indicating the detection score. The ``thresh`` parameter\ncontrols the prediction threshold. Objects with a detection probability above\n``thresh`` are returned. We don't know what ``hier_thresh`` or ``nms`` do.\n\n.. code:: python\n\n    boxes = model(image, thresh=0.5, hier_thresh=0.5, nms=0.45)\n\n=============== =========== ===========\nArgument        Type        Description\n=============== =========== ===========\n``image``       ``Image``   The image to process.\n``thresh``      float       Prediction threshold.\n``hier_thresh`` float\n``nms``         float\n**RETURNS**     list        The bounding boxes, as ``(class_id, class_name, prob, xywh)`` tuples.\n=============== =========== ===========\n\n``METHOD`` Network.update\n-------------------------\n\nUpdate the model on a batch of examples. The images should be provided as a\nlist of ``Image`` objects. The ``box_labels`` should be a list of ``BoxLabels``\nobjects. Returns a float indicating how much the model's prediction differed\nfrom the provided true labels.\n\n.. code:: python\n\n    loss = model.update([image1, image2], [box_labels1, box_labels2])\n\n============== =========== ===========\nArgument       Type        Description\n============== =========== ===========\n``images``     list        List of ``Image`` objects.\n``box_labels`` list        List of ``BoxLabels`` objects.\n**RETURNS**    float       The loss indicating how much the prediction differed from the provided labels.\n============== =========== ===========\n\n----\n\n🌓 Image\n========\n\nData container for a single image. Wraps DarkNet's ``image`` struct.\n\n``METHOD`` Image.__init__\n-------------------------\n\nCreate an image. ``data`` should be a numpy array of dtype float32, and shape\n``(width, height, colors)``.\n\n.. 
code:: python\n\n    image = Image(data)\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``data``    numpy array The image data.\n**RETURNS** ``Image``   The newly constructed object.\n=========== =========== ===========\n\n``CLASSMETHOD`` Image.blank\n---------------------------\n\nCreate a blank image of the specified dimensions.\n\n.. code:: python\n\n    image = Image.blank(width, height, colors)\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``width``   int         The image width, in pixels.\n``height``  int         The image height, in pixels.\n``colors``  int         The number of color channels (usually ``3``).\n**RETURNS** ``Image``   The newly constructed object.\n=========== =========== ===========\n\n``CLASSMETHOD`` Image.load\n--------------------------\n\nLoad an image from a path to a JPEG file, of the specified dimensions.\n\n.. code:: python\n\n    image = Image.load(path, width, height, colors)\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``path``    unicode     The path to the image file.\n``width``   int         The image width, in pixels.\n``height``  int         The image height, in pixels.\n``colors``  int         The number of color channels (usually ``3``).\n**RETURNS** ``Image``   The newly constructed object.\n=========== =========== ===========\n\n``CLASSMETHOD`` Image.from_bytes\n--------------------------------\n\nRead an image from a byte-string, which should be the contents of a JPEG file.\n\n.. 
code:: python\n\n    image = Image.from_bytes(bytes_data)\n\n============== =========== ===========\nArgument       Type        Description\n============== =========== ===========\n``bytes_data`` bytes       The image contents.\n**RETURNS**    ``Image``   The newly constructed object.\n============== =========== ===========\n\n----\n\n🌓 BoxLabels\n============\n\nData container for labelled bounding boxes for a single image. Wraps an array\nof DarkNet's ``box_label`` struct.\n\n``METHOD`` BoxLabels.__init__\n-----------------------------\n\nLabelled box annotations for a single image, used to update the model. ``ids``\nshould be a 1d numpy array of dtype int32, indicating the correct class IDs of\nthe objects. ``boxes`` should be a 2d array of dtype float32, and shape\n``(len(ids), 4)``. The 4 columns of the boxes should provide the **relative**\n``x, y, width, height`` of the bounding box, where ``x`` and ``y`` are the\ncoordinates of the centre, relative to the image size, and ``width`` and\n``height`` are the relative dimensions of the box.\n\n.. code:: python\n\n    box_labels = BoxLabels(ids, boxes)\n\n============== ============= ===========\nArgument       Type          Description\n============== ============= ===========\n``ids``        numpy array   The class IDs of the objects.\n``boxes``      numpy array   The boxes providing the relative ``x, y, width, height`` of the bounding box.\n**RETURNS**    ``BoxLabels`` The newly constructed object.\n============== ============= ===========\n\n``CLASSMETHOD`` BoxLabels.load\n------------------------------\n\nLoad annotations for a single image from a text file. Each box should be\ndescribed on a single line, in the format ``class_id x y width height``.\n\n.. 
code:: python\n\n    box_labels = BoxLabels.load(path)\n\n============== ============= ===========\nArgument       Type          Description\n============== ============= ===========\n``path``       unicode       The path to load from.\n**RETURNS**    ``BoxLabels`` The newly constructed object.\n============== ============= ===========\n"
  },
  {
    "path": "bin/cythonize.py",
    "content": "#!/usr/bin/env python\n\"\"\" cythonize\n\nCythonize pyx files into C files as needed.\n\nUsage: cythonize [root_dir]\n\nDefault [root_dir] is 'lightnet'.\n\nChecks pyx files to see if they have been changed relative to their\ncorresponding C files.  If they have, then runs cython on these files to\nrecreate the C files.\n\nThe script thinks that the pyx files have changed relative to the C files\nby comparing hashes stored in a database file.\n\nSimple script to invoke Cython (and Tempita) on all .pyx (.pyx.in)\nfiles; while waiting for a proper build system. Uses file hashes to\nfigure out if rebuild is needed.\n\nFor now, this script should be run by developers when changing Cython files\nonly, and the resulting C files checked in, so that end-users (and Python-only\ndevelopers) do not get the Cython/Tempita dependencies.\n\nOriginally written by Dag Sverre Seljebotn, and copied here from:\n\nhttps://raw.github.com/dagss/private-scipy-refactor/cythonize/cythonize.py\n\nNote: this script does not check any of the dependent C libraries; it only\noperates on the Cython .pyx files.\n\"\"\"\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport re\nimport sys\nimport hashlib\nimport subprocess\n\nHASH_FILE = 'cythonize.dat'\nDEFAULT_ROOT = 'lightnet'\nVENDOR = 'Explosion'\n\n# WindowsError is not defined on unix systems\ntry:\n    WindowsError\nexcept NameError:\n    WindowsError = None\n\n#\n# Rules\n#\ndef process_pyx(fromfile, tofile):\n    try:\n        from Cython.Compiler.Version import version as cython_version\n        from distutils.version import LooseVersion\n        if LooseVersion(cython_version) < LooseVersion('0.19'):\n            raise Exception('Building %s requires Cython >= 0.19' % VENDOR)\n\n    except ImportError:\n        pass\n\n    flags = ['--fast-fail']\n    if tofile.endswith('.cpp'):\n        flags += ['--cplus']\n\n    try:\n        try:\n            r = subprocess.call(['cython'] + flags 
+ [\"-o\", tofile, fromfile])\n            if r != 0:\n                raise Exception('Cython failed')\n        except OSError:\n            # There are ways of installing Cython that don't result in a cython\n            # executable on the path, see gh-2397.\n            r = subprocess.call([sys.executable, '-c',\n                                 'import sys; from Cython.Compiler.Main import '\n                                 'setuptools_main as main; sys.exit(main())'] + flags +\n                                 [\"-o\", tofile, fromfile])\n            if r != 0:\n                raise Exception('Cython failed')\n    except OSError:\n        raise OSError('Cython needs to be installed')\n\ndef process_tempita_pyx(fromfile, tofile):\n    try:\n        try:\n            from Cython import Tempita as tempita\n        except ImportError:\n            import tempita\n    except ImportError:\n        raise Exception('Building %s requires Tempita: '\n                        'pip install --user Tempita' % VENDOR)\n    with open(fromfile, \"r\") as f:\n        tmpl = f.read()\n    pyxcontent = tempita.sub(tmpl)\n    assert fromfile.endswith('.pyx.in')\n    pyxfile = fromfile[:-len('.pyx.in')] + '.pyx'\n    with open(pyxfile, \"w\") as f:\n        f.write(pyxcontent)\n    process_pyx(pyxfile, tofile)\n\nrules = {\n    # fromext : function\n    '.pyx' : process_pyx,\n    '.pyx.in' : process_tempita_pyx\n    }\n#\n# Hash db\n#\ndef load_hashes(filename):\n    # Return { filename : (sha1 of input, sha1 of output) }\n    if os.path.isfile(filename):\n        hashes = {}\n        with open(filename, 'r') as f:\n            for line in f:\n                filename, inhash, outhash = line.split()\n                hashes[filename] = (inhash, outhash)\n    else:\n        hashes = {}\n    return hashes\n\ndef save_hashes(hash_db, filename):\n    with open(filename, 'w') as f:\n        for key, value in sorted(hash_db.items()):\n            f.write(\"%s %s %s\\n\" % (key, 
value[0], value[1]))\n\ndef sha1_of_file(filename):\n    h = hashlib.sha1()\n    with open(filename, \"rb\") as f:\n        h.update(f.read())\n    return h.hexdigest()\n\n#\n# Main program\n#\n\ndef normpath(path):\n    path = path.replace(os.sep, '/')\n    if path.startswith('./'):\n        path = path[2:]\n    return path\n\ndef get_hash(frompath, topath):\n    from_hash = sha1_of_file(frompath)\n    to_hash = sha1_of_file(topath) if os.path.exists(topath) else None\n    return (from_hash, to_hash)\n\ndef process(path, fromfile, tofile, processor_function, hash_db):\n    fullfrompath = os.path.join(path, fromfile)\n    fulltopath = os.path.join(path, tofile)\n    current_hash = get_hash(fullfrompath, fulltopath)\n    if current_hash == hash_db.get(normpath(fullfrompath), None):\n        print('%s has not changed' % fullfrompath)\n        return\n\n    orig_cwd = os.getcwd()\n    try:\n        os.chdir(path)\n        print('Processing %s' % fullfrompath)\n        processor_function(fromfile, tofile)\n    finally:\n        os.chdir(orig_cwd)\n    # changed target file, recompute hash\n    current_hash = get_hash(fullfrompath, fulltopath)\n    # store hash in db\n    hash_db[normpath(fullfrompath)] = current_hash\n\n\ndef find_process_files(root_dir):\n    hash_db = load_hashes(HASH_FILE)\n    for cur_dir, dirs, files in os.walk(root_dir):\n        for filename in files:\n            in_file = os.path.join(cur_dir, filename + \".in\")\n            if filename.endswith('.pyx') and os.path.isfile(in_file):\n                continue\n            for fromext, function in rules.items():\n                if filename.endswith(fromext):\n                    with open(os.path.join(cur_dir, filename), 'rb') as f:\n                         data = f.read()\n                         m = re.search(br\"^\\s*#\\s*distutils:\\s*language\\s*=\\s*c\\+\\+\\s*$\", data, re.I|re.M)\n                         if m:\n                             toext = \".cpp\"\n                         
else:\n                             toext = \".c\"\n                    fromfile = filename\n                    tofile = filename[:-len(fromext)] + toext\n                    process(cur_dir, fromfile, tofile, function, hash_db)\n                    save_hashes(hash_db, HASH_FILE)\n\ndef main():\n    try:\n        root_dir = sys.argv[1]\n    except IndexError:\n        root_dir = DEFAULT_ROOT\n    find_process_files(root_dir)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "bin/train.py",
    "content": "from lightnet.lightnet import train\nimport plac\nfrom pathlib import Path\n\ntry:\n    unicode\nexcept NameError:\n    unicode = str\n\ndef path2bytes(loc):\n    return unicode(Path(loc).resolve()).encode('utf8')\n\ndef main(cfg_loc, weight_loc, images_loc):\n    train(path2bytes(cfg_loc), path2bytes(weight_loc),\n          path2bytes(images_loc), path2bytes('/tmp/yolo'))\n\nif __name__ == '__main__':\n    plac.call(main)\n"
  },
  {
    "path": "lightnet/__init__.pxd",
    "content": ""
  },
  {
    "path": "lightnet/__init__.py",
    "content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nfrom .lightnet import Network, Image, BoxLabels\nfrom .about import __version__\n\n\ndef load(name, path=None):\n    return Network.load(name, path=path)\n"
  },
  {
    "path": "lightnet/__main__.py",
    "content": "# coding: utf8\nfrom __future__ import print_function\n# NB! This breaks in plac on Python 2!!\n# from __future__ import unicode_literals\n\n\nif __name__ == '__main__':\n    import plac\n    import sys\n    try:\n        from lightnet.cli import download\n    except ImportError:\n        from cli import download\n\n    commands = {\n        'download': download,\n    }\n    if len(sys.argv) == 1:\n        print(\"Available commands: %s\" % ', '.join(commands))\n        sys.exit(1)\n    command = sys.argv.pop(1)\n    sys.argv[0] = 'lightnet %s' % command\n    if command in commands:\n        plac.call(commands[command])\n    else:\n        print(\"Unknown command: %s\" % command)\n        print(\"Available: %s\" % ', '.join(commands))\n        sys.exit(1)\n"
  },
  {
    "path": "lightnet/_darknet/Makefile",
    "content": "GPU=0\nCUDNN=0\nOPENCV=0\nOPENMP=0\nDEBUG=0\n\nARCH= -gencode arch=compute_30,code=sm_30 \\\n      -gencode arch=compute_35,code=sm_35 \\\n      -gencode arch=compute_50,code=[sm_50,compute_50] \\\n      -gencode arch=compute_52,code=[sm_52,compute_52]\n#      -gencode arch=compute_20,code=[sm_20,sm_21] \\ This one is deprecated?\n\n# This is what I use, uncomment if you know your arch and want to specify\n# ARCH= -gencode arch=compute_52,code=compute_52\n\nVPATH=./\nSLIB=libdarknet.so\nALIB=libdarknet.a\nEXEC=darknet\nOBJDIR=./obj/\n\nCC=gcc\nNVCC=nvcc\nAR=ar\nARFLAGS=rcs\nOPTS=-Ofast\nLDFLAGS= -lm -pthread\nCOMMON= -I.\nCFLAGS=-Wall -Wno-unknown-pragmas -Wfatal-errors -fPIC\n\nifeq ($(OPENMP), 1)\nCFLAGS+= -fopenmp\nendif\n\nifeq ($(DEBUG), 1)\nOPTS=-O0 -g\nendif\n\nCFLAGS+=$(OPTS)\n\nifeq ($(OPENCV), 1)\nCOMMON+= -DOPENCV\nCFLAGS+= -DOPENCV\nLDFLAGS+= `pkg-config --libs opencv`\nCOMMON+= `pkg-config --cflags opencv`\nendif\n\nifeq ($(GPU), 1)\nCOMMON+= -DGPU -I/usr/local/cuda/include/\nCFLAGS+= -DGPU\nLDFLAGS+= -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand\nendif\n\nifeq ($(CUDNN), 1)\nCOMMON+= -DCUDNN\nCFLAGS+= -DCUDNN\nLDFLAGS+= -lcudnn\nendif\n\nOBJ=gemm.o utils.o cuda.o deconvolutional_layer.o convolutional_layer.o list.o image.o activations.o im2col.o col2im.o blas.o crop_layer.o dropout_layer.o maxpool_layer.o softmax_layer.o data.o matrix.o network.o connected_layer.o cost_layer.o parser.o option_list.o detection_layer.o route_layer.o box.o normalization_layer.o avgpool_layer.o layer.o local_layer.o shortcut_layer.o activation_layer.o rnn_layer.o gru_layer.o crnn_layer.o demo.o batchnorm_layer.o region_layer.o reorg_layer.o tree.o  lstm_layer.o\nEXECOBJA=captcha.o lsd.o super.o art.o tag.o cifar.o go.o rnn.o segmenter.o regressor.o classifier.o coco.o yolo.o detector.o nightmare.o attention.o darknet.o\nifeq ($(GPU), 1)\nLDFLAGS+= -lstdc++\nOBJ+=convolutional_kernels.o deconvolutional_kernels.o activation_kernels.o 
im2col_kernels.o col2im_kernels.o blas_kernels.o crop_layer_kernels.o dropout_layer_kernels.o maxpool_layer_kernels.o avgpool_layer_kernels.o\nendif\n\nEXECOBJ = $(addprefix $(OBJDIR), $(EXECOBJA))\nOBJS = $(addprefix $(OBJDIR), $(OBJ))\nDEPS = $(wildcard ./*.h) Makefile ./darknet.h\n\n#all: obj backup results $(SLIB) $(ALIB) $(EXEC)\nall: obj $(ALIB)\n\n\n$(EXEC): $(EXECOBJ) $(ALIB)\n\t$(CC) $(COMMON) $(CFLAGS) $^ -o $@ $(LDFLAGS) $(ALIB)\n\n$(ALIB): $(OBJS)\n\t$(AR) $(ARFLAGS) $@ $^\n\n$(SLIB): $(OBJS)\n\t$(CC) $(CFLAGS) -shared $^ -o $@ $(LDFLAGS)\n\n$(OBJDIR)%.o: %.c $(DEPS)\n\t$(CC) $(COMMON) $(CFLAGS) -c $< -o $@\n\n$(OBJDIR)%.o: %.cu $(DEPS)\n\t$(NVCC) $(ARCH) $(COMMON) --compiler-options \"$(CFLAGS)\" -c $< -o $@\n\nobj:\n\tmkdir -p obj\nbackup:\n\tmkdir -p backup\nresults:\n\tmkdir -p results\n\n.PHONY: clean\n\nclean:\n\trm -rf $(OBJS) $(SLIB) $(ALIB) $(EXEC) $(EXECOBJ)\n\n"
  },
  {
    "path": "lightnet/_darknet/activation_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"activations.h\"\n#include \"cuda.h\"\n}\n\n\n__device__ float lhtan_activate_kernel(float x)\n{\n    if(x < 0) return .001f*x;\n    if(x > 1) return .001f*(x-1.f) + 1.f;\n    return x;\n}\n__device__ float lhtan_gradient_kernel(float x)\n{\n    if(x > 0 && x < 1) return 1;\n    return .001;\n}\n\n__device__ float hardtan_activate_kernel(float x)\n{\n    if (x < -1) return -1;\n    if (x > 1) return 1;\n    return x;\n}\n__device__ float linear_activate_kernel(float x){return x;}\n__device__ float logistic_activate_kernel(float x){return 1.f/(1.f + expf(-x));}\n__device__ float loggy_activate_kernel(float x){return 2.f/(1.f + expf(-x)) - 1;}\n__device__ float relu_activate_kernel(float x){return x*(x>0);}\n__device__ float elu_activate_kernel(float x){return (x >= 0)*x + (x < 0)*(expf(x)-1);}\n__device__ float relie_activate_kernel(float x){return (x>0) ? x : .01f*x;}\n__device__ float ramp_activate_kernel(float x){return x*(x>0)+.1f*x;}\n__device__ float leaky_activate_kernel(float x){return (x>0) ? 
x : .1f*x;}\n__device__ float tanh_activate_kernel(float x){return (2.f/(1 + expf(-2*x)) - 1);}\n__device__ float plse_activate_kernel(float x)\n{\n    if(x < -4) return .01f * (x + 4);\n    if(x > 4)  return .01f * (x - 4) + 1;\n    return .125f*x + .5f;\n}\n__device__ float stair_activate_kernel(float x)\n{\n    int n = floorf(x);\n    if (n%2 == 0) return floorf(x/2);\n    else return (x - n) + floorf(x/2);\n}\n \n\n__device__ float hardtan_gradient_kernel(float x)\n{\n    if (x > -1 && x < 1) return 1;\n    return 0;\n}\n__device__ float linear_gradient_kernel(float x){return 1;}\n__device__ float logistic_gradient_kernel(float x){return (1-x)*x;}\n__device__ float loggy_gradient_kernel(float x)\n{\n    float y = (x+1)/2;\n    return 2*(1-y)*y;\n}\n__device__ float relu_gradient_kernel(float x){return (x>0);}\n__device__ float elu_gradient_kernel(float x){return (x >= 0) + (x < 0)*(x + 1);}\n__device__ float relie_gradient_kernel(float x){return (x>0) ? 1 : .01f;}\n__device__ float ramp_gradient_kernel(float x){return (x>0)+.1f;}\n__device__ float leaky_gradient_kernel(float x){return (x>0) ? 1 : .1f;}\n__device__ float tanh_gradient_kernel(float x){return 1-x*x;}\n__device__ float plse_gradient_kernel(float x){return (x < 0 || x > 1) ? 
.01f : .125f;}\n__device__ float stair_gradient_kernel(float x)\n{\n    if (floorf(x) == x) return 0;\n    return 1;\n}\n\n__device__ float activate_kernel(float x, ACTIVATION a)\n{\n    switch(a){\n        case LINEAR:\n            return linear_activate_kernel(x);\n        case LOGISTIC:\n            return logistic_activate_kernel(x);\n        case LOGGY:\n            return loggy_activate_kernel(x);\n        case RELU:\n            return relu_activate_kernel(x);\n        case ELU:\n            return elu_activate_kernel(x);\n        case RELIE:\n            return relie_activate_kernel(x);\n        case RAMP:\n            return ramp_activate_kernel(x);\n        case LEAKY:\n            return leaky_activate_kernel(x);\n        case TANH:\n            return tanh_activate_kernel(x);\n        case PLSE:\n            return plse_activate_kernel(x);\n        case STAIR:\n            return stair_activate_kernel(x);\n        case HARDTAN:\n            return hardtan_activate_kernel(x);\n        case LHTAN:\n            return lhtan_activate_kernel(x);\n    }\n    return 0;\n}\n\n__device__ float gradient_kernel(float x, ACTIVATION a)\n{\n    switch(a){\n        case LINEAR:\n            return linear_gradient_kernel(x);\n        case LOGISTIC:\n            return logistic_gradient_kernel(x);\n        case LOGGY:\n            return loggy_gradient_kernel(x);\n        case RELU:\n            return relu_gradient_kernel(x);\n        case ELU:\n            return elu_gradient_kernel(x);\n        case RELIE:\n            return relie_gradient_kernel(x);\n        case RAMP:\n            return ramp_gradient_kernel(x);\n        case LEAKY:\n            return leaky_gradient_kernel(x);\n        case TANH:\n            return tanh_gradient_kernel(x);\n        case PLSE:\n            return plse_gradient_kernel(x);\n        case STAIR:\n            return stair_gradient_kernel(x);\n        case HARDTAN:\n            return hardtan_gradient_kernel(x);\n        case LHTAN:\n  
          return lhtan_gradient_kernel(x);\n    }\n    return 0;\n}\n\n__global__ void binary_gradient_array_kernel(float *x, float *dy, int n, int s, BINARY_ACTIVATION a, float *dx)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    int i = id % s;\n    int b = id / s;\n    if(id < n) {\n        /* load the paired inputs only after the bounds check to avoid an out-of-bounds read */\n        float x1 = x[b*s + i];\n        float x2 = x[b*s + s/2 + i];\n        float de = dy[id];\n        dx[b*s + i] = x2*de;\n        dx[b*s + s/2 + i] = x1*de;\n    }\n}\n\nextern \"C\" void binary_gradient_array_gpu(float *x, float *dx, int n, int size, BINARY_ACTIVATION a, float *y) \n{\n    binary_gradient_array_kernel<<<cuda_gridsize(n/2), BLOCK>>>(x, dx, n/2, size, a, y);\n    check_error(cudaPeekAtLastError());\n}\n__global__ void binary_activate_array_kernel(float *x, int n, int s, BINARY_ACTIVATION a, float *y)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    int i = id % s;\n    int b = id / s;\n    if(id < n) {\n        /* load the paired inputs only after the bounds check to avoid an out-of-bounds read */\n        float x1 = x[b*s + i];\n        float x2 = x[b*s + s/2 + i];\n        y[id] = x1*x2;\n    }\n}\n\nextern \"C\" void binary_activate_array_gpu(float *x, int n, int size, BINARY_ACTIVATION a, float *y) \n{\n    binary_activate_array_kernel<<<cuda_gridsize(n/2), BLOCK>>>(x, n/2, size, a, y);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void activate_array_kernel(float *x, int n, ACTIVATION a)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n) x[i] = activate_kernel(x[i], a);\n}\n\n__global__ void gradient_array_kernel(float *x, int n, ACTIVATION a, float *delta)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n) delta[i] *= gradient_kernel(x[i], a);\n}\n\nextern \"C\" void activate_array_gpu(float *x, int n, ACTIVATION a) \n{\n    activate_array_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, a);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void gradient_array_gpu(float *x, int n, ACTIVATION a, float *delta) 
\n{\n    gradient_array_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, a, delta);\n    check_error(cudaPeekAtLastError());\n}\n"
  },
  {
    "path": "lightnet/_darknet/activation_layer.c",
    "content": "#include \"activation_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nlayer make_activation_layer(int batch, int inputs, ACTIVATION activation)\n{\n    layer l = {0};\n    l.type = ACTIVE;\n\n    l.inputs = inputs;\n    l.outputs = inputs;\n    l.batch=batch;\n\n    l.output = calloc(batch*inputs, sizeof(float));\n    l.delta = calloc(batch*inputs, sizeof(float));\n\n    l.forward = forward_activation_layer;\n    l.backward = backward_activation_layer;\n#ifdef GPU\n    l.forward_gpu = forward_activation_layer_gpu;\n    l.backward_gpu = backward_activation_layer_gpu;\n\n    l.output_gpu = cuda_make_array(l.output, inputs*batch);\n    l.delta_gpu = cuda_make_array(l.delta, inputs*batch);\n#endif\n    l.activation = activation;\n    fprintf(stderr, \"Activation Layer: %d inputs\\n\", inputs);\n    return l;\n}\n\nvoid forward_activation_layer(layer l, network net)\n{\n    copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);\n    activate_array(l.output, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_activation_layer(layer l, network net)\n{\n    gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);\n    copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);\n}\n\n#ifdef GPU\n\nvoid forward_activation_layer_gpu(layer l, network net)\n{\n    copy_gpu(l.outputs*l.batch, net.input_gpu, 1, l.output_gpu, 1);\n    activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_activation_layer_gpu(layer l, network net)\n{\n    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);\n    copy_gpu(l.outputs*l.batch, l.delta_gpu, 1, net.delta_gpu, 1);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/activation_layer.h",
    "content": "#ifndef ACTIVATION_LAYER_H\n#define ACTIVATION_LAYER_H\n\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_activation_layer(int batch, int inputs, ACTIVATION activation);\n\nvoid forward_activation_layer(layer l, network net);\nvoid backward_activation_layer(layer l, network net);\n\n#ifdef GPU\nvoid forward_activation_layer_gpu(layer l, network net);\nvoid backward_activation_layer_gpu(layer l, network net);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/activations.c",
    "content": "#include \"activations.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nchar *get_activation_string(ACTIVATION a)\n{\n    switch(a){\n        case LOGISTIC:\n            return \"logistic\";\n        case LOGGY:\n            return \"loggy\";\n        case RELU:\n            return \"relu\";\n        case ELU:\n            return \"elu\";\n        case RELIE:\n            return \"relie\";\n        case RAMP:\n            return \"ramp\";\n        case LINEAR:\n            return \"linear\";\n        case TANH:\n            return \"tanh\";\n        case PLSE:\n            return \"plse\";\n        case LEAKY:\n            return \"leaky\";\n        case STAIR:\n            return \"stair\";\n        case HARDTAN:\n            return \"hardtan\";\n        case LHTAN:\n            return \"lhtan\";\n        default:\n            break;\n    }\n    return \"relu\";\n}\n\nACTIVATION get_activation(char *s)\n{\n    if (strcmp(s, \"logistic\")==0) return LOGISTIC;\n    if (strcmp(s, \"loggy\")==0) return LOGGY;\n    if (strcmp(s, \"relu\")==0) return RELU;\n    if (strcmp(s, \"elu\")==0) return ELU;\n    if (strcmp(s, \"relie\")==0) return RELIE;\n    if (strcmp(s, \"plse\")==0) return PLSE;\n    if (strcmp(s, \"hardtan\")==0) return HARDTAN;\n    if (strcmp(s, \"lhtan\")==0) return LHTAN;\n    if (strcmp(s, \"linear\")==0) return LINEAR;\n    if (strcmp(s, \"ramp\")==0) return RAMP;\n    if (strcmp(s, \"leaky\")==0) return LEAKY;\n    if (strcmp(s, \"tanh\")==0) return TANH;\n    if (strcmp(s, \"stair\")==0) return STAIR;\n    fprintf(stderr, \"Couldn't find activation function %s, going with ReLU\\n\", s);\n    return RELU;\n}\n\nfloat activate(float x, ACTIVATION a)\n{\n    switch(a){\n        case LINEAR:\n            return linear_activate(x);\n        case LOGISTIC:\n            return logistic_activate(x);\n        case LOGGY:\n            return loggy_activate(x);\n        case RELU:\n            return 
relu_activate(x);\n        case ELU:\n            return elu_activate(x);\n        case RELIE:\n            return relie_activate(x);\n        case RAMP:\n            return ramp_activate(x);\n        case LEAKY:\n            return leaky_activate(x);\n        case TANH:\n            return tanh_activate(x);\n        case PLSE:\n            return plse_activate(x);\n        case STAIR:\n            return stair_activate(x);\n        case HARDTAN:\n            return hardtan_activate(x);\n        case LHTAN:\n            return lhtan_activate(x);\n    }\n    return 0;\n}\n\nvoid activate_array(float *x, const int n, const ACTIVATION a)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        x[i] = activate(x[i], a);\n    }\n}\n\nfloat gradient(float x, ACTIVATION a)\n{\n    switch(a){\n        case LINEAR:\n            return linear_gradient(x);\n        case LOGISTIC:\n            return logistic_gradient(x);\n        case LOGGY:\n            return loggy_gradient(x);\n        case RELU:\n            return relu_gradient(x);\n        case ELU:\n            return elu_gradient(x);\n        case RELIE:\n            return relie_gradient(x);\n        case RAMP:\n            return ramp_gradient(x);\n        case LEAKY:\n            return leaky_gradient(x);\n        case TANH:\n            return tanh_gradient(x);\n        case PLSE:\n            return plse_gradient(x);\n        case STAIR:\n            return stair_gradient(x);\n        case HARDTAN:\n            return hardtan_gradient(x);\n        case LHTAN:\n            return lhtan_gradient(x);\n    }\n    return 0;\n}\n\nvoid gradient_array(const float *x, const int n, const ACTIVATION a, float *delta)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        delta[i] *= gradient(x[i], a);\n    }\n} \n\n"
  },
  {
    "path": "lightnet/_darknet/activations.h",
    "content": "#ifndef ACTIVATIONS_H\n#define ACTIVATIONS_H\n#include \"darknet.h\"\n#include \"cuda.h\"\n#include \"math.h\"\n\nACTIVATION get_activation(char *s);\n\nchar *get_activation_string(ACTIVATION a);\nfloat activate(float x, ACTIVATION a);\nfloat gradient(float x, ACTIVATION a);\nvoid gradient_array(const float *x, const int n, const ACTIVATION a, float *delta);\nvoid activate_array(float *x, const int n, const ACTIVATION a);\n#ifdef GPU\nvoid activate_array_gpu(float *x, int n, ACTIVATION a);\nvoid gradient_array_gpu(float *x, int n, ACTIVATION a, float *delta);\n#endif\n\nstatic inline float stair_activate(float x)\n{\n    int n = floor(x);\n    if (n%2 == 0) return floor(x/2.);\n    else return (x - n) + floor(x/2.);\n}\nstatic inline float hardtan_activate(float x)\n{\n    if (x < -1) return -1;\n    if (x > 1) return 1;\n    return x;\n}\nstatic inline float linear_activate(float x){return x;}\nstatic inline float logistic_activate(float x){return 1./(1. + exp(-x));}\nstatic inline float loggy_activate(float x){return 2./(1. + exp(-x)) - 1;}\nstatic inline float relu_activate(float x){return x*(x>0);}\nstatic inline float elu_activate(float x){return (x >= 0)*x + (x < 0)*(exp(x)-1);}\nstatic inline float relie_activate(float x){return (x>0) ? x : .01*x;}\nstatic inline float ramp_activate(float x){return x*(x>0)+.1*x;}\nstatic inline float leaky_activate(float x){return (x>0) ? 
x : .1*x;}\nstatic inline float tanh_activate(float x){return (exp(2*x)-1)/(exp(2*x)+1);}\nstatic inline float plse_activate(float x)\n{\n    if(x < -4) return .01 * (x + 4);\n    if(x > 4)  return .01 * (x - 4) + 1;\n    return .125*x + .5;\n}\n\nstatic inline float lhtan_activate(float x)\n{\n    if(x < 0) return .001*x;\n    if(x > 1) return .001*(x-1) + 1;\n    return x;\n}\nstatic inline float lhtan_gradient(float x)\n{\n    if(x > 0 && x < 1) return 1;\n    return .001;\n}\n\nstatic inline float hardtan_gradient(float x)\n{\n    if (x > -1 && x < 1) return 1;\n    return 0;\n}\nstatic inline float linear_gradient(float x){return 1;}\nstatic inline float logistic_gradient(float x){return (1-x)*x;}\nstatic inline float loggy_gradient(float x)\n{\n    float y = (x+1.)/2.;\n    return 2*(1-y)*y;\n}\nstatic inline float stair_gradient(float x)\n{\n    if (floor(x) == x) return 0;\n    return 1;\n}\nstatic inline float relu_gradient(float x){return (x>0);}\nstatic inline float elu_gradient(float x){return (x >= 0) + (x < 0)*(x + 1);}\nstatic inline float relie_gradient(float x){return (x>0) ? 1 : .01;}\nstatic inline float ramp_gradient(float x){return (x>0)+.1;}\nstatic inline float leaky_gradient(float x){return (x>0) ? 1 : .1;}\nstatic inline float tanh_gradient(float x){return 1-x*x;}\nstatic inline float plse_gradient(float x){return (x < 0 || x > 1) ? .01 : .125;}\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/avgpool_layer.c",
    "content": "#include \"avgpool_layer.h\"\n#include \"cuda.h\"\n#include <stdio.h>\n\navgpool_layer make_avgpool_layer(int batch, int w, int h, int c)\n{\n    fprintf(stderr, \"avg                     %4d x%4d x%4d   ->  %4d\\n\",  w, h, c, c);\n    avgpool_layer l = {0};\n    l.type = AVGPOOL;\n    l.batch = batch;\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.out_w = 1;\n    l.out_h = 1;\n    l.out_c = c;\n    l.outputs = l.out_c;\n    l.inputs = h*w*c;\n    int output_size = l.outputs * batch;\n    l.output =  calloc(output_size, sizeof(float));\n    l.delta =   calloc(output_size, sizeof(float));\n    l.forward = forward_avgpool_layer;\n    l.backward = backward_avgpool_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_avgpool_layer_gpu;\n    l.backward_gpu = backward_avgpool_layer_gpu;\n    l.output_gpu  = cuda_make_array(l.output, output_size);\n    l.delta_gpu   = cuda_make_array(l.delta, output_size);\n    #endif\n    return l;\n}\n\nvoid resize_avgpool_layer(avgpool_layer *l, int w, int h)\n{\n    l->w = w;\n    l->h = h;\n    l->inputs = h*w*l->c;\n}\n\nvoid forward_avgpool_layer(const avgpool_layer l, network net)\n{\n    int b,i,k;\n\n    for(b = 0; b < l.batch; ++b){\n        for(k = 0; k < l.c; ++k){\n            int out_index = k + b*l.c;\n            l.output[out_index] = 0;\n            for(i = 0; i < l.h*l.w; ++i){\n                int in_index = i + l.h*l.w*(k + b*l.c);\n                l.output[out_index] += net.input[in_index];\n            }\n            l.output[out_index] /= l.h*l.w;\n        }\n    }\n}\n\nvoid backward_avgpool_layer(const avgpool_layer l, network net)\n{\n    int b,i,k;\n\n    for(b = 0; b < l.batch; ++b){\n        for(k = 0; k < l.c; ++k){\n            int out_index = k + b*l.c;\n            for(i = 0; i < l.h*l.w; ++i){\n                int in_index = i + l.h*l.w*(k + b*l.c);\n                net.delta[in_index] += l.delta[out_index] / (l.h*l.w);\n            }\n        }\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/avgpool_layer.h",
    "content": "#ifndef AVGPOOL_LAYER_H\n#define AVGPOOL_LAYER_H\n\n#include \"image.h\"\n#include \"cuda.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer avgpool_layer;\n\nimage get_avgpool_image(avgpool_layer l);\navgpool_layer make_avgpool_layer(int batch, int w, int h, int c);\nvoid resize_avgpool_layer(avgpool_layer *l, int w, int h);\nvoid forward_avgpool_layer(const avgpool_layer l, network net);\nvoid backward_avgpool_layer(const avgpool_layer l, network net);\n\n#ifdef GPU\nvoid forward_avgpool_layer_gpu(avgpool_layer l, network net);\nvoid backward_avgpool_layer_gpu(avgpool_layer l, network net);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/avgpool_layer_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"avgpool_layer.h\"\n#include \"cuda.h\"\n}\n\n__global__ void forward_avgpool_layer_kernel(int n, int w, int h, int c, float *input, float *output)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= n) return;\n\n    int k = id % c;\n    id /= c;\n    int b = id;\n\n    int i;\n    int out_index = (k + c*b);\n    output[out_index] = 0;\n    for(i = 0; i < w*h; ++i){\n        int in_index = i + h*w*(k + b*c);\n        output[out_index] += input[in_index];\n    }\n    output[out_index] /= w*h;\n}\n\n__global__ void backward_avgpool_layer_kernel(int n, int w, int h, int c, float *in_delta, float *out_delta)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= n) return;\n\n    int k = id % c;\n    id /= c;\n    int b = id;\n\n    int i;\n    int out_index = (k + c*b);\n    for(i = 0; i < w*h; ++i){\n        int in_index = i + h*w*(k + b*c);\n        in_delta[in_index] += out_delta[out_index] / (w*h);\n    }\n}\n\nextern \"C\" void forward_avgpool_layer_gpu(avgpool_layer layer, network net)\n{\n    size_t n = layer.c*layer.batch;\n\n    forward_avgpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.w, layer.h, layer.c, net.input_gpu, layer.output_gpu);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void backward_avgpool_layer_gpu(avgpool_layer layer, network net)\n{\n    size_t n = layer.c*layer.batch;\n\n    backward_avgpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.w, layer.h, layer.c, net.delta_gpu, layer.delta_gpu);\n    check_error(cudaPeekAtLastError());\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/batchnorm_layer.c",
    "content": "#include \"convolutional_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"blas.h\"\n#include <stdio.h>\n\nlayer make_batchnorm_layer(int batch, int w, int h, int c)\n{\n    fprintf(stderr, \"Batch Normalization Layer: %d x %d x %d image\\n\", w,h,c);\n    layer l = {0};\n    l.type = BATCHNORM;\n    l.batch = batch;\n    l.h = l.out_h = h;\n    l.w = l.out_w = w;\n    l.c = l.out_c = c;\n    l.output = calloc(h * w * c * batch, sizeof(float));\n    l.delta  = calloc(h * w * c * batch, sizeof(float));\n    l.inputs = w*h*c;\n    l.outputs = l.inputs;\n\n    l.scales = calloc(c, sizeof(float));\n    l.scale_updates = calloc(c, sizeof(float));\n    l.biases = calloc(c, sizeof(float));\n    l.bias_updates = calloc(c, sizeof(float));\n    int i;\n    for(i = 0; i < c; ++i){\n        l.scales[i] = 1;\n    }\n\n    l.mean = calloc(c, sizeof(float));\n    l.variance = calloc(c, sizeof(float));\n\n    l.rolling_mean = calloc(c, sizeof(float));\n    l.rolling_variance = calloc(c, sizeof(float));\n\n    l.forward = forward_batchnorm_layer;\n    l.backward = backward_batchnorm_layer;\n#ifdef GPU\n    l.forward_gpu = forward_batchnorm_layer_gpu;\n    l.backward_gpu = backward_batchnorm_layer_gpu;\n\n    l.output_gpu =  cuda_make_array(l.output, h * w * c * batch);\n    l.delta_gpu =   cuda_make_array(l.delta, h * w * c * batch);\n\n    l.biases_gpu = cuda_make_array(l.biases, c);\n    l.bias_updates_gpu = cuda_make_array(l.bias_updates, c);\n\n    l.scales_gpu = cuda_make_array(l.scales, c);\n    l.scale_updates_gpu = cuda_make_array(l.scale_updates, c);\n\n    l.mean_gpu = cuda_make_array(l.mean, c);\n    l.variance_gpu = cuda_make_array(l.variance, c);\n\n    l.rolling_mean_gpu = cuda_make_array(l.mean, c);\n    l.rolling_variance_gpu = cuda_make_array(l.variance, c);\n\n    l.mean_delta_gpu = cuda_make_array(l.mean, c);\n    l.variance_delta_gpu = cuda_make_array(l.variance, c);\n\n    l.x_gpu = cuda_make_array(l.output, l.batch*l.outputs);\n    
l.x_norm_gpu = cuda_make_array(l.output, l.batch*l.outputs);\n    #ifdef CUDNN\n    cudnnCreateTensorDescriptor(&l.normTensorDesc);\n    cudnnCreateTensorDescriptor(&l.dstTensorDesc);\n    cudnnSetTensor4dDescriptor(l.dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l.batch, l.out_c, l.out_h, l.out_w); \n    cudnnSetTensor4dDescriptor(l.normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l.out_c, 1, 1); \n\n    #endif\n#endif\n    return l;\n}\n\nvoid backward_scale_cpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates)\n{\n    int i,b,f;\n    for(f = 0; f < n; ++f){\n        float sum = 0;\n        for(b = 0; b < batch; ++b){\n            for(i = 0; i < size; ++i){\n                int index = i + size*(f + n*b);\n                sum += delta[index] * x_norm[index];\n            }\n        }\n        scale_updates[f] += sum;\n    }\n}\n\nvoid mean_delta_cpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta)\n{\n\n    int i,j,k;\n    for(i = 0; i < filters; ++i){\n        mean_delta[i] = 0;\n        for (j = 0; j < batch; ++j) {\n            for (k = 0; k < spatial; ++k) {\n                int index = j*filters*spatial + i*spatial + k;\n                mean_delta[i] += delta[index];\n            }\n        }\n        mean_delta[i] *= (-1./sqrt(variance[i] + .00001f));\n    }\n}\nvoid  variance_delta_cpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta)\n{\n\n    int i,j,k;\n    for(i = 0; i < filters; ++i){\n        variance_delta[i] = 0;\n        for(j = 0; j < batch; ++j){\n            for(k = 0; k < spatial; ++k){\n                int index = j*filters*spatial + i*spatial + k;\n                variance_delta[i] += delta[index]*(x[index] - mean[i]);\n            }\n        }\n        variance_delta[i] *= -.5 * pow(variance[i] + .00001f, (float)(-3./2.));\n    }\n}\nvoid normalize_delta_cpu(float *x, float *mean, float *variance, 
float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta)\n{\n    int f, j, k;\n    for(j = 0; j < batch; ++j){\n        for(f = 0; f < filters; ++f){\n            for(k = 0; k < spatial; ++k){\n                int index = j*filters*spatial + f*spatial + k;\n                delta[index] = delta[index] * 1./(sqrt(variance[f] + .00001f)) + variance_delta[f] * 2. * (x[index] - mean[f]) / (spatial * batch) + mean_delta[f]/(spatial*batch);\n            }\n        }\n    }\n}\n\nvoid resize_batchnorm_layer(layer *layer, int w, int h)\n{\n    fprintf(stderr, \"Not implemented\\n\");\n}\n\nvoid forward_batchnorm_layer(layer l, network net)\n{\n    if(l.type == BATCHNORM) copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);\n    copy_cpu(l.outputs*l.batch, l.output, 1, l.x, 1);\n    if(net.train){\n        mean_cpu(l.output, l.batch, l.out_c, l.out_h*l.out_w, l.mean);\n        variance_cpu(l.output, l.mean, l.batch, l.out_c, l.out_h*l.out_w, l.variance);\n\n        scal_cpu(l.out_c, .99, l.rolling_mean, 1);\n        axpy_cpu(l.out_c, .01, l.mean, 1, l.rolling_mean, 1);\n        scal_cpu(l.out_c, .99, l.rolling_variance, 1);\n        axpy_cpu(l.out_c, .01, l.variance, 1, l.rolling_variance, 1);\n\n        normalize_cpu(l.output, l.mean, l.variance, l.batch, l.out_c, l.out_h*l.out_w);   \n        copy_cpu(l.outputs*l.batch, l.output, 1, l.x_norm, 1);\n    } else {\n        normalize_cpu(l.output, l.rolling_mean, l.rolling_variance, l.batch, l.out_c, l.out_h*l.out_w);\n    }\n    scale_bias(l.output, l.scales, l.batch, l.out_c, l.out_h*l.out_w);\n    add_bias(l.output, l.biases, l.batch, l.out_c, l.out_h*l.out_w);\n}\n\nvoid backward_batchnorm_layer(layer l, network net)\n{\n    if(!net.train){\n        l.mean = l.rolling_mean;\n        l.variance = l.rolling_variance;\n    }\n    backward_bias(l.bias_updates, l.delta, l.batch, l.out_c, l.out_w*l.out_h);\n    backward_scale_cpu(l.x_norm, l.delta, l.batch, l.out_c, l.out_w*l.out_h, 
l.scale_updates);\n\n    scale_bias(l.delta, l.scales, l.batch, l.out_c, l.out_h*l.out_w);\n\n    mean_delta_cpu(l.delta, l.variance, l.batch, l.out_c, l.out_w*l.out_h, l.mean_delta);\n    variance_delta_cpu(l.x, l.delta, l.mean, l.variance, l.batch, l.out_c, l.out_w*l.out_h, l.variance_delta);\n    normalize_delta_cpu(l.x, l.mean, l.variance, l.mean_delta, l.variance_delta, l.batch, l.out_c, l.out_w*l.out_h, l.delta);\n    if(l.type == BATCHNORM) copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);\n}\n\n#ifdef GPU\n\nvoid pull_batchnorm_layer(layer l)\n{\n    cuda_pull_array(l.scales_gpu, l.scales, l.c);\n    cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.c);\n    cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.c);\n}\nvoid push_batchnorm_layer(layer l)\n{\n    cuda_push_array(l.scales_gpu, l.scales, l.c);\n    cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.c);\n    cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.c);\n}\n\nvoid forward_batchnorm_layer_gpu(layer l, network net)\n{\n    if(l.type == BATCHNORM) copy_gpu(l.outputs*l.batch, net.input_gpu, 1, l.output_gpu, 1);\n    copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.x_gpu, 1);\n    if (net.train) {\n#ifdef CUDNN\n        float one = 1;\n        float zero = 0;\n        cudnnBatchNormalizationForwardTraining(cudnn_handle(),\n                CUDNN_BATCHNORM_SPATIAL,\n                &one,\n                &zero,\n                l.dstTensorDesc,\n                l.x_gpu,\n                l.dstTensorDesc,\n                l.output_gpu,\n                l.normTensorDesc,\n                l.scales_gpu,\n                l.biases_gpu,\n                .01,\n                l.rolling_mean_gpu,\n                l.rolling_variance_gpu,\n                .00001,\n                l.mean_gpu,\n                l.variance_gpu);\n#else\n        fast_mean_gpu(l.output_gpu, l.batch, l.out_c, l.out_h*l.out_w, l.mean_gpu);\n        fast_variance_gpu(l.output_gpu, l.mean_gpu, l.batch, 
l.out_c, l.out_h*l.out_w, l.variance_gpu);\n\n        scal_gpu(l.out_c, .99, l.rolling_mean_gpu, 1);\n        axpy_gpu(l.out_c, .01, l.mean_gpu, 1, l.rolling_mean_gpu, 1);\n        scal_gpu(l.out_c, .99, l.rolling_variance_gpu, 1);\n        axpy_gpu(l.out_c, .01, l.variance_gpu, 1, l.rolling_variance_gpu, 1);\n\n        copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.x_gpu, 1);\n        normalize_gpu(l.output_gpu, l.mean_gpu, l.variance_gpu, l.batch, l.out_c, l.out_h*l.out_w);\n        copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.x_norm_gpu, 1);\n\n        scale_bias_gpu(l.output_gpu, l.scales_gpu, l.batch, l.out_c, l.out_h*l.out_w);\n        add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.out_c, l.out_w*l.out_h);\n#endif\n    } else {\n        normalize_gpu(l.output_gpu, l.rolling_mean_gpu, l.rolling_variance_gpu, l.batch, l.out_c, l.out_h*l.out_w);\n        scale_bias_gpu(l.output_gpu, l.scales_gpu, l.batch, l.out_c, l.out_h*l.out_w);\n        add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.out_c, l.out_w*l.out_h);\n    }\n\n}\n\nvoid backward_batchnorm_layer_gpu(layer l, network net)\n{\n    if(!net.train){\n        l.mean_gpu = l.rolling_mean_gpu;\n        l.variance_gpu = l.rolling_variance_gpu;\n    }\n#ifdef CUDNN\n    float one = 1;\n    float zero = 0;\n    cudnnBatchNormalizationBackward(cudnn_handle(),\n            CUDNN_BATCHNORM_SPATIAL,\n            &one,\n            &zero,\n            &one,\n            &one,\n            l.dstTensorDesc,\n            l.x_gpu,\n            l.dstTensorDesc,\n            l.delta_gpu,\n            l.dstTensorDesc,\n            l.x_norm_gpu,\n            l.normTensorDesc,\n            l.scales_gpu,\n            l.scale_updates_gpu,\n            l.bias_updates_gpu,\n            .00001,\n            l.mean_gpu,\n            l.variance_gpu);\n    copy_gpu(l.outputs*l.batch, l.x_norm_gpu, 1, l.delta_gpu, 1);\n#else\n    backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.out_c, l.out_w*l.out_h);\n    
backward_scale_gpu(l.x_norm_gpu, l.delta_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.scale_updates_gpu);\n\n    scale_bias_gpu(l.delta_gpu, l.scales_gpu, l.batch, l.out_c, l.out_h*l.out_w);\n\n    fast_mean_delta_gpu(l.delta_gpu, l.variance_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.mean_delta_gpu);\n    fast_variance_delta_gpu(l.x_gpu, l.delta_gpu, l.mean_gpu, l.variance_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.variance_delta_gpu);\n    normalize_delta_gpu(l.x_gpu, l.mean_gpu, l.variance_gpu, l.mean_delta_gpu, l.variance_delta_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.delta_gpu);\n#endif\n    if(l.type == BATCHNORM) copy_gpu(l.outputs*l.batch, l.delta_gpu, 1, net.delta_gpu, 1);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/batchnorm_layer.h",
    "content": "#ifndef BATCHNORM_LAYER_H\n#define BATCHNORM_LAYER_H\n\n#include \"image.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_batchnorm_layer(int batch, int w, int h, int c);\nvoid forward_batchnorm_layer(layer l, network net);\nvoid backward_batchnorm_layer(layer l, network net);\n\n#ifdef GPU\nvoid forward_batchnorm_layer_gpu(layer l, network net);\nvoid backward_batchnorm_layer_gpu(layer l, network net);\nvoid pull_batchnorm_layer(layer l);\nvoid push_batchnorm_layer(layer l);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/blas.c",
    "content": "#include \"blas.h\"\n\n#include <math.h>\n#include <assert.h>\n#include <float.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\nvoid reorg_cpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out)\n{\n    int b,i,j,k;\n    int out_c = c/(stride*stride);\n\n    for(b = 0; b < batch; ++b){\n        for(k = 0; k < c; ++k){\n            for(j = 0; j < h; ++j){\n                for(i = 0; i < w; ++i){\n                    int in_index  = i + w*(j + h*(k + c*b));\n                    int c2 = k % out_c;\n                    int offset = k / out_c;\n                    int w2 = i*stride + offset % stride;\n                    int h2 = j*stride + offset / stride;\n                    int out_index = w2 + w*stride*(h2 + h*stride*(c2 + out_c*b));\n                    if(forward) out[out_index] = x[in_index];\n                    else out[in_index] = x[out_index];\n                }\n            }\n        }\n    }\n}\n\nvoid flatten(float *x, int size, int layers, int batch, int forward)\n{\n    float *swap = calloc(size*layers*batch, sizeof(float));\n    int i,c,b;\n    for(b = 0; b < batch; ++b){\n        for(c = 0; c < layers; ++c){\n            for(i = 0; i < size; ++i){\n                int i1 = b*layers*size + c*size + i;\n                int i2 = b*layers*size + i*layers + c;\n                if (forward) swap[i2] = x[i1];\n                else swap[i1] = x[i2];\n            }\n        }\n    }\n    memcpy(x, swap, size*layers*batch*sizeof(float));\n    free(swap);\n}\n\nvoid weighted_sum_cpu(float *a, float *b, float *s, int n, float *c)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        c[i] = s[i]*a[i] + (1-s[i])*(b ? 
b[i] : 0);\n    }\n}\n\nvoid weighted_delta_cpu(float *a, float *b, float *s, float *da, float *db, float *ds, int n, float *dc)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        if(da) da[i] += dc[i] * s[i];\n        if(db) db[i] += dc[i] * (1-s[i]);\n        ds[i] += dc[i] * (a[i] - b[i]);\n    }\n}\n\nvoid shortcut_cpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out)\n{\n    int stride = w1/w2;\n    int sample = w2/w1;\n    assert(stride == h1/h2);\n    assert(sample == h2/h1);\n    if(stride < 1) stride = 1;\n    if(sample < 1) sample = 1;\n    int minw = (w1 < w2) ? w1 : w2;\n    int minh = (h1 < h2) ? h1 : h2;\n    int minc = (c1 < c2) ? c1 : c2;\n\n    int i,j,k,b;\n    for(b = 0; b < batch; ++b){\n        for(k = 0; k < minc; ++k){\n            for(j = 0; j < minh; ++j){\n                for(i = 0; i < minw; ++i){\n                    int out_index = i*sample + w2*(j*sample + h2*(k + c2*b));\n                    int add_index = i*stride + w1*(j*stride + h1*(k + c1*b));\n                    out[out_index] += add[add_index];\n                }\n            }\n        }\n    }\n}\n\nvoid mean_cpu(float *x, int batch, int filters, int spatial, float *mean)\n{\n    float scale = 1./(batch * spatial);\n    int i,j,k;\n    for(i = 0; i < filters; ++i){\n        mean[i] = 0;\n        for(j = 0; j < batch; ++j){\n            for(k = 0; k < spatial; ++k){\n                int index = j*filters*spatial + i*spatial + k;\n                mean[i] += x[index];\n            }\n        }\n        mean[i] *= scale;\n    }\n}\n\nvoid variance_cpu(float *x, float *mean, int batch, int filters, int spatial, float *variance)\n{\n    float scale = 1./(batch * spatial - 1);\n    int i,j,k;\n    for(i = 0; i < filters; ++i){\n        variance[i] = 0;\n        for(j = 0; j < batch; ++j){\n            for(k = 0; k < spatial; ++k){\n                int index = j*filters*spatial + i*spatial + k;\n                variance[i] += pow((x[index] - 
mean[i]), 2);\n            }\n        }\n        variance[i] *= scale;\n    }\n}\n\nvoid normalize_cpu(float *x, float *mean, float *variance, int batch, int filters, int spatial)\n{\n    int b, f, i;\n    for(b = 0; b < batch; ++b){\n        for(f = 0; f < filters; ++f){\n            for(i = 0; i < spatial; ++i){\n                int index = b*filters*spatial + f*spatial + i;\n                x[index] = (x[index] - mean[f])/(sqrt(variance[f]) + .000001f);\n            }\n        }\n    }\n}\n\nvoid const_cpu(int N, float ALPHA, float *X, int INCX)\n{\n    int i;\n    for(i = 0; i < N; ++i) X[i*INCX] = ALPHA;\n}\n\nvoid mul_cpu(int N, float *X, int INCX, float *Y, int INCY)\n{\n    int i;\n    for(i = 0; i < N; ++i) Y[i*INCY] *= X[i*INCX];\n}\n\nvoid pow_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)\n{\n    int i;\n    for(i = 0; i < N; ++i) Y[i*INCY] = pow(X[i*INCX], ALPHA);\n}\n\nvoid axpy_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)\n{\n    int i;\n    for(i = 0; i < N; ++i) Y[i*INCY] += ALPHA*X[i*INCX];\n}\n\nvoid scal_cpu(int N, float ALPHA, float *X, int INCX)\n{\n    int i;\n    for(i = 0; i < N; ++i) X[i*INCX] *= ALPHA;\n}\n\nvoid fill_cpu(int N, float ALPHA, float *X, int INCX)\n{\n    int i;\n    for(i = 0; i < N; ++i) X[i*INCX] = ALPHA;\n}\n\nvoid deinter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT)\n{\n    int i, j;\n    int index = 0;\n    for(j = 0; j < B; ++j) {\n        for(i = 0; i < NX; ++i){\n            if(X) X[j*NX + i] += OUT[index];\n            ++index;\n        }\n        for(i = 0; i < NY; ++i){\n            if(Y) Y[j*NY + i] += OUT[index];\n            ++index;\n        }\n    }\n}\n\nvoid inter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT)\n{\n    int i, j;\n    int index = 0;\n    for(j = 0; j < B; ++j) {\n        for(i = 0; i < NX; ++i){\n            OUT[index++] = X[j*NX + i];\n        }\n        for(i = 0; i < NY; ++i){\n            OUT[index++] = Y[j*NY + i];\n        
}\n    }\n}\n\nvoid copy_cpu(int N, float *X, int INCX, float *Y, int INCY)\n{\n    int i;\n    for(i = 0; i < N; ++i) Y[i*INCY] = X[i*INCX];\n}\n\nvoid mult_add_into_cpu(int N, float *X, float *Y, float *Z)\n{\n    int i;\n    for(i = 0; i < N; ++i) Z[i] += X[i]*Y[i];\n}\n\nvoid smooth_l1_cpu(int n, float *pred, float *truth, float *delta, float *error)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        float diff = truth[i] - pred[i];\n        float abs_val = fabs(diff);\n        if(abs_val < 1) {\n            error[i] = diff * diff;\n            delta[i] = diff;\n        }\n        else {\n            error[i] = 2*abs_val - 1;\n            delta[i] = (diff > 0) ? 1 : -1; // gradient sign follows diff, as in smooth_l1_kernel\n        }\n    }\n}\n\nvoid l1_cpu(int n, float *pred, float *truth, float *delta, float *error)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        float diff = truth[i] - pred[i];\n        error[i] = fabs(diff);\n        delta[i] = diff > 0 ? 1 : -1;\n    }\n}\n\nvoid l2_cpu(int n, float *pred, float *truth, float *delta, float *error)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        float diff = truth[i] - pred[i];\n        error[i] = diff * diff;\n        delta[i] = diff;\n    }\n}\n\nfloat dot_cpu(int N, float *X, int INCX, float *Y, int INCY)\n{\n    int i;\n    float dot = 0;\n    for(i = 0; i < N; ++i) dot += X[i*INCX] * Y[i*INCY];\n    return dot;\n}\n\nvoid softmax(float *input, int n, float temp, int stride, float *output)\n{\n    int i;\n    float sum = 0;\n    float largest = -FLT_MAX;\n    for(i = 0; i < n; ++i){\n        if(input[i*stride] > largest) largest = input[i*stride];\n    }\n    for(i = 0; i < n; ++i){\n        float e = exp(input[i*stride]/temp - largest/temp);\n        sum += e;\n        output[i*stride] = e;\n    }\n    for(i = 0; i < n; ++i){\n        output[i*stride] /= sum;\n    }\n}\n\n\nvoid softmax_cpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output)\n{\n    int g, b;\n    for(b = 0; b 
< batch; ++b){\n        for(g = 0; g < groups; ++g){\n            softmax(input + b*batch_offset + g*group_offset, n, temp, stride, output + b*batch_offset + g*group_offset);\n        }\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/blas.h",
    "content": "#ifndef BLAS_H\n#define BLAS_H\n#include \"darknet.h\"\n\nvoid flatten(float *x, int size, int layers, int batch, int forward);\nvoid pm(int M, int N, float *A);\nfloat *random_matrix(int rows, int cols);\nvoid time_random_matrix(int TA, int TB, int m, int k, int n);\nvoid reorg_cpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out);\n\nvoid test_blas();\n\nvoid inter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT);\nvoid deinter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT);\nvoid mult_add_into_cpu(int N, float *X, float *Y, float *Z);\n\nvoid const_cpu(int N, float ALPHA, float *X, int INCX);\nvoid constrain_gpu(int N, float ALPHA, float * X, int INCX);\nvoid pow_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);\nvoid mul_cpu(int N, float *X, int INCX, float *Y, int INCY);\n\nvoid fill_cpu(int N, float ALPHA, float * X, int INCX);\nfloat dot_cpu(int N, float *X, int INCX, float *Y, int INCY);\nint test_gpu_blas();\nvoid shortcut_cpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out);\n\nvoid mean_cpu(float *x, int batch, int filters, int spatial, float *mean);\nvoid variance_cpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);\n\nvoid scale_bias(float *output, float *scales, int batch, int n, int size);\nvoid backward_scale_cpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates);\nvoid mean_delta_cpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta);\nvoid  variance_delta_cpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta);\nvoid normalize_delta_cpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta);\n\nvoid smooth_l1_cpu(int n, float *pred, float *truth, float *delta, float *error);\nvoid l2_cpu(int n, float 
*pred, float *truth, float *delta, float *error);\nvoid l1_cpu(int n, float *pred, float *truth, float *delta, float *error);\nvoid weighted_sum_cpu(float *a, float *b, float *s, int num, float *c);\nvoid weighted_delta_cpu(float *a, float *b, float *s, float *da, float *db, float *ds, int n, float *dc);\n\nvoid softmax(float *input, int n, float temp, int stride, float *output);\nvoid softmax_cpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output);\n\n#ifdef GPU\n#include \"cuda.h\"\n#include \"tree.h\"\n\nvoid axpy_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY);\nvoid axpy_gpu_offset(int N, float ALPHA, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY);\nvoid copy_gpu(int N, float * X, int INCX, float * Y, int INCY);\nvoid copy_gpu_offset(int N, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY);\nvoid add_gpu(int N, float ALPHA, float * X, int INCX);\nvoid supp_gpu(int N, float ALPHA, float * X, int INCX);\nvoid mask_gpu(int N, float * X, float mask_num, float * mask);\nvoid scale_mask_gpu(int N, float * X, float mask_num, float * mask, float scale);\nvoid const_gpu(int N, float ALPHA, float *X, int INCX);\nvoid pow_gpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);\nvoid mul_gpu(int N, float *X, int INCX, float *Y, int INCY);\n\nvoid mean_gpu(float *x, int batch, int filters, int spatial, float *mean);\nvoid variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);\nvoid normalize_gpu(float *x, float *mean, float *variance, int batch, int filters, int spatial);\n\nvoid normalize_delta_gpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta);\n\nvoid fast_mean_delta_gpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta);\nvoid fast_variance_delta_gpu(float *x, float *delta, float *mean, float 
*variance, int batch, int filters, int spatial, float *variance_delta);\n\nvoid fast_variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);\nvoid fast_mean_gpu(float *x, int batch, int filters, int spatial, float *mean);\nvoid shortcut_gpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out);\nvoid scale_bias_gpu(float *output, float *biases, int batch, int n, int size);\nvoid backward_scale_gpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates);\nvoid add_bias_gpu(float *output, float *biases, int batch, int n, int size);\nvoid backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size);\n\nvoid smooth_l1_gpu(int n, float *pred, float *truth, float *delta, float *error);\nvoid l2_gpu(int n, float *pred, float *truth, float *delta, float *error);\nvoid l1_gpu(int n, float *pred, float *truth, float *delta, float *error);\nvoid weighted_delta_gpu(float *a, float *b, float *s, float *da, float *db, float *ds, int num, float *dc);\nvoid weighted_sum_gpu(float *a, float *b, float *s, int num, float *c);\nvoid mult_add_into_gpu(int num, float *a, float *b, float *c);\nvoid inter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT);\nvoid deinter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT);\n\nvoid reorg_gpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out);\n\nvoid softmax_gpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output);\nvoid adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t);\nvoid adam_gpu(int n, float *x, float *m, float *v, float B1, float B2, float rate, float eps, int t);\n\nvoid flatten_gpu(float *x, int spatial, int layers, int batch, int forward, float 
*out);\nvoid softmax_tree(float *input, int spatial, int batch, int stride, float temp, float *output, tree hier);\n\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/blas_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n#include <assert.h>\n\nextern \"C\" {\n#include \"blas.h\"\n#include \"cuda.h\"\n#include \"utils.h\"\n}\n\n__global__ void scale_bias_kernel(float *output, float *biases, int n, int size)\n{\n    int offset = blockIdx.x * blockDim.x + threadIdx.x;\n    int filter = blockIdx.y;\n    int batch = blockIdx.z;\n\n    if(offset < size) output[(batch*n+filter)*size + offset] *= biases[filter];\n}\n\nvoid scale_bias_gpu(float *output, float *biases, int batch, int n, int size)\n{\n    dim3 dimGrid((size-1)/BLOCK + 1, n, batch);\n    dim3 dimBlock(BLOCK, 1, 1);\n\n    scale_bias_kernel<<<dimGrid, dimBlock>>>(output, biases, n, size);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void backward_scale_kernel(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates)\n{\n    __shared__ float part[BLOCK];\n    int i,b;\n    int filter = blockIdx.x;\n    int p = threadIdx.x;\n    float sum = 0;\n    for(b = 0; b < batch; ++b){\n        for(i = 0; i < size; i += BLOCK){\n            int index = p + i + size*(filter + n*b);\n            sum += (p+i < size) ? 
delta[index]*x_norm[index] : 0;\n        }\n    }\n    part[p] = sum;\n    __syncthreads();\n    if (p == 0) {\n        for(i = 0; i < BLOCK; ++i) scale_updates[filter] += part[i];\n    }\n}\n\nvoid backward_scale_gpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates)\n{\n    backward_scale_kernel<<<n, BLOCK>>>(x_norm, delta, batch, n, size, scale_updates);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void add_bias_kernel(float *output, float *biases, int batch, int n, int size)\n{\n    int index = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (index >= n*size*batch) return;\n    int i = index % size;\n    index /= size;\n    int j = index % n;\n    index /= n;\n    int k = index;\n\n    output[(k*n+j)*size + i] += biases[j];\n}\n\nvoid add_bias_gpu(float *output, float *biases, int batch, int n, int size)\n{\n    int num = n*size*batch;\n\n    add_bias_kernel<<<cuda_gridsize(num), BLOCK>>>(output, biases, batch, n, size);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void backward_bias_conn_kernel(float *bias_updates, float *delta, int batch, int n)\n{\n    int index = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (index >= n) return;\n    int b;\n    float sum = 0;\n    for(b = 0; b < batch; ++b){\n        int i = b*n + index;\n        sum += delta[i];\n    }\n    bias_updates[index] += sum;\n}\n\n__global__ void backward_bias_kernel(float *bias_updates, float *delta, int batch, int n, int size)\n{\n    __shared__ float part[BLOCK];\n    int i,b;\n    int filter = blockIdx.x;\n    int p = threadIdx.x;\n    float sum = 0;\n    for(b = 0; b < batch; ++b){\n        for(i = 0; i < size; i += BLOCK){\n            int index = p + i + size*(filter + n*b);\n            sum += (p+i < size) ? 
delta[index] : 0;\n        }\n    }\n    part[p] = sum;\n    __syncthreads();\n    if (p == 0) {\n        for(i = 0; i < BLOCK; ++i) bias_updates[filter] += part[i];\n    }\n}\n\nvoid backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size)\n{\n    if(size == 1){\n        backward_bias_conn_kernel<<<cuda_gridsize(n), BLOCK>>>(bias_updates, delta, batch, n);\n    }else{\n        backward_bias_kernel<<<n, BLOCK>>>(bias_updates, delta, batch, n, size);\n    }\n    check_error(cudaPeekAtLastError());\n}\n\n/*\n__global__ void dot_kernel(float *output, float scale, int batch, int n, int size, float *delta)\n{\n    int index = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    int f1 = index / n;\n    int f2 = index % n;\n    if (f2 <= f1) return;\n    \n    float sum = 0;\n    float norm1 = 0;\n    float norm2 = 0;\n    int b, i;\n    for(b = 0; b <  batch; ++b){\n        for(i = 0; i < size; ++i){\n            int i1 = b * size * n + f1 * size + i;\n            int i2 = b * size * n + f2 * size + i;\n            sum += output[i1] * output[i2];\n            norm1 += output[i1] * output[i1];\n            norm2 += output[i2] * output[i2];\n        }\n    }\n    norm1 = sqrt(norm1);\n    norm2 = sqrt(norm2);\n    float norm = norm1 * norm2;\n    sum = sum / norm;\n    for(b = 0; b <  batch; ++b){\n        for(i = 0; i < size; ++i){\n            int i1 = b * size * n + f1 * size + i;\n            int i2 = b * size * n + f2 * size + i;\n            delta[i1] += - scale * sum * output[i2] / norm;\n            delta[i2] += - scale * sum * output[i1] / norm;\n        }\n    }\n}\n\nvoid dot_error_gpu(layer l)\n{\n    dot_kernel<<<cuda_gridsize(l.n*l.n), BLOCK>>>(l.output_gpu, l.dot, l.batch, l.n, l.out_w * l.out_h, l.delta_gpu);\n    check_error(cudaPeekAtLastError());\n}\n*/\n\n\n__global__ void adam_kernel(int N, float *x, float *m, float *v, float B1, float B2, float rate, float eps, int t)\n{\n    int index = (blockIdx.x + 
blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (index >= N) return;\n    \n    x[index] = x[index] + (rate * sqrtf(1.f-powf(B2, t)) / (1.f-powf(B1, t)) * m[index] / (sqrtf(v[index]) + eps));\n}\n\nextern \"C\" void adam_gpu(int n, float *x, float *m, float *v, float B1, float B2, float rate, float eps, int t)\n{\n    adam_kernel<<<cuda_gridsize(n), BLOCK>>>(n, x, m, v, B1, B2, rate, eps, t);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t)\n{\n    scal_gpu(n, B1, m, 1);\n    scal_gpu(n, B2, v, 1);\n    axpy_gpu(n, -decay*batch, w, 1, d, 1);\n\n    axpy_gpu(n, (1-B1), d, 1, m, 1);\n    mul_gpu(n, d, 1, d, 1);\n    axpy_gpu(n, (1-B2), d, 1, v, 1);\n\n    adam_gpu(n, w, m, v, B1, B2, rate, eps, t);\n    fill_gpu(n, 0, d, 1);\n}\n\n__global__ void normalize_kernel(int N, float *x, float *mean, float *variance, int batch, int filters, int spatial)\n{\n    int index = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (index >= N) return;\n    int f = (index/spatial)%filters;\n    \n    x[index] = (x[index] - mean[f])/(sqrtf(variance[f] + .00001f));\n}\n\n__global__ void normalize_delta_kernel(int N, float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta)\n{\n    int index = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (index >= N) return;\n    int f = (index/spatial)%filters;\n    \n    delta[index] = delta[index] * 1.f/(sqrtf(variance[f] + .00001f)) + variance_delta[f] * 2.f * (x[index] - mean[f]) / (spatial * batch) + mean_delta[f]/(spatial*batch);\n}\n\nextern \"C\" void normalize_delta_gpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta)\n{\n    size_t N = batch*filters*spatial;\n    
normalize_delta_kernel<<<cuda_gridsize(N), BLOCK>>>(N, x, mean, variance, mean_delta, variance_delta, batch, filters, spatial, delta);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void  variance_delta_kernel(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (i >= filters) return;\n    int j,k;\n    variance_delta[i] = 0;\n    for(j = 0; j < batch; ++j){\n        for(k = 0; k < spatial; ++k){\n            int index = j*filters*spatial + i*spatial + k;\n            variance_delta[i] += delta[index]*(x[index] - mean[i]);\n        }\n    }\n    variance_delta[i] *= -.5f * powf(variance[i] + .00001f, (float)(-3.f/2.f));\n}\n\n__global__ void accumulate_kernel(float *x, int n, int groups, float *sum)\n{\n    int k;\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (i >= groups) return;\n    sum[i] = 0;\n    for(k = 0; k < n; ++k){\n        sum[i] += x[k*groups + i];\n    }\n}\n\n__global__ void fast_mean_delta_kernel(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta)\n{\n    const int threads = BLOCK;\n    __shared__ float local[threads];\n\n    int id = threadIdx.x;\n    local[id] = 0;\n\n    int filter = blockIdx.x;\n\n    int i, j;\n    for(j = 0; j < batch; ++j){\n        for(i = 0; i < spatial; i += threads){\n            int index = j*spatial*filters + filter*spatial + i + id;\n            local[id] += (i+id < spatial) ? 
delta[index] : 0;\n        }\n    }\n\n    __syncthreads();\n\n    if(id == 0){\n        mean_delta[filter] = 0;\n        for(i = 0; i < threads; ++i){\n            mean_delta[filter] += local[i];\n        }\n        mean_delta[filter] *= (-1.f/sqrtf(variance[filter] + .00001f));\n    }\n}\n\n__global__ void  fast_variance_delta_kernel(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta)\n{\n    const int threads = BLOCK;\n    __shared__ float local[threads];\n\n    int id = threadIdx.x;\n    local[id] = 0;\n\n    int filter = blockIdx.x;\n\n    int i, j;\n    for(j = 0; j < batch; ++j){\n        for(i = 0; i < spatial; i += threads){\n            int index = j*spatial*filters + filter*spatial + i + id;\n\n            local[id] += (i+id < spatial) ? delta[index]*(x[index] - mean[filter]) : 0;\n        }\n    }\n\n    __syncthreads();\n\n    if(id == 0){\n        variance_delta[filter] = 0;\n        for(i = 0; i < threads; ++i){\n            variance_delta[filter] += local[i];\n        }\n        variance_delta[filter] *= -.5f * powf(variance[filter] + .00001f, (float)(-3.f/2.f));\n    }\n}\n\n\n__global__ void mean_delta_kernel(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (i >= filters) return;\n    int j,k;\n    mean_delta[i] = 0;\n    for (j = 0; j < batch; ++j) {\n        for (k = 0; k < spatial; ++k) {\n            int index = j*filters*spatial + i*spatial + k;\n            mean_delta[i] += delta[index];\n        }\n    }\n    mean_delta[i] *= (-1.f/sqrtf(variance[i] + .00001f));\n}\n\nextern \"C\" void mean_delta_gpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta)\n{\n    mean_delta_kernel<<<cuda_gridsize(filters), BLOCK>>>(delta, variance, batch, filters, spatial, mean_delta);\n    check_error(cudaPeekAtLastError());\n}\n\nextern 
\"C\" void fast_mean_delta_gpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta)\n{\n    fast_mean_delta_kernel<<<filters, BLOCK>>>(delta, variance, batch, filters, spatial, mean_delta);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void fast_variance_delta_gpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta)\n{\n    fast_variance_delta_kernel<<<filters, BLOCK>>>(x, delta, mean, variance, batch, filters, spatial, variance_delta);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void  mean_kernel(float *x, int batch, int filters, int spatial, float *mean)\n{\n    float scale = 1.f/(batch * spatial);\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (i >= filters) return;\n    int j,k;\n    mean[i] = 0;\n    for(j = 0; j < batch; ++j){\n        for(k = 0; k < spatial; ++k){\n            int index = j*filters*spatial + i*spatial + k;\n            mean[i] += x[index];\n        }\n    }\n    mean[i] *= scale;\n}\n\n__global__ void variance_kernel(float *x, float *mean, int batch, int filters, int spatial, float *variance)\n{\n    float scale = 1.f/(batch * spatial - 1);\n    int j,k;\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (i >= filters) return;\n    variance[i] = 0;\n    for(j = 0; j < batch; ++j){\n        for(k = 0; k < spatial; ++k){\n            int index = j*filters*spatial + i*spatial + k;\n            variance[i] += powf((x[index] - mean[i]), 2);\n        }\n    }\n    variance[i] *= scale;\n}\n\n__global__ void reorg_kernel(int N, float *x, int w, int h, int c, int batch, int stride, int forward, float *out)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i >= N) return;\n    int in_index = i;\n    int in_w = i%w;\n    i = i/w;\n    int in_h = i%h;\n    i = i/h;\n    int in_c = i%c;\n    i = i/c;\n    int b = 
i%batch;\n\n    int out_c = c/(stride*stride);\n\n    int c2 = in_c % out_c;\n    int offset = in_c / out_c;\n    int w2 = in_w*stride + offset % stride;\n    int h2 = in_h*stride + offset / stride;\n    //printf(\"%d\\n\", offset);\n    int out_index = w2 + w*stride*(h2 + h*stride*(c2 + out_c*b));\n\n   // printf(\"%d %d %d\\n\", w2, h2, c2);\n    //printf(\"%d %d\\n\", in_index, out_index);\n    //if(out_index >= N || out_index < 0) printf(\"bad bad bad \\n\");\n\n    if(forward) out[out_index] = x[in_index];\n    else out[in_index] = x[out_index];\n    //if(forward) out[1] = x[1];\n    //else out[0] = x[0];\n}\n\n__global__ void axpy_kernel(int N, float ALPHA, float *X, int OFFX, int INCX,  float *Y, int OFFY, int INCY)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) Y[OFFY+i*INCY] += ALPHA*X[OFFX+i*INCX];\n}\n\n__global__ void pow_kernel(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) Y[i*INCY] = pow(X[i*INCX], ALPHA);\n}\n\n__global__ void const_kernel(int N, float ALPHA, float *X, int INCX)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) X[i*INCX] = ALPHA;\n}\n\n__global__ void constrain_kernel(int N, float ALPHA, float *X, int INCX)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) X[i*INCX] = fminf(ALPHA, fmaxf(-ALPHA, X[i*INCX]));\n}\n\n__global__ void supp_kernel(int N, float ALPHA, float *X, int INCX)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) {\n        if((X[i*INCX] * X[i*INCX]) < (ALPHA * ALPHA)) X[i*INCX] = 0;\n    }\n}\n\n__global__ void add_kernel(int N, float ALPHA, float *X, int INCX)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) X[i*INCX] += ALPHA;\n}\n\n__global__ void scal_kernel(int N, float ALPHA, 
float *X, int INCX)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) X[i*INCX] *= ALPHA;\n}\n\n__global__ void fill_kernel(int N, float ALPHA, float *X, int INCX)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) X[i*INCX] = ALPHA;\n}\n\n__global__ void mask_kernel(int n,  float *x, float mask_num, float *mask)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n && mask[i] == mask_num) x[i] = mask_num;\n}\n\n__global__ void copy_kernel(int N,  float *X, int OFFX, int INCX, float *Y, int OFFY, int INCY)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) Y[i*INCY + OFFY] = X[i*INCX + OFFX];\n}\n\n__global__ void mul_kernel(int N, float *X, int INCX, float *Y, int INCY)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < N) Y[i*INCY] *= X[i*INCX];\n}\n\n\nextern \"C\" void normalize_gpu(float *x, float *mean, float *variance, int batch, int filters, int spatial)\n{\n    size_t N = batch*filters*spatial;\n    normalize_kernel<<<cuda_gridsize(N), BLOCK>>>(N, x, mean, variance, batch, filters, spatial);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void  fast_mean_kernel(float *x, int batch, int filters, int spatial, float *mean)\n{\n    const int threads = BLOCK;\n    __shared__ float local[threads];\n\n    int id = threadIdx.x;\n    local[id] = 0;\n\n    int filter = blockIdx.x;\n\n    int i, j;\n    for(j = 0; j < batch; ++j){\n        for(i = 0; i < spatial; i += threads){\n            int index = j*spatial*filters + filter*spatial + i + id;\n            local[id] += (i+id < spatial) ? 
x[index] : 0;\n        }\n    }\n\n    __syncthreads();\n\n    if(id == 0){\n        mean[filter] = 0;\n        for(i = 0; i < threads; ++i){\n            mean[filter] += local[i];\n        }\n        mean[filter] /= spatial * batch;\n    }\n}\n\n__global__ void  fast_variance_kernel(float *x, float *mean, int batch, int filters, int spatial, float *variance)\n{\n    const int threads = BLOCK;\n    __shared__ float local[threads];\n\n    int id = threadIdx.x;\n    local[id] = 0;\n\n    int filter = blockIdx.x;\n\n    int i, j;\n    for(j = 0; j < batch; ++j){\n        for(i = 0; i < spatial; i += threads){\n            int index = j*spatial*filters + filter*spatial + i + id;\n\n            local[id] += (i+id < spatial) ? powf((x[index] - mean[filter]), 2) : 0;\n        }\n    }\n\n    __syncthreads();\n\n    if(id == 0){\n        variance[filter] = 0;\n        for(i = 0; i < threads; ++i){\n            variance[filter] += local[i];\n        }\n        variance[filter] /= (spatial * batch - 1);\n    }\n}\n\nextern \"C\" void fast_mean_gpu(float *x, int batch, int filters, int spatial, float *mean)\n{\n    fast_mean_kernel<<<filters, BLOCK>>>(x, batch, filters, spatial, mean);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void fast_variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance)\n{\n    fast_variance_kernel<<<filters, BLOCK>>>(x, mean, batch, filters, spatial, variance);\n    check_error(cudaPeekAtLastError());\n}\n\n\nextern \"C\" void mean_gpu(float *x, int batch, int filters, int spatial, float *mean)\n{\n    mean_kernel<<<cuda_gridsize(filters), BLOCK>>>(x, batch, filters, spatial, mean);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance)\n{\n    variance_kernel<<<cuda_gridsize(filters), BLOCK>>>(x, mean, batch, filters, spatial, variance);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void 
axpy_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY)\n{\n    axpy_gpu_offset(N, ALPHA, X, 0, INCX, Y, 0, INCY);\n}\n\nextern \"C\" void pow_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY)\n{\n    pow_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX, Y, INCY);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void axpy_gpu_offset(int N, float ALPHA, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY)\n{\n    axpy_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, OFFX, INCX, Y, OFFY, INCY);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void copy_gpu(int N, float * X, int INCX, float * Y, int INCY)\n{\n    copy_gpu_offset(N, X, 0, INCX, Y, 0, INCY);\n}\n\nextern \"C\" void mul_gpu(int N, float * X, int INCX, float * Y, int INCY)\n{\n    mul_kernel<<<cuda_gridsize(N), BLOCK>>>(N, X, INCX, Y, INCY);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void copy_gpu_offset(int N, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY)\n{\n    copy_kernel<<<cuda_gridsize(N), BLOCK>>>(N, X, OFFX, INCX, Y, OFFY, INCY);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void flatten_kernel(int N, float *x, int spatial, int layers, int batch, int forward, float *out)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i >= N) return;\n    int in_s = i%spatial;\n    i = i/spatial;\n    int in_c = i%layers;\n    i = i/layers;\n    int b = i;\n\n    int i1 = b*layers*spatial + in_c*spatial + in_s;\n    int i2 = b*layers*spatial + in_s*layers +  in_c;\n\n    if (forward) out[i2] = x[i1];\n    else out[i1] = x[i2];\n}\n\nextern \"C\" void flatten_gpu(float *x, int spatial, int layers, int batch, int forward, float *out)\n{\n    int size = spatial*batch*layers;\n    flatten_kernel<<<cuda_gridsize(size), BLOCK>>>(size, x, spatial, layers, batch, forward, out);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void reorg_gpu(float *x, int w, int 
h, int c, int batch, int stride, int forward, float *out)\n{\n    int size = w*h*c*batch;\n    reorg_kernel<<<cuda_gridsize(size), BLOCK>>>(size, x, w, h, c, batch, stride, forward, out);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void scale_mask_kernel(int n,  float *x, float mask_num, float *mask, float scale)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n && mask[i] == mask_num) x[i] *= scale;\n}\n\nextern \"C\" void scale_mask_gpu(int N, float * X, float mask_num, float * mask, float scale)\n{\n    scale_mask_kernel<<<cuda_gridsize(N), BLOCK>>>(N, X, mask_num, mask, scale);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void mask_gpu(int N, float * X, float mask_num, float * mask)\n{\n    mask_kernel<<<cuda_gridsize(N), BLOCK>>>(N, X, mask_num, mask);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void const_gpu(int N, float ALPHA, float * X, int INCX)\n{\n    const_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void constrain_gpu(int N, float ALPHA, float * X, int INCX)\n{\n    constrain_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX);\n    check_error(cudaPeekAtLastError());\n}\n\n\nextern \"C\" void add_gpu(int N, float ALPHA, float * X, int INCX)\n{\n    add_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void scal_gpu(int N, float ALPHA, float * X, int INCX)\n{\n    scal_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void supp_gpu(int N, float ALPHA, float * X, int INCX)\n{\n    supp_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void fill_gpu(int N, float ALPHA, float * X, int INCX)\n{\n    fill_kernel<<<cuda_gridsize(N), BLOCK>>>(N, ALPHA, X, INCX);\n    
check_error(cudaPeekAtLastError());\n}\n\n__global__ void shortcut_kernel(int size, int minw, int minh, int minc, int stride, int sample, int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (id >= size) return;\n    int i = id % minw;\n    id /= minw;\n    int j = id % minh;\n    id /= minh;\n    int k = id % minc;\n    id /= minc;\n    int b = id % batch;\n\n    int out_index = i*sample + w2*(j*sample + h2*(k + c2*b));\n    int add_index = i*stride + w1*(j*stride + h1*(k + c1*b));\n    out[out_index] += add[add_index];\n}\n\nextern \"C\" void shortcut_gpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out)\n{\n    int minw = (w1 < w2) ? w1 : w2;\n    int minh = (h1 < h2) ? h1 : h2;\n    int minc = (c1 < c2) ? c1 : c2;\n\n    int stride = w1/w2;\n    int sample = w2/w1;\n    assert(stride == h1/h2);\n    assert(sample == h2/h1);\n    if(stride < 1) stride = 1;\n    if(sample < 1) sample = 1;\n\n    int size = batch * minw * minh * minc;\n    shortcut_kernel<<<cuda_gridsize(size), BLOCK>>>(size, minw, minh, minc, stride, sample, batch, w1, h1, c1, add, w2, h2, c2, out);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void smooth_l1_kernel(int n, float *pred, float *truth, float *delta, float *error)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n){\n        float diff = truth[i] - pred[i];\n        float abs_val = fabsf(diff);\n        if(abs_val < 1) {\n            error[i] = diff * diff;\n            delta[i] = diff;\n        }\n        else {\n            error[i] = 2*abs_val - 1;\n            delta[i] = (diff > 0) ? 
1 : -1;\n        }\n    }\n}\n\nextern \"C\" void smooth_l1_gpu(int n, float *pred, float *truth, float *delta, float *error)\n{\n    smooth_l1_kernel<<<cuda_gridsize(n), BLOCK>>>(n, pred, truth, delta, error);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void l2_kernel(int n, float *pred, float *truth, float *delta, float *error)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n){\n        float diff = truth[i] - pred[i];\n        error[i] = diff * diff; //I know this is technically wrong, deal with it.\n        delta[i] = diff;\n    }\n}\n\nextern \"C\" void l2_gpu(int n, float *pred, float *truth, float *delta, float *error)\n{\n    l2_kernel<<<cuda_gridsize(n), BLOCK>>>(n, pred, truth, delta, error);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void l1_kernel(int n, float *pred, float *truth, float *delta, float *error)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n){\n        float diff = truth[i] - pred[i];\n        error[i] = fabsf(diff);\n        delta[i] = (diff > 0) ? 1 : -1;\n    }\n}\n\nextern \"C\" void l1_gpu(int n, float *pred, float *truth, float *delta, float *error)\n{\n    l1_kernel<<<cuda_gridsize(n), BLOCK>>>(n, pred, truth, delta, error);\n    check_error(cudaPeekAtLastError());\n}\n\n\n\n\n__global__ void weighted_sum_kernel(int n, float *a, float *b, float *s, float *c)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n){\n        c[i] = s[i]*a[i] + (1-s[i])*(b ? 
b[i] : 0);\n    }\n}\n\n__global__ void deinter_kernel(int NX, float *X, int NY, float *Y, int B, float *OUT)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < (NX+NY)*B){\n        int b = i / (NX+NY);\n        int j = i % (NX+NY);\n        if (j < NX){\n            if(X) X[b*NX + j] += OUT[i];\n        } else {\n            if(Y) Y[b*NY + j - NX] += OUT[i];\n        }\n    }\n}\n\nextern \"C\" void deinter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT)\n{\n    deinter_kernel<<<cuda_gridsize((NX+NY)*B), BLOCK>>>(NX, X, NY, Y, B, OUT);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void inter_kernel(int NX, float *X, int NY, float *Y, int B, float *OUT)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < (NX+NY)*B){\n        int b = i / (NX+NY);\n        int j = i % (NX+NY);\n        if (j < NX){\n            OUT[i] = X[b*NX + j];\n        } else {\n            OUT[i] = Y[b*NY + j - NX];\n        }\n    }\n}\n\nextern \"C\" void inter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT)\n{\n    inter_kernel<<<cuda_gridsize((NX+NY)*B), BLOCK>>>(NX, X, NY, Y, B, OUT);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void weighted_sum_gpu(float *a, float *b, float *s, int num, float *c)\n{\n    weighted_sum_kernel<<<cuda_gridsize(num), BLOCK>>>(num, a, b, s, c);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void weighted_delta_kernel(int n, float *a, float *b, float *s, float *da, float *db, float *ds, float *dc)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n){\n        if(da) da[i] += dc[i] * s[i];\n        if(db) db[i] += dc[i] * (1-s[i]);\n        ds[i] += dc[i] * (a[i] - b[i]);\n    }\n}\n\nextern \"C\" void weighted_delta_gpu(float *a, float *b, float *s, float *da, float *db, float *ds, int num, float *dc)\n{\n    weighted_delta_kernel<<<cuda_gridsize(num), BLOCK>>>(num, a, b, s, da, 
db, ds, dc);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void mult_add_into_kernel(int n, float *a, float *b, float *c)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(i < n){\n        c[i] += a[i]*b[i];\n    }\n}\n\nextern \"C\" void mult_add_into_gpu(int num, float *a, float *b, float *c)\n{\n    mult_add_into_kernel<<<cuda_gridsize(num), BLOCK>>>(num, a, b, c);\n    check_error(cudaPeekAtLastError());\n}\n\n\n__device__ void softmax_device(float *input, int n, float temp, int stride, float *output)\n{\n    int i;\n    float sum = 0;\n    float largest = -INFINITY;\n    for(i = 0; i < n; ++i){\n        float val = input[i*stride];\n        largest = (val>largest) ? val : largest;\n    }\n    for(i = 0; i < n; ++i){\n        float e = expf(input[i*stride]/temp - largest/temp);\n        sum += e;\n        output[i*stride] = e;\n    }\n    for(i = 0; i < n; ++i){\n        output[i*stride] /= sum;\n    }\n}\n\n\n__global__ void softmax_tree_kernel(float *input, int spatial, int batch, int stride, float temp, float *output, int groups, int *group_size, int *group_offset)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (id >= spatial*batch*groups) return;\n    int s = id % spatial;\n    id = id / spatial;\n    int g = id % groups;\n    int b = id / groups;\n    int goff = group_offset[g]*spatial;\n    int boff = b*stride;\n    softmax_device(input + goff + boff + s, group_size[g], temp, spatial, output + goff + boff + s);\n}\n\nextern \"C\" void softmax_tree(float *input, int spatial, int batch, int stride, float temp, float *output, tree hier)\n{\n    int *tree_groups_size = cuda_make_int_array(hier.group_size, hier.groups);\n    int *tree_groups_offset = cuda_make_int_array(hier.group_offset, hier.groups);\n    /*\n    static int *tree_groups_size = 0;\n    static int *tree_groups_offset = 0;\n    if(!tree_groups_size){\n        tree_groups_size = 
cuda_make_int_array(hier.group_size, hier.groups);\n        tree_groups_offset = cuda_make_int_array(hier.group_offset, hier.groups);\n    }\n    */\n    int num = spatial*batch*hier.groups;\n    softmax_tree_kernel<<<cuda_gridsize(num), BLOCK>>>(input, spatial, batch, stride, temp, output, hier.groups, tree_groups_size, tree_groups_offset);\n    check_error(cudaPeekAtLastError());\n    cuda_free((float *)tree_groups_size);\n    cuda_free((float *)tree_groups_offset);\n}\n\n__global__ void softmax_kernel(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (id >= batch*groups) return;\n    int b = id / groups;\n    int g = id % groups;\n    softmax_device(input + b*batch_offset + g*group_offset, n, temp, stride, output + b*batch_offset + g*group_offset);\n}\n\nextern \"C\" void softmax_gpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output)\n{\n    softmax_kernel<<<cuda_gridsize(batch*groups), BLOCK>>>(input, n, batch, batch_offset, groups, group_offset, stride, temp, output);\n    check_error(cudaPeekAtLastError());\n}\n"
  },
  {
    "path": "lightnet/_darknet/box.c",
    "content": "#include \"box.h\"\n#include <stdio.h>\n#include <math.h>\n#include <stdlib.h>\n\nbox float_to_box(float *f, int stride)\n{\n    box b;\n    b.x = f[0];\n    b.y = f[1*stride];\n    b.w = f[2*stride];\n    b.h = f[3*stride];\n    return b;\n}\n\ndbox derivative(box a, box b)\n{\n    dbox d;\n    d.dx = 0;\n    d.dw = 0;\n    float l1 = a.x - a.w/2;\n    float l2 = b.x - b.w/2;\n    if (l1 > l2){\n        d.dx -= 1;\n        d.dw += .5;\n    }\n    float r1 = a.x + a.w/2;\n    float r2 = b.x + b.w/2;\n    if(r1 < r2){\n        d.dx += 1;\n        d.dw += .5;\n    }\n    if (l1 > r2) {\n        d.dx = -1;\n        d.dw = 0;\n    }\n    if (r1 < l2){\n        d.dx = 1;\n        d.dw = 0;\n    }\n\n    d.dy = 0;\n    d.dh = 0;\n    float t1 = a.y - a.h/2;\n    float t2 = b.y - b.h/2;\n    if (t1 > t2){\n        d.dy -= 1;\n        d.dh += .5;\n    }\n    float b1 = a.y + a.h/2;\n    float b2 = b.y + b.h/2;\n    if(b1 < b2){\n        d.dy += 1;\n        d.dh += .5;\n    }\n    if (t1 > b2) {\n        d.dy = -1;\n        d.dh = 0;\n    }\n    if (b1 < t2){\n        d.dy = 1;\n        d.dh = 0;\n    }\n    return d;\n}\n\nfloat overlap(float x1, float w1, float x2, float w2)\n{\n    float l1 = x1 - w1/2;\n    float l2 = x2 - w2/2;\n    float left = l1 > l2 ? l1 : l2;\n    float r1 = x1 + w1/2;\n    float r2 = x2 + w2/2;\n    float right = r1 < r2 ? 
r1 : r2;\n    return right - left;\n}\n\nfloat box_intersection(box a, box b)\n{\n    float w = overlap(a.x, a.w, b.x, b.w);\n    float h = overlap(a.y, a.h, b.y, b.h);\n    if(w < 0 || h < 0) return 0;\n    float area = w*h;\n    return area;\n}\n\nfloat box_union(box a, box b)\n{\n    float i = box_intersection(a, b);\n    float u = a.w*a.h + b.w*b.h - i;\n    return u;\n}\n\nfloat box_iou(box a, box b)\n{\n    return box_intersection(a, b)/box_union(a, b);\n}\n\nfloat box_rmse(box a, box b)\n{\n    return sqrt(pow(a.x-b.x, 2) + \n                pow(a.y-b.y, 2) + \n                pow(a.w-b.w, 2) + \n                pow(a.h-b.h, 2));\n}\n\ndbox dintersect(box a, box b)\n{\n    float w = overlap(a.x, a.w, b.x, b.w);\n    float h = overlap(a.y, a.h, b.y, b.h);\n    dbox dover = derivative(a, b);\n    dbox di;\n\n    di.dw = dover.dw*h;\n    di.dx = dover.dx*h;\n    di.dh = dover.dh*w;\n    di.dy = dover.dy*w;\n\n    return di;\n}\n\ndbox dunion(box a, box b)\n{\n    dbox du;\n\n    dbox di = dintersect(a, b);\n    du.dw = a.h - di.dw;\n    du.dh = a.w - di.dh;\n    du.dx = -di.dx;\n    du.dy = -di.dy;\n\n    return du;\n}\n\n\nvoid test_dunion()\n{\n    box a = {0, 0, 1, 1};\n    box dxa= {0+.0001, 0, 1, 1};\n    box dya= {0, 0+.0001, 1, 1};\n    box dwa= {0, 0, 1+.0001, 1};\n    box dha= {0, 0, 1, 1+.0001};\n\n    box b = {.5, .5, .2, .2};\n    dbox di = dunion(a,b);\n    printf(\"Union: %f %f %f %f\\n\", di.dx, di.dy, di.dw, di.dh);\n    float inter =  box_union(a, b);\n    float xinter = box_union(dxa, b);\n    float yinter = box_union(dya, b);\n    float winter = box_union(dwa, b);\n    float hinter = box_union(dha, b);\n    xinter = (xinter - inter)/(.0001);\n    yinter = (yinter - inter)/(.0001);\n    winter = (winter - inter)/(.0001);\n    hinter = (hinter - inter)/(.0001);\n    printf(\"Union Manual %f %f %f %f\\n\", xinter, yinter, winter, hinter);\n}\nvoid test_dintersect()\n{\n    box a = {0, 0, 1, 1};\n    box dxa= {0+.0001, 0, 1, 1};\n    box dya= {0, 
0+.0001, 1, 1};\n    box dwa= {0, 0, 1+.0001, 1};\n    box dha= {0, 0, 1, 1+.0001};\n\n    box b = {.5, .5, .2, .2};\n    dbox di = dintersect(a,b);\n    printf(\"Inter: %f %f %f %f\\n\", di.dx, di.dy, di.dw, di.dh);\n    float inter =  box_intersection(a, b);\n    float xinter = box_intersection(dxa, b);\n    float yinter = box_intersection(dya, b);\n    float winter = box_intersection(dwa, b);\n    float hinter = box_intersection(dha, b);\n    xinter = (xinter - inter)/(.0001);\n    yinter = (yinter - inter)/(.0001);\n    winter = (winter - inter)/(.0001);\n    hinter = (hinter - inter)/(.0001);\n    printf(\"Inter Manual %f %f %f %f\\n\", xinter, yinter, winter, hinter);\n}\n\nvoid test_box()\n{\n    test_dintersect();\n    test_dunion();\n    box a = {0, 0, 1, 1};\n    box dxa= {0+.00001, 0, 1, 1};\n    box dya= {0, 0+.00001, 1, 1};\n    box dwa= {0, 0, 1+.00001, 1};\n    box dha= {0, 0, 1, 1+.00001};\n\n    box b = {.5, 0, .2, .2};\n\n    float iou = box_iou(a,b);\n    iou = (1-iou)*(1-iou);\n    printf(\"%f\\n\", iou);\n    dbox d = diou(a, b);\n    printf(\"%f %f %f %f\\n\", d.dx, d.dy, d.dw, d.dh);\n\n    float xiou = box_iou(dxa, b);\n    float yiou = box_iou(dya, b);\n    float wiou = box_iou(dwa, b);\n    float hiou = box_iou(dha, b);\n    xiou = ((1-xiou)*(1-xiou) - iou)/(.00001);\n    yiou = ((1-yiou)*(1-yiou) - iou)/(.00001);\n    wiou = ((1-wiou)*(1-wiou) - iou)/(.00001);\n    hiou = ((1-hiou)*(1-hiou) - iou)/(.00001);\n    printf(\"manual %f %f %f %f\\n\", xiou, yiou, wiou, hiou);\n}\n\ndbox diou(box a, box b)\n{\n    float u = box_union(a,b);\n    float i = box_intersection(a,b);\n    dbox di = dintersect(a,b);\n    dbox du = dunion(a,b);\n    dbox dd = {0,0,0,0};\n\n    if(i <= 0 || 1) {\n        dd.dx = b.x - a.x;\n        dd.dy = b.y - a.y;\n        dd.dw = b.w - a.w;\n        dd.dh = b.h - a.h;\n        return dd;\n    }\n\n    dd.dx = 2*pow((1-(i/u)),1)*(di.dx*u - du.dx*i)/(u*u);\n    dd.dy = 2*pow((1-(i/u)),1)*(di.dy*u - du.dy*i)/(u*u);\n    
dd.dw = 2*pow((1-(i/u)),1)*(di.dw*u - du.dw*i)/(u*u);\n    dd.dh = 2*pow((1-(i/u)),1)*(di.dh*u - du.dh*i)/(u*u);\n    return dd;\n}\n\ntypedef struct{\n    int index;\n    int class;\n    float **probs;\n} sortable_bbox;\n\nint nms_comparator(const void *pa, const void *pb)\n{\n    sortable_bbox a = *(sortable_bbox *)pa;\n    sortable_bbox b = *(sortable_bbox *)pb;\n    float diff = a.probs[a.index][b.class] - b.probs[b.index][b.class];\n    if(diff < 0) return 1;\n    else if(diff > 0) return -1;\n    return 0;\n}\n\nvoid do_nms_obj(box *boxes, float **probs, int total, int classes, float thresh)\n{\n    int i, j, k;\n    sortable_bbox *s = calloc(total, sizeof(sortable_bbox));\n\n    for(i = 0; i < total; ++i){\n        s[i].index = i;       \n        s[i].class = classes;\n        s[i].probs = probs;\n    }\n\n    qsort(s, total, sizeof(sortable_bbox), nms_comparator);\n    for(i = 0; i < total; ++i){\n        if(probs[s[i].index][classes] == 0) continue;\n        box a = boxes[s[i].index];\n        for(j = i+1; j < total; ++j){\n            box b = boxes[s[j].index];\n            if (box_iou(a, b) > thresh){\n                for(k = 0; k < classes+1; ++k){\n                    probs[s[j].index][k] = 0;\n                }\n            }\n        }\n    }\n    free(s);\n}\n\n\nvoid do_nms_sort(box *boxes, float **probs, int total, int classes, float thresh)\n{\n    int i, j, k;\n    sortable_bbox *s = calloc(total, sizeof(sortable_bbox));\n\n    for(i = 0; i < total; ++i){\n        s[i].index = i;       \n        s[i].class = 0;\n        s[i].probs = probs;\n    }\n\n    for(k = 0; k < classes; ++k){\n        for(i = 0; i < total; ++i){\n            s[i].class = k;\n        }\n        qsort(s, total, sizeof(sortable_bbox), nms_comparator);\n        for(i = 0; i < total; ++i){\n            if(probs[s[i].index][k] == 0) continue;\n            box a = boxes[s[i].index];\n            for(j = i+1; j < total; ++j){\n                box b = boxes[s[j].index];\n          
      if (box_iou(a, b) > thresh){\n                    probs[s[j].index][k] = 0;\n                }\n            }\n        }\n    }\n    free(s);\n}\n\nvoid do_nms(box *boxes, float **probs, int total, int classes, float thresh)\n{\n    int i, j, k;\n    for(i = 0; i < total; ++i){\n        int any = 0;\n        for(k = 0; k < classes; ++k) any = any || (probs[i][k] > 0);\n        if(!any) {\n            continue;\n        }\n        for(j = i+1; j < total; ++j){\n            if (box_iou(boxes[i], boxes[j]) > thresh){\n                for(k = 0; k < classes; ++k){\n                    if (probs[i][k] < probs[j][k]) probs[i][k] = 0;\n                    else probs[j][k] = 0;\n                }\n            }\n        }\n    }\n}\n\nbox encode_box(box b, box anchor)\n{\n    box encode;\n    encode.x = (b.x - anchor.x) / anchor.w;\n    encode.y = (b.y - anchor.y) / anchor.h;\n    encode.w = log2(b.w / anchor.w);\n    encode.h = log2(b.h / anchor.h);\n    return encode;\n}\n\nbox decode_box(box b, box anchor)\n{\n    box decode;\n    decode.x = b.x * anchor.w + anchor.x;\n    decode.y = b.y * anchor.h + anchor.y;\n    decode.w = pow(2., b.w) * anchor.w;\n    decode.h = pow(2., b.h) * anchor.h;\n    return decode;\n}\n"
  },
  {
    "path": "lightnet/_darknet/box.h",
    "content": "#ifndef BOX_H\n#define BOX_H\n#include \"darknet.h\"\n\ntypedef struct{\n    float dx, dy, dw, dh;\n} dbox;\n\nfloat box_rmse(box a, box b);\ndbox diou(box a, box b);\nbox decode_box(box b, box anchor);\nbox encode_box(box b, box anchor);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/classifier.h",
    "content": "\n"
  },
  {
    "path": "lightnet/_darknet/col2im.c",
    "content": "#include <stdio.h>\n#include <math.h>\nvoid col2im_add_pixel(float *im, int height, int width, int channels,\n                        int row, int col, int channel, int pad, float val)\n{\n    row -= pad;\n    col -= pad;\n\n    if (row < 0 || col < 0 ||\n        row >= height || col >= width) return;\n    im[col + width*(row + height*channel)] += val;\n}\n//This one might be too, can't remember.\nvoid col2im_cpu(float* data_col,\n         int channels,  int height,  int width,\n         int ksize,  int stride, int pad, float* data_im) \n{\n    int c,h,w;\n    int height_col = (height + 2*pad - ksize) / stride + 1;\n    int width_col = (width + 2*pad - ksize) / stride + 1;\n\n    int channels_col = channels * ksize * ksize;\n    for (c = 0; c < channels_col; ++c) {\n        int w_offset = c % ksize;\n        int h_offset = (c / ksize) % ksize;\n        int c_im = c / ksize / ksize;\n        for (h = 0; h < height_col; ++h) {\n            for (w = 0; w < width_col; ++w) {\n                int im_row = h_offset + h * stride;\n                int im_col = w_offset + w * stride;\n                int col_index = (c * height_col + h) * width_col + w;\n                double val = data_col[col_index];\n                col2im_add_pixel(data_im, height, width, channels,\n                        im_row, im_col, c_im, pad, val);\n            }\n        }\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/col2im.h",
    "content": "#ifndef COL2IM_H\n#define COL2IM_H\n\nvoid col2im_cpu(float* data_col,\n        int channels, int height, int width,\n        int ksize, int stride, int pad, float* data_im);\n\n#ifdef GPU\nvoid col2im_gpu(float *data_col,\n        int channels, int height, int width,\n        int ksize, int stride, int pad, float *data_im);\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/col2im_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"col2im.h\"\n#include \"cuda.h\"\n}\n\n// src: https://github.com/BVLC/caffe/blob/master/src/caffe/util/im2col.cu\n// You may also want to read: https://github.com/BVLC/caffe/blob/master/LICENSE\n\n__global__ void col2im_gpu_kernel(const int n, const float* data_col,\n        const int height, const int width, const int ksize,\n        const int pad,\n        const int stride,\n        const int height_col, const int width_col,\n        float *data_im) {\n    int index = blockIdx.x*blockDim.x+threadIdx.x;\n    for(; index < n; index += blockDim.x*gridDim.x){\n        float val = 0;\n        int w = index % width + pad;\n        int h = (index / width) % height + pad;\n        int c = index / (width * height);\n        // compute the start and end of the output\n        int w_col_start = (w < ksize) ? 0 : (w - ksize) / stride + 1;\n        int w_col_end = min(w / stride + 1, width_col);\n        int h_col_start = (h < ksize) ? 
0 : (h - ksize) / stride + 1;\n        int h_col_end = min(h / stride + 1, height_col);\n        // equivalent implementation\n        int offset =\n            (c * ksize * ksize + h * ksize + w) * height_col * width_col;\n        int coeff_h_col = (1 - stride * ksize * height_col) * width_col;\n        int coeff_w_col = (1 - stride * height_col * width_col);\n        for (int h_col = h_col_start; h_col < h_col_end; ++h_col) {\n            for (int w_col = w_col_start; w_col < w_col_end; ++w_col) {\n                val += data_col[offset + h_col * coeff_h_col + w_col * coeff_w_col];\n            }\n        }\n        data_im[index] += val;\n    }\n}\n\nvoid col2im_gpu(float *data_col,\n        int channels, int height, int width,\n        int ksize, int stride, int pad, float *data_im){\n    // We are going to launch channels * height_col * width_col kernels, each\n    // kernel responsible for copying a single-channel grid.\n    int height_col = (height + 2 * pad - ksize) / stride + 1;\n    int width_col = (width + 2 * pad - ksize) / stride + 1;\n    int num_kernels = channels * height * width;\n    col2im_gpu_kernel<<<(num_kernels+BLOCK-1)/BLOCK,\n        BLOCK>>>(\n                num_kernels, data_col, height, width, ksize, pad,\n                stride, height_col,\n                width_col, data_im);\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/connected_layer.c",
    "content": "#include \"connected_layer.h\"\n#include \"convolutional_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nlayer make_connected_layer(int batch, int inputs, int outputs, ACTIVATION activation, int batch_normalize, int adam)\n{\n    int i;\n    layer l = {0};\n    l.learning_rate_scale = 1;\n    l.type = CONNECTED;\n\n    l.inputs = inputs;\n    l.outputs = outputs;\n    l.batch=batch;\n    l.batch_normalize = batch_normalize;\n    l.h = 1;\n    l.w = 1;\n    l.c = inputs;\n    l.out_h = 1;\n    l.out_w = 1;\n    l.out_c = outputs;\n\n    l.output = calloc(batch*outputs, sizeof(float));\n    l.delta = calloc(batch*outputs, sizeof(float));\n\n    l.weight_updates = calloc(inputs*outputs, sizeof(float));\n    l.bias_updates = calloc(outputs, sizeof(float));\n\n    l.weights = calloc(outputs*inputs, sizeof(float));\n    l.biases = calloc(outputs, sizeof(float));\n\n    l.forward = forward_connected_layer;\n    l.backward = backward_connected_layer;\n    l.update = update_connected_layer;\n\n    //float scale = 1./sqrt(inputs);\n    float scale = sqrt(2./inputs);\n    for(i = 0; i < outputs*inputs; ++i){\n        l.weights[i] = scale*rand_uniform(-1, 1);\n    }\n\n    for(i = 0; i < outputs; ++i){\n        l.biases[i] = 0;\n    }\n\n    if(adam){\n        l.m = calloc(l.inputs*l.outputs, sizeof(float));\n        l.v = calloc(l.inputs*l.outputs, sizeof(float));\n        l.bias_m = calloc(l.outputs, sizeof(float));\n        l.scale_m = calloc(l.outputs, sizeof(float));\n        l.bias_v = calloc(l.outputs, sizeof(float));\n        l.scale_v = calloc(l.outputs, sizeof(float));\n    }\n    if(batch_normalize){\n        l.scales = calloc(outputs, sizeof(float));\n        l.scale_updates = calloc(outputs, sizeof(float));\n        for(i = 0; i < outputs; ++i){\n            l.scales[i] = 1;\n        
}\n\n        l.mean = calloc(outputs, sizeof(float));\n        l.mean_delta = calloc(outputs, sizeof(float));\n        l.variance = calloc(outputs, sizeof(float));\n        l.variance_delta = calloc(outputs, sizeof(float));\n\n        l.rolling_mean = calloc(outputs, sizeof(float));\n        l.rolling_variance = calloc(outputs, sizeof(float));\n\n        l.x = calloc(batch*outputs, sizeof(float));\n        l.x_norm = calloc(batch*outputs, sizeof(float));\n    }\n\n#ifdef GPU\n    l.forward_gpu = forward_connected_layer_gpu;\n    l.backward_gpu = backward_connected_layer_gpu;\n    l.update_gpu = update_connected_layer_gpu;\n\n    l.weights_gpu = cuda_make_array(l.weights, outputs*inputs);\n    l.biases_gpu = cuda_make_array(l.biases, outputs);\n\n    l.weight_updates_gpu = cuda_make_array(l.weight_updates, outputs*inputs);\n    l.bias_updates_gpu = cuda_make_array(l.bias_updates, outputs);\n\n    l.output_gpu = cuda_make_array(l.output, outputs*batch);\n    l.delta_gpu = cuda_make_array(l.delta, outputs*batch);\n    if (adam) {\n        l.m_gpu =       cuda_make_array(0, inputs*outputs);\n        l.v_gpu =       cuda_make_array(0, inputs*outputs);\n        l.bias_m_gpu =  cuda_make_array(0, outputs);\n        l.bias_v_gpu =  cuda_make_array(0, outputs);\n        l.scale_m_gpu = cuda_make_array(0, outputs);\n        l.scale_v_gpu = cuda_make_array(0, outputs);\n    }\n\n    if(batch_normalize){\n        l.mean_gpu = cuda_make_array(l.mean, outputs);\n        l.variance_gpu = cuda_make_array(l.variance, outputs);\n\n        l.rolling_mean_gpu = cuda_make_array(l.mean, outputs);\n        l.rolling_variance_gpu = cuda_make_array(l.variance, outputs);\n\n        l.mean_delta_gpu = cuda_make_array(l.mean, outputs);\n        l.variance_delta_gpu = cuda_make_array(l.variance, outputs);\n\n        l.scales_gpu = cuda_make_array(l.scales, outputs);\n        l.scale_updates_gpu = cuda_make_array(l.scale_updates, outputs);\n\n        l.x_gpu = cuda_make_array(l.output, 
l.batch*outputs);\n        l.x_norm_gpu = cuda_make_array(l.output, l.batch*outputs);\n#ifdef CUDNN\n        cudnnCreateTensorDescriptor(&l.normTensorDesc);\n        cudnnCreateTensorDescriptor(&l.dstTensorDesc);\n        cudnnSetTensor4dDescriptor(l.dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l.batch, l.out_c, l.out_h, l.out_w); \n        cudnnSetTensor4dDescriptor(l.normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l.out_c, 1, 1); \n#endif\n    }\n#endif\n    l.activation = activation;\n    fprintf(stderr, \"connected                            %4d  ->  %4d\\n\", inputs, outputs);\n    return l;\n}\n\nvoid update_connected_layer(layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n    axpy_cpu(l.outputs, learning_rate/batch, l.bias_updates, 1, l.biases, 1);\n    scal_cpu(l.outputs, momentum, l.bias_updates, 1);\n\n    if(l.batch_normalize){\n        axpy_cpu(l.outputs, learning_rate/batch, l.scale_updates, 1, l.scales, 1);\n        scal_cpu(l.outputs, momentum, l.scale_updates, 1);\n    }\n\n    axpy_cpu(l.inputs*l.outputs, -decay*batch, l.weights, 1, l.weight_updates, 1);\n    axpy_cpu(l.inputs*l.outputs, learning_rate/batch, l.weight_updates, 1, l.weights, 1);\n    scal_cpu(l.inputs*l.outputs, momentum, l.weight_updates, 1);\n}\n\nvoid forward_connected_layer(layer l, network net)\n{\n    fill_cpu(l.outputs*l.batch, 0, l.output, 1);\n    int m = l.batch;\n    int k = l.inputs;\n    int n = l.outputs;\n    float *a = net.input;\n    float *b = l.weights;\n    float *c = l.output;\n    gemm(0,1,m,n,k,1,a,k,b,k,1,c,n);\n    if(l.batch_normalize){\n        forward_batchnorm_layer(l, net);\n    } else {\n        add_bias(l.output, l.biases, l.batch, l.outputs, 1);\n    }\n    activate_array(l.output, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_connected_layer(layer l, network net)\n{\n    gradient_array(l.output, 
l.outputs*l.batch, l.activation, l.delta);\n\n    if(l.batch_normalize){\n        backward_batchnorm_layer(l, net);\n    } else {\n        backward_bias(l.bias_updates, l.delta, l.batch, l.outputs, 1);\n    }\n\n    int m = l.outputs;\n    int k = l.batch;\n    int n = l.inputs;\n    float *a = l.delta;\n    float *b = net.input;\n    float *c = l.weight_updates;\n    gemm(1,0,m,n,k,1,a,m,b,n,1,c,n);\n\n    m = l.batch;\n    k = l.outputs;\n    n = l.inputs;\n\n    a = l.delta;\n    b = l.weights;\n    c = net.delta;\n\n    if(c) gemm(0,0,m,n,k,1,a,k,b,n,1,c,n);\n}\n\n\nvoid denormalize_connected_layer(layer l)\n{\n    int i, j;\n    for(i = 0; i < l.outputs; ++i){\n        float scale = l.scales[i]/sqrt(l.rolling_variance[i] + .000001);\n        for(j = 0; j < l.inputs; ++j){\n            l.weights[i*l.inputs + j] *= scale;\n        }\n        l.biases[i] -= l.rolling_mean[i] * scale;\n        l.scales[i] = 1;\n        l.rolling_mean[i] = 0;\n        l.rolling_variance[i] = 1;\n    }\n}\n\n\nvoid statistics_connected_layer(layer l)\n{\n    if(l.batch_normalize){\n        printf(\"Scales \");\n        print_statistics(l.scales, l.outputs);\n        /*\n           printf(\"Rolling Mean \");\n           print_statistics(l.rolling_mean, l.outputs);\n           printf(\"Rolling Variance \");\n           print_statistics(l.rolling_variance, l.outputs);\n         */\n    }\n    printf(\"Biases \");\n    print_statistics(l.biases, l.outputs);\n    printf(\"Weights \");\n    print_statistics(l.weights, l.outputs);\n}\n\n#ifdef GPU\n\nvoid pull_connected_layer(layer l)\n{\n    cuda_pull_array(l.weights_gpu, l.weights, l.inputs*l.outputs);\n    cuda_pull_array(l.biases_gpu, l.biases, l.outputs);\n    cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.inputs*l.outputs);\n    cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.outputs);\n    if (l.batch_normalize){\n        cuda_pull_array(l.scales_gpu, l.scales, l.outputs);\n        
cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.outputs);\n        cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.outputs);\n    }\n}\n\nvoid push_connected_layer(layer l)\n{\n    cuda_push_array(l.weights_gpu, l.weights, l.inputs*l.outputs);\n    cuda_push_array(l.biases_gpu, l.biases, l.outputs);\n    cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.inputs*l.outputs);\n    cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.outputs);\n    if (l.batch_normalize){\n        cuda_push_array(l.scales_gpu, l.scales, l.outputs);\n        cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.outputs);\n        cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.outputs);\n    }\n}\n\nvoid update_connected_layer_gpu(layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n    if(a.adam){\n        adam_update_gpu(l.weights_gpu, l.weight_updates_gpu, l.m_gpu, l.v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.inputs*l.outputs, batch, a.t);\n        adam_update_gpu(l.biases_gpu, l.bias_updates_gpu, l.bias_m_gpu, l.bias_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.outputs, batch, a.t);\n        if(l.scales_gpu){\n            adam_update_gpu(l.scales_gpu, l.scale_updates_gpu, l.scale_m_gpu, l.scale_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.outputs, batch, a.t);\n        }\n    }else{\n        axpy_gpu(l.outputs, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);\n        scal_gpu(l.outputs, momentum, l.bias_updates_gpu, 1);\n\n        if(l.batch_normalize){\n            axpy_gpu(l.outputs, learning_rate/batch, l.scale_updates_gpu, 1, l.scales_gpu, 1);\n            scal_gpu(l.outputs, momentum, l.scale_updates_gpu, 1);\n        }\n\n        axpy_gpu(l.inputs*l.outputs, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);\n        axpy_gpu(l.inputs*l.outputs, learning_rate/batch, 
l.weight_updates_gpu, 1, l.weights_gpu, 1);\n        scal_gpu(l.inputs*l.outputs, momentum, l.weight_updates_gpu, 1);\n    }\n}\n\nvoid forward_connected_layer_gpu(layer l, network net)\n{\n    fill_gpu(l.outputs*l.batch, 0, l.output_gpu, 1);\n\n    int m = l.batch;\n    int k = l.inputs;\n    int n = l.outputs;\n    float * a = net.input_gpu;\n    float * b = l.weights_gpu;\n    float * c = l.output_gpu;\n    gemm_gpu(0,1,m,n,k,1,a,k,b,k,1,c,n);\n\n    if (l.batch_normalize) {\n        forward_batchnorm_layer_gpu(l, net);\n    } else {\n        add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.outputs, 1);\n    }\n    activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_connected_layer_gpu(layer l, network net)\n{\n    constrain_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1);\n    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);\n    if(l.batch_normalize){\n        backward_batchnorm_layer_gpu(l, net);\n    } else {\n        backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.outputs, 1);\n    }\n\n    int m = l.outputs;\n    int k = l.batch;\n    int n = l.inputs;\n    float * a = l.delta_gpu;\n    float * b = net.input_gpu;\n    float * c = l.weight_updates_gpu;\n    gemm_gpu(1,0,m,n,k,1,a,m,b,n,1,c,n);\n\n    m = l.batch;\n    k = l.outputs;\n    n = l.inputs;\n\n    a = l.delta_gpu;\n    b = l.weights_gpu;\n    c = net.delta_gpu;\n\n    if(c) gemm_gpu(0,0,m,n,k,1,a,k,b,n,1,c,n);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/connected_layer.h",
    "content": "#ifndef CONNECTED_LAYER_H\n#define CONNECTED_LAYER_H\n\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_connected_layer(int batch, int inputs, int outputs, ACTIVATION activation, int batch_normalize, int adam);\n\nvoid forward_connected_layer(layer l, network net);\nvoid backward_connected_layer(layer l, network net);\nvoid update_connected_layer(layer l, update_args a);\n\n#ifdef GPU\nvoid forward_connected_layer_gpu(layer l, network net);\nvoid backward_connected_layer_gpu(layer l, network net);\nvoid update_connected_layer_gpu(layer l, update_args a);\nvoid push_connected_layer(layer l);\nvoid pull_connected_layer(layer l);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/convolutional_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"convolutional_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"gemm.h\"\n#include \"blas.h\"\n#include \"im2col.h\"\n#include \"col2im.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n}\n\n__global__ void binarize_kernel(float *x, int n, float *binary)\n{\n    int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (i >= n) return;\n    binary[i] = (x[i] >= 0) ? 1 : -1;\n}\n\nvoid binarize_gpu(float *x, int n, float *binary)\n{\n    binarize_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, binary);\n    check_error(cudaPeekAtLastError());\n}\n\n__global__ void binarize_input_kernel(float *input, int n, int size, float *binary)\n{\n    int s = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (s >= size) return;\n    int i = 0;\n    float mean = 0;\n    for(i = 0; i < n; ++i){\n        mean += fabsf(input[i*size + s]);\n    }\n    mean = mean / n;\n    for(i = 0; i < n; ++i){\n        binary[i*size + s] = (input[i*size + s] > 0) ? mean : -mean;\n    }\n}\n\nvoid binarize_input_gpu(float *input, int n, int size, float *binary)\n{\n    binarize_input_kernel<<<cuda_gridsize(size), BLOCK>>>(input, n, size, binary);\n    check_error(cudaPeekAtLastError());\n}\n\n\n__global__ void binarize_weights_kernel(float *weights, int n, int size, float *binary)\n{\n    int f = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if (f >= n) return;\n    int i = 0;\n    float mean = 0;\n    for(i = 0; i < size; ++i){\n        mean += fabsf(weights[f*size + i]);\n    }\n    mean = mean / size;\n    for(i = 0; i < size; ++i){\n        binary[f*size + i] = (weights[f*size + i] > 0) ? 
mean : -mean;\n        //binary[f*size + i] = weights[f*size + i];\n    }\n}\n\nvoid binarize_weights_gpu(float *weights, int n, int size, float *binary)\n{\n    binarize_weights_kernel<<<cuda_gridsize(n), BLOCK>>>(weights, n, size, binary);\n    check_error(cudaPeekAtLastError());\n}\n\nvoid forward_convolutional_layer_gpu(convolutional_layer l, network net)\n{\n    fill_gpu(l.outputs*l.batch, 0, l.output_gpu, 1);\n    if(l.binary){\n        binarize_weights_gpu(l.weights_gpu, l.n, l.c/l.groups*l.size*l.size, l.binary_weights_gpu);\n        swap_binary(&l);\n    }\n\n    if(l.xnor){\n        binarize_weights_gpu(l.weights_gpu, l.n, l.c/l.groups*l.size*l.size, l.binary_weights_gpu);\n        swap_binary(&l);\n        binarize_gpu(net.input_gpu, l.c*l.h*l.w*l.batch, l.binary_input_gpu);\n        net.input_gpu = l.binary_input_gpu;\n    }\n\n#ifdef CUDNN\n    float one = 1;\n    cudnnConvolutionForward(cudnn_handle(),\n                &one,\n                l.srcTensorDesc,\n                net.input_gpu,\n                l.weightDesc,\n                l.weights_gpu,\n                l.convDesc,\n                l.fw_algo,\n                net.workspace,\n                l.workspace_size,\n                &one,\n                l.dstTensorDesc,\n                l.output_gpu);\n\n#else\n    int i, j;\n    int m = l.n/l.groups;\n    int k = l.size*l.size*l.c/l.groups;\n    int n = l.out_w*l.out_h;\n    for(i = 0; i < l.batch; ++i){\n        for(j = 0; j < l.groups; ++j){\n            float *a = l.weights_gpu + j*l.nweights/l.groups;\n            float *b = net.workspace;\n            float *c = l.output_gpu + (i*l.groups + j)*n*m;\n\n            im2col_gpu(net.input_gpu + (i*l.groups + j)*l.c/l.groups*l.h*l.w,\n                l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, b);\n            gemm_gpu(0,0,m,n,k,1,a,k,b,n,1,c,n);\n        }\n    }\n#endif\n\n    if (l.batch_normalize) {\n        forward_batchnorm_layer_gpu(l, net);\n    } else {\n        
add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.n, l.out_w*l.out_h);\n    }\n\n    activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);\n    //if(l.dot > 0) dot_error_gpu(l);\n    if(l.binary || l.xnor) swap_binary(&l);\n}\n\n__global__ void smooth_kernel(float *x, int n, int w, int h, int c, int size, float rate, float *delta)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= n) return;\n\n    int j = id % w;\n    id /= w;\n    int i = id % h;\n    id /= h;\n    int k = id % c;\n    id /= c;\n    int b = id;\n\n    int w_offset = -(size/2.f);\n    int h_offset = -(size/2.f);\n\n    int out_index = j + w*(i + h*(k + c*b));\n    int l, m;\n    for(l = 0; l < size; ++l){\n        for(m = 0; m < size; ++m){\n            int cur_h = h_offset + i + l;\n            int cur_w = w_offset + j + m;\n            int index = cur_w + w*(cur_h + h*(k + b*c));\n            int valid = (cur_h >= 0 && cur_h < h &&\n                    cur_w >= 0 && cur_w < w);\n            delta[out_index] += valid ? 
rate*(x[index] - x[out_index]) : 0;\n        }\n    }\n}\n\nextern \"C\" void smooth_layer(layer l, int size, float rate)\n{\n    int h = l.out_h;\n    int w = l.out_w;\n    int c = l.out_c;\n\n    size_t n = h*w*c*l.batch;\n\n    smooth_kernel<<<cuda_gridsize(n), BLOCK>>>(l.output_gpu, n, l.w, l.h, l.c, size, rate, l.delta_gpu);\n    check_error(cudaPeekAtLastError());\n}\n\nvoid backward_convolutional_layer_gpu(convolutional_layer l, network net)\n{\n    if(l.smooth){\n        smooth_layer(l, 5, l.smooth);\n    }\n    constrain_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1);\n    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);\n\n\n    if(l.batch_normalize){\n        backward_batchnorm_layer_gpu(l, net);\n    } else {\n        backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.n, l.out_w*l.out_h);\n    }\n    float *original_input = net.input_gpu;\n\n    if(l.xnor) net.input_gpu = l.binary_input_gpu;\n#ifdef CUDNN\n    float one = 1;\n    cudnnConvolutionBackwardFilter(cudnn_handle(),\n            &one,\n            l.srcTensorDesc,\n            net.input_gpu,\n            l.ddstTensorDesc,\n            l.delta_gpu,\n            l.convDesc,\n            l.bf_algo,\n            net.workspace,\n            l.workspace_size,\n            &one,\n            l.dweightDesc,\n            l.weight_updates_gpu);\n\n    if(net.delta_gpu){\n        if(l.binary || l.xnor) swap_binary(&l);\n        cudnnConvolutionBackwardData(cudnn_handle(),\n                &one,\n                l.weightDesc,\n                l.weights_gpu,\n                l.ddstTensorDesc,\n                l.delta_gpu,\n                l.convDesc,\n                l.bd_algo,\n                net.workspace,\n                l.workspace_size,\n                &one,\n                l.dsrcTensorDesc,\n                net.delta_gpu);\n        if(l.binary || l.xnor) swap_binary(&l);\n        if(l.xnor) gradient_array_gpu(original_input, l.batch*l.c*l.h*l.w, HARDTAN, 
net.delta_gpu);\n    }\n\n#else\n    int m = l.n/l.groups;\n    int n = l.size*l.size*l.c/l.groups;\n    int k = l.out_w*l.out_h;\n\n    int i, j;\n    for(i = 0; i < l.batch; ++i){\n        for(j = 0; j < l.groups; ++j){\n            float *a = l.delta_gpu + (i*l.groups + j)*m*k;\n            float *b = net.workspace;\n            float *c = l.weight_updates_gpu + j*l.nweights/l.groups;\n\n            float *im = net.input_gpu+(i*l.groups + j)*l.c/l.groups*l.h*l.w;\n\n            im2col_gpu(im, l.c/l.groups, l.h, l.w,\n                    l.size, l.stride, l.pad, b);\n            gemm_gpu(0,1,m,n,k,1,a,k,b,k,1,c,n);\n\n            if(net.delta_gpu){\n                if(l.binary || l.xnor) swap_binary(&l);\n                a = l.weights_gpu + j*l.nweights/l.groups;\n                b = l.delta_gpu + (i*l.groups + j)*m*k;\n                c = net.workspace;\n\n                gemm_gpu(1,0,n,k,m,1,a,n,b,k,0,c,k);\n\n                col2im_gpu(net.workspace, l.c/l.groups, l.h, l.w, l.size, l.stride, \n                    l.pad, net.delta_gpu + (i*l.groups + j)*l.c/l.groups*l.h*l.w);\n                if(l.binary || l.xnor) {\n                    swap_binary(&l);\n                }\n            }\n            if(l.xnor) gradient_array_gpu(original_input + i*l.c*l.h*l.w, l.c*l.h*l.w, HARDTAN, net.delta_gpu + i*l.c*l.h*l.w);\n        }\n    }\n#endif\n}\n\nvoid pull_convolutional_layer(layer l)\n{\n    cuda_pull_array(l.weights_gpu, l.weights, l.nweights);\n    cuda_pull_array(l.biases_gpu, l.biases, l.n);\n    cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.nweights);\n    cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.n);\n    if (l.batch_normalize){\n        cuda_pull_array(l.scales_gpu, l.scales, l.n);\n        cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.n);\n        cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.n);\n    }\n}\n\nvoid push_convolutional_layer(layer l)\n{\n    cuda_push_array(l.weights_gpu, l.weights, 
l.nweights);\n    cuda_push_array(l.biases_gpu, l.biases, l.n);\n    cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.nweights);\n    cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.n);\n    if (l.batch_normalize){\n        cuda_push_array(l.scales_gpu, l.scales, l.n);\n        cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.n);\n        cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.n);\n    }\n}\n\nvoid update_convolutional_layer_gpu(layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n\n    if(a.adam){\n        adam_update_gpu(l.weights_gpu, l.weight_updates_gpu, l.m_gpu, l.v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.nweights, batch, a.t);\n        adam_update_gpu(l.biases_gpu, l.bias_updates_gpu, l.bias_m_gpu, l.bias_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);\n        if(l.scales_gpu){\n            adam_update_gpu(l.scales_gpu, l.scale_updates_gpu, l.scale_m_gpu, l.scale_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);\n        }\n    }else{\n        axpy_gpu(l.nweights, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);\n        axpy_gpu(l.nweights, learning_rate/batch, l.weight_updates_gpu, 1, l.weights_gpu, 1);\n        scal_gpu(l.nweights, momentum, l.weight_updates_gpu, 1);\n\n        axpy_gpu(l.n, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);\n        scal_gpu(l.n, momentum, l.bias_updates_gpu, 1);\n\n        if(l.scales_gpu){\n            axpy_gpu(l.n, learning_rate/batch, l.scale_updates_gpu, 1, l.scales_gpu, 1);\n            scal_gpu(l.n, momentum, l.scale_updates_gpu, 1);\n        }\n    }\n}\n\n\n"
  },
  {
    "path": "lightnet/_darknet/convolutional_layer.c",
    "content": "#include \"convolutional_layer.h\"\n#include \"utils.h\"\n#include \"batchnorm_layer.h\"\n#include \"im2col.h\"\n#include \"col2im.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n#include <stdio.h>\n#include <time.h>\n\n#ifdef AI2\n#include \"xnor_layer.h\"\n#endif\n\nvoid swap_binary(convolutional_layer *l)\n{\n    float *swap = l->weights;\n    l->weights = l->binary_weights;\n    l->binary_weights = swap;\n\n#ifdef GPU\n    swap = l->weights_gpu;\n    l->weights_gpu = l->binary_weights_gpu;\n    l->binary_weights_gpu = swap;\n#endif\n}\n\nvoid binarize_weights(float *weights, int n, int size, float *binary)\n{\n    int i, f;\n    for(f = 0; f < n; ++f){\n        float mean = 0;\n        for(i = 0; i < size; ++i){\n            mean += fabs(weights[f*size + i]);\n        }\n        mean = mean / size;\n        for(i = 0; i < size; ++i){\n            binary[f*size + i] = (weights[f*size + i] > 0) ? mean : -mean;\n        }\n    }\n}\n\nvoid binarize_cpu(float *input, int n, float *binary)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        binary[i] = (input[i] > 0) ? 1 : -1;\n    }\n}\n\nvoid binarize_input(float *input, int n, int size, float *binary)\n{\n    int i, s;\n    for(s = 0; s < size; ++s){\n        float mean = 0;\n        for(i = 0; i < n; ++i){\n            mean += fabs(input[i*size + s]);\n        }\n        mean = mean / n;\n        for(i = 0; i < n; ++i){\n            binary[i*size + s] = (input[i*size + s] > 0) ? 
mean : -mean;\n        }\n    }\n}\n\nint convolutional_out_height(convolutional_layer l)\n{\n    return (l.h + 2*l.pad - l.size) / l.stride + 1;\n}\n\nint convolutional_out_width(convolutional_layer l)\n{\n    return (l.w + 2*l.pad - l.size) / l.stride + 1;\n}\n\nimage get_convolutional_image(convolutional_layer l)\n{\n    return float_to_image(l.out_w,l.out_h,l.out_c,l.output);\n}\n\nimage get_convolutional_delta(convolutional_layer l)\n{\n    return float_to_image(l.out_w,l.out_h,l.out_c,l.delta);\n}\n\nstatic size_t get_workspace_size(layer l){\n#ifdef CUDNN\n    if(gpu_index >= 0){\n        size_t most = 0;\n        size_t s = 0;\n        cudnnGetConvolutionForwardWorkspaceSize(cudnn_handle(),\n                l.srcTensorDesc,\n                l.weightDesc,\n                l.convDesc,\n                l.dstTensorDesc,\n                l.fw_algo,\n                &s);\n        if (s > most) most = s;\n        cudnnGetConvolutionBackwardFilterWorkspaceSize(cudnn_handle(),\n                l.srcTensorDesc,\n                l.ddstTensorDesc,\n                l.convDesc,\n                l.dweightDesc,\n                l.bf_algo,\n                &s);\n        if (s > most) most = s;\n        cudnnGetConvolutionBackwardDataWorkspaceSize(cudnn_handle(),\n                l.weightDesc,\n                l.ddstTensorDesc,\n                l.convDesc,\n                l.dsrcTensorDesc,\n                l.bd_algo,\n                &s);\n        if (s > most) most = s;\n        return most;\n    }\n#endif\n    return (size_t)l.out_h*l.out_w*l.size*l.size*l.c/l.groups*sizeof(float);\n}\n\n#ifdef GPU\n#ifdef CUDNN\nvoid cudnn_convolutional_setup(layer *l)\n{\n    cudnnSetTensor4dDescriptor(l->dsrcTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->c, l->h, l->w); \n    cudnnSetTensor4dDescriptor(l->ddstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->out_c, l->out_h, l->out_w); \n\n    cudnnSetTensor4dDescriptor(l->srcTensorDesc, CUDNN_TENSOR_NCHW, 
CUDNN_DATA_FLOAT, l->batch, l->c, l->h, l->w); \n    cudnnSetTensor4dDescriptor(l->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->out_c, l->out_h, l->out_w); \n    cudnnSetTensor4dDescriptor(l->normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l->out_c, 1, 1); \n\n    cudnnSetFilter4dDescriptor(l->dweightDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW, l->n, l->c/l->groups, l->size, l->size); \n    cudnnSetFilter4dDescriptor(l->weightDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW, l->n, l->c/l->groups, l->size, l->size); \n    #if CUDNN_MAJOR >= 6\n    cudnnSetConvolution2dDescriptor(l->convDesc, l->pad, l->pad, l->stride, l->stride, 1, 1, CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);\n    #else\n    cudnnSetConvolution2dDescriptor(l->convDesc, l->pad, l->pad, l->stride, l->stride, 1, 1, CUDNN_CROSS_CORRELATION);\n    #endif\n\n    #if CUDNN_MAJOR >= 7\n    cudnnSetConvolutionGroupCount(l->convDesc, l->groups);\n    #else\n    if(l->groups > 1){\n        error(\"CUDNN < 7 doesn't support groups, please upgrade!\");\n    }\n    #endif\n\n    cudnnGetConvolutionForwardAlgorithm(cudnn_handle(),\n            l->srcTensorDesc,\n            l->weightDesc,\n            l->convDesc,\n            l->dstTensorDesc,\n            CUDNN_CONVOLUTION_FWD_PREFER_FASTEST,\n            0,\n            &l->fw_algo);\n    cudnnGetConvolutionBackwardDataAlgorithm(cudnn_handle(),\n            l->weightDesc,\n            l->ddstTensorDesc,\n            l->convDesc,\n            l->dsrcTensorDesc,\n            CUDNN_CONVOLUTION_BWD_DATA_PREFER_FASTEST,\n            0,\n            &l->bd_algo);\n    cudnnGetConvolutionBackwardFilterAlgorithm(cudnn_handle(),\n            l->srcTensorDesc,\n            l->ddstTensorDesc,\n            l->convDesc,\n            l->dweightDesc,\n            CUDNN_CONVOLUTION_BWD_FILTER_PREFER_FASTEST,\n            0,\n            &l->bf_algo);\n}\n#endif\n#endif\n\nconvolutional_layer make_convolutional_layer(int batch, int h, int w, int c, int n, 
int groups, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int binary, int xnor, int adam)\n{\n    int i;\n    convolutional_layer l = {0};\n    l.type = CONVOLUTIONAL;\n\n    l.groups = groups;\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.n = n;\n    l.binary = binary;\n    l.xnor = xnor;\n    l.batch = batch;\n    l.stride = stride;\n    l.size = size;\n    l.pad = padding;\n    l.batch_normalize = batch_normalize;\n\n    l.weights = calloc(c/groups*n*size*size, sizeof(float));\n    l.weight_updates = calloc(c/groups*n*size*size, sizeof(float));\n\n    l.biases = calloc(n, sizeof(float));\n    l.bias_updates = calloc(n, sizeof(float));\n\n    l.nweights = c/groups*n*size*size;\n    l.nbiases = n;\n\n    // float scale = 1./sqrt(size*size*c);\n    float scale = sqrt(2./(size*size*c/l.groups));\n    //scale = .02;\n    //for(i = 0; i < c*n*size*size; ++i) l.weights[i] = scale*rand_uniform(-1, 1);\n    for(i = 0; i < l.nweights; ++i) l.weights[i] = scale*rand_normal();\n    int out_w = convolutional_out_width(l);\n    int out_h = convolutional_out_height(l);\n    l.out_h = out_h;\n    l.out_w = out_w;\n    l.out_c = n;\n    l.outputs = l.out_h * l.out_w * l.out_c;\n    l.inputs = l.w * l.h * l.c;\n\n    l.output = calloc(l.batch*l.outputs, sizeof(float));\n    l.delta  = calloc(l.batch*l.outputs, sizeof(float));\n\n    l.forward = forward_convolutional_layer;\n    l.backward = backward_convolutional_layer;\n    l.update = update_convolutional_layer;\n    if(binary){\n        l.binary_weights = calloc(l.nweights, sizeof(float));\n        l.cweights = calloc(l.nweights, sizeof(char));\n        l.scales = calloc(n, sizeof(float));\n    }\n    if(xnor){\n        l.binary_weights = calloc(l.nweights, sizeof(float));\n        l.binary_input = calloc(l.inputs*l.batch, sizeof(float));\n    }\n\n    if(batch_normalize){\n        l.scales = calloc(n, sizeof(float));\n        l.scale_updates = calloc(n, sizeof(float));\n        for(i = 0; i 
< n; ++i){\n            l.scales[i] = 1;\n        }\n\n        l.mean = calloc(n, sizeof(float));\n        l.variance = calloc(n, sizeof(float));\n\n        l.mean_delta = calloc(n, sizeof(float));\n        l.variance_delta = calloc(n, sizeof(float));\n\n        l.rolling_mean = calloc(n, sizeof(float));\n        l.rolling_variance = calloc(n, sizeof(float));\n        l.x = calloc(l.batch*l.outputs, sizeof(float));\n        l.x_norm = calloc(l.batch*l.outputs, sizeof(float));\n    }\n    if(adam){\n        l.m = calloc(l.nweights, sizeof(float));\n        l.v = calloc(l.nweights, sizeof(float));\n        l.bias_m = calloc(n, sizeof(float));\n        l.scale_m = calloc(n, sizeof(float));\n        l.bias_v = calloc(n, sizeof(float));\n        l.scale_v = calloc(n, sizeof(float));\n    }\n\n#ifdef GPU\n    l.forward_gpu = forward_convolutional_layer_gpu;\n    l.backward_gpu = backward_convolutional_layer_gpu;\n    l.update_gpu = update_convolutional_layer_gpu;\n\n    if(gpu_index >= 0){\n        if (adam) {\n            l.m_gpu = cuda_make_array(l.m, l.nweights);\n            l.v_gpu = cuda_make_array(l.v, l.nweights);\n            l.bias_m_gpu = cuda_make_array(l.bias_m, n);\n            l.bias_v_gpu = cuda_make_array(l.bias_v, n);\n            l.scale_m_gpu = cuda_make_array(l.scale_m, n);\n            l.scale_v_gpu = cuda_make_array(l.scale_v, n);\n        }\n\n        l.weights_gpu = cuda_make_array(l.weights, l.nweights);\n        l.weight_updates_gpu = cuda_make_array(l.weight_updates, l.nweights);\n\n        l.biases_gpu = cuda_make_array(l.biases, n);\n        l.bias_updates_gpu = cuda_make_array(l.bias_updates, n);\n\n        l.delta_gpu = cuda_make_array(l.delta, l.batch*out_h*out_w*n);\n        l.output_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);\n\n        if(binary){\n            l.binary_weights_gpu = cuda_make_array(l.weights, l.nweights);\n        }\n        if(xnor){\n            l.binary_weights_gpu = cuda_make_array(l.weights, 
l.nweights);\n            l.binary_input_gpu = cuda_make_array(0, l.inputs*l.batch);\n        }\n\n        if(batch_normalize){\n            l.mean_gpu = cuda_make_array(l.mean, n);\n            l.variance_gpu = cuda_make_array(l.variance, n);\n\n            l.rolling_mean_gpu = cuda_make_array(l.mean, n);\n            l.rolling_variance_gpu = cuda_make_array(l.variance, n);\n\n            l.mean_delta_gpu = cuda_make_array(l.mean, n);\n            l.variance_delta_gpu = cuda_make_array(l.variance, n);\n\n            l.scales_gpu = cuda_make_array(l.scales, n);\n            l.scale_updates_gpu = cuda_make_array(l.scale_updates, n);\n\n            l.x_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);\n            l.x_norm_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);\n        }\n#ifdef CUDNN\n        cudnnCreateTensorDescriptor(&l.normTensorDesc);\n        cudnnCreateTensorDescriptor(&l.srcTensorDesc);\n        cudnnCreateTensorDescriptor(&l.dstTensorDesc);\n        cudnnCreateFilterDescriptor(&l.weightDesc);\n        cudnnCreateTensorDescriptor(&l.dsrcTensorDesc);\n        cudnnCreateTensorDescriptor(&l.ddstTensorDesc);\n        cudnnCreateFilterDescriptor(&l.dweightDesc);\n        cudnnCreateConvolutionDescriptor(&l.convDesc);\n        cudnn_convolutional_setup(&l);\n#endif\n    }\n#endif\n    l.workspace_size = get_workspace_size(l);\n    l.activation = activation;\n\n    //fprintf(stderr, \"conv  %5d %2d x%2d /%2d  %4d x%4d x%4d   ->  %4d x%4d x%4d\\n\", n, size, size, stride, w, h, c, l.out_w, l.out_h, l.out_c);\n\n    return l;\n}\n\nvoid denormalize_convolutional_layer(convolutional_layer l)\n{\n    int i, j;\n    for(i = 0; i < l.n; ++i){\n        float scale = l.scales[i]/sqrt(l.rolling_variance[i] + .00001);\n        for(j = 0; j < l.c/l.groups*l.size*l.size; ++j){\n            l.weights[i*l.c/l.groups*l.size*l.size + j] *= scale;\n        }\n        l.biases[i] -= l.rolling_mean[i] * scale;\n        l.scales[i] = 1;\n        
l.rolling_mean[i] = 0;\n        l.rolling_variance[i] = 1;\n    }\n}\n\n/*\nvoid test_convolutional_layer()\n{\n    convolutional_layer l = make_convolutional_layer(1, 5, 5, 3, 2, 5, 2, 1, LEAKY, 1, 0, 0, 0);\n    l.batch_normalize = 1;\n    float data[] = {1,1,1,1,1,\n        1,1,1,1,1,\n        1,1,1,1,1,\n        1,1,1,1,1,\n        1,1,1,1,1,\n        2,2,2,2,2,\n        2,2,2,2,2,\n        2,2,2,2,2,\n        2,2,2,2,2,\n        2,2,2,2,2,\n        3,3,3,3,3,\n        3,3,3,3,3,\n        3,3,3,3,3,\n        3,3,3,3,3,\n        3,3,3,3,3};\n    //net.input = data;\n    //forward_convolutional_layer(l);\n}\n*/\n\nvoid resize_convolutional_layer(convolutional_layer *l, int w, int h)\n{\n    l->w = w;\n    l->h = h;\n    int out_w = convolutional_out_width(*l);\n    int out_h = convolutional_out_height(*l);\n\n    l->out_w = out_w;\n    l->out_h = out_h;\n\n    l->outputs = l->out_h * l->out_w * l->out_c;\n    l->inputs = l->w * l->h * l->c;\n\n    l->output = realloc(l->output, l->batch*l->outputs*sizeof(float));\n    l->delta  = realloc(l->delta,  l->batch*l->outputs*sizeof(float));\n    if(l->batch_normalize){\n        l->x = realloc(l->x, l->batch*l->outputs*sizeof(float));\n        l->x_norm  = realloc(l->x_norm, l->batch*l->outputs*sizeof(float));\n    }\n\n#ifdef GPU\n    cuda_free(l->delta_gpu);\n    cuda_free(l->output_gpu);\n\n    l->delta_gpu =  cuda_make_array(l->delta,  l->batch*l->outputs);\n    l->output_gpu = cuda_make_array(l->output, l->batch*l->outputs);\n\n    if(l->batch_normalize){\n        cuda_free(l->x_gpu);\n        cuda_free(l->x_norm_gpu);\n\n        l->x_gpu = cuda_make_array(l->output, l->batch*l->outputs);\n        l->x_norm_gpu = cuda_make_array(l->output, l->batch*l->outputs);\n    }\n#ifdef CUDNN\n    cudnn_convolutional_setup(l);\n#endif\n#endif\n    l->workspace_size = get_workspace_size(*l);\n}\n\nvoid add_bias(float *output, float *biases, int batch, int n, int size)\n{\n    int i,j,b;\n    for(b = 0; b < batch; ++b){\n        
for(i = 0; i < n; ++i){\n            for(j = 0; j < size; ++j){\n                output[(b*n + i)*size + j] += biases[i];\n            }\n        }\n    }\n}\n\nvoid scale_bias(float *output, float *scales, int batch, int n, int size)\n{\n    int i,j,b;\n    for(b = 0; b < batch; ++b){\n        for(i = 0; i < n; ++i){\n            for(j = 0; j < size; ++j){\n                output[(b*n + i)*size + j] *= scales[i];\n            }\n        }\n    }\n}\n\nvoid backward_bias(float *bias_updates, float *delta, int batch, int n, int size)\n{\n    int i,b;\n    for(b = 0; b < batch; ++b){\n        for(i = 0; i < n; ++i){\n            bias_updates[i] += sum_array(delta+size*(i+b*n), size);\n        }\n    }\n}\n\nvoid forward_convolutional_layer(convolutional_layer l, network net)\n{\n    int i, j;\n\n    fill_cpu(l.outputs*l.batch, 0, l.output, 1);\n\n    if(l.xnor){\n        binarize_weights(l.weights, l.n, l.c/l.groups*l.size*l.size, l.binary_weights);\n        swap_binary(&l);\n        binarize_cpu(net.input, l.c*l.h*l.w*l.batch, l.binary_input);\n        net.input = l.binary_input;\n    }\n\n    int m = l.n/l.groups;\n    int k = l.size*l.size*l.c/l.groups;\n    int n = l.out_w*l.out_h;\n    for(i = 0; i < l.batch; ++i){\n        for(j = 0; j < l.groups; ++j){\n            float *a = l.weights + j*l.nweights/l.groups;\n            float *b = net.workspace;\n            float *c = l.output + (i*l.groups + j)*n*m;\n\n            im2col_cpu(net.input + (i*l.groups + j)*l.c/l.groups*l.h*l.w,\n                l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, b);\n            gemm(0,0,m,n,k,1,a,k,b,n,1,c,n);\n        }\n    }\n\n    if(l.batch_normalize){\n        forward_batchnorm_layer(l, net);\n    } else {\n        add_bias(l.output, l.biases, l.batch, l.n, l.out_h*l.out_w);\n    }\n\n    activate_array(l.output, l.outputs*l.batch, l.activation);\n    if(l.binary || l.xnor) swap_binary(&l);\n}\n\nvoid backward_convolutional_layer(convolutional_layer l, network net)\n{\n  
  int i, j;\n    int m = l.n/l.groups;\n    int n = l.size*l.size*l.c/l.groups;\n    int k = l.out_w*l.out_h;\n\n    gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);\n\n    if(l.batch_normalize){\n        backward_batchnorm_layer(l, net);\n    } else {\n        backward_bias(l.bias_updates, l.delta, l.batch, l.n, k);\n    }\n\n    for(i = 0; i < l.batch; ++i){\n        for(j = 0; j < l.groups; ++j){\n            float *a = l.delta + (i*l.groups + j)*m*k;\n            float *b = net.workspace;\n            float *c = l.weight_updates + j*l.nweights/l.groups;\n\n            float *im = net.input+(i*l.groups + j)*l.c/l.groups*l.h*l.w;\n\n            im2col_cpu(im, l.c/l.groups, l.h, l.w, \n                    l.size, l.stride, l.pad, b);\n            gemm(0,1,m,n,k,1,a,k,b,k,1,c,n);\n\n            if(net.delta){\n                a = l.weights + j*l.nweights/l.groups;\n                b = l.delta + (i*l.groups + j)*m*k;\n                c = net.workspace;\n\n                gemm(1,0,n,k,m,1,a,n,b,k,0,c,k);\n\n                col2im_cpu(net.workspace, l.c/l.groups, l.h, l.w, l.size, l.stride, \n                    l.pad, net.delta + (i*l.groups + j)*l.c/l.groups*l.h*l.w);\n            }\n        }\n    }\n}\n\nvoid update_convolutional_layer(convolutional_layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n\n    axpy_cpu(l.n, learning_rate/batch, l.bias_updates, 1, l.biases, 1);\n    scal_cpu(l.n, momentum, l.bias_updates, 1);\n\n    if(l.scales){\n        axpy_cpu(l.n, learning_rate/batch, l.scale_updates, 1, l.scales, 1);\n        scal_cpu(l.n, momentum, l.scale_updates, 1);\n    }\n\n    axpy_cpu(l.nweights, -decay*batch, l.weights, 1, l.weight_updates, 1);\n    axpy_cpu(l.nweights, learning_rate/batch, l.weight_updates, 1, l.weights, 1);\n    scal_cpu(l.nweights, momentum, l.weight_updates, 1);\n}\n\n\nimage 
get_convolutional_weight(convolutional_layer l, int i)\n{\n    int h = l.size;\n    int w = l.size;\n    int c = l.c/l.groups;\n    return float_to_image(w,h,c,l.weights+i*h*w*c);\n}\n\nvoid rgbgr_weights(convolutional_layer l)\n{\n    int i;\n    for(i = 0; i < l.n; ++i){\n        image im = get_convolutional_weight(l, i);\n        if (im.c == 3) {\n            rgbgr_image(im);\n        }\n    }\n}\n\nvoid rescale_weights(convolutional_layer l, float scale, float trans)\n{\n    int i;\n    for(i = 0; i < l.n; ++i){\n        image im = get_convolutional_weight(l, i);\n        if (im.c == 3) {\n            scale_image(im, scale);\n            float sum = sum_array(im.data, im.w*im.h*im.c);\n            l.biases[i] += sum*trans;\n        }\n    }\n}\n\nimage *get_weights(convolutional_layer l)\n{\n    image *weights = calloc(l.n, sizeof(image));\n    int i;\n    for(i = 0; i < l.n; ++i){\n        weights[i] = copy_image(get_convolutional_weight(l, i));\n        normalize_image(weights[i]);\n        /*\n           char buff[256];\n           sprintf(buff, \"filter%d\", i);\n           save_image(weights[i], buff);\n         */\n    }\n    //error(\"hey\");\n    return weights;\n}\n\nimage *visualize_convolutional_layer(convolutional_layer l, char *window, image *prev_weights)\n{\n    image *single_weights = get_weights(l);\n    show_images(single_weights, l.n, window);\n\n    image delta = get_convolutional_image(l);\n    image dc = collapse_image_layers(delta, 1);\n    char buff[256];\n    //sprintf(buff, \"%s: Output\", window);\n    //show_image(dc, buff);\n    //save_image(dc, buff);\n    free_image(dc);\n    return single_weights;\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/convolutional_layer.h",
    "content": "#ifndef CONVOLUTIONAL_LAYER_H\n#define CONVOLUTIONAL_LAYER_H\n\n#include \"cuda.h\"\n#include \"image.h\"\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer convolutional_layer;\n\n#ifdef GPU\nvoid forward_convolutional_layer_gpu(convolutional_layer layer, network net);\nvoid backward_convolutional_layer_gpu(convolutional_layer layer, network net);\nvoid update_convolutional_layer_gpu(convolutional_layer layer, update_args a);\n\nvoid push_convolutional_layer(convolutional_layer layer);\nvoid pull_convolutional_layer(convolutional_layer layer);\n\nvoid add_bias_gpu(float *output, float *biases, int batch, int n, int size);\nvoid backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size);\nvoid adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t);\n#ifdef CUDNN\nvoid cudnn_convolutional_setup(layer *l);\n#endif\n#endif\n\nconvolutional_layer make_convolutional_layer(int batch, int h, int w, int c, int n, int groups, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int binary, int xnor, int adam);\nvoid resize_convolutional_layer(convolutional_layer *layer, int w, int h);\nvoid forward_convolutional_layer(const convolutional_layer layer, network net);\nvoid update_convolutional_layer(convolutional_layer layer, update_args a);\nimage *visualize_convolutional_layer(convolutional_layer layer, char *window, image *prev_weights);\nvoid binarize_weights(float *weights, int n, int size, float *binary);\nvoid swap_binary(convolutional_layer *l);\nvoid binarize_weights2(float *weights, int n, int size, char *binary, float *scales);\n\nvoid backward_convolutional_layer(convolutional_layer layer, network net);\n\nvoid add_bias(float *output, float *biases, int batch, int n, int size);\nvoid backward_bias(float *bias_updates, float *delta, int batch, int n, int size);\n\nimage 
get_convolutional_image(convolutional_layer layer);\nimage get_convolutional_delta(convolutional_layer layer);\nimage get_convolutional_weight(convolutional_layer layer, int i);\n\nint convolutional_out_height(convolutional_layer layer);\nint convolutional_out_width(convolutional_layer layer);\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/cost_layer.c",
    "content": "#include \"cost_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include <math.h>\n#include <string.h>\n#include <stdlib.h>\n#include <stdio.h>\n\nCOST_TYPE get_cost_type(char *s)\n{\n    if (strcmp(s, \"seg\")==0) return SEG;\n    if (strcmp(s, \"sse\")==0) return SSE;\n    if (strcmp(s, \"masked\")==0) return MASKED;\n    if (strcmp(s, \"smooth\")==0) return SMOOTH;\n    if (strcmp(s, \"L1\")==0) return L1;\n    fprintf(stderr, \"Couldn't find cost type %s, going with SSE\\n\", s);\n    return SSE;\n}\n\nchar *get_cost_string(COST_TYPE a)\n{\n    switch(a){\n        case SEG:\n            return \"seg\";\n        case SSE:\n            return \"sse\";\n        case MASKED:\n            return \"masked\";\n        case SMOOTH:\n            return \"smooth\";\n        case L1:\n            return \"L1\";\n    }\n    return \"sse\";\n}\n\ncost_layer make_cost_layer(int batch, int inputs, COST_TYPE cost_type, float scale)\n{\n    fprintf(stderr, \"cost                                           %4d\\n\",  inputs);\n    cost_layer l = {0};\n    l.type = COST;\n\n    l.scale = scale;\n    l.batch = batch;\n    l.inputs = inputs;\n    l.outputs = inputs;\n    l.cost_type = cost_type;\n    l.delta = calloc(inputs*batch, sizeof(float));\n    l.output = calloc(inputs*batch, sizeof(float));\n    l.cost = calloc(1, sizeof(float));\n\n    l.forward = forward_cost_layer;\n    l.backward = backward_cost_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_cost_layer_gpu;\n    l.backward_gpu = backward_cost_layer_gpu;\n\n    /* each GPU buffer is initialized from its own host counterpart */\n    l.delta_gpu = cuda_make_array(l.delta, inputs*batch);\n    l.output_gpu = cuda_make_array(l.output, inputs*batch);\n    #endif\n    return l;\n}\n\nvoid resize_cost_layer(cost_layer *l, int inputs)\n{\n    l->inputs = inputs;\n    l->outputs = inputs;\n    l->delta = realloc(l->delta, inputs*l->batch*sizeof(float));\n    l->output = realloc(l->output, inputs*l->batch*sizeof(float));\n#ifdef GPU\n    
cuda_free(l->delta_gpu);\n    cuda_free(l->output_gpu);\n    l->delta_gpu = cuda_make_array(l->delta, inputs*l->batch);\n    l->output_gpu = cuda_make_array(l->output, inputs*l->batch);\n#endif\n}\n\nvoid forward_cost_layer(cost_layer l, network net)\n{\n    if (!net.truth) return;\n    if(l.cost_type == MASKED){\n        int i;\n        for(i = 0; i < l.batch*l.inputs; ++i){\n            if(net.truth[i] == SECRET_NUM) net.input[i] = SECRET_NUM;\n        }\n    }\n    if(l.cost_type == SMOOTH){\n        smooth_l1_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);\n    }else if(l.cost_type == L1){\n        l1_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);\n    } else {\n        l2_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);\n    }\n    l.cost[0] = sum_array(l.output, l.batch*l.inputs);\n}\n\nvoid backward_cost_layer(const cost_layer l, network net)\n{\n    axpy_cpu(l.batch*l.inputs, l.scale, l.delta, 1, net.delta, 1);\n}\n\n#ifdef GPU\n\nvoid pull_cost_layer(cost_layer l)\n{\n    cuda_pull_array(l.delta_gpu, l.delta, l.batch*l.inputs);\n}\n\nvoid push_cost_layer(cost_layer l)\n{\n    cuda_push_array(l.delta_gpu, l.delta, l.batch*l.inputs);\n}\n\nint float_abs_compare (const void * a, const void * b)\n{\n    float fa = *(const float*) a;\n    if(fa < 0) fa = -fa;\n    float fb = *(const float*) b;\n    if(fb < 0) fb = -fb;\n    return (fa > fb) - (fa < fb);\n}\n\nvoid forward_cost_layer_gpu(cost_layer l, network net)\n{\n    if (!net.truth_gpu) return;\n    if(l.smooth){\n        scal_gpu(l.batch*l.inputs, (1-l.smooth), net.truth_gpu, 1);\n        add_gpu(l.batch*l.inputs, l.smooth * 1./l.inputs, net.truth_gpu, 1);\n    }\n    if (l.cost_type == MASKED) {\n        mask_gpu(l.batch*l.inputs, net.input_gpu, SECRET_NUM, net.truth_gpu);\n    }\n\n    if(l.cost_type == SMOOTH){\n        smooth_l1_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);\n    } else if (l.cost_type == L1){\n        
l1_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);\n    } else {\n        l2_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);\n    }\n\n    if (l.cost_type == SEG && l.noobject_scale != 1) {\n        scale_mask_gpu(l.batch*l.inputs, l.delta_gpu, 0, net.truth_gpu, l.noobject_scale);\n        scale_mask_gpu(l.batch*l.inputs, l.output_gpu, 0, net.truth_gpu, l.noobject_scale);\n    }\n\n    if(l.ratio){\n        /* hard example mining: suppress deltas below the (1 - l.ratio) quantile of |delta|,\n           keeping only the hardest l.ratio fraction of examples */\n        cuda_pull_array(l.delta_gpu, l.delta, l.batch*l.inputs);\n        qsort(l.delta, l.batch*l.inputs, sizeof(float), float_abs_compare);\n        int n = (1-l.ratio) * l.batch*l.inputs;\n        float thresh = l.delta[n];\n        supp_gpu(l.batch*l.inputs, thresh, l.delta_gpu, 1);\n    }\n\n    if(l.thresh){\n        supp_gpu(l.batch*l.inputs, l.thresh*1./l.inputs, l.delta_gpu, 1);\n    }\n\n    cuda_pull_array(l.output_gpu, l.output, l.batch*l.inputs);\n    l.cost[0] = sum_array(l.output, l.batch*l.inputs);\n}\n\nvoid backward_cost_layer_gpu(const cost_layer l, network net)\n{\n    axpy_gpu(l.batch*l.inputs, l.scale, l.delta_gpu, 1, net.delta_gpu, 1);\n}\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/cost_layer.h",
    "content": "#ifndef COST_LAYER_H\n#define COST_LAYER_H\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer cost_layer;\n\nCOST_TYPE get_cost_type(char *s);\nchar *get_cost_string(COST_TYPE a);\ncost_layer make_cost_layer(int batch, int inputs, COST_TYPE type, float scale);\nvoid forward_cost_layer(const cost_layer l, network net);\nvoid backward_cost_layer(const cost_layer l, network net);\nvoid resize_cost_layer(cost_layer *l, int inputs);\n\n#ifdef GPU\nvoid forward_cost_layer_gpu(cost_layer l, network net);\nvoid backward_cost_layer_gpu(const cost_layer l, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/crnn_layer.c",
    "content": "#include \"crnn_layer.h\"\n#include \"convolutional_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nstatic void increment_layer(layer *l, int steps)\n{\n    int num = l->outputs*l->batch*steps;\n    l->output += num;\n    l->delta += num;\n    l->x += num;\n    l->x_norm += num;\n\n#ifdef GPU\n    l->output_gpu += num;\n    l->delta_gpu += num;\n    l->x_gpu += num;\n    l->x_norm_gpu += num;\n#endif\n}\n\nlayer make_crnn_layer(int batch, int h, int w, int c, int hidden_filters, int output_filters, int steps, ACTIVATION activation, int batch_normalize)\n{\n    fprintf(stderr, \"CRNN Layer: %d x %d x %d image, %d filters\\n\", h,w,c,output_filters);\n    batch = batch / steps;\n    layer l = {0};\n    l.batch = batch;\n    l.type = CRNN;\n    l.steps = steps;\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.out_h = h;\n    l.out_w = w;\n    l.out_c = output_filters;\n    l.inputs = h*w*c;\n    l.hidden = h * w * hidden_filters;\n    l.outputs = l.out_h * l.out_w * l.out_c;\n\n    l.state = calloc(l.hidden*batch*(steps+1), sizeof(float));\n\n    l.input_layer = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.input_layer) = make_convolutional_layer(batch*steps, h, w, c, hidden_filters, 1, 3, 1, 1,  activation, batch_normalize, 0, 0, 0);\n    l.input_layer->batch = batch;\n\n    l.self_layer = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.self_layer) = make_convolutional_layer(batch*steps, h, w, hidden_filters, hidden_filters, 1, 3, 1, 1,  activation, batch_normalize, 0, 0, 0);\n    l.self_layer->batch = batch;\n\n    l.output_layer = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.output_layer) = make_convolutional_layer(batch*steps, h, w, hidden_filters, output_filters, 1, 3, 1, 1,  activation, batch_normalize, 0, 0, 0);\n    l.output_layer->batch = batch;\n\n    
l.output = l.output_layer->output;\n    l.delta = l.output_layer->delta;\n\n    l.forward = forward_crnn_layer;\n    l.backward = backward_crnn_layer;\n    l.update = update_crnn_layer;\n\n#ifdef GPU\n    l.forward_gpu = forward_crnn_layer_gpu;\n    l.backward_gpu = backward_crnn_layer_gpu;\n    l.update_gpu = update_crnn_layer_gpu;\n\n    l.state_gpu = cuda_make_array(l.state, l.hidden*batch*(steps+1));\n    l.output_gpu = l.output_layer->output_gpu;\n    l.delta_gpu = l.output_layer->delta_gpu;\n#endif\n\n    return l;\n}\n\nvoid update_crnn_layer(layer l, update_args a)\n{\n    update_convolutional_layer(*(l.input_layer),  a);\n    update_convolutional_layer(*(l.self_layer),   a);\n    update_convolutional_layer(*(l.output_layer), a);\n}\n\nvoid forward_crnn_layer(layer l, network net)\n{\n    network s = net;\n    s.train = net.train;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n\n    fill_cpu(l.outputs * l.batch * l.steps, 0, output_layer.delta, 1);\n    fill_cpu(l.hidden * l.batch * l.steps, 0, self_layer.delta, 1);\n    fill_cpu(l.hidden * l.batch * l.steps, 0, input_layer.delta, 1);\n    if(net.train) fill_cpu(l.hidden * l.batch, 0, l.state, 1);\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input = net.input;\n        forward_convolutional_layer(input_layer, s);\n\n        s.input = l.state;\n        forward_convolutional_layer(self_layer, s);\n\n        float *old_state = l.state;\n        if(net.train) l.state += l.hidden*l.batch;\n        if(l.shortcut){\n            copy_cpu(l.hidden * l.batch, old_state, 1, l.state, 1);\n        }else{\n            fill_cpu(l.hidden * l.batch, 0, l.state, 1);\n        }\n        axpy_cpu(l.hidden * l.batch, 1, input_layer.output, 1, l.state, 1);\n        axpy_cpu(l.hidden * l.batch, 1, self_layer.output, 1, l.state, 1);\n\n        s.input = l.state;\n        forward_convolutional_layer(output_layer, s);\n\n        
net.input += l.inputs*l.batch;\n        increment_layer(&input_layer, 1);\n        increment_layer(&self_layer, 1);\n        increment_layer(&output_layer, 1);\n    }\n}\n\nvoid backward_crnn_layer(layer l, network net)\n{\n    network s = net;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n\n    increment_layer(&input_layer, l.steps-1);\n    increment_layer(&self_layer, l.steps-1);\n    increment_layer(&output_layer, l.steps-1);\n\n    l.state += l.hidden*l.batch*l.steps;\n    for (i = l.steps-1; i >= 0; --i) {\n        copy_cpu(l.hidden * l.batch, input_layer.output, 1, l.state, 1);\n        axpy_cpu(l.hidden * l.batch, 1, self_layer.output, 1, l.state, 1);\n\n        s.input = l.state;\n        s.delta = self_layer.delta;\n        backward_convolutional_layer(output_layer, s);\n\n        l.state -= l.hidden*l.batch;\n        /*\n           if(i > 0){\n           copy_cpu(l.hidden * l.batch, input_layer.output - l.hidden*l.batch, 1, l.state, 1);\n           axpy_cpu(l.hidden * l.batch, 1, self_layer.output - l.hidden*l.batch, 1, l.state, 1);\n           }else{\n           fill_cpu(l.hidden * l.batch, 0, l.state, 1);\n           }\n         */\n\n        s.input = l.state;\n        s.delta = self_layer.delta - l.hidden*l.batch;\n        if (i == 0) s.delta = 0;\n        backward_convolutional_layer(self_layer, s);\n\n        copy_cpu(l.hidden*l.batch, self_layer.delta, 1, input_layer.delta, 1);\n        if (i > 0 && l.shortcut) axpy_cpu(l.hidden*l.batch, 1, self_layer.delta, 1, self_layer.delta - l.hidden*l.batch, 1);\n        s.input = net.input + i*l.inputs*l.batch;\n        if(net.delta) s.delta = net.delta + i*l.inputs*l.batch;\n        else s.delta = 0;\n        backward_convolutional_layer(input_layer, s);\n\n        increment_layer(&input_layer, -1);\n        increment_layer(&self_layer, -1);\n        increment_layer(&output_layer, -1);\n    }\n}\n\n#ifdef 
GPU\n\nvoid pull_crnn_layer(layer l)\n{\n    pull_convolutional_layer(*(l.input_layer));\n    pull_convolutional_layer(*(l.self_layer));\n    pull_convolutional_layer(*(l.output_layer));\n}\n\nvoid push_crnn_layer(layer l)\n{\n    push_convolutional_layer(*(l.input_layer));\n    push_convolutional_layer(*(l.self_layer));\n    push_convolutional_layer(*(l.output_layer));\n}\n\nvoid update_crnn_layer_gpu(layer l, update_args a)\n{\n    update_convolutional_layer_gpu(*(l.input_layer),  a);\n    update_convolutional_layer_gpu(*(l.self_layer),   a);\n    update_convolutional_layer_gpu(*(l.output_layer), a);\n}\n\nvoid forward_crnn_layer_gpu(layer l, network net)\n{\n    network s = net;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n\n    fill_gpu(l.outputs * l.batch * l.steps, 0, output_layer.delta_gpu, 1);\n    fill_gpu(l.hidden * l.batch * l.steps, 0, self_layer.delta_gpu, 1);\n    fill_gpu(l.hidden * l.batch * l.steps, 0, input_layer.delta_gpu, 1);\n    if(net.train) fill_gpu(l.hidden * l.batch, 0, l.state_gpu, 1);\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input_gpu = net.input_gpu;\n        forward_convolutional_layer_gpu(input_layer, s);\n\n        s.input_gpu = l.state_gpu;\n        forward_convolutional_layer_gpu(self_layer, s);\n\n        float *old_state = l.state_gpu;\n        if(net.train) l.state_gpu += l.hidden*l.batch;\n        if(l.shortcut){\n            copy_gpu(l.hidden * l.batch, old_state, 1, l.state_gpu, 1);\n        }else{\n            fill_gpu(l.hidden * l.batch, 0, l.state_gpu, 1);\n        }\n        axpy_gpu(l.hidden * l.batch, 1, input_layer.output_gpu, 1, l.state_gpu, 1);\n        axpy_gpu(l.hidden * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);\n\n        s.input_gpu = l.state_gpu;\n        forward_convolutional_layer_gpu(output_layer, s);\n\n        net.input_gpu += l.inputs*l.batch;\n        increment_layer(&input_layer, 1);\n   
     increment_layer(&self_layer, 1);\n        increment_layer(&output_layer, 1);\n    }\n}\n\nvoid backward_crnn_layer_gpu(layer l, network net)\n{\n    network s = net;\n    s.train = net.train;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n    increment_layer(&input_layer,  l.steps - 1);\n    increment_layer(&self_layer,   l.steps - 1);\n    increment_layer(&output_layer, l.steps - 1);\n    l.state_gpu += l.hidden*l.batch*l.steps;\n    for (i = l.steps-1; i >= 0; --i) {\n        copy_gpu(l.hidden * l.batch, input_layer.output_gpu, 1, l.state_gpu, 1);\n        axpy_gpu(l.hidden * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);\n\n        s.input_gpu = l.state_gpu;\n        s.delta_gpu = self_layer.delta_gpu;\n        backward_convolutional_layer_gpu(output_layer, s);\n\n        l.state_gpu -= l.hidden*l.batch;\n\n        s.input_gpu = l.state_gpu;\n        s.delta_gpu = self_layer.delta_gpu - l.hidden*l.batch;\n        if (i == 0) s.delta_gpu = 0;\n        backward_convolutional_layer_gpu(self_layer, s);\n\n        copy_gpu(l.hidden*l.batch, self_layer.delta_gpu, 1, input_layer.delta_gpu, 1);\n        if (i > 0 && l.shortcut) axpy_gpu(l.hidden*l.batch, 1, self_layer.delta_gpu, 1, self_layer.delta_gpu - l.hidden*l.batch, 1);\n        s.input_gpu = net.input_gpu + i*l.inputs*l.batch;\n        if(net.delta_gpu) s.delta_gpu = net.delta_gpu + i*l.inputs*l.batch;\n        else s.delta_gpu = 0;\n        backward_convolutional_layer_gpu(input_layer, s);\n\n        increment_layer(&input_layer,  -1);\n        increment_layer(&self_layer,   -1);\n        increment_layer(&output_layer, -1);\n    }\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/crnn_layer.h",
    "content": "\n#ifndef CRNN_LAYER_H\n#define CRNN_LAYER_H\n\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_crnn_layer(int batch, int h, int w, int c, int hidden_filters, int output_filters, int steps, ACTIVATION activation, int batch_normalize);\n\nvoid forward_crnn_layer(layer l, network net);\nvoid backward_crnn_layer(layer l, network net);\nvoid update_crnn_layer(layer l, update_args a);\n\n#ifdef GPU\nvoid forward_crnn_layer_gpu(layer l, network net);\nvoid backward_crnn_layer_gpu(layer l, network net);\nvoid update_crnn_layer_gpu(layer l, update_args a);\nvoid push_crnn_layer(layer l);\nvoid pull_crnn_layer(layer l);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/crop_layer.c",
    "content": "#include \"crop_layer.h\"\n#include \"cuda.h\"\n#include <stdio.h>\n\nimage get_crop_image(crop_layer l)\n{\n    int h = l.out_h;\n    int w = l.out_w;\n    int c = l.out_c;\n    return float_to_image(w,h,c,l.output);\n}\n\nvoid backward_crop_layer(const crop_layer l, network net){}\nvoid backward_crop_layer_gpu(const crop_layer l, network net){}\n\ncrop_layer make_crop_layer(int batch, int h, int w, int c, int crop_height, int crop_width, int flip, float angle, float saturation, float exposure)\n{\n    fprintf(stderr, \"Crop Layer: %d x %d -> %d x %d x %d image\\n\", h,w,crop_height,crop_width,c);\n    crop_layer l = {0};\n    l.type = CROP;\n    l.batch = batch;\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.scale = (float)crop_height / h;\n    l.flip = flip;\n    l.angle = angle;\n    l.saturation = saturation;\n    l.exposure = exposure;\n    l.out_w = crop_width;\n    l.out_h = crop_height;\n    l.out_c = c;\n    l.inputs = l.w * l.h * l.c;\n    l.outputs = l.out_w * l.out_h * l.out_c;\n    l.output = calloc(l.outputs*batch, sizeof(float));\n    l.forward = forward_crop_layer;\n    l.backward = backward_crop_layer;\n\n    #ifdef GPU\n    l.forward_gpu = forward_crop_layer_gpu;\n    l.backward_gpu = backward_crop_layer_gpu;\n    l.output_gpu = cuda_make_array(l.output, l.outputs*batch);\n    l.rand_gpu   = cuda_make_array(0, l.batch*8);\n    #endif\n    return l;\n}\n\nvoid resize_crop_layer(layer *l, int w, int h)\n{\n    l->w = w;\n    l->h = h;\n\n    l->out_w =  l->scale*w;\n    l->out_h =  l->scale*h;\n\n    l->inputs = l->w * l->h * l->c;\n    l->outputs = l->out_h * l->out_w * l->out_c;\n\n    l->output = realloc(l->output, l->batch*l->outputs*sizeof(float));\n    #ifdef GPU\n    cuda_free(l->output_gpu);\n    l->output_gpu = cuda_make_array(l->output, l->outputs*l->batch);\n    #endif\n}\n\n\nvoid forward_crop_layer(const crop_layer l, network net)\n{\n    int i,j,c,b,row,col;\n    int index;\n    int count = 0;\n    int flip = (l.flip 
&& rand()%2);\n    int dh = rand()%(l.h - l.out_h + 1);\n    int dw = rand()%(l.w - l.out_w + 1);\n    float scale = 2;\n    float trans = -1;\n    if(l.noadjust){\n        scale = 1;\n        trans = 0;\n    }\n    if(!net.train){\n        flip = 0;\n        dh = (l.h - l.out_h)/2;\n        dw = (l.w - l.out_w)/2;\n    }\n    for(b = 0; b < l.batch; ++b){\n        for(c = 0; c < l.c; ++c){\n            for(i = 0; i < l.out_h; ++i){\n                for(j = 0; j < l.out_w; ++j){\n                    if(flip){\n                        col = l.w - dw - j - 1;    \n                    }else{\n                        col = j + dw;\n                    }\n                    row = i + dh;\n                    index = col+l.w*(row+l.h*(c + l.c*b)); \n                    l.output[count++] = net.input[index]*scale + trans;\n                }\n            }\n        }\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/crop_layer.h",
    "content": "#ifndef CROP_LAYER_H\n#define CROP_LAYER_H\n\n#include \"image.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer crop_layer;\n\nimage get_crop_image(crop_layer l);\ncrop_layer make_crop_layer(int batch, int h, int w, int c, int crop_height, int crop_width, int flip, float angle, float saturation, float exposure);\nvoid forward_crop_layer(const crop_layer l, network net);\nvoid resize_crop_layer(layer *l, int w, int h);\n\n#ifdef GPU\nvoid forward_crop_layer_gpu(crop_layer l, network net);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/crop_layer_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"crop_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"image.h\"\n}\n\n__device__ float get_pixel_kernel(float *image, int w, int h, int x, int y, int c)\n{\n    if(x < 0 || x >= w || y < 0 || y >= h) return 0;\n    return image[x + w*(y + c*h)];\n}\n\n__device__ float3 rgb_to_hsv_kernel(float3 rgb)\n{\n    float r = rgb.x;\n    float g = rgb.y; \n    float b = rgb.z;\n\n    float h, s, v;\n    float max = (r > g) ? ( (r > b) ? r : b) : ( (g > b) ? g : b);\n    float min = (r < g) ? ( (r < b) ? r : b) : ( (g < b) ? g : b);\n    float delta = max - min;\n    v = max;\n    if(max == 0){\n        s = 0;\n        h = -1;\n    }else{\n        s = delta/max;\n        if(r == max){\n            h = (g - b) / delta;\n        } else if (g == max) {\n            h = 2 + (b - r) / delta;\n        } else {\n            h = 4 + (r - g) / delta;\n        }\n        if (h < 0) h += 6;\n    }\n    return make_float3(h, s, v);\n}\n\n__device__ float3 hsv_to_rgb_kernel(float3 hsv)\n{\n    float h = hsv.x;\n    float s = hsv.y; \n    float v = hsv.z;\n\n    float r, g, b;\n    float f, p, q, t;\n\n    if (s == 0) {\n        r = g = b = v;\n    } else {\n        int index = (int) floorf(h);\n        f = h - index;\n        p = v*(1-s);\n        q = v*(1-s*f);\n        t = v*(1-s*(1-f));\n        if(index == 0){\n            r = v; g = t; b = p;\n        } else if(index == 1){\n            r = q; g = v; b = p;\n        } else if(index == 2){\n            r = p; g = v; b = t;\n        } else if(index == 3){\n            r = p; g = q; b = v;\n        } else if(index == 4){\n            r = t; g = p; b = v;\n        } else {\n            r = v; g = p; b = q;\n        }\n    }\n    r = (r < 0) ? 0 : ((r > 1) ? 1 : r);\n    g = (g < 0) ? 0 : ((g > 1) ? 1 : g);\n    b = (b < 0) ? 0 : ((b > 1) ? 
1 : b);\n    return make_float3(r, g, b);\n}\n\n__device__ float bilinear_interpolate_kernel(float *image, int w, int h, float x, float y, int c)\n{\n    int ix = (int) floorf(x);\n    int iy = (int) floorf(y);\n\n    float dx = x - ix;\n    float dy = y - iy;\n\n    float val = (1-dy) * (1-dx) * get_pixel_kernel(image, w, h, ix, iy, c) + \n        dy     * (1-dx) * get_pixel_kernel(image, w, h, ix, iy+1, c) + \n        (1-dy) *   dx   * get_pixel_kernel(image, w, h, ix+1, iy, c) +\n        dy     *   dx   * get_pixel_kernel(image, w, h, ix+1, iy+1, c);\n    return val;\n}\n\n__global__ void levels_image_kernel(float *image, float *rand, int batch, int w, int h, int train, float saturation, float exposure, float translate, float scale, float shift)\n{\n    int size = batch * w * h;\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= size) return;\n    int x = id % w;\n    id /= w;\n    int y = id % h;\n    id /= h;\n    float rshift = rand[0];\n    float gshift = rand[1];\n    float bshift = rand[2];\n    float r0 = rand[8*id + 0];\n    float r1 = rand[8*id + 1];\n    float r2 = rand[8*id + 2];\n    float r3 = rand[8*id + 3];\n\n    saturation = r0*(saturation - 1) + 1;\n    saturation = (r1 > .5f) ? 1.f/saturation : saturation;\n    exposure = r2*(exposure - 1) + 1;\n    exposure = (r3 > .5f) ? 
1.f/exposure : exposure;\n\n    size_t offset = id * h * w * 3;\n    image += offset;\n    float r = image[x + w*(y + h*0)];\n    float g = image[x + w*(y + h*1)];\n    float b = image[x + w*(y + h*2)];\n    float3 rgb = make_float3(r,g,b);\n    if(train){\n        float3 hsv = rgb_to_hsv_kernel(rgb);\n        hsv.y *= saturation;\n        hsv.z *= exposure;\n        rgb = hsv_to_rgb_kernel(hsv);\n    } else {\n        shift = 0;\n    }\n    image[x + w*(y + h*0)] = rgb.x*scale + translate + (rshift - .5f)*shift;\n    image[x + w*(y + h*1)] = rgb.y*scale + translate + (gshift - .5f)*shift;\n    image[x + w*(y + h*2)] = rgb.z*scale + translate + (bshift - .5f)*shift;\n}\n\n__global__ void forward_crop_layer_kernel(float *input, float *rand, int size, int c, int h, int w, int crop_height, int crop_width, int train, int flip, float angle, float *output)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= size) return;\n\n    float cx = w/2.f;\n    float cy = h/2.f;\n\n    int count = id;\n    int j = id % crop_width;\n    id /= crop_width;\n    int i = id % crop_height;\n    id /= crop_height;\n    int k = id % c;\n    id /= c;\n    int b = id;\n\n    float r4 = rand[8*b + 4];\n    float r5 = rand[8*b + 5];\n    float r6 = rand[8*b + 6];\n    float r7 = rand[8*b + 7];\n\n    float dw = (w - crop_width)*r4;\n    float dh = (h - crop_height)*r5;\n    flip = (flip && (r6 > .5f));\n    angle = 2*angle*r7 - angle;\n    if(!train){\n        dw = (w - crop_width)/2.f;\n        dh = (h - crop_height)/2.f;\n        flip = 0;\n        angle = 0;\n    }\n\n    input += w*h*c*b;\n\n    float x = (flip) ? 
w - dw - j - 1 : j + dw;    \n    float y = i + dh;\n\n    float rx = cosf(angle)*(x-cx) - sinf(angle)*(y-cy) + cx;\n    float ry = sinf(angle)*(x-cx) + cosf(angle)*(y-cy) + cy;\n\n    output[count] = bilinear_interpolate_kernel(input, w, h, rx, ry, k);\n}\n\nextern \"C\" void forward_crop_layer_gpu(crop_layer layer, network net)\n{\n    cuda_random(layer.rand_gpu, layer.batch*8);\n\n    float radians = layer.angle*3.14159265f/180.f;\n\n    float scale = 2;\n    float translate = -1;\n    if(layer.noadjust){\n        scale = 1;\n        translate = 0;\n    }\n\n    int size = layer.batch * layer.w * layer.h;\n\n    levels_image_kernel<<<cuda_gridsize(size), BLOCK>>>(net.input_gpu, layer.rand_gpu, layer.batch, layer.w, layer.h, net.train, layer.saturation, layer.exposure, translate, scale, layer.shift);\n    check_error(cudaPeekAtLastError());\n\n    size = layer.batch*layer.c*layer.out_w*layer.out_h;\n\n    forward_crop_layer_kernel<<<cuda_gridsize(size), BLOCK>>>(net.input_gpu, layer.rand_gpu, size, layer.c, layer.h, layer.w, layer.out_h, layer.out_w, net.train, layer.flip, radians, layer.output_gpu);\n    check_error(cudaPeekAtLastError());\n\n/*\n       cuda_pull_array(layer.output_gpu, layer.output, size);\n       image im = float_to_image(layer.crop_width, layer.crop_height, layer.c, layer.output + 0*(size/layer.batch));\n       image im2 = float_to_image(layer.crop_width, layer.crop_height, layer.c, layer.output + 1*(size/layer.batch));\n       image im3 = float_to_image(layer.crop_width, layer.crop_height, layer.c, layer.output + 2*(size/layer.batch));\n\n       translate_image(im, -translate);\n       scale_image(im, 1/scale);\n       translate_image(im2, -translate);\n       scale_image(im2, 1/scale);\n       translate_image(im3, -translate);\n       scale_image(im3, 1/scale);\n       \n       show_image(im, \"cropped\");\n       show_image(im2, \"cropped2\");\n       show_image(im3, \"cropped3\");\n       cvWaitKey(0);\n       */\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/cuda.c",
    "content": "int gpu_index = 0;\n\n#ifdef GPU\n\n#include \"cuda.h\"\n#include \"utils.h\"\n#include \"blas.h\"\n#include <assert.h>\n#include <stdlib.h>\n#include <time.h>\n\nvoid cuda_set_device(int n)\n{\n    gpu_index = n;\n    cudaError_t status = cudaSetDevice(n);\n    check_error(status);\n}\n\nint cuda_get_device()\n{\n    int n = 0;\n    cudaError_t status = cudaGetDevice(&n);\n    check_error(status);\n    return n;\n}\n\nvoid check_error(cudaError_t status)\n{\n    //cudaDeviceSynchronize();\n    cudaError_t status2 = cudaGetLastError();\n    if (status != cudaSuccess)\n    {   \n        const char *s = cudaGetErrorString(status);\n        char buffer[256];\n        printf(\"CUDA Error: %s\\n\", s);\n        assert(0);\n        snprintf(buffer, 256, \"CUDA Error: %s\", s);\n        error(buffer);\n    } \n    if (status2 != cudaSuccess)\n    {   \n        const char *s = cudaGetErrorString(status2);\n        char buffer[256];\n        printf(\"CUDA Error Prev: %s\\n\", s);\n        assert(0);\n        snprintf(buffer, 256, \"CUDA Error Prev: %s\", s);\n        error(buffer);\n    } \n}\n\ndim3 cuda_gridsize(size_t n){\n    size_t k = (n-1) / BLOCK + 1;\n    size_t x = k;\n    size_t y = 1;\n    if(x > 65535){\n        x = ceil(sqrt(k));\n        y = (n-1)/(x*BLOCK) + 1;\n    }\n    dim3 d = {x, y, 1};\n    //printf(\"%ld %ld %ld %ld\\n\", n, x, y, x*y*BLOCK);\n    return d;\n}\n\n#ifdef CUDNN\ncudnnHandle_t cudnn_handle()\n{\n    static int init[16] = {0};\n    static cudnnHandle_t handle[16];\n    int i = cuda_get_device();\n    if(!init[i]) {\n        cudnnCreate(&handle[i]);\n        init[i] = 1;\n    }\n    return handle[i];\n}\n#endif\n\ncublasHandle_t blas_handle()\n{\n    static int init[16] = {0};\n    static cublasHandle_t handle[16];\n    int i = cuda_get_device();\n    if(!init[i]) {\n        cublasCreate(&handle[i]);\n        init[i] = 1;\n    }\n    return handle[i];\n}\n\nfloat *cuda_make_array(float *x, size_t n)\n{\n    float *x_gpu;\n  
  size_t size = sizeof(float)*n;\n    cudaError_t status = cudaMalloc((void **)&x_gpu, size);\n    check_error(status);\n    if(x){\n        status = cudaMemcpy(x_gpu, x, size, cudaMemcpyHostToDevice);\n        check_error(status);\n    } else {\n        fill_gpu(n, 0, x_gpu, 1);\n    }\n    if(!x_gpu) error(\"Cuda malloc failed\\n\");\n    return x_gpu;\n}\n\nvoid cuda_random(float *x_gpu, size_t n)\n{\n    static curandGenerator_t gen[16];\n    static int init[16] = {0};\n    int i = cuda_get_device();\n    if(!init[i]){\n        curandCreateGenerator(&gen[i], CURAND_RNG_PSEUDO_DEFAULT);\n        curandSetPseudoRandomGeneratorSeed(gen[i], time(0));\n        init[i] = 1;\n    }\n    curandGenerateUniform(gen[i], x_gpu, n);\n    check_error(cudaPeekAtLastError());\n}\n\nfloat cuda_compare(float *x_gpu, float *x, size_t n, char *s)\n{\n    float *tmp = calloc(n, sizeof(float));\n    cuda_pull_array(x_gpu, tmp, n);\n    //int i;\n    //for(i = 0; i < n; ++i) printf(\"%f %f\\n\", tmp[i], x[i]);\n    axpy_cpu(n, -1, x, 1, tmp, 1);\n    float err = dot_cpu(n, tmp, 1, tmp, 1);\n    printf(\"Error %s: %f\\n\", s, sqrt(err/n));\n    free(tmp);\n    return err;\n}\n\nint *cuda_make_int_array(int *x, size_t n)\n{\n    int *x_gpu;\n    size_t size = sizeof(int)*n;\n    cudaError_t status = cudaMalloc((void **)&x_gpu, size);\n    check_error(status);\n    if(x){\n        status = cudaMemcpy(x_gpu, x, size, cudaMemcpyHostToDevice);\n        check_error(status);\n    }\n    if(!x_gpu) error(\"Cuda malloc failed\\n\");\n    return x_gpu;\n}\n\nvoid cuda_free(float *x_gpu)\n{\n    cudaError_t status = cudaFree(x_gpu);\n    check_error(status);\n}\n\nvoid cuda_push_array(float *x_gpu, float *x, size_t n)\n{\n    size_t size = sizeof(float)*n;\n    cudaError_t status = cudaMemcpy(x_gpu, x, size, cudaMemcpyHostToDevice);\n    check_error(status);\n}\n\nvoid cuda_pull_array(float *x_gpu, float *x, size_t n)\n{\n    size_t size = sizeof(float)*n;\n    cudaError_t status = cudaMemcpy(x, 
x_gpu, size, cudaMemcpyDeviceToHost);\n    check_error(status);\n}\n\nfloat cuda_mag_array(float *x_gpu, size_t n)\n{\n    float *temp = calloc(n, sizeof(float));\n    cuda_pull_array(x_gpu, temp, n);\n    float m = mag_array(temp, n);\n    free(temp);\n    return m;\n}\n#else\nvoid cuda_set_device(int n){}\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/cuda.h",
    "content": "#ifndef CUDA_H\n#define CUDA_H\n\n#include \"darknet.h\"\n\n#ifdef GPU\n\nvoid check_error(cudaError_t status);\ncublasHandle_t blas_handle();\nint *cuda_make_int_array(int *x, size_t n);\nvoid cuda_random(float *x_gpu, size_t n);\nfloat cuda_compare(float *x_gpu, float *x, size_t n, char *s);\ndim3 cuda_gridsize(size_t n);\n\n#ifdef CUDNN\ncudnnHandle_t cudnn_handle();\n#endif\n\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/darknet.h",
    "content": "#ifndef DARKNET_API\n#define DARKNET_API\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n\n#include <pthread.h>\n\n#define SECRET_NUM -1234\nextern int gpu_index;\n\n#ifdef GPU\n    #define BLOCK 512\n\n    #include \"cuda_runtime.h\"\n    #include \"curand.h\"\n    #include \"cublas_v2.h\"\n\n    #ifdef CUDNN\n    #include \"cudnn.h\"\n    #endif\n#endif\n\n#ifndef __cplusplus\n    #ifdef OPENCV\n    #include \"opencv2/highgui/highgui_c.h\"\n    #include \"opencv2/imgproc/imgproc_c.h\"\n    #include \"opencv2/core/version.hpp\"\n    #if CV_MAJOR_VERSION == 3\n    #include \"opencv2/videoio/videoio_c.h\"\n    #include \"opencv2/imgcodecs/imgcodecs_c.h\"\n    #endif\n    #endif\n#endif\n\ntypedef struct{\n    int classes;\n    char **names;\n} metadata;\n\nmetadata get_metadata(char *file);\n\ntypedef struct{\n    int *leaf;\n    int n;\n    int *parent;\n    int *child;\n    int *group;\n    char **name;\n\n    int groups;\n    int *group_size;\n    int *group_offset;\n} tree;\n\ntypedef enum{\n    LOGISTIC, RELU, RELIE, LINEAR, RAMP, TANH, PLSE, LEAKY, ELU, LOGGY, STAIR, HARDTAN, LHTAN\n} ACTIVATION;\n\ntypedef enum{\n    MULT, ADD, SUB, DIV\n} BINARY_ACTIVATION;\n\ntypedef enum {\n    CONVOLUTIONAL,\n    DECONVOLUTIONAL,\n    CONNECTED,\n    MAXPOOL,\n    SOFTMAX,\n    DETECTION,\n    DROPOUT,\n    CROP,\n    ROUTE,\n    COST,\n    NORMALIZATION,\n    AVGPOOL,\n    LOCAL,\n    SHORTCUT,\n    ACTIVE,\n    RNN,\n    GRU,\n    LSTM,\n    CRNN,\n    BATCHNORM,\n    NETWORK,\n    XNOR,\n    REGION,\n    REORG,\n    BLANK\n} LAYER_TYPE;\n\ntypedef enum{\n    SSE, MASKED, L1, SEG, SMOOTH\n} COST_TYPE;\n\ntypedef struct{\n    int batch;\n    float learning_rate;\n    float momentum;\n    float decay;\n    int adam;\n    float B1;\n    float B2;\n    float eps;\n    int t;\n} update_args;\n\nstruct network;\ntypedef struct network network;\n\nstruct layer;\ntypedef struct layer layer;\n\nstruct layer{\n    LAYER_TYPE type;\n    ACTIVATION 
activation;\n    COST_TYPE cost_type;\n    void (*forward)   (struct layer, struct network);\n    void (*backward)  (struct layer, struct network);\n    void (*update)    (struct layer, update_args);\n    void (*forward_gpu)   (struct layer, struct network);\n    void (*backward_gpu)  (struct layer, struct network);\n    void (*update_gpu)    (struct layer, update_args);\n    int batch_normalize;\n    int shortcut;\n    int batch;\n    int forced;\n    int flipped;\n    int inputs;\n    int outputs;\n    int nweights;\n    int nbiases;\n    int extra;\n    int truths;\n    int h,w,c;\n    int out_h, out_w, out_c;\n    int n;\n    int max_boxes;\n    int groups;\n    int size;\n    int side;\n    int stride;\n    int reverse;\n    int flatten;\n    int spatial;\n    int pad;\n    int sqrt;\n    int flip;\n    int index;\n    int binary;\n    int xnor;\n    int steps;\n    int hidden;\n    int truth;\n    float smooth;\n    float dot;\n    float angle;\n    float jitter;\n    float saturation;\n    float exposure;\n    float shift;\n    float ratio;\n    float learning_rate_scale;\n    int softmax;\n    int classes;\n    int coords;\n    int background;\n    int rescore;\n    int objectness;\n    int does_cost;\n    int joint;\n    int noadjust;\n    int reorg;\n    int log;\n    int tanh;\n\n    float alpha;\n    float beta;\n    float kappa;\n\n    float coord_scale;\n    float object_scale;\n    float noobject_scale;\n    float mask_scale;\n    float class_scale;\n    int bias_match;\n    int random;\n    float thresh;\n    int classfix;\n    int absolute;\n\n    int onlyforward;\n    int stopbackward;\n    int dontload;\n    int dontloadscales;\n\n    float temperature;\n    float probability;\n    float scale;\n\n    char  * cweights;\n    int   * indexes;\n    int   * input_layers;\n    int   * input_sizes;\n    int   * map;\n    float * rand;\n    float * cost;\n    float * state;\n    float * prev_state;\n    float * forgot_state;\n    float * forgot_delta;\n 
   float * state_delta;\n    float * combine_cpu;\n    float * combine_delta_cpu;\n\n    float * concat;\n    float * concat_delta;\n\n    float * binary_weights;\n\n    float * biases;\n    float * bias_updates;\n\n    float * scales;\n    float * scale_updates;\n\n    float * weights;\n    float * weight_updates;\n\n    float * delta;\n    float * output;\n    float * squared;\n    float * norms;\n\n    float * spatial_mean;\n    float * mean;\n    float * variance;\n\n    float * mean_delta;\n    float * variance_delta;\n\n    float * rolling_mean;\n    float * rolling_variance;\n\n    float * x;\n    float * x_norm;\n\n    float * m;\n    float * v;\n    \n    float * bias_m;\n    float * bias_v;\n    float * scale_m;\n    float * scale_v;\n\n\n    float *z_cpu;\n    float *r_cpu;\n    float *h_cpu;\n    float * prev_state_cpu;\n\n    float *temp_cpu;\n    float *temp2_cpu;\n    float *temp3_cpu;\n\n    float *dh_cpu;\n    float *hh_cpu;\n    float *prev_cell_cpu;\n    float *cell_cpu;\n    float *f_cpu;\n    float *i_cpu;\n    float *g_cpu;\n    float *o_cpu;\n    float *c_cpu;\n    float *dc_cpu; \n\n    float * binary_input;\n\n    struct layer *input_layer;\n    struct layer *self_layer;\n    struct layer *output_layer;\n\n    struct layer *reset_layer;\n    struct layer *update_layer;\n    struct layer *state_layer;\n\n    struct layer *input_gate_layer;\n    struct layer *state_gate_layer;\n    struct layer *input_save_layer;\n    struct layer *state_save_layer;\n    struct layer *input_state_layer;\n    struct layer *state_state_layer;\n\n    struct layer *input_z_layer;\n    struct layer *state_z_layer;\n\n    struct layer *input_r_layer;\n    struct layer *state_r_layer;\n\n    struct layer *input_h_layer;\n    struct layer *state_h_layer;\n\t\n    struct layer *wz;\n    struct layer *uz;\n    struct layer *wr;\n    struct layer *ur;\n    struct layer *wh;\n    struct layer *uh;\n    struct layer *uo;\n    struct layer *wo;\n    struct layer *uf;\n    
struct layer *wf;\n    struct layer *ui;\n    struct layer *wi;\n    struct layer *ug;\n    struct layer *wg;\n\n    tree *softmax_tree;\n\n    size_t workspace_size;\n\n#ifdef GPU\n    int *indexes_gpu;\n\n    float *z_gpu;\n    float *r_gpu;\n    float *h_gpu;\n\n    float *temp_gpu;\n    float *temp2_gpu;\n    float *temp3_gpu;\n\n    float *dh_gpu;\n    float *hh_gpu;\n    float *prev_cell_gpu;\n    float *cell_gpu;\n    float *f_gpu;\n    float *i_gpu;\n    float *g_gpu;\n    float *o_gpu;\n    float *c_gpu;\n    float *dc_gpu; \n\n    float *m_gpu;\n    float *v_gpu;\n    float *bias_m_gpu;\n    float *scale_m_gpu;\n    float *bias_v_gpu;\n    float *scale_v_gpu;\n\n    float * combine_gpu;\n    float * combine_delta_gpu;\n\n    float * prev_state_gpu;\n    float * forgot_state_gpu;\n    float * forgot_delta_gpu;\n    float * state_gpu;\n    float * state_delta_gpu;\n    float * gate_gpu;\n    float * gate_delta_gpu;\n    float * save_gpu;\n    float * save_delta_gpu;\n    float * concat_gpu;\n    float * concat_delta_gpu;\n\n    float * binary_input_gpu;\n    float * binary_weights_gpu;\n\n    float * mean_gpu;\n    float * variance_gpu;\n\n    float * rolling_mean_gpu;\n    float * rolling_variance_gpu;\n\n    float * variance_delta_gpu;\n    float * mean_delta_gpu;\n\n    float * x_gpu;\n    float * x_norm_gpu;\n    float * weights_gpu;\n    float * weight_updates_gpu;\n    float * weight_change_gpu;\n\n    float * biases_gpu;\n    float * bias_updates_gpu;\n    float * bias_change_gpu;\n\n    float * scales_gpu;\n    float * scale_updates_gpu;\n    float * scale_change_gpu;\n\n    float * output_gpu;\n    float * delta_gpu;\n    float * rand_gpu;\n    float * squared_gpu;\n    float * norms_gpu;\n#ifdef CUDNN\n    cudnnTensorDescriptor_t srcTensorDesc, dstTensorDesc;\n    cudnnTensorDescriptor_t dsrcTensorDesc, ddstTensorDesc;\n    cudnnTensorDescriptor_t normTensorDesc;\n    cudnnFilterDescriptor_t weightDesc;\n    cudnnFilterDescriptor_t dweightDesc;\n  
  cudnnConvolutionDescriptor_t convDesc;\n    cudnnConvolutionFwdAlgo_t fw_algo;\n    cudnnConvolutionBwdDataAlgo_t bd_algo;\n    cudnnConvolutionBwdFilterAlgo_t bf_algo;\n#endif\n#endif\n};\n\nvoid free_layer(layer);\n\ntypedef enum {\n    CONSTANT, STEP, EXP, POLY, STEPS, SIG, RANDOM\n} learning_rate_policy;\n\ntypedef struct network{\n    int n;\n    int batch;\n    size_t *seen;\n    int *t;\n    float epoch;\n    int subdivisions;\n    layer *layers;\n    float *output;\n    learning_rate_policy policy;\n\n    float learning_rate;\n    float momentum;\n    float decay;\n    float gamma;\n    float scale;\n    float power;\n    int time_steps;\n    int step;\n    int max_batches;\n    float *scales;\n    int   *steps;\n    int num_steps;\n    int burn_in;\n\n    int adam;\n    float B1;\n    float B2;\n    float eps;\n\n    int inputs;\n    int outputs;\n    int truths;\n    int notruth;\n    int h, w, c;\n    int max_crop;\n    int min_crop;\n    float max_ratio;\n    float min_ratio;\n    int center;\n    float angle;\n    float aspect;\n    float exposure;\n    float saturation;\n    float hue;\n    int random;\n\n    int gpu_index;\n    tree *hierarchy;\n\n    float *input;\n    float *truth;\n    float *delta;\n    float *workspace;\n    int train;\n    int index;\n    float *cost;\n\n#ifdef GPU\n    float *input_gpu;\n    float *truth_gpu;\n    float *delta_gpu;\n    float *output_gpu;\n#endif\n\n} network;\n\ntypedef struct {\n    int w;\n    int h;\n    float scale;\n    float rad;\n    float dx;\n    float dy;\n    float aspect;\n} augment_args;\n\ntypedef struct {\n    int w;\n    int h;\n    int c;\n    float *data;\n} image;\n\ntypedef struct{\n    float x, y, w, h;\n} box;\n\ntypedef struct matrix{\n    int rows, cols;\n    float **vals;\n} matrix;\n\n\ntypedef struct{\n    int w, h;\n    matrix X;\n    matrix y;\n    int shallow;\n    int *num_boxes;\n    box **boxes;\n} data;\n\ntypedef enum {\n    CLASSIFICATION_DATA, DETECTION_DATA, 
CAPTCHA_DATA, REGION_DATA, IMAGE_DATA, COMPARE_DATA, WRITING_DATA, SWAG_DATA, TAG_DATA, OLD_CLASSIFICATION_DATA, STUDY_DATA, DET_DATA, SUPER_DATA, LETTERBOX_DATA, REGRESSION_DATA, SEGMENTATION_DATA, INSTANCE_DATA\n} data_type;\n\ntypedef struct load_args{\n    int threads;\n    char **paths;\n    char *path;\n    int n;\n    int m;\n    char **labels;\n    int h;\n    int w;\n    int out_w;\n    int out_h;\n    int nh;\n    int nw;\n    int num_boxes;\n    int min, max, size;\n    int classes;\n    int background;\n    int scale;\n    int center;\n    int coords;\n    float jitter;\n    float angle;\n    float aspect;\n    float saturation;\n    float exposure;\n    float hue;\n    data *d;\n    image *im;\n    image *resized;\n    data_type type;\n    tree *hierarchy;\n} load_args;\n\ntypedef struct{\n    int id;\n    float x,y,w,h;\n    float left, right, top, bottom;\n} box_label;\n\n\nnetwork *load_network(char *cfg, char *weights, int clear);\nload_args get_base_args(network *net);\n\nvoid free_data(data d);\n\ntypedef struct node{\n    void *val;\n    struct node *next;\n    struct node *prev;\n} node;\n\ntypedef struct list{\n    int size;\n    node *front;\n    node *back;\n} list;\n\npthread_t load_data(load_args args);\nlist *read_data_cfg(char *filename);\nlist *read_cfg(char *filename);\nunsigned char *read_file(char *filename);\ndata resize_data(data orig, int w, int h);\ndata *tile_data(data orig, int divs, int size);\ndata select_data(data *orig, int *inds);\n\ndata load_data_region(int n, char **paths, int m, int w, int h, int size, int classes, float jitter, float hue, float saturation, float exposure);\n\n\nvoid forward_network(network *net);\nvoid backward_network(network *net);\nvoid update_network(network *net);\n\n\nvoid axpy_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);\nvoid copy_cpu(int N, float *X, int INCX, float *Y, int INCY);\nvoid scal_cpu(int N, float ALPHA, float *X, int INCX);\nvoid normalize_cpu(float *x, float 
*mean, float *variance, int batch, int filters, int spatial);\nvoid softmax(float *input, int n, float temp, int stride, float *output);\n\nint best_3d_shift_r(image a, image b, int min, int max);\n#ifdef GPU\nvoid axpy_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY);\nvoid fill_gpu(int N, float ALPHA, float * X, int INCX);\nvoid scal_gpu(int N, float ALPHA, float * X, int INCX);\nvoid copy_gpu(int N, float * X, int INCX, float * Y, int INCY);\n\nvoid cuda_set_device(int n);\nvoid cuda_free(float *x_gpu);\nfloat *cuda_make_array(float *x, size_t n);\nvoid cuda_pull_array(float *x_gpu, float *x, size_t n);\nfloat cuda_mag_array(float *x_gpu, size_t n);\nvoid cuda_push_array(float *x_gpu, float *x, size_t n);\n\nvoid forward_network_gpu(network *net);\nvoid backward_network_gpu(network *net);\nvoid update_network_gpu(network *net);\n\nfloat train_networks(network **nets, int n, data d, int interval);\nvoid sync_nets(network **nets, int n, int interval);\nvoid harmless_update_network_gpu(network *net);\n#endif\nvoid save_image_png(image im, const char *name);\nvoid get_next_batch(data d, int n, int offset, float *X, float *y);\nvoid grayscale_image_3c(image im);\nvoid normalize_image(image p);\nvoid matrix_to_csv(matrix m);\nfloat train_network_sgd(network *net, data d, int n);\nvoid rgbgr_image(image im);\ndata copy_data(data d);\ndata concat_data(data d1, data d2);\ndata load_cifar10_data(char *filename);\nfloat matrix_topk_accuracy(matrix truth, matrix guess, int k);\nvoid matrix_add_matrix(matrix from, matrix to);\nvoid scale_matrix(matrix m, float scale);\nmatrix csv_to_matrix(char *filename);\nfloat *network_accuracies(network *net, data d, int n);\nfloat train_network_datum(network *net);\nimage make_random_image(int w, int h, int c);\n\nvoid denormalize_connected_layer(layer l);\nvoid denormalize_convolutional_layer(layer l);\nvoid statistics_connected_layer(layer l);\nvoid rescale_weights(layer l, float scale, float trans);\nvoid 
rgbgr_weights(layer l);\nimage *get_weights(layer l);\n\nvoid demo(char *cfgfile, char *weightfile, float thresh, int cam_index, const char *filename, char **names, int classes, int frame_skip, char *prefix, int avg, float hier_thresh, int w, int h, int fps, int fullscreen);\nvoid get_detection_boxes(layer l, int w, int h, float thresh, float **probs, box *boxes, int only_objectness);\n\nchar *option_find_str(list *l, char *key, char *def);\nint option_find_int(list *l, char *key, int def);\n\nnetwork *parse_network_cfg(char *filename);\nvoid save_weights(network *net, char *filename);\nvoid load_weights(network *net, char *filename);\nvoid save_weights_upto(network *net, char *filename, int cutoff);\nvoid load_weights_upto(network *net, char *filename, int start, int cutoff);\n\nvoid zero_objectness(layer l);\nvoid get_region_boxes(layer l, int w, int h, int netw, int neth, float thresh, float **probs, box *boxes, float **masks, int only_objectness, int *map, float tree_thresh, int relative);\nvoid free_network(network *net);\nvoid set_batch_network(network *net, int b);\nvoid set_temp_network(network *net, float t);\nimage load_image(char *filename, int w, int h, int c);\nimage load_image_color(char *filename, int w, int h);\nimage make_image(int w, int h, int c);\nimage resize_image(image im, int w, int h);\nimage letterbox_image(image im, int w, int h);\nimage crop_image(image im, int dx, int dy, int w, int h);\nimage resize_min(image im, int min);\nimage resize_max(image im, int max);\nimage threshold_image(image im, float thresh);\nimage mask_to_rgb(image mask);\nint resize_network(network *net, int w, int h);\nvoid free_matrix(matrix m);\nvoid test_resize(char *filename);\nvoid save_image(image p, const char *name);\nvoid show_image(image p, const char *name);\nimage copy_image(image p);\nvoid draw_box_width(image a, int x1, int y1, int x2, int y2, int w, float r, float g, float b);\nfloat get_current_rate(network *net);\nvoid composite_3d(char *f1, char 
*f2, char *out, int delta);\ndata load_data_old(char **paths, int n, int m, char **labels, int k, int w, int h);\nsize_t get_current_batch(network *net);\nvoid constrain_image(image im);\nimage get_network_image_layer(network *net, int i);\nlayer get_network_output_layer(network *net);\nvoid top_predictions(network *net, int n, int *index);\nvoid flip_image(image a);\nimage float_to_image(int w, int h, int c, float *data);\nvoid ghost_image(image source, image dest, int dx, int dy);\nfloat network_accuracy(network *net, data d);\nvoid random_distort_image(image im, float hue, float saturation, float exposure);\nvoid fill_image(image m, float s);\nimage grayscale_image(image im);\nvoid rotate_image_cw(image im, int times);\ndouble what_time_is_it_now(void);\nimage rotate_image(image m, float rad);\nvoid visualize_network(network *net);\nfloat box_iou(box a, box b);\nvoid do_nms(box *boxes, float **probs, int total, int classes, float thresh);\ndata load_all_cifar10(void);\nbox_label *read_boxes(char *filename, int *n);\nbox float_to_box(float *f, int stride);\nvoid draw_detections(image im, int num, float thresh, box *boxes, float **probs, float **masks, char **names, image **alphabet, int classes);\n\nmatrix network_predict_data(network *net, data test);\nimage **load_alphabet(void);\nimage get_network_image(network *net);\nfloat *network_predict(network *net, float *input);\n\nint network_width(network *net);\nint network_height(network *net);\nfloat *network_predict_image(network *net, image im);\nvoid network_detect(network *net, image im, float thresh, float hier_thresh, float nms, box *boxes, float **probs);\nint num_boxes(network *net);\nbox *make_boxes(network *net);\n\nvoid reset_network_state(network *net, int b);\n\nchar **get_labels(char *filename);\nvoid do_nms_sort(box *boxes, float **probs, int total, int classes, float thresh);\nvoid do_nms_obj(box *boxes, float **probs, int total, int classes, float thresh);\n\nmatrix make_matrix(int rows, int 
cols);\n\nfloat **make_probs(network *net);\n\n#ifndef __cplusplus\n#ifdef OPENCV\nimage get_image_from_stream(CvCapture *cap);\n#endif\n#endif\nvoid free_image(image m);\nfloat train_network(network *net, data d);\npthread_t load_data_in_thread(load_args args);\nvoid load_data_blocking(load_args args);\nlist *get_paths(char *filename);\nvoid hierarchy_predictions(float *predictions, int n, tree *hier, int only_leaves, int stride);\nvoid change_leaves(tree *t, char *leaf_list);\n\nint find_int_arg(int argc, char **argv, char *arg, int def);\nfloat find_float_arg(int argc, char **argv, char *arg, float def);\nint find_arg(int argc, char* argv[], char *arg);\nchar *find_char_arg(int argc, char **argv, char *arg, char *def);\nchar *basecfg(char *cfgfile);\nvoid find_replace(char *str, char *orig, char *rep, char *output);\nvoid free_ptrs(void **ptrs, int n);\nchar *fgetl(FILE *fp);\nvoid strip(char *s);\nfloat sec(clock_t clocks);\nvoid **list_to_array(list *l);\nvoid top_k(float *a, int n, int k, int *index);\nint *read_map(char *filename);\nvoid error(const char *s);\nint max_index(float *a, int n);\nint max_int_index(int *a, int n);\nint sample_array(float *a, int n);\nint *random_index_order(int min, int max);\nvoid free_list(list *l);\nfloat mse_array(float *a, int n);\nfloat variance_array(float *a, int n);\nfloat mag_array(float *a, int n);\nfloat mean_array(float *a, int n);\nfloat sum_array(float *a, int n);\nvoid normalize_array(float *a, int n);\nint *read_intlist(char *s, int *n, int d);\nsize_t rand_size_t(void);\nfloat rand_normal(void);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/data.c",
    "content": "#include \"data.h\"\n#include \"utils.h\"\n#include \"image.h\"\n#include \"cuda.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\npthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;\n\nlist *get_paths(char *filename)\n{\n    char *path;\n    FILE *file = fopen(filename, \"r\");\n    if(!file) file_error(filename);\n    list *lines = make_list();\n    while((path=fgetl(file))){\n        list_insert(lines, path);\n    }\n    fclose(file);\n    return lines;\n}\n\n/*\nchar **get_random_paths_indexes(char **paths, int n, int m, int *indexes)\n{\n    char **random_paths = calloc(n, sizeof(char*));\n    int i;\n    pthread_mutex_lock(&mutex);\n    for(i = 0; i < n; ++i){\n        int index = rand()%m;\n        indexes[i] = index;\n        random_paths[i] = paths[index];\n        if(i == 0) printf(\"%s\\n\", paths[index]);\n    }\n    pthread_mutex_unlock(&mutex);\n    return random_paths;\n}\n*/\n\nchar **get_random_paths(char **paths, int n, int m)\n{\n    char **random_paths = calloc(n, sizeof(char*));\n    int i;\n    pthread_mutex_lock(&mutex);\n    for(i = 0; i < n; ++i){\n        int index = rand()%m;\n        random_paths[i] = paths[index];\n        //if(i == 0) printf(\"%s\\n\", paths[index]);\n    }\n    pthread_mutex_unlock(&mutex);\n    return random_paths;\n}\n\nchar **find_replace_paths(char **paths, int n, char *find, char *replace)\n{\n    char **replace_paths = calloc(n, sizeof(char*));\n    int i;\n    for(i = 0; i < n; ++i){\n        char replaced[4096];\n        find_replace(paths[i], find, replace, replaced);\n        replace_paths[i] = copy_string(replaced);\n    }\n    return replace_paths;\n}\n\nmatrix load_image_paths_gray(char **paths, int n, int w, int h)\n{\n    int i;\n    matrix X;\n    X.rows = n;\n    X.vals = calloc(X.rows, sizeof(float*));\n    X.cols = 0;\n\n    for(i = 0; i < n; ++i){\n        image im = load_image(paths[i], w, h, 3);\n\n        image gray = grayscale_image(im);\n        
free_image(im);\n        im = gray;\n\n        X.vals[i] = im.data;\n        X.cols = im.h*im.w*im.c;\n    }\n    return X;\n}\n\nmatrix load_image_paths(char **paths, int n, int w, int h)\n{\n    int i;\n    matrix X;\n    X.rows = n;\n    X.vals = calloc(X.rows, sizeof(float*));\n    X.cols = 0;\n\n    for(i = 0; i < n; ++i){\n        image im = load_image_color(paths[i], w, h);\n        X.vals[i] = im.data;\n        X.cols = im.h*im.w*im.c;\n    }\n    return X;\n}\n\nmatrix load_image_augment_paths(char **paths, int n, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure, int center)\n{\n    int i;\n    matrix X;\n    X.rows = n;\n    X.vals = calloc(X.rows, sizeof(float*));\n    X.cols = 0;\n\n    for(i = 0; i < n; ++i){\n        image im = load_image_color(paths[i], 0, 0);\n        image crop;\n        if(center){\n            crop = center_crop_image(im, size, size);\n        } else {\n            crop = random_augment_image(im, angle, aspect, min, max, size, size);\n        }\n        int flip = rand()%2;\n        if (flip) flip_image(crop);\n        random_distort_image(crop, hue, saturation, exposure);\n\n        /*\n        show_image(im, \"orig\");\n        show_image(crop, \"crop\");\n        cvWaitKey(0);\n        */\n        free_image(im);\n        X.vals[i] = crop.data;\n        X.cols = crop.h*crop.w*crop.c;\n    }\n    return X;\n}\n\n\nbox_label *read_boxes(char *filename, int *n)\n{\n    FILE *file = fopen(filename, \"r\");\n    if(!file) file_error(filename);\n    float x, y, h, w;\n    int id;\n    int count = 0;\n    int size = 64;\n    box_label *boxes = calloc(size, sizeof(box_label));\n    while(fscanf(file, \"%d %f %f %f %f\", &id, &x, &y, &w, &h) == 5){\n        if(count == size) {\n            size = size * 2;\n            boxes = realloc(boxes, size*sizeof(box_label));\n        }\n        boxes[count].id = id;\n        boxes[count].x = x;\n        boxes[count].y = y;\n        
boxes[count].h = h;\n        boxes[count].w = w;\n        boxes[count].left   = x - w/2;\n        boxes[count].right  = x + w/2;\n        boxes[count].top    = y - h/2;\n        boxes[count].bottom = y + h/2;\n        ++count;\n    }\n    fclose(file);\n    *n = count;\n    return boxes;\n}\n\nvoid randomize_boxes(box_label *b, int n)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        box_label swap = b[i];\n        int index = rand()%n;\n        b[i] = b[index];\n        b[index] = swap;\n    }\n}\n\nvoid correct_boxes(box_label *boxes, int n, float dx, float dy, float sx, float sy, int flip)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        if(boxes[i].x == 0 && boxes[i].y == 0) {\n            boxes[i].x = 999999;\n            boxes[i].y = 999999;\n            boxes[i].w = 999999;\n            boxes[i].h = 999999;\n            continue;\n        }\n        boxes[i].left   = boxes[i].left  * sx - dx;\n        boxes[i].right  = boxes[i].right * sx - dx;\n        boxes[i].top    = boxes[i].top   * sy - dy;\n        boxes[i].bottom = boxes[i].bottom* sy - dy;\n\n        if(flip){\n            float swap = boxes[i].left;\n            boxes[i].left = 1. - boxes[i].right;\n            boxes[i].right = 1. 
- swap;\n        }\n\n        boxes[i].left =  constrain(0, 1, boxes[i].left);\n        boxes[i].right = constrain(0, 1, boxes[i].right);\n        boxes[i].top =   constrain(0, 1, boxes[i].top);\n        boxes[i].bottom =   constrain(0, 1, boxes[i].bottom);\n\n        boxes[i].x = (boxes[i].left+boxes[i].right)/2;\n        boxes[i].y = (boxes[i].top+boxes[i].bottom)/2;\n        boxes[i].w = (boxes[i].right - boxes[i].left);\n        boxes[i].h = (boxes[i].bottom - boxes[i].top);\n\n        boxes[i].w = constrain(0, 1, boxes[i].w);\n        boxes[i].h = constrain(0, 1, boxes[i].h);\n    }\n}\n\nvoid fill_truth_swag(char *path, float *truth, int classes, int flip, float dx, float dy, float sx, float sy)\n{\n    char labelpath[4096];\n    find_replace(path, \"images\", \"labels\", labelpath);\n    find_replace(labelpath, \"JPEGImages\", \"labels\", labelpath);\n    find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPG\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPEG\", \".txt\", labelpath);\n\n    int count = 0;\n    box_label *boxes = read_boxes(labelpath, &count);\n    randomize_boxes(boxes, count);\n    correct_boxes(boxes, count, dx, dy, sx, sy, flip);\n    float x,y,w,h;\n    int id;\n    int i;\n\n    for (i = 0; i < count && i < 30; ++i) {\n        x =  boxes[i].x;\n        y =  boxes[i].y;\n        w =  boxes[i].w;\n        h =  boxes[i].h;\n        id = boxes[i].id;\n\n        if (w < .0 || h < .0) continue;\n\n        int index = (4+classes) * i;\n\n        truth[index++] = x;\n        truth[index++] = y;\n        truth[index++] = w;\n        truth[index++] = h;\n\n        if (id < classes) truth[index+id] = 1;\n    }\n    free(boxes);\n}\n\nvoid fill_truth_region(char *path, float *truth, int classes, int num_boxes, int flip, float dx, float dy, float sx, float sy)\n{\n    char labelpath[4096];\n    find_replace(path, \"images\", \"labels\", labelpath);\n    find_replace(labelpath, \"JPEGImages\", 
\"labels\", labelpath);\n\n    find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n    find_replace(labelpath, \".png\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPG\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPEG\", \".txt\", labelpath);\n    int count = 0;\n    box_label *boxes = read_boxes(labelpath, &count);\n    randomize_boxes(boxes, count);\n    correct_boxes(boxes, count, dx, dy, sx, sy, flip);\n    float x,y,w,h;\n    int id;\n    int i;\n\n    for (i = 0; i < count; ++i) {\n        x =  boxes[i].x;\n        y =  boxes[i].y;\n        w =  boxes[i].w;\n        h =  boxes[i].h;\n        id = boxes[i].id;\n\n        if (w < .005 || h < .005) continue;\n\n        int col = (int)(x*num_boxes);\n        int row = (int)(y*num_boxes);\n\n        x = x*num_boxes - col;\n        y = y*num_boxes - row;\n\n        int index = (col+row*num_boxes)*(5+classes);\n        if (truth[index]) continue;\n        truth[index++] = 1;\n\n        if (id < classes) truth[index+id] = 1;\n        index += classes;\n\n        truth[index++] = x;\n        truth[index++] = y;\n        truth[index++] = w;\n        truth[index++] = h;\n    }\n    free(boxes);\n}\n\nvoid load_rle(image im, int *rle, int n)\n{\n    int count = 0;\n    int curr = 0;\n    int i,j;\n    for(i = 0; i < n; ++i){\n        for(j = 0; j < rle[i]; ++j){\n            im.data[count++] = curr;\n        }\n        curr = 1 - curr;\n    }\n    for(; count < im.h*im.w*im.c; ++count){\n        im.data[count] = curr;\n    }\n}\n\nvoid or_image(image src, image dest, int c)\n{\n    int i;\n    for(i = 0; i < src.w*src.h; ++i){\n        if(src.data[i]) dest.data[dest.w*dest.h*c + i] = 1;\n    }\n}\n\nvoid exclusive_image(image src)\n{\n    int k, j, i;\n    int s = src.w*src.h;\n    for(k = 0; k < src.c-1; ++k){\n        for(i = 0; i < s; ++i){\n            if (src.data[k*s + i]){\n                for(j = k+1; j < src.c; ++j){\n                    src.data[j*s + i] = 0;\n                }\n     
       }\n        }\n    }\n}\n\nbox bound_image(image im)\n{\n    int x,y;\n    int minx = im.w;\n    int miny = im.h;\n    int maxx = 0;\n    int maxy = 0;\n    for(y = 0; y < im.h; ++y){\n        for(x = 0; x < im.w; ++x){\n            if(im.data[y*im.w + x]){\n                minx = (x < minx) ? x : minx;\n                miny = (y < miny) ? y : miny;\n                maxx = (x > maxx) ? x : maxx;\n                maxy = (y > maxy) ? y : maxy;\n            }\n        }\n    }\n    box b = {minx, miny, maxx-minx + 1, maxy-miny + 1};\n    //printf(\"%f %f %f %f\\n\", b.x, b.y, b.w, b.h);\n    return b;\n}\n\nvoid fill_truth_iseg(char *path, int num_boxes, float *truth, int classes, int w, int h, augment_args aug, int flip, int mw, int mh)\n{\n    char labelpath[4096];\n    find_replace(path, \"images\", \"mask\", labelpath);\n    find_replace(labelpath, \"JPEGImages\", \"mask\", labelpath);\n    find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPG\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPEG\", \".txt\", labelpath);\n    FILE *file = fopen(labelpath, \"r\");\n    if(!file) file_error(labelpath);\n    char buff[32788];\n    int id;\n    int i = 0;\n    image part = make_image(w, h, 1);\n    while((fscanf(file, \"%d %s\", &id, buff) == 2) && i < num_boxes){\n        int n = 0;\n        int *rle = read_intlist(buff, &n, 0);\n        load_rle(part, rle, n);\n        image sized = rotate_crop_image(part, aug.rad, aug.scale, aug.w, aug.h, aug.dx, aug.dy, aug.aspect);\n        if(flip) flip_image(sized);\n        box b = bound_image(sized);\n        if(b.w > 0){\n            image crop = crop_image(sized, b.x, b.y, b.w, b.h);\n            image mask = resize_image(crop, mw, mh);\n            truth[i*(4 + mw*mh + 1) + 0] = (b.x + b.w/2.)/sized.w;\n            truth[i*(4 + mw*mh + 1) + 1] = (b.y + b.h/2.)/sized.h;\n            truth[i*(4 + mw*mh + 1) + 2] = b.w/sized.w;\n            truth[i*(4 + mw*mh + 1) + 3] = 
b.h/sized.h;\n            int j;\n            for(j = 0; j < mw*mh; ++j){\n                truth[i*(4 + mw*mh + 1) + 4 + j] = mask.data[j];\n            }\n            truth[i*(4 + mw*mh + 1) + 4 + mw*mh] = id;\n            free_image(crop);\n            free_image(mask);\n            ++i;\n        }\n        free_image(sized);\n        free(rle);\n    }\n    fclose(file);\n    free_image(part);\n}\n\n\nvoid fill_truth_detection(char *path, int num_boxes, float *truth, int classes, int flip, float dx, float dy, float sx, float sy)\n{\n    char labelpath[4096];\n    find_replace(path, \"images\", \"labels\", labelpath);\n    find_replace(labelpath, \"JPEGImages\", \"labels\", labelpath);\n\n    find_replace(labelpath, \"raw\", \"labels\", labelpath);\n    find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n    find_replace(labelpath, \".png\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPG\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPEG\", \".txt\", labelpath);\n    int count = 0;\n    box_label *boxes = read_boxes(labelpath, &count);\n    randomize_boxes(boxes, count);\n    correct_boxes(boxes, count, dx, dy, sx, sy, flip);\n    if(count > num_boxes) count = num_boxes;\n    float x,y,w,h;\n    int id;\n    int i;\n\n    for (i = 0; i < count; ++i) {\n        x =  boxes[i].x;\n        y =  boxes[i].y;\n        w =  boxes[i].w;\n        h =  boxes[i].h;\n        id = boxes[i].id;\n\n        if ((w < .001 || h < .001)) continue;\n\n        truth[i*5+0] = x;\n        truth[i*5+1] = y;\n        truth[i*5+2] = w;\n        truth[i*5+3] = h;\n        truth[i*5+4] = id;\n    }\n    free(boxes);\n}\n\n#define NUMCHARS 37\n\nvoid print_letters(float *pred, int n)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        int index = max_index(pred+i*NUMCHARS, NUMCHARS);\n        printf(\"%c\", int_to_alphanum(index));\n    }\n    printf(\"\\n\");\n}\n\nvoid fill_truth_captcha(char *path, int n, float *truth)\n{\n    char *begin = strrchr(path, '/');\n  
  ++begin;\n    int i;\n    for(i = 0; i < strlen(begin) && i < n && begin[i] != '.'; ++i){\n        int index = alphanum_to_int(begin[i]);\n        if(index > 35) printf(\"Bad %c\\n\", begin[i]);\n        truth[i*NUMCHARS+index] = 1;\n    }\n    for(;i < n; ++i){\n        truth[i*NUMCHARS + NUMCHARS-1] = 1;\n    }\n}\n\ndata load_data_captcha(char **paths, int n, int m, int k, int w, int h)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.shallow = 0;\n    d.X = load_image_paths(paths, n, w, h);\n    d.y = make_matrix(n, k*NUMCHARS);\n    int i;\n    for(i = 0; i < n; ++i){\n        fill_truth_captcha(paths[i], k, d.y.vals[i]);\n    }\n    if(m) free(paths);\n    return d;\n}\n\ndata load_data_captcha_encode(char **paths, int n, int m, int w, int h)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.shallow = 0;\n    d.X = load_image_paths(paths, n, w, h);\n    d.X.cols = 17100;\n    d.y = d.X;\n    if(m) free(paths);\n    return d;\n}\n\nvoid fill_truth(char *path, char **labels, int k, float *truth)\n{\n    int i;\n    memset(truth, 0, k*sizeof(float));\n    int count = 0;\n    for(i = 0; i < k; ++i){\n        if(strstr(path, labels[i])){\n            truth[i] = 1;\n            ++count;\n        }\n    }\n    if(count != 1 && (k != 1 || count != 0)) printf(\"Too many or too few labels: %d, %s\\n\", count, path);\n}\n\nvoid fill_hierarchy(float *truth, int k, tree *hierarchy)\n{\n    int j;\n    for(j = 0; j < k; ++j){\n        if(truth[j]){\n            int parent = hierarchy->parent[j];\n            while(parent >= 0){\n                truth[parent] = 1;\n                parent = hierarchy->parent[parent];\n            }\n        }\n    }\n    int i;\n    int count = 0;\n    for(j = 0; j < hierarchy->groups; ++j){\n        //printf(\"%d\\n\", count);\n        int mask = 1;\n        for(i = 0; i < hierarchy->group_size[j]; ++i){\n            if(truth[count + i]){\n                mask = 0;\n               
 break;\n            }\n        }\n        if (mask) {\n            for(i = 0; i < hierarchy->group_size[j]; ++i){\n                truth[count + i] = SECRET_NUM;\n            }\n        }\n        count += hierarchy->group_size[j];\n    }\n}\n\nmatrix load_regression_labels_paths(char **paths, int n)\n{\n    matrix y = make_matrix(n, 1);\n    int i;\n    for(i = 0; i < n; ++i){\n        char labelpath[4096];\n        find_replace(paths[i], \"images\", \"targets\", labelpath);\n        find_replace(labelpath, \"JPEGImages\", \"targets\", labelpath);\n        find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n        find_replace(labelpath, \".png\", \".txt\", labelpath);\n\n        FILE *file = fopen(labelpath, \"r\");\n        if(!file) file_error(labelpath);\n        fscanf(file, \"%f\", &(y.vals[i][0]));\n        fclose(file);\n    }\n    return y;\n}\n\nmatrix load_labels_paths(char **paths, int n, char **labels, int k, tree *hierarchy)\n{\n    matrix y = make_matrix(n, k);\n    int i;\n    for(i = 0; i < n && labels; ++i){\n        fill_truth(paths[i], labels, k, y.vals[i]);\n        if(hierarchy){\n            fill_hierarchy(y.vals[i], k, hierarchy);\n        }\n    }\n    return y;\n}\n\nmatrix load_tags_paths(char **paths, int n, int k)\n{\n    matrix y = make_matrix(n, k);\n    int i;\n    int count = 0;\n    for(i = 0; i < n; ++i){\n        char label[4096];\n        find_replace(paths[i], \"imgs\", \"labels\", label);\n        find_replace(label, \"_iconl.jpeg\", \".txt\", label);\n        FILE *file = fopen(label, \"r\");\n        if(!file){\n            find_replace(label, \"labels\", \"labels2\", label);\n            file = fopen(label, \"r\");\n            if(!file) continue;\n        }\n        ++count;\n        int tag;\n        while(fscanf(file, \"%d\", &tag) == 1){\n            if(tag < k){\n                y.vals[i][tag] = 1;\n            }\n        }\n        fclose(file);\n    }\n    printf(\"%d/%d\\n\", count, n);\n    return y;\n}\n\nchar **get_labels(char *filename)\n{\n    
list *plist = get_paths(filename);\n    char **labels = (char **)list_to_array(plist);\n    free_list(plist);\n    return labels;\n}\n\nvoid free_data(data d)\n{\n    if(!d.shallow){\n        free_matrix(d.X);\n        free_matrix(d.y);\n    }else{\n        free(d.X.vals);\n        free(d.y.vals);\n    }\n}\n\nimage get_segmentation_image(char *path, int w, int h, int classes)\n{\n    char labelpath[4096];\n    find_replace(path, \"images\", \"mask\", labelpath);\n    find_replace(labelpath, \"JPEGImages\", \"mask\", labelpath);\n    find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPG\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPEG\", \".txt\", labelpath);\n    image mask = make_image(w, h, classes);\n    FILE *file = fopen(labelpath, \"r\");\n    if(!file) file_error(labelpath);\n    char buff[32788];\n    int id;\n    image part = make_image(w, h, 1);\n    while(fscanf(file, \"%d %s\", &id, buff) == 2){\n        int n = 0;\n        int *rle = read_intlist(buff, &n, 0);\n        load_rle(part, rle, n);\n        or_image(part, mask, id);\n        free(rle);\n    }\n    //exclusive_image(mask);\n    fclose(file);\n    free_image(part);\n    return mask;\n}\n\nimage get_segmentation_image2(char *path, int w, int h, int classes)\n{\n    char labelpath[4096];\n    find_replace(path, \"images\", \"mask\", labelpath);\n    find_replace(labelpath, \"JPEGImages\", \"mask\", labelpath);\n    find_replace(labelpath, \".jpg\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPG\", \".txt\", labelpath);\n    find_replace(labelpath, \".JPEG\", \".txt\", labelpath);\n    image mask = make_image(w, h, classes+1);\n    int i;\n    for(i = 0; i < w*h; ++i){\n        mask.data[w*h*classes + i] = 1;\n    }\n    FILE *file = fopen(labelpath, \"r\");\n    if(!file) file_error(labelpath);\n    char buff[32788];\n    int id;\n    image part = make_image(w, h, 1);\n    while(fscanf(file, \"%d %s\", &id, buff) == 2){\n        int 
n = 0;\n        int *rle = read_intlist(buff, &n, 0);\n        load_rle(part, rle, n);\n        or_image(part, mask, id);\n        for(i = 0; i < w*h; ++i){\n            if(part.data[i]) mask.data[w*h*classes + i] = 0;\n        }\n        free(rle);\n    }\n    //exclusive_image(mask);\n    fclose(file);\n    free_image(part);\n    return mask;\n}\n\ndata load_data_seg(int n, char **paths, int m, int w, int h, int classes, int min, int max, float angle, float aspect, float hue, float saturation, float exposure, int div)\n{\n    char **random_paths = get_random_paths(paths, n, m);\n    int i;\n    data d = {0};\n    d.shallow = 0;\n\n    d.X.rows = n;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n    d.X.cols = h*w*3;\n\n\n    d.y.rows = n;\n    d.y.cols = h*w*classes/div/div;\n    d.y.vals = calloc(d.X.rows, sizeof(float*));\n\n    for(i = 0; i < n; ++i){\n        image orig = load_image_color(random_paths[i], 0, 0);\n        augment_args a = random_augment_args(orig, angle, aspect, min, max, w, h);\n        image sized = rotate_crop_image(orig, a.rad, a.scale, a.w, a.h, a.dx, a.dy, a.aspect);\n\n        int flip = rand()%2;\n        if(flip) flip_image(sized);\n        random_distort_image(sized, hue, saturation, exposure);\n        d.X.vals[i] = sized.data;\n\n        image mask = get_segmentation_image(random_paths[i], orig.w, orig.h, classes);\n        //image mask = make_image(orig.w, orig.h, classes+1);\n        image sized_m = rotate_crop_image(mask, a.rad, a.scale/div, a.w/div, a.h/div, a.dx/div, a.dy/div, a.aspect);\n\n        if(flip) flip_image(sized_m);\n        d.y.vals[i] = sized_m.data;\n\n        free_image(orig);\n        free_image(mask);\n\n        /*\n           image rgb = mask_to_rgb(sized_m, classes);\n           show_image(rgb, \"part\");\n           show_image(sized, \"orig\");\n           cvWaitKey(0);\n           free_image(rgb);\n         */\n    }\n    free(random_paths);\n    return d;\n}\n\ndata load_data_iseg(int n, char 
**paths, int m, int w, int h, int classes, int boxes, int coords, int min, int max, float angle, float aspect, float hue, float saturation, float exposure)\n{\n    char **random_paths = get_random_paths(paths, n, m);\n    int i;\n    data d = {0};\n    d.shallow = 0;\n\n    d.X.rows = n;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n    d.X.cols = h*w*3;\n\n    d.y = make_matrix(n, (coords+1)*boxes);\n\n    for(i = 0; i < n; ++i){\n        image orig = load_image_color(random_paths[i], 0, 0);\n        augment_args a = random_augment_args(orig, angle, aspect, min, max, w, h);\n        image sized = rotate_crop_image(orig, a.rad, a.scale, a.w, a.h, a.dx, a.dy, a.aspect);\n\n        int flip = rand()%2;\n        if(flip) flip_image(sized);\n        random_distort_image(sized, hue, saturation, exposure);\n        d.X.vals[i] = sized.data;\n        //show_image(sized, \"image\");\n\n        fill_truth_iseg(random_paths[i], boxes, d.y.vals[i], classes, orig.w, orig.h, a, flip, 14, 14);\n\n        free_image(orig);\n\n        /*\n           image rgb = mask_to_rgb(sized_m, classes);\n           show_image(rgb, \"part\");\n           show_image(sized, \"orig\");\n           cvWaitKey(0);\n           free_image(rgb);\n         */\n    }\n    free(random_paths);\n    return d;\n}\n\ndata load_data_region(int n, char **paths, int m, int w, int h, int size, int classes, float jitter, float hue, float saturation, float exposure)\n{\n    char **random_paths = get_random_paths(paths, n, m);\n    int i;\n    data d = {0};\n    d.shallow = 0;\n\n    d.X.rows = n;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n    d.X.cols = h*w*3;\n\n    int k = size*size*(5+classes);\n    d.y = make_matrix(n, k);\n    for(i = 0; i < n; ++i){\n        image orig = load_image_color(random_paths[i], 0, 0);\n\n        int oh = orig.h;\n        int ow = orig.w;\n\n        int dw = (ow*jitter);\n        int dh = (oh*jitter);\n\n        int pleft  = rand_uniform(-dw, dw);\n        int pright = 
rand_uniform(-dw, dw);\n        int ptop   = rand_uniform(-dh, dh);\n        int pbot   = rand_uniform(-dh, dh);\n\n        int swidth =  ow - pleft - pright;\n        int sheight = oh - ptop - pbot;\n\n        float sx = (float)swidth  / ow;\n        float sy = (float)sheight / oh;\n\n        int flip = rand()%2;\n        image cropped = crop_image(orig, pleft, ptop, swidth, sheight);\n\n        float dx = ((float)pleft/ow)/sx;\n        float dy = ((float)ptop /oh)/sy;\n\n        image sized = resize_image(cropped, w, h);\n        if(flip) flip_image(sized);\n        random_distort_image(sized, hue, saturation, exposure);\n        d.X.vals[i] = sized.data;\n\n        fill_truth_region(random_paths[i], d.y.vals[i], classes, size, flip, dx, dy, 1./sx, 1./sy);\n\n        free_image(orig);\n        free_image(cropped);\n    }\n    free(random_paths);\n    return d;\n}\n\ndata load_data_compare(int n, char **paths, int m, int classes, int w, int h)\n{\n    if(m) paths = get_random_paths(paths, 2*n, m);\n    int i,j;\n    data d = {0};\n    d.shallow = 0;\n\n    d.X.rows = n;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n    d.X.cols = h*w*6;\n\n    int k = 2*(classes);\n    d.y = make_matrix(n, k);\n    for(i = 0; i < n; ++i){\n        image im1 = load_image_color(paths[i*2],   w, h);\n        image im2 = load_image_color(paths[i*2+1], w, h);\n\n        d.X.vals[i] = calloc(d.X.cols, sizeof(float));\n        memcpy(d.X.vals[i],         im1.data, h*w*3*sizeof(float));\n        memcpy(d.X.vals[i] + h*w*3, im2.data, h*w*3*sizeof(float));\n\n        int id;\n        float iou;\n\n        char imlabel1[4096];\n        char imlabel2[4096];\n        find_replace(paths[i*2],   \"imgs\", \"labels\", imlabel1);\n        find_replace(imlabel1, \"jpg\", \"txt\", imlabel1);\n        FILE *fp1 = fopen(imlabel1, \"r\");\n\n        while(fscanf(fp1, \"%d %f\", &id, &iou) == 2){\n            if (d.y.vals[i][2*id] < iou) d.y.vals[i][2*id] = iou;\n        }\n\n        
find_replace(paths[i*2+1], \"imgs\", \"labels\", imlabel2);\n        find_replace(imlabel2, \"jpg\", \"txt\", imlabel2);\n        FILE *fp2 = fopen(imlabel2, \"r\");\n\n        while(fscanf(fp2, \"%d %f\", &id, &iou) == 2){\n            if (d.y.vals[i][2*id + 1] < iou) d.y.vals[i][2*id + 1] = iou;\n        }\n\n        for (j = 0; j < classes; ++j){\n            if (d.y.vals[i][2*j] > .5 &&  d.y.vals[i][2*j+1] < .5){\n                d.y.vals[i][2*j] = 1;\n                d.y.vals[i][2*j+1] = 0;\n            } else if (d.y.vals[i][2*j] < .5 &&  d.y.vals[i][2*j+1] > .5){\n                d.y.vals[i][2*j] = 0;\n                d.y.vals[i][2*j+1] = 1;\n            } else {\n                d.y.vals[i][2*j]   = SECRET_NUM;\n                d.y.vals[i][2*j+1] = SECRET_NUM;\n            }\n        }\n        fclose(fp1);\n        fclose(fp2);\n\n        free_image(im1);\n        free_image(im2);\n    }\n    if(m) free(paths);\n    return d;\n}\n\ndata load_data_swag(char **paths, int n, int classes, float jitter)\n{\n    int index = rand()%n;\n    char *random_path = paths[index];\n\n    image orig = load_image_color(random_path, 0, 0);\n    int h = orig.h;\n    int w = orig.w;\n\n    data d = {0};\n    d.shallow = 0;\n    d.w = w;\n    d.h = h;\n\n    d.X.rows = 1;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n    d.X.cols = h*w*3;\n\n    int k = (4+classes)*30;\n    d.y = make_matrix(1, k);\n\n    int dw = w*jitter;\n    int dh = h*jitter;\n\n    int pleft  = rand_uniform(-dw, dw);\n    int pright = rand_uniform(-dw, dw);\n    int ptop   = rand_uniform(-dh, dh);\n    int pbot   = rand_uniform(-dh, dh);\n\n    int swidth =  w - pleft - pright;\n    int sheight = h - ptop - pbot;\n\n    float sx = (float)swidth  / w;\n    float sy = (float)sheight / h;\n\n    int flip = rand()%2;\n    image cropped = crop_image(orig, pleft, ptop, swidth, sheight);\n\n    float dx = ((float)pleft/w)/sx;\n    float dy = ((float)ptop /h)/sy;\n\n    image sized = resize_image(cropped, 
w, h);\n    if(flip) flip_image(sized);\n    d.X.vals[0] = sized.data;\n\n    fill_truth_swag(random_path, d.y.vals[0], classes, flip, dx, dy, 1./sx, 1./sy);\n\n    free_image(orig);\n    free_image(cropped);\n\n    return d;\n}\n\ndata load_data_detection(int n, char **paths, int m, int w, int h, int boxes, int classes, float jitter, float hue, float saturation, float exposure)\n{\n    char **random_paths = get_random_paths(paths, n, m);\n    int i;\n    data d = {0};\n    d.shallow = 0;\n\n    d.X.rows = n;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n    d.X.cols = h*w*3;\n\n    d.y = make_matrix(n, 5*boxes);\n    for(i = 0; i < n; ++i){\n        image orig = load_image_color(random_paths[i], 0, 0);\n        image sized = make_image(w, h, orig.c);\n        fill_image(sized, .5);\n\n        float dw = jitter * orig.w;\n        float dh = jitter * orig.h;\n\n        float new_ar = (orig.w + rand_uniform(-dw, dw)) / (orig.h + rand_uniform(-dh, dh));\n        float scale = rand_uniform(.25, 2);\n\n        float nw, nh;\n\n        if(new_ar < 1){\n            nh = scale * h;\n            nw = nh * new_ar;\n        } else {\n            nw = scale * w;\n            nh = nw / new_ar;\n        }\n\n        float dx = rand_uniform(0, w - nw);\n        float dy = rand_uniform(0, h - nh);\n\n        place_image(orig, nw, nh, dx, dy, sized);\n\n        random_distort_image(sized, hue, saturation, exposure);\n\n        int flip = rand()%2;\n        if(flip) flip_image(sized);\n        d.X.vals[i] = sized.data;\n\n        fill_truth_detection(random_paths[i], boxes, d.y.vals[i], classes, flip, -dx/w, -dy/h, nw/w, nh/h);\n\n        free_image(orig);\n    }\n    free(random_paths);\n    return d;\n}\n\nvoid *load_thread(void *ptr)\n{\n    //printf(\"Loading data: %d\\n\", rand());\n    load_args a = *(struct load_args*)ptr;\n    if(a.exposure == 0) a.exposure = 1;\n    if(a.saturation == 0) a.saturation = 1;\n    if(a.aspect == 0) a.aspect = 1;\n\n    if (a.type == 
OLD_CLASSIFICATION_DATA){\n        *a.d = load_data_old(a.paths, a.n, a.m, a.labels, a.classes, a.w, a.h);\n    } else if (a.type == REGRESSION_DATA){\n        *a.d = load_data_regression(a.paths, a.n, a.m, a.min, a.max, a.size, a.angle, a.aspect, a.hue, a.saturation, a.exposure);\n    } else if (a.type == CLASSIFICATION_DATA){\n        *a.d = load_data_augment(a.paths, a.n, a.m, a.labels, a.classes, a.hierarchy, a.min, a.max, a.size, a.angle, a.aspect, a.hue, a.saturation, a.exposure, a.center);\n    } else if (a.type == SUPER_DATA){\n        *a.d = load_data_super(a.paths, a.n, a.m, a.w, a.h, a.scale);\n    } else if (a.type == WRITING_DATA){\n        *a.d = load_data_writing(a.paths, a.n, a.m, a.w, a.h, a.out_w, a.out_h);\n    } else if (a.type == INSTANCE_DATA){\n        *a.d = load_data_iseg(a.n, a.paths, a.m, a.w, a.h, a.classes, a.num_boxes, a.coords, a.min, a.max, a.angle, a.aspect, a.hue, a.saturation, a.exposure);\n    } else if (a.type == SEGMENTATION_DATA){\n        *a.d = load_data_seg(a.n, a.paths, a.m, a.w, a.h, a.classes, a.min, a.max, a.angle, a.aspect, a.hue, a.saturation, a.exposure, a.scale);\n    } else if (a.type == REGION_DATA){\n        *a.d = load_data_region(a.n, a.paths, a.m, a.w, a.h, a.num_boxes, a.classes, a.jitter, a.hue, a.saturation, a.exposure);\n    } else if (a.type == DETECTION_DATA){\n        *a.d = load_data_detection(a.n, a.paths, a.m, a.w, a.h, a.num_boxes, a.classes, a.jitter, a.hue, a.saturation, a.exposure);\n    } else if (a.type == SWAG_DATA){\n        *a.d = load_data_swag(a.paths, a.n, a.classes, a.jitter);\n    } else if (a.type == COMPARE_DATA){\n        *a.d = load_data_compare(a.n, a.paths, a.m, a.classes, a.w, a.h);\n    } else if (a.type == IMAGE_DATA){\n        *(a.im) = load_image_color(a.path, 0, 0);\n        *(a.resized) = resize_image(*(a.im), a.w, a.h);\n    } else if (a.type == LETTERBOX_DATA){\n        *(a.im) = load_image_color(a.path, 0, 0);\n        *(a.resized) = letterbox_image(*(a.im), a.w, a.h);\n 
   } else if (a.type == TAG_DATA){\n        *a.d = load_data_tag(a.paths, a.n, a.m, a.classes, a.min, a.max, a.size, a.angle, a.aspect, a.hue, a.saturation, a.exposure);\n    }\n    free(ptr);\n    return 0;\n}\n\npthread_t load_data_in_thread(load_args args)\n{\n    pthread_t thread;\n    struct load_args *ptr = calloc(1, sizeof(struct load_args));\n    *ptr = args;\n    if(pthread_create(&thread, 0, load_thread, ptr)) error(\"Thread creation failed\");\n    return thread;\n}\n\nvoid *load_threads(void *ptr)\n{\n    int i;\n    load_args args = *(load_args *)ptr;\n    if (args.threads == 0) args.threads = 1;\n    data *out = args.d;\n    int total = args.n;\n    free(ptr);\n    data *buffers = calloc(args.threads, sizeof(data));\n    pthread_t *threads = calloc(args.threads, sizeof(pthread_t));\n    for(i = 0; i < args.threads; ++i){\n        args.d = buffers + i;\n        args.n = (i+1) * total/args.threads - i * total/args.threads;\n        threads[i] = load_data_in_thread(args);\n    }\n    for(i = 0; i < args.threads; ++i){\n        pthread_join(threads[i], 0);\n    }\n    *out = concat_datas(buffers, args.threads);\n    out->shallow = 0;\n    for(i = 0; i < args.threads; ++i){\n        buffers[i].shallow = 1;\n        free_data(buffers[i]);\n    }\n    free(buffers);\n    free(threads);\n    return 0;\n}\n\nvoid load_data_blocking(load_args args)\n{\n    struct load_args *ptr = calloc(1, sizeof(struct load_args));\n    *ptr = args;\n    load_thread(ptr);\n}\n\npthread_t load_data(load_args args)\n{\n    pthread_t thread;\n    struct load_args *ptr = calloc(1, sizeof(struct load_args));\n    *ptr = args;\n    if(pthread_create(&thread, 0, load_threads, ptr)) error(\"Thread creation failed\");\n    return thread;\n}\n\ndata load_data_writing(char **paths, int n, int m, int w, int h, int out_w, int out_h)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    char **replace_paths = find_replace_paths(paths, n, \".png\", \"-label.png\");\n    data d = {0};\n   
 d.shallow = 0;\n    d.X = load_image_paths(paths, n, w, h);\n    d.y = load_image_paths_gray(replace_paths, n, out_w, out_h);\n    if(m) free(paths);\n    int i;\n    for(i = 0; i < n; ++i) free(replace_paths[i]);\n    free(replace_paths);\n    return d;\n}\n\ndata load_data_old(char **paths, int n, int m, char **labels, int k, int w, int h)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.shallow = 0;\n    d.X = load_image_paths(paths, n, w, h);\n    d.y = load_labels_paths(paths, n, labels, k, 0);\n    if(m) free(paths);\n    return d;\n}\n\n/*\n   data load_data_study(char **paths, int n, int m, char **labels, int k, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure)\n   {\n   data d = {0};\n   d.indexes = calloc(n, sizeof(int));\n   if(m) paths = get_random_paths_indexes(paths, n, m, d.indexes);\n   d.shallow = 0;\n   d.X = load_image_augment_paths(paths, n, min, max, size, angle, aspect, hue, saturation, exposure);\n   d.y = load_labels_paths(paths, n, labels, k);\n   if(m) free(paths);\n   return d;\n   }\n */\n\ndata load_data_super(char **paths, int n, int m, int w, int h, int scale)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.shallow = 0;\n\n    int i;\n    d.X.rows = n;\n    d.X.vals = calloc(n, sizeof(float*));\n    d.X.cols = w*h*3;\n\n    d.y.rows = n;\n    d.y.vals = calloc(n, sizeof(float*));\n    d.y.cols = w*scale * h*scale * 3;\n\n    for(i = 0; i < n; ++i){\n        image im = load_image_color(paths[i], 0, 0);\n        image crop = random_crop_image(im, w*scale, h*scale);\n        int flip = rand()%2;\n        if (flip) flip_image(crop);\n        image resize = resize_image(crop, w, h);\n        d.X.vals[i] = resize.data;\n        d.y.vals[i] = crop.data;\n        free_image(im);\n    }\n\n    if(m) free(paths);\n    return d;\n}\n\ndata load_data_regression(char **paths, int n, int m, int min, int max, int size, float angle, float 
aspect, float hue, float saturation, float exposure)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.shallow = 0;\n    d.X = load_image_augment_paths(paths, n, min, max, size, angle, aspect, hue, saturation, exposure, 0);\n    d.y = load_regression_labels_paths(paths, n);\n    if(m) free(paths);\n    return d;\n}\n\ndata select_data(data *orig, int *inds)\n{\n    data d = {0};\n    d.shallow = 1;\n    d.w = orig[0].w;\n    d.h = orig[0].h;\n\n    d.X.rows = orig[0].X.rows;\n    d.y.rows = orig[0].X.rows;\n\n    d.X.cols = orig[0].X.cols;\n    d.y.cols = orig[0].y.cols;\n\n    d.X.vals = calloc(orig[0].X.rows, sizeof(float *));\n    d.y.vals = calloc(orig[0].y.rows, sizeof(float *));\n    int i;\n    for(i = 0; i < d.X.rows; ++i){\n        d.X.vals[i] = orig[inds[i]].X.vals[i];\n        d.y.vals[i] = orig[inds[i]].y.vals[i];\n    }\n    return d;\n}\n\ndata *tile_data(data orig, int divs, int size)\n{\n    data *ds = calloc(divs*divs, sizeof(data));\n    int i, j;\n    #pragma omp parallel for\n    for(i = 0; i < divs*divs; ++i){\n        data d;\n        d.shallow = 0;\n        d.w = orig.w/divs * size;\n        d.h = orig.h/divs * size;\n        d.X.rows = orig.X.rows;\n        d.X.cols = d.w*d.h*3;\n        d.X.vals = calloc(d.X.rows, sizeof(float*));\n\n        d.y = copy_matrix(orig.y);\n        #pragma omp parallel for\n        for(j = 0; j < orig.X.rows; ++j){\n            int x = (i%divs) * orig.w / divs - (d.w - orig.w/divs)/2;\n            int y = (i/divs) * orig.h / divs - (d.h - orig.h/divs)/2;\n            image im = float_to_image(orig.w, orig.h, 3, orig.X.vals[j]);\n            d.X.vals[j] = crop_image(im, x, y, d.w, d.h).data;\n        }\n        ds[i] = d;\n    }\n    return ds;\n}\n\ndata resize_data(data orig, int w, int h)\n{\n    data d = {0};\n    d.shallow = 0;\n    d.w = w;\n    d.h = h;\n    int i;\n    d.X.rows = orig.X.rows;\n    d.X.cols = w*h*3;\n    d.X.vals = calloc(d.X.rows, sizeof(float*));\n\n    d.y = 
copy_matrix(orig.y);\n    #pragma omp parallel for\n    for(i = 0; i < orig.X.rows; ++i){\n        image im = float_to_image(orig.w, orig.h, 3, orig.X.vals[i]);\n        d.X.vals[i] = resize_image(im, w, h).data;\n    }\n    return d;\n}\n\ndata load_data_augment(char **paths, int n, int m, char **labels, int k, tree *hierarchy, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure, int center)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.shallow = 0;\n    d.w=size;\n    d.h=size;\n    d.X = load_image_augment_paths(paths, n, min, max, size, angle, aspect, hue, saturation, exposure, center);\n    d.y = load_labels_paths(paths, n, labels, k, hierarchy);\n    if(m) free(paths);\n    return d;\n}\n\ndata load_data_tag(char **paths, int n, int m, int k, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure)\n{\n    if(m) paths = get_random_paths(paths, n, m);\n    data d = {0};\n    d.w = size;\n    d.h = size;\n    d.shallow = 0;\n    d.X = load_image_augment_paths(paths, n, min, max, size, angle, aspect, hue, saturation, exposure, 0);\n    d.y = load_tags_paths(paths, n, k);\n    if(m) free(paths);\n    return d;\n}\n\nmatrix concat_matrix(matrix m1, matrix m2)\n{\n    int i, count = 0;\n    matrix m;\n    m.cols = m1.cols;\n    m.rows = m1.rows+m2.rows;\n    m.vals = calloc(m1.rows + m2.rows, sizeof(float*));\n    for(i = 0; i < m1.rows; ++i){\n        m.vals[count++] = m1.vals[i];\n    }\n    for(i = 0; i < m2.rows; ++i){\n        m.vals[count++] = m2.vals[i];\n    }\n    return m;\n}\n\ndata concat_data(data d1, data d2)\n{\n    data d = {0};\n    d.shallow = 1;\n    d.X = concat_matrix(d1.X, d2.X);\n    d.y = concat_matrix(d1.y, d2.y);\n    d.w = d1.w;\n    d.h = d1.h;\n    return d;\n}\n\ndata concat_datas(data *d, int n)\n{\n    int i;\n    data out = {0};\n    for(i = 0; i < n; ++i){\n        data new = concat_data(d[i], out);\n        
free_data(out);\n        out = new;\n    }\n    return out;\n}\n\ndata load_categorical_data_csv(char *filename, int target, int k)\n{\n    data d = {0};\n    d.shallow = 0;\n    matrix X = csv_to_matrix(filename);\n    float *truth_1d = pop_column(&X, target);\n    float **truth = one_hot_encode(truth_1d, X.rows, k);\n    matrix y;\n    y.rows = X.rows;\n    y.cols = k;\n    y.vals = truth;\n    d.X = X;\n    d.y = y;\n    free(truth_1d);\n    return d;\n}\n\ndata load_cifar10_data(char *filename)\n{\n    data d = {0};\n    d.shallow = 0;\n    long i,j;\n    matrix X = make_matrix(10000, 3072);\n    matrix y = make_matrix(10000, 10);\n    d.X = X;\n    d.y = y;\n\n    FILE *fp = fopen(filename, \"rb\");\n    if(!fp) file_error(filename);\n    for(i = 0; i < 10000; ++i){\n        unsigned char bytes[3073];\n        fread(bytes, 1, 3073, fp);\n        int class = bytes[0];\n        y.vals[i][class] = 1;\n        for(j = 0; j < X.cols; ++j){\n            X.vals[i][j] = (double)bytes[j+1];\n        }\n    }\n    scale_data_rows(d, 1./255);\n    //normalize_data_rows(d);\n    fclose(fp);\n    return d;\n}\n\nvoid get_random_batch(data d, int n, float *X, float *y)\n{\n    int j;\n    for(j = 0; j < n; ++j){\n        int index = rand()%d.X.rows;\n        memcpy(X+j*d.X.cols, d.X.vals[index], d.X.cols*sizeof(float));\n        memcpy(y+j*d.y.cols, d.y.vals[index], d.y.cols*sizeof(float));\n    }\n}\n\nvoid get_next_batch(data d, int n, int offset, float *X, float *y)\n{\n    int j;\n    for(j = 0; j < n; ++j){\n        int index = offset + j;\n        memcpy(X+j*d.X.cols, d.X.vals[index], d.X.cols*sizeof(float));\n        if(y) memcpy(y+j*d.y.cols, d.y.vals[index], d.y.cols*sizeof(float));\n    }\n}\n\nvoid smooth_data(data d)\n{\n    int i, j;\n    float scale = 1. 
/ d.y.cols;\n    float eps = .1;\n    for(i = 0; i < d.y.rows; ++i){\n        for(j = 0; j < d.y.cols; ++j){\n            d.y.vals[i][j] = eps * scale + (1-eps) * d.y.vals[i][j];\n        }\n    }\n}\n\ndata load_all_cifar10()\n{\n    data d = {0};\n    d.shallow = 0;\n    int i,j,b;\n    matrix X = make_matrix(50000, 3072);\n    matrix y = make_matrix(50000, 10);\n    d.X = X;\n    d.y = y;\n\n\n    for(b = 0; b < 5; ++b){\n        char buff[256];\n        sprintf(buff, \"data/cifar/cifar-10-batches-bin/data_batch_%d.bin\", b+1);\n        FILE *fp = fopen(buff, \"rb\");\n        if(!fp) file_error(buff);\n        for(i = 0; i < 10000; ++i){\n            unsigned char bytes[3073];\n            fread(bytes, 1, 3073, fp);\n            int class = bytes[0];\n            y.vals[i+b*10000][class] = 1;\n            for(j = 0; j < X.cols; ++j){\n                X.vals[i+b*10000][j] = (double)bytes[j+1];\n            }\n        }\n        fclose(fp);\n    }\n    //normalize_data_rows(d);\n    scale_data_rows(d, 1./255);\n    smooth_data(d);\n    return d;\n}\n\ndata load_go(char *filename)\n{\n    FILE *fp = fopen(filename, \"rb\");\n    matrix X = make_matrix(3363059, 361);\n    matrix y = make_matrix(3363059, 361);\n    int row, col;\n\n    if(!fp) file_error(filename);\n    char *label;\n    int count = 0;\n    while((label = fgetl(fp))){\n        int i;\n        if(count == X.rows){\n            X = resize_matrix(X, count*2);\n            y = resize_matrix(y, count*2);\n        }\n        sscanf(label, \"%d %d\", &row, &col);\n        char *board = fgetl(fp);\n\n        int index = row*19 + col;\n        y.vals[count][index] = 1;\n\n        for(i = 0; i < 19*19; ++i){\n            float val = 0;\n            if(board[i] == '1') val = 1;\n            else if(board[i] == '2') val = -1;\n            X.vals[count][i] = val;\n        }\n        ++count;\n        free(label);\n        free(board);\n    }\n    X = resize_matrix(X, count);\n    y = resize_matrix(y, count);\n\n 
   data d = {0};\n    d.shallow = 0;\n    d.X = X;\n    d.y = y;\n\n\n    fclose(fp);\n\n    return d;\n}\n\n\nvoid randomize_data(data d)\n{\n    int i;\n    for(i = d.X.rows-1; i > 0; --i){\n        int index = rand()%i;\n        float *swap = d.X.vals[index];\n        d.X.vals[index] = d.X.vals[i];\n        d.X.vals[i] = swap;\n\n        swap = d.y.vals[index];\n        d.y.vals[index] = d.y.vals[i];\n        d.y.vals[i] = swap;\n    }\n}\n\nvoid scale_data_rows(data d, float s)\n{\n    int i;\n    for(i = 0; i < d.X.rows; ++i){\n        scale_array(d.X.vals[i], d.X.cols, s);\n    }\n}\n\nvoid translate_data_rows(data d, float s)\n{\n    int i;\n    for(i = 0; i < d.X.rows; ++i){\n        translate_array(d.X.vals[i], d.X.cols, s);\n    }\n}\n\ndata copy_data(data d)\n{\n    data c = {0};\n    c.w = d.w;\n    c.h = d.h;\n    c.shallow = 0;\n    c.num_boxes = d.num_boxes;\n    c.boxes = d.boxes;\n    c.X = copy_matrix(d.X);\n    c.y = copy_matrix(d.y);\n    return c;\n}\n\nvoid normalize_data_rows(data d)\n{\n    int i;\n    for(i = 0; i < d.X.rows; ++i){\n        normalize_array(d.X.vals[i], d.X.cols);\n    }\n}\n\ndata get_data_part(data d, int part, int total)\n{\n    data p = {0};\n    p.shallow = 1;\n    p.X.rows = d.X.rows * (part + 1) / total - d.X.rows * part / total;\n    p.y.rows = d.y.rows * (part + 1) / total - d.y.rows * part / total;\n    p.X.cols = d.X.cols;\n    p.y.cols = d.y.cols;\n    p.X.vals = d.X.vals + d.X.rows * part / total;\n    p.y.vals = d.y.vals + d.y.rows * part / total;\n    return p;\n}\n\ndata get_random_data(data d, int num)\n{\n    data r = {0};\n    r.shallow = 1;\n\n    r.X.rows = num;\n    r.y.rows = num;\n\n    r.X.cols = d.X.cols;\n    r.y.cols = d.y.cols;\n\n    r.X.vals = calloc(num, sizeof(float *));\n    r.y.vals = calloc(num, sizeof(float *));\n\n    int i;\n    for(i = 0; i < num; ++i){\n        int index = rand()%d.X.rows;\n        r.X.vals[i] = d.X.vals[index];\n        r.y.vals[i] = d.y.vals[index];\n    }\n    
return r;\n}\n\ndata *split_data(data d, int part, int total)\n{\n    data *split = calloc(2, sizeof(data));\n    int i;\n    int start = part*d.X.rows/total;\n    int end = (part+1)*d.X.rows/total;\n    data train;\n    data test;\n    train.shallow = test.shallow = 1;\n\n    test.X.rows = test.y.rows = end-start;\n    train.X.rows = train.y.rows = d.X.rows - (end-start);\n    train.X.cols = test.X.cols = d.X.cols;\n    train.y.cols = test.y.cols = d.y.cols;\n\n    train.X.vals = calloc(train.X.rows, sizeof(float*));\n    test.X.vals = calloc(test.X.rows, sizeof(float*));\n    train.y.vals = calloc(train.y.rows, sizeof(float*));\n    test.y.vals = calloc(test.y.rows, sizeof(float*));\n\n    for(i = 0; i < start; ++i){\n        train.X.vals[i] = d.X.vals[i];\n        train.y.vals[i] = d.y.vals[i];\n    }\n    for(i = start; i < end; ++i){\n        test.X.vals[i-start] = d.X.vals[i];\n        test.y.vals[i-start] = d.y.vals[i];\n    }\n    for(i = end; i < d.X.rows; ++i){\n        train.X.vals[i-(end-start)] = d.X.vals[i];\n        train.y.vals[i-(end-start)] = d.y.vals[i];\n    }\n    split[0] = train;\n    split[1] = test;\n    return split;\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/data.h",
    "content": "#ifndef DATA_H\n#define DATA_H\n#include <pthread.h>\n\n#include \"darknet.h\"\n#include \"matrix.h\"\n#include \"list.h\"\n#include \"image.h\"\n#include \"tree.h\"\n\nstatic inline float distance_from_edge(int x, int max)\n{\n    int dx = (max/2) - x;\n    if (dx < 0) dx = -dx;\n    dx = (max/2) + 1 - dx;\n    dx *= 2;\n    float dist = (float)dx/max;\n    if (dist > 1) dist = 1;\n    return dist;\n}\nvoid load_data_blocking(load_args args);\n\n\nvoid print_letters(float *pred, int n);\ndata load_data_captcha(char **paths, int n, int m, int k, int w, int h);\ndata load_data_captcha_encode(char **paths, int n, int m, int w, int h);\ndata load_data_detection(int n, char **paths, int m, int w, int h, int boxes, int classes, float jitter, float hue, float saturation, float exposure);\ndata load_data_tag(char **paths, int n, int m, int k, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure);\nmatrix load_image_augment_paths(char **paths, int n, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure, int center);\ndata load_data_super(char **paths, int n, int m, int w, int h, int scale);\ndata load_data_augment(char **paths, int n, int m, char **labels, int k, tree *hierarchy, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure, int center);\ndata load_data_regression(char **paths, int n, int m, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure);\ndata load_go(char *filename);\n\ndata load_data_region(int n, char **paths, int m, int w, int h, int size, int classes, float jitter, float hue, float saturation, float exposure);\n\n\ndata load_data_writing(char **paths, int n, int m, int w, int h, int out_w, int out_h);\n\nvoid get_random_batch(data d, int n, float *X, float *y);\ndata get_data_part(data d, int part, int total);\ndata get_random_data(data d, int num);\ndata 
load_categorical_data_csv(char *filename, int target, int k);\nvoid normalize_data_rows(data d);\nvoid scale_data_rows(data d, float s);\nvoid translate_data_rows(data d, float s);\nvoid randomize_data(data d);\nvoid randomize_boxes(box_label *b, int n);\nvoid correct_boxes(box_label *boxes, int n, float dx, float dy, float sx, float sy, int flip);\ndata *split_data(data d, int part, int total);\ndata concat_datas(data *d, int n);\nvoid fill_truth(char *path, char **labels, int k, float *truth);\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/deconvolutional_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"convolutional_layer.h\"\n#include \"deconvolutional_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"gemm.h\"\n#include \"blas.h\"\n#include \"im2col.h\"\n#include \"col2im.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n}\n\nextern \"C\" void forward_deconvolutional_layer_gpu(layer l, network net)\n{\n    int i;\n\n    int m = l.size*l.size*l.n;\n    int n = l.h*l.w;\n    int k = l.c;\n\n    fill_gpu(l.outputs*l.batch, 0, l.output_gpu, 1);\n\n    for(i = 0; i < l.batch; ++i){\n        float *a = l.weights_gpu;\n        float *b = net.input_gpu + i*l.c*l.h*l.w;\n        float *c = net.workspace;\n\n        gemm_gpu(1,0,m,n,k,1,a,m,b,n,0,c,n);\n\n        col2im_gpu(net.workspace, l.out_c, l.out_h, l.out_w, l.size, l.stride, l.pad, l.output_gpu+i*l.outputs);\n    }\n    if (l.batch_normalize) {\n        forward_batchnorm_layer_gpu(l, net);\n    } else {\n        add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.n, l.out_w*l.out_h);\n    }\n    activate_array_gpu(l.output_gpu, l.batch*l.n*l.out_w*l.out_h, l.activation);\n}\n\nextern \"C\" void backward_deconvolutional_layer_gpu(layer l, network net)\n{\n    int i;\n\n    constrain_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1);\n    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);\n\n    if(l.batch_normalize){\n        backward_batchnorm_layer_gpu(l, net);\n    } else {\n        backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.n, l.out_w*l.out_h);\n    }\n\n    //if(net.delta_gpu) memset(net.delta_gpu, 0, l.batch*l.h*l.w*l.c*sizeof(float));\n\n    for(i = 0; i < l.batch; ++i){\n        int m = l.c;\n        int n = l.size*l.size*l.n;\n        int k = l.h*l.w;\n\n        float *a = net.input_gpu + i*m*k;\n        float *b = net.workspace;\n        float *c = l.weight_updates_gpu;\n\n        im2col_gpu(l.delta_gpu + i*l.outputs, l.out_c, l.out_h, 
l.out_w, \n                l.size, l.stride, l.pad, b);\n        gemm_gpu(0,1,m,n,k,1,a,k,b,k,1,c,n);\n\n        if(net.delta_gpu){\n            int m = l.c;\n            int n = l.h*l.w;\n            int k = l.size*l.size*l.n;\n\n            float *a = l.weights_gpu;\n            float *b = net.workspace;\n            float *c = net.delta_gpu + i*n*m;\n\n            gemm_gpu(0,0,m,n,k,1,a,k,b,n,1,c,n);\n        }\n    }\n}\n\nextern \"C\" void pull_deconvolutional_layer(layer l)\n{\n    cuda_pull_array(l.weights_gpu, l.weights, l.c*l.n*l.size*l.size);\n    cuda_pull_array(l.biases_gpu, l.biases, l.n);\n    cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.c*l.n*l.size*l.size);\n    cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.n);\n    if (l.batch_normalize){\n        cuda_pull_array(l.scales_gpu, l.scales, l.n);\n        cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.n);\n        cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.n);\n    }\n}\n\nextern \"C\" void push_deconvolutional_layer(layer l)\n{\n    cuda_push_array(l.weights_gpu, l.weights, l.c*l.n*l.size*l.size);\n    cuda_push_array(l.biases_gpu, l.biases, l.n);\n    cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.c*l.n*l.size*l.size);\n    cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.n);\n    if (l.batch_normalize){\n        cuda_push_array(l.scales_gpu, l.scales, l.n);\n        cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.n);\n        cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.n);\n    }\n}\n\nvoid update_deconvolutional_layer_gpu(layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n\n    int size = l.size*l.size*l.c*l.n;\n\n    if(a.adam){\n        adam_update_gpu(l.weights_gpu, l.weight_updates_gpu, l.m_gpu, l.v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, size, batch, a.t);\n        
adam_update_gpu(l.biases_gpu, l.bias_updates_gpu, l.bias_m_gpu, l.bias_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);\n        if(l.scales_gpu){\n            adam_update_gpu(l.scales_gpu, l.scale_updates_gpu, l.scale_m_gpu, l.scale_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);\n        }\n    }else{\n        axpy_gpu(size, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);\n        axpy_gpu(size, learning_rate/batch, l.weight_updates_gpu, 1, l.weights_gpu, 1);\n        scal_gpu(size, momentum, l.weight_updates_gpu, 1);\n\n        axpy_gpu(l.n, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);\n        scal_gpu(l.n, momentum, l.bias_updates_gpu, 1);\n\n        if(l.scales_gpu){\n            axpy_gpu(l.n, learning_rate/batch, l.scale_updates_gpu, 1, l.scales_gpu, 1);\n            scal_gpu(l.n, momentum, l.scale_updates_gpu, 1);\n        }\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/deconvolutional_layer.c",
    "content": "#include \"deconvolutional_layer.h\"\n#include \"convolutional_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"utils.h\"\n#include \"im2col.h\"\n#include \"col2im.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <stdio.h>\n#include <time.h>\n\n\nstatic size_t get_workspace_size(layer l){\n    return (size_t)l.h*l.w*l.size*l.size*l.n*sizeof(float);\n}\n\n\nlayer make_deconvolutional_layer(int batch, int h, int w, int c, int n, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int adam)\n{\n    int i;\n    layer l = {0};\n    l.type = DECONVOLUTIONAL;\n\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.n = n;\n    l.batch = batch;\n    l.stride = stride;\n    l.size = size;\n\n    l.nweights = c*n*size*size;\n    l.nbiases = n;\n\n    l.weights = calloc(c*n*size*size, sizeof(float));\n    l.weight_updates = calloc(c*n*size*size, sizeof(float));\n\n    l.biases = calloc(n, sizeof(float));\n    l.bias_updates = calloc(n, sizeof(float));\n    float scale = .02;\n    for(i = 0; i < c*n*size*size; ++i) l.weights[i] = scale*rand_normal();\n    for(i = 0; i < n; ++i){\n        l.biases[i] = 0;\n    }\n    l.pad = padding;\n\n    l.out_h = (l.h - 1) * l.stride + l.size - 2*l.pad;\n    l.out_w = (l.w - 1) * l.stride + l.size - 2*l.pad;\n    l.out_c = n;\n    l.outputs = l.out_w * l.out_h * l.out_c;\n    l.inputs = l.w * l.h * l.c;\n\n    l.output = calloc(l.batch*l.outputs, sizeof(float));\n    l.delta  = calloc(l.batch*l.outputs, sizeof(float));\n\n    l.forward = forward_deconvolutional_layer;\n    l.backward = backward_deconvolutional_layer;\n    l.update = update_deconvolutional_layer;\n\n    l.batch_normalize = batch_normalize;\n\n    if(batch_normalize){\n        l.scales = calloc(n, sizeof(float));\n        l.scale_updates = calloc(n, sizeof(float));\n        for(i = 0; i < n; ++i){\n            l.scales[i] = 1;\n        }\n\n        l.mean = calloc(n, sizeof(float));\n        l.variance = calloc(n, 
sizeof(float));\n\n        l.mean_delta = calloc(n, sizeof(float));\n        l.variance_delta = calloc(n, sizeof(float));\n\n        l.rolling_mean = calloc(n, sizeof(float));\n        l.rolling_variance = calloc(n, sizeof(float));\n        l.x = calloc(l.batch*l.outputs, sizeof(float));\n        l.x_norm = calloc(l.batch*l.outputs, sizeof(float));\n    }\n    if(adam){\n        l.m = calloc(c*n*size*size, sizeof(float));\n        l.v = calloc(c*n*size*size, sizeof(float));\n        l.bias_m = calloc(n, sizeof(float));\n        l.scale_m = calloc(n, sizeof(float));\n        l.bias_v = calloc(n, sizeof(float));\n        l.scale_v = calloc(n, sizeof(float));\n    }\n\n#ifdef GPU\n    l.forward_gpu = forward_deconvolutional_layer_gpu;\n    l.backward_gpu = backward_deconvolutional_layer_gpu;\n    l.update_gpu = update_deconvolutional_layer_gpu;\n\n    if(gpu_index >= 0){\n\n        if (adam) {\n            l.m_gpu = cuda_make_array(l.m, c*n*size*size);\n            l.v_gpu = cuda_make_array(l.v, c*n*size*size);\n            l.bias_m_gpu = cuda_make_array(l.bias_m, n);\n            l.bias_v_gpu = cuda_make_array(l.bias_v, n);\n            l.scale_m_gpu = cuda_make_array(l.scale_m, n);\n            l.scale_v_gpu = cuda_make_array(l.scale_v, n);\n        }\n        l.weights_gpu = cuda_make_array(l.weights, c*n*size*size);\n        l.weight_updates_gpu = cuda_make_array(l.weight_updates, c*n*size*size);\n\n        l.biases_gpu = cuda_make_array(l.biases, n);\n        l.bias_updates_gpu = cuda_make_array(l.bias_updates, n);\n\n        l.delta_gpu = cuda_make_array(l.delta, l.batch*l.out_h*l.out_w*n);\n        l.output_gpu = cuda_make_array(l.output, l.batch*l.out_h*l.out_w*n);\n\n        if(batch_normalize){\n            l.mean_gpu = cuda_make_array(0, n);\n            l.variance_gpu = cuda_make_array(0, n);\n\n            l.rolling_mean_gpu = cuda_make_array(0, n);\n            l.rolling_variance_gpu = cuda_make_array(0, n);\n\n            l.mean_delta_gpu = 
cuda_make_array(0, n);\n            l.variance_delta_gpu = cuda_make_array(0, n);\n\n            l.scales_gpu = cuda_make_array(0, n);\n            l.scale_updates_gpu = cuda_make_array(0, n);\n\n            l.x_gpu = cuda_make_array(0, l.batch*l.out_h*l.out_w*n);\n            l.x_norm_gpu = cuda_make_array(0, l.batch*l.out_h*l.out_w*n);\n        }\n    }\n    #ifdef CUDNN\n        cudnnCreateTensorDescriptor(&l.dstTensorDesc);\n        cudnnCreateTensorDescriptor(&l.normTensorDesc);\n        cudnnSetTensor4dDescriptor(l.dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l.batch, l.out_c, l.out_h, l.out_w); \n        cudnnSetTensor4dDescriptor(l.normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l.out_c, 1, 1); \n    #endif\n#endif\n\n    l.activation = activation;\n    l.workspace_size = get_workspace_size(l);\n\n    fprintf(stderr, \"deconv%5d %2d x%2d /%2d  %4d x%4d x%4d   ->  %4d x%4d x%4d\\n\", n, size, size, stride, w, h, c, l.out_w, l.out_h, l.out_c);\n\n    return l;\n}\n\nvoid denormalize_deconvolutional_layer(layer l)\n{\n    int i, j;\n    for(i = 0; i < l.n; ++i){\n        float scale = l.scales[i]/sqrt(l.rolling_variance[i] + .00001);\n        for(j = 0; j < l.c*l.size*l.size; ++j){\n            l.weights[i*l.c*l.size*l.size + j] *= scale;\n        }\n        l.biases[i] -= l.rolling_mean[i] * scale;\n        l.scales[i] = 1;\n        l.rolling_mean[i] = 0;\n        l.rolling_variance[i] = 1;\n    }\n}\n\nvoid resize_deconvolutional_layer(layer *l, int h, int w)\n{\n    l->h = h;\n    l->w = w;\n    l->out_h = (l->h - 1) * l->stride + l->size - 2*l->pad;\n    l->out_w = (l->w - 1) * l->stride + l->size - 2*l->pad;\n\n    l->outputs = l->out_h * l->out_w * l->out_c;\n    l->inputs = l->w * l->h * l->c;\n\n    l->output = realloc(l->output, l->batch*l->outputs*sizeof(float));\n    l->delta  = realloc(l->delta,  l->batch*l->outputs*sizeof(float));\n    if(l->batch_normalize){\n        l->x = realloc(l->x, l->batch*l->outputs*sizeof(float));\n        
l->x_norm  = realloc(l->x_norm, l->batch*l->outputs*sizeof(float));\n    }\n\n#ifdef GPU\n    cuda_free(l->delta_gpu);\n    cuda_free(l->output_gpu);\n\n    l->delta_gpu =  cuda_make_array(l->delta,  l->batch*l->outputs);\n    l->output_gpu = cuda_make_array(l->output, l->batch*l->outputs);\n\n    if(l->batch_normalize){\n        cuda_free(l->x_gpu);\n        cuda_free(l->x_norm_gpu);\n\n        l->x_gpu = cuda_make_array(l->output, l->batch*l->outputs);\n        l->x_norm_gpu = cuda_make_array(l->output, l->batch*l->outputs);\n    }\n    #ifdef CUDNN\n        cudnnSetTensor4dDescriptor(l->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->out_c, l->out_h, l->out_w); \n        cudnnSetTensor4dDescriptor(l->normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l->out_c, 1, 1); \n    #endif\n#endif\n    l->workspace_size = get_workspace_size(*l);\n}\n\nvoid forward_deconvolutional_layer(const layer l, network net)\n{\n    int i;\n\n    int m = l.size*l.size*l.n;\n    int n = l.h*l.w;\n    int k = l.c;\n\n    fill_cpu(l.outputs*l.batch, 0, l.output, 1);\n\n    for(i = 0; i < l.batch; ++i){\n        float *a = l.weights;\n        float *b = net.input + i*l.c*l.h*l.w;\n        float *c = net.workspace;\n\n        gemm_cpu(1,0,m,n,k,1,a,m,b,n,0,c,n);\n\n        col2im_cpu(net.workspace, l.out_c, l.out_h, l.out_w, l.size, l.stride, l.pad, l.output+i*l.outputs);\n    }\n    if (l.batch_normalize) {\n        forward_batchnorm_layer(l, net);\n    } else {\n        add_bias(l.output, l.biases, l.batch, l.n, l.out_w*l.out_h);\n    }\n    activate_array(l.output, l.batch*l.n*l.out_w*l.out_h, l.activation);\n}\n\nvoid backward_deconvolutional_layer(layer l, network net)\n{\n    int i;\n\n    gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);\n\n    if(l.batch_normalize){\n        backward_batchnorm_layer(l, net);\n    } else {\n        backward_bias(l.bias_updates, l.delta, l.batch, l.n, l.out_w*l.out_h);\n    }\n\n    //if(net.delta) 
memset(net.delta, 0, l.batch*l.h*l.w*l.c*sizeof(float));\n\n    for(i = 0; i < l.batch; ++i){\n        int m = l.c;\n        int n = l.size*l.size*l.n;\n        int k = l.h*l.w;\n\n        float *a = net.input + i*m*k;\n        float *b = net.workspace;\n        float *c = l.weight_updates;\n\n        im2col_cpu(l.delta + i*l.outputs, l.out_c, l.out_h, l.out_w, \n                l.size, l.stride, l.pad, b);\n        gemm_cpu(0,1,m,n,k,1,a,k,b,k,1,c,n);\n\n        if(net.delta){\n            int m = l.c;\n            int n = l.h*l.w;\n            int k = l.size*l.size*l.n;\n\n            float *a = l.weights;\n            float *b = net.workspace;\n            float *c = net.delta + i*n*m;\n\n            gemm_cpu(0,0,m,n,k,1,a,k,b,n,1,c,n);\n        }\n    }\n}\n\nvoid update_deconvolutional_layer(layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n\n    int size = l.size*l.size*l.c*l.n;\n    axpy_cpu(l.n, learning_rate/batch, l.bias_updates, 1, l.biases, 1);\n    scal_cpu(l.n, momentum, l.bias_updates, 1);\n\n    if(l.scales){\n        axpy_cpu(l.n, learning_rate/batch, l.scale_updates, 1, l.scales, 1);\n        scal_cpu(l.n, momentum, l.scale_updates, 1);\n    }\n\n    axpy_cpu(size, -decay*batch, l.weights, 1, l.weight_updates, 1);\n    axpy_cpu(size, learning_rate/batch, l.weight_updates, 1, l.weights, 1);\n    scal_cpu(size, momentum, l.weight_updates, 1);\n}\n\n\n\n"
  },
  {
    "path": "lightnet/_darknet/deconvolutional_layer.h",
    "content": "#ifndef DECONVOLUTIONAL_LAYER_H\n#define DECONVOLUTIONAL_LAYER_H\n\n#include \"cuda.h\"\n#include \"image.h\"\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\n#ifdef GPU\nvoid forward_deconvolutional_layer_gpu(layer l, network net);\nvoid backward_deconvolutional_layer_gpu(layer l, network net);\nvoid update_deconvolutional_layer_gpu(layer l, update_args a);\nvoid push_deconvolutional_layer(layer l);\nvoid pull_deconvolutional_layer(layer l);\n#endif\n\nlayer make_deconvolutional_layer(int batch, int h, int w, int c, int n, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int adam);\nvoid resize_deconvolutional_layer(layer *l, int h, int w);\nvoid forward_deconvolutional_layer(const layer l, network net);\nvoid update_deconvolutional_layer(layer l, update_args a);\nvoid backward_deconvolutional_layer(layer l, network net);\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/demo.c",
    "content": "#include \"network.h\"\n#include \"detection_layer.h\"\n#include \"region_layer.h\"\n#include \"cost_layer.h\"\n#include \"utils.h\"\n#include \"parser.h\"\n#include \"box.h\"\n#include \"image.h\"\n#include \"demo.h\"\n#include <sys/time.h>\n\n#define DEMO 1\n\n#ifdef OPENCV\n\nstatic char **demo_names;\nstatic image **demo_alphabet;\nstatic int demo_classes;\n\nstatic float **probs;\nstatic box *boxes;\nstatic network *net;\nstatic image buff [3];\nstatic image buff_letter[3];\nstatic int buff_index = 0;\nstatic CvCapture * cap;\nstatic IplImage  * ipl;\nstatic float fps = 0;\nstatic float demo_thresh = 0;\nstatic float demo_hier = .5;\nstatic int running = 0;\n\nstatic int demo_frame = 3;\nstatic int demo_detections = 0;\nstatic float **predictions;\nstatic int demo_index = 0;\nstatic int demo_done = 0;\nstatic float *avg;\ndouble demo_time;\n\nvoid *detect_in_thread(void *ptr)\n{\n    running = 1;\n    float nms = .4;\n\n    layer l = net->layers[net->n-1];\n    float *X = buff_letter[(buff_index+2)%3].data;\n    float *prediction = network_predict(net, X);\n\n    memcpy(predictions[demo_index], prediction, l.outputs*sizeof(float));\n    mean_arrays(predictions, demo_frame, l.outputs, avg);\n    l.output = avg;\n    if(l.type == DETECTION){\n        get_detection_boxes(l, 1, 1, demo_thresh, probs, boxes, 0);\n    } else if (l.type == REGION){\n        get_region_boxes(l, buff[0].w, buff[0].h, net->w, net->h, demo_thresh, probs, boxes, 0, 0, 0, demo_hier, 1);\n    } else {\n        error(\"Last layer must produce detections\\n\");\n    }\n    if (nms > 0) do_nms_obj(boxes, probs, l.w*l.h*l.n, l.classes, nms);\n\n    printf(\"\\033[2J\");\n    printf(\"\\033[1;1H\");\n    printf(\"\\nFPS:%.1f\\n\",fps);\n    printf(\"Objects:\\n\\n\");\n    image display = buff[(buff_index+2) % 3];\n    draw_detections(display, demo_detections, demo_thresh, boxes, probs, 0, demo_names, demo_alphabet, demo_classes);\n\n    demo_index = (demo_index + 
1)%demo_frame;\n    running = 0;\n    return 0;\n}\n\nvoid *fetch_in_thread(void *ptr)\n{\n    int status = fill_image_from_stream(cap, buff[buff_index]);\n    letterbox_image_into(buff[buff_index], net->w, net->h, buff_letter[buff_index]);\n    if(status == 0) demo_done = 1;\n    return 0;\n}\n\nvoid *display_in_thread(void *ptr)\n{\n    show_image_cv(buff[(buff_index + 1)%3], \"Demo\", ipl);\n    int c = cvWaitKey(1);\n    if (c != -1) c = c%256;\n    if (c == 27) {\n        demo_done = 1;\n        return 0;\n    } else if (c == 82) {\n        demo_thresh += .02;\n    } else if (c == 84) {\n        demo_thresh -= .02;\n        if(demo_thresh <= .02) demo_thresh = .02;\n    } else if (c == 83) {\n        demo_hier += .02;\n    } else if (c == 81) {\n        demo_hier -= .02;\n        if(demo_hier <= .0) demo_hier = .0;\n    }\n    return 0;\n}\n\nvoid *display_loop(void *ptr)\n{\n    while(1){\n        display_in_thread(0);\n    }\n}\n\nvoid *detect_loop(void *ptr)\n{\n    while(1){\n        detect_in_thread(0);\n    }\n}\n\nvoid demo(char *cfgfile, char *weightfile, float thresh, int cam_index, const char *filename, char **names, int classes, int delay, char *prefix, int avg_frames, float hier, int w, int h, int frames, int fullscreen)\n{\n    demo_frame = avg_frames;\n    predictions = calloc(demo_frame, sizeof(float*));\n    image **alphabet = load_alphabet();\n    demo_names = names;\n    demo_alphabet = alphabet;\n    demo_classes = classes;\n    demo_thresh = thresh;\n    demo_hier = hier;\n    printf(\"Demo\\n\");\n    net = load_network(cfgfile, weightfile, 0);\n    set_batch_network(net, 1);\n    pthread_t detect_thread;\n    pthread_t fetch_thread;\n\n    srand(2222222);\n\n    if(filename){\n        printf(\"video file: %s\\n\", filename);\n        cap = cvCaptureFromFile(filename);\n    }else{\n        cap = cvCaptureFromCAM(cam_index);\n\n        if(w){\n            cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, w);\n        }\n        if(h){\n    
        cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, h);\n        }\n        if(frames){\n            cvSetCaptureProperty(cap, CV_CAP_PROP_FPS, frames);\n        }\n    }\n\n    if(!cap) error(\"Couldn't connect to webcam.\\n\");\n\n    layer l = net->layers[net->n-1];\n    demo_detections = l.n*l.w*l.h;\n    int j;\n\n    avg = (float *) calloc(l.outputs, sizeof(float));\n    for(j = 0; j < demo_frame; ++j) predictions[j] = (float *) calloc(l.outputs, sizeof(float));\n\n    boxes = (box *)calloc(l.w*l.h*l.n, sizeof(box));\n    probs = (float **)calloc(l.w*l.h*l.n, sizeof(float *));\n    for(j = 0; j < l.w*l.h*l.n; ++j) probs[j] = (float *)calloc(l.classes+1, sizeof(float));\n\n    buff[0] = get_image_from_stream(cap);\n    buff[1] = copy_image(buff[0]);\n    buff[2] = copy_image(buff[0]);\n    buff_letter[0] = letterbox_image(buff[0], net->w, net->h);\n    buff_letter[1] = letterbox_image(buff[0], net->w, net->h);\n    buff_letter[2] = letterbox_image(buff[0], net->w, net->h);\n    ipl = cvCreateImage(cvSize(buff[0].w,buff[0].h), IPL_DEPTH_8U, buff[0].c);\n\n    int count = 0;\n    if(!prefix){\n        cvNamedWindow(\"Demo\", CV_WINDOW_NORMAL); \n        if(fullscreen){\n            cvSetWindowProperty(\"Demo\", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);\n        } else {\n            cvMoveWindow(\"Demo\", 0, 0);\n            cvResizeWindow(\"Demo\", 1352, 1013);\n        }\n    }\n\n    demo_time = what_time_is_it_now();\n\n    while(!demo_done){\n        buff_index = (buff_index + 1) %3;\n        if(pthread_create(&fetch_thread, 0, fetch_in_thread, 0)) error(\"Thread creation failed\");\n        if(pthread_create(&detect_thread, 0, detect_in_thread, 0)) error(\"Thread creation failed\");\n        if(!prefix){\n            fps = 1./(what_time_is_it_now() - demo_time);\n            demo_time = what_time_is_it_now();\n            display_in_thread(0);\n        }else{\n            char name[256];\n            sprintf(name, \"%s_%08d\", prefix, 
count);\n            save_image(buff[(buff_index + 1)%3], name);\n        }\n        pthread_join(fetch_thread, 0);\n        pthread_join(detect_thread, 0);\n        ++count;\n    }\n}\n\nvoid demo_compare(char *cfg1, char *weight1, char *cfg2, char *weight2, float thresh, int cam_index, const char *filename, char **names, int classes, int delay, char *prefix, int avg_frames, float hier, int w, int h, int frames, int fullscreen)\n{\n    demo_frame = avg_frames;\n    predictions = calloc(demo_frame, sizeof(float*));\n    image **alphabet = load_alphabet();\n    demo_names = names;\n    demo_alphabet = alphabet;\n    demo_classes = classes;\n    demo_thresh = thresh;\n    demo_hier = hier;\n    printf(\"Demo\\n\");\n    net = load_network(cfg1, weight1, 0);\n    set_batch_network(net, 1);\n    pthread_t detect_thread;\n    pthread_t fetch_thread;\n\n    srand(2222222);\n\n    if(filename){\n        printf(\"video file: %s\\n\", filename);\n        cap = cvCaptureFromFile(filename);\n    }else{\n        cap = cvCaptureFromCAM(cam_index);\n\n        if(w){\n            cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, w);\n        }\n        if(h){\n            cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, h);\n        }\n        if(frames){\n            cvSetCaptureProperty(cap, CV_CAP_PROP_FPS, frames);\n        }\n    }\n\n    if(!cap) error(\"Couldn't connect to webcam.\\n\");\n\n    layer l = net->layers[net->n-1];\n    demo_detections = l.n*l.w*l.h;\n    int j;\n\n    avg = (float *) calloc(l.outputs, sizeof(float));\n    for(j = 0; j < demo_frame; ++j) predictions[j] = (float *) calloc(l.outputs, sizeof(float));\n\n    boxes = (box *)calloc(l.w*l.h*l.n, sizeof(box));\n    probs = (float **)calloc(l.w*l.h*l.n, sizeof(float *));\n    for(j = 0; j < l.w*l.h*l.n; ++j) probs[j] = (float *)calloc(l.classes+1, sizeof(float));\n\n    buff[0] = get_image_from_stream(cap);\n    buff[1] = copy_image(buff[0]);\n    buff[2] = copy_image(buff[0]);\n    buff_letter[0] = 
letterbox_image(buff[0], net->w, net->h);\n    buff_letter[1] = letterbox_image(buff[0], net->w, net->h);\n    buff_letter[2] = letterbox_image(buff[0], net->w, net->h);\n    ipl = cvCreateImage(cvSize(buff[0].w,buff[0].h), IPL_DEPTH_8U, buff[0].c);\n\n    int count = 0;\n    if(!prefix){\n        cvNamedWindow(\"Demo\", CV_WINDOW_NORMAL); \n        if(fullscreen){\n            cvSetWindowProperty(\"Demo\", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);\n        } else {\n            cvMoveWindow(\"Demo\", 0, 0);\n            cvResizeWindow(\"Demo\", 1352, 1013);\n        }\n    }\n\n    demo_time = what_time_is_it_now();\n\n    while(!demo_done){\n        buff_index = (buff_index + 1) %3;\n        if(pthread_create(&fetch_thread, 0, fetch_in_thread, 0)) error(\"Thread creation failed\");\n        if(pthread_create(&detect_thread, 0, detect_in_thread, 0)) error(\"Thread creation failed\");\n        if(!prefix){\n            fps = 1./(what_time_is_it_now() - demo_time);\n            demo_time = what_time_is_it_now();\n            display_in_thread(0);\n        }else{\n            char name[256];\n            sprintf(name, \"%s_%08d\", prefix, count);\n            save_image(buff[(buff_index + 1)%3], name);\n        }\n        pthread_join(fetch_thread, 0);\n        pthread_join(detect_thread, 0);\n        ++count;\n    }\n}\n#else\nvoid demo(char *cfgfile, char *weightfile, float thresh, int cam_index, const char *filename, char **names, int classes, int delay, char *prefix, int avg, float hier, int w, int h, int frames, int fullscreen)\n{\n    fprintf(stderr, \"Demo needs OpenCV for webcam images.\\n\");\n}\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/demo.h",
    "content": "#ifndef DEMO_H\n#define DEMO_H\n\n#include \"image.h\"\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/detection_layer.c",
    "content": "#include \"detection_layer.h\"\n#include \"activations.h\"\n#include \"softmax_layer.h\"\n#include \"blas.h\"\n#include \"box.h\"\n#include \"cuda.h\"\n#include \"utils.h\"\n\n#include <stdio.h>\n#include <assert.h>\n#include <string.h>\n#include <stdlib.h>\n\ndetection_layer make_detection_layer(int batch, int inputs, int n, int side, int classes, int coords, int rescore)\n{\n    detection_layer l = {0};\n    l.type = DETECTION;\n\n    l.n = n;\n    l.batch = batch;\n    l.inputs = inputs;\n    l.classes = classes;\n    l.coords = coords;\n    l.rescore = rescore;\n    l.side = side;\n    l.w = side;\n    l.h = side;\n    assert(side*side*((1 + l.coords)*l.n + l.classes) == inputs);\n    l.cost = calloc(1, sizeof(float));\n    l.outputs = l.inputs;\n    l.truths = l.side*l.side*(1+l.coords+l.classes);\n    l.output = calloc(batch*l.outputs, sizeof(float));\n    l.delta = calloc(batch*l.outputs, sizeof(float));\n\n    l.forward = forward_detection_layer;\n    l.backward = backward_detection_layer;\n#ifdef GPU\n    l.forward_gpu = forward_detection_layer_gpu;\n    l.backward_gpu = backward_detection_layer_gpu;\n    l.output_gpu = cuda_make_array(l.output, batch*l.outputs);\n    l.delta_gpu = cuda_make_array(l.delta, batch*l.outputs);\n#endif\n\n    fprintf(stderr, \"Detection Layer\\n\");\n    srand(0);\n\n    return l;\n}\n\nvoid forward_detection_layer(const detection_layer l, network net)\n{\n    int locations = l.side*l.side;\n    int i,j;\n    memcpy(l.output, net.input, l.outputs*l.batch*sizeof(float));\n    //if(l.reorg) reorg(l.output, l.w*l.h, size*l.n, l.batch, 1);\n    int b;\n    if (l.softmax){\n        for(b = 0; b < l.batch; ++b){\n            int index = b*l.inputs;\n            for (i = 0; i < locations; ++i) {\n                int offset = i*l.classes;\n                softmax(l.output + index + offset, l.classes, 1, 1,\n                        l.output + index + offset);\n            }\n        }\n    }\n    if(net.train){\n        
float avg_iou = 0;\n        float avg_cat = 0;\n        float avg_allcat = 0;\n        float avg_obj = 0;\n        float avg_anyobj = 0;\n        int count = 0;\n        *(l.cost) = 0;\n        int size = l.inputs * l.batch;\n        memset(l.delta, 0, size * sizeof(float));\n        for (b = 0; b < l.batch; ++b){\n            int index = b*l.inputs;\n            for (i = 0; i < locations; ++i) {\n                int truth_index = (b*locations + i)*(1+l.coords+l.classes);\n                int is_obj = net.truth[truth_index];\n                for (j = 0; j < l.n; ++j) {\n                    int p_index = index + locations*l.classes + i*l.n + j;\n                    l.delta[p_index] = l.noobject_scale*(0 - l.output[p_index]);\n                    *(l.cost) += l.noobject_scale*pow(l.output[p_index], 2);\n                    avg_anyobj += l.output[p_index];\n                }\n\n                int best_index = -1;\n                float best_iou = 0;\n                float best_rmse = 20;\n\n                if (!is_obj){\n                    continue;\n                }\n\n                int class_index = index + i*l.classes;\n                for(j = 0; j < l.classes; ++j) {\n                    l.delta[class_index+j] = l.class_scale * (net.truth[truth_index+1+j] - l.output[class_index+j]);\n                    *(l.cost) += l.class_scale * pow(net.truth[truth_index+1+j] - l.output[class_index+j], 2);\n                    if(net.truth[truth_index + 1 + j]) avg_cat += l.output[class_index+j];\n                    avg_allcat += l.output[class_index+j];\n                }\n\n                box truth = float_to_box(net.truth + truth_index + 1 + l.classes, 1);\n                truth.x /= l.side;\n                truth.y /= l.side;\n\n                for(j = 0; j < l.n; ++j){\n                    int box_index = index + locations*(l.classes + l.n) + (i*l.n + j) * l.coords;\n                    box out = float_to_box(l.output + box_index, 1);\n                    out.x /= 
l.side;\n                    out.y /= l.side;\n\n                    if (l.sqrt){\n                        out.w = out.w*out.w;\n                        out.h = out.h*out.h;\n                    }\n\n                    float iou  = box_iou(out, truth);\n                    //iou = 0;\n                    float rmse = box_rmse(out, truth);\n                    if(best_iou > 0 || iou > 0){\n                        if(iou > best_iou){\n                            best_iou = iou;\n                            best_index = j;\n                        }\n                    }else{\n                        if(rmse < best_rmse){\n                            best_rmse = rmse;\n                            best_index = j;\n                        }\n                    }\n                }\n\n                if(l.forced){\n                    if(truth.w*truth.h < .1){\n                        best_index = 1;\n                    }else{\n                        best_index = 0;\n                    }\n                }\n                if(l.random && *(net.seen) < 64000){\n                    best_index = rand()%l.n;\n                }\n\n                int box_index = index + locations*(l.classes + l.n) + (i*l.n + best_index) * l.coords;\n                int tbox_index = truth_index + 1 + l.classes;\n\n                box out = float_to_box(l.output + box_index, 1);\n                out.x /= l.side;\n                out.y /= l.side;\n                if (l.sqrt) {\n                    out.w = out.w*out.w;\n                    out.h = out.h*out.h;\n                }\n                float iou  = box_iou(out, truth);\n\n                //printf(\"%d,\", best_index);\n                int p_index = index + locations*l.classes + i*l.n + best_index;\n                *(l.cost) -= l.noobject_scale * pow(l.output[p_index], 2);\n                *(l.cost) += l.object_scale * pow(1-l.output[p_index], 2);\n                avg_obj += l.output[p_index];\n                l.delta[p_index] = 
l.object_scale * (1.-l.output[p_index]);\n\n                if(l.rescore){\n                    l.delta[p_index] = l.object_scale * (iou - l.output[p_index]);\n                }\n\n                l.delta[box_index+0] = l.coord_scale*(net.truth[tbox_index + 0] - l.output[box_index + 0]);\n                l.delta[box_index+1] = l.coord_scale*(net.truth[tbox_index + 1] - l.output[box_index + 1]);\n                l.delta[box_index+2] = l.coord_scale*(net.truth[tbox_index + 2] - l.output[box_index + 2]);\n                l.delta[box_index+3] = l.coord_scale*(net.truth[tbox_index + 3] - l.output[box_index + 3]);\n                if(l.sqrt){\n                    l.delta[box_index+2] = l.coord_scale*(sqrt(net.truth[tbox_index + 2]) - l.output[box_index + 2]);\n                    l.delta[box_index+3] = l.coord_scale*(sqrt(net.truth[tbox_index + 3]) - l.output[box_index + 3]);\n                }\n\n                *(l.cost) += pow(1-iou, 2);\n                avg_iou += iou;\n                ++count;\n            }\n        }\n\n        if(0){\n            float *costs = calloc(l.batch*locations*l.n, sizeof(float));\n            for (b = 0; b < l.batch; ++b) {\n                int index = b*l.inputs;\n                for (i = 0; i < locations; ++i) {\n                    for (j = 0; j < l.n; ++j) {\n                        int p_index = index + locations*l.classes + i*l.n + j;\n                        costs[b*locations*l.n + i*l.n + j] = l.delta[p_index]*l.delta[p_index];\n                    }\n                }\n            }\n            int indexes[100];\n            top_k(costs, l.batch*locations*l.n, 100, indexes);\n            float cutoff = costs[indexes[99]];\n            for (b = 0; b < l.batch; ++b) {\n                int index = b*l.inputs;\n                for (i = 0; i < locations; ++i) {\n                    for (j = 0; j < l.n; ++j) {\n                        int p_index = index + locations*l.classes + i*l.n + j;\n                        if 
(l.delta[p_index]*l.delta[p_index] < cutoff) l.delta[p_index] = 0;\n                    }\n                }\n            }\n            free(costs);\n        }\n\n\n        *(l.cost) = pow(mag_array(l.delta, l.outputs * l.batch), 2);\n\n\n        printf(\"Detection Avg IOU: %f, Pos Cat: %f, All Cat: %f, Pos Obj: %f, Any Obj: %f, count: %d\\n\", avg_iou/count, avg_cat/count, avg_allcat/(count*l.classes), avg_obj/count, avg_anyobj/(l.batch*locations*l.n), count);\n        //if(l.reorg) reorg(l.delta, l.w*l.h, size*l.n, l.batch, 0);\n    }\n}\n\nvoid backward_detection_layer(const detection_layer l, network net)\n{\n    axpy_cpu(l.batch*l.inputs, 1, l.delta, 1, net.delta, 1);\n}\n\nvoid get_detection_boxes(layer l, int w, int h, float thresh, float **probs, box *boxes, int only_objectness)\n{\n    int i,j,n;\n    // Output layout: side*side*classes class probs, then side*side*n objectness scores, then side*side*n*4 box coords\n    float *predictions = l.output;\n    //int per_cell = 5*num+classes;\n    for (i = 0; i < l.side*l.side; ++i){\n        int row = i / l.side;\n        int col = i % l.side;\n        for(n = 0; n < l.n; ++n){\n            int index = i*l.n + n;\n            int p_index = l.side*l.side*l.classes + i*l.n + n;\n            float scale = predictions[p_index];\n            int box_index = l.side*l.side*(l.classes + l.n) + (i*l.n + n)*4;\n            boxes[index].x = (predictions[box_index + 0] + col) / l.side * w;\n            boxes[index].y = (predictions[box_index + 1] + row) / l.side * h;\n            boxes[index].w = pow(predictions[box_index + 2], (l.sqrt?2:1)) * w;\n            boxes[index].h = pow(predictions[box_index + 3], (l.sqrt?2:1)) * h;\n            for(j = 0; j < l.classes; ++j){\n                int class_index = i*l.classes;\n                float prob = scale*predictions[class_index+j];\n                probs[index][j] = (prob > thresh) ? 
prob : 0;\n            }\n            if(only_objectness){\n                probs[index][0] = scale;\n            }\n        }\n    }\n}\n\n#ifdef GPU\n\nvoid forward_detection_layer_gpu(const detection_layer l, network net)\n{\n    if(!net.train){\n        copy_gpu(l.batch*l.inputs, net.input_gpu, 1, l.output_gpu, 1);\n        return;\n    }\n\n    //float *in_cpu = calloc(l.batch*l.inputs, sizeof(float));\n    //float *truth_cpu = 0;\n\n    forward_detection_layer(l, net);\n    cuda_push_array(l.output_gpu, l.output, l.batch*l.outputs);\n    cuda_push_array(l.delta_gpu, l.delta, l.batch*l.inputs);\n}\n\nvoid backward_detection_layer_gpu(detection_layer l, network net)\n{\n    axpy_gpu(l.batch*l.inputs, 1, l.delta_gpu, 1, net.delta_gpu, 1);\n    //copy_gpu(l.batch*l.inputs, l.delta_gpu, 1, net.delta_gpu, 1);\n}\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/detection_layer.h",
    "content": "#ifndef DETECTION_LAYER_H\n#define DETECTION_LAYER_H\n\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer detection_layer;\n\ndetection_layer make_detection_layer(int batch, int inputs, int n, int size, int classes, int coords, int rescore);\nvoid forward_detection_layer(const detection_layer l, network net);\nvoid backward_detection_layer(const detection_layer l, network net);\n\n#ifdef GPU\nvoid forward_detection_layer_gpu(const detection_layer l, network net);\nvoid backward_detection_layer_gpu(detection_layer l, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/dropout_layer.c",
    "content": "#include \"dropout_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include <stdlib.h>\n#include <stdio.h>\n\ndropout_layer make_dropout_layer(int batch, int inputs, float probability)\n{\n    dropout_layer l = {0};\n    l.type = DROPOUT;\n    l.probability = probability;\n    l.inputs = inputs;\n    l.outputs = inputs;\n    l.batch = batch;\n    l.rand = calloc(inputs*batch, sizeof(float));\n    l.scale = 1./(1.-probability);\n    l.forward = forward_dropout_layer;\n    l.backward = backward_dropout_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_dropout_layer_gpu;\n    l.backward_gpu = backward_dropout_layer_gpu;\n    l.rand_gpu = cuda_make_array(l.rand, inputs*batch);\n    #endif\n    fprintf(stderr, \"dropout       p = %.2f               %4d  ->  %4d\\n\", probability, inputs, inputs);\n    return l;\n}\n\nvoid resize_dropout_layer(dropout_layer *l, int inputs)\n{\n    /* size the buffer with the new input count, matching the GPU buffer below */\n    l->rand = realloc(l->rand, inputs*l->batch*sizeof(float));\n    #ifdef GPU\n    cuda_free(l->rand_gpu);\n\n    l->rand_gpu = cuda_make_array(l->rand, inputs*l->batch);\n    #endif\n}\n\nvoid forward_dropout_layer(dropout_layer l, network net)\n{\n    int i;\n    if (!net.train) return;\n    for(i = 0; i < l.batch * l.inputs; ++i){\n        float r = rand_uniform(0, 1);\n        l.rand[i] = r;\n        if(r < l.probability) net.input[i] = 0;\n        else net.input[i] *= l.scale;\n    }\n}\n\nvoid backward_dropout_layer(dropout_layer l, network net)\n{\n    int i;\n    if(!net.delta) return;\n    for(i = 0; i < l.batch * l.inputs; ++i){\n        float r = l.rand[i];\n        if(r < l.probability) net.delta[i] = 0;\n        else net.delta[i] *= l.scale;\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/dropout_layer.h",
    "content": "#ifndef DROPOUT_LAYER_H\n#define DROPOUT_LAYER_H\n\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer dropout_layer;\n\ndropout_layer make_dropout_layer(int batch, int inputs, float probability);\n\nvoid forward_dropout_layer(dropout_layer l, network net);\nvoid backward_dropout_layer(dropout_layer l, network net);\nvoid resize_dropout_layer(dropout_layer *l, int inputs);\n\n#ifdef GPU\nvoid forward_dropout_layer_gpu(dropout_layer l, network net);\nvoid backward_dropout_layer_gpu(dropout_layer l, network net);\n\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/dropout_layer_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"dropout_layer.h\"\n#include \"cuda.h\"\n#include \"utils.h\"\n}\n\n__global__ void yoloswag420blazeit360noscope(float *input, int size, float *rand, float prob, float scale)\n{\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id < size) input[id] = (rand[id] < prob) ? 0 : input[id]*scale;\n}\n\nvoid forward_dropout_layer_gpu(dropout_layer layer, network net)\n{\n    if (!net.train) return;\n    int size = layer.inputs*layer.batch;\n    cuda_random(layer.rand_gpu, size);\n    /*\n    int i;\n    for(i = 0; i < size; ++i){\n        layer.rand[i] = rand_uniform();\n    }\n    cuda_push_array(layer.rand_gpu, layer.rand, size);\n    */\n\n    yoloswag420blazeit360noscope<<<cuda_gridsize(size), BLOCK>>>(net.input_gpu, size, layer.rand_gpu, layer.probability, layer.scale);\n    check_error(cudaPeekAtLastError());\n}\n\nvoid backward_dropout_layer_gpu(dropout_layer layer, network net)\n{\n    if(!net.delta_gpu) return;\n    int size = layer.inputs*layer.batch;\n\n    yoloswag420blazeit360noscope<<<cuda_gridsize(size), BLOCK>>>(net.delta_gpu, size, layer.rand_gpu, layer.probability, layer.scale);\n    check_error(cudaPeekAtLastError());\n}\n"
  },
  {
    "path": "lightnet/_darknet/gemm.c",
    "content": "#include \"gemm.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include <stdlib.h>\n#include <stdio.h>\n#include <math.h>\n\nvoid gemm_bin(int M, int N, int K, float ALPHA, \n        char  *A, int lda, \n        float *B, int ldb,\n        float *C, int ldc)\n{\n    int i,j,k;\n    for(i = 0; i < M; ++i){\n        for(k = 0; k < K; ++k){\n            char A_PART = A[i*lda+k];\n            if(A_PART){\n                for(j = 0; j < N; ++j){\n                    C[i*ldc+j] += B[k*ldb+j];\n                }\n            } else {\n                for(j = 0; j < N; ++j){\n                    C[i*ldc+j] -= B[k*ldb+j];\n                }\n            }\n        }\n    }\n}\n\nfloat *random_matrix(int rows, int cols)\n{\n    int i;\n    float *m = calloc(rows*cols, sizeof(float));\n    for(i = 0; i < rows*cols; ++i){\n        m[i] = (float)rand()/RAND_MAX;\n    }\n    return m;\n}\n\nvoid time_random_matrix(int TA, int TB, int m, int k, int n)\n{\n    float *a;\n    if(!TA) a = random_matrix(m,k);\n    else a = random_matrix(k,m);\n    int lda = (!TA)?k:m;\n    float *b;\n    if(!TB) b = random_matrix(k,n);\n    else b = random_matrix(n,k);\n    int ldb = (!TB)?n:k;\n\n    float *c = random_matrix(m,n);\n    int i;\n    clock_t start = clock(), end;\n    for(i = 0; i<10; ++i){\n        gemm_cpu(TA,TB,m,n,k,1,a,lda,b,ldb,1,c,n);\n    }\n    end = clock();\n    printf(\"Matrix Multiplication %dx%d * %dx%d, TA=%d, TB=%d: %lf s\\n\",m,k,k,n, TA, TB, (float)(end-start)/CLOCKS_PER_SEC);\n    free(a);\n    free(b);\n    free(c);\n}\n\n\nvoid gemm(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float BETA,\n        float *C, int ldc)\n{\n\n    #ifdef CBLAS\n    gemm_cblas(TA, TB, M, N, K, ALPHA, A, lda, B, ldb, BETA, C, ldc);\n    #endif\n    #ifndef CBLAS\n    gemm_cpu( TA,  TB,  M, N, K, ALPHA,A,lda, B, ldb,BETA,C,ldc);\n    #endif\n}\n\nvoid gemm_nn(int M, int N, int K, float ALPHA, \n       
 float *A, int lda, \n        float *B, int ldb,\n        float *C, int ldc)\n{\n    int i,j,k;\n    #pragma omp parallel for\n    for(i = 0; i < M; ++i){\n        for(k = 0; k < K; ++k){\n            register float A_PART = ALPHA*A[i*lda+k];\n            for(j = 0; j < N; ++j){\n                C[i*ldc+j] += A_PART*B[k*ldb+j];\n            }\n        }\n    }\n}\n\nvoid gemm_nt(int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float *C, int ldc)\n{\n    int i,j,k;\n    #pragma omp parallel for\n    for(i = 0; i < M; ++i){\n        for(j = 0; j < N; ++j){\n            register float sum = 0;\n            for(k = 0; k < K; ++k){\n                sum += ALPHA*A[i*lda+k]*B[j*ldb + k];\n            }\n            C[i*ldc+j] += sum;\n        }\n    }\n}\n\nvoid gemm_tn(int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float *C, int ldc)\n{\n    int i,j,k;\n    #pragma omp parallel for\n    for(i = 0; i < M; ++i){\n        for(k = 0; k < K; ++k){\n            register float A_PART = ALPHA*A[k*lda+i];\n            for(j = 0; j < N; ++j){\n                C[i*ldc+j] += A_PART*B[k*ldb+j];\n            }\n        }\n    }\n}\n\nvoid gemm_tt(int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float *C, int ldc)\n{\n    int i,j,k;\n    #pragma omp parallel for\n    for(i = 0; i < M; ++i){\n        for(j = 0; j < N; ++j){\n            register float sum = 0;\n            for(k = 0; k < K; ++k){\n                sum += ALPHA*A[i+k*lda]*B[k+j*ldb];\n            }\n            C[i*ldc+j] += sum;\n        }\n    }\n}\n\n#ifdef CBLAS\n#ifdef __APPLE__\n#include <Accelerate/Accelerate.h>\n#else\n#include <cblas.h>\n#endif\nvoid gemm_cblas(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float BETA,\n        float *C, int ldc)\n{\n    if(!TA && !TB)\n        
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K, ALPHA, A, lda, B, ldb, BETA, C, ldc);\n    else if(TA && !TB)\n        cblas_sgemm(CblasRowMajor, CblasTrans, CblasNoTrans, M, N, K, ALPHA, A, lda, B, ldb, BETA, C, ldc);\n    else if(!TA && TB)\n        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans, M, N, K, ALPHA, A, lda, B, ldb, BETA, C, ldc);\n    else\n        cblas_sgemm(CblasRowMajor, CblasTrans, CblasTrans, M, N, K, ALPHA, A, lda, B, ldb, BETA, C, ldc);\n}\n#endif\n\nvoid gemm_cpu(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float BETA,\n        float *C, int ldc)\n{\n    //printf(\"cpu: %d %d %d %d %d %f %d %d %f %d\\n\",TA, TB, M, N, K, ALPHA, lda, ldb, BETA, ldc);\n    int i, j;\n    for(i = 0; i < M; ++i){\n        for(j = 0; j < N; ++j){\n            C[i*ldc + j] *= BETA;\n        }\n    }\n    if(!TA && !TB)\n        gemm_nn(M, N, K, ALPHA,A,lda, B, ldb,C,ldc);\n    else if(TA && !TB)\n        gemm_tn(M, N, K, ALPHA,A,lda, B, ldb,C,ldc);\n    else if(!TA && TB)\n        gemm_nt(M, N, K, ALPHA,A,lda, B, ldb,C,ldc);\n    else\n        gemm_tt(M, N, K, ALPHA,A,lda, B, ldb,C,ldc);\n}\n\n#ifdef GPU\n\n#include <math.h>\n\nvoid gemm_gpu(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A_gpu, int lda, \n        float *B_gpu, int ldb,\n        float BETA,\n        float *C_gpu, int ldc)\n{\n    cublasHandle_t handle = blas_handle();\n    cudaError_t status = cublasSgemm(handle, (TB ? CUBLAS_OP_T : CUBLAS_OP_N), \n            (TA ? 
CUBLAS_OP_T : CUBLAS_OP_N), N, M, K, &ALPHA, B_gpu, ldb, A_gpu, lda, &BETA, C_gpu, ldc);\n    check_error(status);\n}\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n\nvoid time_gpu_random_matrix(int TA, int TB, int m, int k, int n)\n{\n    float *a;\n    if(!TA) a = random_matrix(m,k);\n    else a = random_matrix(k,m);\n    int lda = (!TA)?k:m;\n    float *b;\n    if(!TB) b = random_matrix(k,n);\n    else b = random_matrix(n,k);\n    int ldb = (!TB)?n:k;\n\n    float *c = random_matrix(m,n);\n    int i;\n    clock_t start = clock(), end;\n    for(i = 0; i<32; ++i){\n        gemm_gpu(TA,TB,m,n,k,1,a,lda,b,ldb,1,c,n);\n    }\n    end = clock();\n    printf(\"Matrix Multiplication %dx%d * %dx%d, TA=%d, TB=%d: %lf s\\n\",m,k,k,n, TA, TB, (float)(end-start)/CLOCKS_PER_SEC);\n    free(a);\n    free(b);\n    free(c);\n}\n\nvoid time_gpu(int TA, int TB, int m, int k, int n)\n{\n    int iter = 10;\n    float *a = random_matrix(m,k);\n    float *b = random_matrix(k,n);\n\n    int lda = (!TA)?k:m;\n    int ldb = (!TB)?n:k;\n\n    float *c = random_matrix(m,n);\n\n    float *a_cl = cuda_make_array(a, m*k);\n    float *b_cl = cuda_make_array(b, k*n);\n    float *c_cl = cuda_make_array(c, m*n);\n\n    int i;\n    clock_t start = clock(), end;\n    for(i = 0; i<iter; ++i){\n        gemm_gpu(TA,TB,m,n,k,1,a_cl,lda,b_cl,ldb,1,c_cl,n);\n        cudaThreadSynchronize();\n    }\n    double flop = ((double)m)*n*(2.*k + 2.)*iter;\n    double gflop = flop/pow(10., 9);\n    end = clock();\n    double seconds = sec(end-start);\n    printf(\"Matrix Multiplication %dx%d * %dx%d, TA=%d, TB=%d: %lf s, %lf GFLOPS\\n\",m,k,k,n, TA, TB, seconds, gflop/seconds);\n    cuda_free(a_cl);\n    cuda_free(b_cl);\n    cuda_free(c_cl);\n    free(a);\n    free(b);\n    free(c);\n}\n\n\nvoid test_gpu_accuracy(int TA, int TB, int m, int k, int n)\n{\n    srand(0);\n    float *a;\n    if(!TA) a = random_matrix(m,k);\n    else a = random_matrix(k,m);\n    int lda = 
(!TA)?k:m;\n    float *b;\n    if(!TB) b = random_matrix(k,n);\n    else b = random_matrix(n,k);\n    int ldb = (!TB)?n:k;\n\n    float *c = random_matrix(m,n);\n    float *c_gpu = random_matrix(m,n);\n    memset(c, 0, m*n*sizeof(float));\n    memset(c_gpu, 0, m*n*sizeof(float));\n    int i;\n    //pm(m,k,b);\n    gemm_gpu(TA,TB,m,n,k,1,a,lda,b,ldb,1,c_gpu,n);\n    //printf(\"GPU\\n\");\n    //pm(m, n, c_gpu);\n\n    gemm_cpu(TA,TB,m,n,k,1,a,lda,b,ldb,1,c,n);\n    //printf(\"\\n\\nCPU\\n\");\n    //pm(m, n, c);\n    double sse = 0;\n    for(i = 0; i < m*n; ++i) {\n        //printf(\"%f %f\\n\", c[i], c_gpu[i]);\n        sse += pow(c[i]-c_gpu[i], 2);\n    }\n    printf(\"Matrix Multiplication %dx%d * %dx%d, TA=%d, TB=%d: %g SSE\\n\",m,k,k,n, TA, TB, sse/(m*n));\n    free(a);\n    free(b);\n    free(c);\n    free(c_gpu);\n}\n\nint test_gpu_blas()\n{\n    /*\n       test_gpu_accuracy(0,0,10,576,75); \n\n       test_gpu_accuracy(0,0,17,10,10); \n       test_gpu_accuracy(1,0,17,10,10); \n       test_gpu_accuracy(0,1,17,10,10); \n       test_gpu_accuracy(1,1,17,10,10); \n\n       test_gpu_accuracy(0,0,1000,10,100); \n       test_gpu_accuracy(1,0,1000,10,100); \n       test_gpu_accuracy(0,1,1000,10,100); \n       test_gpu_accuracy(1,1,1000,10,100); \n\n       test_gpu_accuracy(0,0,10,10,10); \n\n       time_gpu(0,0,64,2916,363); \n       time_gpu(0,0,64,2916,363); \n       time_gpu(0,0,64,2916,363); \n       time_gpu(0,0,192,729,1600); \n       time_gpu(0,0,384,196,1728); \n       time_gpu(0,0,256,196,3456); \n       time_gpu(0,0,256,196,2304); \n       time_gpu(0,0,128,4096,12544); \n       time_gpu(0,0,128,4096,4096); \n     */\n    time_gpu(0,0,64,75,12544); \n    time_gpu(0,0,64,75,12544); \n    time_gpu(0,0,64,75,12544); \n    time_gpu(0,0,64,576,12544); \n    time_gpu(0,0,256,2304,784); \n    time_gpu(1,1,2304,256,784); \n    time_gpu(0,0,512,4608,196); \n    time_gpu(1,1,4608,512,196); \n\n    return 0;\n}\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/gemm.h",
    "content": "#ifndef GEMM_H\n#define GEMM_H\n\nvoid gemm_bin(int M, int N, int K, float ALPHA, \n        char  *A, int lda, \n        float *B, int ldb,\n        float *C, int ldc);\n\nvoid gemm(int TA, int TB, int M, int N, int K, float ALPHA, \n                    float *A, int lda, \n                    float *B, int ldb,\n                    float BETA,\n                    float *C, int ldc);\n\nvoid gemm_cpu(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float BETA,\n        float *C, int ldc);\n\n\n#ifdef CBLAS\nvoid gemm_cblas(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A, int lda, \n        float *B, int ldb,\n        float BETA,\n        float *C, int ldc);\n#endif\n\n#ifdef GPU\nvoid gemm_gpu(int TA, int TB, int M, int N, int K, float ALPHA, \n        float *A_gpu, int lda, \n        float *B_gpu, int ldb,\n        float BETA,\n        float *C_gpu, int ldc);\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/gru_layer.c",
    "content": "#include \"gru_layer.h\"\n#include \"connected_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nstatic void increment_layer(layer *l, int steps)\n{\n    int num = l->outputs*l->batch*steps;\n    l->output += num;\n    l->delta += num;\n    l->x += num;\n    l->x_norm += num;\n\n#ifdef GPU\n    l->output_gpu += num;\n    l->delta_gpu += num;\n    l->x_gpu += num;\n    l->x_norm_gpu += num;\n#endif\n}\n\nlayer make_gru_layer(int batch, int inputs, int outputs, int steps, int batch_normalize, int adam)\n{\n    fprintf(stderr, \"GRU Layer: %d inputs, %d outputs\\n\", inputs, outputs);\n    batch = batch / steps;\n    layer l = {0};\n    l.batch = batch;\n    l.type = GRU;\n    l.steps = steps;\n    l.inputs = inputs;\n\n    l.uz = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.uz) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.uz->batch = batch;\n\n    l.wz = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wz) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, batch_normalize, adam);\n    l.wz->batch = batch;\n\n    l.ur = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.ur) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.ur->batch = batch;\n\n    l.wr = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wr) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, batch_normalize, adam);\n    l.wr->batch = batch;\n\n\n\n    l.uh = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.uh) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.uh->batch = batch;\n\n    l.wh = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wh) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, 
batch_normalize, adam);\n    l.wh->batch = batch;\n\n    l.batch_normalize = batch_normalize;\n\n\n    l.outputs = outputs;\n    l.output = calloc(outputs*batch*steps, sizeof(float));\n    l.delta = calloc(outputs*batch*steps, sizeof(float));\n    l.state = calloc(outputs*batch, sizeof(float));\n    l.prev_state = calloc(outputs*batch, sizeof(float));\n    l.forgot_state = calloc(outputs*batch, sizeof(float));\n    l.forgot_delta = calloc(outputs*batch, sizeof(float));\n\n    l.r_cpu = calloc(outputs*batch, sizeof(float));\n    l.z_cpu = calloc(outputs*batch, sizeof(float));\n    l.h_cpu = calloc(outputs*batch, sizeof(float));\n\n    l.forward = forward_gru_layer;\n    l.backward = backward_gru_layer;\n    l.update = update_gru_layer;\n\n#ifdef GPU\n    l.forward_gpu = forward_gru_layer_gpu;\n    l.backward_gpu = backward_gru_layer_gpu;\n    l.update_gpu = update_gru_layer_gpu;\n\n    l.forgot_state_gpu = cuda_make_array(0, batch*outputs);\n    l.forgot_delta_gpu = cuda_make_array(0, batch*outputs);\n    l.prev_state_gpu = cuda_make_array(0, batch*outputs);\n    l.state_gpu = cuda_make_array(0, batch*outputs);\n    l.output_gpu = cuda_make_array(0, batch*outputs*steps);\n    l.delta_gpu = cuda_make_array(0, batch*outputs*steps);\n    l.r_gpu = cuda_make_array(0, batch*outputs);\n    l.z_gpu = cuda_make_array(0, batch*outputs);\n    l.h_gpu = cuda_make_array(0, batch*outputs);\n\n#ifdef CUDNN\n    cudnnSetTensor4dDescriptor(l.uz->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.uz->out_c, l.uz->out_h, l.uz->out_w); \n    cudnnSetTensor4dDescriptor(l.uh->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.uh->out_c, l.uh->out_h, l.uh->out_w); \n    cudnnSetTensor4dDescriptor(l.ur->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.ur->out_c, l.ur->out_h, l.ur->out_w); \n    cudnnSetTensor4dDescriptor(l.wz->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wz->out_c, l.wz->out_h, l.wz->out_w); \n    
cudnnSetTensor4dDescriptor(l.wh->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wh->out_c, l.wh->out_h, l.wh->out_w); \n    cudnnSetTensor4dDescriptor(l.wr->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wr->out_c, l.wr->out_h, l.wr->out_w); \n#endif\n#endif\n\n    return l;\n}\n\nvoid update_gru_layer(layer l, update_args a)\n{\n    update_connected_layer(*(l.ur), a);\n    update_connected_layer(*(l.uz), a);\n    update_connected_layer(*(l.uh), a);\n    update_connected_layer(*(l.wr), a);\n    update_connected_layer(*(l.wz), a);\n    update_connected_layer(*(l.wh), a);\n}\n\nvoid forward_gru_layer(layer l, network net)\n{\n    network s = net;\n    s.train = net.train;\n    int i;\n    layer uz = *(l.uz);\n    layer ur = *(l.ur);\n    layer uh = *(l.uh);\n\n    layer wz = *(l.wz);\n    layer wr = *(l.wr);\n    layer wh = *(l.wh);\n\n    fill_cpu(l.outputs * l.batch * l.steps, 0, uz.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, ur.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, uh.delta, 1);\n\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wz.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wr.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wh.delta, 1);\n    if(net.train) {\n        fill_cpu(l.outputs * l.batch * l.steps, 0, l.delta, 1);\n        copy_cpu(l.outputs*l.batch, l.state, 1, l.prev_state, 1);\n    }\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input = l.state;\n        forward_connected_layer(wz, s);\n        forward_connected_layer(wr, s);\n\n        s.input = net.input;\n        forward_connected_layer(uz, s);\n        forward_connected_layer(ur, s);\n        forward_connected_layer(uh, s);\n\n\n        copy_cpu(l.outputs*l.batch, uz.output, 1, l.z_cpu, 1);\n        axpy_cpu(l.outputs*l.batch, 1, wz.output, 1, l.z_cpu, 1);\n\n        copy_cpu(l.outputs*l.batch, ur.output, 1, l.r_cpu, 1);\n        axpy_cpu(l.outputs*l.batch, 1, wr.output, 1, l.r_cpu, 1);\n\n        
activate_array(l.z_cpu, l.outputs*l.batch, LOGISTIC);\n        activate_array(l.r_cpu, l.outputs*l.batch, LOGISTIC);\n\n        copy_cpu(l.outputs*l.batch, l.state, 1, l.forgot_state, 1);\n        mul_cpu(l.outputs*l.batch, l.r_cpu, 1, l.forgot_state, 1);\n\n        s.input = l.forgot_state;\n        forward_connected_layer(wh, s);\n\n        copy_cpu(l.outputs*l.batch, uh.output, 1, l.h_cpu, 1);\n        axpy_cpu(l.outputs*l.batch, 1, wh.output, 1, l.h_cpu, 1);\n\n        if(l.tanh){\n            activate_array(l.h_cpu, l.outputs*l.batch, TANH);\n        } else {\n            activate_array(l.h_cpu, l.outputs*l.batch, LOGISTIC);\n        }\n\n        weighted_sum_cpu(l.state, l.h_cpu, l.z_cpu, l.outputs*l.batch, l.output);\n\n        copy_cpu(l.outputs*l.batch, l.output, 1, l.state, 1);\n\n        net.input += l.inputs*l.batch;\n        l.output += l.outputs*l.batch;\n        increment_layer(&uz, 1);\n        increment_layer(&ur, 1);\n        increment_layer(&uh, 1);\n\n        increment_layer(&wz, 1);\n        increment_layer(&wr, 1);\n        increment_layer(&wh, 1);\n    }\n}\n\nvoid backward_gru_layer(layer l, network net)\n{\n}\n\n#ifdef GPU\n\nvoid pull_gru_layer(layer l)\n{\n}\n\nvoid push_gru_layer(layer l)\n{\n}\n\nvoid update_gru_layer_gpu(layer l, update_args a)\n{\n    update_connected_layer_gpu(*(l.ur), a);\n    update_connected_layer_gpu(*(l.uz), a);\n    update_connected_layer_gpu(*(l.uh), a);\n    update_connected_layer_gpu(*(l.wr), a);\n    update_connected_layer_gpu(*(l.wz), a);\n    update_connected_layer_gpu(*(l.wh), a);\n}\n\nvoid forward_gru_layer_gpu(layer l, network net)\n{\n    network s = {0};\n    s.train = net.train;\n    int i;\n    layer uz = *(l.uz);\n    layer ur = *(l.ur);\n    layer uh = *(l.uh);\n\n    layer wz = *(l.wz);\n    layer wr = *(l.wr);\n    layer wh = *(l.wh);\n\n    fill_gpu(l.outputs * l.batch * l.steps, 0, uz.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, ur.delta_gpu, 1);\n    fill_gpu(l.outputs * 
l.batch * l.steps, 0, uh.delta_gpu, 1);\n\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wz.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wr.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wh.delta_gpu, 1);\n    if(net.train) {\n        fill_gpu(l.outputs * l.batch * l.steps, 0, l.delta_gpu, 1);\n        copy_gpu(l.outputs*l.batch, l.state_gpu, 1, l.prev_state_gpu, 1);\n    }\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input_gpu = l.state_gpu;\n        forward_connected_layer_gpu(wz, s);\n        forward_connected_layer_gpu(wr, s);\n\n        s.input_gpu = net.input_gpu;\n        forward_connected_layer_gpu(uz, s);\n        forward_connected_layer_gpu(ur, s);\n        forward_connected_layer_gpu(uh, s);\n\n        copy_gpu(l.outputs*l.batch, uz.output_gpu, 1, l.z_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, wz.output_gpu, 1, l.z_gpu, 1);\n\n        copy_gpu(l.outputs*l.batch, ur.output_gpu, 1, l.r_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, wr.output_gpu, 1, l.r_gpu, 1);\n\n        activate_array_gpu(l.z_gpu, l.outputs*l.batch, LOGISTIC);\n        activate_array_gpu(l.r_gpu, l.outputs*l.batch, LOGISTIC);\n\n        copy_gpu(l.outputs*l.batch, l.state_gpu, 1, l.forgot_state_gpu, 1);\n        mul_gpu(l.outputs*l.batch, l.r_gpu, 1, l.forgot_state_gpu, 1);\n\n        s.input_gpu = l.forgot_state_gpu;\n        forward_connected_layer_gpu(wh, s);\n\n        copy_gpu(l.outputs*l.batch, uh.output_gpu, 1, l.h_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, wh.output_gpu, 1, l.h_gpu, 1);\n\n        if(l.tanh){\n            activate_array_gpu(l.h_gpu, l.outputs*l.batch, TANH);\n        } else {\n            activate_array_gpu(l.h_gpu, l.outputs*l.batch, LOGISTIC);\n        }\n\n        weighted_sum_gpu(l.state_gpu, l.h_gpu, l.z_gpu, l.outputs*l.batch, l.output_gpu);\n        copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.state_gpu, 1);\n\n        net.input_gpu += l.inputs*l.batch;\n        l.output_gpu += l.outputs*l.batch;\n   
     increment_layer(&uz, 1);\n        increment_layer(&ur, 1);\n        increment_layer(&uh, 1);\n\n        increment_layer(&wz, 1);\n        increment_layer(&wr, 1);\n        increment_layer(&wh, 1);\n    }\n}\n\nvoid backward_gru_layer_gpu(layer l, network net)\n{\n    network s = {0};\n    s.train = net.train;\n    int i;\n    layer uz = *(l.uz);\n    layer ur = *(l.ur);\n    layer uh = *(l.uh);\n\n    layer wz = *(l.wz);\n    layer wr = *(l.wr);\n    layer wh = *(l.wh);\n\n    increment_layer(&uz, l.steps - 1);\n    increment_layer(&ur, l.steps - 1);\n    increment_layer(&uh, l.steps - 1);\n\n    increment_layer(&wz, l.steps - 1);\n    increment_layer(&wr, l.steps - 1);\n    increment_layer(&wh, l.steps - 1);\n\n    net.input_gpu += l.inputs*l.batch*(l.steps-1);\n    if(net.delta_gpu) net.delta_gpu += l.inputs*l.batch*(l.steps-1);\n    l.output_gpu += l.outputs*l.batch*(l.steps-1);\n    l.delta_gpu += l.outputs*l.batch*(l.steps-1);\n    float *end_state = l.output_gpu;\n    for (i = l.steps-1; i >= 0; --i) {\n        if(i != 0) copy_gpu(l.outputs*l.batch, l.output_gpu - l.outputs*l.batch, 1, l.state_gpu, 1);\n        else copy_gpu(l.outputs*l.batch, l.prev_state_gpu, 1, l.state_gpu, 1);\n        float *prev_delta_gpu = (i == 0) ? 
0 : l.delta_gpu - l.outputs*l.batch;\n\n        copy_gpu(l.outputs*l.batch, uz.output_gpu, 1, l.z_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, wz.output_gpu, 1, l.z_gpu, 1);\n\n        copy_gpu(l.outputs*l.batch, ur.output_gpu, 1, l.r_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, wr.output_gpu, 1, l.r_gpu, 1);\n\n        activate_array_gpu(l.z_gpu, l.outputs*l.batch, LOGISTIC);\n        activate_array_gpu(l.r_gpu, l.outputs*l.batch, LOGISTIC);\n\n        copy_gpu(l.outputs*l.batch, uh.output_gpu, 1, l.h_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, wh.output_gpu, 1, l.h_gpu, 1);\n\n        if(l.tanh){\n            activate_array_gpu(l.h_gpu, l.outputs*l.batch, TANH);\n        } else {\n            activate_array_gpu(l.h_gpu, l.outputs*l.batch, LOGISTIC);\n        }\n\n        weighted_delta_gpu(l.state_gpu, l.h_gpu, l.z_gpu, prev_delta_gpu, uh.delta_gpu, uz.delta_gpu, l.outputs*l.batch, l.delta_gpu);\n\n        if(l.tanh){\n            gradient_array_gpu(l.h_gpu, l.outputs*l.batch, TANH, uh.delta_gpu);\n        } else {\n            gradient_array_gpu(l.h_gpu, l.outputs*l.batch, LOGISTIC, uh.delta_gpu);\n        }\n\n        copy_gpu(l.outputs*l.batch, uh.delta_gpu, 1, wh.delta_gpu, 1);\n\n        copy_gpu(l.outputs*l.batch, l.state_gpu, 1, l.forgot_state_gpu, 1);\n        mul_gpu(l.outputs*l.batch, l.r_gpu, 1, l.forgot_state_gpu, 1);\n        fill_gpu(l.outputs*l.batch, 0, l.forgot_delta_gpu, 1);\n\n        s.input_gpu = l.forgot_state_gpu;\n        s.delta_gpu = l.forgot_delta_gpu;\n\n        backward_connected_layer_gpu(wh, s);\n        if(prev_delta_gpu) mult_add_into_gpu(l.outputs*l.batch, l.forgot_delta_gpu, l.r_gpu, prev_delta_gpu);\n        mult_add_into_gpu(l.outputs*l.batch, l.forgot_delta_gpu, l.state_gpu, ur.delta_gpu);\n\n        gradient_array_gpu(l.r_gpu, l.outputs*l.batch, LOGISTIC, ur.delta_gpu);\n        copy_gpu(l.outputs*l.batch, ur.delta_gpu, 1, wr.delta_gpu, 1);\n\n        gradient_array_gpu(l.z_gpu, l.outputs*l.batch, LOGISTIC, 
uz.delta_gpu);\n        copy_gpu(l.outputs*l.batch, uz.delta_gpu, 1, wz.delta_gpu, 1);\n\n        s.input_gpu = l.state_gpu;\n        s.delta_gpu = prev_delta_gpu;\n\n        backward_connected_layer_gpu(wr, s);\n        backward_connected_layer_gpu(wz, s);\n\n        s.input_gpu = net.input_gpu;\n        s.delta_gpu = net.delta_gpu;\n\n        backward_connected_layer_gpu(uh, s);\n        backward_connected_layer_gpu(ur, s);\n        backward_connected_layer_gpu(uz, s);\n\n\n        net.input_gpu -= l.inputs*l.batch;\n        if(net.delta_gpu) net.delta_gpu -= l.inputs*l.batch;\n        l.output_gpu -= l.outputs*l.batch;\n        l.delta_gpu -= l.outputs*l.batch;\n        increment_layer(&uz, -1);\n        increment_layer(&ur, -1);\n        increment_layer(&uh, -1);\n\n        increment_layer(&wz, -1);\n        increment_layer(&wr, -1);\n        increment_layer(&wh, -1);\n    }\n    copy_gpu(l.outputs*l.batch, end_state, 1, l.state_gpu, 1);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/gru_layer.h",
    "content": "#ifndef GRU_LAYER_H\n#define GRU_LAYER_H\n\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_gru_layer(int batch, int inputs, int outputs, int steps, int batch_normalize, int adam);\n\nvoid forward_gru_layer(layer l, network state);\nvoid backward_gru_layer(layer l, network state);\nvoid update_gru_layer(layer l, update_args a);\n\n#ifdef GPU\nvoid forward_gru_layer_gpu(layer l, network state);\nvoid backward_gru_layer_gpu(layer l, network state);\nvoid update_gru_layer_gpu(layer l, update_args a);\nvoid push_gru_layer(layer l);\nvoid pull_gru_layer(layer l);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/im2col.c",
    "content": "#include \"im2col.h\"\n#include <stdio.h>\n\n/* static inline: a plain `inline` definition in a .c file is not an external\n   definition under C99, which can cause link errors when a call is not inlined. */\nstatic inline float im2col_get_pixel(float *im, int height, int width, int channels,\n                        int row, int col, int channel, int pad)\n{\n    row -= pad;\n    col -= pad;\n\n    if (row < 0 || col < 0 ||\n        row >= height || col >= width) return 0;\n    return im[col + width*(row + height*channel)];\n}\n\n//From Berkeley Vision's Caffe!\n//https://github.com/BVLC/caffe/blob/master/LICENSE\nvoid im2col_cpu(float* data_im,\n     int channels, int height, int width,\n     int ksize, int stride, int pad, float* data_col)\n{\n    int c,h,w;\n    int height_col = (height + 2*pad - ksize) / stride + 1;\n    int width_col = (width + 2*pad - ksize) / stride + 1;\n\n    int channels_col = channels * ksize * ksize;\n    for (c = 0; c < channels_col; ++c) {\n        int w_offset = c % ksize;\n        int h_offset = (c / ksize) % ksize;\n        int c_im = c / ksize / ksize;\n        for (h = 0; h < height_col; ++h) {\n            for (w = 0; w < width_col; ++w) {\n                int im_row = h_offset + h * stride;\n                int im_col = w_offset + w * stride;\n                int col_index = (c * height_col + h) * width_col + w;\n                data_col[col_index] = im2col_get_pixel(data_im, height, width, channels,\n                        im_row, im_col, c_im, pad);\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "lightnet/_darknet/im2col.h",
    "content": "#ifndef IM2COL_H\n#define IM2COL_H\n\nvoid im2col_cpu(float* data_im,\n        int channels, int height, int width,\n        int ksize, int stride, int pad, float* data_col);\n\n#ifdef GPU\n\nvoid im2col_gpu(float *im,\n        int channels, int height, int width,\n        int ksize, int stride, int pad, float *data_col);\n\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/im2col_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"im2col.h\"\n#include \"cuda.h\"\n}\n\n// src: https://github.com/BVLC/caffe/blob/master/src/caffe/util/im2col.cu\n// You may also want to read: https://github.com/BVLC/caffe/blob/master/LICENSE\n\n__global__ void im2col_gpu_kernel(const int n, const float* data_im,\n        const int height, const int width, const int ksize,\n        const int pad,\n        const int stride,\n        const int height_col, const int width_col,\n        float *data_col) {\n    int index = blockIdx.x*blockDim.x+threadIdx.x;\n    for(; index < n; index += blockDim.x*gridDim.x){\n        int w_out = index % width_col;\n        int h_index = index / width_col;\n        int h_out = h_index % height_col;\n        int channel_in = h_index / height_col;\n        int channel_out = channel_in * ksize * ksize;\n        int h_in = h_out * stride - pad;\n        int w_in = w_out * stride - pad;\n        float* data_col_ptr = data_col;\n        data_col_ptr += (channel_out * height_col + h_out) * width_col + w_out;\n        const float* data_im_ptr = data_im;\n        data_im_ptr += (channel_in * height + h_in) * width + w_in;\n        for (int i = 0; i < ksize; ++i) {\n            for (int j = 0; j < ksize; ++j) {\n                int h = h_in + i;\n                int w = w_in + j;\n\n                *data_col_ptr = (h >= 0 && w >= 0 && h < height && w < width) ?\n                    data_im_ptr[i * width + j] : 0;\n\n                //*data_col_ptr = data_im_ptr[ii * width + jj];\n\n                data_col_ptr += height_col * width_col;\n            }\n        }\n    }\n}\n\nvoid im2col_gpu(float *im,\n         int channels, int height, int width,\n         int ksize, int stride, int pad, float *data_col){\n    // We are going to launch channels * height_col * width_col kernels, each\n    // kernel responsible for copying a single-channel grid.\n    int height_col = 
(height + 2 * pad - ksize) / stride + 1;\n    int width_col = (width + 2 * pad - ksize) / stride + 1;\n    int num_kernels = channels * height_col * width_col;\n    im2col_gpu_kernel<<<(num_kernels+BLOCK-1)/BLOCK,\n        BLOCK>>>(\n                num_kernels, im, height, width, ksize, pad,\n                stride, height_col,\n                width_col, data_col);\n}\n"
  },
  {
    "path": "lightnet/_darknet/image.c",
    "content": "#include \"image.h\"\n#include \"utils.h\"\n#include \"blas.h\"\n#include \"cuda.h\"\n#include <stdio.h>\n#include <math.h>\n\n#define STB_IMAGE_IMPLEMENTATION\n#include \"stb_image.h\"\n#define STB_IMAGE_WRITE_IMPLEMENTATION\n#include \"stb_image_write.h\"\n\nint windows = 0;\n\nfloat colors[6][3] = { {1,0,1}, {0,0,1},{0,1,1},{0,1,0},{1,1,0},{1,0,0} };\n\nfloat get_color(int c, int x, int max)\n{\n    float ratio = ((float)x/max)*5;\n    int i = floor(ratio);\n    int j = ceil(ratio);\n    ratio -= i;\n    float r = (1-ratio) * colors[i][c] + ratio*colors[j][c];\n    //printf(\"%f\\n\", r);\n    return r;\n}\n\nimage mask_to_rgb(image mask)\n{\n    int n = mask.c;\n    image im = make_image(mask.w, mask.h, 3);\n    int i, j;\n    for(j = 0; j < n; ++j){\n        int offset = j*123457 % n;\n        float red = get_color(2,offset,n);\n        float green = get_color(1,offset,n);\n        float blue = get_color(0,offset,n);\n        for(i = 0; i < im.w*im.h; ++i){\n            im.data[i + 0*im.w*im.h] += mask.data[j*im.h*im.w + i]*red;\n            im.data[i + 1*im.w*im.h] += mask.data[j*im.h*im.w + i]*green;\n            im.data[i + 2*im.w*im.h] += mask.data[j*im.h*im.w + i]*blue;\n        }\n    }\n    return im;\n}\n\nstatic float get_pixel(image m, int x, int y, int c)\n{\n    assert(x < m.w && y < m.h && c < m.c);\n    return m.data[c*m.h*m.w + y*m.w + x];\n}\nstatic float get_pixel_extend(image m, int x, int y, int c)\n{\n    if(x < 0 || x >= m.w || y < 0 || y >= m.h) return 0;\n    /*\n    if(x < 0) x = 0;\n    if(x >= m.w) x = m.w-1;\n    if(y < 0) y = 0;\n    if(y >= m.h) y = m.h-1;\n    */\n    if(c < 0 || c >= m.c) return 0;\n    return get_pixel(m, x, y, c);\n}\nstatic void set_pixel(image m, int x, int y, int c, float val)\n{\n    if (x < 0 || y < 0 || c < 0 || x >= m.w || y >= m.h || c >= m.c) return;\n    assert(x < m.w && y < m.h && c < m.c);\n    m.data[c*m.h*m.w + y*m.w + x] = val;\n}\nstatic void add_pixel(image m, int x, int y, int 
c, float val)\n{\n    assert(x < m.w && y < m.h && c < m.c);\n    m.data[c*m.h*m.w + y*m.w + x] += val;\n}\n\nstatic float bilinear_interpolate(image im, float x, float y, int c)\n{\n    int ix = (int) floorf(x);\n    int iy = (int) floorf(y);\n\n    float dx = x - ix;\n    float dy = y - iy;\n\n    float val = (1-dy) * (1-dx) * get_pixel_extend(im, ix, iy, c) + \n        dy     * (1-dx) * get_pixel_extend(im, ix, iy+1, c) + \n        (1-dy) *   dx   * get_pixel_extend(im, ix+1, iy, c) +\n        dy     *   dx   * get_pixel_extend(im, ix+1, iy+1, c);\n    return val;\n}\n\n\nvoid composite_image(image source, image dest, int dx, int dy)\n{\n    int x,y,k;\n    for(k = 0; k < source.c; ++k){\n        for(y = 0; y < source.h; ++y){\n            for(x = 0; x < source.w; ++x){\n                float val = get_pixel(source, x, y, k);\n                float val2 = get_pixel_extend(dest, dx+x, dy+y, k);\n                set_pixel(dest, dx+x, dy+y, k, val * val2);\n            }\n        }\n    }\n}\n\nimage border_image(image a, int border)\n{\n    image b = make_image(a.w + 2*border, a.h + 2*border, a.c);\n    int x,y,k;\n    for(k = 0; k < b.c; ++k){\n        for(y = 0; y < b.h; ++y){\n            for(x = 0; x < b.w; ++x){\n                float val = get_pixel_extend(a, x - border, y - border, k);\n                if(x - border < 0 || x - border >= a.w || y - border < 0 || y - border >= a.h) val = 1;\n                set_pixel(b, x, y, k, val);\n            }\n        }\n    }\n    return b;\n}\n\nimage tile_images(image a, image b, int dx)\n{\n    if(a.w == 0) return copy_image(b);\n    image c = make_image(a.w + b.w + dx, (a.h > b.h) ? a.h : b.h, (a.c > b.c) ? 
a.c : b.c);\n    fill_cpu(c.w*c.h*c.c, 1, c.data, 1);\n    embed_image(a, c, 0, 0); \n    composite_image(b, c, a.w + dx, 0);\n    return c;\n}\n\nimage get_label(image **characters, char *string, int size)\n{\n    if(size > 7) size = 7;\n    image label = make_empty_image(0,0,0);\n    while(*string){\n        image l = characters[size][(int)*string];\n        image n = tile_images(label, l, -size - 1 + (size+1)/2);\n        free_image(label);\n        label = n;\n        ++string;\n    }\n    image b = border_image(label, label.h*.25);\n    free_image(label);\n    return b;\n}\n\nvoid draw_label(image a, int r, int c, image label, const float *rgb)\n{\n    int w = label.w;\n    int h = label.h;\n    if (r - h >= 0) r = r - h;\n\n    int i, j, k;\n    for(j = 0; j < h && j + r < a.h; ++j){\n        for(i = 0; i < w && i + c < a.w; ++i){\n            for(k = 0; k < label.c; ++k){\n                float val = get_pixel(label, i, j, k);\n                set_pixel(a, i+c, j+r, k, rgb[k] * val);\n            }\n        }\n    }\n}\n\nvoid draw_box(image a, int x1, int y1, int x2, int y2, float r, float g, float b)\n{\n    //normalize_image(a);\n    int i;\n    if(x1 < 0) x1 = 0;\n    if(x1 >= a.w) x1 = a.w-1;\n    if(x2 < 0) x2 = 0;\n    if(x2 >= a.w) x2 = a.w-1;\n\n    if(y1 < 0) y1 = 0;\n    if(y1 >= a.h) y1 = a.h-1;\n    if(y2 < 0) y2 = 0;\n    if(y2 >= a.h) y2 = a.h-1;\n\n    for(i = x1; i <= x2; ++i){\n        a.data[i + y1*a.w + 0*a.w*a.h] = r;\n        a.data[i + y2*a.w + 0*a.w*a.h] = r;\n\n        a.data[i + y1*a.w + 1*a.w*a.h] = g;\n        a.data[i + y2*a.w + 1*a.w*a.h] = g;\n\n        a.data[i + y1*a.w + 2*a.w*a.h] = b;\n        a.data[i + y2*a.w + 2*a.w*a.h] = b;\n    }\n    for(i = y1; i <= y2; ++i){\n        a.data[x1 + i*a.w + 0*a.w*a.h] = r;\n        a.data[x2 + i*a.w + 0*a.w*a.h] = r;\n\n        a.data[x1 + i*a.w + 1*a.w*a.h] = g;\n        a.data[x2 + i*a.w + 1*a.w*a.h] = g;\n\n        a.data[x1 + i*a.w + 2*a.w*a.h] = b;\n        a.data[x2 + i*a.w + 
2*a.w*a.h] = b;\n    }\n}\n\nvoid draw_box_width(image a, int x1, int y1, int x2, int y2, int w, float r, float g, float b)\n{\n    int i;\n    for(i = 0; i < w; ++i){\n        draw_box(a, x1+i, y1+i, x2-i, y2-i, r, g, b);\n    }\n}\n\nvoid draw_bbox(image a, box bbox, int w, float r, float g, float b)\n{\n    int left  = (bbox.x-bbox.w/2)*a.w;\n    int right = (bbox.x+bbox.w/2)*a.w;\n    int top   = (bbox.y-bbox.h/2)*a.h;\n    int bot   = (bbox.y+bbox.h/2)*a.h;\n\n    int i;\n    for(i = 0; i < w; ++i){\n        draw_box(a, left+i, top+i, right-i, bot-i, r, g, b);\n    }\n}\n\nimage **load_alphabet()\n{\n    int i, j;\n    const int nsize = 8;\n    image **alphabets = calloc(nsize, sizeof(image));\n    for(j = 0; j < nsize; ++j){\n        alphabets[j] = calloc(128, sizeof(image));\n        for(i = 32; i < 127; ++i){\n            char buff[256];\n            sprintf(buff, \"data/labels/%d_%d.png\", i, j);\n            alphabets[j][i] = load_image_color(buff, 0, 0);\n        }\n    }\n    return alphabets;\n}\n\nvoid draw_detections(image im, int num, float thresh, box *boxes, float **probs, float **masks, char **names, image **alphabet, int classes)\n{\n    int i,j;\n\n    for(i = 0; i < num; ++i){\n        char labelstr[4096] = {0};\n        int class = -1;\n        for(j = 0; j < classes; ++j){\n            if (probs[i][j] > thresh){\n                if (class < 0) {\n                    strcat(labelstr, names[j]);\n                    class = j;\n                } else {\n                    strcat(labelstr, \", \");\n                    strcat(labelstr, names[j]);\n                }\n                printf(\"%s: %.0f%%\\n\", names[j], probs[i][j]*100);\n            }\n        }\n        if(class >= 0){\n            int width = im.h * .006;\n\n            /*\n               if(0){\n               width = pow(prob, 1./2.)*10+1;\n               alphabet = 0;\n               }\n             */\n\n            //printf(\"%d %s: %.0f%%\\n\", i, names[class], 
prob*100);\n            int offset = class*123457 % classes;\n            float red = get_color(2,offset,classes);\n            float green = get_color(1,offset,classes);\n            float blue = get_color(0,offset,classes);\n            float rgb[3];\n\n            //width = prob*20+2;\n\n            rgb[0] = red;\n            rgb[1] = green;\n            rgb[2] = blue;\n            box b = boxes[i];\n\n            int left  = (b.x-b.w/2.)*im.w;\n            int right = (b.x+b.w/2.)*im.w;\n            int top   = (b.y-b.h/2.)*im.h;\n            int bot   = (b.y+b.h/2.)*im.h;\n\n            if(left < 0) left = 0;\n            if(right > im.w-1) right = im.w-1;\n            if(top < 0) top = 0;\n            if(bot > im.h-1) bot = im.h-1;\n\n            draw_box_width(im, left, top, right, bot, width, red, green, blue);\n            if (alphabet) {\n                image label = get_label(alphabet, labelstr, (im.h*.03)/10);\n                draw_label(im, top + width, left, label, rgb);\n                free_image(label);\n            }\n            if (masks){\n                image mask = float_to_image(14, 14, 1, masks[i]);\n                image resized_mask = resize_image(mask, b.w*im.w, b.h*im.h);\n                image tmask = threshold_image(resized_mask, .5);\n                embed_image(tmask, im, left, top);\n                free_image(mask);\n                free_image(resized_mask);\n                free_image(tmask);\n            }\n        }\n    }\n}\n\nvoid transpose_image(image im)\n{\n    assert(im.w == im.h);\n    int n, m;\n    int c;\n    for(c = 0; c < im.c; ++c){\n        for(n = 0; n < im.w-1; ++n){\n            for(m = n + 1; m < im.w; ++m){\n                float swap = im.data[m + im.w*(n + im.h*c)];\n                im.data[m + im.w*(n + im.h*c)] = im.data[n + im.w*(m + im.h*c)];\n                im.data[n + im.w*(m + im.h*c)] = swap;\n            }\n        }\n    }\n}\n\nvoid rotate_image_cw(image im, int times)\n{\n    assert(im.w == 
im.h);\n    times = (times + 400) % 4;\n    int i, x, y, c;\n    int n = im.w;\n    for(i = 0; i < times; ++i){\n        for(c = 0; c < im.c; ++c){\n            for(x = 0; x < n/2; ++x){\n                for(y = 0; y < (n-1)/2 + 1; ++y){\n                    float temp = im.data[y + im.w*(x + im.h*c)];\n                    im.data[y + im.w*(x + im.h*c)] = im.data[n-1-x + im.w*(y + im.h*c)];\n                    im.data[n-1-x + im.w*(y + im.h*c)] = im.data[n-1-y + im.w*(n-1-x + im.h*c)];\n                    im.data[n-1-y + im.w*(n-1-x + im.h*c)] = im.data[x + im.w*(n-1-y + im.h*c)];\n                    im.data[x + im.w*(n-1-y + im.h*c)] = temp;\n                }\n            }\n        }\n    }\n}\n\nvoid flip_image(image a)\n{\n    int i,j,k;\n    for(k = 0; k < a.c; ++k){\n        for(i = 0; i < a.h; ++i){\n            for(j = 0; j < a.w/2; ++j){\n                int index = j + a.w*(i + a.h*(k));\n                int flip = (a.w - j - 1) + a.w*(i + a.h*(k));\n                float swap = a.data[flip];\n                a.data[flip] = a.data[index];\n                a.data[index] = swap;\n            }\n        }\n    }\n}\n\nimage image_distance(image a, image b)\n{\n    int i,j;\n    image dist = make_image(a.w, a.h, 1);\n    for(i = 0; i < a.c; ++i){\n        for(j = 0; j < a.h*a.w; ++j){\n            dist.data[j] += pow(a.data[i*a.h*a.w+j]-b.data[i*a.h*a.w+j],2);\n        }\n    }\n    for(j = 0; j < a.h*a.w; ++j){\n        dist.data[j] = sqrt(dist.data[j]);\n    }\n    return dist;\n}\n\nvoid ghost_image(image source, image dest, int dx, int dy)\n{\n    int x,y,k;\n    float max_dist = sqrt((-source.w/2. + .5)*(-source.w/2. + .5));\n    for(k = 0; k < source.c; ++k){\n        for(y = 0; y < source.h; ++y){\n            for(x = 0; x < source.w; ++x){\n                float dist = sqrt((x - source.w/2. + .5)*(x - source.w/2. + .5) + (y - source.h/2. + .5)*(y - source.h/2. 
+ .5));\n                float alpha = (1 - dist/max_dist);\n                if(alpha < 0) alpha = 0;\n                float v1 = get_pixel(source, x,y,k);\n                float v2 = get_pixel(dest, dx+x,dy+y,k);\n                float val = alpha*v1 + (1-alpha)*v2;\n                set_pixel(dest, dx+x, dy+y, k, val);\n            }\n        }\n    }\n}\n\nvoid embed_image(image source, image dest, int dx, int dy)\n{\n    int x,y,k;\n    for(k = 0; k < source.c; ++k){\n        for(y = 0; y < source.h; ++y){\n            for(x = 0; x < source.w; ++x){\n                float val = get_pixel(source, x,y,k);\n                set_pixel(dest, dx+x, dy+y, k, val);\n            }\n        }\n    }\n}\n\nimage collapse_image_layers(image source, int border)\n{\n    int h = source.h;\n    h = (h+border)*source.c - border;\n    image dest = make_image(source.w, h, 1);\n    int i;\n    for(i = 0; i < source.c; ++i){\n        image layer = get_image_layer(source, i);\n        int h_offset = i*(source.h+border);\n        embed_image(layer, dest, 0, h_offset);\n        free_image(layer);\n    }\n    return dest;\n}\n\nvoid constrain_image(image im)\n{\n    int i;\n    for(i = 0; i < im.w*im.h*im.c; ++i){\n        if(im.data[i] < 0) im.data[i] = 0;\n        if(im.data[i] > 1) im.data[i] = 1;\n    }\n}\n\nvoid normalize_image(image p)\n{\n    int i;\n    float min = 9999999;\n    float max = -999999;\n\n    for(i = 0; i < p.h*p.w*p.c; ++i){\n        float v = p.data[i];\n        if(v < min) min = v;\n        if(v > max) max = v;\n    }\n    if(max - min < .000000001){\n        min = 0;\n        max = 1;\n    }\n    for(i = 0; i < p.c*p.w*p.h; ++i){\n        p.data[i] = (p.data[i] - min)/(max-min);\n    }\n}\n\nvoid normalize_image2(image p)\n{\n    float *min = calloc(p.c, sizeof(float));\n    float *max = calloc(p.c, sizeof(float));\n    int i,j;\n    for(i = 0; i < p.c; ++i) min[i] = max[i] = p.data[i*p.h*p.w];\n\n    for(j = 0; j < p.c; ++j){\n        for(i = 0; i < p.h*p.w; 
++i){\n            float v = p.data[i+j*p.h*p.w];\n            if(v < min[j]) min[j] = v;\n            if(v > max[j]) max[j] = v;\n        }\n    }\n    for(i = 0; i < p.c; ++i){\n        if(max[i] - min[i] < .000000001){\n            min[i] = 0;\n            max[i] = 1;\n        }\n    }\n    for(j = 0; j < p.c; ++j){\n        for(i = 0; i < p.w*p.h; ++i){\n            p.data[i+j*p.h*p.w] = (p.data[i+j*p.h*p.w] - min[j])/(max[j]-min[j]);\n        }\n    }\n    free(min);\n    free(max);\n}\n\nvoid copy_image_into(image src, image dest)\n{\n    memcpy(dest.data, src.data, src.h*src.w*src.c*sizeof(float));\n}\n\nimage copy_image(image p)\n{\n    image copy = p;\n    copy.data = calloc(p.h*p.w*p.c, sizeof(float));\n    memcpy(copy.data, p.data, p.h*p.w*p.c*sizeof(float));\n    return copy;\n}\n\nvoid rgbgr_image(image im)\n{\n    int i;\n    for(i = 0; i < im.w*im.h; ++i){\n        float swap = im.data[i];\n        im.data[i] = im.data[i+im.w*im.h*2];\n        im.data[i+im.w*im.h*2] = swap;\n    }\n}\n\n#ifdef OPENCV\nvoid show_image_cv(image p, const char *name, IplImage *disp)\n{\n    int x,y,k;\n    if(p.c == 3) rgbgr_image(p);\n    //normalize_image(copy);\n\n    char buff[256];\n    //sprintf(buff, \"%s (%d)\", name, windows);\n    sprintf(buff, \"%s\", name);\n\n    int step = disp->widthStep;\n    cvNamedWindow(buff, CV_WINDOW_NORMAL); \n    //cvMoveWindow(buff, 100*(windows%10) + 200*(windows/10), 100*(windows%10));\n    ++windows;\n    for(y = 0; y < p.h; ++y){\n        for(x = 0; x < p.w; ++x){\n            for(k= 0; k < p.c; ++k){\n                disp->imageData[y*step + x*p.c + k] = (unsigned char)(get_pixel(p,x,y,k)*255);\n            }\n        }\n    }\n    if(0){\n        int w = 448;\n        int h = w*p.h/p.w;\n        if(h > 1000){\n            h = 1000;\n            w = h*p.w/p.h;\n        }\n        IplImage *buffer = disp;\n        disp = cvCreateImage(cvSize(w, h), buffer->depth, buffer->nChannels);\n        cvResize(buffer, disp, 
CV_INTER_LINEAR);\n        cvReleaseImage(&buffer);\n    }\n    cvShowImage(buff, disp);\n}\n#endif\n\nvoid show_image(image p, const char *name)\n{\n#ifdef OPENCV\n    IplImage *disp = cvCreateImage(cvSize(p.w,p.h), IPL_DEPTH_8U, p.c);\n    image copy = copy_image(p);\n    constrain_image(copy);\n    show_image_cv(copy, name, disp);\n    free_image(copy);\n    cvReleaseImage(&disp);\n#else\n    fprintf(stderr, \"Not compiled with OpenCV, saving to %s.png instead\\n\", name);\n    save_image(p, name);\n#endif\n}\n\n#ifdef OPENCV\n\nvoid ipl_into_image(IplImage* src, image im)\n{\n    unsigned char *data = (unsigned char *)src->imageData;\n    int h = src->height;\n    int w = src->width;\n    int c = src->nChannels;\n    int step = src->widthStep;\n    int i, j, k;\n\n    for(i = 0; i < h; ++i){\n        for(k= 0; k < c; ++k){\n            for(j = 0; j < w; ++j){\n                im.data[k*w*h + i*w + j] = data[i*step + j*c + k]/255.;\n            }\n        }\n    }\n}\n\nimage ipl_to_image(IplImage* src)\n{\n    int h = src->height;\n    int w = src->width;\n    int c = src->nChannels;\n    image out = make_image(w, h, c);\n    ipl_into_image(src, out);\n    return out;\n}\n\nimage load_image_cv(char *filename, int channels)\n{\n    IplImage* src = 0;\n    int flag = -1;\n    if (channels == 0) flag = -1;\n    else if (channels == 1) flag = 0;\n    else if (channels == 3) flag = 1;\n    else {\n        fprintf(stderr, \"OpenCV can't force load with %d channels\\n\", channels);\n    }\n\n    if( (src = cvLoadImage(filename, flag)) == 0 )\n    {\n        fprintf(stderr, \"Cannot load image \\\"%s\\\"\\n\", filename);\n        char buff[256];\n        sprintf(buff, \"echo %s >> bad.list\", filename);\n        system(buff);\n        return make_image(10,10,3);\n        //exit(0);\n    }\n    image out = ipl_to_image(src);\n    cvReleaseImage(&src);\n    rgbgr_image(out);\n    return out;\n}\n\nvoid flush_stream_buffer(CvCapture *cap, int n)\n{\n    int i;\n    for(i 
= 0; i < n; ++i) {\n        cvQueryFrame(cap);\n    }\n}\n\nimage get_image_from_stream(CvCapture *cap)\n{\n    IplImage* src = cvQueryFrame(cap);\n    if (!src) return make_empty_image(0,0,0);\n    image im = ipl_to_image(src);\n    rgbgr_image(im);\n    return im;\n}\n\nint fill_image_from_stream(CvCapture *cap, image im)\n{\n    IplImage* src = cvQueryFrame(cap);\n    if (!src) return 0;\n    ipl_into_image(src, im);\n    rgbgr_image(im);\n    return 1;\n}\n\nvoid save_image_jpg(image p, const char *name)\n{\n    image copy = copy_image(p);\n    if(p.c == 3) rgbgr_image(copy);\n    int x,y,k;\n\n    char buff[256];\n    sprintf(buff, \"%s.jpg\", name);\n\n    IplImage *disp = cvCreateImage(cvSize(p.w,p.h), IPL_DEPTH_8U, p.c);\n    int step = disp->widthStep;\n    for(y = 0; y < p.h; ++y){\n        for(x = 0; x < p.w; ++x){\n            for(k= 0; k < p.c; ++k){\n                disp->imageData[y*step + x*p.c + k] = (unsigned char)(get_pixel(copy,x,y,k)*255);\n            }\n        }\n    }\n    cvSaveImage(buff, disp,0);\n    cvReleaseImage(&disp);\n    free_image(copy);\n}\n#endif\n\nvoid save_image_png(image im, const char *name)\n{\n    char buff[256];\n    //sprintf(buff, \"%s (%d)\", name, windows);\n    sprintf(buff, \"%s.png\", name);\n    unsigned char *data = calloc(im.w*im.h*im.c, sizeof(char));\n    int i,k;\n    for(k = 0; k < im.c; ++k){\n        for(i = 0; i < im.w*im.h; ++i){\n            data[i*im.c+k] = (unsigned char) (255*im.data[i + k*im.w*im.h]);\n        }\n    }\n    int success = stbi_write_png(buff, im.w, im.h, im.c, data, im.w*im.c);\n    free(data);\n    if(!success) fprintf(stderr, \"Failed to write image %s\\n\", buff);\n}\n\nvoid save_image(image im, const char *name)\n{\n#ifdef OPENCV\n    save_image_jpg(im, name);\n#else\n    save_image_png(im, name);\n#endif\n}\n\n\nvoid show_image_layers(image p, char *name)\n{\n    int i;\n    char buff[256];\n    for(i = 0; i < p.c; ++i){\n        sprintf(buff, \"%s - Layer %d\", name, i);\n   
     image layer = get_image_layer(p, i);\n        show_image(layer, buff);\n        free_image(layer);\n    }\n}\n\nvoid show_image_collapsed(image p, char *name)\n{\n    image c = collapse_image_layers(p, 1);\n    show_image(c, name);\n    free_image(c);\n}\n\nimage make_empty_image(int w, int h, int c)\n{\n    image out;\n    out.data = 0;\n    out.h = h;\n    out.w = w;\n    out.c = c;\n    return out;\n}\n\nimage make_image(int w, int h, int c)\n{\n    image out = make_empty_image(w,h,c);\n    out.data = calloc(h*w*c, sizeof(float));\n    return out;\n}\n\nimage make_random_image(int w, int h, int c)\n{\n    image out = make_empty_image(w,h,c);\n    out.data = calloc(h*w*c, sizeof(float));\n    int i;\n    for(i = 0; i < w*h*c; ++i){\n        out.data[i] = (rand_normal() * .25) + .5;\n    }\n    return out;\n}\n\nimage float_to_image(int w, int h, int c, float *data)\n{\n    image out = make_empty_image(w,h,c);\n    out.data = data;\n    return out;\n}\n\nvoid place_image(image im, int w, int h, int dx, int dy, image canvas)\n{\n    int x, y, c;\n    for(c = 0; c < im.c; ++c){\n        for(y = 0; y < h; ++y){\n            for(x = 0; x < w; ++x){\n                int rx = ((float)x / w) * im.w;\n                int ry = ((float)y / h) * im.h;\n                float val = bilinear_interpolate(im, rx, ry, c);\n                set_pixel(canvas, x + dx, y + dy, c, val);\n            }\n        }\n    }\n}\n\nimage center_crop_image(image im, int w, int h)\n{\n    int m = (im.w < im.h) ? 
im.w : im.h;   \n    image c = crop_image(im, (im.w - m) / 2, (im.h - m)/2, m, m);\n    image r = resize_image(c, w, h);\n    free_image(c);\n    return r;\n}\n\nimage rotate_crop_image(image im, float rad, float s, int w, int h, float dx, float dy, float aspect)\n{\n    int x, y, c;\n    float cx = im.w/2.;\n    float cy = im.h/2.;\n    image rot = make_image(w, h, im.c);\n    for(c = 0; c < im.c; ++c){\n        for(y = 0; y < h; ++y){\n            for(x = 0; x < w; ++x){\n                float rx = cos(rad)*((x - w/2.)/s*aspect + dx/s*aspect) - sin(rad)*((y - h/2.)/s + dy/s) + cx;\n                float ry = sin(rad)*((x - w/2.)/s*aspect + dx/s*aspect) + cos(rad)*((y - h/2.)/s + dy/s) + cy;\n                float val = bilinear_interpolate(im, rx, ry, c);\n                set_pixel(rot, x, y, c, val);\n            }\n        }\n    }\n    return rot;\n}\n\nimage rotate_image(image im, float rad)\n{\n    int x, y, c;\n    float cx = im.w/2.;\n    float cy = im.h/2.;\n    image rot = make_image(im.w, im.h, im.c);\n    for(c = 0; c < im.c; ++c){\n        for(y = 0; y < im.h; ++y){\n            for(x = 0; x < im.w; ++x){\n                float rx = cos(rad)*(x-cx) - sin(rad)*(y-cy) + cx;\n                float ry = sin(rad)*(x-cx) + cos(rad)*(y-cy) + cy;\n                float val = bilinear_interpolate(im, rx, ry, c);\n                set_pixel(rot, x, y, c, val);\n            }\n        }\n    }\n    return rot;\n}\n\nvoid fill_image(image m, float s)\n{\n    int i;\n    for(i = 0; i < m.h*m.w*m.c; ++i) m.data[i] = s;\n}\n\nvoid translate_image(image m, float s)\n{\n    int i;\n    for(i = 0; i < m.h*m.w*m.c; ++i) m.data[i] += s;\n}\n\nvoid scale_image(image m, float s)\n{\n    int i;\n    for(i = 0; i < m.h*m.w*m.c; ++i) m.data[i] *= s;\n}\n\nimage crop_image(image im, int dx, int dy, int w, int h)\n{\n    image cropped = make_image(w, h, im.c);\n    int i, j, k;\n    for(k = 0; k < im.c; ++k){\n        for(j = 0; j < h; ++j){\n            for(i = 0; i < w; 
++i){\n                int r = j + dy;\n                int c = i + dx;\n                float val = 0;\n                r = constrain_int(r, 0, im.h-1);\n                c = constrain_int(c, 0, im.w-1);\n                val = get_pixel(im, c, r, k);\n                set_pixel(cropped, i, j, k, val);\n            }\n        }\n    }\n    return cropped;\n}\n\nint best_3d_shift_r(image a, image b, int min, int max)\n{\n    if(min == max) return min;\n    int mid = floor((min + max) / 2.);\n    image c1 = crop_image(b, 0, mid, b.w, b.h);\n    image c2 = crop_image(b, 0, mid+1, b.w, b.h);\n    float d1 = dist_array(c1.data, a.data, a.w*a.h*a.c, 10);\n    float d2 = dist_array(c2.data, a.data, a.w*a.h*a.c, 10);\n    free_image(c1);\n    free_image(c2);\n    if(d1 < d2) return best_3d_shift_r(a, b, min, mid);\n    else return best_3d_shift_r(a, b, mid+1, max);\n}\n\nint best_3d_shift(image a, image b, int min, int max)\n{\n    int i;\n    int best = 0;\n    float best_distance = FLT_MAX;\n    for(i = min; i <= max; i += 2){\n        image c = crop_image(b, 0, i, b.w, b.h);\n        float d = dist_array(c.data, a.data, a.w*a.h*a.c, 100);\n        if(d < best_distance){\n            best_distance = d;\n            best = i;\n        }\n        printf(\"%d %f\\n\", i, d);\n        free_image(c);\n    }\n    return best;\n}\n\nvoid composite_3d(char *f1, char *f2, char *out, int delta)\n{\n    if(!out) out = \"out\";\n    image a = load_image(f1, 0,0,0);\n    image b = load_image(f2, 0,0,0);\n    int shift = best_3d_shift_r(a, b, -a.h/100, a.h/100);\n\n    image c1 = crop_image(b, 10, shift, b.w, b.h);\n    float d1 = dist_array(c1.data, a.data, a.w*a.h*a.c, 100);\n    image c2 = crop_image(b, -10, shift, b.w, b.h);\n    float d2 = dist_array(c2.data, a.data, a.w*a.h*a.c, 100);\n\n    if(d2 < d1 && 0){\n        image swap = a;\n        a = b;\n        b = swap;\n        shift = -shift;\n        printf(\"swapped, %d\\n\", shift);\n    }\n    else{\n        printf(\"%d\\n\", 
shift);\n    }\n\n    image c = crop_image(b, delta, shift, a.w, a.h);\n    int i;\n    for(i = 0; i < c.w*c.h; ++i){\n        c.data[i] = a.data[i];\n    }\n#ifdef OPENCV\n    save_image_jpg(c, out);\n#else\n    save_image(c, out);\n#endif\n}\n\nvoid letterbox_image_into(image im, int w, int h, image boxed)\n{\n    int new_w = im.w;\n    int new_h = im.h;\n    if (((float)w/im.w) < ((float)h/im.h)) {\n        new_w = w;\n        new_h = (im.h * w)/im.w;\n    } else {\n        new_h = h;\n        new_w = (im.w * h)/im.h;\n    }\n    image resized = resize_image(im, new_w, new_h);\n    embed_image(resized, boxed, (w-new_w)/2, (h-new_h)/2); \n    free_image(resized);\n}\n\nimage letterbox_image(image im, int w, int h)\n{\n    int new_w = im.w;\n    int new_h = im.h;\n    if (((float)w/im.w) < ((float)h/im.h)) {\n        new_w = w;\n        new_h = (im.h * w)/im.w;\n    } else {\n        new_h = h;\n        new_w = (im.w * h)/im.h;\n    }\n    image resized = resize_image(im, new_w, new_h);\n    image boxed = make_image(w, h, im.c);\n    fill_image(boxed, .5);\n    //int i;\n    //for(i = 0; i < boxed.w*boxed.h*boxed.c; ++i) boxed.data[i] = 0;\n    embed_image(resized, boxed, (w-new_w)/2, (h-new_h)/2); \n    free_image(resized);\n    return boxed;\n}\n\nimage resize_max(image im, int max)\n{\n    int w = im.w;\n    int h = im.h;\n    if(w > h){\n        h = (h * max) / w;\n        w = max;\n    } else {\n        w = (w * max) / h;\n        h = max;\n    }\n    if(w == im.w && h == im.h) return im;\n    image resized = resize_image(im, w, h);\n    return resized;\n}\n\nimage resize_min(image im, int min)\n{\n    int w = im.w;\n    int h = im.h;\n    if(w < h){\n        h = (h * min) / w;\n        w = min;\n    } else {\n        w = (w * min) / h;\n        h = min;\n    }\n    if(w == im.w && h == im.h) return im;\n    image resized = resize_image(im, w, h);\n    return resized;\n}\n\nimage random_crop_image(image im, int w, int h)\n{\n    int dx = rand_int(0, im.w - 
w);\n    int dy = rand_int(0, im.h - h);\n    image crop = crop_image(im, dx, dy, w, h);\n    return crop;\n}\n\naugment_args random_augment_args(image im, float angle, float aspect, int low, int high, int w, int h)\n{\n    augment_args a = {0};\n    aspect = rand_scale(aspect);\n    int r = rand_int(low, high);\n    int min = (im.h < im.w*aspect) ? im.h : im.w*aspect;\n    float scale = (float)r / min;\n\n    float rad = rand_uniform(-angle, angle) * TWO_PI / 360.;\n\n    float dx = (im.w*scale/aspect - w) / 2.;\n    float dy = (im.h*scale - h) / 2.;\n    //if(dx < 0) dx = 0;\n    //if(dy < 0) dy = 0;\n    dx = rand_uniform(-dx, dx);\n    dy = rand_uniform(-dy, dy);\n\n    a.rad = rad;\n    a.scale = scale;\n    a.w = w;\n    a.h = h;\n    a.dx = dx;\n    a.dy = dy;\n    a.aspect = aspect;\n    return a;\n}\n\nimage random_augment_image(image im, float angle, float aspect, int low, int high, int w, int h)\n{\n    augment_args a = random_augment_args(im, angle, aspect, low, high, w, h);\n    image crop = rotate_crop_image(im, a.rad, a.scale, a.w, a.h, a.dx, a.dy, a.aspect);\n    return crop;\n}\n\nfloat three_way_max(float a, float b, float c)\n{\n    return (a > b) ? ( (a > c) ? a : c) : ( (b > c) ? b : c) ;\n}\n\nfloat three_way_min(float a, float b, float c)\n{\n    return (a < b) ? ( (a < c) ? a : c) : ( (b < c) ? 
b : c) ;\n}\n\nvoid yuv_to_rgb(image im)\n{\n    assert(im.c == 3);\n    int i, j;\n    float r, g, b;\n    float y, u, v;\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            y = get_pixel(im, i , j, 0);\n            u = get_pixel(im, i , j, 1);\n            v = get_pixel(im, i , j, 2);\n\n            r = y + 1.13983*v;\n            g = y + -.39465*u + -.58060*v;\n            b = y + 2.03211*u;\n\n            set_pixel(im, i, j, 0, r);\n            set_pixel(im, i, j, 1, g);\n            set_pixel(im, i, j, 2, b);\n        }\n    }\n}\n\nvoid rgb_to_yuv(image im)\n{\n    assert(im.c == 3);\n    int i, j;\n    float r, g, b;\n    float y, u, v;\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            r = get_pixel(im, i , j, 0);\n            g = get_pixel(im, i , j, 1);\n            b = get_pixel(im, i , j, 2);\n\n            y = .299*r + .587*g + .114*b;\n            u = -.14713*r + -.28886*g + .436*b;\n            v = .615*r + -.51499*g + -.10001*b;\n\n            set_pixel(im, i, j, 0, y);\n            set_pixel(im, i, j, 1, u);\n            set_pixel(im, i, j, 2, v);\n        }\n    }\n}\n\n// http://www.cs.rit.edu/~ncs/color/t_convert.html\nvoid rgb_to_hsv(image im)\n{\n    assert(im.c == 3);\n    int i, j;\n    float r, g, b;\n    float h, s, v;\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            r = get_pixel(im, i , j, 0);\n            g = get_pixel(im, i , j, 1);\n            b = get_pixel(im, i , j, 2);\n            float max = three_way_max(r,g,b);\n            float min = three_way_min(r,g,b);\n            float delta = max - min;\n            v = max;\n            if(max == 0){\n                s = 0;\n                h = 0;\n            }else{\n                s = delta/max;\n                if(r == max){\n                    h = (g - b) / delta;\n                } else if (g == max) {\n                    h = 2 + (b - r) / delta;\n                } else {\n               
     h = 4 + (r - g) / delta;\n                }\n                if (h < 0) h += 6;\n                h = h/6.;\n            }\n            set_pixel(im, i, j, 0, h);\n            set_pixel(im, i, j, 1, s);\n            set_pixel(im, i, j, 2, v);\n        }\n    }\n}\n\nvoid hsv_to_rgb(image im)\n{\n    assert(im.c == 3);\n    int i, j;\n    float r, g, b;\n    float h, s, v;\n    float f, p, q, t;\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            h = 6 * get_pixel(im, i , j, 0);\n            s = get_pixel(im, i , j, 1);\n            v = get_pixel(im, i , j, 2);\n            if (s == 0) {\n                r = g = b = v;\n            } else {\n                int index = floor(h);\n                f = h - index;\n                p = v*(1-s);\n                q = v*(1-s*f);\n                t = v*(1-s*(1-f));\n                if(index == 0){\n                    r = v; g = t; b = p;\n                } else if(index == 1){\n                    r = q; g = v; b = p;\n                } else if(index == 2){\n                    r = p; g = v; b = t;\n                } else if(index == 3){\n                    r = p; g = q; b = v;\n                } else if(index == 4){\n                    r = t; g = p; b = v;\n                } else {\n                    r = v; g = p; b = q;\n                }\n            }\n            set_pixel(im, i, j, 0, r);\n            set_pixel(im, i, j, 1, g);\n            set_pixel(im, i, j, 2, b);\n        }\n    }\n}\n\nvoid grayscale_image_3c(image im)\n{\n    assert(im.c == 3);\n    int i, j, k;\n    float scale[] = {0.299, 0.587, 0.114};\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            float val = 0;\n            for(k = 0; k < 3; ++k){\n                val += scale[k]*get_pixel(im, i, j, k);\n            }\n            im.data[0*im.h*im.w + im.w*j + i] = val;\n            im.data[1*im.h*im.w + im.w*j + i] = val;\n            im.data[2*im.h*im.w + im.w*j + i] = val;\n        
}\n    }\n}\n\nimage grayscale_image(image im)\n{\n    assert(im.c == 3);\n    int i, j, k;\n    image gray = make_image(im.w, im.h, 1);\n    float scale[] = {0.299, 0.587, 0.114};\n    for(k = 0; k < im.c; ++k){\n        for(j = 0; j < im.h; ++j){\n            for(i = 0; i < im.w; ++i){\n                gray.data[i+im.w*j] += scale[k]*get_pixel(im, i, j, k);\n            }\n        }\n    }\n    return gray;\n}\n\nimage threshold_image(image im, float thresh)\n{\n    int i;\n    image t = make_image(im.w, im.h, im.c);\n    for(i = 0; i < im.w*im.h*im.c; ++i){\n        t.data[i] = im.data[i]>thresh ? 1 : 0;\n    }\n    return t;\n}\n\nimage blend_image(image fore, image back, float alpha)\n{\n    assert(fore.w == back.w && fore.h == back.h && fore.c == back.c);\n    image blend = make_image(fore.w, fore.h, fore.c);\n    int i, j, k;\n    for(k = 0; k < fore.c; ++k){\n        for(j = 0; j < fore.h; ++j){\n            for(i = 0; i < fore.w; ++i){\n                float val = alpha * get_pixel(fore, i, j, k) + \n                    (1 - alpha)* get_pixel(back, i, j, k);\n                set_pixel(blend, i, j, k, val);\n            }\n        }\n    }\n    return blend;\n}\n\nvoid scale_image_channel(image im, int c, float v)\n{\n    int i, j;\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            float pix = get_pixel(im, i, j, c);\n            pix = pix*v;\n            set_pixel(im, i, j, c, pix);\n        }\n    }\n}\n\nvoid translate_image_channel(image im, int c, float v)\n{\n    int i, j;\n    for(j = 0; j < im.h; ++j){\n        for(i = 0; i < im.w; ++i){\n            float pix = get_pixel(im, i, j, c);\n            pix = pix+v;\n            set_pixel(im, i, j, c, pix);\n        }\n    }\n}\n\nimage binarize_image(image im)\n{\n    image c = copy_image(im);\n    int i;\n    for(i = 0; i < im.w * im.h * im.c; ++i){\n        if(c.data[i] > .5) c.data[i] = 1;\n        else c.data[i] = 0;\n    }\n    return c;\n}\n\nvoid saturate_image(image 
im, float sat)\n{\n    rgb_to_hsv(im);\n    scale_image_channel(im, 1, sat);\n    hsv_to_rgb(im);\n    constrain_image(im);\n}\n\nvoid hue_image(image im, float hue)\n{\n    rgb_to_hsv(im);\n    int i;\n    for(i = 0; i < im.w*im.h; ++i){\n        im.data[i] = im.data[i] + hue;\n        if (im.data[i] > 1) im.data[i] -= 1;\n        if (im.data[i] < 0) im.data[i] += 1;\n    }\n    hsv_to_rgb(im);\n    constrain_image(im);\n}\n\nvoid exposure_image(image im, float sat)\n{\n    rgb_to_hsv(im);\n    scale_image_channel(im, 2, sat);\n    hsv_to_rgb(im);\n    constrain_image(im);\n}\n\nvoid distort_image(image im, float hue, float sat, float val)\n{\n    rgb_to_hsv(im);\n    scale_image_channel(im, 1, sat);\n    scale_image_channel(im, 2, val);\n    int i;\n    for(i = 0; i < im.w*im.h; ++i){\n        im.data[i] = im.data[i] + hue;\n        if (im.data[i] > 1) im.data[i] -= 1;\n        if (im.data[i] < 0) im.data[i] += 1;\n    }\n    hsv_to_rgb(im);\n    constrain_image(im);\n}\n\nvoid random_distort_image(image im, float hue, float saturation, float exposure)\n{\n    float dhue = rand_uniform(-hue, hue);\n    float dsat = rand_scale(saturation);\n    float dexp = rand_scale(exposure);\n    distort_image(im, dhue, dsat, dexp);\n}\n\nvoid saturate_exposure_image(image im, float sat, float exposure)\n{\n    rgb_to_hsv(im);\n    scale_image_channel(im, 1, sat);\n    scale_image_channel(im, 2, exposure);\n    hsv_to_rgb(im);\n    constrain_image(im);\n}\n\nimage resize_image(image im, int w, int h)\n{\n    image resized = make_image(w, h, im.c);   \n    image part = make_image(w, im.h, im.c);\n    int r, c, k;\n    float w_scale = (float)(im.w - 1) / (w - 1);\n    float h_scale = (float)(im.h - 1) / (h - 1);\n    for(k = 0; k < im.c; ++k){\n        for(r = 0; r < im.h; ++r){\n            for(c = 0; c < w; ++c){\n                float val = 0;\n                if(c == w-1 || im.w == 1){\n                    val = get_pixel(im, im.w-1, r, k);\n                } else {\n        
            float sx = c*w_scale;\n                    int ix = (int) sx;\n                    float dx = sx - ix;\n                    val = (1 - dx) * get_pixel(im, ix, r, k) + dx * get_pixel(im, ix+1, r, k);\n                }\n                set_pixel(part, c, r, k, val);\n            }\n        }\n    }\n    for(k = 0; k < im.c; ++k){\n        for(r = 0; r < h; ++r){\n            float sy = r*h_scale;\n            int iy = (int) sy;\n            float dy = sy - iy;\n            for(c = 0; c < w; ++c){\n                float val = (1-dy) * get_pixel(part, c, iy, k);\n                set_pixel(resized, c, r, k, val);\n            }\n            if(r == h-1 || im.h == 1) continue;\n            for(c = 0; c < w; ++c){\n                float val = dy * get_pixel(part, c, iy+1, k);\n                add_pixel(resized, c, r, k, val);\n            }\n        }\n    }\n\n    free_image(part);\n    return resized;\n}\n\n\nvoid test_resize(char *filename)\n{\n    image im = load_image(filename, 0,0, 3);\n    float mag = mag_array(im.data, im.w*im.h*im.c);\n    printf(\"L2 Norm: %f\\n\", mag);\n    image gray = grayscale_image(im);\n\n    image c1 = copy_image(im);\n    image c2 = copy_image(im);\n    image c3 = copy_image(im);\n    image c4 = copy_image(im);\n    distort_image(c1, .1, 1.5, 1.5);\n    distort_image(c2, -.1, .66666, .66666);\n    distort_image(c3, .1, 1.5, .66666);\n    distort_image(c4, .1, .66666, 1.5);\n\n\n    show_image(im,   \"Original\");\n    show_image(gray, \"Gray\");\n    show_image(c1, \"C1\");\n    show_image(c2, \"C2\");\n    show_image(c3, \"C3\");\n    show_image(c4, \"C4\");\n#ifdef OPENCV\n    while(1){\n        image aug = random_augment_image(im, 0, .75, 320, 448, 320, 320);\n        show_image(aug, \"aug\");\n        free_image(aug);\n\n\n        float exposure = 1.15;\n        float saturation = 1.15;\n        float hue = .05;\n\n        image c = copy_image(im);\n\n        float dexp = rand_scale(exposure);\n        float dsat = 
rand_scale(saturation);\n        float dhue = rand_uniform(-hue, hue);\n\n        distort_image(c, dhue, dsat, dexp);\n        show_image(c, \"rand\");\n        printf(\"%f %f %f\\n\", dhue, dsat, dexp);\n        free_image(c);\n        cvWaitKey(0);\n    }\n#endif\n}\n\n\nimage load_image_stb(char *filename, int channels)\n{\n    int w, h, c;\n    unsigned char *data = stbi_load(filename, &w, &h, &c, channels);\n    if (!data) {\n        fprintf(stderr, \"Cannot load image \\\"%s\\\"\\nSTB Reason: %s\\n\", filename, stbi_failure_reason());\n        exit(0);\n    }\n    if(channels) c = channels;\n    int i,j,k;\n    image im = make_image(w, h, c);\n    for(k = 0; k < c; ++k){\n        for(j = 0; j < h; ++j){\n            for(i = 0; i < w; ++i){\n                int dst_index = i + w*j + w*h*k;\n                int src_index = k + c*i + c*w*j;\n                im.data[dst_index] = (float)data[src_index]/255.;\n            }\n        }\n    }\n    free(data);\n    return im;\n}\n\nimage load_image(char *filename, int w, int h, int c)\n{\n#ifdef OPENCV\n    image out = load_image_cv(filename, c);\n#else\n    image out = load_image_stb(filename, c);\n#endif\n\n    if((h && w) && (h != out.h || w != out.w)){\n        image resized = resize_image(out, w, h);\n        free_image(out);\n        out = resized;\n    }\n    return out;\n}\n\nimage load_image_color(char *filename, int w, int h)\n{\n    return load_image(filename, w, h, 3);\n}\n\nimage get_image_layer(image m, int l)\n{\n    image out = make_image(m.w, m.h, 1);\n    int i;\n    for(i = 0; i < m.h*m.w; ++i){\n        out.data[i] = m.data[i+l*m.h*m.w];\n    }\n    return out;\n}\nvoid print_image(image m)\n{\n    int i, j, k;\n    for(i =0 ; i < m.c; ++i){\n        for(j =0 ; j < m.h; ++j){\n            for(k = 0; k < m.w; ++k){\n                printf(\"%.2lf, \", m.data[i*m.h*m.w + j*m.w + k]);\n                if(k > 30) break;\n            }\n            printf(\"\\n\");\n            if(j > 30) break;\n      
  }\n        printf(\"\\n\");\n    }\n    printf(\"\\n\");\n}\n\nimage collapse_images_vert(image *ims, int n)\n{\n    int color = 1;\n    int border = 1;\n    int h,w,c;\n    w = ims[0].w;\n    h = (ims[0].h + border) * n - border;\n    c = ims[0].c;\n    if(c != 3 || !color){\n        w = (w+border)*c - border;\n        c = 1;\n    }\n\n    image filters = make_image(w, h, c);\n    int i,j;\n    for(i = 0; i < n; ++i){\n        int h_offset = i*(ims[0].h+border);\n        image copy = copy_image(ims[i]);\n        //normalize_image(copy);\n        if(c == 3 && color){\n            embed_image(copy, filters, 0, h_offset);\n        }\n        else{\n            for(j = 0; j < copy.c; ++j){\n                int w_offset = j*(ims[0].w+border);\n                image layer = get_image_layer(copy, j);\n                embed_image(layer, filters, w_offset, h_offset);\n                free_image(layer);\n            }\n        }\n        free_image(copy);\n    }\n    return filters;\n} \n\nimage collapse_images_horz(image *ims, int n)\n{\n    int color = 1;\n    int border = 1;\n    int h,w,c;\n    int size = ims[0].h;\n    h = size;\n    w = (ims[0].w + border) * n - border;\n    c = ims[0].c;\n    if(c != 3 || !color){\n        h = (h+border)*c - border;\n        c = 1;\n    }\n\n    image filters = make_image(w, h, c);\n    int i,j;\n    for(i = 0; i < n; ++i){\n        int w_offset = i*(size+border);\n        image copy = copy_image(ims[i]);\n        //normalize_image(copy);\n        if(c == 3 && color){\n            embed_image(copy, filters, w_offset, 0);\n        }\n        else{\n            for(j = 0; j < copy.c; ++j){\n                int h_offset = j*(size+border);\n                image layer = get_image_layer(copy, j);\n                embed_image(layer, filters, w_offset, h_offset);\n                free_image(layer);\n            }\n        }\n        free_image(copy);\n    }\n    return filters;\n} \n\nvoid show_image_normalized(image im, const char 
*name)\n{\n    image c = copy_image(im);\n    normalize_image(c);\n    show_image(c, name);\n    free_image(c);\n}\n\nvoid show_images(image *ims, int n, char *window)\n{\n    image m = collapse_images_vert(ims, n);\n    /*\n       int w = 448;\n       int h = ((float)m.h/m.w) * 448;\n       if(h > 896){\n       h = 896;\n       w = ((float)m.w/m.h) * 896;\n       }\n       image sized = resize_image(m, w, h);\n     */\n    normalize_image(m);\n    save_image(m, window);\n    show_image(m, window);\n    free_image(m);\n}\n\nvoid free_image(image m)\n{\n    if(m.data){\n        free(m.data);\n    }\n}\n"
  },
  {
    "path": "lightnet/_darknet/image.h",
    "content": "#ifndef IMAGE_H\n#define IMAGE_H\n\n#include <stdlib.h>\n#include <stdio.h>\n#include <float.h>\n#include <string.h>\n#include <math.h>\n#include \"box.h\"\n#include \"darknet.h\"\n\n#ifndef __cplusplus\n#ifdef OPENCV\nint fill_image_from_stream(CvCapture *cap, image im);\nimage ipl_to_image(IplImage* src);\nvoid ipl_into_image(IplImage* src, image im);\nvoid flush_stream_buffer(CvCapture *cap, int n);\nvoid show_image_cv(image p, const char *name, IplImage *disp);\n#endif\n#endif\n\nfloat get_color(int c, int x, int max);\nvoid draw_box(image a, int x1, int y1, int x2, int y2, float r, float g, float b);\nvoid draw_bbox(image a, box bbox, int w, float r, float g, float b);\nvoid draw_label(image a, int r, int c, image label, const float *rgb);\nvoid write_label(image a, int r, int c, image *characters, char *string, float *rgb);\nimage image_distance(image a, image b);\nvoid scale_image(image m, float s);\nimage rotate_crop_image(image im, float rad, float s, int w, int h, float dx, float dy, float aspect);\nimage center_crop_image(image im, int w, int h);\nimage random_crop_image(image im, int w, int h);\nimage random_augment_image(image im, float angle, float aspect, int low, int high, int w, int h);\naugment_args random_augment_args(image im, float angle, float aspect, int low, int high, int w, int h);\nvoid letterbox_image_into(image im, int w, int h, image boxed);\nimage resize_max(image im, int max);\nvoid translate_image(image m, float s);\nvoid embed_image(image source, image dest, int dx, int dy);\nvoid place_image(image im, int w, int h, int dx, int dy, image canvas);\nvoid saturate_image(image im, float sat);\nvoid exposure_image(image im, float sat);\nvoid distort_image(image im, float hue, float sat, float val);\nvoid saturate_exposure_image(image im, float sat, float exposure);\nvoid rgb_to_hsv(image im);\nvoid hsv_to_rgb(image im);\nvoid yuv_to_rgb(image im);\nvoid rgb_to_yuv(image im);\n\n\nimage collapse_image_layers(image source, 
int border);\nimage collapse_images_horz(image *ims, int n);\nimage collapse_images_vert(image *ims, int n);\n\nvoid show_image_normalized(image im, const char *name);\nvoid show_images(image *ims, int n, char *window);\nvoid show_image_layers(image p, char *name);\nvoid show_image_collapsed(image p, char *name);\n\nvoid print_image(image m);\n\nimage make_empty_image(int w, int h, int c);\nvoid copy_image_into(image src, image dest);\n\nimage get_image_layer(image m, int l);\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/layer.c",
    "content": "#include \"layer.h\"\n#include \"cuda.h\"\n\n#include <stdlib.h>\n\nvoid free_layer(layer l)\n{\n    if(l.type == DROPOUT){\n        if(l.rand)           free(l.rand);\n#ifdef GPU\n        if(l.rand_gpu)             cuda_free(l.rand_gpu);\n#endif\n        return;\n    }\n    if(l.cweights)           free(l.cweights);\n    if(l.indexes)            free(l.indexes);\n    if(l.input_layers)       free(l.input_layers);\n    if(l.input_sizes)        free(l.input_sizes);\n    if(l.map)                free(l.map);\n    if(l.rand)               free(l.rand);\n    if(l.cost)               free(l.cost);\n    if(l.state)              free(l.state);\n    if(l.prev_state)         free(l.prev_state);\n    if(l.forgot_state)       free(l.forgot_state);\n    if(l.forgot_delta)       free(l.forgot_delta);\n    if(l.state_delta)        free(l.state_delta);\n    if(l.concat)             free(l.concat);\n    if(l.concat_delta)       free(l.concat_delta);\n    if(l.binary_weights)     free(l.binary_weights);\n    if(l.biases)             free(l.biases);\n    if(l.bias_updates)       free(l.bias_updates);\n    if(l.scales)             free(l.scales);\n    if(l.scale_updates)      free(l.scale_updates);\n    if(l.weights)            free(l.weights);\n    if(l.weight_updates)     free(l.weight_updates);\n    if(l.delta)              free(l.delta);\n    if(l.output)             free(l.output);\n    if(l.squared)            free(l.squared);\n    if(l.norms)              free(l.norms);\n    if(l.spatial_mean)       free(l.spatial_mean);\n    if(l.mean)               free(l.mean);\n    if(l.variance)           free(l.variance);\n    if(l.mean_delta)         free(l.mean_delta);\n    if(l.variance_delta)     free(l.variance_delta);\n    if(l.rolling_mean)       free(l.rolling_mean);\n    if(l.rolling_variance)   free(l.rolling_variance);\n    if(l.x)                  free(l.x);\n    if(l.x_norm)             free(l.x_norm);\n    if(l.m)                  free(l.m);\n    if(l.v)     
             free(l.v);\n    if(l.z_cpu)              free(l.z_cpu);\n    if(l.r_cpu)              free(l.r_cpu);\n    if(l.h_cpu)              free(l.h_cpu);\n    if(l.binary_input)       free(l.binary_input);\n\n#ifdef GPU\n    if(l.indexes_gpu)           cuda_free((float *)l.indexes_gpu);\n\n    if(l.z_gpu)                   cuda_free(l.z_gpu);\n    if(l.r_gpu)                   cuda_free(l.r_gpu);\n    if(l.h_gpu)                   cuda_free(l.h_gpu);\n    if(l.m_gpu)                   cuda_free(l.m_gpu);\n    if(l.v_gpu)                   cuda_free(l.v_gpu);\n    if(l.prev_state_gpu)          cuda_free(l.prev_state_gpu);\n    if(l.forgot_state_gpu)        cuda_free(l.forgot_state_gpu);\n    if(l.forgot_delta_gpu)        cuda_free(l.forgot_delta_gpu);\n    if(l.state_gpu)               cuda_free(l.state_gpu);\n    if(l.state_delta_gpu)         cuda_free(l.state_delta_gpu);\n    if(l.gate_gpu)                cuda_free(l.gate_gpu);\n    if(l.gate_delta_gpu)          cuda_free(l.gate_delta_gpu);\n    if(l.save_gpu)                cuda_free(l.save_gpu);\n    if(l.save_delta_gpu)          cuda_free(l.save_delta_gpu);\n    if(l.concat_gpu)              cuda_free(l.concat_gpu);\n    if(l.concat_delta_gpu)        cuda_free(l.concat_delta_gpu);\n    if(l.binary_input_gpu)        cuda_free(l.binary_input_gpu);\n    if(l.binary_weights_gpu)      cuda_free(l.binary_weights_gpu);\n    if(l.mean_gpu)                cuda_free(l.mean_gpu);\n    if(l.variance_gpu)            cuda_free(l.variance_gpu);\n    if(l.rolling_mean_gpu)        cuda_free(l.rolling_mean_gpu);\n    if(l.rolling_variance_gpu)    cuda_free(l.rolling_variance_gpu);\n    if(l.variance_delta_gpu)      cuda_free(l.variance_delta_gpu);\n    if(l.mean_delta_gpu)          cuda_free(l.mean_delta_gpu);\n    if(l.x_gpu)                   cuda_free(l.x_gpu);\n    if(l.x_norm_gpu)              cuda_free(l.x_norm_gpu);\n    if(l.weights_gpu)             cuda_free(l.weights_gpu);\n    if(l.weight_updates_gpu)      
cuda_free(l.weight_updates_gpu);\n    if(l.biases_gpu)              cuda_free(l.biases_gpu);\n    if(l.bias_updates_gpu)        cuda_free(l.bias_updates_gpu);\n    if(l.scales_gpu)              cuda_free(l.scales_gpu);\n    if(l.scale_updates_gpu)       cuda_free(l.scale_updates_gpu);\n    if(l.output_gpu)              cuda_free(l.output_gpu);\n    if(l.delta_gpu)               cuda_free(l.delta_gpu);\n    if(l.rand_gpu)                cuda_free(l.rand_gpu);\n    if(l.squared_gpu)             cuda_free(l.squared_gpu);\n    if(l.norms_gpu)               cuda_free(l.norms_gpu);\n#endif\n}\n"
  },
  {
    "path": "lightnet/_darknet/layer.h",
    "content": "#include \"darknet.h\"\n"
  },
  {
    "path": "lightnet/_darknet/list.c",
    "content": "#include <stdlib.h>\n#include <string.h>\n#include \"list.h\"\n\nlist *make_list()\n{\n\tlist *l = malloc(sizeof(list));\n\tl->size = 0;\n\tl->front = 0;\n\tl->back = 0;\n\treturn l;\n}\n\n/*\nvoid transfer_node(list *s, list *d, node *n)\n{\n    node *prev, *next;\n    prev = n->prev;\n    next = n->next;\n    if(prev) prev->next = next;\n    if(next) next->prev = prev;\n    --s->size;\n    if(s->front == n) s->front = next;\n    if(s->back == n) s->back = prev;\n}\n*/\n\nvoid *list_pop(list *l){\n    if(!l->back) return 0;\n    node *b = l->back;\n    void *val = b->val;\n    l->back = b->prev;\n    if(l->back) l->back->next = 0;\n    free(b);\n    --l->size;\n    \n    return val;\n}\n\nvoid list_insert(list *l, void *val)\n{\n\tnode *new = malloc(sizeof(node));\n\tnew->val = val;\n\tnew->next = 0;\n\n\tif(!l->back){\n\t\tl->front = new;\n\t\tnew->prev = 0;\n\t}else{\n\t\tl->back->next = new;\n\t\tnew->prev = l->back;\n\t}\n\tl->back = new;\n\t++l->size;\n}\n\nvoid free_node(node *n)\n{\n\tnode *next;\n\twhile(n) {\n\t\tnext = n->next;\n\t\tfree(n);\n\t\tn = next;\n\t}\n}\n\nvoid free_list(list *l)\n{\n\tfree_node(l->front);\n\tfree(l);\n}\n\nvoid free_list_contents(list *l)\n{\n\tnode *n = l->front;\n\twhile(n){\n\t\tfree(n->val);\n\t\tn = n->next;\n\t}\n}\n\nvoid **list_to_array(list *l)\n{\n    void **a = calloc(l->size, sizeof(void*));\n    int count = 0;\n    node *n = l->front;\n    while(n){\n        a[count++] = n->val;\n        n = n->next;\n    }\n    return a;\n}\n"
  },
  {
    "path": "lightnet/_darknet/list.h",
    "content": "#ifndef LIST_H\n#define LIST_H\n#include \"darknet.h\"\n\nlist *make_list();\nint list_find(list *l, void *val);\n\nvoid list_insert(list *, void *);\n\n\nvoid free_list_contents(list *l);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/local_layer.c",
    "content": "#include \"local_layer.h\"\n#include \"utils.h\"\n#include \"im2col.h\"\n#include \"col2im.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n#include <stdio.h>\n#include <time.h>\n\nint local_out_height(local_layer l)\n{\n    int h = l.h;\n    if (!l.pad) h -= l.size;\n    else h -= 1;\n    return h/l.stride + 1;\n}\n\nint local_out_width(local_layer l)\n{\n    int w = l.w;\n    if (!l.pad) w -= l.size;\n    else w -= 1;\n    return w/l.stride + 1;\n}\n\nlocal_layer make_local_layer(int batch, int h, int w, int c, int n, int size, int stride, int pad, ACTIVATION activation)\n{\n    int i;\n    local_layer l = {0};\n    l.type = LOCAL;\n\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.n = n;\n    l.batch = batch;\n    l.stride = stride;\n    l.size = size;\n    l.pad = pad;\n\n    int out_h = local_out_height(l);\n    int out_w = local_out_width(l);\n    int locations = out_h*out_w;\n    l.out_h = out_h;\n    l.out_w = out_w;\n    l.out_c = n;\n    l.outputs = l.out_h * l.out_w * l.out_c;\n    l.inputs = l.w * l.h * l.c;\n\n    l.weights = calloc(c*n*size*size*locations, sizeof(float));\n    l.weight_updates = calloc(c*n*size*size*locations, sizeof(float));\n\n    l.biases = calloc(l.outputs, sizeof(float));\n    l.bias_updates = calloc(l.outputs, sizeof(float));\n\n    // float scale = 1./sqrt(size*size*c);\n    float scale = sqrt(2./(size*size*c));\n    for(i = 0; i < c*n*size*size; ++i) l.weights[i] = scale*rand_uniform(-1,1);\n\n    l.output = calloc(l.batch*out_h * out_w * n, sizeof(float));\n    l.delta  = calloc(l.batch*out_h * out_w * n, sizeof(float));\n\n    l.workspace_size = out_h*out_w*size*size*c;\n    \n    l.forward = forward_local_layer;\n    l.backward = backward_local_layer;\n    l.update = update_local_layer;\n\n#ifdef GPU\n    l.forward_gpu = forward_local_layer_gpu;\n    l.backward_gpu = backward_local_layer_gpu;\n    l.update_gpu = update_local_layer_gpu;\n\n    l.weights_gpu = cuda_make_array(l.weights, 
c*n*size*size*locations);\n    l.weight_updates_gpu = cuda_make_array(l.weight_updates, c*n*size*size*locations);\n\n    l.biases_gpu = cuda_make_array(l.biases, l.outputs);\n    l.bias_updates_gpu = cuda_make_array(l.bias_updates, l.outputs);\n\n    l.delta_gpu = cuda_make_array(l.delta, l.batch*out_h*out_w*n);\n    l.output_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);\n\n#endif\n    l.activation = activation;\n\n    fprintf(stderr, \"Local Layer: %d x %d x %d image, %d filters -> %d x %d x %d image\\n\", h,w,c,n, out_h, out_w, n);\n\n    return l;\n}\n\nvoid forward_local_layer(const local_layer l, network net)\n{\n    int out_h = local_out_height(l);\n    int out_w = local_out_width(l);\n    int i, j;\n    int locations = out_h * out_w;\n\n    for(i = 0; i < l.batch; ++i){\n        copy_cpu(l.outputs, l.biases, 1, l.output + i*l.outputs, 1);\n    }\n\n    for(i = 0; i < l.batch; ++i){\n        float *input = net.input + i*l.w*l.h*l.c;\n        im2col_cpu(input, l.c, l.h, l.w, \n                l.size, l.stride, l.pad, net.workspace);\n        float *output = l.output + i*l.outputs;\n        for(j = 0; j < locations; ++j){\n            float *a = l.weights + j*l.size*l.size*l.c*l.n;\n            float *b = net.workspace + j;\n            float *c = output + j;\n\n            int m = l.n;\n            int n = 1;\n            int k = l.size*l.size*l.c;\n\n            gemm(0,0,m,n,k,1,a,k,b,locations,1,c,locations);\n        }\n    }\n    activate_array(l.output, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_local_layer(local_layer l, network net)\n{\n    int i, j;\n    int locations = l.out_w*l.out_h;\n\n    gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);\n\n    for(i = 0; i < l.batch; ++i){\n        axpy_cpu(l.outputs, 1, l.delta + i*l.outputs, 1, l.bias_updates, 1);\n    }\n\n    for(i = 0; i < l.batch; ++i){\n        float *input = net.input + i*l.w*l.h*l.c;\n        im2col_cpu(input, l.c, l.h, l.w, \n                
l.size, l.stride, l.pad, net.workspace);\n\n        for(j = 0; j < locations; ++j){ \n            float *a = l.delta + i*l.outputs + j;\n            float *b = net.workspace + j;\n            float *c = l.weight_updates + j*l.size*l.size*l.c*l.n;\n            int m = l.n;\n            int n = l.size*l.size*l.c;\n            int k = 1;\n\n            gemm(0,1,m,n,k,1,a,locations,b,locations,1,c,n);\n        }\n\n        if(net.delta){\n            for(j = 0; j < locations; ++j){ \n                float *a = l.weights + j*l.size*l.size*l.c*l.n;\n                float *b = l.delta + i*l.outputs + j;\n                float *c = net.workspace + j;\n\n                int m = l.size*l.size*l.c;\n                int n = 1;\n                int k = l.n;\n\n                gemm(1,0,m,n,k,1,a,m,b,locations,0,c,locations);\n            }\n\n            col2im_cpu(net.workspace, l.c,  l.h,  l.w,  l.size,  l.stride, l.pad, net.delta+i*l.c*l.h*l.w);\n        }\n    }\n}\n\nvoid update_local_layer(local_layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n\n    int locations = l.out_w*l.out_h;\n    int size = l.size*l.size*l.c*l.n*locations;\n    axpy_cpu(l.outputs, learning_rate/batch, l.bias_updates, 1, l.biases, 1);\n    scal_cpu(l.outputs, momentum, l.bias_updates, 1);\n\n    axpy_cpu(size, -decay*batch, l.weights, 1, l.weight_updates, 1);\n    axpy_cpu(size, learning_rate/batch, l.weight_updates, 1, l.weights, 1);\n    scal_cpu(size, momentum, l.weight_updates, 1);\n}\n\n#ifdef GPU\n\nvoid forward_local_layer_gpu(const local_layer l, network net)\n{\n    int out_h = local_out_height(l);\n    int out_w = local_out_width(l);\n    int i, j;\n    int locations = out_h * out_w;\n\n    for(i = 0; i < l.batch; ++i){\n        copy_gpu(l.outputs, l.biases_gpu, 1, l.output_gpu + i*l.outputs, 1);\n    }\n\n    for(i = 0; i < l.batch; ++i){\n        float 
*input = net.input_gpu + i*l.w*l.h*l.c;\n        im2col_gpu(input, l.c, l.h, l.w, \n                l.size, l.stride, l.pad, net.workspace);\n        float *output = l.output_gpu + i*l.outputs;\n        for(j = 0; j < locations; ++j){\n            float *a = l.weights_gpu + j*l.size*l.size*l.c*l.n;\n            float *b = net.workspace + j;\n            float *c = output + j;\n\n            int m = l.n;\n            int n = 1;\n            int k = l.size*l.size*l.c;\n\n            gemm_gpu(0,0,m,n,k,1,a,k,b,locations,1,c,locations);\n        }\n    }\n    activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_local_layer_gpu(local_layer l, network net)\n{\n    int i, j;\n    int locations = l.out_w*l.out_h;\n\n    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);\n    for(i = 0; i < l.batch; ++i){\n        axpy_gpu(l.outputs, 1, l.delta_gpu + i*l.outputs, 1, l.bias_updates_gpu, 1);\n    }\n\n    for(i = 0; i < l.batch; ++i){\n        float *input = net.input_gpu + i*l.w*l.h*l.c;\n        im2col_gpu(input, l.c, l.h, l.w, \n                l.size, l.stride, l.pad, net.workspace);\n\n        for(j = 0; j < locations; ++j){ \n            float *a = l.delta_gpu + i*l.outputs + j;\n            float *b = net.workspace + j;\n            float *c = l.weight_updates_gpu + j*l.size*l.size*l.c*l.n;\n            int m = l.n;\n            int n = l.size*l.size*l.c;\n            int k = 1;\n\n            gemm_gpu(0,1,m,n,k,1,a,locations,b,locations,1,c,n);\n        }\n\n        if(net.delta_gpu){\n            for(j = 0; j < locations; ++j){ \n                float *a = l.weights_gpu + j*l.size*l.size*l.c*l.n;\n                float *b = l.delta_gpu + i*l.outputs + j;\n                float *c = net.workspace + j;\n\n                int m = l.size*l.size*l.c;\n                int n = 1;\n                int k = l.n;\n\n                gemm_gpu(1,0,m,n,k,1,a,m,b,locations,0,c,locations);\n            }\n\n            
col2im_gpu(net.workspace, l.c,  l.h,  l.w,  l.size,  l.stride, l.pad, net.delta_gpu+i*l.c*l.h*l.w);\n        }\n    }\n}\n\nvoid update_local_layer_gpu(local_layer l, update_args a)\n{\n    float learning_rate = a.learning_rate*l.learning_rate_scale;\n    float momentum = a.momentum;\n    float decay = a.decay;\n    int batch = a.batch;\n\n    int locations = l.out_w*l.out_h;\n    int size = l.size*l.size*l.c*l.n*locations;\n    axpy_gpu(l.outputs, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);\n    scal_gpu(l.outputs, momentum, l.bias_updates_gpu, 1);\n\n    axpy_gpu(size, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);\n    axpy_gpu(size, learning_rate/batch, l.weight_updates_gpu, 1, l.weights_gpu, 1);\n    scal_gpu(size, momentum, l.weight_updates_gpu, 1);\n}\n\nvoid pull_local_layer(local_layer l)\n{\n    int locations = l.out_w*l.out_h;\n    int size = l.size*l.size*l.c*l.n*locations;\n    cuda_pull_array(l.weights_gpu, l.weights, size);\n    cuda_pull_array(l.biases_gpu, l.biases, l.outputs);\n}\n\nvoid push_local_layer(local_layer l)\n{\n    int locations = l.out_w*l.out_h;\n    int size = l.size*l.size*l.c*l.n*locations;\n    cuda_push_array(l.weights_gpu, l.weights, size);\n    cuda_push_array(l.biases_gpu, l.biases, l.outputs);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/local_layer.h",
    "content": "#ifndef LOCAL_LAYER_H\n#define LOCAL_LAYER_H\n\n#include \"cuda.h\"\n#include \"image.h\"\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer local_layer;\n\n#ifdef GPU\nvoid forward_local_layer_gpu(local_layer layer, network net);\nvoid backward_local_layer_gpu(local_layer layer, network net);\nvoid update_local_layer_gpu(local_layer layer, update_args a);\n\nvoid push_local_layer(local_layer layer);\nvoid pull_local_layer(local_layer layer);\n#endif\n\nlocal_layer make_local_layer(int batch, int h, int w, int c, int n, int size, int stride, int pad, ACTIVATION activation);\n\nvoid forward_local_layer(const local_layer layer, network net);\nvoid backward_local_layer(local_layer layer, network net);\nvoid update_local_layer(local_layer layer, update_args a);\n\nvoid bias_output(float *output, float *biases, int batch, int n, int size);\nvoid backward_bias(float *bias_updates, float *delta, int batch, int n, int size);\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/lstm_layer.c",
    "content": "#include \"lstm_layer.h\"\n#include \"connected_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nstatic void increment_layer(layer *l, int steps)\n{\n    int num = l->outputs*l->batch*steps;\n    l->output += num;\n    l->delta += num;\n    l->x += num;\n    l->x_norm += num;\n\n#ifdef GPU\n    l->output_gpu += num;\n    l->delta_gpu += num;\n    l->x_gpu += num;\n    l->x_norm_gpu += num;\n#endif\n}\n\nlayer make_lstm_layer(int batch, int inputs, int outputs, int steps, int batch_normalize, int adam)\n{\n    fprintf(stderr, \"LSTM Layer: %d inputs, %d outputs\\n\", inputs, outputs);\n    batch = batch / steps;\n    layer l = { 0 };\n    l.batch = batch;\n    l.type = LSTM;\n    l.steps = steps;\n    l.inputs = inputs;\n\n    l.uf = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.uf) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.uf->batch = batch;\n\n    l.ui = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.ui) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.ui->batch = batch;\n\n    l.ug = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.ug) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.ug->batch = batch;\n\n    l.uo = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.uo) = make_connected_layer(batch*steps, inputs, outputs, LINEAR, batch_normalize, adam);\n    l.uo->batch = batch;\n\n    l.wf = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wf) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, batch_normalize, adam);\n    l.wf->batch = batch;\n\n    l.wi = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wi) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, 
batch_normalize, adam);\n    l.wi->batch = batch;\n\n    l.wg = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wg) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, batch_normalize, adam);\n    l.wg->batch = batch;\n\n    l.wo = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.wo) = make_connected_layer(batch*steps, outputs, outputs, LINEAR, batch_normalize, adam);\n    l.wo->batch = batch;\n\n    l.batch_normalize = batch_normalize;\n    l.outputs = outputs;\n\n    l.output = calloc(outputs*batch*steps, sizeof(float));\n    l.state = calloc(outputs*batch, sizeof(float));\n\n    l.forward = forward_lstm_layer;\n    l.update = update_lstm_layer;\n\n    l.prev_state_cpu =  calloc(batch*outputs, sizeof(float));\n    l.prev_cell_cpu =   calloc(batch*outputs, sizeof(float));\n    l.cell_cpu =        calloc(batch*outputs*steps, sizeof(float));\n\n    l.f_cpu =           calloc(batch*outputs, sizeof(float));\n    l.i_cpu =           calloc(batch*outputs, sizeof(float));\n    l.g_cpu =           calloc(batch*outputs, sizeof(float));\n    l.o_cpu =           calloc(batch*outputs, sizeof(float));\n    l.c_cpu =           calloc(batch*outputs, sizeof(float));\n    l.h_cpu =           calloc(batch*outputs, sizeof(float));\n    l.temp_cpu =        calloc(batch*outputs, sizeof(float));\n    l.temp2_cpu =       calloc(batch*outputs, sizeof(float));\n    l.temp3_cpu =       calloc(batch*outputs, sizeof(float));\n    l.dc_cpu =          calloc(batch*outputs, sizeof(float));\n    l.dh_cpu =          calloc(batch*outputs, sizeof(float));\n\n#ifdef GPU\n    l.forward_gpu = forward_lstm_layer_gpu;\n    l.backward_gpu = backward_lstm_layer_gpu;\n    l.update_gpu = update_lstm_layer_gpu;\n\n    l.output_gpu = cuda_make_array(0, batch*outputs*steps);\n    l.delta_gpu = cuda_make_array(0, batch*l.outputs*steps);\n\n    l.prev_state_gpu = cuda_make_array(0, batch*outputs);\n    l.prev_cell_gpu = cuda_make_array(0, batch*outputs);\n    
l.cell_gpu = cuda_make_array(0, batch*outputs*steps);\n\n    l.f_gpu = cuda_make_array(0, batch*outputs);\n    l.i_gpu = cuda_make_array(0, batch*outputs);\n    l.g_gpu = cuda_make_array(0, batch*outputs);\n    l.o_gpu = cuda_make_array(0, batch*outputs);\n    l.c_gpu = cuda_make_array(0, batch*outputs);\n    l.h_gpu = cuda_make_array(0, batch*outputs);\n    l.temp_gpu =  cuda_make_array(0, batch*outputs);\n    l.temp2_gpu = cuda_make_array(0, batch*outputs);\n    l.temp3_gpu = cuda_make_array(0, batch*outputs);\n    l.dc_gpu = cuda_make_array(0, batch*outputs);\n    l.dh_gpu = cuda_make_array(0, batch*outputs);\n#ifdef CUDNN\n        cudnnSetTensor4dDescriptor(l.wf->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wf->out_c, l.wf->out_h, l.wf->out_w); \n        cudnnSetTensor4dDescriptor(l.wi->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wi->out_c, l.wi->out_h, l.wi->out_w); \n        cudnnSetTensor4dDescriptor(l.wg->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wg->out_c, l.wg->out_h, l.wg->out_w); \n        cudnnSetTensor4dDescriptor(l.wo->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.wo->out_c, l.wo->out_h, l.wo->out_w); \n\n        cudnnSetTensor4dDescriptor(l.uf->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.uf->out_c, l.uf->out_h, l.uf->out_w); \n        cudnnSetTensor4dDescriptor(l.ui->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.ui->out_c, l.ui->out_h, l.ui->out_w); \n        cudnnSetTensor4dDescriptor(l.ug->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.ug->out_c, l.ug->out_h, l.ug->out_w); \n        cudnnSetTensor4dDescriptor(l.uo->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.uo->out_c, l.uo->out_h, l.uo->out_w); \n#endif\n\n#endif\n\n    return l;\n}\n\nvoid update_lstm_layer(layer l, update_args a)\n{\n    update_connected_layer(*(l.wf), a);\n    update_connected_layer(*(l.wi), a);\n    update_connected_layer(*(l.wg), a);\n    
update_connected_layer(*(l.wo), a);\n    update_connected_layer(*(l.uf), a);\n    update_connected_layer(*(l.ui), a);\n    update_connected_layer(*(l.ug), a);\n    update_connected_layer(*(l.uo), a);\n}\n\nvoid forward_lstm_layer(layer l, network state)\n{\n    network s = { 0 };\n    s.train = state.train;\n    int i;\n    layer wf = *(l.wf);\n    layer wi = *(l.wi);\n    layer wg = *(l.wg);\n    layer wo = *(l.wo);\n\n    layer uf = *(l.uf);\n    layer ui = *(l.ui);\n    layer ug = *(l.ug);\n    layer uo = *(l.uo);\n\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wf.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wi.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wg.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, wo.delta, 1);\n\n    fill_cpu(l.outputs * l.batch * l.steps, 0, uf.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, ui.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, ug.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, uo.delta, 1);\n    if (state.train) {\n        fill_cpu(l.outputs * l.batch * l.steps, 0, l.delta, 1);\n    }\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input = l.h_cpu;\n        forward_connected_layer(wf, s);\t\t\t\t\t\t\t\n        forward_connected_layer(wi, s);\t\t\t\t\t\t\t\n        forward_connected_layer(wg, s);\t\t\t\t\t\t\t\n        forward_connected_layer(wo, s);\t\t\t\t\t\t\t\n\n        s.input = state.input;\n        forward_connected_layer(uf, s);\t\t\t\t\t\t\t\n        forward_connected_layer(ui, s);\t\t\t\t\t\t\t\n        forward_connected_layer(ug, s);\t\t\t\t\t\t\t\n        forward_connected_layer(uo, s);\t\t\t\t\t\t\t\n\n        copy_cpu(l.outputs*l.batch, wf.output, 1, l.f_cpu, 1);\n        axpy_cpu(l.outputs*l.batch, 1, uf.output, 1, l.f_cpu, 1);\n\n        copy_cpu(l.outputs*l.batch, wi.output, 1, l.i_cpu, 1);\t\n        axpy_cpu(l.outputs*l.batch, 1, ui.output, 1, l.i_cpu, 1);\t\n\n        copy_cpu(l.outputs*l.batch, wg.output, 1, l.g_cpu, 1);\t\n 
       axpy_cpu(l.outputs*l.batch, 1, ug.output, 1, l.g_cpu, 1);\t\n\n        copy_cpu(l.outputs*l.batch, wo.output, 1, l.o_cpu, 1);\t\n        axpy_cpu(l.outputs*l.batch, 1, uo.output, 1, l.o_cpu, 1);\t\n\n        activate_array(l.f_cpu, l.outputs*l.batch, LOGISTIC);\t\t\n        activate_array(l.i_cpu, l.outputs*l.batch, LOGISTIC);\t\t\n        activate_array(l.g_cpu, l.outputs*l.batch, TANH);\t\t\t\n        activate_array(l.o_cpu, l.outputs*l.batch, LOGISTIC);\t\t\n\n        copy_cpu(l.outputs*l.batch, l.i_cpu, 1, l.temp_cpu, 1);\t\t\n        mul_cpu(l.outputs*l.batch, l.g_cpu, 1, l.temp_cpu, 1);\t\t\n        mul_cpu(l.outputs*l.batch, l.f_cpu, 1, l.c_cpu, 1);\t\t\t\n        axpy_cpu(l.outputs*l.batch, 1, l.temp_cpu, 1, l.c_cpu, 1);\t\n\n        copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.h_cpu, 1);\t\t\t\n        activate_array(l.h_cpu, l.outputs*l.batch, TANH);\t\t\n        mul_cpu(l.outputs*l.batch, l.o_cpu, 1, l.h_cpu, 1);\t\n\n        copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.cell_cpu, 1);\t\t\n        copy_cpu(l.outputs*l.batch, l.h_cpu, 1, l.output, 1);\n\n        state.input += l.inputs*l.batch;\n        l.output    += l.outputs*l.batch;\n        l.cell_cpu      += l.outputs*l.batch;\n\n        increment_layer(&wf, 1);\n        increment_layer(&wi, 1);\n        increment_layer(&wg, 1);\n        increment_layer(&wo, 1);\n\n        increment_layer(&uf, 1);\n        increment_layer(&ui, 1);\n        increment_layer(&ug, 1);\n        increment_layer(&uo, 1);\n    }\n}\n\nvoid backward_lstm_layer(layer l, network state)\n{\n    network s = { 0 };\n    s.train = state.train;\n    int i;\n    layer wf = *(l.wf);\n    layer wi = *(l.wi);\n    layer wg = *(l.wg);\n    layer wo = *(l.wo);\n\n    layer uf = *(l.uf);\n    layer ui = *(l.ui);\n    layer ug = *(l.ug);\n    layer uo = *(l.uo);\n\n    increment_layer(&wf, l.steps - 1);\n    increment_layer(&wi, l.steps - 1);\n    increment_layer(&wg, l.steps - 1);\n    increment_layer(&wo, l.steps - 1);\n\n    
increment_layer(&uf, l.steps - 1);\n    increment_layer(&ui, l.steps - 1);\n    increment_layer(&ug, l.steps - 1);\n    increment_layer(&uo, l.steps - 1);\n\n    state.input += l.inputs*l.batch*(l.steps - 1);\n    if (state.delta) state.delta += l.inputs*l.batch*(l.steps - 1);\n\n    l.output += l.outputs*l.batch*(l.steps - 1);\n    l.cell_cpu += l.outputs*l.batch*(l.steps - 1);\n    l.delta += l.outputs*l.batch*(l.steps - 1);\n\n    for (i = l.steps - 1; i >= 0; --i) {\n        if (i != 0) copy_cpu(l.outputs*l.batch, l.cell_cpu - l.outputs*l.batch, 1, l.prev_cell_cpu, 1);\n        copy_cpu(l.outputs*l.batch, l.cell_cpu, 1, l.c_cpu, 1);\n        if (i != 0) copy_cpu(l.outputs*l.batch, l.output - l.outputs*l.batch, 1, l.prev_state_cpu, 1);\n        copy_cpu(l.outputs*l.batch, l.output, 1, l.h_cpu, 1);\n\n        l.dh_cpu = (i == 0) ? 0 : l.delta - l.outputs*l.batch;\n\n        copy_cpu(l.outputs*l.batch, wf.output, 1, l.f_cpu, 1);\t\t\t\n        axpy_cpu(l.outputs*l.batch, 1, uf.output, 1, l.f_cpu, 1);\t\t\t\n\n        copy_cpu(l.outputs*l.batch, wi.output, 1, l.i_cpu, 1);\t\t\t\n        axpy_cpu(l.outputs*l.batch, 1, ui.output, 1, l.i_cpu, 1);\t\t\t\n\n        copy_cpu(l.outputs*l.batch, wg.output, 1, l.g_cpu, 1);\t\t\t\n        axpy_cpu(l.outputs*l.batch, 1, ug.output, 1, l.g_cpu, 1);\t\t\t\n\n        copy_cpu(l.outputs*l.batch, wo.output, 1, l.o_cpu, 1);\t\t\t\n        axpy_cpu(l.outputs*l.batch, 1, uo.output, 1, l.o_cpu, 1);\t\t\t\n\n        activate_array(l.f_cpu, l.outputs*l.batch, LOGISTIC);\t\t\t\n        activate_array(l.i_cpu, l.outputs*l.batch, LOGISTIC);\t\t\n        activate_array(l.g_cpu, l.outputs*l.batch, TANH);\t\t\t\n        activate_array(l.o_cpu, l.outputs*l.batch, LOGISTIC);\t\t\n\n        copy_cpu(l.outputs*l.batch, l.delta, 1, l.temp3_cpu, 1);\t\t\n\n        copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.temp_cpu, 1);\t\t\t\n        activate_array(l.temp_cpu, l.outputs*l.batch, TANH);\t\t\t\n\n        copy_cpu(l.outputs*l.batch, l.temp3_cpu, 1, 
l.temp2_cpu, 1);\t\t\n        mul_cpu(l.outputs*l.batch, l.o_cpu, 1, l.temp2_cpu, 1);\t\t\t\n\n        gradient_array(l.temp_cpu, l.outputs*l.batch, TANH, l.temp2_cpu);\n        axpy_cpu(l.outputs*l.batch, 1, l.dc_cpu, 1, l.temp2_cpu, 1);\t\t\n\n        copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.temp_cpu, 1);\t\t\t\n        activate_array(l.temp_cpu, l.outputs*l.batch, TANH);\t\t\t\n        mul_cpu(l.outputs*l.batch, l.temp3_cpu, 1, l.temp_cpu, 1);\t\t\n        gradient_array(l.o_cpu, l.outputs*l.batch, LOGISTIC, l.temp_cpu);\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wo.delta, 1);\n        s.input = l.prev_state_cpu;\n        s.delta = l.dh_cpu;\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n        backward_connected_layer(wo, s);\t\n\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, uo.delta, 1);\n        s.input = state.input;\n        s.delta = state.delta;\n        backward_connected_layer(uo, s);\t\t\t\t\t\t\t\t\t\n\n        copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);\t\t\t\n        mul_cpu(l.outputs*l.batch, l.i_cpu, 1, l.temp_cpu, 1);\t\t\t\t\n        gradient_array(l.g_cpu, l.outputs*l.batch, TANH, l.temp_cpu);\t\t\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wg.delta, 1);\n        s.input = l.prev_state_cpu;\n        s.delta = l.dh_cpu;\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n        backward_connected_layer(wg, s);\t\n\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, ug.delta, 1);\n        s.input = state.input;\n        s.delta = state.delta;\n        backward_connected_layer(ug, s);\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\n        copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);\t\t\t\n        mul_cpu(l.outputs*l.batch, l.g_cpu, 1, l.temp_cpu, 1);\t\t\t\t\n        gradient_array(l.i_cpu, l.outputs*l.batch, LOGISTIC, l.temp_cpu);\t\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wi.delta, 1);\n        s.input = l.prev_state_cpu;\n        s.delta = l.dh_cpu;\n        backward_connected_layer(wi, s);\t\t\t\t\t\t\n\n        
copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, ui.delta, 1);\n        s.input = state.input;\n        s.delta = state.delta;\n        backward_connected_layer(ui, s);\t\t\t\t\t\t\t\t\t\n\n        copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);\t\t\n        mul_cpu(l.outputs*l.batch, l.prev_cell_cpu, 1, l.temp_cpu, 1);\n        gradient_array(l.f_cpu, l.outputs*l.batch, LOGISTIC, l.temp_cpu);\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wf.delta, 1);\n        s.input = l.prev_state_cpu;\n        s.delta = l.dh_cpu;\n        backward_connected_layer(wf, s);\t\t\t\t\t\t\n\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, uf.delta, 1);\n        s.input = state.input;\n        s.delta = state.delta;\n        backward_connected_layer(uf, s);\t\t\t\t\t\t\t\t\t\n\n        copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);\t\t\t\n        mul_cpu(l.outputs*l.batch, l.f_cpu, 1, l.temp_cpu, 1);\t\t\t\t\n        copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, l.dc_cpu, 1);\t\t\t\t\n\n        state.input -= l.inputs*l.batch;\n        if (state.delta) state.delta -= l.inputs*l.batch;\n        l.output -= l.outputs*l.batch;\n        l.cell_cpu -= l.outputs*l.batch;\n        l.delta -= l.outputs*l.batch;\n\n        increment_layer(&wf, -1);\n        increment_layer(&wi, -1);\n        increment_layer(&wg, -1);\n        increment_layer(&wo, -1);\n\n        increment_layer(&uf, -1);\n        increment_layer(&ui, -1);\n        increment_layer(&ug, -1);\n        increment_layer(&uo, -1);\n    }\n}\n\n#ifdef GPU\nvoid update_lstm_layer_gpu(layer l, update_args a)\n{\n    update_connected_layer_gpu(*(l.wf), a);\n    update_connected_layer_gpu(*(l.wi), a);\n    update_connected_layer_gpu(*(l.wg), a);\n    update_connected_layer_gpu(*(l.wo), a);\n    update_connected_layer_gpu(*(l.uf), a);\n    update_connected_layer_gpu(*(l.ui), a);\n    update_connected_layer_gpu(*(l.ug), a);\n    update_connected_layer_gpu(*(l.uo), a);\n}\n\nvoid forward_lstm_layer_gpu(layer l, 
network state)\n{\n    network s = { 0 };\n    s.train = state.train;\n    int i;\n    layer wf = *(l.wf);\n    layer wi = *(l.wi);\n    layer wg = *(l.wg);\n    layer wo = *(l.wo);\n\n    layer uf = *(l.uf);\n    layer ui = *(l.ui);\n    layer ug = *(l.ug);\n    layer uo = *(l.uo);\n\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wf.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wi.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wg.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, wo.delta_gpu, 1);\n\n    fill_gpu(l.outputs * l.batch * l.steps, 0, uf.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, ui.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, ug.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, uo.delta_gpu, 1);\n    if (state.train) {\n        fill_gpu(l.outputs * l.batch * l.steps, 0, l.delta_gpu, 1);\n    }\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input_gpu = l.h_gpu;\n        forward_connected_layer_gpu(wf, s);\t\t\t\t\t\t\t\n        forward_connected_layer_gpu(wi, s);\t\t\t\t\t\t\t\n        forward_connected_layer_gpu(wg, s);\t\t\t\t\t\t\t\n        forward_connected_layer_gpu(wo, s);\t\t\t\t\t\t\t\n\n        s.input_gpu = state.input_gpu;\n        forward_connected_layer_gpu(uf, s);\t\t\t\t\t\t\t\n        forward_connected_layer_gpu(ui, s);\t\t\t\t\t\t\t\n        forward_connected_layer_gpu(ug, s);\t\t\t\t\t\t\t\n        forward_connected_layer_gpu(uo, s);\t\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, wf.output_gpu, 1, l.f_gpu, 1);\n        axpy_gpu(l.outputs*l.batch, 1, uf.output_gpu, 1, l.f_gpu, 1);\n\n        copy_gpu(l.outputs*l.batch, wi.output_gpu, 1, l.i_gpu, 1);\t\n        axpy_gpu(l.outputs*l.batch, 1, ui.output_gpu, 1, l.i_gpu, 1);\t\n\n        copy_gpu(l.outputs*l.batch, wg.output_gpu, 1, l.g_gpu, 1);\t\n        axpy_gpu(l.outputs*l.batch, 1, ug.output_gpu, 1, l.g_gpu, 1);\t\n\n        copy_gpu(l.outputs*l.batch, wo.output_gpu, 1, l.o_gpu, 
1);\t\n        axpy_gpu(l.outputs*l.batch, 1, uo.output_gpu, 1, l.o_gpu, 1);\t\n\n        activate_array_gpu(l.f_gpu, l.outputs*l.batch, LOGISTIC);\t\t\n        activate_array_gpu(l.i_gpu, l.outputs*l.batch, LOGISTIC);\t\t\n        activate_array_gpu(l.g_gpu, l.outputs*l.batch, TANH);\t\t\t\n        activate_array_gpu(l.o_gpu, l.outputs*l.batch, LOGISTIC);\t\t\n\n        copy_gpu(l.outputs*l.batch, l.i_gpu, 1, l.temp_gpu, 1);\t\t\n        mul_gpu(l.outputs*l.batch, l.g_gpu, 1, l.temp_gpu, 1);\t\t\n        mul_gpu(l.outputs*l.batch, l.f_gpu, 1, l.c_gpu, 1);\t\t\t\n        axpy_gpu(l.outputs*l.batch, 1, l.temp_gpu, 1, l.c_gpu, 1);\t\n\n        copy_gpu(l.outputs*l.batch, l.c_gpu, 1, l.h_gpu, 1);\t\t\t\n        activate_array_gpu(l.h_gpu, l.outputs*l.batch, TANH);\t\t\n        mul_gpu(l.outputs*l.batch, l.o_gpu, 1, l.h_gpu, 1);\t\n\n        copy_gpu(l.outputs*l.batch, l.c_gpu, 1, l.cell_gpu, 1);\t\t\n        copy_gpu(l.outputs*l.batch, l.h_gpu, 1, l.output_gpu, 1);\n\n        state.input_gpu += l.inputs*l.batch;\n        l.output_gpu    += l.outputs*l.batch;\n        l.cell_gpu      += l.outputs*l.batch;\n\n        increment_layer(&wf, 1);\n        increment_layer(&wi, 1);\n        increment_layer(&wg, 1);\n        increment_layer(&wo, 1);\n\n        increment_layer(&uf, 1);\n        increment_layer(&ui, 1);\n        increment_layer(&ug, 1);\n        increment_layer(&uo, 1);\n    }\n}\n\nvoid backward_lstm_layer_gpu(layer l, network state)\n{\n    network s = { 0 };\n    s.train = state.train;\n    int i;\n    layer wf = *(l.wf);\n    layer wi = *(l.wi);\n    layer wg = *(l.wg);\n    layer wo = *(l.wo);\n\n    layer uf = *(l.uf);\n    layer ui = *(l.ui);\n    layer ug = *(l.ug);\n    layer uo = *(l.uo);\n\n    increment_layer(&wf, l.steps - 1);\n    increment_layer(&wi, l.steps - 1);\n    increment_layer(&wg, l.steps - 1);\n    increment_layer(&wo, l.steps - 1);\n\n    increment_layer(&uf, l.steps - 1);\n    increment_layer(&ui, l.steps - 1);\n    increment_layer(&ug, 
l.steps - 1);\n    increment_layer(&uo, l.steps - 1);\n\n    state.input_gpu += l.inputs*l.batch*(l.steps - 1);\n    if (state.delta_gpu) state.delta_gpu += l.inputs*l.batch*(l.steps - 1);\n\n    l.output_gpu += l.outputs*l.batch*(l.steps - 1);\n    l.cell_gpu += l.outputs*l.batch*(l.steps - 1);\n    l.delta_gpu += l.outputs*l.batch*(l.steps - 1);\n\n    for (i = l.steps - 1; i >= 0; --i) {\n        if (i != 0) copy_gpu(l.outputs*l.batch, l.cell_gpu - l.outputs*l.batch, 1, l.prev_cell_gpu, 1);\n        copy_gpu(l.outputs*l.batch, l.cell_gpu, 1, l.c_gpu, 1);\n        if (i != 0) copy_gpu(l.outputs*l.batch, l.output_gpu - l.outputs*l.batch, 1, l.prev_state_gpu, 1);\n        copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.h_gpu, 1);\n\n        l.dh_gpu = (i == 0) ? 0 : l.delta_gpu - l.outputs*l.batch;\n\n        copy_gpu(l.outputs*l.batch, wf.output_gpu, 1, l.f_gpu, 1);\t\t\t\n        axpy_gpu(l.outputs*l.batch, 1, uf.output_gpu, 1, l.f_gpu, 1);\t\t\t\n\n        copy_gpu(l.outputs*l.batch, wi.output_gpu, 1, l.i_gpu, 1);\t\t\t\n        axpy_gpu(l.outputs*l.batch, 1, ui.output_gpu, 1, l.i_gpu, 1);\t\t\t\n\n        copy_gpu(l.outputs*l.batch, wg.output_gpu, 1, l.g_gpu, 1);\t\t\t\n        axpy_gpu(l.outputs*l.batch, 1, ug.output_gpu, 1, l.g_gpu, 1);\t\t\t\n\n        copy_gpu(l.outputs*l.batch, wo.output_gpu, 1, l.o_gpu, 1);\t\t\t\n        axpy_gpu(l.outputs*l.batch, 1, uo.output_gpu, 1, l.o_gpu, 1);\t\t\t\n\n        activate_array_gpu(l.f_gpu, l.outputs*l.batch, LOGISTIC);\t\t\t\n        activate_array_gpu(l.i_gpu, l.outputs*l.batch, LOGISTIC);\t\t\n        activate_array_gpu(l.g_gpu, l.outputs*l.batch, TANH);\t\t\t\n        activate_array_gpu(l.o_gpu, l.outputs*l.batch, LOGISTIC);\t\t\n\n        copy_gpu(l.outputs*l.batch, l.delta_gpu, 1, l.temp3_gpu, 1);\t\t\n\n        copy_gpu(l.outputs*l.batch, l.c_gpu, 1, l.temp_gpu, 1);\t\t\t\n        activate_array_gpu(l.temp_gpu, l.outputs*l.batch, TANH);\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp3_gpu, 1, l.temp2_gpu, 
1);\t\t\n        mul_gpu(l.outputs*l.batch, l.o_gpu, 1, l.temp2_gpu, 1);\t\t\t\n\n        gradient_array_gpu(l.temp_gpu, l.outputs*l.batch, TANH, l.temp2_gpu);\n        axpy_gpu(l.outputs*l.batch, 1, l.dc_gpu, 1, l.temp2_gpu, 1);\t\t\n\n        copy_gpu(l.outputs*l.batch, l.c_gpu, 1, l.temp_gpu, 1);\t\t\t\n        activate_array_gpu(l.temp_gpu, l.outputs*l.batch, TANH);\t\t\t\n        mul_gpu(l.outputs*l.batch, l.temp3_gpu, 1, l.temp_gpu, 1);\t\t\n        gradient_array_gpu(l.o_gpu, l.outputs*l.batch, LOGISTIC, l.temp_gpu);\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, wo.delta_gpu, 1);\n        s.input_gpu = l.prev_state_gpu;\n        s.delta_gpu = l.dh_gpu;\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n        backward_connected_layer_gpu(wo, s);\t\n\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, uo.delta_gpu, 1);\n        s.input_gpu = state.input_gpu;\n        s.delta_gpu = state.delta_gpu;\n        backward_connected_layer_gpu(uo, s);\t\t\t\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp2_gpu, 1, l.temp_gpu, 1);\t\t\t\n        mul_gpu(l.outputs*l.batch, l.i_gpu, 1, l.temp_gpu, 1);\t\t\t\t\n        gradient_array_gpu(l.g_gpu, l.outputs*l.batch, TANH, l.temp_gpu);\t\t\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, wg.delta_gpu, 1);\n        s.input_gpu = l.prev_state_gpu;\n        s.delta_gpu = l.dh_gpu;\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n        backward_connected_layer_gpu(wg, s);\t\n\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, ug.delta_gpu, 1);\n        s.input_gpu = state.input_gpu;\n        s.delta_gpu = state.delta_gpu;\n        backward_connected_layer_gpu(ug, s);\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp2_gpu, 1, l.temp_gpu, 1);\t\t\t\n        mul_gpu(l.outputs*l.batch, l.g_gpu, 1, l.temp_gpu, 1);\t\t\t\t\n        gradient_array_gpu(l.i_gpu, l.outputs*l.batch, LOGISTIC, l.temp_gpu);\t\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, wi.delta_gpu, 1);\n        s.input_gpu = l.prev_state_gpu;\n        
s.delta_gpu = l.dh_gpu;\n        backward_connected_layer_gpu(wi, s);\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, ui.delta_gpu, 1);\n        s.input_gpu = state.input_gpu;\n        s.delta_gpu = state.delta_gpu;\n        backward_connected_layer_gpu(ui, s);\t\t\t\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp2_gpu, 1, l.temp_gpu, 1);\t\t\n        mul_gpu(l.outputs*l.batch, l.prev_cell_gpu, 1, l.temp_gpu, 1);\n        gradient_array_gpu(l.f_gpu, l.outputs*l.batch, LOGISTIC, l.temp_gpu);\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, wf.delta_gpu, 1);\n        s.input_gpu = l.prev_state_gpu;\n        s.delta_gpu = l.dh_gpu;\n        backward_connected_layer_gpu(wf, s);\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, uf.delta_gpu, 1);\n        s.input_gpu = state.input_gpu;\n        s.delta_gpu = state.delta_gpu;\n        backward_connected_layer_gpu(uf, s);\t\t\t\t\t\t\t\t\t\n\n        copy_gpu(l.outputs*l.batch, l.temp2_gpu, 1, l.temp_gpu, 1);\t\t\t\n        mul_gpu(l.outputs*l.batch, l.f_gpu, 1, l.temp_gpu, 1);\t\t\t\t\n        copy_gpu(l.outputs*l.batch, l.temp_gpu, 1, l.dc_gpu, 1);\t\t\t\t\n\n        state.input_gpu -= l.inputs*l.batch;\n        if (state.delta_gpu) state.delta_gpu -= l.inputs*l.batch;\n        l.output_gpu -= l.outputs*l.batch;\n        l.cell_gpu -= l.outputs*l.batch;\n        l.delta_gpu -= l.outputs*l.batch;\n\n        increment_layer(&wf, -1);\n        increment_layer(&wi, -1);\n        increment_layer(&wg, -1);\n        increment_layer(&wo, -1);\n\n        increment_layer(&uf, -1);\n        increment_layer(&ui, -1);\n        increment_layer(&ug, -1);\n        increment_layer(&uo, -1);\n    }\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/lstm_layer.h",
    "content": "#ifndef LSTM_LAYER_H\n#define LSTM_LAYER_H\n\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n#define USET\n\nlayer make_lstm_layer(int batch, int inputs, int outputs, int steps, int batch_normalize, int adam);\n\nvoid forward_lstm_layer(layer l, network net); \nvoid update_lstm_layer(layer l, update_args a);\n\n#ifdef GPU\nvoid forward_lstm_layer_gpu(layer l, network net);\nvoid backward_lstm_layer_gpu(layer l, network net);\nvoid update_lstm_layer_gpu(layer l, update_args a); \n\n#endif\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/matrix.c",
    "content": "#include \"matrix.h\"\n#include \"utils.h\"\n#include \"blas.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <assert.h>\n#include <math.h>\n\nvoid free_matrix(matrix m)\n{\n    int i;\n    for(i = 0; i < m.rows; ++i) free(m.vals[i]);\n    free(m.vals);\n}\n\nfloat matrix_topk_accuracy(matrix truth, matrix guess, int k)\n{\n    int *indexes = calloc(k, sizeof(int));\n    int n = truth.cols;\n    int i,j;\n    int correct = 0;\n    for(i = 0; i < truth.rows; ++i){\n        top_k(guess.vals[i], n, k, indexes);\n        for(j = 0; j < k; ++j){\n            int class = indexes[j];\n            if(truth.vals[i][class]){\n                ++correct;\n                break;\n            }\n        }\n    }\n    free(indexes);\n    return (float)correct/truth.rows;\n}\n\nvoid scale_matrix(matrix m, float scale)\n{\n    int i,j;\n    for(i = 0; i < m.rows; ++i){\n        for(j = 0; j < m.cols; ++j){\n            m.vals[i][j] *= scale;\n        }\n    }\n}\n\nmatrix resize_matrix(matrix m, int size)\n{\n    int i;\n    if (m.rows == size) return m;\n    if (m.rows < size) {\n        m.vals = realloc(m.vals, size*sizeof(float*));\n        for (i = m.rows; i < size; ++i) {\n            m.vals[i] = calloc(m.cols, sizeof(float));\n        }\n    } else if (m.rows > size) {\n        for (i = size; i < m.rows; ++i) {\n            free(m.vals[i]);\n        }\n        m.vals = realloc(m.vals, size*sizeof(float*));\n    }\n    m.rows = size;\n    return m;\n}\n\nvoid matrix_add_matrix(matrix from, matrix to)\n{\n    assert(from.rows == to.rows && from.cols == to.cols);\n    int i,j;\n    for(i = 0; i < from.rows; ++i){\n        for(j = 0; j < from.cols; ++j){\n            to.vals[i][j] += from.vals[i][j];\n        }\n    }\n}\n\nmatrix copy_matrix(matrix m)\n{\n    matrix c = {0};\n    c.rows = m.rows;\n    c.cols = m.cols;\n    c.vals = calloc(c.rows, sizeof(float *));\n    int i;\n    for(i = 0; i < c.rows; ++i){\n        c.vals[i] = 
calloc(c.cols, sizeof(float));\n        copy_cpu(c.cols, m.vals[i], 1, c.vals[i], 1);\n    }\n    return c;\n}\n\nmatrix make_matrix(int rows, int cols)\n{\n    int i;\n    matrix m;\n    m.rows = rows;\n    m.cols = cols;\n    m.vals = calloc(m.rows, sizeof(float *));\n    for(i = 0; i < m.rows; ++i){\n        m.vals[i] = calloc(m.cols, sizeof(float));\n    }\n    return m;\n}\n\nmatrix hold_out_matrix(matrix *m, int n)\n{\n    int i;\n    matrix h;\n    h.rows = n;\n    h.cols = m->cols;\n    h.vals = calloc(h.rows, sizeof(float *));\n    for(i = 0; i < n; ++i){\n        int index = rand()%m->rows;\n        h.vals[i] = m->vals[index];\n        m->vals[index] = m->vals[--(m->rows)];\n    }\n    return h;\n}\n\nfloat *pop_column(matrix *m, int c)\n{\n    float *col = calloc(m->rows, sizeof(float));\n    int i, j;\n    for(i = 0; i < m->rows; ++i){\n        col[i] = m->vals[i][c];\n        for(j = c; j < m->cols-1; ++j){\n            m->vals[i][j] = m->vals[i][j+1];\n        }\n    }\n    --m->cols;\n    return col;\n}\n\nmatrix csv_to_matrix(char *filename)\n{\n    FILE *fp = fopen(filename, \"r\");\n    if(!fp) file_error(filename);\n\n    matrix m;\n    m.cols = -1;\n\n    char *line;\n\n    int n = 0;\n    int size = 1024;\n    m.vals = calloc(size, sizeof(float*));\n    while((line = fgetl(fp))){\n        if(m.cols == -1) m.cols = count_fields(line);\n        if(n == size){\n            size *= 2;\n            m.vals = realloc(m.vals, size*sizeof(float*));\n        }\n        m.vals[n] = parse_fields(line, m.cols);\n        free(line);\n        ++n;\n    }\n    m.vals = realloc(m.vals, n*sizeof(float*));\n    m.rows = n;\n    return m;\n}\n\nvoid matrix_to_csv(matrix m)\n{\n    int i, j;\n\n    for(i = 0; i < m.rows; ++i){\n        for(j = 0; j < m.cols; ++j){\n            if(j > 0) printf(\",\");\n            printf(\"%.17g\", m.vals[i][j]);\n        }\n        printf(\"\\n\");\n    }\n}\n\nvoid print_matrix(matrix m)\n{\n    int i, j;\n    printf(\"%d X %d 
Matrix:\\n\",m.rows, m.cols);\n    printf(\" __\");\n    for(j = 0; j < 16*m.cols-1; ++j) printf(\" \");\n    printf(\"__ \\n\");\n\n    printf(\"|  \");\n    for(j = 0; j < 16*m.cols-1; ++j) printf(\" \");\n    printf(\"  |\\n\");\n\n    for(i = 0; i < m.rows; ++i){\n        printf(\"|  \");\n        for(j = 0; j < m.cols; ++j){\n            printf(\"%15.7f \", m.vals[i][j]);\n        }\n        printf(\" |\\n\");\n    }\n    printf(\"|__\");\n    for(j = 0; j < 16*m.cols-1; ++j) printf(\" \");\n    printf(\"__|\\n\");\n}\n"
  },
  {
    "path": "lightnet/_darknet/matrix.h",
    "content": "#ifndef MATRIX_H\n#define MATRIX_H\n#include \"darknet.h\"\n\nmatrix copy_matrix(matrix m);\nvoid print_matrix(matrix m);\n\nmatrix hold_out_matrix(matrix *m, int n);\nmatrix resize_matrix(matrix m, int size);\n\nfloat *pop_column(matrix *m, int c);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/maxpool_layer.c",
    "content": "#include \"maxpool_layer.h\"\n#include \"cuda.h\"\n#include <stdio.h>\n\nimage get_maxpool_image(maxpool_layer l)\n{\n    int h = l.out_h;\n    int w = l.out_w;\n    int c = l.c;\n    return float_to_image(w,h,c,l.output);\n}\n\nimage get_maxpool_delta(maxpool_layer l)\n{\n    int h = l.out_h;\n    int w = l.out_w;\n    int c = l.c;\n    return float_to_image(w,h,c,l.delta);\n}\n\nmaxpool_layer make_maxpool_layer(int batch, int h, int w, int c, int size, int stride, int padding)\n{\n    maxpool_layer l = {0};\n    l.type = MAXPOOL;\n    l.batch = batch;\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.pad = padding;\n    l.out_w = (w + 2*padding)/stride;\n    l.out_h = (h + 2*padding)/stride;\n    l.out_c = c;\n    l.outputs = l.out_h * l.out_w * l.out_c;\n    l.inputs = h*w*c;\n    l.size = size;\n    l.stride = stride;\n    int output_size = l.out_h * l.out_w * l.out_c * batch;\n    l.indexes = calloc(output_size, sizeof(int));\n    l.output =  calloc(output_size, sizeof(float));\n    l.delta =   calloc(output_size, sizeof(float));\n    l.forward = forward_maxpool_layer;\n    l.backward = backward_maxpool_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_maxpool_layer_gpu;\n    l.backward_gpu = backward_maxpool_layer_gpu;\n    l.indexes_gpu = cuda_make_int_array(0, output_size);\n    l.output_gpu  = cuda_make_array(l.output, output_size);\n    l.delta_gpu   = cuda_make_array(l.delta, output_size);\n    #endif\n    //fprintf(stderr, \"max          %d x %d / %d  %4d x%4d x%4d   ->  %4d x%4d x%4d\\n\", size, size, stride, w, h, c, l.out_w, l.out_h, l.out_c);\n    return l;\n}\n\nvoid resize_maxpool_layer(maxpool_layer *l, int w, int h)\n{\n    l->h = h;\n    l->w = w;\n    l->inputs = h*w*l->c;\n\n    l->out_w = (w + 2*l->pad)/l->stride;\n    l->out_h = (h + 2*l->pad)/l->stride;\n    l->outputs = l->out_w * l->out_h * l->c;\n    int output_size = l->outputs * l->batch;\n\n    l->indexes = realloc(l->indexes, output_size * sizeof(int));\n    
l->output = realloc(l->output, output_size * sizeof(float));\n    l->delta = realloc(l->delta, output_size * sizeof(float));\n\n    #ifdef GPU\n    cuda_free((float *)l->indexes_gpu);\n    cuda_free(l->output_gpu);\n    cuda_free(l->delta_gpu);\n    l->indexes_gpu = cuda_make_int_array(0, output_size);\n    l->output_gpu  = cuda_make_array(l->output, output_size);\n    l->delta_gpu   = cuda_make_array(l->delta,  output_size);\n    #endif\n}\n\nvoid forward_maxpool_layer(const maxpool_layer l, network net)\n{\n    int b,i,j,k,m,n;\n    int w_offset = -l.pad;\n    int h_offset = -l.pad;\n\n    int h = l.out_h;\n    int w = l.out_w;\n    int c = l.c;\n\n    for(b = 0; b < l.batch; ++b){\n        for(k = 0; k < c; ++k){\n            for(i = 0; i < h; ++i){\n                for(j = 0; j < w; ++j){\n                    int out_index = j + w*(i + h*(k + c*b));\n                    float max = -FLT_MAX;\n                    int max_i = -1;\n                    for(n = 0; n < l.size; ++n){\n                        for(m = 0; m < l.size; ++m){\n                            int cur_h = h_offset + i*l.stride + n;\n                            int cur_w = w_offset + j*l.stride + m;\n                            int index = cur_w + l.w*(cur_h + l.h*(k + b*l.c));\n                            int valid = (cur_h >= 0 && cur_h < l.h &&\n                                         cur_w >= 0 && cur_w < l.w);\n                            float val = (valid != 0) ? net.input[index] : -FLT_MAX;\n                            max_i = (val > max) ? index : max_i;\n                            max   = (val > max) ? 
val   : max;\n                        }\n                    }\n                    l.output[out_index] = max;\n                    l.indexes[out_index] = max_i;\n                }\n            }\n        }\n    }\n}\n\nvoid backward_maxpool_layer(const maxpool_layer l, network net)\n{\n    int i;\n    int h = l.out_h;\n    int w = l.out_w;\n    int c = l.c;\n    for(i = 0; i < h*w*c*l.batch; ++i){\n        int index = l.indexes[i];\n        net.delta[index] += l.delta[i];\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/maxpool_layer.h",
    "content": "#ifndef MAXPOOL_LAYER_H\n#define MAXPOOL_LAYER_H\n\n#include \"image.h\"\n#include \"cuda.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer maxpool_layer;\n\nimage get_maxpool_image(maxpool_layer l);\nmaxpool_layer make_maxpool_layer(int batch, int h, int w, int c, int size, int stride, int padding);\nvoid resize_maxpool_layer(maxpool_layer *l, int w, int h);\nvoid forward_maxpool_layer(const maxpool_layer l, network net);\nvoid backward_maxpool_layer(const maxpool_layer l, network net);\n\n#ifdef GPU\nvoid forward_maxpool_layer_gpu(maxpool_layer l, network net);\nvoid backward_maxpool_layer_gpu(maxpool_layer l, network net);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/maxpool_layer_kernels.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n\nextern \"C\" {\n#include \"maxpool_layer.h\"\n#include \"cuda.h\"\n}\n\n__global__ void forward_maxpool_layer_kernel(int n, int in_h, int in_w, int in_c, int stride, int size, int pad, float *input, float *output, int *indexes)\n{\n    int h = (in_h + 2*pad)/stride;\n    int w = (in_w + 2*pad)/stride;\n    int c = in_c;\n\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= n) return;\n\n    int j = id % w;\n    id /= w;\n    int i = id % h;\n    id /= h;\n    int k = id % c;\n    id /= c;\n    int b = id;\n\n    int w_offset = -pad;\n    int h_offset = -pad;\n\n    int out_index = j + w*(i + h*(k + c*b));\n    float max = -INFINITY;\n    int max_i = -1;\n    int l, m;\n    for(l = 0; l < size; ++l){\n        for(m = 0; m < size; ++m){\n            int cur_h = h_offset + i*stride + l;\n            int cur_w = w_offset + j*stride + m;\n            int index = cur_w + in_w*(cur_h + in_h*(k + b*in_c));\n            int valid = (cur_h >= 0 && cur_h < in_h &&\n                    cur_w >= 0 && cur_w < in_w);\n            float val = (valid != 0) ? input[index] : -INFINITY;\n            max_i = (val > max) ? index : max_i;\n            max   = (val > max) ? 
val   : max;\n        }\n    }\n    output[out_index] = max;\n    indexes[out_index] = max_i;\n}\n\n__global__ void backward_maxpool_layer_kernel(int n, int in_h, int in_w, int in_c, int stride, int size, int pad, float *delta, float *prev_delta, int *indexes)\n{\n    int h = (in_h + 2*pad)/stride;\n    int w = (in_w + 2*pad)/stride;\n    int c = in_c;\n    int area = (size-1)/stride;\n\n    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;\n    if(id >= n) return;\n\n    int index = id;\n    int j = id % in_w;\n    id /= in_w;\n    int i = id % in_h;\n    id /= in_h;\n    int k = id % in_c;\n    id /= in_c;\n    int b = id;\n\n    int w_offset = -pad;\n    int h_offset = -pad;\n\n    float d = 0;\n    int l, m;\n    for(l = -area; l < area+1; ++l){\n        for(m = -area; m < area+1; ++m){\n            int out_w = (j-w_offset)/stride + m;\n            int out_h = (i-h_offset)/stride + l;\n            int out_index = out_w + w*(out_h + h*(k + c*b));\n            int valid = (out_w >= 0 && out_w < w &&\n                     out_h >= 0 && out_h < h);\n            d += (valid && indexes[out_index] == index) ? 
delta[out_index] : 0;\n        }\n    }\n    prev_delta[index] += d;\n}\n\nextern \"C\" void forward_maxpool_layer_gpu(maxpool_layer layer, network net)\n{\n    int h = layer.out_h;\n    int w = layer.out_w;\n    int c = layer.c;\n\n    size_t n = h*w*c*layer.batch;\n\n    forward_maxpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.h, layer.w, layer.c, layer.stride, layer.size, layer.pad, net.input_gpu, layer.output_gpu, layer.indexes_gpu);\n    check_error(cudaPeekAtLastError());\n}\n\nextern \"C\" void backward_maxpool_layer_gpu(maxpool_layer layer, network net)\n{\n    size_t n = layer.h*layer.w*layer.c*layer.batch;\n\n    backward_maxpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.h, layer.w, layer.c, layer.stride, layer.size, layer.pad, layer.delta_gpu, net.delta_gpu, layer.indexes_gpu);\n    check_error(cudaPeekAtLastError());\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/network.c",
    "content": "#include <stdio.h>\n#include <time.h>\n#include <assert.h>\n#include \"network.h\"\n#include \"image.h\"\n#include \"data.h\"\n#include \"utils.h\"\n#include \"blas.h\"\n\n#include \"crop_layer.h\"\n#include \"connected_layer.h\"\n#include \"gru_layer.h\"\n#include \"rnn_layer.h\"\n#include \"crnn_layer.h\"\n#include \"local_layer.h\"\n#include \"convolutional_layer.h\"\n#include \"activation_layer.h\"\n#include \"detection_layer.h\"\n#include \"region_layer.h\"\n#include \"normalization_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"maxpool_layer.h\"\n#include \"reorg_layer.h\"\n#include \"avgpool_layer.h\"\n#include \"cost_layer.h\"\n#include \"softmax_layer.h\"\n#include \"dropout_layer.h\"\n#include \"route_layer.h\"\n#include \"shortcut_layer.h\"\n#include \"parser.h\"\n#include \"data.h\"\n\nload_args get_base_args(network *net)\n{\n    load_args args = {0};\n    args.w = net->w;\n    args.h = net->h;\n    args.size = net->w;\n\n    args.min = net->min_crop;\n    args.max = net->max_crop;\n    args.angle = net->angle;\n    args.aspect = net->aspect;\n    args.exposure = net->exposure;\n    args.center = net->center;\n    args.saturation = net->saturation;\n    args.hue = net->hue;\n    return args;\n}\n\nnetwork *load_network(char *cfg, char *weights, int clear)\n{\n    network *net = parse_network_cfg(cfg);\n    if(weights && weights[0] != 0){\n        load_weights(net, weights);\n    }\n    if(clear) (*net->seen) = 0;\n    return net;\n}\n\nsize_t get_current_batch(network *net)\n{\n    size_t batch_num = (*net->seen)/(net->batch*net->subdivisions);\n    return batch_num;\n}\n\nvoid reset_network_state(network *net, int b)\n{\n    int i;\n    for (i = 0; i < net->n; ++i) {\n        #ifdef GPU\n        layer l = net->layers[i];\n        if(l.state_gpu){\n            fill_gpu(l.outputs, 0, l.state_gpu + l.outputs*b, 1);\n        }\n        if(l.h_gpu){\n            fill_gpu(l.outputs, 0, l.h_gpu + l.outputs*b, 1);\n        }\n        
#endif\n    }\n}\n\nvoid reset_rnn(network *net)\n{\n    reset_network_state(net, 0);\n}\n\nfloat get_current_rate(network *net)\n{\n    size_t batch_num = get_current_batch(net);\n    int i;\n    float rate;\n    if (batch_num < net->burn_in) return net->learning_rate * pow((float)batch_num / net->burn_in, net->power);\n    switch (net->policy) {\n        case CONSTANT:\n            return net->learning_rate;\n        case STEP:\n            return net->learning_rate * pow(net->scale, batch_num/net->step);\n        case STEPS:\n            rate = net->learning_rate;\n            for(i = 0; i < net->num_steps; ++i){\n                if(net->steps[i] > batch_num) return rate;\n                rate *= net->scales[i];\n            }\n            return rate;\n        case EXP:\n            return net->learning_rate * pow(net->gamma, batch_num);\n        case POLY:\n            return net->learning_rate * pow(1 - (float)batch_num / net->max_batches, net->power);\n        case RANDOM:\n            return net->learning_rate * pow(rand_uniform(0,1), net->power);\n        case SIG:\n            return net->learning_rate * (1./(1.+exp(net->gamma*(batch_num - net->step))));\n        default:\n            fprintf(stderr, \"Policy is weird!\\n\");\n            return net->learning_rate;\n    }\n}\n\nchar *get_layer_string(LAYER_TYPE a)\n{\n    switch(a){\n        case CONVOLUTIONAL:\n            return \"convolutional\";\n        case ACTIVE:\n            return \"activation\";\n        case LOCAL:\n            return \"local\";\n        case DECONVOLUTIONAL:\n            return \"deconvolutional\";\n        case CONNECTED:\n            return \"connected\";\n        case RNN:\n            return \"rnn\";\n        case GRU:\n            return \"gru\";\n        case LSTM:\n\t    return \"lstm\";\n        case CRNN:\n            return \"crnn\";\n        case MAXPOOL:\n            return \"maxpool\";\n        case REORG:\n            return \"reorg\";\n        case AVGPOOL:\n   
         return \"avgpool\";\n        case SOFTMAX:\n            return \"softmax\";\n        case DETECTION:\n            return \"detection\";\n        case REGION:\n            return \"region\";\n        case DROPOUT:\n            return \"dropout\";\n        case CROP:\n            return \"crop\";\n        case COST:\n            return \"cost\";\n        case ROUTE:\n            return \"route\";\n        case SHORTCUT:\n            return \"shortcut\";\n        case NORMALIZATION:\n            return \"normalization\";\n        case BATCHNORM:\n            return \"batchnorm\";\n        default:\n            break;\n    }\n    return \"none\";\n}\n\nnetwork *make_network(int n)\n{\n    network *net = calloc(1, sizeof(network));\n    net->n = n;\n    net->layers = calloc(net->n, sizeof(layer));\n    net->seen = calloc(1, sizeof(size_t));\n    net->t    = calloc(1, sizeof(int));\n    net->cost = calloc(1, sizeof(float));\n    return net;\n}\n\nvoid forward_network(network *netp)\n{\n#ifdef GPU\n    if(netp->gpu_index >= 0){\n        forward_network_gpu(netp);   \n        return;\n    }\n#endif\n    network net = *netp;\n    int i;\n    for(i = 0; i < net.n; ++i){\n        net.index = i;\n        layer l = net.layers[i];\n        if(l.delta){\n            fill_cpu(l.outputs * l.batch, 0, l.delta, 1);\n        }\n        l.forward(l, net);\n        net.input = l.output;\n        if(l.truth) {\n            net.truth = l.output;\n        }\n    }\n    calc_network_cost(netp);\n}\n\nvoid update_network(network *netp)\n{\n#ifdef GPU\n    if(netp->gpu_index >= 0){\n        update_network_gpu(netp);   \n        return;\n    }\n#endif\n    network net = *netp;\n    int i;\n    update_args a = {0};\n    a.batch = net.batch*net.subdivisions;\n    a.learning_rate = get_current_rate(netp);\n    a.momentum = net.momentum;\n    a.decay = net.decay;\n    a.adam = net.adam;\n    a.B1 = net.B1;\n    a.B2 = net.B2;\n    a.eps = net.eps;\n    ++*net.t;\n    a.t = *net.t;\n\n    
for(i = 0; i < net.n; ++i){\n        layer l = net.layers[i];\n        if(l.update){\n            l.update(l, a);\n        }\n    }\n}\n\nvoid calc_network_cost(network *netp)\n{\n    network net = *netp;\n    int i;\n    float sum = 0;\n    int count = 0;\n    for(i = 0; i < net.n; ++i){\n        if(net.layers[i].cost){\n            sum += net.layers[i].cost[0];\n            ++count;\n        }\n    }\n    *net.cost = sum/count;\n}\n\nint get_predicted_class_network(network *net)\n{\n    return max_index(net->output, net->outputs);\n}\n\nvoid backward_network(network *netp)\n{\n#ifdef GPU\n    if(netp->gpu_index >= 0){\n        backward_network_gpu(netp);   \n        return;\n    }\n#endif\n    network net = *netp;\n    int i;\n    network orig = net;\n    for(i = net.n-1; i >= 0; --i){\n        layer l = net.layers[i];\n        if(l.stopbackward) break;\n        if(i == 0){\n            net = orig;\n        }else{\n            layer prev = net.layers[i-1];\n            net.input = prev.output;\n            net.delta = prev.delta;\n        }\n        net.index = i;\n        l.backward(l, net);\n    }\n}\n\nfloat train_network_datum(network *net)\n{\n    *net->seen += net->batch;\n    net->train = 1;\n    forward_network(net);\n    backward_network(net);\n    float error = *net->cost;\n    if(((*net->seen)/net->batch)%net->subdivisions == 0) update_network(net);\n    return error;\n}\n\nfloat train_network_sgd(network *net, data d, int n)\n{\n    int batch = net->batch;\n\n    int i;\n    float sum = 0;\n    for(i = 0; i < n; ++i){\n        get_random_batch(d, batch, net->input, net->truth);\n        float err = train_network_datum(net);\n        sum += err;\n    }\n    return (float)sum/(n*batch);\n}\n\nfloat train_network(network *net, data d)\n{\n    assert(d.X.rows % net->batch == 0);\n    int batch = net->batch;\n    int n = d.X.rows / batch;\n\n    int i;\n    float sum = 0;\n    for(i = 0; i < n; ++i){\n        get_next_batch(d, batch, i*batch, net->input, 
net->truth);\n        float err = train_network_datum(net);\n        sum += err;\n    }\n    return (float)sum/(n*batch);\n}\n\nvoid set_temp_network(network *net, float t)\n{\n    int i;\n    for(i = 0; i < net->n; ++i){\n        net->layers[i].temperature = t;\n    }\n}\n\n\nvoid set_batch_network(network *net, int b)\n{\n    net->batch = b;\n    int i;\n    for(i = 0; i < net->n; ++i){\n        net->layers[i].batch = b;\n#ifdef CUDNN\n        if(net->layers[i].type == CONVOLUTIONAL){\n            cudnn_convolutional_setup(net->layers + i);\n        }\n        if(net->layers[i].type == DECONVOLUTIONAL){\n            layer *l = net->layers + i;\n            cudnnSetTensor4dDescriptor(l->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l->out_c, l->out_h, l->out_w);\n            cudnnSetTensor4dDescriptor(l->normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l->out_c, 1, 1); \n        }\n#endif\n    }\n}\n\nint resize_network(network *net, int w, int h)\n{\n#ifdef GPU\n    cuda_set_device(net->gpu_index);\n    cuda_free(net->workspace);\n#endif\n    int i;\n    //if(w == net->w && h == net->h) return 0;\n    net->w = w;\n    net->h = h;\n    int inputs = 0;\n    size_t workspace_size = 0;\n    //fprintf(stderr, \"Resizing to %d x %d...\\n\", w, h);\n    //fflush(stderr);\n    for (i = 0; i < net->n; ++i){\n        layer l = net->layers[i];\n        if(l.type == CONVOLUTIONAL){\n            resize_convolutional_layer(&l, w, h);\n        }else if(l.type == CROP){\n            resize_crop_layer(&l, w, h);\n        }else if(l.type == MAXPOOL){\n            resize_maxpool_layer(&l, w, h);\n        }else if(l.type == REGION){\n            resize_region_layer(&l, w, h);\n        }else if(l.type == ROUTE){\n            resize_route_layer(&l, net);\n        }else if(l.type == REORG){\n            resize_reorg_layer(&l, w, h);\n        }else if(l.type == AVGPOOL){\n            resize_avgpool_layer(&l, w, h);\n        }else if(l.type == NORMALIZATION){\n            
resize_normalization_layer(&l, w, h);\n        }else if(l.type == COST){\n            resize_cost_layer(&l, inputs);\n        }else{\n            error(\"Cannot resize this type of layer\");\n        }\n        if(l.workspace_size > workspace_size) workspace_size = l.workspace_size;\n        inputs = l.outputs;\n        net->layers[i] = l;\n        w = l.out_w;\n        h = l.out_h;\n        if(l.type == AVGPOOL) break;\n    }\n    layer out = get_network_output_layer(net);\n    net->inputs = net->layers[0].inputs;\n    net->outputs = out.outputs;\n    net->truths = out.outputs;\n    if(net->layers[net->n-1].truths) net->truths = net->layers[net->n-1].truths;\n    net->output = out.output;\n    free(net->input);\n    free(net->truth);\n    net->input = calloc(net->inputs*net->batch, sizeof(float));\n    net->truth = calloc(net->truths*net->batch, sizeof(float));\n#ifdef GPU\n    if(gpu_index >= 0){\n        cuda_free(net->input_gpu);\n        cuda_free(net->truth_gpu);\n        net->input_gpu = cuda_make_array(net->input, net->inputs*net->batch);\n        net->truth_gpu = cuda_make_array(net->truth, net->truths*net->batch);\n        net->workspace = cuda_make_array(0, (workspace_size-1)/sizeof(float)+1);\n    }else {\n        free(net->workspace);\n        net->workspace = calloc(1, workspace_size);\n    }\n#else\n    free(net->workspace);\n    net->workspace = calloc(1, workspace_size);\n#endif\n    //fprintf(stderr, \" Done!\\n\");\n    return 0;\n}\n\nlayer get_network_detection_layer(network *net)\n{\n    int i;\n    for(i = 0; i < net->n; ++i){\n        if(net->layers[i].type == DETECTION){\n            return net->layers[i];\n        }\n    }\n    fprintf(stderr, \"Detection layer not found!!\\n\");\n    layer l = {0};\n    return l;\n}\n\nimage get_network_image_layer(network *net, int i)\n{\n    layer l = net->layers[i];\n#ifdef GPU\n    //cuda_pull_array(l.output_gpu, l.output, l.outputs);\n#endif\n    if (l.out_w && l.out_h && l.out_c){\n        return 
float_to_image(l.out_w, l.out_h, l.out_c, l.output);\n    }\n    image def = {0};\n    return def;\n}\n\nimage get_network_image(network *net)\n{\n    int i;\n    for(i = net->n-1; i >= 0; --i){\n        image m = get_network_image_layer(net, i);\n        if(m.h != 0) return m;\n    }\n    image def = {0};\n    return def;\n}\n\nvoid visualize_network(network *net)\n{\n    image *prev = 0;\n    int i;\n    char buff[256];\n    for(i = 0; i < net->n; ++i){\n        //sprintf(buff, \"Layer %d\", i);\n        layer l = net->layers[i];\n        if(l.type == CONVOLUTIONAL){\n            prev = visualize_convolutional_layer(l, buff, prev);\n        }\n    } \n}\n\nvoid top_predictions(network *net, int k, int *index)\n{\n    top_k(net->output, net->outputs, k, index);\n}\n\n\nfloat *network_predict(network *net, float *input)\n{\n    network orig = *net;\n    net->input = input;\n    net->truth = 0;\n    net->train = 0;\n    net->delta = 0;\n    forward_network(net);\n    float *out = net->output;\n    *net = orig;\n    return out;\n}\n\nint num_boxes(network *net)\n{\n    layer l = net->layers[net->n-1];\n    return l.w*l.h*l.n;\n}\n\nbox *make_boxes(network *net)\n{\n    layer l = net->layers[net->n-1];\n    box *boxes = calloc(l.w*l.h*l.n, sizeof(box));\n    return boxes;\n}\n\nfloat **make_probs(network *net)\n{\n    int j;\n    layer l = net->layers[net->n-1];\n    float **probs = calloc(l.w*l.h*l.n, sizeof(float *));\n    for(j = 0; j < l.w*l.h*l.n; ++j) probs[j] = calloc(l.classes + 1, sizeof(float));\n    return probs;\n}\n\nvoid network_detect(network *net, image im, float thresh, float hier_thresh, float nms, box *boxes, float **probs)\n{\n    network_predict_image(net, im);\n    layer l = net->layers[net->n-1];\n    if(l.type == REGION){\n        get_region_boxes(l, im.w, im.h, net->w, net->h, thresh, probs, boxes, 0, 0, 0, hier_thresh, 0);\n        if (nms) do_nms_sort(boxes, probs, l.w*l.h*l.n, l.classes, nms);\n    }\n}\n\nfloat 
*network_predict_image(network *net, image im)\n{\n    image imr = letterbox_image(im, net->w, net->h);\n    set_batch_network(net, 1);\n    float *p = network_predict(net, imr.data);\n    free_image(imr);\n    return p;\n}\n\nint network_width(network *net){return net->w;}\nint network_height(network *net){return net->h;}\n\nmatrix network_predict_data_multi(network *net, data test, int n)\n{\n    int i,j,b,m;\n    int k = net->outputs;\n    matrix pred = make_matrix(test.X.rows, k);\n    float *X = calloc(net->batch*test.X.cols, sizeof(float));\n    for(i = 0; i < test.X.rows; i += net->batch){\n        for(b = 0; b < net->batch; ++b){\n            if(i+b == test.X.rows) break;\n            memcpy(X+b*test.X.cols, test.X.vals[i+b], test.X.cols*sizeof(float));\n        }\n        for(m = 0; m < n; ++m){\n            float *out = network_predict(net, X);\n            for(b = 0; b < net->batch; ++b){\n                if(i+b == test.X.rows) break;\n                for(j = 0; j < k; ++j){\n                    pred.vals[i+b][j] += out[j+b*k]/n;\n                }\n            }\n        }\n    }\n    free(X);\n    return pred;   \n}\n\nmatrix network_predict_data(network *net, data test)\n{\n    int i,j,b;\n    int k = net->outputs;\n    matrix pred = make_matrix(test.X.rows, k);\n    float *X = calloc(net->batch*test.X.cols, sizeof(float));\n    //printf(\"Batch %d rows %d\\n\", net->batch, test.X.rows);\n    for(i = 0; i < test.X.rows; i += net->batch){\n        for(b = 0; b < net->batch; ++b){\n            if(i+b == test.X.rows) break;\n            memcpy(X+b*test.X.cols, test.X.vals[i+b], test.X.cols*sizeof(float));\n        }\n        float *out = network_predict(net, X);\n        for(b = 0; b < net->batch; ++b){\n            if(i+b == test.X.rows) break;\n            for(j = 0; j < k; ++j){\n                pred.vals[i+b][j] = out[j+b*k];\n                //printf(\"predict %f\\n\", pred.vals[i+b][j]);\n            }\n        }\n    }\n    free(X);\n    return 
pred;   \n}\n\nvoid print_network(network *net)\n{\n    int i,j;\n    for(i = 0; i < net->n; ++i){\n        layer l = net->layers[i];\n        float *output = l.output;\n        int n = l.outputs;\n        float mean = mean_array(output, n);\n        float vari = variance_array(output, n);\n        fprintf(stderr, \"Layer %d - Mean: %f, Variance: %f\\n\",i,mean, vari);\n        if(n > 100) n = 100;\n        for(j = 0; j < n; ++j) fprintf(stderr, \"%f, \", output[j]);\n        if(n == 100)fprintf(stderr,\".....\\n\");\n        fprintf(stderr, \"\\n\");\n    }\n}\n\nvoid compare_networks(network *n1, network *n2, data test)\n{\n    matrix g1 = network_predict_data(n1, test);\n    matrix g2 = network_predict_data(n2, test);\n    int i;\n    int a,b,c,d;\n    a = b = c = d = 0;\n    for(i = 0; i < g1.rows; ++i){\n        int truth = max_index(test.y.vals[i], test.y.cols);\n        int p1 = max_index(g1.vals[i], g1.cols);\n        int p2 = max_index(g2.vals[i], g2.cols);\n        if(p1 == truth){\n            if(p2 == truth) ++d;\n            else ++c;\n        }else{\n            if(p2 == truth) ++b;\n            else ++a;\n        }\n    }\n    //printf(\"%5d %5d\\n%5d %5d\\n\", a, b, c, d);\n    float num = pow((abs(b - c) - 1.), 2.);\n    float den = b + c;\n    //printf(\"%f\\n\", num/den); \n}\n\nfloat network_accuracy(network *net, data d)\n{\n    matrix guess = network_predict_data(net, d);\n    float acc = matrix_topk_accuracy(d.y, guess,1);\n    free_matrix(guess);\n    return acc;\n}\n\nfloat *network_accuracies(network *net, data d, int n)\n{\n    static float acc[2];\n    matrix guess = network_predict_data(net, d);\n    acc[0] = matrix_topk_accuracy(d.y, guess, 1);\n    acc[1] = matrix_topk_accuracy(d.y, guess, n);\n    free_matrix(guess);\n    return acc;\n}\n\nlayer get_network_output_layer(network *net)\n{\n    int i;\n    for(i = net->n - 1; i >= 0; --i){\n        if(net->layers[i].type != COST) break;\n    }\n    return net->layers[i];\n}\n\nfloat 
network_accuracy_multi(network *net, data d, int n)\n{\n    matrix guess = network_predict_data_multi(net, d, n);\n    float acc = matrix_topk_accuracy(d.y, guess,1);\n    free_matrix(guess);\n    return acc;\n}\n\nvoid free_network(network *net)\n{\n    int i;\n    for(i = 0; i < net->n; ++i){\n        free_layer(net->layers[i]);\n    }\n    free(net->layers);\n    if(net->input) free(net->input);\n    if(net->truth) free(net->truth);\n#ifdef GPU\n    if(net->input_gpu) cuda_free(net->input_gpu);\n    if(net->truth_gpu) cuda_free(net->truth_gpu);\n#endif\n    free(net);\n}\n\n// Some day...\n// ^ What the hell is this comment for?\n\n\nlayer network_output_layer(network *net)\n{\n    int i;\n    for(i = net->n - 1; i >= 0; --i){\n        if(net->layers[i].type != COST) break;\n    }\n    return net->layers[i];\n}\n\nint network_inputs(network *net)\n{\n    return net->layers[0].inputs;\n}\n\nint network_outputs(network *net)\n{\n    return network_output_layer(net).outputs;\n}\n\nfloat *network_output(network *net)\n{\n    return network_output_layer(net).output;\n}\n\n#ifdef GPU\n\nvoid forward_network_gpu(network *netp)\n{\n    network net = *netp;\n    cuda_set_device(net.gpu_index);\n    cuda_push_array(net.input_gpu, net.input, net.inputs*net.batch);\n    if(net.truth){\n        cuda_push_array(net.truth_gpu, net.truth, net.truths*net.batch);\n    }\n\n    int i;\n    for(i = 0; i < net.n; ++i){\n        net.index = i;\n        layer l = net.layers[i];\n        if(l.delta_gpu){\n            fill_gpu(l.outputs * l.batch, 0, l.delta_gpu, 1);\n        }\n        l.forward_gpu(l, net);\n        net.input_gpu = l.output_gpu;\n        net.input = l.output;\n        if(l.truth) {\n            net.truth_gpu = l.output_gpu;\n            net.truth = l.output;\n        }\n    }\n    pull_network_output(netp);\n    calc_network_cost(netp);\n}\n\nvoid backward_network_gpu(network *netp)\n{\n    int i;\n    network net = *netp;\n    network orig = net;\n    
cuda_set_device(net.gpu_index);\n    for(i = net.n-1; i >= 0; --i){\n        layer l = net.layers[i];\n        if(l.stopbackward) break;\n        if(i == 0){\n            net = orig;\n        }else{\n            layer prev = net.layers[i-1];\n            net.input = prev.output;\n            net.delta = prev.delta;\n            net.input_gpu = prev.output_gpu;\n            net.delta_gpu = prev.delta_gpu;\n        }\n        net.index = i;\n        l.backward_gpu(l, net);\n    }\n}\n\nvoid update_network_gpu(network *netp)\n{\n    network net = *netp;\n    cuda_set_device(net.gpu_index);\n    int i;\n    update_args a = {0};\n    a.batch = net.batch*net.subdivisions;\n    a.learning_rate = get_current_rate(netp);\n    a.momentum = net.momentum;\n    a.decay = net.decay;\n    a.adam = net.adam;\n    a.B1 = net.B1;\n    a.B2 = net.B2;\n    a.eps = net.eps;\n    ++*net.t;\n    a.t = (*net.t);\n\n    for(i = 0; i < net.n; ++i){\n        layer l = net.layers[i];\n        if(l.update_gpu){\n            l.update_gpu(l, a);\n        }\n    }\n}\n\nvoid harmless_update_network_gpu(network *netp)\n{\n    network net = *netp;\n    cuda_set_device(net.gpu_index);\n    int i;\n    for(i = 0; i < net.n; ++i){\n        layer l = net.layers[i];\n        if(l.weight_updates_gpu) fill_gpu(l.nweights, 0, l.weight_updates_gpu, 1);\n        if(l.bias_updates_gpu) fill_gpu(l.nbiases, 0, l.bias_updates_gpu, 1);\n        if(l.scale_updates_gpu) fill_gpu(l.nbiases, 0, l.scale_updates_gpu, 1);\n    }\n}\n\ntypedef struct {\n    network *net;\n    data d;\n    float *err;\n} train_args;\n\nvoid *train_thread(void *ptr)\n{\n    train_args args = *(train_args*)ptr;\n    free(ptr);\n    cuda_set_device(args.net->gpu_index);\n    *args.err = train_network(args.net, args.d);\n    return 0;\n}\n\npthread_t train_network_in_thread(network *net, data d, float *err)\n{\n    pthread_t thread;\n    train_args *ptr = (train_args *)calloc(1, sizeof(train_args));\n    ptr->net = net;\n    ptr->d = d;\n    
ptr->err = err;\n    if(pthread_create(&thread, 0, train_thread, ptr)) error(\"Thread creation failed\");\n    return thread;\n}\n\nvoid merge_weights(layer l, layer base)\n{\n    if (l.type == CONVOLUTIONAL) {\n        axpy_cpu(l.n, 1, l.bias_updates, 1, base.biases, 1);\n        axpy_cpu(l.nweights, 1, l.weight_updates, 1, base.weights, 1);\n        if (l.scales) {\n            axpy_cpu(l.n, 1, l.scale_updates, 1, base.scales, 1);\n        }\n    } else if(l.type == CONNECTED) {\n        axpy_cpu(l.outputs, 1, l.bias_updates, 1, base.biases, 1);\n        axpy_cpu(l.outputs*l.inputs, 1, l.weight_updates, 1, base.weights, 1);\n    }\n}\n\nvoid scale_weights(layer l, float s)\n{\n    if (l.type == CONVOLUTIONAL) {\n        scal_cpu(l.n, s, l.biases, 1);\n        scal_cpu(l.nweights, s, l.weights, 1);\n        if (l.scales) {\n            scal_cpu(l.n, s, l.scales, 1);\n        }\n    } else if(l.type == CONNECTED) {\n        scal_cpu(l.outputs, s, l.biases, 1);\n        scal_cpu(l.outputs*l.inputs, s, l.weights, 1);\n    }\n}\n\n\nvoid pull_weights(layer l)\n{\n    if(l.type == CONVOLUTIONAL || l.type == DECONVOLUTIONAL){\n        cuda_pull_array(l.biases_gpu, l.bias_updates, l.n);\n        cuda_pull_array(l.weights_gpu, l.weight_updates, l.nweights);\n        if(l.scales) cuda_pull_array(l.scales_gpu, l.scale_updates, l.n);\n    } else if(l.type == CONNECTED){\n        cuda_pull_array(l.biases_gpu, l.bias_updates, l.outputs);\n        cuda_pull_array(l.weights_gpu, l.weight_updates, l.outputs*l.inputs);\n    }\n}\n\nvoid push_weights(layer l)\n{\n    if(l.type == CONVOLUTIONAL || l.type == DECONVOLUTIONAL){\n        cuda_push_array(l.biases_gpu, l.biases, l.n);\n        cuda_push_array(l.weights_gpu, l.weights, l.nweights);\n        if(l.scales) cuda_push_array(l.scales_gpu, l.scales, l.n);\n    } else if(l.type == CONNECTED){\n        cuda_push_array(l.biases_gpu, l.biases, l.outputs);\n        cuda_push_array(l.weights_gpu, l.weights, l.outputs*l.inputs);\n    
}\n}\n\nvoid distribute_weights(layer l, layer base)\n{\n    if (l.type == CONVOLUTIONAL || l.type == DECONVOLUTIONAL) {\n        cuda_push_array(l.biases_gpu, base.biases, l.n);\n        cuda_push_array(l.weights_gpu, base.weights, l.nweights);\n        if (base.scales) cuda_push_array(l.scales_gpu, base.scales, l.n);\n    } else if (l.type == CONNECTED) {\n        cuda_push_array(l.biases_gpu, base.biases, l.outputs);\n        cuda_push_array(l.weights_gpu, base.weights, l.outputs*l.inputs);\n    }\n}\n\n\n/*\n\n   void pull_updates(layer l)\n   {\n   if(l.type == CONVOLUTIONAL){\n   cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.n);\n   cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.nweights);\n   if(l.scale_updates) cuda_pull_array(l.scale_updates_gpu, l.scale_updates, l.n);\n   } else if(l.type == CONNECTED){\n   cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.outputs);\n   cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.outputs*l.inputs);\n   }\n   }\n\n   void push_updates(layer l)\n   {\n   if(l.type == CONVOLUTIONAL){\n   cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.n);\n   cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.nweights);\n   if(l.scale_updates) cuda_push_array(l.scale_updates_gpu, l.scale_updates, l.n);\n   } else if(l.type == CONNECTED){\n   cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.outputs);\n   cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.outputs*l.inputs);\n   }\n   }\n\n   void update_layer(layer l, network net)\n   {\n   int update_batch = net.batch*net.subdivisions;\n   float rate = get_current_rate(net);\n   l.t = get_current_batch(net);\n   if(l.update_gpu){\n   l.update_gpu(l, update_batch, rate*l.learning_rate_scale, net.momentum, net.decay);\n   }\n   }\n   void merge_updates(layer l, layer base)\n   {\n   if (l.type == CONVOLUTIONAL) {\n   axpy_cpu(l.n, 1, l.bias_updates, 1, base.bias_updates, 1);\n   axpy_cpu(l.nweights, 1, l.weight_updates, 1, 
base.weight_updates, 1);\n   if (l.scale_updates) {\n   axpy_cpu(l.n, 1, l.scale_updates, 1, base.scale_updates, 1);\n   }\n   } else if(l.type == CONNECTED) {\n   axpy_cpu(l.outputs, 1, l.bias_updates, 1, base.bias_updates, 1);\n   axpy_cpu(l.outputs*l.inputs, 1, l.weight_updates, 1, base.weight_updates, 1);\n   }\n   }\n\n   void distribute_updates(layer l, layer base)\n   {\n   if(l.type == CONVOLUTIONAL || l.type == DECONVOLUTIONAL){\n   cuda_push_array(l.bias_updates_gpu, base.bias_updates, l.n);\n   cuda_push_array(l.weight_updates_gpu, base.weight_updates, l.nweights);\n   if(base.scale_updates) cuda_push_array(l.scale_updates_gpu, base.scale_updates, l.n);\n   } else if(l.type == CONNECTED){\n   cuda_push_array(l.bias_updates_gpu, base.bias_updates, l.outputs);\n   cuda_push_array(l.weight_updates_gpu, base.weight_updates, l.outputs*l.inputs);\n   }\n   }\n */\n\n/*\n   void sync_layer(network *nets, int n, int j)\n   {\n   int i;\n   network net = nets[0];\n   layer base = net.layers[j];\n   scale_weights(base, 0);\n   for (i = 0; i < n; ++i) {\n   cuda_set_device(nets[i].gpu_index);\n   layer l = nets[i].layers[j];\n   pull_weights(l);\n   merge_weights(l, base);\n   }\n   scale_weights(base, 1./n);\n   for (i = 0; i < n; ++i) {\n   cuda_set_device(nets[i].gpu_index);\n   layer l = nets[i].layers[j];\n   distribute_weights(l, base);\n   }\n   }\n */\n\nvoid sync_layer(network **nets, int n, int j)\n{\n    int i;\n    network *net = nets[0];\n    layer base = net->layers[j];\n    scale_weights(base, 0);\n    for (i = 0; i < n; ++i) {\n        cuda_set_device(nets[i]->gpu_index);\n        layer l = nets[i]->layers[j];\n        pull_weights(l);\n        merge_weights(l, base);\n    }\n    scale_weights(base, 1./n);\n    for (i = 0; i < n; ++i) {\n        cuda_set_device(nets[i]->gpu_index);\n        layer l = nets[i]->layers[j];\n        distribute_weights(l, base);\n    }\n}\n\ntypedef struct{\n    network **nets;\n    int n;\n    int j;\n} 
sync_args;\n\nvoid *sync_layer_thread(void *ptr)\n{\n    sync_args args = *(sync_args*)ptr;\n    sync_layer(args.nets, args.n, args.j);\n    free(ptr);\n    return 0;\n}\n\npthread_t sync_layer_in_thread(network **nets, int n, int j)\n{\n    pthread_t thread;\n    sync_args *ptr = (sync_args *)calloc(1, sizeof(sync_args));\n    ptr->nets = nets;\n    ptr->n = n;\n    ptr->j = j;\n    if(pthread_create(&thread, 0, sync_layer_thread, ptr)) error(\"Thread creation failed\");\n    return thread;\n}\n\nvoid sync_nets(network **nets, int n, int interval)\n{\n    int j;\n    int layers = nets[0]->n;\n    pthread_t *threads = (pthread_t *) calloc(layers, sizeof(pthread_t));\n\n    *(nets[0]->seen) += interval * (n-1) * nets[0]->batch * nets[0]->subdivisions;\n    for (j = 0; j < n; ++j){\n        *(nets[j]->seen) = *(nets[0]->seen);\n    }\n    for (j = 0; j < layers; ++j) {\n        threads[j] = sync_layer_in_thread(nets, n, j);\n    }\n    for (j = 0; j < layers; ++j) {\n        pthread_join(threads[j], 0);\n    }\n    free(threads);\n}\n\nfloat train_networks(network **nets, int n, data d, int interval)\n{\n    int i;\n    int batch = nets[0]->batch;\n    int subdivisions = nets[0]->subdivisions;\n    assert(batch * subdivisions * n == d.X.rows);\n    pthread_t *threads = (pthread_t *) calloc(n, sizeof(pthread_t));\n    float *errors = (float *) calloc(n, sizeof(float));\n\n    float sum = 0;\n    for(i = 0; i < n; ++i){\n        data p = get_data_part(d, i, n);\n        threads[i] = train_network_in_thread(nets[i], p, errors + i);\n    }\n    for(i = 0; i < n; ++i){\n        pthread_join(threads[i], 0);\n        //printf(\"%f\\n\", errors[i]);\n        sum += errors[i];\n    }\n    //cudaDeviceSynchronize();\n    if (get_current_batch(nets[0]) % interval == 0) {\n        printf(\"Syncing... 
\");\n        fflush(stdout);\n        sync_nets(nets, n, interval);\n        printf(\"Done!\\n\");\n    }\n    //cudaDeviceSynchronize();\n    free(threads);\n    free(errors);\n    return (float)sum/(n);\n}\n\nvoid pull_network_output(network *net)\n{\n    layer l = get_network_output_layer(net);\n    cuda_pull_array(l.output_gpu, l.output, l.outputs*l.batch);\n}\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/network.h",
    "content": "// Oh boy, why am I about to do this....\n#ifndef NETWORK_H\n#define NETWORK_H\n#include \"darknet.h\"\n\n#include \"image.h\"\n#include \"layer.h\"\n#include \"data.h\"\n#include \"tree.h\"\n\n\n#ifdef GPU\nvoid pull_network_output(network *net);\n#endif\n\nvoid compare_networks(network *n1, network *n2, data d);\nchar *get_layer_string(LAYER_TYPE a);\n\nnetwork *make_network(int n);\n\n\nfloat network_accuracy_multi(network *net, data d, int n);\nint get_predicted_class_network(network *net);\nvoid print_network(network *net);\nint resize_network(network *net, int w, int h);\nvoid calc_network_cost(network *net);\n\nfloat **make_probs(network *net);\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/normalization_layer.c",
    "content": "#include \"normalization_layer.h\"\n#include \"blas.h\"\n\n#include <stdio.h>\n\nlayer make_normalization_layer(int batch, int w, int h, int c, int size, float alpha, float beta, float kappa)\n{\n    fprintf(stderr, \"Local Response Normalization Layer: %d x %d x %d image, %d size\\n\", w,h,c,size);\n    layer layer = {0};\n    layer.type = NORMALIZATION;\n    layer.batch = batch;\n    layer.h = layer.out_h = h;\n    layer.w = layer.out_w = w;\n    layer.c = layer.out_c = c;\n    layer.kappa = kappa;\n    layer.size = size;\n    layer.alpha = alpha;\n    layer.beta = beta;\n    layer.output = calloc(h * w * c * batch, sizeof(float));\n    layer.delta = calloc(h * w * c * batch, sizeof(float));\n    layer.squared = calloc(h * w * c * batch, sizeof(float));\n    layer.norms = calloc(h * w * c * batch, sizeof(float));\n    layer.inputs = w*h*c;\n    layer.outputs = layer.inputs;\n\n    layer.forward = forward_normalization_layer;\n    layer.backward = backward_normalization_layer;\n    #ifdef GPU\n    layer.forward_gpu = forward_normalization_layer_gpu;\n    layer.backward_gpu = backward_normalization_layer_gpu;\n\n    layer.output_gpu =  cuda_make_array(layer.output, h * w * c * batch);\n    layer.delta_gpu =   cuda_make_array(layer.delta, h * w * c * batch);\n    layer.squared_gpu = cuda_make_array(layer.squared, h * w * c * batch);\n    layer.norms_gpu =   cuda_make_array(layer.norms, h * w * c * batch);\n    #endif\n    return layer;\n}\n\nvoid resize_normalization_layer(layer *layer, int w, int h)\n{\n    int c = layer->c;\n    int batch = layer->batch;\n    layer->h = h;\n    layer->w = w;\n    layer->out_h = h;\n    layer->out_w = w;\n    layer->inputs = w*h*c;\n    layer->outputs = layer->inputs;\n    layer->output = realloc(layer->output, h * w * c * batch * sizeof(float));\n    layer->delta = realloc(layer->delta, h * w * c * batch * sizeof(float));\n    layer->squared = realloc(layer->squared, h * w * c * batch * sizeof(float));\n    
layer->norms = realloc(layer->norms, h * w * c * batch * sizeof(float));\n#ifdef GPU\n    cuda_free(layer->output_gpu);\n    cuda_free(layer->delta_gpu); \n    cuda_free(layer->squared_gpu); \n    cuda_free(layer->norms_gpu);   \n    layer->output_gpu =  cuda_make_array(layer->output, h * w * c * batch);\n    layer->delta_gpu =   cuda_make_array(layer->delta, h * w * c * batch);\n    layer->squared_gpu = cuda_make_array(layer->squared, h * w * c * batch);\n    layer->norms_gpu =   cuda_make_array(layer->norms, h * w * c * batch);\n#endif\n}\n\nvoid forward_normalization_layer(const layer layer, network net)\n{\n    int k,b;\n    int w = layer.w;\n    int h = layer.h;\n    int c = layer.c;\n    scal_cpu(w*h*c*layer.batch, 0, layer.squared, 1);\n\n    for(b = 0; b < layer.batch; ++b){\n        float *squared = layer.squared + w*h*c*b;\n        float *norms   = layer.norms + w*h*c*b;\n        float *input   = net.input + w*h*c*b;\n        pow_cpu(w*h*c, 2, input, 1, squared, 1);\n\n        const_cpu(w*h, layer.kappa, norms, 1);\n        for(k = 0; k < layer.size/2; ++k){\n            axpy_cpu(w*h, layer.alpha, squared + w*h*k, 1, norms, 1);\n        }\n\n        for(k = 1; k < layer.c; ++k){\n            copy_cpu(w*h, norms + w*h*(k-1), 1, norms + w*h*k, 1);\n            int prev = k - ((layer.size-1)/2) - 1;\n            int next = k + (layer.size/2);\n            if(prev >= 0)      axpy_cpu(w*h, -layer.alpha, squared + w*h*prev, 1, norms + w*h*k, 1);\n            if(next < layer.c) axpy_cpu(w*h,  layer.alpha, squared + w*h*next, 1, norms + w*h*k, 1);\n        }\n    }\n    pow_cpu(w*h*c*layer.batch, -layer.beta, layer.norms, 1, layer.output, 1);\n    mul_cpu(w*h*c*layer.batch, net.input, 1, layer.output, 1);\n}\n\nvoid backward_normalization_layer(const layer layer, network net)\n{\n    // TODO This is approximate ;-)\n    // Also this should add into delta instead of overwriting it.\n\n    int w = layer.w;\n    int h = layer.h;\n    int c = layer.c;\n    
pow_cpu(w*h*c*layer.batch, -layer.beta, layer.norms, 1, net.delta, 1);\n    mul_cpu(w*h*c*layer.batch, layer.delta, 1, net.delta, 1);\n}\n\n#ifdef GPU\nvoid forward_normalization_layer_gpu(const layer layer, network net)\n{\n    int k,b;\n    int w = layer.w;\n    int h = layer.h;\n    int c = layer.c;\n    scal_gpu(w*h*c*layer.batch, 0, layer.squared_gpu, 1);\n\n    for(b = 0; b < layer.batch; ++b){\n        float *squared = layer.squared_gpu + w*h*c*b;\n        float *norms   = layer.norms_gpu + w*h*c*b;\n        float *input   = net.input_gpu + w*h*c*b;\n        pow_gpu(w*h*c, 2, input, 1, squared, 1);\n\n        const_gpu(w*h, layer.kappa, norms, 1);\n        for(k = 0; k < layer.size/2; ++k){\n            axpy_gpu(w*h, layer.alpha, squared + w*h*k, 1, norms, 1);\n        }\n\n        for(k = 1; k < layer.c; ++k){\n            copy_gpu(w*h, norms + w*h*(k-1), 1, norms + w*h*k, 1);\n            int prev = k - ((layer.size-1)/2) - 1;\n            int next = k + (layer.size/2);\n            if(prev >= 0)      axpy_gpu(w*h, -layer.alpha, squared + w*h*prev, 1, norms + w*h*k, 1);\n            if(next < layer.c) axpy_gpu(w*h,  layer.alpha, squared + w*h*next, 1, norms + w*h*k, 1);\n        }\n    }\n    pow_gpu(w*h*c*layer.batch, -layer.beta, layer.norms_gpu, 1, layer.output_gpu, 1);\n    mul_gpu(w*h*c*layer.batch, net.input_gpu, 1, layer.output_gpu, 1);\n}\n\nvoid backward_normalization_layer_gpu(const layer layer, network net)\n{\n    // TODO This is approximate ;-)\n\n    int w = layer.w;\n    int h = layer.h;\n    int c = layer.c;\n    pow_gpu(w*h*c*layer.batch, -layer.beta, layer.norms_gpu, 1, net.delta_gpu, 1);\n    mul_gpu(w*h*c*layer.batch, layer.delta_gpu, 1, net.delta_gpu, 1);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/normalization_layer.h",
    "content": "#ifndef NORMALIZATION_LAYER_H\n#define NORMALIZATION_LAYER_H\n\n#include \"image.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_normalization_layer(int batch, int w, int h, int c, int size, float alpha, float beta, float kappa);\nvoid resize_normalization_layer(layer *layer, int w, int h);\nvoid forward_normalization_layer(const layer layer, network net);\nvoid backward_normalization_layer(const layer layer, network net);\nvoid visualize_normalization_layer(layer layer, char *window);\n\n#ifdef GPU\nvoid forward_normalization_layer_gpu(const layer layer, network net);\nvoid backward_normalization_layer_gpu(const layer layer, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/option_list.c",
    "content": "#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include \"option_list.h\"\n#include \"utils.h\"\n\nlist *read_data_cfg(char *filename)\n{\n    FILE *file = fopen(filename, \"r\");\n    if(file == 0) file_error(filename);\n    char *line;\n    int nu = 0;\n    list *options = make_list();\n    while((line=fgetl(file)) != 0){\n        ++ nu;\n        strip(line);\n        switch(line[0]){\n            case '\\0':\n            case '#':\n            case ';':\n                free(line);\n                break;\n            default:\n                if(!read_option(line, options)){\n                    fprintf(stderr, \"Config file error line %d, could not parse: %s\\n\", nu, line);\n                    free(line);\n                }\n                break;\n        }\n    }\n    fclose(file);\n    return options;\n}\n\nmetadata get_metadata(char *file)\n{\n    metadata m = {0};\n    list *options = read_data_cfg(file);\n\n    char *name_list = option_find_str(options, \"names\", 0);\n    if(!name_list) name_list = option_find_str(options, \"labels\", 0);\n    if(!name_list) {\n        fprintf(stderr, \"No names or labels found\\n\");\n    } else {\n        m.names = get_labels(name_list);\n    }\n    m.classes = option_find_int(options, \"classes\", 2);\n    free_list(options);\n    return m;\n}\n\nint read_option(char *s, list *options)\n{\n    size_t i;\n    size_t len = strlen(s);\n    char *val = 0;\n    for(i = 0; i < len; ++i){\n        if(s[i] == '='){\n            s[i] = '\\0';\n            val = s+i+1;\n            break;\n        }\n    }\n    if(i == len-1) return 0;\n    char *key = s;\n    option_insert(options, key, val);\n    return 1;\n}\n\nvoid option_insert(list *l, char *key, char *val)\n{\n    kvp *p = malloc(sizeof(kvp));\n    p->key = key;\n    p->val = val;\n    p->used = 0;\n    list_insert(l, p);\n}\n\nvoid option_unused(list *l)\n{\n    node *n = l->front;\n    while(n){\n        kvp *p = (kvp *)n->val;\n        
if(!p->used){\n            fprintf(stderr, \"Unused field: '%s = %s'\\n\", p->key, p->val);\n        }\n        n = n->next;\n    }\n}\n\nchar *option_find(list *l, char *key)\n{\n    node *n = l->front;\n    while(n){\n        kvp *p = (kvp *)n->val;\n        if(strcmp(p->key, key) == 0){\n            p->used = 1;\n            return p->val;\n        }\n        n = n->next;\n    }\n    return 0;\n}\nchar *option_find_str(list *l, char *key, char *def)\n{\n    char *v = option_find(l, key);\n    if(v) return v;\n    //if(def) fprintf(stderr, \"%s: Using default '%s'\\n\", key, def);\n    return def;\n}\n\nint option_find_int(list *l, char *key, int def)\n{\n    char *v = option_find(l, key);\n    if(v) return atoi(v);\n    //fprintf(stderr, \"%s: Using default '%d'\\n\", key, def);\n    return def;\n}\n\nint option_find_int_quiet(list *l, char *key, int def)\n{\n    char *v = option_find(l, key);\n    if(v) return atoi(v);\n    return def;\n}\n\nfloat option_find_float_quiet(list *l, char *key, float def)\n{\n    char *v = option_find(l, key);\n    if(v) return atof(v);\n    return def;\n}\n\nfloat option_find_float(list *l, char *key, float def)\n{\n    char *v = option_find(l, key);\n    if(v) return atof(v);\n    //fprintf(stderr, \"%s: Using default '%lf'\\n\", key, def);\n    return def;\n}\n"
  },
  {
    "path": "lightnet/_darknet/option_list.h",
    "content": "#ifndef OPTION_LIST_H\n#define OPTION_LIST_H\n#include \"list.h\"\n\ntypedef struct{\n    char *key;\n    char *val;\n    int used;\n} kvp;\n\n\nint read_option(char *s, list *options);\nvoid option_insert(list *l, char *key, char *val);\nchar *option_find(list *l, char *key);\nint option_find_int_quiet(list *l, char *key, int def);\nfloat option_find_float(list *l, char *key, float def);\nfloat option_find_float_quiet(list *l, char *key, float def);\nvoid option_unused(list *l);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/parser.c",
    "content": "#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <assert.h>\n\n#include \"activation_layer.h\"\n#include \"activations.h\"\n#include \"avgpool_layer.h\"\n#include \"batchnorm_layer.h\"\n#include \"blas.h\"\n#include \"connected_layer.h\"\n#include \"deconvolutional_layer.h\"\n#include \"convolutional_layer.h\"\n#include \"cost_layer.h\"\n#include \"crnn_layer.h\"\n#include \"crop_layer.h\"\n#include \"detection_layer.h\"\n#include \"dropout_layer.h\"\n#include \"gru_layer.h\"\n#include \"list.h\"\n#include \"local_layer.h\"\n#include \"maxpool_layer.h\"\n#include \"normalization_layer.h\"\n#include \"option_list.h\"\n#include \"parser.h\"\n#include \"region_layer.h\"\n#include \"reorg_layer.h\"\n#include \"rnn_layer.h\"\n#include \"route_layer.h\"\n#include \"shortcut_layer.h\"\n#include \"softmax_layer.h\"\n#include \"lstm_layer.h\"\n#include \"utils.h\"\n\ntypedef struct{\n    char *type;\n    list *options;\n}section;\n\nlist *read_cfg(char *filename);\n\nLAYER_TYPE string_to_layer_type(char * type)\n{\n\n    if (strcmp(type, \"[shortcut]\")==0) return SHORTCUT;\n    if (strcmp(type, \"[crop]\")==0) return CROP;\n    if (strcmp(type, \"[cost]\")==0) return COST;\n    if (strcmp(type, \"[detection]\")==0) return DETECTION;\n    if (strcmp(type, \"[region]\")==0) return REGION;\n    if (strcmp(type, \"[local]\")==0) return LOCAL;\n    if (strcmp(type, \"[conv]\")==0\n            || strcmp(type, \"[convolutional]\")==0) return CONVOLUTIONAL;\n    if (strcmp(type, \"[deconv]\")==0\n            || strcmp(type, \"[deconvolutional]\")==0) return DECONVOLUTIONAL;\n    if (strcmp(type, \"[activation]\")==0) return ACTIVE;\n    if (strcmp(type, \"[net]\")==0\n            || strcmp(type, \"[network]\")==0) return NETWORK;\n    if (strcmp(type, \"[crnn]\")==0) return CRNN;\n    if (strcmp(type, \"[gru]\")==0) return GRU;\n    if (strcmp(type, \"[lstm]\") == 0) return LSTM;\n    if (strcmp(type, \"[rnn]\")==0) return RNN;\n    if 
(strcmp(type, \"[conn]\")==0\n            || strcmp(type, \"[connected]\")==0) return CONNECTED;\n    if (strcmp(type, \"[max]\")==0\n            || strcmp(type, \"[maxpool]\")==0) return MAXPOOL;\n    if (strcmp(type, \"[reorg]\")==0) return REORG;\n    if (strcmp(type, \"[avg]\")==0\n            || strcmp(type, \"[avgpool]\")==0) return AVGPOOL;\n    if (strcmp(type, \"[dropout]\")==0) return DROPOUT;\n    if (strcmp(type, \"[lrn]\")==0\n            || strcmp(type, \"[normalization]\")==0) return NORMALIZATION;\n    if (strcmp(type, \"[batchnorm]\")==0) return BATCHNORM;\n    if (strcmp(type, \"[soft]\")==0\n            || strcmp(type, \"[softmax]\")==0) return SOFTMAX;\n    if (strcmp(type, \"[route]\")==0) return ROUTE;\n    return BLANK;\n}\n\nvoid free_section(section *s)\n{\n    free(s->type);\n    node *n = s->options->front;\n    while(n){\n        kvp *pair = (kvp *)n->val;\n        free(pair->key);\n        free(pair);\n        node *next = n->next;\n        free(n);\n        n = next;\n    }\n    free(s->options);\n    free(s);\n}\n\nvoid parse_data(char *data, float *a, int n)\n{\n    int i;\n    if(!data) return;\n    char *curr = data;\n    char *next = data;\n    int done = 0;\n    for(i = 0; i < n && !done; ++i){\n        while(*++next !='\\0' && *next != ',');\n        if(*next == '\\0') done = 1;\n        *next = '\\0';\n        sscanf(curr, \"%g\", &a[i]);\n        curr = next+1;\n    }\n}\n\ntypedef struct size_params{\n    int batch;\n    int inputs;\n    int h;\n    int w;\n    int c;\n    int index;\n    int time_steps;\n    network *net;\n} size_params;\n\nlocal_layer parse_local(list *options, size_params params)\n{\n    int n = option_find_int(options, \"filters\",1);\n    int size = option_find_int(options, \"size\",1);\n    int stride = option_find_int(options, \"stride\",1);\n    int pad = option_find_int(options, \"pad\",0);\n    char *activation_s = option_find_str(options, \"activation\", \"logistic\");\n    ACTIVATION activation = 
get_activation(activation_s);\n\n    int batch,h,w,c;\n    h = params.h;\n    w = params.w;\n    c = params.c;\n    batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before local layer must output image.\");\n\n    local_layer layer = make_local_layer(batch,h,w,c,n,size,stride,pad,activation);\n\n    return layer;\n}\n\nlayer parse_deconvolutional(list *options, size_params params)\n{\n    int n = option_find_int(options, \"filters\",1);\n    int size = option_find_int(options, \"size\",1);\n    int stride = option_find_int(options, \"stride\",1);\n\n    char *activation_s = option_find_str(options, \"activation\", \"logistic\");\n    ACTIVATION activation = get_activation(activation_s);\n\n    int batch,h,w,c;\n    h = params.h;\n    w = params.w;\n    c = params.c;\n    batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before deconvolutional layer must output image.\");\n    int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n    int pad = option_find_int_quiet(options, \"pad\",0);\n    int padding = option_find_int_quiet(options, \"padding\",0);\n    if(pad) padding = size/2;\n\n    layer l = make_deconvolutional_layer(batch,h,w,c,n,size,stride,padding, activation, batch_normalize, params.net->adam);\n\n    return l;\n}\n\n\nconvolutional_layer parse_convolutional(list *options, size_params params)\n{\n    int n = option_find_int(options, \"filters\",1);\n    int size = option_find_int(options, \"size\",1);\n    int stride = option_find_int(options, \"stride\",1);\n    int pad = option_find_int_quiet(options, \"pad\",0);\n    int padding = option_find_int_quiet(options, \"padding\",0);\n    int groups = option_find_int_quiet(options, \"groups\", 1);\n    if(pad) padding = size/2;\n\n    char *activation_s = option_find_str(options, \"activation\", \"logistic\");\n    ACTIVATION activation = get_activation(activation_s);\n\n    int batch,h,w,c;\n    h = params.h;\n    w = params.w;\n    c = params.c;\n    
batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before convolutional layer must output image.\");\n    int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n    int binary = option_find_int_quiet(options, \"binary\", 0);\n    int xnor = option_find_int_quiet(options, \"xnor\", 0);\n\n    convolutional_layer layer = make_convolutional_layer(batch,h,w,c,n,groups,size,stride,padding,activation, batch_normalize, binary, xnor, params.net->adam);\n    layer.flipped = option_find_int_quiet(options, \"flipped\", 0);\n    layer.dot = option_find_float_quiet(options, \"dot\", 0);\n\n    return layer;\n}\n\nlayer parse_crnn(list *options, size_params params)\n{\n    int output_filters = option_find_int(options, \"output_filters\",1);\n    int hidden_filters = option_find_int(options, \"hidden_filters\",1);\n    char *activation_s = option_find_str(options, \"activation\", \"logistic\");\n    ACTIVATION activation = get_activation(activation_s);\n    int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n\n    layer l = make_crnn_layer(params.batch, params.w, params.h, params.c, hidden_filters, output_filters, params.time_steps, activation, batch_normalize);\n\n    l.shortcut = option_find_int_quiet(options, \"shortcut\", 0);\n\n    return l;\n}\n\nlayer parse_rnn(list *options, size_params params)\n{\n    int output = option_find_int(options, \"output\",1);\n    char *activation_s = option_find_str(options, \"activation\", \"logistic\");\n    ACTIVATION activation = get_activation(activation_s);\n    int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n\n    layer l = make_rnn_layer(params.batch, params.inputs, output, params.time_steps, activation, batch_normalize, params.net->adam);\n\n    l.shortcut = option_find_int_quiet(options, \"shortcut\", 0);\n\n    return l;\n}\n\nlayer parse_gru(list *options, size_params params)\n{\n    int output = option_find_int(options, \"output\",1);\n    
int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n\n    layer l = make_gru_layer(params.batch, params.inputs, output, params.time_steps, batch_normalize, params.net->adam);\n    l.tanh = option_find_int_quiet(options, \"tanh\", 0);\n\n    return l;\n}\n\nlayer parse_lstm(list *options, size_params params)\n{\n    int output = option_find_int(options, \"output\", 1);\n    int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n\n    layer l = make_lstm_layer(params.batch, params.inputs, output, params.time_steps, batch_normalize, params.net->adam);\n\n    return l;\n}\n\nlayer parse_connected(list *options, size_params params)\n{\n    int output = option_find_int(options, \"output\",1);\n    char *activation_s = option_find_str(options, \"activation\", \"logistic\");\n    ACTIVATION activation = get_activation(activation_s);\n    int batch_normalize = option_find_int_quiet(options, \"batch_normalize\", 0);\n\n    layer l = make_connected_layer(params.batch, params.inputs, output, activation, batch_normalize, params.net->adam);\n    return l;\n}\n\nsoftmax_layer parse_softmax(list *options, size_params params)\n{\n    int groups = option_find_int_quiet(options, \"groups\",1);\n    softmax_layer layer = make_softmax_layer(params.batch, params.inputs, groups);\n    layer.temperature = option_find_float_quiet(options, \"temperature\", 1);\n    char *tree_file = option_find_str(options, \"tree\", 0);\n    if (tree_file) layer.softmax_tree = read_tree(tree_file);\n    layer.w = params.w;\n    layer.h = params.h;\n    layer.c = params.c;\n    layer.spatial = option_find_float_quiet(options, \"spatial\", 0);\n    return layer;\n}\n\nlayer parse_region(list *options, size_params params)\n{\n    int coords = option_find_int(options, \"coords\", 4);\n    int classes = option_find_int(options, \"classes\", 20);\n    int num = option_find_int(options, \"num\", 1);\n\n    layer l = make_region_layer(params.batch, params.w, 
params.h, num, classes, coords);\n    assert(l.outputs == params.inputs);\n\n    l.log = option_find_int_quiet(options, \"log\", 0);\n    l.sqrt = option_find_int_quiet(options, \"sqrt\", 0);\n\n    l.softmax = option_find_int(options, \"softmax\", 0);\n    l.background = option_find_int_quiet(options, \"background\", 0);\n    l.max_boxes = option_find_int_quiet(options, \"max\",30);\n    l.jitter = option_find_float(options, \"jitter\", .2);\n    l.rescore = option_find_int_quiet(options, \"rescore\",0);\n\n    l.thresh = option_find_float(options, \"thresh\", .5);\n    l.classfix = option_find_int_quiet(options, \"classfix\", 0);\n    l.absolute = option_find_int_quiet(options, \"absolute\", 0);\n    l.random = option_find_int_quiet(options, \"random\", 0);\n\n    l.coord_scale = option_find_float(options, \"coord_scale\", 1);\n    l.object_scale = option_find_float(options, \"object_scale\", 1);\n    l.noobject_scale = option_find_float(options, \"noobject_scale\", 1);\n    l.mask_scale = option_find_float(options, \"mask_scale\", 1);\n    l.class_scale = option_find_float(options, \"class_scale\", 1);\n    l.bias_match = option_find_int_quiet(options, \"bias_match\",0);\n\n    char *tree_file = option_find_str(options, \"tree\", 0);\n    if (tree_file) l.softmax_tree = read_tree(tree_file);\n    char *map_file = option_find_str(options, \"map\", 0);\n    if (map_file) l.map = read_map(map_file);\n\n    char *a = option_find_str(options, \"anchors\", 0);\n    if(a){\n        int len = strlen(a);\n        int n = 1;\n        int i;\n        for(i = 0; i < len; ++i){\n            if (a[i] == ',') ++n;\n        }\n        for(i = 0; i < n; ++i){\n            float bias = atof(a);\n            l.biases[i] = bias;\n            a = strchr(a, ',')+1;\n        }\n    }\n    return l;\n}\ndetection_layer parse_detection(list *options, size_params params)\n{\n    int coords = option_find_int(options, \"coords\", 1);\n    int classes = option_find_int(options, \"classes\", 
1);\n    int rescore = option_find_int(options, \"rescore\", 0);\n    int num = option_find_int(options, \"num\", 1);\n    int side = option_find_int(options, \"side\", 7);\n    detection_layer layer = make_detection_layer(params.batch, params.inputs, num, side, classes, coords, rescore);\n\n    layer.softmax = option_find_int(options, \"softmax\", 0);\n    layer.sqrt = option_find_int(options, \"sqrt\", 0);\n\n    layer.max_boxes = option_find_int_quiet(options, \"max\",30);\n    layer.coord_scale = option_find_float(options, \"coord_scale\", 1);\n    layer.forced = option_find_int(options, \"forced\", 0);\n    layer.object_scale = option_find_float(options, \"object_scale\", 1);\n    layer.noobject_scale = option_find_float(options, \"noobject_scale\", 1);\n    layer.class_scale = option_find_float(options, \"class_scale\", 1);\n    layer.jitter = option_find_float(options, \"jitter\", .2);\n    layer.random = option_find_int_quiet(options, \"random\", 0);\n    layer.reorg = option_find_int_quiet(options, \"reorg\", 0);\n    return layer;\n}\n\ncost_layer parse_cost(list *options, size_params params)\n{\n    char *type_s = option_find_str(options, \"type\", \"sse\");\n    COST_TYPE type = get_cost_type(type_s);\n    float scale = option_find_float_quiet(options, \"scale\",1);\n    cost_layer layer = make_cost_layer(params.batch, params.inputs, type, scale);\n    layer.ratio =  option_find_float_quiet(options, \"ratio\",0);\n    layer.noobject_scale =  option_find_float_quiet(options, \"noobj\", 1);\n    layer.thresh =  option_find_float_quiet(options, \"thresh\",0);\n    return layer;\n}\n\ncrop_layer parse_crop(list *options, size_params params)\n{\n    int crop_height = option_find_int(options, \"crop_height\",1);\n    int crop_width = option_find_int(options, \"crop_width\",1);\n    int flip = option_find_int(options, \"flip\",0);\n    float angle = option_find_float(options, \"angle\",0);\n    float saturation = option_find_float(options, \"saturation\",1);\n 
   float exposure = option_find_float(options, \"exposure\",1);\n\n    int batch,h,w,c;\n    h = params.h;\n    w = params.w;\n    c = params.c;\n    batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before crop layer must output image.\");\n\n    int noadjust = option_find_int_quiet(options, \"noadjust\",0);\n\n    crop_layer l = make_crop_layer(batch,h,w,c,crop_height,crop_width,flip, angle, saturation, exposure);\n    l.shift = option_find_float(options, \"shift\", 0);\n    l.noadjust = noadjust;\n    return l;\n}\n\nlayer parse_reorg(list *options, size_params params)\n{\n    int stride = option_find_int(options, \"stride\",1);\n    int reverse = option_find_int_quiet(options, \"reverse\",0);\n    int flatten = option_find_int_quiet(options, \"flatten\",0);\n    int extra = option_find_int_quiet(options, \"extra\",0);\n\n    int batch,h,w,c;\n    h = params.h;\n    w = params.w;\n    c = params.c;\n    batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before reorg layer must output image.\");\n\n    layer layer = make_reorg_layer(batch,w,h,c,stride,reverse, flatten, extra);\n    return layer;\n}\n\nmaxpool_layer parse_maxpool(list *options, size_params params)\n{\n    int stride = option_find_int(options, \"stride\",1);\n    int size = option_find_int(options, \"size\",stride);\n    int padding = option_find_int_quiet(options, \"padding\", (size-1)/2);\n\n    int batch,h,w,c;\n    h = params.h;\n    w = params.w;\n    c = params.c;\n    batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before maxpool layer must output image.\");\n\n    maxpool_layer layer = make_maxpool_layer(batch,h,w,c,size,stride,padding);\n    return layer;\n}\n\navgpool_layer parse_avgpool(list *options, size_params params)\n{\n    int batch,w,h,c;\n    w = params.w;\n    h = params.h;\n    c = params.c;\n    batch=params.batch;\n    if(!(h && w && c)) error(\"Layer before avgpool layer must output image.\");\n\n    avgpool_layer layer = 
make_avgpool_layer(batch,w,h,c);\n    return layer;\n}\n\ndropout_layer parse_dropout(list *options, size_params params)\n{\n    float probability = option_find_float(options, \"probability\", .5);\n    dropout_layer layer = make_dropout_layer(params.batch, params.inputs, probability);\n    layer.out_w = params.w;\n    layer.out_h = params.h;\n    layer.out_c = params.c;\n    return layer;\n}\n\nlayer parse_normalization(list *options, size_params params)\n{\n    float alpha = option_find_float(options, \"alpha\", .0001);\n    float beta =  option_find_float(options, \"beta\" , .75);\n    float kappa = option_find_float(options, \"kappa\", 1);\n    int size = option_find_int(options, \"size\", 5);\n    layer l = make_normalization_layer(params.batch, params.w, params.h, params.c, size, alpha, beta, kappa);\n    return l;\n}\n\nlayer parse_batchnorm(list *options, size_params params)\n{\n    layer l = make_batchnorm_layer(params.batch, params.w, params.h, params.c);\n    return l;\n}\n\nlayer parse_shortcut(list *options, size_params params, network *net)\n{\n    char *l = option_find(options, \"from\");\n    int index = atoi(l);\n    if(index < 0) index = params.index + index;\n\n    int batch = params.batch;\n    layer from = net->layers[index];\n\n    layer s = make_shortcut_layer(batch, index, params.w, params.h, params.c, from.out_w, from.out_h, from.out_c);\n\n    char *activation_s = option_find_str(options, \"activation\", \"linear\");\n    ACTIVATION activation = get_activation(activation_s);\n    s.activation = activation;\n    return s;\n}\n\n\nlayer parse_activation(list *options, size_params params)\n{\n    char *activation_s = option_find_str(options, \"activation\", \"linear\");\n    ACTIVATION activation = get_activation(activation_s);\n\n    layer l = make_activation_layer(params.batch, params.inputs, activation);\n\n    l.out_h = params.h;\n    l.out_w = params.w;\n    l.out_c = params.c;\n    l.h = params.h;\n    l.w = params.w;\n    l.c = 
params.c;\n\n    return l;\n}\n\nroute_layer parse_route(list *options, size_params params, network *net)\n{\n    char *l = option_find(options, \"layers\");\n    if(!l) error(\"Route Layer must specify input layers\");\n    int len = strlen(l);\n    int n = 1;\n    int i;\n    for(i = 0; i < len; ++i){\n        if (l[i] == ',') ++n;\n    }\n\n    int *layers = calloc(n, sizeof(int));\n    int *sizes = calloc(n, sizeof(int));\n    for(i = 0; i < n; ++i){\n        int index = atoi(l);\n        l = strchr(l, ',')+1;\n        if(index < 0) index = params.index + index;\n        layers[i] = index;\n        sizes[i] = net->layers[index].outputs;\n    }\n    int batch = params.batch;\n\n    route_layer layer = make_route_layer(batch, n, layers, sizes);\n\n    convolutional_layer first = net->layers[layers[0]];\n    layer.out_w = first.out_w;\n    layer.out_h = first.out_h;\n    layer.out_c = first.out_c;\n    for(i = 1; i < n; ++i){\n        int index = layers[i];\n        convolutional_layer next = net->layers[index];\n        if(next.out_w == first.out_w && next.out_h == first.out_h){\n            layer.out_c += next.out_c;\n        }else{\n            layer.out_h = layer.out_w = layer.out_c = 0;\n        }\n    }\n\n    return layer;\n}\n\nlearning_rate_policy get_policy(char *s)\n{\n    if (strcmp(s, \"random\")==0) return RANDOM;\n    if (strcmp(s, \"poly\")==0) return POLY;\n    if (strcmp(s, \"constant\")==0) return CONSTANT;\n    if (strcmp(s, \"step\")==0) return STEP;\n    if (strcmp(s, \"exp\")==0) return EXP;\n    if (strcmp(s, \"sigmoid\")==0) return SIG;\n    if (strcmp(s, \"steps\")==0) return STEPS;\n    fprintf(stderr, \"Couldn't find policy %s, going with constant\\n\", s);\n    return CONSTANT;\n}\n\nvoid parse_net_options(list *options, network *net)\n{\n    net->batch = option_find_int(options, \"batch\",1);\n    net->learning_rate = option_find_float(options, \"learning_rate\", .001);\n    net->momentum = option_find_float(options, \"momentum\", 
.9);\n    net->decay = option_find_float(options, \"decay\", .0001);\n    int subdivs = option_find_int(options, \"subdivisions\",1);\n    net->time_steps = option_find_int_quiet(options, \"time_steps\",1);\n    net->notruth = option_find_int_quiet(options, \"notruth\",0);\n    net->batch /= subdivs;\n    net->batch *= net->time_steps;\n    net->subdivisions = subdivs;\n    net->random = option_find_int_quiet(options, \"random\", 0);\n\n    net->adam = option_find_int_quiet(options, \"adam\", 0);\n    if(net->adam){\n        net->B1 = option_find_float(options, \"B1\", .9);\n        net->B2 = option_find_float(options, \"B2\", .999);\n        net->eps = option_find_float(options, \"eps\", .0000001);\n    }\n\n    net->h = option_find_int_quiet(options, \"height\",0);\n    net->w = option_find_int_quiet(options, \"width\",0);\n    net->c = option_find_int_quiet(options, \"channels\",0);\n    net->inputs = option_find_int_quiet(options, \"inputs\", net->h * net->w * net->c);\n    net->max_crop = option_find_int_quiet(options, \"max_crop\",net->w*2);\n    net->min_crop = option_find_int_quiet(options, \"min_crop\",net->w);\n    net->max_ratio = option_find_float_quiet(options, \"max_ratio\", (float) net->max_crop / net->w);\n    net->min_ratio = option_find_float_quiet(options, \"min_ratio\", (float) net->min_crop / net->w);\n    net->center = option_find_int_quiet(options, \"center\",0);\n\n    net->angle = option_find_float_quiet(options, \"angle\", 0);\n    net->aspect = option_find_float_quiet(options, \"aspect\", 1);\n    net->saturation = option_find_float_quiet(options, \"saturation\", 1);\n    net->exposure = option_find_float_quiet(options, \"exposure\", 1);\n    net->hue = option_find_float_quiet(options, \"hue\", 0);\n\n    if(!net->inputs && !(net->h && net->w && net->c)) error(\"No input parameters supplied\");\n\n    char *policy_s = option_find_str(options, \"policy\", \"constant\");\n    net->policy = get_policy(policy_s);\n    net->burn_in = 
option_find_int_quiet(options, \"burn_in\", 0);\n    net->power = option_find_float_quiet(options, \"power\", 4);\n    if(net->policy == STEP){\n        net->step = option_find_int(options, \"step\", 1);\n        net->scale = option_find_float(options, \"scale\", 1);\n    } else if (net->policy == STEPS){\n        char *l = option_find(options, \"steps\");\n        char *p = option_find(options, \"scales\");\n        if(!l || !p) error(\"STEPS policy must have steps and scales in cfg file\");\n\n        int len = strlen(l);\n        int n = 1;\n        int i;\n        for(i = 0; i < len; ++i){\n            if (l[i] == ',') ++n;\n        }\n        int *steps = calloc(n, sizeof(int));\n        float *scales = calloc(n, sizeof(float));\n        for(i = 0; i < n; ++i){\n            int step    = atoi(l);\n            float scale = atof(p);\n            l = strchr(l, ',')+1;\n            p = strchr(p, ',')+1;\n            steps[i] = step;\n            scales[i] = scale;\n        }\n        net->scales = scales;\n        net->steps = steps;\n        net->num_steps = n;\n    } else if (net->policy == EXP){\n        net->gamma = option_find_float(options, \"gamma\", 1);\n    } else if (net->policy == SIG){\n        net->gamma = option_find_float(options, \"gamma\", 1);\n        net->step = option_find_int(options, \"step\", 1);\n    } else if (net->policy == POLY || net->policy == RANDOM){\n    }\n    net->max_batches = option_find_int(options, \"max_batches\", 0);\n}\n\nint is_network(section *s)\n{\n    return (strcmp(s->type, \"[net]\")==0\n            || strcmp(s->type, \"[network]\")==0);\n}\n\nnetwork *parse_network_cfg(char *filename)\n{\n    list *sections = read_cfg(filename);\n    node *n = sections->front;\n    if(!n) error(\"Config file has no sections\");\n    network *net = make_network(sections->size - 1);\n    net->gpu_index = gpu_index;\n    size_params params;\n\n    section *s = (section *)n->val;\n    list *options = s->options;\n    if(!is_network(s)) 
error(\"First section must be [net] or [network]\");\n    parse_net_options(options, net);\n\n    params.h = net->h;\n    params.w = net->w;\n    params.c = net->c;\n    params.inputs = net->inputs;\n    params.batch = net->batch;\n    params.time_steps = net->time_steps;\n    params.net = net;\n\n    size_t workspace_size = 0;\n    n = n->next;\n    int count = 0;\n    free_section(s);\n    //fprintf(stderr, \"layer     filters    size              input                output\\n\");\n    while(n){\n        params.index = count;\n        //fprintf(stderr, \"%5d \", count);\n        s = (section *)n->val;\n        options = s->options;\n        layer l = {0};\n        LAYER_TYPE lt = string_to_layer_type(s->type);\n        if(lt == CONVOLUTIONAL){\n            l = parse_convolutional(options, params);\n        }else if(lt == DECONVOLUTIONAL){\n            l = parse_deconvolutional(options, params);\n        }else if(lt == LOCAL){\n            l = parse_local(options, params);\n        }else if(lt == ACTIVE){\n            l = parse_activation(options, params);\n        }else if(lt == RNN){\n            l = parse_rnn(options, params);\n        }else if(lt == GRU){\n            l = parse_gru(options, params);\n        }else if (lt == LSTM) {\n            l = parse_lstm(options, params);\n        }else if(lt == CRNN){\n            l = parse_crnn(options, params);\n        }else if(lt == CONNECTED){\n            l = parse_connected(options, params);\n        }else if(lt == CROP){\n            l = parse_crop(options, params);\n        }else if(lt == COST){\n            l = parse_cost(options, params);\n        }else if(lt == REGION){\n            l = parse_region(options, params);\n        }else if(lt == DETECTION){\n            l = parse_detection(options, params);\n        }else if(lt == SOFTMAX){\n            l = parse_softmax(options, params);\n            net->hierarchy = l.softmax_tree;\n        }else if(lt == NORMALIZATION){\n            l = 
parse_normalization(options, params);\n        }else if(lt == BATCHNORM){\n            l = parse_batchnorm(options, params);\n        }else if(lt == MAXPOOL){\n            l = parse_maxpool(options, params);\n        }else if(lt == REORG){\n            l = parse_reorg(options, params);\n        }else if(lt == AVGPOOL){\n            l = parse_avgpool(options, params);\n        }else if(lt == ROUTE){\n            l = parse_route(options, params, net);\n        }else if(lt == SHORTCUT){\n            l = parse_shortcut(options, params, net);\n        }else if(lt == DROPOUT){\n            l = parse_dropout(options, params);\n            l.output = net->layers[count-1].output;\n            l.delta = net->layers[count-1].delta;\n#ifdef GPU\n            l.output_gpu = net->layers[count-1].output_gpu;\n            l.delta_gpu = net->layers[count-1].delta_gpu;\n#endif\n        }else{\n            fprintf(stderr, \"Type not recognized: %s\\n\", s->type);\n        }\n        l.truth = option_find_int_quiet(options, \"truth\", 0);\n        l.onlyforward = option_find_int_quiet(options, \"onlyforward\", 0);\n        l.stopbackward = option_find_int_quiet(options, \"stopbackward\", 0);\n        l.dontload = option_find_int_quiet(options, \"dontload\", 0);\n        l.dontloadscales = option_find_int_quiet(options, \"dontloadscales\", 0);\n        l.learning_rate_scale = option_find_float_quiet(options, \"learning_rate\", 1);\n        l.smooth = option_find_float_quiet(options, \"smooth\", 0);\n        option_unused(options);\n        net->layers[count] = l;\n        if (l.workspace_size > workspace_size) workspace_size = l.workspace_size;\n        free_section(s);\n        n = n->next;\n        ++count;\n        if(n){\n            params.h = l.out_h;\n            params.w = l.out_w;\n            params.c = l.out_c;\n            params.inputs = l.outputs;\n        }\n    }\n    free_list(sections);\n    layer out = get_network_output_layer(net);\n    net->outputs = out.outputs;\n  
  net->truths = out.outputs;\n    if(net->layers[net->n-1].truths) net->truths = net->layers[net->n-1].truths;\n    net->output = out.output;\n    net->input = calloc(net->inputs*net->batch, sizeof(float));\n    net->truth = calloc(net->truths*net->batch, sizeof(float));\n#ifdef GPU\n    net->output_gpu = out.output_gpu;\n    net->input_gpu = cuda_make_array(net->input, net->inputs*net->batch);\n    net->truth_gpu = cuda_make_array(net->truth, net->truths*net->batch);\n#endif\n    if(workspace_size){\n        //printf(\"%ld\\n\", workspace_size);\n#ifdef GPU\n        if(gpu_index >= 0){\n            net->workspace = cuda_make_array(0, (workspace_size-1)/sizeof(float)+1);\n        }else {\n            net->workspace = calloc(1, workspace_size);\n        }\n#else\n        net->workspace = calloc(1, workspace_size);\n#endif\n    }\n    return net;\n}\n\nlist *read_cfg(char *filename)\n{\n    FILE *file = fopen(filename, \"r\");\n    if(file == 0) file_error(filename);\n    char *line;\n    int nu = 0;\n    list *options = make_list();\n    section *current = 0;\n    while((line=fgetl(file)) != 0){\n        ++ nu;\n        strip(line);\n        switch(line[0]){\n            case '[':\n                current = malloc(sizeof(section));\n                list_insert(options, current);\n                current->options = make_list();\n                current->type = line;\n                break;\n            case '\\0':\n            case '#':\n            case ';':\n                free(line);\n                break;\n            default:\n                if(!current || !read_option(line, current->options)){\n                    fprintf(stderr, \"Config file error line %d, could not parse: %s\\n\", nu, line);\n                    free(line);\n                }\n                break;\n        }\n    }\n    fclose(file);\n    return options;\n}\n\nvoid save_convolutional_weights_binary(layer l, FILE *fp)\n{\n#ifdef GPU\n    if(gpu_index >= 0){\n        pull_convolutional_layer(l);\n    
}\n#endif\n    binarize_weights(l.weights, l.n, l.c*l.size*l.size, l.binary_weights);\n    int size = l.c*l.size*l.size;\n    int i, j, k;\n    fwrite(l.biases, sizeof(float), l.n, fp);\n    if (l.batch_normalize){\n        fwrite(l.scales, sizeof(float), l.n, fp);\n        fwrite(l.rolling_mean, sizeof(float), l.n, fp);\n        fwrite(l.rolling_variance, sizeof(float), l.n, fp);\n    }\n    for(i = 0; i < l.n; ++i){\n        float mean = l.binary_weights[i*size];\n        if(mean < 0) mean = -mean;\n        fwrite(&mean, sizeof(float), 1, fp);\n        for(j = 0; j < size/8; ++j){\n            int index = i*size + j*8;\n            unsigned char c = 0;\n            for(k = 0; k < 8; ++k){\n                if (j*8 + k >= size) break;\n                if (l.binary_weights[index + k] > 0) c = (c | 1<<k);\n            }\n            fwrite(&c, sizeof(char), 1, fp);\n        }\n    }\n}\n\nvoid save_convolutional_weights(layer l, FILE *fp)\n{\n    if(l.binary){\n        //save_convolutional_weights_binary(l, fp);\n        //return;\n    }\n#ifdef GPU\n    if(gpu_index >= 0){\n        pull_convolutional_layer(l);\n    }\n#endif\n    int num = l.nweights;\n    fwrite(l.biases, sizeof(float), l.n, fp);\n    if (l.batch_normalize){\n        fwrite(l.scales, sizeof(float), l.n, fp);\n        fwrite(l.rolling_mean, sizeof(float), l.n, fp);\n        fwrite(l.rolling_variance, sizeof(float), l.n, fp);\n    }\n    fwrite(l.weights, sizeof(float), num, fp);\n}\n\nvoid save_batchnorm_weights(layer l, FILE *fp)\n{\n#ifdef GPU\n    if(gpu_index >= 0){\n        pull_batchnorm_layer(l);\n    }\n#endif\n    fwrite(l.scales, sizeof(float), l.c, fp);\n    fwrite(l.rolling_mean, sizeof(float), l.c, fp);\n    fwrite(l.rolling_variance, sizeof(float), l.c, fp);\n}\n\nvoid save_connected_weights(layer l, FILE *fp)\n{\n#ifdef GPU\n    if(gpu_index >= 0){\n        pull_connected_layer(l);\n    }\n#endif\n    fwrite(l.biases, sizeof(float), l.outputs, fp);\n    fwrite(l.weights, 
sizeof(float), l.outputs*l.inputs, fp);\n    if (l.batch_normalize){\n        fwrite(l.scales, sizeof(float), l.outputs, fp);\n        fwrite(l.rolling_mean, sizeof(float), l.outputs, fp);\n        fwrite(l.rolling_variance, sizeof(float), l.outputs, fp);\n    }\n}\n\nvoid save_weights_upto(network *net, char *filename, int cutoff)\n{\n#ifdef GPU\n    if(net->gpu_index >= 0){\n        cuda_set_device(net->gpu_index);\n    }\n#endif\n    //fprintf(stderr, \"Saving weights to %s\\n\", filename);\n    FILE *fp = fopen(filename, \"wb\");\n    if(!fp) file_error(filename);\n\n    int major = 0;\n    int minor = 2;\n    int revision = 0;\n    fwrite(&major, sizeof(int), 1, fp);\n    fwrite(&minor, sizeof(int), 1, fp);\n    fwrite(&revision, sizeof(int), 1, fp);\n    fwrite(net->seen, sizeof(size_t), 1, fp);\n\n    int i;\n    for(i = 0; i < net->n && i < cutoff; ++i){\n        layer l = net->layers[i];\n        if(l.type == CONVOLUTIONAL || l.type == DECONVOLUTIONAL){\n            save_convolutional_weights(l, fp);\n        } if(l.type == CONNECTED){\n            save_connected_weights(l, fp);\n        } if(l.type == BATCHNORM){\n            save_batchnorm_weights(l, fp);\n        } if(l.type == RNN){\n            save_connected_weights(*(l.input_layer), fp);\n            save_connected_weights(*(l.self_layer), fp);\n            save_connected_weights(*(l.output_layer), fp);\n        } if (l.type == LSTM) {\n            save_connected_weights(*(l.wi), fp);\n            save_connected_weights(*(l.wf), fp);\n            save_connected_weights(*(l.wo), fp);\n            save_connected_weights(*(l.wg), fp);\n            save_connected_weights(*(l.ui), fp);\n            save_connected_weights(*(l.uf), fp);\n            save_connected_weights(*(l.uo), fp);\n            save_connected_weights(*(l.ug), fp);\n        } if (l.type == GRU) {\n            if(1){\n                save_connected_weights(*(l.wz), fp);\n                save_connected_weights(*(l.wr), fp);\n              
  save_connected_weights(*(l.wh), fp);\n                save_connected_weights(*(l.uz), fp);\n                save_connected_weights(*(l.ur), fp);\n                save_connected_weights(*(l.uh), fp);\n            }else{\n                save_connected_weights(*(l.reset_layer), fp);\n                save_connected_weights(*(l.update_layer), fp);\n                save_connected_weights(*(l.state_layer), fp);\n            }\n        }  if(l.type == CRNN){\n            save_convolutional_weights(*(l.input_layer), fp);\n            save_convolutional_weights(*(l.self_layer), fp);\n            save_convolutional_weights(*(l.output_layer), fp);\n        } if(l.type == LOCAL){\n#ifdef GPU\n            if(gpu_index >= 0){\n                pull_local_layer(l);\n            }\n#endif\n            int locations = l.out_w*l.out_h;\n            int size = l.size*l.size*l.c*l.n*locations;\n            fwrite(l.biases, sizeof(float), l.outputs, fp);\n            fwrite(l.weights, sizeof(float), size, fp);\n        }\n    }\n    fclose(fp);\n}\nvoid save_weights(network *net, char *filename)\n{\n    save_weights_upto(net, filename, net->n);\n}\n\nvoid transpose_matrix(float *a, int rows, int cols)\n{\n    float *transpose = calloc(rows*cols, sizeof(float));\n    int x, y;\n    for(x = 0; x < rows; ++x){\n        for(y = 0; y < cols; ++y){\n            transpose[y*rows + x] = a[x*cols + y];\n        }\n    }\n    memcpy(a, transpose, rows*cols*sizeof(float));\n    free(transpose);\n}\n\nvoid load_connected_weights(layer l, FILE *fp, int transpose)\n{\n    fread(l.biases, sizeof(float), l.outputs, fp);\n    fread(l.weights, sizeof(float), l.outputs*l.inputs, fp);\n    if(transpose){\n        transpose_matrix(l.weights, l.inputs, l.outputs);\n    }\n    //printf(\"Biases: %f mean %f variance\\n\", mean_array(l.biases, l.outputs), variance_array(l.biases, l.outputs));\n    //printf(\"Weights: %f mean %f variance\\n\", mean_array(l.weights, l.outputs*l.inputs), 
variance_array(l.weights, l.outputs*l.inputs));\n    if (l.batch_normalize && (!l.dontloadscales)){\n        fread(l.scales, sizeof(float), l.outputs, fp);\n        fread(l.rolling_mean, sizeof(float), l.outputs, fp);\n        fread(l.rolling_variance, sizeof(float), l.outputs, fp);\n        //printf(\"Scales: %f mean %f variance\\n\", mean_array(l.scales, l.outputs), variance_array(l.scales, l.outputs));\n        //printf(\"rolling_mean: %f mean %f variance\\n\", mean_array(l.rolling_mean, l.outputs), variance_array(l.rolling_mean, l.outputs));\n        //printf(\"rolling_variance: %f mean %f variance\\n\", mean_array(l.rolling_variance, l.outputs), variance_array(l.rolling_variance, l.outputs));\n    }\n#ifdef GPU\n    if(gpu_index >= 0){\n        push_connected_layer(l);\n    }\n#endif\n}\n\nvoid load_batchnorm_weights(layer l, FILE *fp)\n{\n    fread(l.scales, sizeof(float), l.c, fp);\n    fread(l.rolling_mean, sizeof(float), l.c, fp);\n    fread(l.rolling_variance, sizeof(float), l.c, fp);\n#ifdef GPU\n    if(gpu_index >= 0){\n        push_batchnorm_layer(l);\n    }\n#endif\n}\n\nvoid load_convolutional_weights_binary(layer l, FILE *fp)\n{\n    fread(l.biases, sizeof(float), l.n, fp);\n    if (l.batch_normalize && (!l.dontloadscales)){\n        fread(l.scales, sizeof(float), l.n, fp);\n        fread(l.rolling_mean, sizeof(float), l.n, fp);\n        fread(l.rolling_variance, sizeof(float), l.n, fp);\n    }\n    int size = l.c*l.size*l.size;\n    int i, j, k;\n    for(i = 0; i < l.n; ++i){\n        float mean = 0;\n        fread(&mean, sizeof(float), 1, fp);\n        for(j = 0; j < size/8; ++j){\n            int index = i*size + j*8;\n            unsigned char c = 0;\n            fread(&c, sizeof(char), 1, fp);\n            for(k = 0; k < 8; ++k){\n                if (j*8 + k >= size) break;\n                l.weights[index + k] = (c & 1<<k) ? 
mean : -mean;\n            }\n        }\n    }\n#ifdef GPU\n    if(gpu_index >= 0){\n        push_convolutional_layer(l);\n    }\n#endif\n}\n\nvoid load_convolutional_weights(layer l, FILE *fp)\n{\n    if(l.binary){\n        //load_convolutional_weights_binary(l, fp);\n        //return;\n    }\n    int num = l.nweights;\n    fread(l.biases, sizeof(float), l.n, fp);\n    if (l.batch_normalize && (!l.dontloadscales)){\n        fread(l.scales, sizeof(float), l.n, fp);\n        fread(l.rolling_mean, sizeof(float), l.n, fp);\n        fread(l.rolling_variance, sizeof(float), l.n, fp);\n        if(0){\n            int i;\n            //for(i = 0; i < l.n; ++i){\n            //    printf(\"%g, \", l.rolling_mean[i]);\n            //}\n            //printf(\"\\n\");\n            //for(i = 0; i < l.n; ++i){\n            //    printf(\"%g, \", l.rolling_variance[i]);\n            //}\n            //printf(\"\\n\");\n        }\n        if(0){\n            fill_cpu(l.n, 0, l.rolling_mean, 1);\n            fill_cpu(l.n, 0, l.rolling_variance, 1);\n        }\n        if(0){\n            int i;\n            //for(i = 0; i < l.n; ++i){\n            //    printf(\"%g, \", l.rolling_mean[i]);\n            //}\n            //printf(\"\\n\");\n            //for(i = 0; i < l.n; ++i){\n            //    printf(\"%g, \", l.rolling_variance[i]);\n            //}\n            //printf(\"\\n\");\n        }\n    }\n    fread(l.weights, sizeof(float), num, fp);\n    //if(l.c == 3) scal_cpu(num, 1./256, l.weights, 1);\n    if (l.flipped) {\n        transpose_matrix(l.weights, l.c*l.size*l.size, l.n);\n    }\n    //if (l.binary) binarize_weights(l.weights, l.n, l.c*l.size*l.size, l.weights);\n#ifdef GPU\n    if(gpu_index >= 0){\n        push_convolutional_layer(l);\n    }\n#endif\n}\n\n\nvoid load_weights_upto(network *net, char *filename, int start, int cutoff)\n{\n#ifdef GPU\n    if(net->gpu_index >= 0){\n        cuda_set_device(net->gpu_index);\n    }\n#endif\n    //fprintf(stderr, \"Loading 
weights from %s...\", filename);\n    fflush(stdout);\n    FILE *fp = fopen(filename, \"rb\");\n    if(!fp) file_error(filename);\n\n    int major;\n    int minor;\n    int revision;\n    fread(&major, sizeof(int), 1, fp);\n    fread(&minor, sizeof(int), 1, fp);\n    fread(&revision, sizeof(int), 1, fp);\n    if ((major*10 + minor) >= 2){\n        fread(net->seen, sizeof(size_t), 1, fp);\n    } else {\n        int iseen = 0;\n        fread(&iseen, sizeof(int), 1, fp);\n        *net->seen = iseen;\n    }\n    int transpose = (major > 1000) || (minor > 1000);\n\n    int i;\n    for(i = start; i < net->n && i < cutoff; ++i){\n        layer l = net->layers[i];\n        if (l.dontload) continue;\n        if(l.type == CONVOLUTIONAL || l.type == DECONVOLUTIONAL){\n            load_convolutional_weights(l, fp);\n        }\n        if(l.type == CONNECTED){\n            load_connected_weights(l, fp, transpose);\n        }\n        if(l.type == BATCHNORM){\n            load_batchnorm_weights(l, fp);\n        }\n        if(l.type == CRNN){\n            load_convolutional_weights(*(l.input_layer), fp);\n            load_convolutional_weights(*(l.self_layer), fp);\n            load_convolutional_weights(*(l.output_layer), fp);\n        }\n        if(l.type == RNN){\n            load_connected_weights(*(l.input_layer), fp, transpose);\n            load_connected_weights(*(l.self_layer), fp, transpose);\n            load_connected_weights(*(l.output_layer), fp, transpose);\n        }\n        if (l.type == LSTM) {\n            load_connected_weights(*(l.wi), fp, transpose);\n            load_connected_weights(*(l.wf), fp, transpose);\n            load_connected_weights(*(l.wo), fp, transpose);\n            load_connected_weights(*(l.wg), fp, transpose);\n            load_connected_weights(*(l.ui), fp, transpose);\n            load_connected_weights(*(l.uf), fp, transpose);\n            load_connected_weights(*(l.uo), fp, transpose);\n            load_connected_weights(*(l.ug), fp, 
transpose);\n        }\n        if (l.type == GRU) {\n            if(1){\n                load_connected_weights(*(l.wz), fp, transpose);\n                load_connected_weights(*(l.wr), fp, transpose);\n                load_connected_weights(*(l.wh), fp, transpose);\n                load_connected_weights(*(l.uz), fp, transpose);\n                load_connected_weights(*(l.ur), fp, transpose);\n                load_connected_weights(*(l.uh), fp, transpose);\n            }else{\n                load_connected_weights(*(l.reset_layer), fp, transpose);\n                load_connected_weights(*(l.update_layer), fp, transpose);\n                load_connected_weights(*(l.state_layer), fp, transpose);\n            }\n        }\n        if(l.type == LOCAL){\n            int locations = l.out_w*l.out_h;\n            int size = l.size*l.size*l.c*l.n*locations;\n            fread(l.biases, sizeof(float), l.outputs, fp);\n            fread(l.weights, sizeof(float), size, fp);\n#ifdef GPU\n            if(gpu_index >= 0){\n                push_local_layer(l);\n            }\n#endif\n        }\n    }\n    //fprintf(stderr, \"Done!\\n\");\n    fclose(fp);\n}\n\nvoid load_weights(network *net, char *filename)\n{\n    load_weights_upto(net, filename, 0, net->n);\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/parser.h",
    "content": "#ifndef PARSER_H\n#define PARSER_H\n#include \"darknet.h\"\n#include \"network.h\"\n\nvoid save_network(network net, char *filename);\nvoid save_weights_double(network net, char *filename);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/region_layer.c",
    "content": "#include \"region_layer.h\"\n#include \"activations.h\"\n#include \"blas.h\"\n#include \"box.h\"\n#include \"cuda.h\"\n#include \"utils.h\"\n\n#include <stdio.h>\n#include <assert.h>\n#include <string.h>\n#include <stdlib.h>\n\nlayer make_region_layer(int batch, int w, int h, int n, int classes, int coords)\n{\n    layer l = {0};\n    l.type = REGION;\n\n    l.n = n;\n    l.batch = batch;\n    l.h = h;\n    l.w = w;\n    l.c = n*(classes + coords + 1);\n    l.out_w = l.w;\n    l.out_h = l.h;\n    l.out_c = l.c;\n    l.classes = classes;\n    l.coords = coords;\n    l.cost = calloc(1, sizeof(float));\n    l.biases = calloc(n*2, sizeof(float));\n    l.bias_updates = calloc(n*2, sizeof(float));\n    l.outputs = h*w*n*(classes + coords + 1);\n    l.inputs = l.outputs;\n    l.truths = 30*(l.coords + 1);\n    l.delta = calloc(batch*l.outputs, sizeof(float));\n    l.output = calloc(batch*l.outputs, sizeof(float));\n    int i;\n    for(i = 0; i < n*2; ++i){\n        l.biases[i] = .5;\n    }\n\n    l.forward = forward_region_layer;\n    l.backward = backward_region_layer;\n#ifdef GPU\n    l.forward_gpu = forward_region_layer_gpu;\n    l.backward_gpu = backward_region_layer_gpu;\n    l.output_gpu = cuda_make_array(l.output, batch*l.outputs);\n    l.delta_gpu = cuda_make_array(l.delta, batch*l.outputs);\n#endif\n\n    //fprintf(stderr, \"detection\\n\");\n    srand(0);\n\n    return l;\n}\n\nvoid resize_region_layer(layer *l, int w, int h)\n{\n    l->w = w;\n    l->h = h;\n\n    l->outputs = h*w*l->n*(l->classes + l->coords + 1);\n    l->inputs = l->outputs;\n\n    l->output = realloc(l->output, l->batch*l->outputs*sizeof(float));\n    l->delta = realloc(l->delta, l->batch*l->outputs*sizeof(float));\n\n#ifdef GPU\n    cuda_free(l->delta_gpu);\n    cuda_free(l->output_gpu);\n\n    l->delta_gpu =     cuda_make_array(l->delta, l->batch*l->outputs);\n    l->output_gpu =    cuda_make_array(l->output, l->batch*l->outputs);\n#endif\n}\n\nbox get_region_box(float *x, 
float *biases, int n, int index, int i, int j, int w, int h, int stride)\n{\n    box b;\n    b.x = (i + x[index + 0*stride]) / w;\n    b.y = (j + x[index + 1*stride]) / h;\n    b.w = exp(x[index + 2*stride]) * biases[2*n]   / w;\n    b.h = exp(x[index + 3*stride]) * biases[2*n+1] / h;\n    return b;\n}\n\nfloat delta_region_box(box truth, float *x, float *biases, int n, int index, int i, int j, int w, int h, float *delta, float scale, int stride)\n{\n    box pred = get_region_box(x, biases, n, index, i, j, w, h, stride);\n    float iou = box_iou(pred, truth);\n\n    float tx = (truth.x*w - i);\n    float ty = (truth.y*h - j);\n    float tw = log(truth.w*w / biases[2*n]);\n    float th = log(truth.h*h / biases[2*n + 1]);\n\n    delta[index + 0*stride] = scale * (tx - x[index + 0*stride]);\n    delta[index + 1*stride] = scale * (ty - x[index + 1*stride]);\n    delta[index + 2*stride] = scale * (tw - x[index + 2*stride]);\n    delta[index + 3*stride] = scale * (th - x[index + 3*stride]);\n    return iou;\n}\n\nvoid delta_region_mask(float *truth, float *x, int n, int index, float *delta, int stride, int scale)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        delta[index + i*stride] = scale*(truth[i] - x[index + i*stride]);\n    }\n}\n\n\nvoid delta_region_class(float *output, float *delta, int index, int class, int classes, tree *hier, float scale, int stride, float *avg_cat, int tag)\n{\n    int i, n;\n    if(hier){\n        float pred = 1;\n        while(class >= 0){\n            pred *= output[index + stride*class];\n            int g = hier->group[class];\n            int offset = hier->group_offset[g];\n            for(i = 0; i < hier->group_size[g]; ++i){\n                delta[index + stride*(offset + i)] = scale * (0 - output[index + stride*(offset + i)]);\n            }\n            delta[index + stride*class] = scale * (1 - output[index + stride*class]);\n\n            class = hier->parent[class];\n        }\n        *avg_cat += pred;\n    } else {\n     
   if (delta[index] && tag){\n            delta[index + stride*class] = scale * (1 - output[index + stride*class]);\n            return;\n        }\n        for(n = 0; n < classes; ++n){\n            delta[index + stride*n] = scale * (((n == class)?1 : 0) - output[index + stride*n]);\n            if(n == class) *avg_cat += output[index + stride*n];\n        }\n    }\n}\n\nfloat logit(float x)\n{\n    return log(x/(1.-x));\n}\n\nfloat tisnan(float x)\n{\n    return (x != x);\n}\n\nint entry_index(layer l, int batch, int location, int entry)\n{\n    int n =   location / (l.w*l.h);\n    int loc = location % (l.w*l.h);\n    return batch*l.outputs + n*l.w*l.h*(l.coords+l.classes+1) + entry*l.w*l.h + loc;\n}\n\nvoid forward_region_layer(const layer l, network net)\n{\n    int i,j,b,t,n;\n    memcpy(l.output, net.input, l.outputs*l.batch*sizeof(float));\n\n#ifndef GPU\n    for (b = 0; b < l.batch; ++b){\n        for(n = 0; n < l.n; ++n){\n            int index = entry_index(l, b, n*l.w*l.h, 0);\n            activate_array(l.output + index, 2*l.w*l.h, LOGISTIC);\n            index = entry_index(l, b, n*l.w*l.h, l.coords);\n            if(!l.background) activate_array(l.output + index,   l.w*l.h, LOGISTIC);\n            index = entry_index(l, b, n*l.w*l.h, l.coords + 1);\n            if(!l.softmax && !l.softmax_tree) activate_array(l.output + index, l.classes*l.w*l.h, LOGISTIC);\n        }\n    }\n    if (l.softmax_tree){\n        int i;\n        int count = l.coords + 1;\n        for (i = 0; i < l.softmax_tree->groups; ++i) {\n            int group_size = l.softmax_tree->group_size[i];\n            softmax_cpu(net.input + count, group_size, l.batch, l.inputs, l.n*l.w*l.h, 1, l.n*l.w*l.h, l.temperature, l.output + count);\n            count += group_size;\n        }\n    } else if (l.softmax){\n        int index = entry_index(l, 0, 0, l.coords + !l.background);\n        softmax_cpu(net.input + index, l.classes + l.background, l.batch*l.n, l.inputs/l.n, l.w*l.h, 1, l.w*l.h, 
1, l.output + index);\n    }\n#endif\n\n    memset(l.delta, 0, l.outputs * l.batch * sizeof(float));\n    if(!net.train) return;\n    float avg_iou = 0;\n    float recall = 0;\n    float avg_cat = 0;\n    float avg_obj = 0;\n    float avg_anyobj = 0;\n    int count = 0;\n    int class_count = 0;\n    *(l.cost) = 0;\n    for (b = 0; b < l.batch; ++b) {\n        if(l.softmax_tree){\n            int onlyclass = 0;\n            for(t = 0; t < 30; ++t){\n                box truth = float_to_box(net.truth + t*(l.coords + 1) + b*l.truths, 1);\n                if(!truth.x) break;\n                int class = net.truth[t*(l.coords + 1) + b*l.truths + l.coords];\n                float maxp = 0;\n                int maxi = 0;\n                if(truth.x > 100000 && truth.y > 100000){\n                    for(n = 0; n < l.n*l.w*l.h; ++n){\n                        int class_index = entry_index(l, b, n, l.coords + 1);\n                        int obj_index = entry_index(l, b, n, l.coords);\n                        float scale =  l.output[obj_index];\n                        l.delta[obj_index] = l.noobject_scale * (0 - l.output[obj_index]);\n                        float p = scale*get_hierarchy_probability(l.output + class_index, l.softmax_tree, class, l.w*l.h);\n                        if(p > maxp){\n                            maxp = p;\n                            maxi = n;\n                        }\n                    }\n                    int class_index = entry_index(l, b, maxi, l.coords + 1);\n                    int obj_index = entry_index(l, b, maxi, l.coords);\n                    delta_region_class(l.output, l.delta, class_index, class, l.classes, l.softmax_tree, l.class_scale, l.w*l.h, &avg_cat, !l.softmax);\n                    /* class-only truth: no objectness penalty for the best-matching box */\n                    l.delta[obj_index] = 0;\n                    ++class_count;\n             
       onlyclass = 1;\n                    break;\n                }\n            }\n            if(onlyclass) continue;\n        }\n        for (j = 0; j < l.h; ++j) {\n            for (i = 0; i < l.w; ++i) {\n                for (n = 0; n < l.n; ++n) {\n                    int box_index = entry_index(l, b, n*l.w*l.h + j*l.w + i, 0);\n                    box pred = get_region_box(l.output, l.biases, n, box_index, i, j, l.w, l.h, l.w*l.h);\n                    float best_iou = 0;\n                    for(t = 0; t < 30; ++t){\n                        box truth = float_to_box(net.truth + t*(l.coords + 1) + b*l.truths, 1);\n                        if(!truth.x) break;\n                        float iou = box_iou(pred, truth);\n                        if (iou > best_iou) {\n                            best_iou = iou;\n                        }\n                    }\n                    int obj_index = entry_index(l, b, n*l.w*l.h + j*l.w + i, l.coords);\n                    avg_anyobj += l.output[obj_index];\n                    l.delta[obj_index] = l.noobject_scale * (0 - l.output[obj_index]);\n                    if(l.background) l.delta[obj_index] = l.noobject_scale * (1 - l.output[obj_index]);\n                    if (best_iou > l.thresh) {\n                        l.delta[obj_index] = 0;\n                    }\n\n                    if(*(net.seen) < 12800){\n                        box truth = {0};\n                        truth.x = (i + .5)/l.w;\n                        truth.y = (j + .5)/l.h;\n                        truth.w = l.biases[2*n]/l.w;\n                        truth.h = l.biases[2*n+1]/l.h;\n                        delta_region_box(truth, l.output, l.biases, n, box_index, i, j, l.w, l.h, l.delta, .01, l.w*l.h);\n                    }\n                }\n            }\n        }\n        for(t = 0; t < 30; ++t){\n            box truth = float_to_box(net.truth + t*(l.coords + 1) + b*l.truths, 1);\n\n            if(!truth.x) break;\n            float 
best_iou = 0;\n            int best_n = 0;\n            i = (truth.x * l.w);\n            j = (truth.y * l.h);\n            //printf(\"%d %f %d %f\\n\", i, truth.x*l.w, j, truth.y*l.h);\n            box truth_shift = truth;\n            truth_shift.x = 0;\n            truth_shift.y = 0;\n            //printf(\"index %d %d\\n\",i, j);\n            for(n = 0; n < l.n; ++n){\n                int box_index = entry_index(l, b, n*l.w*l.h + j*l.w + i, 0);\n                box pred = get_region_box(l.output, l.biases, n, box_index, i, j, l.w, l.h, l.w*l.h);\n                if(l.bias_match){\n                    pred.w = l.biases[2*n]/l.w;\n                    pred.h = l.biases[2*n+1]/l.h;\n                }\n                //printf(\"pred: (%f, %f) %f x %f\\n\", pred.x, pred.y, pred.w, pred.h);\n                pred.x = 0;\n                pred.y = 0;\n                float iou = box_iou(pred, truth_shift);\n                if (iou > best_iou){\n                    best_iou = iou;\n                    best_n = n;\n                }\n            }\n            //printf(\"%d %f (%f, %f) %f x %f\\n\", best_n, best_iou, truth.x, truth.y, truth.w, truth.h);\n\n            int box_index = entry_index(l, b, best_n*l.w*l.h + j*l.w + i, 0);\n            float iou = delta_region_box(truth, l.output, l.biases, best_n, box_index, i, j, l.w, l.h, l.delta, l.coord_scale *  (2 - truth.w*truth.h), l.w*l.h);\n            if(l.coords > 4){\n                int mask_index = entry_index(l, b, best_n*l.w*l.h + j*l.w + i, 4);\n                delta_region_mask(net.truth + t*(l.coords + 1) + b*l.truths + 5, l.output, l.coords - 4, mask_index, l.delta, l.w*l.h, l.mask_scale);\n            }\n            if(iou > .5) recall += 1;\n            avg_iou += iou;\n\n            //l.delta[best_index + 4] = iou - l.output[best_index + 4];\n            int obj_index = entry_index(l, b, best_n*l.w*l.h + j*l.w + i, l.coords);\n            avg_obj += l.output[obj_index];\n            l.delta[obj_index] = 
l.object_scale * (1 - l.output[obj_index]);\n            if (l.rescore) {\n                l.delta[obj_index] = l.object_scale * (iou - l.output[obj_index]);\n            }\n            if(l.background){\n                l.delta[obj_index] = l.object_scale * (0 - l.output[obj_index]);\n            }\n\n            int class = net.truth[t*(l.coords + 1) + b*l.truths + l.coords];\n            if (l.map) class = l.map[class];\n            int class_index = entry_index(l, b, best_n*l.w*l.h + j*l.w + i, l.coords + 1);\n            delta_region_class(l.output, l.delta, class_index, class, l.classes, l.softmax_tree, l.class_scale, l.w*l.h, &avg_cat, !l.softmax);\n            ++count;\n            ++class_count;\n        }\n    }\n    //printf(\"\\n\");\n    *(l.cost) = pow(mag_array(l.delta, l.outputs * l.batch), 2);\n    //printf(\"Region Avg IOU: %f, Class: %f, Obj: %f, No Obj: %f, Avg Recall: %f,  count: %d\\n\", avg_iou/count, avg_cat/class_count, avg_obj/count, avg_anyobj/(l.w*l.h*l.n*l.batch), recall/count, count);\n}\n\nvoid backward_region_layer(const layer l, network net)\n{\n    /*\n       int b;\n       int size = l.coords + l.classes + 1;\n       for (b = 0; b < l.batch*l.n; ++b){\n       int index = (b*size + 4)*l.w*l.h;\n       gradient_array(l.output + index, l.w*l.h, LOGISTIC, l.delta + index);\n       }\n       axpy_cpu(l.batch*l.inputs, 1, l.delta, 1, net.delta, 1);\n     */\n}\n\nvoid correct_region_boxes(box *boxes, int n, int w, int h, int netw, int neth, int relative)\n{\n    int i;\n    int new_w=0;\n    int new_h=0;\n    if (((float)netw/w) < ((float)neth/h)) {\n        new_w = netw;\n        new_h = (h * netw)/w;\n    } else {\n        new_h = neth;\n        new_w = (w * neth)/h;\n    }\n    for (i = 0; i < n; ++i){\n        box b = boxes[i];\n        b.x =  (b.x - (netw - new_w)/2./netw) / ((float)new_w/netw); \n        b.y =  (b.y - (neth - new_h)/2./neth) / ((float)new_h/neth); \n        b.w *= (float)netw/new_w;\n        b.h *= 
(float)neth/new_h;\n        if(!relative){\n            b.x *= w;\n            b.w *= w;\n            b.y *= h;\n            b.h *= h;\n        }\n        boxes[i] = b;\n    }\n}\n\nvoid get_region_boxes(layer l, int w, int h, int netw, int neth, float thresh, float **probs, box *boxes, float **masks, int only_objectness, int *map, float tree_thresh, int relative)\n{\n    int i,j,n,z;\n    float *predictions = l.output;\n    if (l.batch == 2) {\n        float *flip = l.output + l.outputs;\n        for (j = 0; j < l.h; ++j) {\n            for (i = 0; i < l.w/2; ++i) {\n                for (n = 0; n < l.n; ++n) {\n                    for(z = 0; z < l.classes + l.coords + 1; ++z){\n                        int i1 = z*l.w*l.h*l.n + n*l.w*l.h + j*l.w + i;\n                        int i2 = z*l.w*l.h*l.n + n*l.w*l.h + j*l.w + (l.w - i - 1);\n                        float swap = flip[i1];\n                        flip[i1] = flip[i2];\n                        flip[i2] = swap;\n                        if(z == 0){\n                            flip[i1] = -flip[i1];\n                            flip[i2] = -flip[i2];\n                        }\n                    }\n                }\n            }\n        }\n        for(i = 0; i < l.outputs; ++i){\n            l.output[i] = (l.output[i] + flip[i])/2.;\n        }\n    }\n    for (i = 0; i < l.w*l.h; ++i){\n        int row = i / l.w;\n        int col = i % l.w;\n        for(n = 0; n < l.n; ++n){\n            int index = n*l.w*l.h + i;\n            for(j = 0; j < l.classes; ++j){\n                probs[index][j] = 0;\n            }\n            int obj_index  = entry_index(l, 0, n*l.w*l.h + i, l.coords);\n            int box_index  = entry_index(l, 0, n*l.w*l.h + i, 0);\n            int mask_index = entry_index(l, 0, n*l.w*l.h + i, 4);\n            float scale = l.background ? 
1 : predictions[obj_index];\n            boxes[index] = get_region_box(predictions, l.biases, n, box_index, col, row, l.w, l.h, l.w*l.h);\n            if(masks){\n                for(j = 0; j < l.coords - 4; ++j){\n                    masks[index][j] = l.output[mask_index + j*l.w*l.h];\n                }\n            }\n\n            int class_index = entry_index(l, 0, n*l.w*l.h + i, l.coords + !l.background);\n            if(l.softmax_tree){\n\n                hierarchy_predictions(predictions + class_index, l.classes, l.softmax_tree, 0, l.w*l.h);\n                if(map){\n                    for(j = 0; j < 200; ++j){\n                        int class_index = entry_index(l, 0, n*l.w*l.h + i, l.coords + 1 + map[j]);\n                        float prob = scale*predictions[class_index];\n                        probs[index][j] = (prob > thresh) ? prob : 0;\n                    }\n                } else {\n                    int j =  hierarchy_top_prediction(predictions + class_index, l.softmax_tree, tree_thresh, l.w*l.h);\n                    probs[index][j] = (scale > thresh) ? scale : 0;\n                    probs[index][l.classes] = scale;\n                }\n            } else {\n                float max = 0;\n                for(j = 0; j < l.classes; ++j){\n                    int class_index = entry_index(l, 0, n*l.w*l.h + i, l.coords + 1 + j);\n                    float prob = scale*predictions[class_index];\n                    probs[index][j] = (prob > thresh) ? 
prob : 0;\n                    if(prob > max) max = prob;\n                    // TODO REMOVE\n                    // if (j == 56 ) probs[index][j] = 0; \n                    /*\n                       if (j != 0) probs[index][j] = 0; \n                       int blacklist[] = {121, 497, 482, 504, 122, 518,481, 418, 542, 491, 914, 478, 120, 510,500};\n                       int bb;\n                       for (bb = 0; bb < sizeof(blacklist)/sizeof(int); ++bb){\n                       if(index == blacklist[bb]) probs[index][j] = 0;\n                       }\n                     */\n                }\n                probs[index][l.classes] = max;\n            }\n            if(only_objectness){\n                probs[index][0] = scale;\n            }\n        }\n    }\n    correct_region_boxes(boxes, l.w*l.h*l.n, w, h, netw, neth, relative);\n}\n\n#ifdef GPU\n\nvoid forward_region_layer_gpu(const layer l, network net)\n{\n    copy_gpu(l.batch*l.inputs, net.input_gpu, 1, l.output_gpu, 1);\n    int b, n;\n    for (b = 0; b < l.batch; ++b){\n        for(n = 0; n < l.n; ++n){\n            int index = entry_index(l, b, n*l.w*l.h, 0);\n            activate_array_gpu(l.output_gpu + index, 2*l.w*l.h, LOGISTIC);\n            if(l.coords > 4){\n                index = entry_index(l, b, n*l.w*l.h, 4);\n                activate_array_gpu(l.output_gpu + index, (l.coords - 4)*l.w*l.h, LOGISTIC);\n            }\n            index = entry_index(l, b, n*l.w*l.h, l.coords);\n            if(!l.background) activate_array_gpu(l.output_gpu + index,   l.w*l.h, LOGISTIC);\n            index = entry_index(l, b, n*l.w*l.h, l.coords + 1);\n            if(!l.softmax && !l.softmax_tree) activate_array_gpu(l.output_gpu + index, l.classes*l.w*l.h, LOGISTIC);\n        }\n    }\n    if (l.softmax_tree){\n        int index = entry_index(l, 0, 0, l.coords + 1);\n        softmax_tree(net.input_gpu + index, l.w*l.h, l.batch*l.n, l.inputs/l.n, 1, l.output_gpu + index, *l.softmax_tree);\n    /*\n        
int mmin = 9000;\n        int mmax = 0;\n        int i;\n        for(i = 0; i < l.softmax_tree->groups; ++i){\n            int group_size = l.softmax_tree->group_size[i];\n            if (group_size < mmin) mmin = group_size;\n            if (group_size > mmax) mmax = group_size;\n        }\n        //printf(\"%d %d %d \\n\", l.softmax_tree->groups, mmin, mmax);\n        */\n        /*\n        // TIMING CODE\n        int zz;\n        int number = 1000;\n        int count = 0;\n        int i;\n        for (i = 0; i < l.softmax_tree->groups; ++i) {\n        int group_size = l.softmax_tree->group_size[i];\n        count += group_size;\n        }\n        printf(\"%d %d\\n\", l.softmax_tree->groups, count);\n        {\n        double then = what_time_is_it_now();\n        for(zz = 0; zz < number; ++zz){\n        int index = entry_index(l, 0, 0, 5);\n        softmax_tree(net.input_gpu + index, l.w*l.h, l.batch*l.n, l.inputs/l.n, 1, l.output_gpu + index, *l.softmax_tree);\n        }\n        cudaDeviceSynchronize();\n        printf(\"Good GPU Timing: %f\\n\", what_time_is_it_now() - then);\n        } \n        {\n        double then = what_time_is_it_now();\n        for(zz = 0; zz < number; ++zz){\n        int i;\n        int count = 5;\n        for (i = 0; i < l.softmax_tree->groups; ++i) {\n        int group_size = l.softmax_tree->group_size[i];\n        int index = entry_index(l, 0, 0, count);\n        softmax_gpu(net.input_gpu + index, group_size, l.batch*l.n, l.inputs/l.n, l.w*l.h, 1, l.w*l.h, 1, l.output_gpu + index);\n        count += group_size;\n        }\n        }\n        cudaDeviceSynchronize();\n        printf(\"Bad GPU Timing: %f\\n\", what_time_is_it_now() - then);\n        }\n        {\n        double then = what_time_is_it_now();\n        for(zz = 0; zz < number; ++zz){\n        int i;\n        int count = 5;\n        for (i = 0; i < l.softmax_tree->groups; ++i) {\n        int group_size = l.softmax_tree->group_size[i];\n        softmax_cpu(net.input + 
count, group_size, l.batch, l.inputs, l.n*l.w*l.h, 1, l.n*l.w*l.h, l.temperature, l.output + count);\n        count += group_size;\n        }\n        }\n        cudaDeviceSynchronize();\n        printf(\"CPU Timing: %f\\n\", what_time_is_it_now() - then);\n        }\n         */\n        /*\n           int i;\n           int count = 5;\n           for (i = 0; i < l.softmax_tree->groups; ++i) {\n           int group_size = l.softmax_tree->group_size[i];\n           int index = entry_index(l, 0, 0, count);\n           softmax_gpu(net.input_gpu + index, group_size, l.batch*l.n, l.inputs/l.n, l.w*l.h, 1, l.w*l.h, 1, l.output_gpu + index);\n           count += group_size;\n           }\n         */\n    } else if (l.softmax) {\n        int index = entry_index(l, 0, 0, l.coords + !l.background);\n        //printf(\"%d\\n\", index);\n        softmax_gpu(net.input_gpu + index, l.classes + l.background, l.batch*l.n, l.inputs/l.n, l.w*l.h, 1, l.w*l.h, 1, l.output_gpu + index);\n    }\n    if(!net.train || l.onlyforward){\n        cuda_pull_array(l.output_gpu, l.output, l.batch*l.outputs);\n        return;\n    }\n\n    cuda_pull_array(l.output_gpu, net.input, l.batch*l.inputs);\n    forward_region_layer(l, net);\n    //cuda_push_array(l.output_gpu, l.output, l.batch*l.outputs);\n    if(!net.train) return;\n    cuda_push_array(l.delta_gpu, l.delta, l.batch*l.outputs);\n}\n\nvoid backward_region_layer_gpu(const layer l, network net)\n{\n    int b, n;\n    for (b = 0; b < l.batch; ++b){\n        for(n = 0; n < l.n; ++n){\n            int index = entry_index(l, b, n*l.w*l.h, 0);\n            gradient_array_gpu(l.output_gpu + index, 2*l.w*l.h, LOGISTIC, l.delta_gpu + index);\n            if(l.coords > 4){\n                index = entry_index(l, b, n*l.w*l.h, 4);\n                gradient_array_gpu(l.output_gpu + index, (l.coords - 4)*l.w*l.h, LOGISTIC, l.delta_gpu + index);\n            }\n            index = entry_index(l, b, n*l.w*l.h, l.coords);\n            if(!l.background) 
gradient_array_gpu(l.output_gpu + index,   l.w*l.h, LOGISTIC, l.delta_gpu + index);\n        }\n    }\n    axpy_gpu(l.batch*l.inputs, 1, l.delta_gpu, 1, net.delta_gpu, 1);\n}\n#endif\n\nvoid zero_objectness(layer l)\n{\n    int i, n;\n    for (i = 0; i < l.w*l.h; ++i){\n        for(n = 0; n < l.n; ++n){\n            int obj_index = entry_index(l, 0, n*l.w*l.h + i, l.coords);\n            l.output[obj_index] = 0;\n        }\n    }\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/region_layer.h",
    "content": "#ifndef REGION_LAYER_H\n#define REGION_LAYER_H\n\n#include \"darknet.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_region_layer(int batch, int h, int w, int n, int classes, int coords);\nvoid forward_region_layer(const layer l, network net);\nvoid backward_region_layer(const layer l, network net);\nvoid resize_region_layer(layer *l, int w, int h);\n\n#ifdef GPU\nvoid forward_region_layer_gpu(const layer l, network net);\nvoid backward_region_layer_gpu(layer l, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/reorg_layer.c",
    "content": "#include \"reorg_layer.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n\n#include <stdio.h>\n\n\nlayer make_reorg_layer(int batch, int w, int h, int c, int stride, int reverse, int flatten, int extra)\n{\n    layer l = {0};\n    l.type = REORG;\n    l.batch = batch;\n    l.stride = stride;\n    l.extra = extra;\n    l.h = h;\n    l.w = w;\n    l.c = c;\n    l.flatten = flatten;\n    if(reverse){\n        l.out_w = w*stride;\n        l.out_h = h*stride;\n        l.out_c = c/(stride*stride);\n    }else{\n        l.out_w = w/stride;\n        l.out_h = h/stride;\n        l.out_c = c*(stride*stride);\n    }\n    l.reverse = reverse;\n\n    l.outputs = l.out_h * l.out_w * l.out_c;\n    l.inputs = h*w*c;\n    if(l.extra){\n        l.out_w = l.out_h = l.out_c = 0;\n        l.outputs = l.inputs + l.extra;\n    }\n\n    int output_size = l.outputs * batch;\n    l.output =  calloc(output_size, sizeof(float));\n    l.delta =   calloc(output_size, sizeof(float));\n\n    l.forward = forward_reorg_layer;\n    l.backward = backward_reorg_layer;\n#ifdef GPU\n    l.forward_gpu = forward_reorg_layer_gpu;\n    l.backward_gpu = backward_reorg_layer_gpu;\n\n    l.output_gpu  = cuda_make_array(l.output, output_size);\n    l.delta_gpu   = cuda_make_array(l.delta, output_size);\n#endif\n    return l;\n}\n\nvoid resize_reorg_layer(layer *l, int w, int h)\n{\n    int stride = l->stride;\n    int c = l->c;\n\n    l->h = h;\n    l->w = w;\n\n    if(l->reverse){\n        l->out_w = w*stride;\n        l->out_h = h*stride;\n        l->out_c = c/(stride*stride);\n    }else{\n        l->out_w = w/stride;\n        l->out_h = h/stride;\n        l->out_c = c*(stride*stride);\n    }\n\n    l->outputs = l->out_h * l->out_w * l->out_c;\n    l->inputs = l->outputs;\n    int output_size = l->outputs * l->batch;\n\n    l->output = realloc(l->output, output_size * sizeof(float));\n    l->delta = realloc(l->delta, output_size * sizeof(float));\n\n#ifdef GPU\n    cuda_free(l->output_gpu);\n    
cuda_free(l->delta_gpu);\n    l->output_gpu  = cuda_make_array(l->output, output_size);\n    l->delta_gpu   = cuda_make_array(l->delta,  output_size);\n#endif\n}\n\nvoid forward_reorg_layer(const layer l, network net)\n{\n    int i;\n    if(l.flatten){\n        memcpy(l.output, net.input, l.outputs*l.batch*sizeof(float));\n        if(l.reverse){\n            flatten(l.output, l.w*l.h, l.c, l.batch, 0);\n        }else{\n            flatten(l.output, l.w*l.h, l.c, l.batch, 1);\n        }\n    } else if (l.extra) {\n        for(i = 0; i < l.batch; ++i){\n            copy_cpu(l.inputs, net.input + i*l.inputs, 1, l.output + i*l.outputs, 1);\n        }\n    } else if (l.reverse){\n        reorg_cpu(net.input, l.w, l.h, l.c, l.batch, l.stride, 1, l.output);\n    } else {\n        reorg_cpu(net.input, l.w, l.h, l.c, l.batch, l.stride, 0, l.output);\n    }\n}\n\nvoid backward_reorg_layer(const layer l, network net)\n{\n    int i;\n    if(l.flatten){\n        memcpy(net.delta, l.delta, l.outputs*l.batch*sizeof(float));\n        if(l.reverse){\n            flatten(net.delta, l.w*l.h, l.c, l.batch, 1);\n        }else{\n            flatten(net.delta, l.w*l.h, l.c, l.batch, 0);\n        }\n    } else if(l.reverse){\n        reorg_cpu(l.delta, l.w, l.h, l.c, l.batch, l.stride, 0, net.delta);\n    } else if (l.extra) {\n        for(i = 0; i < l.batch; ++i){\n            copy_cpu(l.inputs, l.delta + i*l.outputs, 1, net.delta + i*l.inputs, 1);\n        }\n    }else{\n        reorg_cpu(l.delta, l.w, l.h, l.c, l.batch, l.stride, 1, net.delta);\n    }\n}\n\n#ifdef GPU\nvoid forward_reorg_layer_gpu(layer l, network net)\n{\n    int i;\n    if(l.flatten){\n        if(l.reverse){\n            flatten_gpu(net.input_gpu, l.w*l.h, l.c, l.batch, 0, l.output_gpu);\n        }else{\n            flatten_gpu(net.input_gpu, l.w*l.h, l.c, l.batch, 1, l.output_gpu);\n        }\n    } else if (l.extra) {\n        for(i = 0; i < l.batch; ++i){\n            copy_gpu(l.inputs, net.input_gpu + i*l.inputs, 
1, l.output_gpu + i*l.outputs, 1);\n        }\n    } else if (l.reverse) {\n        reorg_gpu(net.input_gpu, l.w, l.h, l.c, l.batch, l.stride, 1, l.output_gpu);\n    }else {\n        reorg_gpu(net.input_gpu, l.w, l.h, l.c, l.batch, l.stride, 0, l.output_gpu);\n    }\n}\n\nvoid backward_reorg_layer_gpu(layer l, network net)\n{\n    if(l.flatten){\n        if(l.reverse){\n            flatten_gpu(l.delta_gpu, l.w*l.h, l.c, l.batch, 1, net.delta_gpu);\n        }else{\n            flatten_gpu(l.delta_gpu, l.w*l.h, l.c, l.batch, 0, net.delta_gpu);\n        }\n    } else if (l.extra) {\n        int i;\n        for(i = 0; i < l.batch; ++i){\n            copy_gpu(l.inputs, l.delta_gpu + i*l.outputs, 1, net.delta_gpu + i*l.inputs, 1);\n        }\n    } else if(l.reverse){\n        reorg_gpu(l.delta_gpu, l.w, l.h, l.c, l.batch, l.stride, 0, net.delta_gpu);\n    } else {\n        reorg_gpu(l.delta_gpu, l.w, l.h, l.c, l.batch, l.stride, 1, net.delta_gpu);\n    }\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/reorg_layer.h",
    "content": "#ifndef REORG_LAYER_H\n#define REORG_LAYER_H\n\n#include \"image.h\"\n#include \"cuda.h\"\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_reorg_layer(int batch, int w, int h, int c, int stride, int reverse, int flatten, int extra);\nvoid resize_reorg_layer(layer *l, int w, int h);\nvoid forward_reorg_layer(const layer l, network net);\nvoid backward_reorg_layer(const layer l, network net);\n\n#ifdef GPU\nvoid forward_reorg_layer_gpu(layer l, network net);\nvoid backward_reorg_layer_gpu(layer l, network net);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/rnn_layer.c",
    "content": "#include \"rnn_layer.h\"\n#include \"connected_layer.h\"\n#include \"utils.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"gemm.h\"\n\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nstatic void increment_layer(layer *l, int steps)\n{\n    int num = l->outputs*l->batch*steps;\n    l->output += num;\n    l->delta += num;\n    l->x += num;\n    l->x_norm += num;\n\n#ifdef GPU\n    l->output_gpu += num;\n    l->delta_gpu += num;\n    l->x_gpu += num;\n    l->x_norm_gpu += num;\n#endif\n}\n\nlayer make_rnn_layer(int batch, int inputs, int outputs, int steps, ACTIVATION activation, int batch_normalize, int adam)\n{\n    fprintf(stderr, \"RNN Layer: %d inputs, %d outputs\\n\", inputs, outputs);\n    batch = batch / steps;\n    layer l = {0};\n    l.batch = batch;\n    l.type = RNN;\n    l.steps = steps;\n    l.inputs = inputs;\n\n    l.state = calloc(batch*outputs, sizeof(float));\n    l.prev_state = calloc(batch*outputs, sizeof(float));\n\n    l.input_layer = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.input_layer) = make_connected_layer(batch*steps, inputs, outputs, activation, batch_normalize, adam);\n    l.input_layer->batch = batch;\n\n    l.self_layer = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.self_layer) = make_connected_layer(batch*steps, outputs, outputs, activation, batch_normalize, adam);\n    l.self_layer->batch = batch;\n\n    l.output_layer = malloc(sizeof(layer));\n    fprintf(stderr, \"\\t\\t\");\n    *(l.output_layer) = make_connected_layer(batch*steps, outputs, outputs, activation, batch_normalize, adam);\n    l.output_layer->batch = batch;\n\n    l.outputs = outputs;\n    l.output = l.output_layer->output;\n    l.delta = l.output_layer->delta;\n\n    l.forward = forward_rnn_layer;\n    l.backward = backward_rnn_layer;\n    l.update = update_rnn_layer;\n#ifdef GPU\n    l.forward_gpu = forward_rnn_layer_gpu;\n    l.backward_gpu = 
backward_rnn_layer_gpu;\n    l.update_gpu = update_rnn_layer_gpu;\n    l.state_gpu = cuda_make_array(0, batch*outputs);\n    l.prev_state_gpu = cuda_make_array(0, batch*outputs);\n    l.output_gpu = l.output_layer->output_gpu;\n    l.delta_gpu = l.output_layer->delta_gpu;\n#ifdef CUDNN\n    cudnnSetTensor4dDescriptor(l.input_layer->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.input_layer->out_c, l.input_layer->out_h, l.input_layer->out_w); \n    cudnnSetTensor4dDescriptor(l.self_layer->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.self_layer->out_c, l.self_layer->out_h, l.self_layer->out_w); \n    cudnnSetTensor4dDescriptor(l.output_layer->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, batch, l.output_layer->out_c, l.output_layer->out_h, l.output_layer->out_w); \n#endif\n#endif\n\n    return l;\n}\n\nvoid update_rnn_layer(layer l, update_args a)\n{\n    update_connected_layer(*(l.input_layer),  a);\n    update_connected_layer(*(l.self_layer),   a);\n    update_connected_layer(*(l.output_layer), a);\n}\n\nvoid forward_rnn_layer(layer l, network net)\n{\n    network s = net;\n    s.train = net.train;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n\n    fill_cpu(l.outputs * l.batch * l.steps, 0, output_layer.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, self_layer.delta, 1);\n    fill_cpu(l.outputs * l.batch * l.steps, 0, input_layer.delta, 1);\n    if(net.train) fill_cpu(l.outputs * l.batch, 0, l.state, 1);\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input = net.input;\n        forward_connected_layer(input_layer, s);\n\n        s.input = l.state;\n        forward_connected_layer(self_layer, s);\n\n        float *old_state = l.state;\n        if(net.train) l.state += l.outputs*l.batch;\n        if(l.shortcut){\n            copy_cpu(l.outputs * l.batch, old_state, 1, l.state, 1);\n        }else{\n            
fill_cpu(l.outputs * l.batch, 0, l.state, 1);\n        }\n        axpy_cpu(l.outputs * l.batch, 1, input_layer.output, 1, l.state, 1);\n        axpy_cpu(l.outputs * l.batch, 1, self_layer.output, 1, l.state, 1);\n\n        s.input = l.state;\n        forward_connected_layer(output_layer, s);\n\n        net.input += l.inputs*l.batch;\n        increment_layer(&input_layer, 1);\n        increment_layer(&self_layer, 1);\n        increment_layer(&output_layer, 1);\n    }\n}\n\nvoid backward_rnn_layer(layer l, network net)\n{\n    network s = net;\n    s.train = net.train;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n\n    increment_layer(&input_layer, l.steps-1);\n    increment_layer(&self_layer, l.steps-1);\n    increment_layer(&output_layer, l.steps-1);\n\n    l.state += l.outputs*l.batch*l.steps;\n    for (i = l.steps-1; i >= 0; --i) {\n        copy_cpu(l.outputs * l.batch, input_layer.output, 1, l.state, 1);\n        axpy_cpu(l.outputs * l.batch, 1, self_layer.output, 1, l.state, 1);\n\n        s.input = l.state;\n        s.delta = self_layer.delta;\n        backward_connected_layer(output_layer, s);\n\n        l.state -= l.outputs*l.batch;\n        /*\n           if(i > 0){\n           copy_cpu(l.outputs * l.batch, input_layer.output - l.outputs*l.batch, 1, l.state, 1);\n           axpy_cpu(l.outputs * l.batch, 1, self_layer.output - l.outputs*l.batch, 1, l.state, 1);\n           }else{\n           fill_cpu(l.outputs * l.batch, 0, l.state, 1);\n           }\n         */\n\n        s.input = l.state;\n        s.delta = self_layer.delta - l.outputs*l.batch;\n        if (i == 0) s.delta = 0;\n        backward_connected_layer(self_layer, s);\n\n        copy_cpu(l.outputs*l.batch, self_layer.delta, 1, input_layer.delta, 1);\n        if (i > 0 && l.shortcut) axpy_cpu(l.outputs*l.batch, 1, self_layer.delta, 1, self_layer.delta - l.outputs*l.batch, 1);\n        s.input = 
net.input + i*l.inputs*l.batch;\n        if(net.delta) s.delta = net.delta + i*l.inputs*l.batch;\n        else s.delta = 0;\n        backward_connected_layer(input_layer, s);\n\n        increment_layer(&input_layer, -1);\n        increment_layer(&self_layer, -1);\n        increment_layer(&output_layer, -1);\n    }\n}\n\n#ifdef GPU\n\nvoid pull_rnn_layer(layer l)\n{\n    pull_connected_layer(*(l.input_layer));\n    pull_connected_layer(*(l.self_layer));\n    pull_connected_layer(*(l.output_layer));\n}\n\nvoid push_rnn_layer(layer l)\n{\n    push_connected_layer(*(l.input_layer));\n    push_connected_layer(*(l.self_layer));\n    push_connected_layer(*(l.output_layer));\n}\n\nvoid update_rnn_layer_gpu(layer l, update_args a)\n{\n    update_connected_layer_gpu(*(l.input_layer),  a);\n    update_connected_layer_gpu(*(l.self_layer),   a);\n    update_connected_layer_gpu(*(l.output_layer), a);\n}\n\nvoid forward_rnn_layer_gpu(layer l, network net)\n{\n    network s = {0};\n    s.train = net.train;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n\n    fill_gpu(l.outputs * l.batch * l.steps, 0, output_layer.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, self_layer.delta_gpu, 1);\n    fill_gpu(l.outputs * l.batch * l.steps, 0, input_layer.delta_gpu, 1);\n\n    if(net.train) {\n        fill_gpu(l.outputs * l.batch * l.steps, 0, l.delta_gpu, 1);\n        copy_gpu(l.outputs*l.batch, l.state_gpu, 1, l.prev_state_gpu, 1);\n    }\n\n    for (i = 0; i < l.steps; ++i) {\n        s.input_gpu = net.input_gpu;\n        forward_connected_layer_gpu(input_layer, s);\n\n        s.input_gpu = l.state_gpu;\n        forward_connected_layer_gpu(self_layer, s);\n\n        fill_gpu(l.outputs * l.batch, 0, l.state_gpu, 1);\n        axpy_gpu(l.outputs * l.batch, 1, input_layer.output_gpu, 1, l.state_gpu, 1);\n        axpy_gpu(l.outputs * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 
1);\n\n        s.input_gpu = l.state_gpu;\n        forward_connected_layer_gpu(output_layer, s);\n\n        net.input_gpu += l.inputs*l.batch;\n        increment_layer(&input_layer, 1);\n        increment_layer(&self_layer, 1);\n        increment_layer(&output_layer, 1);\n    }\n}\n\nvoid backward_rnn_layer_gpu(layer l, network net)\n{\n    network s = {0};\n    s.train = net.train;\n    int i;\n    layer input_layer = *(l.input_layer);\n    layer self_layer = *(l.self_layer);\n    layer output_layer = *(l.output_layer);\n    increment_layer(&input_layer,  l.steps - 1);\n    increment_layer(&self_layer,   l.steps - 1);\n    increment_layer(&output_layer, l.steps - 1);\n    float *last_input = input_layer.output_gpu;\n    float *last_self = self_layer.output_gpu;\n    for (i = l.steps-1; i >= 0; --i) {\n        fill_gpu(l.outputs * l.batch, 0, l.state_gpu, 1);\n        axpy_gpu(l.outputs * l.batch, 1, input_layer.output_gpu, 1, l.state_gpu, 1);\n        axpy_gpu(l.outputs * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);\n\n        s.input_gpu = l.state_gpu;\n        s.delta_gpu = self_layer.delta_gpu;\n        backward_connected_layer_gpu(output_layer, s);\n\n        if(i != 0) {\n            fill_gpu(l.outputs * l.batch, 0, l.state_gpu, 1);\n            axpy_gpu(l.outputs * l.batch, 1, input_layer.output_gpu - l.outputs*l.batch, 1, l.state_gpu, 1);\n            axpy_gpu(l.outputs * l.batch, 1, self_layer.output_gpu - l.outputs*l.batch, 1, l.state_gpu, 1);\n        }else {\n            copy_gpu(l.outputs*l.batch, l.prev_state_gpu, 1, l.state_gpu, 1);\n        }\n\n        copy_gpu(l.outputs*l.batch, self_layer.delta_gpu, 1, input_layer.delta_gpu, 1);\n\n        s.input_gpu = l.state_gpu;\n        s.delta_gpu = (i > 0) ? 
self_layer.delta_gpu - l.outputs*l.batch : 0;\n        if (i == 0) s.delta_gpu = 0;\n        backward_connected_layer_gpu(self_layer, s);\n\n        s.input_gpu = net.input_gpu + i*l.inputs*l.batch;\n        if(net.delta_gpu) s.delta_gpu = net.delta_gpu + i*l.inputs*l.batch;\n        else s.delta_gpu = 0;\n        backward_connected_layer_gpu(input_layer, s);\n\n        increment_layer(&input_layer,  -1);\n        increment_layer(&self_layer,   -1);\n        increment_layer(&output_layer, -1);\n    }\n    fill_gpu(l.outputs * l.batch, 0, l.state_gpu, 1);\n    axpy_gpu(l.outputs * l.batch, 1, last_input, 1, l.state_gpu, 1);\n    axpy_gpu(l.outputs * l.batch, 1, last_self, 1, l.state_gpu, 1);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/rnn_layer.h",
    "content": "\n#ifndef RNN_LAYER_H\n#define RNN_LAYER_H\n\n#include \"activations.h\"\n#include \"layer.h\"\n#include \"network.h\"\n#define USET\n\nlayer make_rnn_layer(int batch, int inputs, int outputs, int steps, ACTIVATION activation, int batch_normalize, int adam);\n\nvoid forward_rnn_layer(layer l, network net);\nvoid backward_rnn_layer(layer l, network net);\nvoid update_rnn_layer(layer l, update_args a);\n\n#ifdef GPU\nvoid forward_rnn_layer_gpu(layer l, network net);\nvoid backward_rnn_layer_gpu(layer l, network net);\nvoid update_rnn_layer_gpu(layer l, update_args a);\nvoid push_rnn_layer(layer l);\nvoid pull_rnn_layer(layer l);\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lightnet/_darknet/route_layer.c",
    "content": "#include \"route_layer.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n\n#include <stdio.h>\n\nroute_layer make_route_layer(int batch, int n, int *input_layers, int *input_sizes)\n{\n    //fprintf(stderr,\"route \");\n    route_layer l = {0};\n    l.type = ROUTE;\n    l.batch = batch;\n    l.n = n;\n    l.input_layers = input_layers;\n    l.input_sizes = input_sizes;\n    int i;\n    int outputs = 0;\n    for(i = 0; i < n; ++i){\n        //fprintf(stderr,\" %d\", input_layers[i]);\n        outputs += input_sizes[i];\n    }\n    //fprintf(stderr, \"\\n\");\n    l.outputs = outputs;\n    l.inputs = outputs;\n    l.delta =  calloc(outputs*batch, sizeof(float));\n    l.output = calloc(outputs*batch, sizeof(float));\n\n    l.forward = forward_route_layer;\n    l.backward = backward_route_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_route_layer_gpu;\n    l.backward_gpu = backward_route_layer_gpu;\n\n    l.delta_gpu =  cuda_make_array(l.delta, outputs*batch);\n    l.output_gpu = cuda_make_array(l.output, outputs*batch);\n    #endif\n    return l;\n}\n\nvoid resize_route_layer(route_layer *l, network *net)\n{\n    int i;\n    layer first = net->layers[l->input_layers[0]];\n    l->out_w = first.out_w;\n    l->out_h = first.out_h;\n    l->out_c = first.out_c;\n    l->outputs = first.outputs;\n    l->input_sizes[0] = first.outputs;\n    for(i = 1; i < l->n; ++i){\n        int index = l->input_layers[i];\n        layer next = net->layers[index];\n        l->outputs += next.outputs;\n        l->input_sizes[i] = next.outputs;\n        if(next.out_w == first.out_w && next.out_h == first.out_h){\n            l->out_c += next.out_c;\n        }else{\n            //printf(\"%d %d, %d %d\\n\", next.out_w, next.out_h, first.out_w, first.out_h);\n            l->out_h = l->out_w = l->out_c = 0;\n        }\n    }\n    l->inputs = l->outputs;\n    l->delta =  realloc(l->delta, l->outputs*l->batch*sizeof(float));\n    l->output = realloc(l->output, 
l->outputs*l->batch*sizeof(float));\n\n#ifdef GPU\n    cuda_free(l->output_gpu);\n    cuda_free(l->delta_gpu);\n    l->output_gpu  = cuda_make_array(l->output, l->outputs*l->batch);\n    l->delta_gpu   = cuda_make_array(l->delta,  l->outputs*l->batch);\n#endif\n    \n}\n\nvoid forward_route_layer(const route_layer l, network net)\n{\n    int i, j;\n    int offset = 0;\n    for(i = 0; i < l.n; ++i){\n        int index = l.input_layers[i];\n        float *input = net.layers[index].output;\n        int input_size = l.input_sizes[i];\n        for(j = 0; j < l.batch; ++j){\n            copy_cpu(input_size, input + j*input_size, 1, l.output + offset + j*l.outputs, 1);\n        }\n        offset += input_size;\n    }\n}\n\nvoid backward_route_layer(const route_layer l, network net)\n{\n    int i, j;\n    int offset = 0;\n    for(i = 0; i < l.n; ++i){\n        int index = l.input_layers[i];\n        float *delta = net.layers[index].delta;\n        int input_size = l.input_sizes[i];\n        for(j = 0; j < l.batch; ++j){\n            axpy_cpu(input_size, 1, l.delta + offset + j*l.outputs, 1, delta + j*input_size, 1);\n        }\n        offset += input_size;\n    }\n}\n\n#ifdef GPU\nvoid forward_route_layer_gpu(const route_layer l, network net)\n{\n    int i, j;\n    int offset = 0;\n    for(i = 0; i < l.n; ++i){\n        int index = l.input_layers[i];\n        float *input = net.layers[index].output_gpu;\n        int input_size = l.input_sizes[i];\n        for(j = 0; j < l.batch; ++j){\n            copy_gpu(input_size, input + j*input_size, 1, l.output_gpu + offset + j*l.outputs, 1);\n        }\n        offset += input_size;\n    }\n}\n\nvoid backward_route_layer_gpu(const route_layer l, network net)\n{\n    int i, j;\n    int offset = 0;\n    for(i = 0; i < l.n; ++i){\n        int index = l.input_layers[i];\n        float *delta = net.layers[index].delta_gpu;\n        int input_size = l.input_sizes[i];\n        for(j = 0; j < l.batch; ++j){\n            
axpy_gpu(input_size, 1, l.delta_gpu + offset + j*l.outputs, 1, delta + j*input_size, 1);\n        }\n        offset += input_size;\n    }\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/route_layer.h",
    "content": "#ifndef ROUTE_LAYER_H\n#define ROUTE_LAYER_H\n#include \"network.h\"\n#include \"layer.h\"\n\ntypedef layer route_layer;\n\nroute_layer make_route_layer(int batch, int n, int *input_layers, int *input_size);\nvoid forward_route_layer(const route_layer l, network net);\nvoid backward_route_layer(const route_layer l, network net);\nvoid resize_route_layer(route_layer *l, network *net);\n\n#ifdef GPU\nvoid forward_route_layer_gpu(const route_layer l, network net);\nvoid backward_route_layer_gpu(const route_layer l, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/shortcut_layer.c",
    "content": "#include \"shortcut_layer.h\"\n#include \"cuda.h\"\n#include \"blas.h\"\n#include \"activations.h\"\n\n#include <stdio.h>\n#include <assert.h>\n\nlayer make_shortcut_layer(int batch, int index, int w, int h, int c, int w2, int h2, int c2)\n{\n    fprintf(stderr,\"Shortcut Layer: %d\\n\", index);\n    layer l = {0};\n    l.type = SHORTCUT;\n    l.batch = batch;\n    l.w = w2;\n    l.h = h2;\n    l.c = c2;\n    l.out_w = w;\n    l.out_h = h;\n    l.out_c = c;\n    l.outputs = w*h*c;\n    l.inputs = l.outputs;\n\n    l.index = index;\n\n    l.delta =  calloc(l.outputs*batch, sizeof(float));\n    l.output = calloc(l.outputs*batch, sizeof(float));\n\n    l.forward = forward_shortcut_layer;\n    l.backward = backward_shortcut_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_shortcut_layer_gpu;\n    l.backward_gpu = backward_shortcut_layer_gpu;\n\n    l.delta_gpu =  cuda_make_array(l.delta, l.outputs*batch);\n    l.output_gpu = cuda_make_array(l.output, l.outputs*batch);\n    #endif\n    return l;\n}\n\nvoid forward_shortcut_layer(const layer l, network net)\n{\n    copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);\n    shortcut_cpu(l.batch, l.w, l.h, l.c, net.layers[l.index].output, l.out_w, l.out_h, l.out_c, l.output);\n    activate_array(l.output, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_shortcut_layer(const layer l, network net)\n{\n    gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);\n    axpy_cpu(l.outputs*l.batch, 1, l.delta, 1, net.delta, 1);\n    shortcut_cpu(l.batch, l.out_w, l.out_h, l.out_c, l.delta, l.w, l.h, l.c, net.layers[l.index].delta);\n}\n\n#ifdef GPU\nvoid forward_shortcut_layer_gpu(const layer l, network net)\n{\n    copy_gpu(l.outputs*l.batch, net.input_gpu, 1, l.output_gpu, 1);\n    shortcut_gpu(l.batch, l.w, l.h, l.c, net.layers[l.index].output_gpu, l.out_w, l.out_h, l.out_c, l.output_gpu);\n    activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);\n}\n\nvoid backward_shortcut_layer_gpu(const layer l, network net)\n{\n    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);\n    axpy_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1, net.delta_gpu, 1);\n    shortcut_gpu(l.batch, l.out_w, l.out_h, l.out_c, l.delta_gpu, l.w, l.h, l.c, net.layers[l.index].delta_gpu);\n}\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/shortcut_layer.h",
    "content": "#ifndef SHORTCUT_LAYER_H\n#define SHORTCUT_LAYER_H\n\n#include \"layer.h\"\n#include \"network.h\"\n\nlayer make_shortcut_layer(int batch, int index, int w, int h, int c, int w2, int h2, int c2);\nvoid forward_shortcut_layer(const layer l, network net);\nvoid backward_shortcut_layer(const layer l, network net);\n\n#ifdef GPU\nvoid forward_shortcut_layer_gpu(const layer l, network net);\nvoid backward_shortcut_layer_gpu(const layer l, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/softmax_layer.c",
    "content": "#include \"softmax_layer.h\"\n#include \"blas.h\"\n#include \"cuda.h\"\n\n#include <float.h>\n#include <math.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <assert.h>\n\nsoftmax_layer make_softmax_layer(int batch, int inputs, int groups)\n{\n    assert(inputs%groups == 0);\n    fprintf(stderr, \"softmax                                        %4d\\n\",  inputs);\n    softmax_layer l = {0};\n    l.type = SOFTMAX;\n    l.batch = batch;\n    l.groups = groups;\n    l.inputs = inputs;\n    l.outputs = inputs;\n    l.output = calloc(inputs*batch, sizeof(float));\n    l.delta = calloc(inputs*batch, sizeof(float));\n\n    l.forward = forward_softmax_layer;\n    l.backward = backward_softmax_layer;\n    #ifdef GPU\n    l.forward_gpu = forward_softmax_layer_gpu;\n    l.backward_gpu = backward_softmax_layer_gpu;\n\n    l.output_gpu = cuda_make_array(l.output, inputs*batch); \n    l.delta_gpu = cuda_make_array(l.delta, inputs*batch); \n    #endif\n    return l;\n}\n\nvoid forward_softmax_layer(const softmax_layer l, network net)\n{\n    if(l.softmax_tree){\n        int i;\n        int count = 0;\n        for (i = 0; i < l.softmax_tree->groups; ++i) {\n            int group_size = l.softmax_tree->group_size[i];\n            softmax_cpu(net.input + count, group_size, l.batch, l.inputs, 1, 0, 1, l.temperature, l.output + count);\n            count += group_size;\n        }\n    } else {\n        softmax_cpu(net.input, l.inputs/l.groups, l.batch, l.inputs, l.groups, l.inputs/l.groups, 1, l.temperature, l.output);\n    }\n}\n\nvoid backward_softmax_layer(const softmax_layer l, network net)\n{\n    axpy_cpu(l.inputs*l.batch, 1, l.delta, 1, net.delta, 1);\n}\n\n#ifdef GPU\n\nvoid pull_softmax_layer_output(const softmax_layer layer)\n{\n    cuda_pull_array(layer.output_gpu, layer.output, layer.inputs*layer.batch);\n}\n\nvoid forward_softmax_layer_gpu(const softmax_layer l, network net)\n{\n    if(l.softmax_tree){\n        int i;\n        int count = 0;\n        
for (i = 0; i < l.softmax_tree->groups; ++i) {\n            int group_size = l.softmax_tree->group_size[i];\n            softmax_gpu(net.input_gpu + count, group_size, l.batch, l.inputs, 1, 0, 1, l.temperature, l.output_gpu + count);\n            count += group_size;\n        }\n    } else {\n        if(l.spatial){\n            softmax_gpu(net.input_gpu, l.c, l.batch*l.c, l.inputs/l.c, l.w*l.h, 1, l.w*l.h, 1, l.output_gpu);\n        }else{\n            softmax_gpu(net.input_gpu, l.inputs/l.groups, l.batch, l.inputs, l.groups, l.inputs/l.groups, 1, l.temperature, l.output_gpu);\n        }\n    }\n}\n\nvoid backward_softmax_layer_gpu(const softmax_layer layer, network net)\n{\n    axpy_gpu(layer.batch*layer.inputs, 1, layer.delta_gpu, 1, net.delta_gpu, 1);\n}\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/softmax_layer.h",
    "content": "#ifndef SOFTMAX_LAYER_H\n#define SOFTMAX_LAYER_H\n#include \"layer.h\"\n#include \"network.h\"\n\ntypedef layer softmax_layer;\n\nvoid softmax_array(float *input, int n, float temp, float *output);\nsoftmax_layer make_softmax_layer(int batch, int inputs, int groups);\nvoid forward_softmax_layer(const softmax_layer l, network net);\nvoid backward_softmax_layer(const softmax_layer l, network net);\n\n#ifdef GPU\nvoid pull_softmax_layer_output(const softmax_layer l);\nvoid forward_softmax_layer_gpu(const softmax_layer l, network net);\nvoid backward_softmax_layer_gpu(const softmax_layer l, network net);\n#endif\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/stb_image.h",
    "content": "/* stb_image - v2.06 - public domain image loader - http://nothings.org/stb_image.h\n                                     no warranty implied; use at your own risk\n\n   Do this:\n      #define STB_IMAGE_IMPLEMENTATION\n   before you include this file in *one* C or C++ file to create the implementation.\n\n   // i.e. it should look like this:\n   #include ...\n   #include ...\n   #include ...\n   #define STB_IMAGE_IMPLEMENTATION\n   #include \"stb_image.h\"\n\n   You can #define STBI_ASSERT(x) before the #include to avoid using assert.h.\n   And #define STBI_MALLOC, STBI_REALLOC, and STBI_FREE to avoid using malloc,realloc,free\n\n\n   QUICK NOTES:\n      Primarily of interest to game developers and other people who can\n          avoid problematic images and only need the trivial interface\n\n      JPEG baseline & progressive (12 bpc/arithmetic not supported, same as stock IJG lib)\n      PNG 1/2/4/8-bit-per-channel (16 bpc not supported)\n\n      TGA (not sure what subset, if a subset)\n      BMP non-1bpp, non-RLE\n      PSD (composited view only, no extra channels)\n\n      GIF (*comp always reports as 4-channel)\n      HDR (radiance rgbE format)\n      PIC (Softimage PIC)\n      PNM (PPM and PGM binary only)\n\n      - decode from memory or through FILE (define STBI_NO_STDIO to remove code)\n      - decode from arbitrary I/O callbacks\n      - SIMD acceleration on x86/x64 (SSE2) and ARM (NEON)\n\n   Full documentation under \"DOCUMENTATION\" below.\n\n\n   Revision 2.00 release notes:\n\n      - Progressive JPEG is now supported.\n\n      - PPM and PGM binary formats are now supported, thanks to Ken Miller.\n\n      - x86 platforms now make use of SSE2 SIMD instructions for\n        JPEG decoding, and ARM platforms can use NEON SIMD if requested.\n        This work was done by Fabian \"ryg\" Giesen. 
SSE2 is used by\n        default, but NEON must be enabled explicitly; see docs.\n\n        With other JPEG optimizations included in this version, we see\n        2x speedup on a JPEG on an x86 machine, and a 1.5x speedup\n        on a JPEG on an ARM machine, relative to previous versions of this\n        library. The same results will not obtain for all JPGs and for all\n        x86/ARM machines. (Note that progressive JPEGs are significantly\n        slower to decode than regular JPEGs.) This doesn't mean that this\n        is the fastest JPEG decoder in the land; rather, it brings it\n        closer to parity with standard libraries. If you want the fastest\n        decode, look elsewhere. (See \"Philosophy\" section of docs below.)\n\n        See final bullet items below for more info on SIMD.\n\n      - Added STBI_MALLOC, STBI_REALLOC, and STBI_FREE macros for replacing\n        the memory allocator. Unlike other STBI libraries, these macros don't\n        support a context parameter, so if you need to pass a context in to\n        the allocator, you'll have to store it in a global or a thread-local\n        variable.\n\n      - Split existing STBI_NO_HDR flag into two flags, STBI_NO_HDR and\n        STBI_NO_LINEAR.\n            STBI_NO_HDR:     suppress implementation of .hdr reader format\n            STBI_NO_LINEAR:  suppress high-dynamic-range light-linear float API\n\n      - You can suppress implementation of any of the decoders to reduce\n        your code footprint by #defining one or more of the following\n        symbols before creating the implementation.\n\n            STBI_NO_JPEG\n            STBI_NO_PNG\n            STBI_NO_BMP\n            STBI_NO_PSD\n            STBI_NO_TGA\n            STBI_NO_GIF\n            STBI_NO_HDR\n            STBI_NO_PIC\n            STBI_NO_PNM   (.ppm and .pgm)\n\n      - You can request *only* certain decoders and suppress all other ones\n        (this will be more forward-compatible, as addition of new 
decoders\n        doesn't require you to disable them explicitly):\n\n            STBI_ONLY_JPEG\n            STBI_ONLY_PNG\n            STBI_ONLY_BMP\n            STBI_ONLY_PSD\n            STBI_ONLY_TGA\n            STBI_ONLY_GIF\n            STBI_ONLY_HDR\n            STBI_ONLY_PIC\n            STBI_ONLY_PNM   (.ppm and .pgm)\n\n         Note that you can define multiples of these, and you will get all\n         of them (\"only x\" and \"only y\" is interpreted to mean \"only x&y\").\n\n       - If you use STBI_NO_PNG (or _ONLY_ without PNG), and you still\n         want the zlib decoder to be available, #define STBI_SUPPORT_ZLIB\n\n      - Compilation of all SIMD code can be suppressed with\n            #define STBI_NO_SIMD\n        It should not be necessary to disable SIMD unless you have issues\n        compiling (e.g. using an x86 compiler which doesn't support SSE\n        intrinsics or that doesn't support the method used to detect\n        SSE2 support at run-time), and even those can be reported as\n        bugs so I can refine the built-in compile-time checking to be\n        smarter.\n\n      - The old STBI_SIMD system which allowed installing a user-defined\n        IDCT etc. has been removed. If you need this, don't upgrade. My\n        assumption is that almost nobody was doing this, and those who\n        were will find the built-in SIMD more satisfactory anyway.\n\n      - RGB values computed for JPEG images are slightly different from\n        previous versions of stb_image. (This is due to using less\n        integer precision in SIMD.) The C code has been adjusted so\n        that the same RGB values will be computed regardless of whether\n        SIMD support is available, so your app should always produce\n        consistent results. But these results are slightly different from\n        previous versions. 
(Specifically, about 3% of available YCbCr values\n        will compute different RGB results from pre-1.49 versions by +-1;\n        most of the deviating values are one smaller in the G channel.)\n\n      - If you must produce consistent results with previous versions of\n        stb_image, #define STBI_JPEG_OLD and you will get the same results\n        you used to; however, you will not get the SIMD speedups for\n        the YCbCr-to-RGB conversion step (although you should still see\n        significant JPEG speedup from the other changes).\n\n        Please note that STBI_JPEG_OLD is a temporary feature; it will be\n        removed in future versions of the library. It is only intended for\n        near-term back-compatibility use.\n\n\n   Latest revision history:\n      2.06  (2015-04-19) fix bug where PSD returns wrong '*comp' value\n      2.05  (2015-04-19) fix bug in progressive JPEG handling, fix warning\n      2.04  (2015-04-15) try to re-enable SIMD on MinGW 64-bit\n      2.03  (2015-04-12) additional corruption checking\n                         stbi_set_flip_vertically_on_load\n                         fix NEON support; fix mingw support\n      2.02  (2015-01-19) fix incorrect assert, fix warning\n      2.01  (2015-01-17) fix various warnings\n      2.00b (2014-12-25) fix STBI_MALLOC in progressive JPEG\n      2.00  (2014-12-25) optimize JPEG, including x86 SSE2 & ARM NEON SIMD\n                         progressive JPEG\n                         PGM/PPM support\n                         STBI_MALLOC,STBI_REALLOC,STBI_FREE\n                         STBI_NO_*, STBI_ONLY_*\n                         GIF bugfix\n      1.48  (2014-12-14) fix incorrectly-named assert()\n      1.47  (2014-12-14) 1/2/4-bit PNG support (both grayscale and paletted)\n                         optimize PNG\n                         fix bug in interlaced PNG with user-specified channel count\n\n   See end of file for full revision history.\n\n\n ============================    
Contributors    =========================\n\n Image formats                                Bug fixes & warning fixes\n    Sean Barrett (jpeg, png, bmp)                Marc LeBlanc\n    Nicolas Schulz (hdr, psd)                    Christpher Lloyd\n    Jonathan Dummer (tga)                        Dave Moore\n    Jean-Marc Lienher (gif)                      Won Chun\n    Tom Seddon (pic)                             the Horde3D community\n    Thatcher Ulrich (psd)                        Janez Zemva\n    Ken Miller (pgm, ppm)                        Jonathan Blow\n                                                 Laurent Gomila\n                                                 Aruelien Pocheville\n Extensions, features                            Ryamond Barbiero\n    Jetro Lauha (stbi_info)                      David Woo\n    Martin \"SpartanJ\" Golini (stbi_info)         Martin Golini\n    James \"moose2000\" Brown (iPhone PNG)         Roy Eltham\n    Ben \"Disch\" Wenger (io callbacks)            Luke Graham\n    Omar Cornut (1/2/4-bit PNG)                  Thomas Ruf\n    Nicolas Guillemot (vertical flip)            John Bartholomew\n                                                 Ken Hamada\n Optimizations & bugfixes                        Cort Stratton\n    Fabian \"ryg\" Giesen                          Blazej Dariusz Roszkowski\n    Arseny Kapoulkine                            Thibault Reuille\n                                                 Paul Du Bois\n                                                 Guillaume George\n  If your name should be here but                Jerry Jansson\n  isn't, let Sean know.                          
Hayaki Saito\n                                                 Johan Duparc\n                                                 Ronny Chevalier\n                                                 Michal Cichon\n                                                 Tero Hanninen\n                                                 Sergio Gonzalez\n                                                 Cass Everitt\n                                                 Engin Manap\n                                                 Martins Mozeiko\n                                                 Joseph Thomson\n                                                 Phil Jordan\n\nLicense:\n   This software is in the public domain. Where that dedication is not\n   recognized, you are granted a perpetual, irrevocable license to copy\n   and modify this file however you want.\n\n*/\n\n#ifndef STBI_INCLUDE_STB_IMAGE_H\n#define STBI_INCLUDE_STB_IMAGE_H\n\n// DOCUMENTATION\n//\n// Limitations:\n//    - no 16-bit-per-channel PNG\n//    - no 12-bit-per-channel JPEG\n//    - no JPEGs with arithmetic coding\n//    - no 1-bit BMP\n//    - GIF always returns *comp=4\n//\n// Basic usage (see HDR discussion below for HDR usage):\n//    int x,y,n;\n//    unsigned char *data = stbi_load(filename, &x, &y, &n, 0);\n//    // ... process data if not NULL ...\n//    // ... x = width, y = height, n = # 8-bit components per pixel ...\n//    // ... replace '0' with '1'..'4' to force that many components per pixel\n//    // ... 
but 'n' will always be the number that it would have been if you said 0\n//    stbi_image_free(data)\n//\n// Standard parameters:\n//    int *x       -- outputs image width in pixels\n//    int *y       -- outputs image height in pixels\n//    int *comp    -- outputs # of image components in image file\n//    int req_comp -- if non-zero, # of image components requested in result\n//\n// The return value from an image loader is an 'unsigned char *' which points\n// to the pixel data, or NULL on an allocation failure or if the image is\n// corrupt or invalid. The pixel data consists of *y scanlines of *x pixels,\n// with each pixel consisting of N interleaved 8-bit components; the first\n// pixel pointed to is top-left-most in the image. There is no padding between\n// image scanlines or between pixels, regardless of format. The number of\n// components N is 'req_comp' if req_comp is non-zero, or *comp otherwise.\n// If req_comp is non-zero, *comp has the number of components that _would_\n// have been output otherwise. E.g. if you set req_comp to 4, you will always\n// get RGBA output, but you can check *comp to see if it's trivially opaque\n// because e.g. there were only 3 channels in the source image.\n//\n// An output image with N components has the following components interleaved\n// in this order in each pixel:\n//\n//     N=#comp     components\n//       1           grey\n//       2           grey, alpha\n//       3           red, green, blue\n//       4           red, green, blue, alpha\n//\n// If image loading fails for any reason, the return value will be NULL,\n// and *x, *y, *comp will be unchanged. The function stbi_failure_reason()\n// can be queried for an extremely brief, end-user unfriendly explanation\n// of why the load failed. 
Define STBI_NO_FAILURE_STRINGS to avoid\n// compiling these strings at all, and STBI_FAILURE_USERMSG to get slightly\n// more user-friendly ones.\n//\n// Paletted PNG, BMP, GIF, and PIC images are automatically depalettized.\n//\n// ===========================================================================\n//\n// Philosophy\n//\n// stb libraries are designed with the following priorities:\n//\n//    1. easy to use\n//    2. easy to maintain\n//    3. good performance\n//\n// Sometimes I let \"good performance\" creep up in priority over \"easy to maintain\",\n// and for best performance I may provide less-easy-to-use APIs that give higher\n// performance, in addition to the easy to use ones. Nevertheless, it's important\n// to keep in mind that from the standpoint of you, a client of this library,\n// all you care about is #1 and #3, and stb libraries do not emphasize #3 above all.\n//\n// Some secondary priorities arise directly from the first two, some of which\n// make more explicit reasons why performance can't be emphasized.\n//\n//    - Portable (\"ease of use\")\n//    - Small footprint (\"easy to maintain\")\n//    - No dependencies (\"ease of use\")\n//\n// ===========================================================================\n//\n// I/O callbacks\n//\n// I/O callbacks allow you to read from arbitrary sources, like packaged\n// files or some other source. Data read from callbacks are processed\n// through a small internal buffer (currently 128 bytes) to try to reduce\n// overhead.\n//\n// The three functions you must define are \"read\" (reads some bytes of data),\n// \"skip\" (skips some bytes of data), \"eof\" (reports if the stream is at the end).\n//\n// ===========================================================================\n//\n// SIMD support\n//\n// The JPEG decoder will try to automatically use SIMD kernels on x86 when\n// supported by the compiler. 
For ARM Neon support, you must explicitly\n// request it.\n//\n// (The old do-it-yourself SIMD API is no longer supported in the current\n// code.)\n//\n// On x86, SSE2 will automatically be used when available based on a run-time\n// test; if not, the generic C versions are used as a fall-back. On ARM targets,\n// the typical path is to have separate builds for NEON and non-NEON devices\n// (at least this is true for iOS and Android). Therefore, the NEON support is\n// toggled by a build flag: define STBI_NEON to get NEON loops.\n//\n// The output of the JPEG decoder is slightly different from versions where\n// SIMD support was introduced (that is, for versions before 1.49). The\n// difference is only +-1 in the 8-bit RGB channels, and only on a small\n// fraction of pixels. You can force the pre-1.49 behavior by defining\n// STBI_JPEG_OLD, but this will disable some of the SIMD decoding path\n// and hence cost some performance.\n//\n// If for some reason you do not want to use any of SIMD code, or if\n// you have issues compiling it, you can disable it entirely by\n// defining STBI_NO_SIMD.\n//\n// ===========================================================================\n//\n// HDR image support   (disable by defining STBI_NO_HDR)\n//\n// stb_image now supports loading HDR images in general, and currently\n// the Radiance .HDR file format, although the support is provided\n// generically. 
You can still load any file through the existing interface;\n// if you attempt to load an HDR file, it will be automatically remapped to\n// LDR, assuming gamma 2.2 and an arbitrary scale factor defaulting to 1;\n// both of these constants can be reconfigured through this interface:\n//\n//     stbi_hdr_to_ldr_gamma(2.2f);\n//     stbi_hdr_to_ldr_scale(1.0f);\n//\n// (note, do not use _inverse_ constants; stbi_image will invert them\n// appropriately).\n//\n// Additionally, there is a new, parallel interface for loading files as\n// (linear) floats to preserve the full dynamic range:\n//\n//    float *data = stbi_loadf(filename, &x, &y, &n, 0);\n//\n// If you load LDR images through this interface, those images will\n// be promoted to floating point values, run through the inverse of\n// constants corresponding to the above:\n//\n//     stbi_ldr_to_hdr_scale(1.0f);\n//     stbi_ldr_to_hdr_gamma(2.2f);\n//\n// Finally, given a filename (or an open file or memory block--see header\n// file for details) containing image data, you can query for the \"most\n// appropriate\" interface to use (that is, whether the image is HDR or\n// not), using:\n//\n//     stbi_is_hdr(char *filename);\n//\n// ===========================================================================\n//\n// iPhone PNG support:\n//\n// By default we convert iphone-formatted PNGs back to RGB, even though\n// they are internally encoded differently. 
You can disable this conversion\n// by calling stbi_convert_iphone_png_to_rgb(0), in which case\n// you will always just get the native iphone \"format\" through (which\n// is BGR stored in RGB).\n//\n// Call stbi_set_unpremultiply_on_load(1) as well to force a divide per\n// pixel to remove any premultiplied alpha *only* if the image file explicitly\n// says there's premultiplied data (currently only happens in iPhone images,\n// and only if iPhone convert-to-rgb processing is on).\n//\n\n\n#ifndef STBI_NO_STDIO\n#include <stdio.h>\n#endif // STBI_NO_STDIO\n\n#define STBI_VERSION 1\n\nenum\n{\n   STBI_default = 0, // only used for req_comp\n\n   STBI_grey       = 1,\n   STBI_grey_alpha = 2,\n   STBI_rgb        = 3,\n   STBI_rgb_alpha  = 4\n};\n\ntypedef unsigned char stbi_uc;\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifdef STB_IMAGE_STATIC\n#define STBIDEF static\n#else\n#define STBIDEF extern\n#endif\n\n//////////////////////////////////////////////////////////////////////////////\n//\n// PRIMARY API - works on images of any type\n//\n\n//\n// load image by filename, open file, or memory buffer\n//\n\ntypedef struct\n{\n   int      (*read)  (void *user,char *data,int size);   // fill 'data' with 'size' bytes.  
return number of bytes actually read\n   void     (*skip)  (void *user,int n);                 // skip the next 'n' bytes, or 'unget' the last -n bytes if negative\n   int      (*eof)   (void *user);                       // returns nonzero if we are at end of file/data\n} stbi_io_callbacks;\n\nSTBIDEF stbi_uc *stbi_load               (char              const *filename,           int *x, int *y, int *comp, int req_comp);\nSTBIDEF stbi_uc *stbi_load_from_memory   (stbi_uc           const *buffer, int len   , int *x, int *y, int *comp, int req_comp);\nSTBIDEF stbi_uc *stbi_load_from_callbacks(stbi_io_callbacks const *clbk  , void *user, int *x, int *y, int *comp, int req_comp);\n\n#ifndef STBI_NO_STDIO\nSTBIDEF stbi_uc *stbi_load_from_file  (FILE *f,                  int *x, int *y, int *comp, int req_comp);\n// for stbi_load_from_file, file pointer is left pointing immediately after image\n#endif\n\n#ifndef STBI_NO_LINEAR\n   STBIDEF float *stbi_loadf                 (char const *filename,           int *x, int *y, int *comp, int req_comp);\n   STBIDEF float *stbi_loadf_from_memory     (stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp);\n   STBIDEF float *stbi_loadf_from_callbacks  (stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp);\n\n   #ifndef STBI_NO_STDIO\n   STBIDEF float *stbi_loadf_from_file  (FILE *f,                int *x, int *y, int *comp, int req_comp);\n   #endif\n#endif\n\n#ifndef STBI_NO_HDR\n   STBIDEF void   stbi_hdr_to_ldr_gamma(float gamma);\n   STBIDEF void   stbi_hdr_to_ldr_scale(float scale);\n#endif\n\n#ifndef STBI_NO_LINEAR\n   STBIDEF void   stbi_ldr_to_hdr_gamma(float gamma);\n   STBIDEF void   stbi_ldr_to_hdr_scale(float scale);\n#endif // STBI_NO_HDR\n\n// stbi_is_hdr is always defined, but always returns false if STBI_NO_HDR\nSTBIDEF int    stbi_is_hdr_from_callbacks(stbi_io_callbacks const *clbk, void *user);\nSTBIDEF int    stbi_is_hdr_from_memory(stbi_uc const *buffer, int 
len);\n#ifndef STBI_NO_STDIO\nSTBIDEF int      stbi_is_hdr          (char const *filename);\nSTBIDEF int      stbi_is_hdr_from_file(FILE *f);\n#endif // STBI_NO_STDIO\n\n\n// get a VERY brief reason for failure\n// NOT THREADSAFE\nSTBIDEF const char *stbi_failure_reason  (void);\n\n// free the loaded image -- this is just free()\nSTBIDEF void     stbi_image_free      (void *retval_from_stbi_load);\n\n// get image dimensions & components without fully decoding\nSTBIDEF int      stbi_info_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp);\nSTBIDEF int      stbi_info_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp);\n\n#ifndef STBI_NO_STDIO\nSTBIDEF int      stbi_info            (char const *filename,     int *x, int *y, int *comp);\nSTBIDEF int      stbi_info_from_file  (FILE *f,                  int *x, int *y, int *comp);\n\n#endif\n\n\n\n// for image formats that explicitly notate that they have premultiplied alpha,\n// we just return the colors as stored in the file. set this flag to force\n// unpremultiplication. 
results are undefined if the unpremultiply overflows.\nSTBIDEF void stbi_set_unpremultiply_on_load(int flag_true_if_should_unpremultiply);\n\n// indicate whether we should process iphone images back to canonical format,\n// or just pass them through \"as-is\"\nSTBIDEF void stbi_convert_iphone_png_to_rgb(int flag_true_if_should_convert);\n\n// flip the image vertically, so the first pixel in the output array is the bottom left\nSTBIDEF void stbi_set_flip_vertically_on_load(int flag_true_if_should_flip);\n\n// ZLIB client - used by PNG, available for other purposes\n\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize(const char *buffer, int len, int initial_size, int *outlen);\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize_headerflag(const char *buffer, int len, int initial_size, int *outlen, int parse_header);\nSTBIDEF char *stbi_zlib_decode_malloc(const char *buffer, int len, int *outlen);\nSTBIDEF int   stbi_zlib_decode_buffer(char *obuffer, int olen, const char *ibuffer, int ilen);\n\nSTBIDEF char *stbi_zlib_decode_noheader_malloc(const char *buffer, int len, int *outlen);\nSTBIDEF int   stbi_zlib_decode_noheader_buffer(char *obuffer, int olen, const char *ibuffer, int ilen);\n\n\n#ifdef __cplusplus\n}\n#endif\n\n//\n//\n////   end header file   /////////////////////////////////////////////////////\n#endif // STBI_INCLUDE_STB_IMAGE_H\n\n#ifdef STB_IMAGE_IMPLEMENTATION\n\n#if defined(STBI_ONLY_JPEG) || defined(STBI_ONLY_PNG) || defined(STBI_ONLY_BMP) \\\n  || defined(STBI_ONLY_TGA) || defined(STBI_ONLY_GIF) || defined(STBI_ONLY_PSD) \\\n  || defined(STBI_ONLY_HDR) || defined(STBI_ONLY_PIC) || defined(STBI_ONLY_PNM) \\\n  || defined(STBI_ONLY_ZLIB)\n   #ifndef STBI_ONLY_JPEG\n   #define STBI_NO_JPEG\n   #endif\n   #ifndef STBI_ONLY_PNG\n   #define STBI_NO_PNG\n   #endif\n   #ifndef STBI_ONLY_BMP\n   #define STBI_NO_BMP\n   #endif\n   #ifndef STBI_ONLY_PSD\n   #define STBI_NO_PSD\n   #endif\n   #ifndef STBI_ONLY_TGA\n   #define STBI_NO_TGA\n   #endif\n   #ifndef 
STBI_ONLY_GIF\n   #define STBI_NO_GIF\n   #endif\n   #ifndef STBI_ONLY_HDR\n   #define STBI_NO_HDR\n   #endif\n   #ifndef STBI_ONLY_PIC\n   #define STBI_NO_PIC\n   #endif\n   #ifndef STBI_ONLY_PNM\n   #define STBI_NO_PNM\n   #endif\n#endif\n\n#if defined(STBI_NO_PNG) && !defined(STBI_SUPPORT_ZLIB) && !defined(STBI_NO_ZLIB)\n#define STBI_NO_ZLIB\n#endif\n\n\n#include <stdarg.h>\n#include <stddef.h> // ptrdiff_t on osx\n#include <stdlib.h>\n#include <string.h>\n\n#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR)\n#include <math.h>  // ldexp\n#endif\n\n#ifndef STBI_NO_STDIO\n#include <stdio.h>\n#endif\n\n#ifndef STBI_ASSERT\n#include <assert.h>\n#define STBI_ASSERT(x) assert(x)\n#endif\n\n\n#ifndef _MSC_VER\n   #ifdef __cplusplus\n   #define stbi_inline inline\n   #else\n   #define stbi_inline\n   #endif\n#else\n   #define stbi_inline __forceinline\n#endif\n\n\n#ifdef _MSC_VER\ntypedef unsigned short stbi__uint16;\ntypedef   signed short stbi__int16;\ntypedef unsigned int   stbi__uint32;\ntypedef   signed int   stbi__int32;\n#else\n#include <stdint.h>\ntypedef uint16_t stbi__uint16;\ntypedef int16_t  stbi__int16;\ntypedef uint32_t stbi__uint32;\ntypedef int32_t  stbi__int32;\n#endif\n\n// should produce compiler error if size is wrong\ntypedef unsigned char validate_uint32[sizeof(stbi__uint32)==4 ? 
1 : -1];\n\n#ifdef _MSC_VER\n#define STBI_NOTUSED(v)  (void)(v)\n#else\n#define STBI_NOTUSED(v)  (void)sizeof(v)\n#endif\n\n#ifdef _MSC_VER\n#define STBI_HAS_LROTL\n#endif\n\n#ifdef STBI_HAS_LROTL\n   #define stbi_lrot(x,y)  _lrotl(x,y)\n#else\n   #define stbi_lrot(x,y)  (((x) << (y)) | ((x) >> (32 - (y))))\n#endif\n\n#if defined(STBI_MALLOC) && defined(STBI_FREE) && defined(STBI_REALLOC)\n// ok\n#elif !defined(STBI_MALLOC) && !defined(STBI_FREE) && !defined(STBI_REALLOC)\n// ok\n#else\n#error \"Must define all or none of STBI_MALLOC, STBI_FREE, and STBI_REALLOC.\"\n#endif\n\n#ifndef STBI_MALLOC\n#define STBI_MALLOC(sz)    malloc(sz)\n#define STBI_REALLOC(p,sz) realloc(p,sz)\n#define STBI_FREE(p)       free(p)\n#endif\n\n// x86/x64 detection\n#if defined(__x86_64__) || defined(_M_X64)\n#define STBI__X64_TARGET\n#elif defined(__i386) || defined(_M_IX86)\n#define STBI__X86_TARGET\n#endif\n\n#if defined(__GNUC__) && (defined(STBI__X86_TARGET) || defined(STBI__X64_TARGET)) && !defined(__SSE2__) && !defined(STBI_NO_SIMD)\n// NOTE: it's not clear whether we actually need this for the 64-bit path.\n// gcc doesn't support sse2 intrinsics unless you compile with -msse2,\n// (but compiling with -msse2 allows the compiler to use SSE2 everywhere;\n// this is just broken and gcc are jerks for not fixing it properly\n// http://www.virtualdub.org/blog/pivot/entry.php?id=363 )\n#define STBI_NO_SIMD\n#endif\n\n#if defined(__MINGW32__) && defined(STBI__X86_TARGET) && !defined(STBI_MINGW_ENABLE_SSE2) && !defined(STBI_NO_SIMD)\n// Note that __MINGW32__ doesn't actually mean 32-bit, so we have to avoid STBI__X64_TARGET\n//\n// 32-bit MinGW wants ESP to be 16-byte aligned, but this is not in the\n// Windows ABI and VC++ as well as Windows DLLs don't maintain that invariant.\n// As a result, enabling SSE2 on 32-bit MinGW is dangerous when not\n// simultaneously enabling \"-mstackrealign\".\n//\n// See https://github.com/nothings/stb/issues/81 for more information.\n//\n// So default to no SSE2 on 
32-bit MinGW. If you've read this far and added\n// -mstackrealign to your build settings, feel free to #define STBI_MINGW_ENABLE_SSE2.\n#define STBI_NO_SIMD\n#endif\n\n#if !defined(STBI_NO_SIMD) && defined(STBI__X86_TARGET)\n#define STBI_SSE2\n#include <emmintrin.h>\n\n#ifdef _MSC_VER\n\n#if _MSC_VER >= 1400  // not VC6\n#include <intrin.h> // __cpuid\nstatic int stbi__cpuid3(void)\n{\n   int info[4];\n   __cpuid(info,1);\n   return info[3];\n}\n#else\nstatic int stbi__cpuid3(void)\n{\n   int res;\n   __asm {\n      mov  eax,1\n      cpuid\n      mov  res,edx\n   }\n   return res;\n}\n#endif\n\n#define STBI_SIMD_ALIGN(type, name) __declspec(align(16)) type name\n\nstatic int stbi__sse2_available()\n{\n   int info3 = stbi__cpuid3();\n   return ((info3 >> 26) & 1) != 0;\n}\n#else // assume GCC-style if not VC++\n#define STBI_SIMD_ALIGN(type, name) type name __attribute__((aligned(16)))\n\nstatic int stbi__sse2_available()\n{\n#if defined(__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__) >= 408 // GCC 4.8 or later\n   // GCC 4.8+ has a nice way to do this\n   return __builtin_cpu_supports(\"sse2\");\n#else\n   // portable way to do this, preferably without using GCC inline ASM?\n   // just bail for now.\n   return 0;\n#endif\n}\n#endif\n#endif\n\n// ARM NEON\n#if defined(STBI_NO_SIMD) && defined(STBI_NEON)\n#undef STBI_NEON\n#endif\n\n#ifdef STBI_NEON\n#include <arm_neon.h>\n// assume GCC or Clang on ARM targets\n#define STBI_SIMD_ALIGN(type, name) type name __attribute__((aligned(16)))\n#endif\n\n#ifndef STBI_SIMD_ALIGN\n#define STBI_SIMD_ALIGN(type, name) type name\n#endif\n\n///////////////////////////////////////////////\n//\n//  stbi__context struct and start_xxx functions\n\n// stbi__context structure is our basic context used by all images, so it\n// contains all the IO context, plus some basic image information\ntypedef struct\n{\n   stbi__uint32 img_x, img_y;\n   int img_n, img_out_n;\n\n   stbi_io_callbacks io;\n   void *io_user_data;\n\n   int 
read_from_callbacks;\n   int buflen;\n   stbi_uc buffer_start[128];\n\n   stbi_uc *img_buffer, *img_buffer_end;\n   stbi_uc *img_buffer_original;\n} stbi__context;\n\n\nstatic void stbi__refill_buffer(stbi__context *s);\n\n// initialize a memory-decode context\nstatic void stbi__start_mem(stbi__context *s, stbi_uc const *buffer, int len)\n{\n   s->io.read = NULL;\n   s->read_from_callbacks = 0;\n   s->img_buffer = s->img_buffer_original = (stbi_uc *) buffer;\n   s->img_buffer_end = (stbi_uc *) buffer+len;\n}\n\n// initialize a callback-based context\nstatic void stbi__start_callbacks(stbi__context *s, stbi_io_callbacks *c, void *user)\n{\n   s->io = *c;\n   s->io_user_data = user;\n   s->buflen = sizeof(s->buffer_start);\n   s->read_from_callbacks = 1;\n   s->img_buffer_original = s->buffer_start;\n   stbi__refill_buffer(s);\n}\n\n#ifndef STBI_NO_STDIO\n\nstatic int stbi__stdio_read(void *user, char *data, int size)\n{\n   return (int) fread(data,1,size,(FILE*) user);\n}\n\nstatic void stbi__stdio_skip(void *user, int n)\n{\n   fseek((FILE*) user, n, SEEK_CUR);\n}\n\nstatic int stbi__stdio_eof(void *user)\n{\n   return feof((FILE*) user);\n}\n\nstatic stbi_io_callbacks stbi__stdio_callbacks =\n{\n   stbi__stdio_read,\n   stbi__stdio_skip,\n   stbi__stdio_eof,\n};\n\nstatic void stbi__start_file(stbi__context *s, FILE *f)\n{\n   stbi__start_callbacks(s, &stbi__stdio_callbacks, (void *) f);\n}\n\n//static void stop_file(stbi__context *s) { }\n\n#endif // !STBI_NO_STDIO\n\nstatic void stbi__rewind(stbi__context *s)\n{\n   // conceptually rewind SHOULD rewind to the beginning of the stream,\n   // but we just rewind to the beginning of the initial buffer, because\n   // we only use it after doing 'test', which only ever looks at at most 92 bytes\n   s->img_buffer = s->img_buffer_original;\n}\n\n#ifndef STBI_NO_JPEG\nstatic int      stbi__jpeg_test(stbi__context *s);\nstatic stbi_uc *stbi__jpeg_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int 
     stbi__jpeg_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PNG\nstatic int      stbi__png_test(stbi__context *s);\nstatic stbi_uc *stbi__png_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__png_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_BMP\nstatic int      stbi__bmp_test(stbi__context *s);\nstatic stbi_uc *stbi__bmp_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__bmp_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_TGA\nstatic int      stbi__tga_test(stbi__context *s);\nstatic stbi_uc *stbi__tga_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__tga_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PSD\nstatic int      stbi__psd_test(stbi__context *s);\nstatic stbi_uc *stbi__psd_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__psd_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_HDR\nstatic int      stbi__hdr_test(stbi__context *s);\nstatic float   *stbi__hdr_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__hdr_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PIC\nstatic int      stbi__pic_test(stbi__context *s);\nstatic stbi_uc *stbi__pic_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__pic_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_GIF\nstatic int      stbi__gif_test(stbi__context *s);\nstatic stbi_uc *stbi__gif_load(stbi__context *s, int *x, int *y, int *comp, int req_comp);\nstatic int      stbi__gif_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PNM\nstatic int      stbi__pnm_test(stbi__context *s);\nstatic stbi_uc *stbi__pnm_load(stbi__context *s, int *x, int *y, int *comp, int 
req_comp);\nstatic int      stbi__pnm_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n// this is not threadsafe\nstatic const char *stbi__g_failure_reason;\n\nSTBIDEF const char *stbi_failure_reason(void)\n{\n   return stbi__g_failure_reason;\n}\n\nstatic int stbi__err(const char *str)\n{\n   stbi__g_failure_reason = str;\n   return 0;\n}\n\nstatic void *stbi__malloc(size_t size)\n{\n    return STBI_MALLOC(size);\n}\n\n// stbi__err - error\n// stbi__errpf - error returning pointer to float\n// stbi__errpuc - error returning pointer to unsigned char\n\n#ifdef STBI_NO_FAILURE_STRINGS\n   #define stbi__err(x,y)  0\n#elif defined(STBI_FAILURE_USERMSG)\n   #define stbi__err(x,y)  stbi__err(y)\n#else\n   #define stbi__err(x,y)  stbi__err(x)\n#endif\n\n#define stbi__errpf(x,y)   ((float *) (stbi__err(x,y)?NULL:NULL))\n#define stbi__errpuc(x,y)  ((unsigned char *) (stbi__err(x,y)?NULL:NULL))\n\nSTBIDEF void stbi_image_free(void *retval_from_stbi_load)\n{\n   STBI_FREE(retval_from_stbi_load);\n}\n\n#ifndef STBI_NO_LINEAR\nstatic float   *stbi__ldr_to_hdr(stbi_uc *data, int x, int y, int comp);\n#endif\n\n#ifndef STBI_NO_HDR\nstatic stbi_uc *stbi__hdr_to_ldr(float   *data, int x, int y, int comp);\n#endif\n\nstatic int stbi__vertically_flip_on_load = 0;\n\nSTBIDEF void stbi_set_flip_vertically_on_load(int flag_true_if_should_flip)\n{\n    stbi__vertically_flip_on_load = flag_true_if_should_flip;\n}\n\nstatic unsigned char *stbi__load_main(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   #ifndef STBI_NO_JPEG\n   if (stbi__jpeg_test(s)) return stbi__jpeg_load(s,x,y,comp,req_comp);\n   #endif\n   #ifndef STBI_NO_PNG\n   if (stbi__png_test(s))  return stbi__png_load(s,x,y,comp,req_comp);\n   #endif\n   #ifndef STBI_NO_BMP\n   if (stbi__bmp_test(s))  return stbi__bmp_load(s,x,y,comp,req_comp);\n   #endif\n   #ifndef STBI_NO_GIF\n   if (stbi__gif_test(s))  return stbi__gif_load(s,x,y,comp,req_comp);\n   #endif\n   #ifndef STBI_NO_PSD\n   if 
(stbi__psd_test(s))  return stbi__psd_load(s,x,y,comp,req_comp);\n   #endif\n   #ifndef STBI_NO_PIC\n   if (stbi__pic_test(s))  return stbi__pic_load(s,x,y,comp,req_comp);\n   #endif\n   #ifndef STBI_NO_PNM\n   if (stbi__pnm_test(s))  return stbi__pnm_load(s,x,y,comp,req_comp);\n   #endif\n\n   #ifndef STBI_NO_HDR\n   if (stbi__hdr_test(s)) {\n      float *hdr = stbi__hdr_load(s, x,y,comp,req_comp);\n      return stbi__hdr_to_ldr(hdr, *x, *y, req_comp ? req_comp : *comp);\n   }\n   #endif\n\n   #ifndef STBI_NO_TGA\n   // test tga last because it's a crappy test!\n   if (stbi__tga_test(s))\n      return stbi__tga_load(s,x,y,comp,req_comp);\n   #endif\n\n   return stbi__errpuc(\"unknown image type\", \"Image not of any known type, or corrupt\");\n}\n\nstatic unsigned char *stbi__load_flip(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   unsigned char *result = stbi__load_main(s, x, y, comp, req_comp);\n\n   if (stbi__vertically_flip_on_load && result != NULL) {\n      int w = *x, h = *y;\n      int depth = req_comp ? req_comp : *comp;\n      int row,col,z;\n      stbi_uc temp;\n\n      // @OPTIMIZE: use a bigger temp buffer and memcpy multiple pixels at once\n      for (row = 0; row < (h>>1); row++) {\n         for (col = 0; col < w; col++) {\n            for (z = 0; z < depth; z++) {\n               temp = result[(row * w + col) * depth + z];\n               result[(row * w + col) * depth + z] = result[((h - row - 1) * w + col) * depth + z];\n               result[((h - row - 1) * w + col) * depth + z] = temp;\n            }\n         }\n      }\n   }\n\n   return result;\n}\n\nstatic void stbi__float_postprocess(float *result, int *x, int *y, int *comp, int req_comp)\n{\n   if (stbi__vertically_flip_on_load && result != NULL) {\n      int w = *x, h = *y;\n      int depth = req_comp ? 
req_comp : *comp;\n      int row,col,z;\n      float temp;\n\n      // @OPTIMIZE: use a bigger temp buffer and memcpy multiple pixels at once\n      for (row = 0; row < (h>>1); row++) {\n         for (col = 0; col < w; col++) {\n            for (z = 0; z < depth; z++) {\n               temp = result[(row * w + col) * depth + z];\n               result[(row * w + col) * depth + z] = result[((h - row - 1) * w + col) * depth + z];\n               result[((h - row - 1) * w + col) * depth + z] = temp;\n            }\n         }\n      }\n   }\n}\n\n\n#ifndef STBI_NO_STDIO\n\nstatic FILE *stbi__fopen(char const *filename, char const *mode)\n{\n   FILE *f;\n#if defined(_MSC_VER) && _MSC_VER >= 1400\n   if (0 != fopen_s(&f, filename, mode))\n      f=0;\n#else\n   f = fopen(filename, mode);\n#endif\n   return f;\n}\n\n\nSTBIDEF stbi_uc *stbi_load(char const *filename, int *x, int *y, int *comp, int req_comp)\n{\n   FILE *f = stbi__fopen(filename, \"rb\");\n   unsigned char *result;\n   if (!f) return stbi__errpuc(\"can't fopen\", \"Unable to open file\");\n   result = stbi_load_from_file(f,x,y,comp,req_comp);\n   fclose(f);\n   return result;\n}\n\nSTBIDEF stbi_uc *stbi_load_from_file(FILE *f, int *x, int *y, int *comp, int req_comp)\n{\n   unsigned char *result;\n   stbi__context s;\n   stbi__start_file(&s,f);\n   result = stbi__load_flip(&s,x,y,comp,req_comp);\n   if (result) {\n      // need to 'unget' all the characters in the IO buffer\n      fseek(f, - (int) (s.img_buffer_end - s.img_buffer), SEEK_CUR);\n   }\n   return result;\n}\n#endif //!STBI_NO_STDIO\n\nSTBIDEF stbi_uc *stbi_load_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__load_flip(&s,x,y,comp,req_comp);\n}\n\nSTBIDEF stbi_uc *stbi_load_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, 
(stbi_io_callbacks *) clbk, user);\n   return stbi__load_flip(&s,x,y,comp,req_comp);\n}\n\n#ifndef STBI_NO_LINEAR\nstatic float *stbi__loadf_main(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   unsigned char *data;\n   #ifndef STBI_NO_HDR\n   if (stbi__hdr_test(s)) {\n      float *hdr_data = stbi__hdr_load(s,x,y,comp,req_comp);\n      if (hdr_data)\n         stbi__float_postprocess(hdr_data,x,y,comp,req_comp);\n      return hdr_data;\n   }\n   #endif\n   data = stbi__load_flip(s, x, y, comp, req_comp);\n   if (data)\n      return stbi__ldr_to_hdr(data, *x, *y, req_comp ? req_comp : *comp);\n   return stbi__errpf(\"unknown image type\", \"Image not of any known type, or corrupt\");\n}\n\nSTBIDEF float *stbi_loadf_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__loadf_main(&s,x,y,comp,req_comp);\n}\n\nSTBIDEF float *stbi_loadf_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) clbk, user);\n   return stbi__loadf_main(&s,x,y,comp,req_comp);\n}\n\n#ifndef STBI_NO_STDIO\nSTBIDEF float *stbi_loadf(char const *filename, int *x, int *y, int *comp, int req_comp)\n{\n   float *result;\n   FILE *f = stbi__fopen(filename, \"rb\");\n   if (!f) return stbi__errpf(\"can't fopen\", \"Unable to open file\");\n   result = stbi_loadf_from_file(f,x,y,comp,req_comp);\n   fclose(f);\n   return result;\n}\n\nSTBIDEF float *stbi_loadf_from_file(FILE *f, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_file(&s,f);\n   return stbi__loadf_main(&s,x,y,comp,req_comp);\n}\n#endif // !STBI_NO_STDIO\n\n#endif // !STBI_NO_LINEAR\n\n// these is-hdr-or-not is defined independent of whether STBI_NO_LINEAR is\n// defined, for API simplicity; if STBI_NO_LINEAR is defined, it always\n// reports false!\n\nSTBIDEF int 
stbi_is_hdr_from_memory(stbi_uc const *buffer, int len)\n{\n   #ifndef STBI_NO_HDR\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__hdr_test(&s);\n   #else\n   STBI_NOTUSED(buffer);\n   STBI_NOTUSED(len);\n   return 0;\n   #endif\n}\n\n#ifndef STBI_NO_STDIO\nSTBIDEF int      stbi_is_hdr          (char const *filename)\n{\n   FILE *f = stbi__fopen(filename, \"rb\");\n   int result=0;\n   if (f) {\n      result = stbi_is_hdr_from_file(f);\n      fclose(f);\n   }\n   return result;\n}\n\nSTBIDEF int      stbi_is_hdr_from_file(FILE *f)\n{\n   #ifndef STBI_NO_HDR\n   stbi__context s;\n   stbi__start_file(&s,f);\n   return stbi__hdr_test(&s);\n   #else\n   return 0;\n   #endif\n}\n#endif // !STBI_NO_STDIO\n\nSTBIDEF int      stbi_is_hdr_from_callbacks(stbi_io_callbacks const *clbk, void *user)\n{\n   #ifndef STBI_NO_HDR\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) clbk, user);\n   return stbi__hdr_test(&s);\n   #else\n   return 0;\n   #endif\n}\n\nstatic float stbi__h2l_gamma_i=1.0f/2.2f, stbi__h2l_scale_i=1.0f;\nstatic float stbi__l2h_gamma=2.2f, stbi__l2h_scale=1.0f;\n\n#ifndef STBI_NO_LINEAR\nSTBIDEF void   stbi_ldr_to_hdr_gamma(float gamma) { stbi__l2h_gamma = gamma; }\nSTBIDEF void   stbi_ldr_to_hdr_scale(float scale) { stbi__l2h_scale = scale; }\n#endif\n\nSTBIDEF void   stbi_hdr_to_ldr_gamma(float gamma) { stbi__h2l_gamma_i = 1/gamma; }\nSTBIDEF void   stbi_hdr_to_ldr_scale(float scale) { stbi__h2l_scale_i = 1/scale; }\n\n\n//////////////////////////////////////////////////////////////////////////////\n//\n// Common code used by all image loaders\n//\n\nenum\n{\n   STBI__SCAN_load=0,\n   STBI__SCAN_type,\n   STBI__SCAN_header\n};\n\nstatic void stbi__refill_buffer(stbi__context *s)\n{\n   int n = (s->io.read)(s->io_user_data,(char*)s->buffer_start,s->buflen);\n   if (n == 0) {\n      // at end of file, treat same as if from memory, but need to handle case\n      // where s->img_buffer isn't pointing to safe 
memory, e.g. 0-byte file\n      s->read_from_callbacks = 0;\n      s->img_buffer = s->buffer_start;\n      s->img_buffer_end = s->buffer_start+1;\n      *s->img_buffer = 0;\n   } else {\n      s->img_buffer = s->buffer_start;\n      s->img_buffer_end = s->buffer_start + n;\n   }\n}\n\nstbi_inline static stbi_uc stbi__get8(stbi__context *s)\n{\n   if (s->img_buffer < s->img_buffer_end)\n      return *s->img_buffer++;\n   if (s->read_from_callbacks) {\n      stbi__refill_buffer(s);\n      return *s->img_buffer++;\n   }\n   return 0;\n}\n\nstbi_inline static int stbi__at_eof(stbi__context *s)\n{\n   if (s->io.read) {\n      if (!(s->io.eof)(s->io_user_data)) return 0;\n      // if feof() is true, check if buffer = end\n      // special case: we've only got the special 0 character at the end\n      if (s->read_from_callbacks == 0) return 1;\n   }\n\n   return s->img_buffer >= s->img_buffer_end;\n}\n\nstatic void stbi__skip(stbi__context *s, int n)\n{\n   if (n < 0) {\n      s->img_buffer = s->img_buffer_end;\n      return;\n   }\n   if (s->io.read) {\n      int blen = (int) (s->img_buffer_end - s->img_buffer);\n      if (blen < n) {\n         s->img_buffer = s->img_buffer_end;\n         (s->io.skip)(s->io_user_data, n - blen);\n         return;\n      }\n   }\n   s->img_buffer += n;\n}\n\nstatic int stbi__getn(stbi__context *s, stbi_uc *buffer, int n)\n{\n   if (s->io.read) {\n      int blen = (int) (s->img_buffer_end - s->img_buffer);\n      if (blen < n) {\n         int res, count;\n\n         memcpy(buffer, s->img_buffer, blen);\n\n         count = (s->io.read)(s->io_user_data, (char*) buffer + blen, n - blen);\n         res = (count == (n-blen));\n         s->img_buffer = s->img_buffer_end;\n         return res;\n      }\n   }\n\n   if (s->img_buffer+n <= s->img_buffer_end) {\n      memcpy(buffer, s->img_buffer, n);\n      s->img_buffer += n;\n      return 1;\n   } else\n      return 0;\n}\n\nstatic int stbi__get16be(stbi__context *s)\n{\n   int z = 
stbi__get8(s);\n   return (z << 8) + stbi__get8(s);\n}\n\nstatic stbi__uint32 stbi__get32be(stbi__context *s)\n{\n   stbi__uint32 z = stbi__get16be(s);\n   return (z << 16) + stbi__get16be(s);\n}\n\nstatic int stbi__get16le(stbi__context *s)\n{\n   int z = stbi__get8(s);\n   return z + (stbi__get8(s) << 8);\n}\n\nstatic stbi__uint32 stbi__get32le(stbi__context *s)\n{\n   stbi__uint32 z = stbi__get16le(s);\n   return z + (stbi__get16le(s) << 16);\n}\n\n#define STBI__BYTECAST(x)  ((stbi_uc) ((x) & 255))  // truncate int to byte without warnings\n\n\n//////////////////////////////////////////////////////////////////////////////\n//\n//  generic converter from built-in img_n to req_comp\n//    individual types do this automatically as much as possible (e.g. jpeg\n//    does all cases internally since it needs to colorspace convert anyway,\n//    and it never has alpha, so very few cases ). png can automatically\n//    interleave an alpha=255 channel, but falls back to this for other cases\n//\n//  assume data buffer is malloced, so malloc a new one and free that one\n//  only failure mode is malloc failing\n\nstatic stbi_uc stbi__compute_y(int r, int g, int b)\n{\n   return (stbi_uc) (((r*77) + (g*150) +  (29*b)) >> 8);\n}\n\nstatic unsigned char *stbi__convert_format(unsigned char *data, int img_n, int req_comp, unsigned int x, unsigned int y)\n{\n   int i,j;\n   unsigned char *good;\n\n   if (req_comp == img_n) return data;\n   STBI_ASSERT(req_comp >= 1 && req_comp <= 4);\n\n   good = (unsigned char *) stbi__malloc(req_comp * x * y);\n   if (good == NULL) {\n      STBI_FREE(data);\n      return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   }\n\n   for (j=0; j < (int) y; ++j) {\n      unsigned char *src  = data + j * x * img_n   ;\n      unsigned char *dest = good + j * x * req_comp;\n\n      #define COMBO(a,b)  ((a)*8+(b))\n      #define CASE(a,b)   case COMBO(a,b): for(i=x-1; i >= 0; --i, src += a, dest += b)\n      // convert source image with img_n components 
to one with req_comp components;\n      // avoid switch per pixel, so use switch per scanline and massive macros\n      switch (COMBO(img_n, req_comp)) {\n         CASE(1,2) dest[0]=src[0], dest[1]=255; break;\n         CASE(1,3) dest[0]=dest[1]=dest[2]=src[0]; break;\n         CASE(1,4) dest[0]=dest[1]=dest[2]=src[0], dest[3]=255; break;\n         CASE(2,1) dest[0]=src[0]; break;\n         CASE(2,3) dest[0]=dest[1]=dest[2]=src[0]; break;\n         CASE(2,4) dest[0]=dest[1]=dest[2]=src[0], dest[3]=src[1]; break;\n         CASE(3,4) dest[0]=src[0],dest[1]=src[1],dest[2]=src[2],dest[3]=255; break;\n         CASE(3,1) dest[0]=stbi__compute_y(src[0],src[1],src[2]); break;\n         CASE(3,2) dest[0]=stbi__compute_y(src[0],src[1],src[2]), dest[1] = 255; break;\n         CASE(4,1) dest[0]=stbi__compute_y(src[0],src[1],src[2]); break;\n         CASE(4,2) dest[0]=stbi__compute_y(src[0],src[1],src[2]), dest[1] = src[3]; break;\n         CASE(4,3) dest[0]=src[0],dest[1]=src[1],dest[2]=src[2]; break;\n         default: STBI_ASSERT(0);\n      }\n      #undef CASE\n   }\n\n   STBI_FREE(data);\n   return good;\n}\n\n#ifndef STBI_NO_LINEAR\nstatic float   *stbi__ldr_to_hdr(stbi_uc *data, int x, int y, int comp)\n{\n   int i,k,n;\n   float *output = (float *) stbi__malloc(x * y * comp * sizeof(float));\n   if (output == NULL) { STBI_FREE(data); return stbi__errpf(\"outofmem\", \"Out of memory\"); }\n   // compute number of non-alpha components\n   if (comp & 1) n = comp; else n = comp-1;\n   for (i=0; i < x*y; ++i) {\n      for (k=0; k < n; ++k) {\n         output[i*comp + k] = (float) (pow(data[i*comp+k]/255.0f, stbi__l2h_gamma) * stbi__l2h_scale);\n      }\n      if (k < comp) output[i*comp + k] = data[i*comp+k]/255.0f;\n   }\n   STBI_FREE(data);\n   return output;\n}\n#endif\n\n#ifndef STBI_NO_HDR\n#define stbi__float2int(x)   ((int) (x))\nstatic stbi_uc *stbi__hdr_to_ldr(float   *data, int x, int y, int comp)\n{\n   int i,k,n;\n   stbi_uc *output = (stbi_uc *) stbi__malloc(x * 
y * comp);\n   if (output == NULL) { STBI_FREE(data); return stbi__errpuc(\"outofmem\", \"Out of memory\"); }\n   // compute number of non-alpha components\n   if (comp & 1) n = comp; else n = comp-1;\n   for (i=0; i < x*y; ++i) {\n      for (k=0; k < n; ++k) {\n         float z = (float) pow(data[i*comp+k]*stbi__h2l_scale_i, stbi__h2l_gamma_i) * 255 + 0.5f;\n         if (z < 0) z = 0;\n         if (z > 255) z = 255;\n         output[i*comp + k] = (stbi_uc) stbi__float2int(z);\n      }\n      if (k < comp) {\n         float z = data[i*comp+k] * 255 + 0.5f;\n         if (z < 0) z = 0;\n         if (z > 255) z = 255;\n         output[i*comp + k] = (stbi_uc) stbi__float2int(z);\n      }\n   }\n   STBI_FREE(data);\n   return output;\n}\n#endif\n\n//////////////////////////////////////////////////////////////////////////////\n//\n//  \"baseline\" JPEG/JFIF decoder\n//\n//    simple implementation\n//      - doesn't support delayed output of y-dimension\n//      - simple interface (only one output format: 8-bit interleaved RGB)\n//      - doesn't try to recover corrupt jpegs\n//      - doesn't allow partial loading, loading multiple at once\n//      - still fast on x86 (copying globals into locals doesn't help x86)\n//      - allocates lots of intermediate memory (full size of all components)\n//        - non-interleaved case requires this anyway\n//        - allows good upsampling (see next)\n//    high-quality\n//      - upsampled channels are bilinearly interpolated, even across blocks\n//      - quality integer IDCT derived from IJG's 'slow'\n//    performance\n//      - fast huffman; reasonable integer IDCT\n//      - some SIMD kernels for common paths on targets with SSE2/NEON\n//      - uses a lot of intermediate memory, could cache poorly\n\n#ifndef STBI_NO_JPEG\n\n// huffman decoding acceleration\n#define FAST_BITS   9  // larger handles more cases; smaller stomps less cache\n\ntypedef struct\n{\n   stbi_uc  fast[1 << FAST_BITS];\n   // weirdly, repacking this 
into AoS is a 10% speed loss, instead of a win\n   stbi__uint16 code[256];\n   stbi_uc  values[256];\n   stbi_uc  size[257];\n   unsigned int maxcode[18];\n   int    delta[17];   // old 'firstsymbol' - old 'firstcode'\n} stbi__huffman;\n\ntypedef struct\n{\n   stbi__context *s;\n   stbi__huffman huff_dc[4];\n   stbi__huffman huff_ac[4];\n   stbi_uc dequant[4][64];\n   stbi__int16 fast_ac[4][1 << FAST_BITS];\n\n// sizes for components, interleaved MCUs\n   int img_h_max, img_v_max;\n   int img_mcu_x, img_mcu_y;\n   int img_mcu_w, img_mcu_h;\n\n// definition of jpeg image component\n   struct\n   {\n      int id;\n      int h,v;\n      int tq;\n      int hd,ha;\n      int dc_pred;\n\n      int x,y,w2,h2;\n      stbi_uc *data;\n      void *raw_data, *raw_coeff;\n      stbi_uc *linebuf;\n      short   *coeff;   // progressive only\n      int      coeff_w, coeff_h; // number of 8x8 coefficient blocks\n   } img_comp[4];\n\n   stbi__uint32   code_buffer; // jpeg entropy-coded buffer\n   int            code_bits;   // number of valid bits\n   unsigned char  marker;      // marker seen while filling entropy buffer\n   int            nomore;      // flag if we saw a marker so must stop\n\n   int            progressive;\n   int            spec_start;\n   int            spec_end;\n   int            succ_high;\n   int            succ_low;\n   int            eob_run;\n\n   int scan_n, order[4];\n   int restart_interval, todo;\n\n// kernels\n   void (*idct_block_kernel)(stbi_uc *out, int out_stride, short data[64]);\n   void (*YCbCr_to_RGB_kernel)(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step);\n   stbi_uc *(*resample_row_hv_2_kernel)(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs);\n} stbi__jpeg;\n\nstatic int stbi__build_huffman(stbi__huffman *h, int *count)\n{\n   int i,j,k=0,code;\n   // build size list for each symbol (from JPEG spec)\n   for (i=0; i < 16; ++i)\n      for (j=0; j < count[i]; ++j)\n         
h->size[k++] = (stbi_uc) (i+1);\n   h->size[k] = 0;\n\n   // compute actual symbols (from jpeg spec)\n   code = 0;\n   k = 0;\n   for(j=1; j <= 16; ++j) {\n      // compute delta to add to code to compute symbol id\n      h->delta[j] = k - code;\n      if (h->size[k] == j) {\n         while (h->size[k] == j)\n            h->code[k++] = (stbi__uint16) (code++);\n         if (code-1 >= (1 << j)) return stbi__err(\"bad code lengths\",\"Corrupt JPEG\");\n      }\n      // compute largest code + 1 for this size, preshifted as needed later\n      h->maxcode[j] = code << (16-j);\n      code <<= 1;\n   }\n   h->maxcode[j] = 0xffffffff;\n\n   // build non-spec acceleration table; 255 is flag for not-accelerated\n   memset(h->fast, 255, 1 << FAST_BITS);\n   for (i=0; i < k; ++i) {\n      int s = h->size[i];\n      if (s <= FAST_BITS) {\n         int c = h->code[i] << (FAST_BITS-s);\n         int m = 1 << (FAST_BITS-s);\n         for (j=0; j < m; ++j) {\n            h->fast[c+j] = (stbi_uc) i;\n         }\n      }\n   }\n   return 1;\n}\n\n// build a table that decodes both magnitude and value of small ACs in\n// one go.\nstatic void stbi__build_fast_ac(stbi__int16 *fast_ac, stbi__huffman *h)\n{\n   int i;\n   for (i=0; i < (1 << FAST_BITS); ++i) {\n      stbi_uc fast = h->fast[i];\n      fast_ac[i] = 0;\n      if (fast < 255) {\n         int rs = h->values[fast];\n         int run = (rs >> 4) & 15;\n         int magbits = rs & 15;\n         int len = h->size[fast];\n\n         if (magbits && len + magbits <= FAST_BITS) {\n            // magnitude code followed by receive_extend code\n            int k = ((i << len) & ((1 << FAST_BITS) - 1)) >> (FAST_BITS - magbits);\n            int m = 1 << (magbits - 1);\n            if (k < m) k += (-1 << magbits) + 1;\n            // if the result is small enough, we can fit it in fast_ac table\n            if (k >= -128 && k <= 127)\n               fast_ac[i] = (stbi__int16) ((k << 8) + (run << 4) + (len + magbits));\n         }\n      
}\n   }\n}\n\nstatic void stbi__grow_buffer_unsafe(stbi__jpeg *j)\n{\n   do {\n      int b = j->nomore ? 0 : stbi__get8(j->s);\n      if (b == 0xff) {\n         int c = stbi__get8(j->s);\n         if (c != 0) {\n            j->marker = (unsigned char) c;\n            j->nomore = 1;\n            return;\n         }\n      }\n      j->code_buffer |= b << (24 - j->code_bits);\n      j->code_bits += 8;\n   } while (j->code_bits <= 24);\n}\n\n// (1 << n) - 1\nstatic stbi__uint32 stbi__bmask[17]={0,1,3,7,15,31,63,127,255,511,1023,2047,4095,8191,16383,32767,65535};\n\n// decode a jpeg huffman value from the bitstream\nstbi_inline static int stbi__jpeg_huff_decode(stbi__jpeg *j, stbi__huffman *h)\n{\n   unsigned int temp;\n   int c,k;\n\n   if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n\n   // look at the top FAST_BITS and determine what symbol ID it is,\n   // if the code is <= FAST_BITS\n   c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS)-1);\n   k = h->fast[c];\n   if (k < 255) {\n      int s = h->size[k];\n      if (s > j->code_bits)\n         return -1;\n      j->code_buffer <<= s;\n      j->code_bits -= s;\n      return h->values[k];\n   }\n\n   // naive test is to shift the code_buffer down so k bits are\n   // valid, then test against maxcode. To speed this up, we've\n   // preshifted maxcode left so that it has (16-k) 0s at the\n   // end; in other words, regardless of the number of bits, it\n   // wants to be compared against something shifted to have 16;\n   // that way we don't need to shift inside the loop.\n   temp = j->code_buffer >> 16;\n   for (k=FAST_BITS+1 ; ; ++k)\n      if (temp < h->maxcode[k])\n         break;\n   if (k == 17) {\n      // error! 
code not found\n      j->code_bits -= 16;\n      return -1;\n   }\n\n   if (k > j->code_bits)\n      return -1;\n\n   // convert the huffman code to the symbol id\n   c = ((j->code_buffer >> (32 - k)) & stbi__bmask[k]) + h->delta[k];\n   STBI_ASSERT((((j->code_buffer) >> (32 - h->size[c])) & stbi__bmask[h->size[c]]) == h->code[c]);\n\n   // convert the id to a symbol\n   j->code_bits -= k;\n   j->code_buffer <<= k;\n   return h->values[c];\n}\n\n// bias[n] = (-1<<n) + 1\nstatic int const stbi__jbias[16] = {0,-1,-3,-7,-15,-31,-63,-127,-255,-511,-1023,-2047,-4095,-8191,-16383,-32767};\n\n// combined JPEG 'receive' and JPEG 'extend', since baseline\n// always extends everything it receives.\nstbi_inline static int stbi__extend_receive(stbi__jpeg *j, int n)\n{\n   unsigned int k;\n   int sgn;\n   if (j->code_bits < n) stbi__grow_buffer_unsafe(j);\n\n   sgn = (stbi__int32)j->code_buffer >> 31; // sign bit is always in MSB\n   k = stbi_lrot(j->code_buffer, n);\n   STBI_ASSERT(n >= 0 && n < (int) (sizeof(stbi__bmask)/sizeof(*stbi__bmask)));\n   j->code_buffer = k & ~stbi__bmask[n];\n   k &= stbi__bmask[n];\n   j->code_bits -= n;\n   return k + (stbi__jbias[n] & ~sgn);\n}\n\n// get some unsigned bits\nstbi_inline static int stbi__jpeg_get_bits(stbi__jpeg *j, int n)\n{\n   unsigned int k;\n   if (j->code_bits < n) stbi__grow_buffer_unsafe(j);\n   k = stbi_lrot(j->code_buffer, n);\n   j->code_buffer = k & ~stbi__bmask[n];\n   k &= stbi__bmask[n];\n   j->code_bits -= n;\n   return k;\n}\n\nstbi_inline static int stbi__jpeg_get_bit(stbi__jpeg *j)\n{\n   unsigned int k;\n   if (j->code_bits < 1) stbi__grow_buffer_unsafe(j);\n   k = j->code_buffer;\n   j->code_buffer <<= 1;\n   --j->code_bits;\n   return k & 0x80000000;\n}\n\n// given a value that's at position X in the zigzag stream,\n// where does it appear in the 8x8 matrix coded as row-major?\nstatic stbi_uc stbi__jpeg_dezigzag[64+15] =\n{\n    0,  1,  8, 16,  9,  2,  3, 10,\n   17, 24, 32, 25, 18, 11,  4,  5,\n   12, 19, 
26, 33, 40, 48, 41, 34,\n   27, 20, 13,  6,  7, 14, 21, 28,\n   35, 42, 49, 56, 57, 50, 43, 36,\n   29, 22, 15, 23, 30, 37, 44, 51,\n   58, 59, 52, 45, 38, 31, 39, 46,\n   53, 60, 61, 54, 47, 55, 62, 63,\n   // let corrupt input sample past end\n   63, 63, 63, 63, 63, 63, 63, 63,\n   63, 63, 63, 63, 63, 63, 63\n};\n\n// decode one 64-entry block--\nstatic int stbi__jpeg_decode_block(stbi__jpeg *j, short data[64], stbi__huffman *hdc, stbi__huffman *hac, stbi__int16 *fac, int b, stbi_uc *dequant)\n{\n   int diff,dc,k;\n   int t;\n\n   if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n   t = stbi__jpeg_huff_decode(j, hdc);\n   if (t < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n\n   // 0 all the ac values now so we can do it 32-bits at a time\n   memset(data,0,64*sizeof(data[0]));\n\n   diff = t ? stbi__extend_receive(j, t) : 0;\n   dc = j->img_comp[b].dc_pred + diff;\n   j->img_comp[b].dc_pred = dc;\n   data[0] = (short) (dc * dequant[0]);\n\n   // decode AC components, see JPEG spec\n   k = 1;\n   do {\n      unsigned int zig;\n      int c,r,s;\n      if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n      c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS)-1);\n      r = fac[c];\n      if (r) { // fast-AC path\n         k += (r >> 4) & 15; // run\n         s = r & 15; // combined length\n         j->code_buffer <<= s;\n         j->code_bits -= s;\n         // decode into unzigzag'd location\n         zig = stbi__jpeg_dezigzag[k++];\n         data[zig] = (short) ((r >> 8) * dequant[zig]);\n      } else {\n         int rs = stbi__jpeg_huff_decode(j, hac);\n         if (rs < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n         s = rs & 15;\n         r = rs >> 4;\n         if (s == 0) {\n            if (rs != 0xf0) break; // end block\n            k += 16;\n         } else {\n            k += r;\n            // decode into unzigzag'd location\n            zig = stbi__jpeg_dezigzag[k++];\n            data[zig] = (short) 
(stbi__extend_receive(j,s) * dequant[zig]);\n         }\n      }\n   } while (k < 64);\n   return 1;\n}\n\nstatic int stbi__jpeg_decode_block_prog_dc(stbi__jpeg *j, short data[64], stbi__huffman *hdc, int b)\n{\n   int diff,dc;\n   int t;\n   if (j->spec_end != 0) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n\n   if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n\n   if (j->succ_high == 0) {\n      // first scan for DC coefficient, must be first\n      memset(data,0,64*sizeof(data[0])); // 0 all the ac values now\n      t = stbi__jpeg_huff_decode(j, hdc);\n      diff = t ? stbi__extend_receive(j, t) : 0;\n\n      dc = j->img_comp[b].dc_pred + diff;\n      j->img_comp[b].dc_pred = dc;\n      data[0] = (short) (dc << j->succ_low);\n   } else {\n      // refinement scan for DC coefficient\n      if (stbi__jpeg_get_bit(j))\n         data[0] += (short) (1 << j->succ_low);\n   }\n   return 1;\n}\n\n// @OPTIMIZE: store non-zigzagged during the decode passes,\n// and only de-zigzag when dequantizing\nstatic int stbi__jpeg_decode_block_prog_ac(stbi__jpeg *j, short data[64], stbi__huffman *hac, stbi__int16 *fac)\n{\n   int k;\n   if (j->spec_start == 0) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n\n   if (j->succ_high == 0) {\n      int shift = j->succ_low;\n\n      if (j->eob_run) {\n         --j->eob_run;\n         return 1;\n      }\n\n      k = j->spec_start;\n      do {\n         unsigned int zig;\n         int c,r,s;\n         if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n         c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS)-1);\n         r = fac[c];\n         if (r) { // fast-AC path\n            k += (r >> 4) & 15; // run\n            s = r & 15; // combined length\n            j->code_buffer <<= s;\n            j->code_bits -= s;\n            zig = stbi__jpeg_dezigzag[k++];\n            data[zig] = (short) ((r >> 8) << shift);\n         } else {\n            int rs = stbi__jpeg_huff_decode(j, hac);\n   
         if (rs < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n            s = rs & 15;\n            r = rs >> 4;\n            if (s == 0) {\n               if (r < 15) {\n                  j->eob_run = (1 << r);\n                  if (r)\n                     j->eob_run += stbi__jpeg_get_bits(j, r);\n                  --j->eob_run;\n                  break;\n               }\n               k += 16;\n            } else {\n               k += r;\n               zig = stbi__jpeg_dezigzag[k++];\n               data[zig] = (short) (stbi__extend_receive(j,s) << shift);\n            }\n         }\n      } while (k <= j->spec_end);\n   } else {\n      // refinement scan for these AC coefficients\n\n      short bit = (short) (1 << j->succ_low);\n\n      if (j->eob_run) {\n         --j->eob_run;\n         for (k = j->spec_start; k <= j->spec_end; ++k) {\n            short *p = &data[stbi__jpeg_dezigzag[k]];\n            if (*p != 0)\n               if (stbi__jpeg_get_bit(j))\n                  if ((*p & bit)==0) {\n                     if (*p > 0)\n                        *p += bit;\n                     else\n                        *p -= bit;\n                  }\n         }\n      } else {\n         k = j->spec_start;\n         do {\n            int r,s;\n            int rs = stbi__jpeg_huff_decode(j, hac); // @OPTIMIZE see if we can use the fast path here, advance-by-r is so slow, eh\n            if (rs < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n            s = rs & 15;\n            r = rs >> 4;\n            if (s == 0) {\n               if (r < 15) {\n                  j->eob_run = (1 << r) - 1;\n                  if (r)\n                     j->eob_run += stbi__jpeg_get_bits(j, r);\n                  r = 64; // force end of block\n               } else {\n                  // r=15 s=0 should write 16 0s, so we just do\n                  // a run of 15 0s and then write s (which is 0),\n                  // so we don't have to do 
anything special here\n               }\n            } else {\n               if (s != 1) return stbi__err(\"bad huffman code\", \"Corrupt JPEG\");\n               // sign bit\n               if (stbi__jpeg_get_bit(j))\n                  s = bit;\n               else\n                  s = -bit;\n            }\n\n            // advance by r\n            while (k <= j->spec_end) {\n               short *p = &data[stbi__jpeg_dezigzag[k++]];\n               if (*p != 0) {\n                  if (stbi__jpeg_get_bit(j))\n                     if ((*p & bit)==0) {\n                        if (*p > 0)\n                           *p += bit;\n                        else\n                           *p -= bit;\n                     }\n               } else {\n                  if (r == 0) {\n                     *p = (short) s;\n                     break;\n                  }\n                  --r;\n               }\n            }\n         } while (k <= j->spec_end);\n      }\n   }\n   return 1;\n}\n\n// take a -128..127 value and stbi__clamp it and convert to 0..255\nstbi_inline static stbi_uc stbi__clamp(int x)\n{\n   // trick to use a single test to catch both cases\n   if ((unsigned int) x > 255) {\n      if (x < 0) return 0;\n      if (x > 255) return 255;\n   }\n   return (stbi_uc) x;\n}\n\n#define stbi__f2f(x)  ((int) (((x) * 4096 + 0.5)))\n#define stbi__fsh(x)  ((x) << 12)\n\n// derived from jidctint -- DCT_ISLOW\n#define STBI__IDCT_1D(s0,s1,s2,s3,s4,s5,s6,s7) \\\n   int t0,t1,t2,t3,p1,p2,p3,p4,p5,x0,x1,x2,x3; \\\n   p2 = s2;                                    \\\n   p3 = s6;                                    \\\n   p1 = (p2+p3) * stbi__f2f(0.5411961f);       \\\n   t2 = p1 + p3*stbi__f2f(-1.847759065f);      \\\n   t3 = p1 + p2*stbi__f2f( 0.765366865f);      \\\n   p2 = s0;                                    \\\n   p3 = s4;                                    \\\n   t0 = stbi__fsh(p2+p3);                      \\\n   t1 = stbi__fsh(p2-p3);                      \\\n  
 x0 = t0+t3;                                 \\\n   x3 = t0-t3;                                 \\\n   x1 = t1+t2;                                 \\\n   x2 = t1-t2;                                 \\\n   t0 = s7;                                    \\\n   t1 = s5;                                    \\\n   t2 = s3;                                    \\\n   t3 = s1;                                    \\\n   p3 = t0+t2;                                 \\\n   p4 = t1+t3;                                 \\\n   p1 = t0+t3;                                 \\\n   p2 = t1+t2;                                 \\\n   p5 = (p3+p4)*stbi__f2f( 1.175875602f);      \\\n   t0 = t0*stbi__f2f( 0.298631336f);           \\\n   t1 = t1*stbi__f2f( 2.053119869f);           \\\n   t2 = t2*stbi__f2f( 3.072711026f);           \\\n   t3 = t3*stbi__f2f( 1.501321110f);           \\\n   p1 = p5 + p1*stbi__f2f(-0.899976223f);      \\\n   p2 = p5 + p2*stbi__f2f(-2.562915447f);      \\\n   p3 = p3*stbi__f2f(-1.961570560f);           \\\n   p4 = p4*stbi__f2f(-0.390180644f);           \\\n   t3 += p1+p4;                                \\\n   t2 += p2+p3;                                \\\n   t1 += p2+p4;                                \\\n   t0 += p1+p3;\n\nstatic void stbi__idct_block(stbi_uc *out, int out_stride, short data[64])\n{\n   int i,val[64],*v=val;\n   stbi_uc *o;\n   short *d = data;\n\n   // columns\n   for (i=0; i < 8; ++i,++d, ++v) {\n      // if all zeroes, shortcut -- this avoids dequantizing 0s and IDCTing\n      if (d[ 8]==0 && d[16]==0 && d[24]==0 && d[32]==0\n           && d[40]==0 && d[48]==0 && d[56]==0) {\n         //    no shortcut                 0     seconds\n         //    (1|2|3|4|5|6|7)==0          0     seconds\n         //    all separate               -0.047 seconds\n         //    1 && 2|3 && 4|5 && 6|7:    -0.047 seconds\n         int dcterm = d[0] << 2;\n         v[0] = v[8] = v[16] = v[24] = v[32] = v[40] = v[48] = v[56] = dcterm;\n      } else {\n         
STBI__IDCT_1D(d[ 0],d[ 8],d[16],d[24],d[32],d[40],d[48],d[56])\n         // constants scaled things up by 1<<12; let's bring them back\n         // down, but keep 2 extra bits of precision\n         x0 += 512; x1 += 512; x2 += 512; x3 += 512;\n         v[ 0] = (x0+t3) >> 10;\n         v[56] = (x0-t3) >> 10;\n         v[ 8] = (x1+t2) >> 10;\n         v[48] = (x1-t2) >> 10;\n         v[16] = (x2+t1) >> 10;\n         v[40] = (x2-t1) >> 10;\n         v[24] = (x3+t0) >> 10;\n         v[32] = (x3-t0) >> 10;\n      }\n   }\n\n   for (i=0, v=val, o=out; i < 8; ++i,v+=8,o+=out_stride) {\n      // no fast case since the first 1D IDCT spread components out\n      STBI__IDCT_1D(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7])\n      // constants scaled things up by 1<<12, plus we had 1<<2 from first\n      // loop, plus horizontal and vertical each scale by sqrt(8) so together\n      // we've got an extra 1<<3, so 1<<17 total we need to remove.\n      // so we want to round that, which means adding 0.5 * 1<<17,\n      // aka 65536. Also, we'll end up with -128 to 127 that we want\n      // to encode as 0..255 by adding 128, so we'll add that before the shift\n      x0 += 65536 + (128<<17);\n      x1 += 65536 + (128<<17);\n      x2 += 65536 + (128<<17);\n      x3 += 65536 + (128<<17);\n      // tried computing the shifts into temps, or'ing the temps to see\n      // if any were out of range, but that was slower\n      o[0] = stbi__clamp((x0+t3) >> 17);\n      o[7] = stbi__clamp((x0-t3) >> 17);\n      o[1] = stbi__clamp((x1+t2) >> 17);\n      o[6] = stbi__clamp((x1-t2) >> 17);\n      o[2] = stbi__clamp((x2+t1) >> 17);\n      o[5] = stbi__clamp((x2-t1) >> 17);\n      o[3] = stbi__clamp((x3+t0) >> 17);\n      o[4] = stbi__clamp((x3-t0) >> 17);\n   }\n}\n\n#ifdef STBI_SSE2\n// sse2 integer IDCT. 
not the fastest possible implementation but it\n// produces bit-identical results to the generic C version so it's\n// fully \"transparent\".\nstatic void stbi__idct_simd(stbi_uc *out, int out_stride, short data[64])\n{\n   // This is constructed to match our regular (generic) integer IDCT exactly.\n   __m128i row0, row1, row2, row3, row4, row5, row6, row7;\n   __m128i tmp;\n\n   // dot product constant: even elems=x, odd elems=y\n   #define dct_const(x,y)  _mm_setr_epi16((x),(y),(x),(y),(x),(y),(x),(y))\n\n   // out(0) = c0[even]*x + c0[odd]*y   (c0, x, y 16-bit, out 32-bit)\n   // out(1) = c1[even]*x + c1[odd]*y\n   #define dct_rot(out0,out1, x,y,c0,c1) \\\n      __m128i c0##lo = _mm_unpacklo_epi16((x),(y)); \\\n      __m128i c0##hi = _mm_unpackhi_epi16((x),(y)); \\\n      __m128i out0##_l = _mm_madd_epi16(c0##lo, c0); \\\n      __m128i out0##_h = _mm_madd_epi16(c0##hi, c0); \\\n      __m128i out1##_l = _mm_madd_epi16(c0##lo, c1); \\\n      __m128i out1##_h = _mm_madd_epi16(c0##hi, c1)\n\n   // out = in << 12  (in 16-bit, out 32-bit)\n   #define dct_widen(out, in) \\\n      __m128i out##_l = _mm_srai_epi32(_mm_unpacklo_epi16(_mm_setzero_si128(), (in)), 4); \\\n      __m128i out##_h = _mm_srai_epi32(_mm_unpackhi_epi16(_mm_setzero_si128(), (in)), 4)\n\n   // wide add\n   #define dct_wadd(out, a, b) \\\n      __m128i out##_l = _mm_add_epi32(a##_l, b##_l); \\\n      __m128i out##_h = _mm_add_epi32(a##_h, b##_h)\n\n   // wide sub\n   #define dct_wsub(out, a, b) \\\n      __m128i out##_l = _mm_sub_epi32(a##_l, b##_l); \\\n      __m128i out##_h = _mm_sub_epi32(a##_h, b##_h)\n\n   // butterfly a/b, add bias, then shift by \"s\" and pack\n   #define dct_bfly32o(out0, out1, a,b,bias,s) \\\n      { \\\n         __m128i abiased_l = _mm_add_epi32(a##_l, bias); \\\n         __m128i abiased_h = _mm_add_epi32(a##_h, bias); \\\n         dct_wadd(sum, abiased, b); \\\n         dct_wsub(dif, abiased, b); \\\n         out0 = _mm_packs_epi32(_mm_srai_epi32(sum_l, s), 
_mm_srai_epi32(sum_h, s)); \\\n         out1 = _mm_packs_epi32(_mm_srai_epi32(dif_l, s), _mm_srai_epi32(dif_h, s)); \\\n      }\n\n   // 8-bit interleave step (for transposes)\n   #define dct_interleave8(a, b) \\\n      tmp = a; \\\n      a = _mm_unpacklo_epi8(a, b); \\\n      b = _mm_unpackhi_epi8(tmp, b)\n\n   // 16-bit interleave step (for transposes)\n   #define dct_interleave16(a, b) \\\n      tmp = a; \\\n      a = _mm_unpacklo_epi16(a, b); \\\n      b = _mm_unpackhi_epi16(tmp, b)\n\n   #define dct_pass(bias,shift) \\\n      { \\\n         /* even part */ \\\n         dct_rot(t2e,t3e, row2,row6, rot0_0,rot0_1); \\\n         __m128i sum04 = _mm_add_epi16(row0, row4); \\\n         __m128i dif04 = _mm_sub_epi16(row0, row4); \\\n         dct_widen(t0e, sum04); \\\n         dct_widen(t1e, dif04); \\\n         dct_wadd(x0, t0e, t3e); \\\n         dct_wsub(x3, t0e, t3e); \\\n         dct_wadd(x1, t1e, t2e); \\\n         dct_wsub(x2, t1e, t2e); \\\n         /* odd part */ \\\n         dct_rot(y0o,y2o, row7,row3, rot2_0,rot2_1); \\\n         dct_rot(y1o,y3o, row5,row1, rot3_0,rot3_1); \\\n         __m128i sum17 = _mm_add_epi16(row1, row7); \\\n         __m128i sum35 = _mm_add_epi16(row3, row5); \\\n         dct_rot(y4o,y5o, sum17,sum35, rot1_0,rot1_1); \\\n         dct_wadd(x4, y0o, y4o); \\\n         dct_wadd(x5, y1o, y5o); \\\n         dct_wadd(x6, y2o, y5o); \\\n         dct_wadd(x7, y3o, y4o); \\\n         dct_bfly32o(row0,row7, x0,x7,bias,shift); \\\n         dct_bfly32o(row1,row6, x1,x6,bias,shift); \\\n         dct_bfly32o(row2,row5, x2,x5,bias,shift); \\\n         dct_bfly32o(row3,row4, x3,x4,bias,shift); \\\n      }\n\n   __m128i rot0_0 = dct_const(stbi__f2f(0.5411961f), stbi__f2f(0.5411961f) + stbi__f2f(-1.847759065f));\n   __m128i rot0_1 = dct_const(stbi__f2f(0.5411961f) + stbi__f2f( 0.765366865f), stbi__f2f(0.5411961f));\n   __m128i rot1_0 = dct_const(stbi__f2f(1.175875602f) + stbi__f2f(-0.899976223f), stbi__f2f(1.175875602f));\n   __m128i rot1_1 = 
dct_const(stbi__f2f(1.175875602f), stbi__f2f(1.175875602f) + stbi__f2f(-2.562915447f));\n   __m128i rot2_0 = dct_const(stbi__f2f(-1.961570560f) + stbi__f2f( 0.298631336f), stbi__f2f(-1.961570560f));\n   __m128i rot2_1 = dct_const(stbi__f2f(-1.961570560f), stbi__f2f(-1.961570560f) + stbi__f2f( 3.072711026f));\n   __m128i rot3_0 = dct_const(stbi__f2f(-0.390180644f) + stbi__f2f( 2.053119869f), stbi__f2f(-0.390180644f));\n   __m128i rot3_1 = dct_const(stbi__f2f(-0.390180644f), stbi__f2f(-0.390180644f) + stbi__f2f( 1.501321110f));\n\n   // rounding biases in column/row passes, see stbi__idct_block for explanation.\n   __m128i bias_0 = _mm_set1_epi32(512);\n   __m128i bias_1 = _mm_set1_epi32(65536 + (128<<17));\n\n   // load\n   row0 = _mm_load_si128((const __m128i *) (data + 0*8));\n   row1 = _mm_load_si128((const __m128i *) (data + 1*8));\n   row2 = _mm_load_si128((const __m128i *) (data + 2*8));\n   row3 = _mm_load_si128((const __m128i *) (data + 3*8));\n   row4 = _mm_load_si128((const __m128i *) (data + 4*8));\n   row5 = _mm_load_si128((const __m128i *) (data + 5*8));\n   row6 = _mm_load_si128((const __m128i *) (data + 6*8));\n   row7 = _mm_load_si128((const __m128i *) (data + 7*8));\n\n   // column pass\n   dct_pass(bias_0, 10);\n\n   {\n      // 16bit 8x8 transpose pass 1\n      dct_interleave16(row0, row4);\n      dct_interleave16(row1, row5);\n      dct_interleave16(row2, row6);\n      dct_interleave16(row3, row7);\n\n      // transpose pass 2\n      dct_interleave16(row0, row2);\n      dct_interleave16(row1, row3);\n      dct_interleave16(row4, row6);\n      dct_interleave16(row5, row7);\n\n      // transpose pass 3\n      dct_interleave16(row0, row1);\n      dct_interleave16(row2, row3);\n      dct_interleave16(row4, row5);\n      dct_interleave16(row6, row7);\n   }\n\n   // row pass\n   dct_pass(bias_1, 17);\n\n   {\n      // pack\n      __m128i p0 = _mm_packus_epi16(row0, row1); // a0a1a2a3...a7b0b1b2b3...b7\n      __m128i p1 = _mm_packus_epi16(row2, row3);\n 
     __m128i p2 = _mm_packus_epi16(row4, row5);\n      __m128i p3 = _mm_packus_epi16(row6, row7);\n\n      // 8bit 8x8 transpose pass 1\n      dct_interleave8(p0, p2); // a0e0a1e1...\n      dct_interleave8(p1, p3); // c0g0c1g1...\n\n      // transpose pass 2\n      dct_interleave8(p0, p1); // a0c0e0g0...\n      dct_interleave8(p2, p3); // b0d0f0h0...\n\n      // transpose pass 3\n      dct_interleave8(p0, p2); // a0b0c0d0...\n      dct_interleave8(p1, p3); // a4b4c4d4...\n\n      // store\n      _mm_storel_epi64((__m128i *) out, p0); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p0, 0x4e)); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, p2); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p2, 0x4e)); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, p1); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p1, 0x4e)); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, p3); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p3, 0x4e));\n   }\n\n#undef dct_const\n#undef dct_rot\n#undef dct_widen\n#undef dct_wadd\n#undef dct_wsub\n#undef dct_bfly32o\n#undef dct_interleave8\n#undef dct_interleave16\n#undef dct_pass\n}\n\n#endif // STBI_SSE2\n\n#ifdef STBI_NEON\n\n// NEON integer IDCT. 
should produce bit-identical\n// results to the generic C version.\nstatic void stbi__idct_simd(stbi_uc *out, int out_stride, short data[64])\n{\n   int16x8_t row0, row1, row2, row3, row4, row5, row6, row7;\n\n   int16x4_t rot0_0 = vdup_n_s16(stbi__f2f(0.5411961f));\n   int16x4_t rot0_1 = vdup_n_s16(stbi__f2f(-1.847759065f));\n   int16x4_t rot0_2 = vdup_n_s16(stbi__f2f( 0.765366865f));\n   int16x4_t rot1_0 = vdup_n_s16(stbi__f2f( 1.175875602f));\n   int16x4_t rot1_1 = vdup_n_s16(stbi__f2f(-0.899976223f));\n   int16x4_t rot1_2 = vdup_n_s16(stbi__f2f(-2.562915447f));\n   int16x4_t rot2_0 = vdup_n_s16(stbi__f2f(-1.961570560f));\n   int16x4_t rot2_1 = vdup_n_s16(stbi__f2f(-0.390180644f));\n   int16x4_t rot3_0 = vdup_n_s16(stbi__f2f( 0.298631336f));\n   int16x4_t rot3_1 = vdup_n_s16(stbi__f2f( 2.053119869f));\n   int16x4_t rot3_2 = vdup_n_s16(stbi__f2f( 3.072711026f));\n   int16x4_t rot3_3 = vdup_n_s16(stbi__f2f( 1.501321110f));\n\n#define dct_long_mul(out, inq, coeff) \\\n   int32x4_t out##_l = vmull_s16(vget_low_s16(inq), coeff); \\\n   int32x4_t out##_h = vmull_s16(vget_high_s16(inq), coeff)\n\n#define dct_long_mac(out, acc, inq, coeff) \\\n   int32x4_t out##_l = vmlal_s16(acc##_l, vget_low_s16(inq), coeff); \\\n   int32x4_t out##_h = vmlal_s16(acc##_h, vget_high_s16(inq), coeff)\n\n#define dct_widen(out, inq) \\\n   int32x4_t out##_l = vshll_n_s16(vget_low_s16(inq), 12); \\\n   int32x4_t out##_h = vshll_n_s16(vget_high_s16(inq), 12)\n\n// wide add\n#define dct_wadd(out, a, b) \\\n   int32x4_t out##_l = vaddq_s32(a##_l, b##_l); \\\n   int32x4_t out##_h = vaddq_s32(a##_h, b##_h)\n\n// wide sub\n#define dct_wsub(out, a, b) \\\n   int32x4_t out##_l = vsubq_s32(a##_l, b##_l); \\\n   int32x4_t out##_h = vsubq_s32(a##_h, b##_h)\n\n// butterfly a/b, then shift using \"shiftop\" by \"s\" and pack\n#define dct_bfly32o(out0,out1, a,b,shiftop,s) \\\n   { \\\n      dct_wadd(sum, a, b); \\\n      dct_wsub(dif, a, b); \\\n      out0 = vcombine_s16(shiftop(sum_l, s), shiftop(sum_h, 
s)); \\\n      out1 = vcombine_s16(shiftop(dif_l, s), shiftop(dif_h, s)); \\\n   }\n\n#define dct_pass(shiftop, shift) \\\n   { \\\n      /* even part */ \\\n      int16x8_t sum26 = vaddq_s16(row2, row6); \\\n      dct_long_mul(p1e, sum26, rot0_0); \\\n      dct_long_mac(t2e, p1e, row6, rot0_1); \\\n      dct_long_mac(t3e, p1e, row2, rot0_2); \\\n      int16x8_t sum04 = vaddq_s16(row0, row4); \\\n      int16x8_t dif04 = vsubq_s16(row0, row4); \\\n      dct_widen(t0e, sum04); \\\n      dct_widen(t1e, dif04); \\\n      dct_wadd(x0, t0e, t3e); \\\n      dct_wsub(x3, t0e, t3e); \\\n      dct_wadd(x1, t1e, t2e); \\\n      dct_wsub(x2, t1e, t2e); \\\n      /* odd part */ \\\n      int16x8_t sum15 = vaddq_s16(row1, row5); \\\n      int16x8_t sum17 = vaddq_s16(row1, row7); \\\n      int16x8_t sum35 = vaddq_s16(row3, row5); \\\n      int16x8_t sum37 = vaddq_s16(row3, row7); \\\n      int16x8_t sumodd = vaddq_s16(sum17, sum35); \\\n      dct_long_mul(p5o, sumodd, rot1_0); \\\n      dct_long_mac(p1o, p5o, sum17, rot1_1); \\\n      dct_long_mac(p2o, p5o, sum35, rot1_2); \\\n      dct_long_mul(p3o, sum37, rot2_0); \\\n      dct_long_mul(p4o, sum15, rot2_1); \\\n      dct_wadd(sump13o, p1o, p3o); \\\n      dct_wadd(sump24o, p2o, p4o); \\\n      dct_wadd(sump23o, p2o, p3o); \\\n      dct_wadd(sump14o, p1o, p4o); \\\n      dct_long_mac(x4, sump13o, row7, rot3_0); \\\n      dct_long_mac(x5, sump24o, row5, rot3_1); \\\n      dct_long_mac(x6, sump23o, row3, rot3_2); \\\n      dct_long_mac(x7, sump14o, row1, rot3_3); \\\n      dct_bfly32o(row0,row7, x0,x7,shiftop,shift); \\\n      dct_bfly32o(row1,row6, x1,x6,shiftop,shift); \\\n      dct_bfly32o(row2,row5, x2,x5,shiftop,shift); \\\n      dct_bfly32o(row3,row4, x3,x4,shiftop,shift); \\\n   }\n\n   // load\n   row0 = vld1q_s16(data + 0*8);\n   row1 = vld1q_s16(data + 1*8);\n   row2 = vld1q_s16(data + 2*8);\n   row3 = vld1q_s16(data + 3*8);\n   row4 = vld1q_s16(data + 4*8);\n   row5 = vld1q_s16(data + 5*8);\n   row6 = vld1q_s16(data + 
6*8);\n   row7 = vld1q_s16(data + 7*8);\n\n   // add DC bias\n   row0 = vaddq_s16(row0, vsetq_lane_s16(1024, vdupq_n_s16(0), 0));\n\n   // column pass\n   dct_pass(vrshrn_n_s32, 10);\n\n   // 16bit 8x8 transpose\n   {\n// these three map to a single VTRN.16, VTRN.32, and VSWP, respectively.\n// whether compilers actually get this is another story, sadly.\n#define dct_trn16(x, y) { int16x8x2_t t = vtrnq_s16(x, y); x = t.val[0]; y = t.val[1]; }\n#define dct_trn32(x, y) { int32x4x2_t t = vtrnq_s32(vreinterpretq_s32_s16(x), vreinterpretq_s32_s16(y)); x = vreinterpretq_s16_s32(t.val[0]); y = vreinterpretq_s16_s32(t.val[1]); }\n#define dct_trn64(x, y) { int16x8_t x0 = x; int16x8_t y0 = y; x = vcombine_s16(vget_low_s16(x0), vget_low_s16(y0)); y = vcombine_s16(vget_high_s16(x0), vget_high_s16(y0)); }\n\n      // pass 1\n      dct_trn16(row0, row1); // a0b0a2b2a4b4a6b6\n      dct_trn16(row2, row3);\n      dct_trn16(row4, row5);\n      dct_trn16(row6, row7);\n\n      // pass 2\n      dct_trn32(row0, row2); // a0b0c0d0a4b4c4d4\n      dct_trn32(row1, row3);\n      dct_trn32(row4, row6);\n      dct_trn32(row5, row7);\n\n      // pass 3\n      dct_trn64(row0, row4); // a0b0c0d0e0f0g0h0\n      dct_trn64(row1, row5);\n      dct_trn64(row2, row6);\n      dct_trn64(row3, row7);\n\n#undef dct_trn16\n#undef dct_trn32\n#undef dct_trn64\n   }\n\n   // row pass\n   // vrshrn_n_s32 only supports shifts up to 16, we need\n   // 17. 
so do a non-rounding shift of 16 first then follow\n   // up with a rounding shift by 1.\n   dct_pass(vshrn_n_s32, 16);\n\n   {\n      // pack and round\n      uint8x8_t p0 = vqrshrun_n_s16(row0, 1);\n      uint8x8_t p1 = vqrshrun_n_s16(row1, 1);\n      uint8x8_t p2 = vqrshrun_n_s16(row2, 1);\n      uint8x8_t p3 = vqrshrun_n_s16(row3, 1);\n      uint8x8_t p4 = vqrshrun_n_s16(row4, 1);\n      uint8x8_t p5 = vqrshrun_n_s16(row5, 1);\n      uint8x8_t p6 = vqrshrun_n_s16(row6, 1);\n      uint8x8_t p7 = vqrshrun_n_s16(row7, 1);\n\n      // again, these can translate into one instruction, but often don't.\n#define dct_trn8_8(x, y) { uint8x8x2_t t = vtrn_u8(x, y); x = t.val[0]; y = t.val[1]; }\n#define dct_trn8_16(x, y) { uint16x4x2_t t = vtrn_u16(vreinterpret_u16_u8(x), vreinterpret_u16_u8(y)); x = vreinterpret_u8_u16(t.val[0]); y = vreinterpret_u8_u16(t.val[1]); }\n#define dct_trn8_32(x, y) { uint32x2x2_t t = vtrn_u32(vreinterpret_u32_u8(x), vreinterpret_u32_u8(y)); x = vreinterpret_u8_u32(t.val[0]); y = vreinterpret_u8_u32(t.val[1]); }\n\n      // sadly can't use interleaved stores here since we only write\n      // 8 bytes to each scan line!\n\n      // 8x8 8-bit transpose pass 1\n      dct_trn8_8(p0, p1);\n      dct_trn8_8(p2, p3);\n      dct_trn8_8(p4, p5);\n      dct_trn8_8(p6, p7);\n\n      // pass 2\n      dct_trn8_16(p0, p2);\n      dct_trn8_16(p1, p3);\n      dct_trn8_16(p4, p6);\n      dct_trn8_16(p5, p7);\n\n      // pass 3\n      dct_trn8_32(p0, p4);\n      dct_trn8_32(p1, p5);\n      dct_trn8_32(p2, p6);\n      dct_trn8_32(p3, p7);\n\n      // store\n      vst1_u8(out, p0); out += out_stride;\n      vst1_u8(out, p1); out += out_stride;\n      vst1_u8(out, p2); out += out_stride;\n      vst1_u8(out, p3); out += out_stride;\n      vst1_u8(out, p4); out += out_stride;\n      vst1_u8(out, p5); out += out_stride;\n      vst1_u8(out, p6); out += out_stride;\n      vst1_u8(out, p7);\n\n#undef dct_trn8_8\n#undef dct_trn8_16\n#undef dct_trn8_32\n   }\n\n#undef 
dct_long_mul\n#undef dct_long_mac\n#undef dct_widen\n#undef dct_wadd\n#undef dct_wsub\n#undef dct_bfly32o\n#undef dct_pass\n}\n\n#endif // STBI_NEON\n\n#define STBI__MARKER_none  0xff\n// if there's a pending marker from the entropy stream, return that\n// otherwise, fetch from the stream and get a marker. if there's no\n// marker, return 0xff, which is never a valid marker value\nstatic stbi_uc stbi__get_marker(stbi__jpeg *j)\n{\n   stbi_uc x;\n   if (j->marker != STBI__MARKER_none) { x = j->marker; j->marker = STBI__MARKER_none; return x; }\n   x = stbi__get8(j->s);\n   if (x != 0xff) return STBI__MARKER_none;\n   while (x == 0xff)\n      x = stbi__get8(j->s);\n   return x;\n}\n\n// in each scan, we'll have scan_n components, and the order\n// of the components is specified by order[]\n#define STBI__RESTART(x)     ((x) >= 0xd0 && (x) <= 0xd7)\n\n// after a restart interval, stbi__jpeg_reset the entropy decoder and\n// the dc prediction\nstatic void stbi__jpeg_reset(stbi__jpeg *j)\n{\n   j->code_bits = 0;\n   j->code_buffer = 0;\n   j->nomore = 0;\n   j->img_comp[0].dc_pred = j->img_comp[1].dc_pred = j->img_comp[2].dc_pred = 0;\n   j->marker = STBI__MARKER_none;\n   j->todo = j->restart_interval ? j->restart_interval : 0x7fffffff;\n   j->eob_run = 0;\n   // no more than 1<<31 MCUs if no restart_interval? 
that's plenty safe,\n   // since we don't even allow 1<<30 pixels\n}\n\nstatic int stbi__parse_entropy_coded_data(stbi__jpeg *z)\n{\n   stbi__jpeg_reset(z);\n   if (!z->progressive) {\n      if (z->scan_n == 1) {\n         int i,j;\n         STBI_SIMD_ALIGN(short, data[64]);\n         int n = z->order[0];\n         // non-interleaved data, we just need to process one block at a time,\n         // in trivial scanline order\n         // number of blocks to do just depends on how many actual \"pixels\" this\n         // component has, independent of interleaved MCU blocking and such\n         int w = (z->img_comp[n].x+7) >> 3;\n         int h = (z->img_comp[n].y+7) >> 3;\n         for (j=0; j < h; ++j) {\n            for (i=0; i < w; ++i) {\n               int ha = z->img_comp[n].ha;\n               if (!stbi__jpeg_decode_block(z, data, z->huff_dc+z->img_comp[n].hd, z->huff_ac+ha, z->fast_ac[ha], n, z->dequant[z->img_comp[n].tq])) return 0;\n               z->idct_block_kernel(z->img_comp[n].data+z->img_comp[n].w2*j*8+i*8, z->img_comp[n].w2, data);\n               // every data block is an MCU, so countdown the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);\n                  // if it's NOT a restart, then just bail, so we get corrupt data\n                  // rather than no data\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      } else { // interleaved\n         int i,j,k,x,y;\n         STBI_SIMD_ALIGN(short, data[64]);\n         for (j=0; j < z->img_mcu_y; ++j) {\n            for (i=0; i < z->img_mcu_x; ++i) {\n               // scan an interleaved mcu... 
process scan_n components in order\n               for (k=0; k < z->scan_n; ++k) {\n                  int n = z->order[k];\n                  // scan out an mcu's worth of this component; that's just determined\n                  // by the basic H and V specified for the component\n                  for (y=0; y < z->img_comp[n].v; ++y) {\n                     for (x=0; x < z->img_comp[n].h; ++x) {\n                        int x2 = (i*z->img_comp[n].h + x)*8;\n                        int y2 = (j*z->img_comp[n].v + y)*8;\n                        int ha = z->img_comp[n].ha;\n                        if (!stbi__jpeg_decode_block(z, data, z->huff_dc+z->img_comp[n].hd, z->huff_ac+ha, z->fast_ac[ha], n, z->dequant[z->img_comp[n].tq])) return 0;\n                        z->idct_block_kernel(z->img_comp[n].data+z->img_comp[n].w2*y2+x2, z->img_comp[n].w2, data);\n                     }\n                  }\n               }\n               // after all interleaved components, that's an interleaved MCU,\n               // so now count down the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      }\n   } else {\n      if (z->scan_n == 1) {\n         int i,j;\n         int n = z->order[0];\n         // non-interleaved data, we just need to process one block at a time,\n         // in trivial scanline order\n         // number of blocks to do just depends on how many actual \"pixels\" this\n         // component has, independent of interleaved MCU blocking and such\n         int w = (z->img_comp[n].x+7) >> 3;\n         int h = (z->img_comp[n].y+7) >> 3;\n         for (j=0; j < h; ++j) {\n            for (i=0; i < w; ++i) {\n               short *data = z->img_comp[n].coeff + 64 * (i + j * z->img_comp[n].coeff_w);\n               if 
(z->spec_start == 0) {\n                  if (!stbi__jpeg_decode_block_prog_dc(z, data, &z->huff_dc[z->img_comp[n].hd], n))\n                     return 0;\n               } else {\n                  int ha = z->img_comp[n].ha;\n                  if (!stbi__jpeg_decode_block_prog_ac(z, data, &z->huff_ac[ha], z->fast_ac[ha]))\n                     return 0;\n               }\n               // every data block is an MCU, so countdown the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      } else { // interleaved\n         int i,j,k,x,y;\n         for (j=0; j < z->img_mcu_y; ++j) {\n            for (i=0; i < z->img_mcu_x; ++i) {\n               // scan an interleaved mcu... process scan_n components in order\n               for (k=0; k < z->scan_n; ++k) {\n                  int n = z->order[k];\n                  // scan out an mcu's worth of this component; that's just determined\n                  // by the basic H and V specified for the component\n                  for (y=0; y < z->img_comp[n].v; ++y) {\n                     for (x=0; x < z->img_comp[n].h; ++x) {\n                        int x2 = (i*z->img_comp[n].h + x);\n                        int y2 = (j*z->img_comp[n].v + y);\n                        short *data = z->img_comp[n].coeff + 64 * (x2 + y2 * z->img_comp[n].coeff_w);\n                        if (!stbi__jpeg_decode_block_prog_dc(z, data, &z->huff_dc[z->img_comp[n].hd], n))\n                           return 0;\n                     }\n                  }\n               }\n               // after all interleaved components, that's an interleaved MCU,\n               // so now count down the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) 
stbi__grow_buffer_unsafe(z);\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      }\n   }\n}\n\nstatic void stbi__jpeg_dequantize(short *data, stbi_uc *dequant)\n{\n   int i;\n   for (i=0; i < 64; ++i)\n      data[i] *= dequant[i];\n}\n\nstatic void stbi__jpeg_finish(stbi__jpeg *z)\n{\n   if (z->progressive) {\n      // dequantize and idct the data\n      int i,j,n;\n      for (n=0; n < z->s->img_n; ++n) {\n         int w = (z->img_comp[n].x+7) >> 3;\n         int h = (z->img_comp[n].y+7) >> 3;\n         for (j=0; j < h; ++j) {\n            for (i=0; i < w; ++i) {\n               short *data = z->img_comp[n].coeff + 64 * (i + j * z->img_comp[n].coeff_w);\n               stbi__jpeg_dequantize(data, z->dequant[z->img_comp[n].tq]);\n               z->idct_block_kernel(z->img_comp[n].data+z->img_comp[n].w2*j*8+i*8, z->img_comp[n].w2, data);\n            }\n         }\n      }\n   }\n}\n\nstatic int stbi__process_marker(stbi__jpeg *z, int m)\n{\n   int L;\n   switch (m) {\n      case STBI__MARKER_none: // no marker found\n         return stbi__err(\"expected marker\",\"Corrupt JPEG\");\n\n      case 0xDD: // DRI - specify restart interval\n         if (stbi__get16be(z->s) != 4) return stbi__err(\"bad DRI len\",\"Corrupt JPEG\");\n         z->restart_interval = stbi__get16be(z->s);\n         return 1;\n\n      case 0xDB: // DQT - define quantization table\n         L = stbi__get16be(z->s)-2;\n         while (L > 0) {\n            int q = stbi__get8(z->s);\n            int p = q >> 4;\n            int t = q & 15,i;\n            if (p != 0) return stbi__err(\"bad DQT type\",\"Corrupt JPEG\");\n            if (t > 3) return stbi__err(\"bad DQT table\",\"Corrupt JPEG\");\n            for (i=0; i < 64; ++i)\n               z->dequant[t][stbi__jpeg_dezigzag[i]] = stbi__get8(z->s);\n            L -= 65;\n         }\n         return L==0;\n\n      case 0xC4: // 
DHT - define huffman table\n         L = stbi__get16be(z->s)-2;\n         while (L > 0) {\n            stbi_uc *v;\n            int sizes[16],i,n=0;\n            int q = stbi__get8(z->s);\n            int tc = q >> 4;\n            int th = q & 15;\n            if (tc > 1 || th > 3) return stbi__err(\"bad DHT header\",\"Corrupt JPEG\");\n            for (i=0; i < 16; ++i) {\n               sizes[i] = stbi__get8(z->s);\n               n += sizes[i];\n            }\n            L -= 17;\n            if (tc == 0) {\n               if (!stbi__build_huffman(z->huff_dc+th, sizes)) return 0;\n               v = z->huff_dc[th].values;\n            } else {\n               if (!stbi__build_huffman(z->huff_ac+th, sizes)) return 0;\n               v = z->huff_ac[th].values;\n            }\n            for (i=0; i < n; ++i)\n               v[i] = stbi__get8(z->s);\n            if (tc != 0)\n               stbi__build_fast_ac(z->fast_ac[th], z->huff_ac + th);\n            L -= n;\n         }\n         return L==0;\n   }\n   // check for comment block or APP blocks\n   if ((m >= 0xE0 && m <= 0xEF) || m == 0xFE) {\n      stbi__skip(z->s, stbi__get16be(z->s)-2);\n      return 1;\n   }\n   return 0;\n}\n\n// after we see SOS\nstatic int stbi__process_scan_header(stbi__jpeg *z)\n{\n   int i;\n   int Ls = stbi__get16be(z->s);\n   z->scan_n = stbi__get8(z->s);\n   if (z->scan_n < 1 || z->scan_n > 4 || z->scan_n > (int) z->s->img_n) return stbi__err(\"bad SOS component count\",\"Corrupt JPEG\");\n   if (Ls != 6+2*z->scan_n) return stbi__err(\"bad SOS len\",\"Corrupt JPEG\");\n   for (i=0; i < z->scan_n; ++i) {\n      int id = stbi__get8(z->s), which;\n      int q = stbi__get8(z->s);\n      for (which = 0; which < z->s->img_n; ++which)\n         if (z->img_comp[which].id == id)\n            break;\n      if (which == z->s->img_n) return 0; // no match\n      z->img_comp[which].hd = q >> 4;   if (z->img_comp[which].hd > 3) return stbi__err(\"bad DC huff\",\"Corrupt JPEG\");\n      
z->img_comp[which].ha = q & 15;   if (z->img_comp[which].ha > 3) return stbi__err(\"bad AC huff\",\"Corrupt JPEG\");\n      z->order[i] = which;\n   }\n\n   {\n      int aa;\n      z->spec_start = stbi__get8(z->s);\n      z->spec_end   = stbi__get8(z->s); // should be 63, but might be 0\n      aa = stbi__get8(z->s);\n      z->succ_high = (aa >> 4);\n      z->succ_low  = (aa & 15);\n      if (z->progressive) {\n         if (z->spec_start > 63 || z->spec_end > 63  || z->spec_start > z->spec_end || z->succ_high > 13 || z->succ_low > 13)\n            return stbi__err(\"bad SOS\", \"Corrupt JPEG\");\n      } else {\n         if (z->spec_start != 0) return stbi__err(\"bad SOS\",\"Corrupt JPEG\");\n         if (z->succ_high != 0 || z->succ_low != 0) return stbi__err(\"bad SOS\",\"Corrupt JPEG\");\n         z->spec_end = 63;\n      }\n   }\n\n   return 1;\n}\n\nstatic int stbi__process_frame_header(stbi__jpeg *z, int scan)\n{\n   stbi__context *s = z->s;\n   int Lf,p,i,q, h_max=1,v_max=1,c;\n   Lf = stbi__get16be(s);         if (Lf < 11) return stbi__err(\"bad SOF len\",\"Corrupt JPEG\"); // JPEG\n   p  = stbi__get8(s);            if (p != 8) return stbi__err(\"only 8-bit\",\"JPEG format not supported: 8-bit only\"); // JPEG baseline\n   s->img_y = stbi__get16be(s);   if (s->img_y == 0) return stbi__err(\"no header height\", \"JPEG format not supported: delayed height\"); // Legal, but we don't handle it--but neither does IJG\n   s->img_x = stbi__get16be(s);   if (s->img_x == 0) return stbi__err(\"0 width\",\"Corrupt JPEG\"); // JPEG requires\n   c = stbi__get8(s);\n   if (c != 3 && c != 1) return stbi__err(\"bad component count\",\"Corrupt JPEG\");    // JFIF requires\n   s->img_n = c;\n   for (i=0; i < c; ++i) {\n      z->img_comp[i].data = NULL;\n      z->img_comp[i].linebuf = NULL;\n   }\n\n   if (Lf != 8+3*s->img_n) return stbi__err(\"bad SOF len\",\"Corrupt JPEG\");\n\n   for (i=0; i < s->img_n; ++i) {\n      z->img_comp[i].id = stbi__get8(s);\n      if 
(z->img_comp[i].id != i+1)   // JFIF requires\n         if (z->img_comp[i].id != i)  // some version of jpegtran outputs non-JFIF-compliant files!\n            return stbi__err(\"bad component ID\",\"Corrupt JPEG\");\n      q = stbi__get8(s);\n      z->img_comp[i].h = (q >> 4);  if (!z->img_comp[i].h || z->img_comp[i].h > 4) return stbi__err(\"bad H\",\"Corrupt JPEG\");\n      z->img_comp[i].v = q & 15;    if (!z->img_comp[i].v || z->img_comp[i].v > 4) return stbi__err(\"bad V\",\"Corrupt JPEG\");\n      z->img_comp[i].tq = stbi__get8(s);  if (z->img_comp[i].tq > 3) return stbi__err(\"bad TQ\",\"Corrupt JPEG\");\n   }\n\n   if (scan != STBI__SCAN_load) return 1;\n\n   if ((1 << 30) / s->img_x / s->img_n < s->img_y) return stbi__err(\"too large\", \"Image too large to decode\");\n\n   for (i=0; i < s->img_n; ++i) {\n      if (z->img_comp[i].h > h_max) h_max = z->img_comp[i].h;\n      if (z->img_comp[i].v > v_max) v_max = z->img_comp[i].v;\n   }\n\n   // compute interleaved mcu info\n   z->img_h_max = h_max;\n   z->img_v_max = v_max;\n   z->img_mcu_w = h_max * 8;\n   z->img_mcu_h = v_max * 8;\n   z->img_mcu_x = (s->img_x + z->img_mcu_w-1) / z->img_mcu_w;\n   z->img_mcu_y = (s->img_y + z->img_mcu_h-1) / z->img_mcu_h;\n\n   for (i=0; i < s->img_n; ++i) {\n      // number of effective pixels (e.g. for non-interleaved MCU)\n      z->img_comp[i].x = (s->img_x * z->img_comp[i].h + h_max-1) / h_max;\n      z->img_comp[i].y = (s->img_y * z->img_comp[i].v + v_max-1) / v_max;\n      // to simplify generation, we'll allocate enough memory to decode\n      // the bogus oversized data from using interleaved MCUs and their\n      // big blocks (e.g. 
a 16x16 iMCU on an image of width 33); we won't\n      // discard the extra data until colorspace conversion\n      z->img_comp[i].w2 = z->img_mcu_x * z->img_comp[i].h * 8;\n      z->img_comp[i].h2 = z->img_mcu_y * z->img_comp[i].v * 8;\n      z->img_comp[i].raw_data = stbi__malloc(z->img_comp[i].w2 * z->img_comp[i].h2+15);\n\n      if (z->img_comp[i].raw_data == NULL) {\n         for(--i; i >= 0; --i) {\n            STBI_FREE(z->img_comp[i].raw_data);\n            z->img_comp[i].data = NULL;\n         }\n         return stbi__err(\"outofmem\", \"Out of memory\");\n      }\n      // align blocks for idct using mmx/sse\n      z->img_comp[i].data = (stbi_uc*) (((size_t) z->img_comp[i].raw_data + 15) & ~15);\n      z->img_comp[i].linebuf = NULL;\n      if (z->progressive) {\n         z->img_comp[i].coeff_w = (z->img_comp[i].w2 + 7) >> 3;\n         z->img_comp[i].coeff_h = (z->img_comp[i].h2 + 7) >> 3;\n         z->img_comp[i].raw_coeff = STBI_MALLOC(z->img_comp[i].coeff_w * z->img_comp[i].coeff_h * 64 * sizeof(short) + 15);\n         z->img_comp[i].coeff = (short*) (((size_t) z->img_comp[i].raw_coeff + 15) & ~15);\n      } else {\n         z->img_comp[i].coeff = 0;\n         z->img_comp[i].raw_coeff = 0;\n      }\n   }\n\n   return 1;\n}\n\n// use comparisons since in some cases we handle more than one case (e.g. 
SOF)\n#define stbi__DNL(x)         ((x) == 0xdc)\n#define stbi__SOI(x)         ((x) == 0xd8)\n#define stbi__EOI(x)         ((x) == 0xd9)\n#define stbi__SOF(x)         ((x) == 0xc0 || (x) == 0xc1 || (x) == 0xc2)\n#define stbi__SOS(x)         ((x) == 0xda)\n\n#define stbi__SOF_progressive(x)   ((x) == 0xc2)\n\nstatic int stbi__decode_jpeg_header(stbi__jpeg *z, int scan)\n{\n   int m;\n   z->marker = STBI__MARKER_none; // initialize cached marker to empty\n   m = stbi__get_marker(z);\n   if (!stbi__SOI(m)) return stbi__err(\"no SOI\",\"Corrupt JPEG\");\n   if (scan == STBI__SCAN_type) return 1;\n   m = stbi__get_marker(z);\n   while (!stbi__SOF(m)) {\n      if (!stbi__process_marker(z,m)) return 0;\n      m = stbi__get_marker(z);\n      while (m == STBI__MARKER_none) {\n         // some files have extra padding after their blocks, so ok, we'll scan\n         if (stbi__at_eof(z->s)) return stbi__err(\"no SOF\", \"Corrupt JPEG\");\n         m = stbi__get_marker(z);\n      }\n   }\n   z->progressive = stbi__SOF_progressive(m);\n   if (!stbi__process_frame_header(z, scan)) return 0;\n   return 1;\n}\n\n// decode image to YCbCr format\nstatic int stbi__decode_jpeg_image(stbi__jpeg *j)\n{\n   int m;\n   for (m = 0; m < 4; m++) {\n      j->img_comp[m].raw_data = NULL;\n      j->img_comp[m].raw_coeff = NULL;\n   }\n   j->restart_interval = 0;\n   if (!stbi__decode_jpeg_header(j, STBI__SCAN_load)) return 0;\n   m = stbi__get_marker(j);\n   while (!stbi__EOI(m)) {\n      if (stbi__SOS(m)) {\n         if (!stbi__process_scan_header(j)) return 0;\n         if (!stbi__parse_entropy_coded_data(j)) return 0;\n         if (j->marker == STBI__MARKER_none ) {\n            // handle 0s at the end of image data from IP Kamera 9060\n            while (!stbi__at_eof(j->s)) {\n               int x = stbi__get8(j->s);\n               if (x == 255) {\n                  j->marker = stbi__get8(j->s);\n                  break;\n               } else if (x != 0) {\n                  return 
stbi__err(\"junk before marker\", \"Corrupt JPEG\");\n               }\n            }\n            // if we reach eof without hitting a marker, stbi__get_marker() below will fail and we'll eventually return 0\n         }\n      } else {\n         if (!stbi__process_marker(j, m)) return 0;\n      }\n      m = stbi__get_marker(j);\n   }\n   if (j->progressive)\n      stbi__jpeg_finish(j);\n   return 1;\n}\n\n// static jfif-centered resampling (across block boundaries)\n\ntypedef stbi_uc *(*resample_row_func)(stbi_uc *out, stbi_uc *in0, stbi_uc *in1,\n                                    int w, int hs);\n\n#define stbi__div4(x) ((stbi_uc) ((x) >> 2))\n\nstatic stbi_uc *resample_row_1(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   STBI_NOTUSED(out);\n   STBI_NOTUSED(in_far);\n   STBI_NOTUSED(w);\n   STBI_NOTUSED(hs);\n   return in_near;\n}\n\nstatic stbi_uc* stbi__resample_row_v_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate two samples vertically for every one in input\n   int i;\n   STBI_NOTUSED(hs);\n   for (i=0; i < w; ++i)\n      out[i] = stbi__div4(3*in_near[i] + in_far[i] + 2);\n   return out;\n}\n\nstatic stbi_uc*  stbi__resample_row_h_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate two samples horizontally for every one in input\n   int i;\n   stbi_uc *input = in_near;\n\n   if (w == 1) {\n      // if only one sample, can't do any interpolation\n      out[0] = out[1] = input[0];\n      return out;\n   }\n\n   out[0] = input[0];\n   out[1] = stbi__div4(input[0]*3 + input[1] + 2);\n   for (i=1; i < w-1; ++i) {\n      int n = 3*input[i]+2;\n      out[i*2+0] = stbi__div4(n+input[i-1]);\n      out[i*2+1] = stbi__div4(n+input[i+1]);\n   }\n   out[i*2+0] = stbi__div4(input[w-2]*3 + input[w-1] + 2);\n   out[i*2+1] = input[w-1];\n\n   STBI_NOTUSED(in_far);\n   STBI_NOTUSED(hs);\n\n   return out;\n}\n\n#define stbi__div16(x) ((stbi_uc) ((x) >> 4))\n\nstatic 
stbi_uc *stbi__resample_row_hv_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate 2x2 samples for every one in input\n   int i,t0,t1;\n   if (w == 1) {\n      out[0] = out[1] = stbi__div4(3*in_near[0] + in_far[0] + 2);\n      return out;\n   }\n\n   t1 = 3*in_near[0] + in_far[0];\n   out[0] = stbi__div4(t1+2);\n   for (i=1; i < w; ++i) {\n      t0 = t1;\n      t1 = 3*in_near[i]+in_far[i];\n      out[i*2-1] = stbi__div16(3*t0 + t1 + 8);\n      out[i*2  ] = stbi__div16(3*t1 + t0 + 8);\n   }\n   out[w*2-1] = stbi__div4(t1+2);\n\n   STBI_NOTUSED(hs);\n\n   return out;\n}\n\n#if defined(STBI_SSE2) || defined(STBI_NEON)\nstatic stbi_uc *stbi__resample_row_hv_2_simd(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate 2x2 samples for every one in input\n   int i=0,t0,t1;\n\n   if (w == 1) {\n      out[0] = out[1] = stbi__div4(3*in_near[0] + in_far[0] + 2);\n      return out;\n   }\n\n   t1 = 3*in_near[0] + in_far[0];\n   // process groups of 8 pixels for as long as we can.\n   // note we can't handle the last pixel in a row in this loop\n   // because we need to handle the filter boundary conditions.\n   for (; i < ((w-1) & ~7); i += 8) {\n#if defined(STBI_SSE2)\n      // load and perform the vertical filtering pass\n      // this uses 3*x + y = 4*x + (y - x)\n      __m128i zero  = _mm_setzero_si128();\n      __m128i farb  = _mm_loadl_epi64((__m128i *) (in_far + i));\n      __m128i nearb = _mm_loadl_epi64((__m128i *) (in_near + i));\n      __m128i farw  = _mm_unpacklo_epi8(farb, zero);\n      __m128i nearw = _mm_unpacklo_epi8(nearb, zero);\n      __m128i diff  = _mm_sub_epi16(farw, nearw);\n      __m128i nears = _mm_slli_epi16(nearw, 2);\n      __m128i curr  = _mm_add_epi16(nears, diff); // current row\n\n      // horizontal filter works the same based on shifted vers of current\n      // row. 
\"prev\" is current row shifted right by 1 pixel; we need to\n      // insert the previous pixel value (from t1).\n      // \"next\" is current row shifted left by 1 pixel, with first pixel\n      // of next block of 8 pixels added in.\n      __m128i prv0 = _mm_slli_si128(curr, 2);\n      __m128i nxt0 = _mm_srli_si128(curr, 2);\n      __m128i prev = _mm_insert_epi16(prv0, t1, 0);\n      __m128i next = _mm_insert_epi16(nxt0, 3*in_near[i+8] + in_far[i+8], 7);\n\n      // horizontal filter, polyphase implementation since it's convenient:\n      // even pixels = 3*cur + prev = cur*4 + (prev - cur)\n      // odd  pixels = 3*cur + next = cur*4 + (next - cur)\n      // note the shared term.\n      __m128i bias  = _mm_set1_epi16(8);\n      __m128i curs = _mm_slli_epi16(curr, 2);\n      __m128i prvd = _mm_sub_epi16(prev, curr);\n      __m128i nxtd = _mm_sub_epi16(next, curr);\n      __m128i curb = _mm_add_epi16(curs, bias);\n      __m128i even = _mm_add_epi16(prvd, curb);\n      __m128i odd  = _mm_add_epi16(nxtd, curb);\n\n      // interleave even and odd pixels, then undo scaling.\n      __m128i int0 = _mm_unpacklo_epi16(even, odd);\n      __m128i int1 = _mm_unpackhi_epi16(even, odd);\n      __m128i de0  = _mm_srli_epi16(int0, 4);\n      __m128i de1  = _mm_srli_epi16(int1, 4);\n\n      // pack and write output\n      __m128i outv = _mm_packus_epi16(de0, de1);\n      _mm_storeu_si128((__m128i *) (out + i*2), outv);\n#elif defined(STBI_NEON)\n      // load and perform the vertical filtering pass\n      // this uses 3*x + y = 4*x + (y - x)\n      uint8x8_t farb  = vld1_u8(in_far + i);\n      uint8x8_t nearb = vld1_u8(in_near + i);\n      int16x8_t diff  = vreinterpretq_s16_u16(vsubl_u8(farb, nearb));\n      int16x8_t nears = vreinterpretq_s16_u16(vshll_n_u8(nearb, 2));\n      int16x8_t curr  = vaddq_s16(nears, diff); // current row\n\n      // horizontal filter works the same based on shifted vers of current\n      // row. 
\"prev\" is current row shifted right by 1 pixel; we need to\n      // insert the previous pixel value (from t1).\n      // \"next\" is current row shifted left by 1 pixel, with first pixel\n      // of next block of 8 pixels added in.\n      int16x8_t prv0 = vextq_s16(curr, curr, 7);\n      int16x8_t nxt0 = vextq_s16(curr, curr, 1);\n      int16x8_t prev = vsetq_lane_s16(t1, prv0, 0);\n      int16x8_t next = vsetq_lane_s16(3*in_near[i+8] + in_far[i+8], nxt0, 7);\n\n      // horizontal filter, polyphase implementation since it's convenient:\n      // even pixels = 3*cur + prev = cur*4 + (prev - cur)\n      // odd  pixels = 3*cur + next = cur*4 + (next - cur)\n      // note the shared term.\n      int16x8_t curs = vshlq_n_s16(curr, 2);\n      int16x8_t prvd = vsubq_s16(prev, curr);\n      int16x8_t nxtd = vsubq_s16(next, curr);\n      int16x8_t even = vaddq_s16(curs, prvd);\n      int16x8_t odd  = vaddq_s16(curs, nxtd);\n\n      // undo scaling and round, then store with even/odd phases interleaved\n      uint8x8x2_t o;\n      o.val[0] = vqrshrun_n_s16(even, 4);\n      o.val[1] = vqrshrun_n_s16(odd,  4);\n      vst2_u8(out + i*2, o);\n#endif\n\n      // \"previous\" value for next iter\n      t1 = 3*in_near[i+7] + in_far[i+7];\n   }\n\n   t0 = t1;\n   t1 = 3*in_near[i] + in_far[i];\n   out[i*2] = stbi__div16(3*t1 + t0 + 8);\n\n   for (++i; i < w; ++i) {\n      t0 = t1;\n      t1 = 3*in_near[i]+in_far[i];\n      out[i*2-1] = stbi__div16(3*t0 + t1 + 8);\n      out[i*2  ] = stbi__div16(3*t1 + t0 + 8);\n   }\n   out[w*2-1] = stbi__div4(t1+2);\n\n   STBI_NOTUSED(hs);\n\n   return out;\n}\n#endif\n\nstatic stbi_uc *stbi__resample_row_generic(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // resample with nearest-neighbor\n   int i,j;\n   STBI_NOTUSED(in_far);\n   for (i=0; i < w; ++i)\n      for (j=0; j < hs; ++j)\n         out[i*hs+j] = in_near[i];\n   return out;\n}\n\n#ifdef STBI_JPEG_OLD\n// this is the same YCbCr-to-RGB calculation that 
stb_image has used\n// historically before the algorithm changes in 1.49\n#define float2fixed(x)  ((int) ((x) * 65536 + 0.5))\nstatic void stbi__YCbCr_to_RGB_row(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step)\n{\n   int i;\n   for (i=0; i < count; ++i) {\n      int y_fixed = (y[i] << 16) + 32768; // rounding\n      int r,g,b;\n      int cr = pcr[i] - 128;\n      int cb = pcb[i] - 128;\n      r = y_fixed + cr*float2fixed(1.40200f);\n      g = y_fixed - cr*float2fixed(0.71414f) - cb*float2fixed(0.34414f);\n      b = y_fixed                            + cb*float2fixed(1.77200f);\n      r >>= 16;\n      g >>= 16;\n      b >>= 16;\n      if ((unsigned) r > 255) { if (r < 0) r = 0; else r = 255; }\n      if ((unsigned) g > 255) { if (g < 0) g = 0; else g = 255; }\n      if ((unsigned) b > 255) { if (b < 0) b = 0; else b = 255; }\n      out[0] = (stbi_uc)r;\n      out[1] = (stbi_uc)g;\n      out[2] = (stbi_uc)b;\n      out[3] = 255;\n      out += step;\n   }\n}\n#else\n// this is a reduced-precision calculation of YCbCr-to-RGB introduced\n// to make sure the code produces the same results in both SIMD and scalar\n#define float2fixed(x)  (((int) ((x) * 4096.0f + 0.5f)) << 8)\nstatic void stbi__YCbCr_to_RGB_row(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step)\n{\n   int i;\n   for (i=0; i < count; ++i) {\n      int y_fixed = (y[i] << 20) + (1<<19); // rounding\n      int r,g,b;\n      int cr = pcr[i] - 128;\n      int cb = pcb[i] - 128;\n      r = y_fixed +  cr* float2fixed(1.40200f);\n      g = y_fixed + (cr*-float2fixed(0.71414f)) + ((cb*-float2fixed(0.34414f)) & 0xffff0000);\n      b = y_fixed                               +   cb* float2fixed(1.77200f);\n      r >>= 20;\n      g >>= 20;\n      b >>= 20;\n      if ((unsigned) r > 255) { if (r < 0) r = 0; else r = 255; }\n      if ((unsigned) g > 255) { if (g < 0) g = 0; else g = 255; }\n      if ((unsigned) b > 255) { if (b < 0) b = 
0; else b = 255; }\n      out[0] = (stbi_uc)r;\n      out[1] = (stbi_uc)g;\n      out[2] = (stbi_uc)b;\n      out[3] = 255;\n      out += step;\n   }\n}\n#endif\n\n#if defined(STBI_SSE2) || defined(STBI_NEON)\nstatic void stbi__YCbCr_to_RGB_simd(stbi_uc *out, stbi_uc const *y, stbi_uc const *pcb, stbi_uc const *pcr, int count, int step)\n{\n   int i = 0;\n\n#ifdef STBI_SSE2\n   // step == 3 is pretty ugly on the final interleave, and i'm not convinced\n   // it's useful in practice (you wouldn't use it for textures, for example).\n   // so just accelerate step == 4 case.\n   if (step == 4) {\n      // this is a fairly straightforward implementation and not super-optimized.\n      __m128i signflip  = _mm_set1_epi8(-0x80);\n      __m128i cr_const0 = _mm_set1_epi16(   (short) ( 1.40200f*4096.0f+0.5f));\n      __m128i cr_const1 = _mm_set1_epi16( - (short) ( 0.71414f*4096.0f+0.5f));\n      __m128i cb_const0 = _mm_set1_epi16( - (short) ( 0.34414f*4096.0f+0.5f));\n      __m128i cb_const1 = _mm_set1_epi16(   (short) ( 1.77200f*4096.0f+0.5f));\n      __m128i y_bias = _mm_set1_epi8((char) (unsigned char) 128);\n      __m128i xw = _mm_set1_epi16(255); // alpha channel\n\n      for (; i+7 < count; i += 8) {\n         // load\n         __m128i y_bytes = _mm_loadl_epi64((__m128i *) (y+i));\n         __m128i cr_bytes = _mm_loadl_epi64((__m128i *) (pcr+i));\n         __m128i cb_bytes = _mm_loadl_epi64((__m128i *) (pcb+i));\n         __m128i cr_biased = _mm_xor_si128(cr_bytes, signflip); // -128\n         __m128i cb_biased = _mm_xor_si128(cb_bytes, signflip); // -128\n\n         // unpack to short (and left-shift cr, cb by 8)\n         __m128i yw  = _mm_unpacklo_epi8(y_bias, y_bytes);\n         __m128i crw = _mm_unpacklo_epi8(_mm_setzero_si128(), cr_biased);\n         __m128i cbw = _mm_unpacklo_epi8(_mm_setzero_si128(), cb_biased);\n\n         // color transform\n         __m128i yws = _mm_srli_epi16(yw, 4);\n         __m128i cr0 = _mm_mulhi_epi16(cr_const0, crw);\n         __m128i 
cb0 = _mm_mulhi_epi16(cb_const0, cbw);\n         __m128i cb1 = _mm_mulhi_epi16(cbw, cb_const1);\n         __m128i cr1 = _mm_mulhi_epi16(crw, cr_const1);\n         __m128i rws = _mm_add_epi16(cr0, yws);\n         __m128i gwt = _mm_add_epi16(cb0, yws);\n         __m128i bws = _mm_add_epi16(yws, cb1);\n         __m128i gws = _mm_add_epi16(gwt, cr1);\n\n         // descale\n         __m128i rw = _mm_srai_epi16(rws, 4);\n         __m128i bw = _mm_srai_epi16(bws, 4);\n         __m128i gw = _mm_srai_epi16(gws, 4);\n\n         // back to byte, set up for transpose\n         __m128i brb = _mm_packus_epi16(rw, bw);\n         __m128i gxb = _mm_packus_epi16(gw, xw);\n\n         // transpose to interleave channels\n         __m128i t0 = _mm_unpacklo_epi8(brb, gxb);\n         __m128i t1 = _mm_unpackhi_epi8(brb, gxb);\n         __m128i o0 = _mm_unpacklo_epi16(t0, t1);\n         __m128i o1 = _mm_unpackhi_epi16(t0, t1);\n\n         // store\n         _mm_storeu_si128((__m128i *) (out + 0), o0);\n         _mm_storeu_si128((__m128i *) (out + 16), o1);\n         out += 32;\n      }\n   }\n#endif\n\n#ifdef STBI_NEON\n   // in this version, step=3 support would be easy to add. 
but is there demand?\n   if (step == 4) {\n      // this is a fairly straightforward implementation and not super-optimized.\n      uint8x8_t signflip = vdup_n_u8(0x80);\n      int16x8_t cr_const0 = vdupq_n_s16(   (short) ( 1.40200f*4096.0f+0.5f));\n      int16x8_t cr_const1 = vdupq_n_s16( - (short) ( 0.71414f*4096.0f+0.5f));\n      int16x8_t cb_const0 = vdupq_n_s16( - (short) ( 0.34414f*4096.0f+0.5f));\n      int16x8_t cb_const1 = vdupq_n_s16(   (short) ( 1.77200f*4096.0f+0.5f));\n\n      for (; i+7 < count; i += 8) {\n         // load\n         uint8x8_t y_bytes  = vld1_u8(y + i);\n         uint8x8_t cr_bytes = vld1_u8(pcr + i);\n         uint8x8_t cb_bytes = vld1_u8(pcb + i);\n         int8x8_t cr_biased = vreinterpret_s8_u8(vsub_u8(cr_bytes, signflip));\n         int8x8_t cb_biased = vreinterpret_s8_u8(vsub_u8(cb_bytes, signflip));\n\n         // expand to s16\n         int16x8_t yws = vreinterpretq_s16_u16(vshll_n_u8(y_bytes, 4));\n         int16x8_t crw = vshll_n_s8(cr_biased, 7);\n         int16x8_t cbw = vshll_n_s8(cb_biased, 7);\n\n         // color transform\n         int16x8_t cr0 = vqdmulhq_s16(crw, cr_const0);\n         int16x8_t cb0 = vqdmulhq_s16(cbw, cb_const0);\n         int16x8_t cr1 = vqdmulhq_s16(crw, cr_const1);\n         int16x8_t cb1 = vqdmulhq_s16(cbw, cb_const1);\n         int16x8_t rws = vaddq_s16(yws, cr0);\n         int16x8_t gws = vaddq_s16(vaddq_s16(yws, cb0), cr1);\n         int16x8_t bws = vaddq_s16(yws, cb1);\n\n         // undo scaling, round, convert to byte\n         uint8x8x4_t o;\n         o.val[0] = vqrshrun_n_s16(rws, 4);\n         o.val[1] = vqrshrun_n_s16(gws, 4);\n         o.val[2] = vqrshrun_n_s16(bws, 4);\n         o.val[3] = vdup_n_u8(255);\n\n         // store, interleaving r/g/b/a\n         vst4_u8(out, o);\n         out += 8*4;\n      }\n   }\n#endif\n\n   for (; i < count; ++i) {\n      int y_fixed = (y[i] << 20) + (1<<19); // rounding\n      int r,g,b;\n      int cr = pcr[i] - 128;\n      int cb = pcb[i] - 128;\n   
   r = y_fixed + cr* float2fixed(1.40200f);\n      g = y_fixed + cr*-float2fixed(0.71414f) + ((cb*-float2fixed(0.34414f)) & 0xffff0000);\n      b = y_fixed                             +   cb* float2fixed(1.77200f);\n      r >>= 20;\n      g >>= 20;\n      b >>= 20;\n      if ((unsigned) r > 255) { if (r < 0) r = 0; else r = 255; }\n      if ((unsigned) g > 255) { if (g < 0) g = 0; else g = 255; }\n      if ((unsigned) b > 255) { if (b < 0) b = 0; else b = 255; }\n      out[0] = (stbi_uc)r;\n      out[1] = (stbi_uc)g;\n      out[2] = (stbi_uc)b;\n      out[3] = 255;\n      out += step;\n   }\n}\n#endif\n\n// set up the kernels\nstatic void stbi__setup_jpeg(stbi__jpeg *j)\n{\n   j->idct_block_kernel = stbi__idct_block;\n   j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_row;\n   j->resample_row_hv_2_kernel = stbi__resample_row_hv_2;\n\n#ifdef STBI_SSE2\n   if (stbi__sse2_available()) {\n      j->idct_block_kernel = stbi__idct_simd;\n      #ifndef STBI_JPEG_OLD\n      j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_simd;\n      #endif\n      j->resample_row_hv_2_kernel = stbi__resample_row_hv_2_simd;\n   }\n#endif\n\n#ifdef STBI_NEON\n   j->idct_block_kernel = stbi__idct_simd;\n   #ifndef STBI_JPEG_OLD\n   j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_simd;\n   #endif\n   j->resample_row_hv_2_kernel = stbi__resample_row_hv_2_simd;\n#endif\n}\n\n// clean up the temporary component buffers\nstatic void stbi__cleanup_jpeg(stbi__jpeg *j)\n{\n   int i;\n   for (i=0; i < j->s->img_n; ++i) {\n      if (j->img_comp[i].raw_data) {\n         STBI_FREE(j->img_comp[i].raw_data);\n         j->img_comp[i].raw_data = NULL;\n         j->img_comp[i].data = NULL;\n      }\n      if (j->img_comp[i].raw_coeff) {\n         STBI_FREE(j->img_comp[i].raw_coeff);\n         j->img_comp[i].raw_coeff = 0;\n         j->img_comp[i].coeff = 0;\n      }\n      if (j->img_comp[i].linebuf) {\n         STBI_FREE(j->img_comp[i].linebuf);\n         j->img_comp[i].linebuf = NULL;\n      }\n   }\n}\n\ntypedef 
struct\n{\n   resample_row_func resample;\n   stbi_uc *line0,*line1;\n   int hs,vs;   // expansion factor in each axis\n   int w_lores; // horizontal pixels pre-expansion\n   int ystep;   // how far through vertical expansion we are\n   int ypos;    // which pre-expansion row we're on\n} stbi__resample;\n\nstatic stbi_uc *load_jpeg_image(stbi__jpeg *z, int *out_x, int *out_y, int *comp, int req_comp)\n{\n   int n, decode_n;\n   z->s->img_n = 0; // make stbi__cleanup_jpeg safe\n\n   // validate req_comp\n   if (req_comp < 0 || req_comp > 4) return stbi__errpuc(\"bad req_comp\", \"Internal error\");\n\n   // load a jpeg image from whichever source, but leave in YCbCr format\n   if (!stbi__decode_jpeg_image(z)) { stbi__cleanup_jpeg(z); return NULL; }\n\n   // determine actual number of components to generate\n   n = req_comp ? req_comp : z->s->img_n;\n\n   if (z->s->img_n == 3 && n < 3)\n      decode_n = 1;\n   else\n      decode_n = z->s->img_n;\n\n   // resample and color-convert\n   {\n      int k;\n      unsigned int i,j;\n      stbi_uc *output;\n      stbi_uc *coutput[4];\n\n      stbi__resample res_comp[4];\n\n      for (k=0; k < decode_n; ++k) {\n         stbi__resample *r = &res_comp[k];\n\n         // allocate line buffer big enough for upsampling off the edges\n         // with upsample factor of 4\n         z->img_comp[k].linebuf = (stbi_uc *) stbi__malloc(z->s->img_x + 3);\n         if (!z->img_comp[k].linebuf) { stbi__cleanup_jpeg(z); return stbi__errpuc(\"outofmem\", \"Out of memory\"); }\n\n         r->hs      = z->img_h_max / z->img_comp[k].h;\n         r->vs      = z->img_v_max / z->img_comp[k].v;\n         r->ystep   = r->vs >> 1;\n         r->w_lores = (z->s->img_x + r->hs-1) / r->hs;\n         r->ypos    = 0;\n         r->line0   = r->line1 = z->img_comp[k].data;\n\n         if      (r->hs == 1 && r->vs == 1) r->resample = resample_row_1;\n         else if (r->hs == 1 && r->vs == 2) r->resample = stbi__resample_row_v_2;\n         else if (r->hs == 
2 && r->vs == 1) r->resample = stbi__resample_row_h_2;\n         else if (r->hs == 2 && r->vs == 2) r->resample = z->resample_row_hv_2_kernel;\n         else                               r->resample = stbi__resample_row_generic;\n      }\n\n      // can't error after this, so this is safe\n      output = (stbi_uc *) stbi__malloc(n * z->s->img_x * z->s->img_y + 1);\n      if (!output) { stbi__cleanup_jpeg(z); return stbi__errpuc(\"outofmem\", \"Out of memory\"); }\n\n      // now go ahead and resample\n      for (j=0; j < z->s->img_y; ++j) {\n         stbi_uc *out = output + n * z->s->img_x * j;\n         for (k=0; k < decode_n; ++k) {\n            stbi__resample *r = &res_comp[k];\n            int y_bot = r->ystep >= (r->vs >> 1);\n            coutput[k] = r->resample(z->img_comp[k].linebuf,\n                                     y_bot ? r->line1 : r->line0,\n                                     y_bot ? r->line0 : r->line1,\n                                     r->w_lores, r->hs);\n            if (++r->ystep >= r->vs) {\n               r->ystep = 0;\n               r->line0 = r->line1;\n               if (++r->ypos < z->img_comp[k].y)\n                  r->line1 += z->img_comp[k].w2;\n            }\n         }\n         if (n >= 3) {\n            stbi_uc *y = coutput[0];\n            if (z->s->img_n == 3) {\n               z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);\n            } else\n               for (i=0; i < z->s->img_x; ++i) {\n                  out[0] = out[1] = out[2] = y[i];\n                  out[3] = 255; // not used if n==3\n                  out += n;\n               }\n         } else {\n            stbi_uc *y = coutput[0];\n            if (n == 1)\n               for (i=0; i < z->s->img_x; ++i) out[i] = y[i];\n            else\n               for (i=0; i < z->s->img_x; ++i) *out++ = y[i], *out++ = 255;\n         }\n      }\n      stbi__cleanup_jpeg(z);\n      *out_x = z->s->img_x;\n      *out_y = z->s->img_y;\n      if 
(comp) *comp  = z->s->img_n; // report original components, not output\n      return output;\n   }\n}\n\nstatic unsigned char *stbi__jpeg_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__jpeg j;\n   j.s = s;\n   stbi__setup_jpeg(&j);\n   return load_jpeg_image(&j, x,y,comp,req_comp);\n}\n\nstatic int stbi__jpeg_test(stbi__context *s)\n{\n   int r;\n   stbi__jpeg j;\n   j.s = s;\n   stbi__setup_jpeg(&j);\n   r = stbi__decode_jpeg_header(&j, STBI__SCAN_type);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic int stbi__jpeg_info_raw(stbi__jpeg *j, int *x, int *y, int *comp)\n{\n   if (!stbi__decode_jpeg_header(j, STBI__SCAN_header)) {\n      stbi__rewind( j->s );\n      return 0;\n   }\n   if (x) *x = j->s->img_x;\n   if (y) *y = j->s->img_y;\n   if (comp) *comp = j->s->img_n;\n   return 1;\n}\n\nstatic int stbi__jpeg_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   stbi__jpeg j;\n   j.s = s;\n   return stbi__jpeg_info_raw(&j, x, y, comp);\n}\n#endif\n\n// public domain zlib decode    v0.2  Sean Barrett 2006-11-18\n//    simple implementation\n//      - all input must be provided in an upfront buffer\n//      - all output is written to a single output buffer (can malloc/realloc)\n//    performance\n//      - fast huffman\n\n#ifndef STBI_NO_ZLIB\n\n// fast-way is faster to check than jpeg huffman, but slow way is slower\n#define STBI__ZFAST_BITS  9 // accelerate all cases in default tables\n#define STBI__ZFAST_MASK  ((1 << STBI__ZFAST_BITS) - 1)\n\n// zlib-style huffman encoding\n// (jpegs packs from left, zlib from right, so can't share code)\ntypedef struct\n{\n   stbi__uint16 fast[1 << STBI__ZFAST_BITS];\n   stbi__uint16 firstcode[16];\n   int maxcode[17];\n   stbi__uint16 firstsymbol[16];\n   stbi_uc  size[288];\n   stbi__uint16 value[288];\n} stbi__zhuffman;\n\nstbi_inline static int stbi__bitreverse16(int n)\n{\n  n = ((n & 0xAAAA) >>  1) | ((n & 0x5555) << 1);\n  n = ((n & 0xCCCC) >>  2) | ((n & 0x3333) << 2);\n  n = ((n & 
0xF0F0) >>  4) | ((n & 0x0F0F) << 4);\n  n = ((n & 0xFF00) >>  8) | ((n & 0x00FF) << 8);\n  return n;\n}\n\nstbi_inline static int stbi__bit_reverse(int v, int bits)\n{\n   STBI_ASSERT(bits <= 16);\n   // to bit reverse n bits, reverse 16 and shift\n   // e.g. 11 bits, bit reverse and shift away 5\n   return stbi__bitreverse16(v) >> (16-bits);\n}\n\nstatic int stbi__zbuild_huffman(stbi__zhuffman *z, stbi_uc *sizelist, int num)\n{\n   int i,k=0;\n   int code, next_code[16], sizes[17];\n\n   // DEFLATE spec for generating codes\n   memset(sizes, 0, sizeof(sizes));\n   memset(z->fast, 0, sizeof(z->fast));\n   for (i=0; i < num; ++i)\n      ++sizes[sizelist[i]];\n   sizes[0] = 0;\n   for (i=1; i < 16; ++i)\n      if (sizes[i] > (1 << i))\n         return stbi__err(\"bad sizes\", \"Corrupt PNG\");\n   code = 0;\n   for (i=1; i < 16; ++i) {\n      next_code[i] = code;\n      z->firstcode[i] = (stbi__uint16) code;\n      z->firstsymbol[i] = (stbi__uint16) k;\n      code = (code + sizes[i]);\n      if (sizes[i])\n         if (code-1 >= (1 << i)) return stbi__err(\"bad codelengths\",\"Corrupt PNG\");\n      z->maxcode[i] = code << (16-i); // preshift for inner loop\n      code <<= 1;\n      k += sizes[i];\n   }\n   z->maxcode[16] = 0x10000; // sentinel\n   for (i=0; i < num; ++i) {\n      int s = sizelist[i];\n      if (s) {\n         int c = next_code[s] - z->firstcode[s] + z->firstsymbol[s];\n         stbi__uint16 fastv = (stbi__uint16) ((s << 9) | i);\n         z->size [c] = (stbi_uc     ) s;\n         z->value[c] = (stbi__uint16) i;\n         if (s <= STBI__ZFAST_BITS) {\n            int k = stbi__bit_reverse(next_code[s],s);\n            while (k < (1 << STBI__ZFAST_BITS)) {\n               z->fast[k] = fastv;\n               k += (1 << s);\n            }\n         }\n         ++next_code[s];\n      }\n   }\n   return 1;\n}\n\n// zlib-from-memory implementation for PNG reading\n//    because PNG allows splitting the zlib stream arbitrarily,\n//    and it's annoying 
structurally to have PNG call ZLIB call PNG,\n//    we require PNG read all the IDATs and combine them into a single\n//    memory buffer\n\ntypedef struct\n{\n   stbi_uc *zbuffer, *zbuffer_end;\n   int num_bits;\n   stbi__uint32 code_buffer;\n\n   char *zout;\n   char *zout_start;\n   char *zout_end;\n   int   z_expandable;\n\n   stbi__zhuffman z_length, z_distance;\n} stbi__zbuf;\n\nstbi_inline static stbi_uc stbi__zget8(stbi__zbuf *z)\n{\n   if (z->zbuffer >= z->zbuffer_end) return 0;\n   return *z->zbuffer++;\n}\n\nstatic void stbi__fill_bits(stbi__zbuf *z)\n{\n   do {\n      STBI_ASSERT(z->code_buffer < (1U << z->num_bits));\n      z->code_buffer |= stbi__zget8(z) << z->num_bits;\n      z->num_bits += 8;\n   } while (z->num_bits <= 24);\n}\n\nstbi_inline static unsigned int stbi__zreceive(stbi__zbuf *z, int n)\n{\n   unsigned int k;\n   if (z->num_bits < n) stbi__fill_bits(z);\n   k = z->code_buffer & ((1 << n) - 1);\n   z->code_buffer >>= n;\n   z->num_bits -= n;\n   return k;\n}\n\nstatic int stbi__zhuffman_decode_slowpath(stbi__zbuf *a, stbi__zhuffman *z)\n{\n   int b,s,k;\n   // not resolved by fast table, so compute it the slow way\n   // use jpeg approach, which requires MSbits at top\n   k = stbi__bit_reverse(a->code_buffer, 16);\n   for (s=STBI__ZFAST_BITS+1; ; ++s)\n      if (k < z->maxcode[s])\n         break;\n   if (s == 16) return -1; // invalid code!\n   // code size is s, so:\n   b = (k >> (16-s)) - z->firstcode[s] + z->firstsymbol[s];\n   STBI_ASSERT(z->size[b] == s);\n   a->code_buffer >>= s;\n   a->num_bits -= s;\n   return z->value[b];\n}\n\nstbi_inline static int stbi__zhuffman_decode(stbi__zbuf *a, stbi__zhuffman *z)\n{\n   int b,s;\n   if (a->num_bits < 16) stbi__fill_bits(a);\n   b = z->fast[a->code_buffer & STBI__ZFAST_MASK];\n   if (b) {\n      s = b >> 9;\n      a->code_buffer >>= s;\n      a->num_bits -= s;\n      return b & 511;\n   }\n   return stbi__zhuffman_decode_slowpath(a, z);\n}\n\nstatic int stbi__zexpand(stbi__zbuf *z, char 
*zout, int n)  // need to make room for n bytes\n{\n   char *q;\n   int cur, limit;\n   z->zout = zout;\n   if (!z->z_expandable) return stbi__err(\"output buffer limit\",\"Corrupt PNG\");\n   cur   = (int) (z->zout     - z->zout_start);\n   limit = (int) (z->zout_end - z->zout_start);\n   while (cur + n > limit)\n      limit *= 2;\n   q = (char *) STBI_REALLOC(z->zout_start, limit);\n   if (q == NULL) return stbi__err(\"outofmem\", \"Out of memory\");\n   z->zout_start = q;\n   z->zout       = q + cur;\n   z->zout_end   = q + limit;\n   return 1;\n}\n\nstatic int stbi__zlength_base[31] = {\n   3,4,5,6,7,8,9,10,11,13,\n   15,17,19,23,27,31,35,43,51,59,\n   67,83,99,115,131,163,195,227,258,0,0 };\n\nstatic int stbi__zlength_extra[31]=\n{ 0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,0,0 };\n\nstatic int stbi__zdist_base[32] = { 1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,\n257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577,0,0};\n\nstatic int stbi__zdist_extra[32] =\n{ 0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13};\n\nstatic int stbi__parse_huffman_block(stbi__zbuf *a)\n{\n   char *zout = a->zout;\n   for(;;) {\n      int z = stbi__zhuffman_decode(a, &a->z_length);\n      if (z < 256) {\n         if (z < 0) return stbi__err(\"bad huffman code\",\"Corrupt PNG\"); // error in huffman codes\n         if (zout >= a->zout_end) {\n            if (!stbi__zexpand(a, zout, 1)) return 0;\n            zout = a->zout;\n         }\n         *zout++ = (char) z;\n      } else {\n         stbi_uc *p;\n         int len,dist;\n         if (z == 256) {\n            a->zout = zout;\n            return 1;\n         }\n         z -= 257;\n         len = stbi__zlength_base[z];\n         if (stbi__zlength_extra[z]) len += stbi__zreceive(a, stbi__zlength_extra[z]);\n         z = stbi__zhuffman_decode(a, &a->z_distance);\n         if (z < 0) return stbi__err(\"bad huffman code\",\"Corrupt PNG\");\n         dist = stbi__zdist_base[z];\n      
   if (stbi__zdist_extra[z]) dist += stbi__zreceive(a, stbi__zdist_extra[z]);\n         if (zout - a->zout_start < dist) return stbi__err(\"bad dist\",\"Corrupt PNG\");\n         if (zout + len > a->zout_end) {\n            if (!stbi__zexpand(a, zout, len)) return 0;\n            zout = a->zout;\n         }\n         p = (stbi_uc *) (zout - dist);\n         if (dist == 1) { // run of one byte; common in images.\n            stbi_uc v = *p;\n            if (len) { do *zout++ = v; while (--len); }\n         } else {\n            if (len) { do *zout++ = *p++; while (--len); }\n         }\n      }\n   }\n}\n\nstatic int stbi__compute_huffman_codes(stbi__zbuf *a)\n{\n   static stbi_uc length_dezigzag[19] = { 16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15 };\n   stbi__zhuffman z_codelength;\n   stbi_uc lencodes[286+32+137];//padding for maximum single op\n   stbi_uc codelength_sizes[19];\n   int i,n;\n\n   int hlit  = stbi__zreceive(a,5) + 257;\n   int hdist = stbi__zreceive(a,5) + 1;\n   int hclen = stbi__zreceive(a,4) + 4;\n\n   memset(codelength_sizes, 0, sizeof(codelength_sizes));\n   for (i=0; i < hclen; ++i) {\n      int s = stbi__zreceive(a,3);\n      codelength_sizes[length_dezigzag[i]] = (stbi_uc) s;\n   }\n   if (!stbi__zbuild_huffman(&z_codelength, codelength_sizes, 19)) return 0;\n\n   n = 0;\n   while (n < hlit + hdist) {\n      int c = stbi__zhuffman_decode(a, &z_codelength);\n      if (c < 0 || c >= 19) return stbi__err(\"bad codelengths\", \"Corrupt PNG\");\n      if (c < 16)\n         lencodes[n++] = (stbi_uc) c;\n      else if (c == 16) {\n         c = stbi__zreceive(a,2)+3;\n         memset(lencodes+n, lencodes[n-1], c);\n         n += c;\n      } else if (c == 17) {\n         c = stbi__zreceive(a,3)+3;\n         memset(lencodes+n, 0, c);\n         n += c;\n      } else {\n         STBI_ASSERT(c == 18);\n         c = stbi__zreceive(a,7)+11;\n         memset(lencodes+n, 0, c);\n         n += c;\n      }\n   }\n   if (n != hlit+hdist) return 
stbi__err(\"bad codelengths\",\"Corrupt PNG\");\n   if (!stbi__zbuild_huffman(&a->z_length, lencodes, hlit)) return 0;\n   if (!stbi__zbuild_huffman(&a->z_distance, lencodes+hlit, hdist)) return 0;\n   return 1;\n}\n\nstatic int stbi__parse_uncomperssed_block(stbi__zbuf *a)\n{\n   stbi_uc header[4];\n   int len,nlen,k;\n   if (a->num_bits & 7)\n      stbi__zreceive(a, a->num_bits & 7); // discard\n   // drain the bit-packed data into header\n   k = 0;\n   while (a->num_bits > 0) {\n      header[k++] = (stbi_uc) (a->code_buffer & 255); // suppress MSVC run-time check\n      a->code_buffer >>= 8;\n      a->num_bits -= 8;\n   }\n   STBI_ASSERT(a->num_bits == 0);\n   // now fill header the normal way\n   while (k < 4)\n      header[k++] = stbi__zget8(a);\n   len  = header[1] * 256 + header[0];\n   nlen = header[3] * 256 + header[2];\n   if (nlen != (len ^ 0xffff)) return stbi__err(\"zlib corrupt\",\"Corrupt PNG\");\n   if (a->zbuffer + len > a->zbuffer_end) return stbi__err(\"read past buffer\",\"Corrupt PNG\");\n   if (a->zout + len > a->zout_end)\n      if (!stbi__zexpand(a, a->zout, len)) return 0;\n   memcpy(a->zout, a->zbuffer, len);\n   a->zbuffer += len;\n   a->zout += len;\n   return 1;\n}\n\nstatic int stbi__parse_zlib_header(stbi__zbuf *a)\n{\n   int cmf   = stbi__zget8(a);\n   int cm    = cmf & 15;\n   /* int cinfo = cmf >> 4; */\n   int flg   = stbi__zget8(a);\n   if ((cmf*256+flg) % 31 != 0) return stbi__err(\"bad zlib header\",\"Corrupt PNG\"); // zlib spec\n   if (flg & 32) return stbi__err(\"no preset dict\",\"Corrupt PNG\"); // preset dictionary not allowed in png\n   if (cm != 8) return stbi__err(\"bad compression\",\"Corrupt PNG\"); // DEFLATE required for png\n   // window = 1 << (8 + cinfo)... 
but who cares, we fully buffer output\n   return 1;\n}\n\n// @TODO: should statically initialize these for optimal thread safety\nstatic stbi_uc stbi__zdefault_length[288], stbi__zdefault_distance[32];\nstatic void stbi__init_zdefaults(void)\n{\n   int i;   // use <= to match clearly with spec\n   for (i=0; i <= 143; ++i)     stbi__zdefault_length[i]   = 8;\n   for (   ; i <= 255; ++i)     stbi__zdefault_length[i]   = 9;\n   for (   ; i <= 279; ++i)     stbi__zdefault_length[i]   = 7;\n   for (   ; i <= 287; ++i)     stbi__zdefault_length[i]   = 8;\n\n   for (i=0; i <=  31; ++i)     stbi__zdefault_distance[i] = 5;\n}\n\nstatic int stbi__parse_zlib(stbi__zbuf *a, int parse_header)\n{\n   int final, type;\n   if (parse_header)\n      if (!stbi__parse_zlib_header(a)) return 0;\n   a->num_bits = 0;\n   a->code_buffer = 0;\n   do {\n      final = stbi__zreceive(a,1);\n      type = stbi__zreceive(a,2);\n      if (type == 0) {\n         if (!stbi__parse_uncomperssed_block(a)) return 0;\n      } else if (type == 3) {\n         return 0;\n      } else {\n         if (type == 1) {\n            // use fixed code lengths\n            if (!stbi__zdefault_distance[31]) stbi__init_zdefaults();\n            if (!stbi__zbuild_huffman(&a->z_length  , stbi__zdefault_length  , 288)) return 0;\n            if (!stbi__zbuild_huffman(&a->z_distance, stbi__zdefault_distance,  32)) return 0;\n         } else {\n            if (!stbi__compute_huffman_codes(a)) return 0;\n         }\n         if (!stbi__parse_huffman_block(a)) return 0;\n      }\n   } while (!final);\n   return 1;\n}\n\nstatic int stbi__do_zlib(stbi__zbuf *a, char *obuf, int olen, int exp, int parse_header)\n{\n   a->zout_start = obuf;\n   a->zout       = obuf;\n   a->zout_end   = obuf + olen;\n   a->z_expandable = exp;\n\n   return stbi__parse_zlib(a, parse_header);\n}\n\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize(const char *buffer, int len, int initial_size, int *outlen)\n{\n   stbi__zbuf a;\n   char *p = (char *) 
stbi__malloc(initial_size);\n   if (p == NULL) return NULL;\n   a.zbuffer = (stbi_uc *) buffer;\n   a.zbuffer_end = (stbi_uc *) buffer + len;\n   if (stbi__do_zlib(&a, p, initial_size, 1, 1)) {\n      if (outlen) *outlen = (int) (a.zout - a.zout_start);\n      return a.zout_start;\n   } else {\n      STBI_FREE(a.zout_start);\n      return NULL;\n   }\n}\n\nSTBIDEF char *stbi_zlib_decode_malloc(char const *buffer, int len, int *outlen)\n{\n   return stbi_zlib_decode_malloc_guesssize(buffer, len, 16384, outlen);\n}\n\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize_headerflag(const char *buffer, int len, int initial_size, int *outlen, int parse_header)\n{\n   stbi__zbuf a;\n   char *p = (char *) stbi__malloc(initial_size);\n   if (p == NULL) return NULL;\n   a.zbuffer = (stbi_uc *) buffer;\n   a.zbuffer_end = (stbi_uc *) buffer + len;\n   if (stbi__do_zlib(&a, p, initial_size, 1, parse_header)) {\n      if (outlen) *outlen = (int) (a.zout - a.zout_start);\n      return a.zout_start;\n   } else {\n      STBI_FREE(a.zout_start);\n      return NULL;\n   }\n}\n\nSTBIDEF int stbi_zlib_decode_buffer(char *obuffer, int olen, char const *ibuffer, int ilen)\n{\n   stbi__zbuf a;\n   a.zbuffer = (stbi_uc *) ibuffer;\n   a.zbuffer_end = (stbi_uc *) ibuffer + ilen;\n   if (stbi__do_zlib(&a, obuffer, olen, 0, 1))\n      return (int) (a.zout - a.zout_start);\n   else\n      return -1;\n}\n\nSTBIDEF char *stbi_zlib_decode_noheader_malloc(char const *buffer, int len, int *outlen)\n{\n   stbi__zbuf a;\n   char *p = (char *) stbi__malloc(16384);\n   if (p == NULL) return NULL;\n   a.zbuffer = (stbi_uc *) buffer;\n   a.zbuffer_end = (stbi_uc *) buffer+len;\n   if (stbi__do_zlib(&a, p, 16384, 1, 0)) {\n      if (outlen) *outlen = (int) (a.zout - a.zout_start);\n      return a.zout_start;\n   } else {\n      STBI_FREE(a.zout_start);\n      return NULL;\n   }\n}\n\nSTBIDEF int stbi_zlib_decode_noheader_buffer(char *obuffer, int olen, const char *ibuffer, int ilen)\n{\n   stbi__zbuf a;\n   
a.zbuffer = (stbi_uc *) ibuffer;\n   a.zbuffer_end = (stbi_uc *) ibuffer + ilen;\n   if (stbi__do_zlib(&a, obuffer, olen, 0, 0))\n      return (int) (a.zout - a.zout_start);\n   else\n      return -1;\n}\n#endif\n\n// public domain \"baseline\" PNG decoder   v0.10  Sean Barrett 2006-11-18\n//    simple implementation\n//      - only 8-bit samples\n//      - no CRC checking\n//      - allocates lots of intermediate memory\n//        - avoids problem of streaming data between subsystems\n//        - avoids explicit window management\n//    performance\n//      - uses stb_zlib, a PD zlib implementation with fast huffman decoding\n\n#ifndef STBI_NO_PNG\ntypedef struct\n{\n   stbi__uint32 length;\n   stbi__uint32 type;\n} stbi__pngchunk;\n\nstatic stbi__pngchunk stbi__get_chunk_header(stbi__context *s)\n{\n   stbi__pngchunk c;\n   c.length = stbi__get32be(s);\n   c.type   = stbi__get32be(s);\n   return c;\n}\n\nstatic int stbi__check_png_header(stbi__context *s)\n{\n   static stbi_uc png_sig[8] = { 137,80,78,71,13,10,26,10 };\n   int i;\n   for (i=0; i < 8; ++i)\n      if (stbi__get8(s) != png_sig[i]) return stbi__err(\"bad png sig\",\"Not a PNG\");\n   return 1;\n}\n\ntypedef struct\n{\n   stbi__context *s;\n   stbi_uc *idata, *expanded, *out;\n} stbi__png;\n\n\nenum {\n   STBI__F_none=0,\n   STBI__F_sub=1,\n   STBI__F_up=2,\n   STBI__F_avg=3,\n   STBI__F_paeth=4,\n   // synthetic filters used for first scanline to avoid needing a dummy row of 0s\n   STBI__F_avg_first,\n   STBI__F_paeth_first\n};\n\nstatic stbi_uc first_row_filter[5] =\n{\n   STBI__F_none,\n   STBI__F_sub,\n   STBI__F_none,\n   STBI__F_avg_first,\n   STBI__F_paeth_first\n};\n\nstatic int stbi__paeth(int a, int b, int c)\n{\n   int p = a + b - c;\n   int pa = abs(p-a);\n   int pb = abs(p-b);\n   int pc = abs(p-c);\n   if (pa <= pb && pa <= pc) return a;\n   if (pb <= pc) return b;\n   return c;\n}\n\nstatic stbi_uc stbi__depth_scale_table[9] = { 0, 0xff, 0x55, 0, 0x11, 0,0,0, 0x01 };\n\n// create the 
png data from post-deflated data\nstatic int stbi__create_png_image_raw(stbi__png *a, stbi_uc *raw, stbi__uint32 raw_len, int out_n, stbi__uint32 x, stbi__uint32 y, int depth, int color)\n{\n   stbi__context *s = a->s;\n   stbi__uint32 i,j,stride = x*out_n;\n   stbi__uint32 img_len, img_width_bytes;\n   int k;\n   int img_n = s->img_n; // copy it into a local for later\n\n   STBI_ASSERT(out_n == s->img_n || out_n == s->img_n+1);\n   a->out = (stbi_uc *) stbi__malloc(x * y * out_n); // extra bytes to write off the end into\n   if (!a->out) return stbi__err(\"outofmem\", \"Out of memory\");\n\n   img_width_bytes = (((img_n * x * depth) + 7) >> 3);\n   img_len = (img_width_bytes + 1) * y;\n   if (s->img_x == x && s->img_y == y) {\n      if (raw_len != img_len) return stbi__err(\"not enough pixels\",\"Corrupt PNG\");\n   } else { // interlaced:\n      if (raw_len < img_len) return stbi__err(\"not enough pixels\",\"Corrupt PNG\");\n   }\n\n   for (j=0; j < y; ++j) {\n      stbi_uc *cur = a->out + stride*j;\n      stbi_uc *prior = cur - stride;\n      int filter = *raw++;\n      int filter_bytes = img_n;\n      int width = x;\n      if (filter > 4)\n         return stbi__err(\"invalid filter\",\"Corrupt PNG\");\n\n      if (depth < 8) {\n         STBI_ASSERT(img_width_bytes <= x);\n         cur += x*out_n - img_width_bytes; // store output to the rightmost img_len bytes, so we can decode in place\n         filter_bytes = 1;\n         width = img_width_bytes;\n      }\n\n      // if first row, use special filter that doesn't sample previous row\n      if (j == 0) filter = first_row_filter[filter];\n\n      // handle first byte explicitly\n      for (k=0; k < filter_bytes; ++k) {\n         switch (filter) {\n            case STBI__F_none       : cur[k] = raw[k]; break;\n            case STBI__F_sub        : cur[k] = raw[k]; break;\n            case STBI__F_up         : cur[k] = STBI__BYTECAST(raw[k] + prior[k]); break;\n            case STBI__F_avg        : cur[k] = 
STBI__BYTECAST(raw[k] + (prior[k]>>1)); break;\n            case STBI__F_paeth      : cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(0,prior[k],0)); break;\n            case STBI__F_avg_first  : cur[k] = raw[k]; break;\n            case STBI__F_paeth_first: cur[k] = raw[k]; break;\n         }\n      }\n\n      if (depth == 8) {\n         if (img_n != out_n)\n            cur[img_n] = 255; // first pixel\n         raw += img_n;\n         cur += out_n;\n         prior += out_n;\n      } else {\n         raw += 1;\n         cur += 1;\n         prior += 1;\n      }\n\n      // this is a little gross, so that we don't switch per-pixel or per-component\n      if (depth < 8 || img_n == out_n) {\n         int nk = (width - 1)*img_n;\n         #define CASE(f) \\\n             case f:     \\\n                for (k=0; k < nk; ++k)\n         switch (filter) {\n            // \"none\" filter turns into a memcpy here; make that explicit.\n            case STBI__F_none:         memcpy(cur, raw, nk); break;\n            CASE(STBI__F_sub)          cur[k] = STBI__BYTECAST(raw[k] + cur[k-filter_bytes]); break;\n            CASE(STBI__F_up)           cur[k] = STBI__BYTECAST(raw[k] + prior[k]); break;\n            CASE(STBI__F_avg)          cur[k] = STBI__BYTECAST(raw[k] + ((prior[k] + cur[k-filter_bytes])>>1)); break;\n            CASE(STBI__F_paeth)        cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k-filter_bytes],prior[k],prior[k-filter_bytes])); break;\n            CASE(STBI__F_avg_first)    cur[k] = STBI__BYTECAST(raw[k] + (cur[k-filter_bytes] >> 1)); break;\n            CASE(STBI__F_paeth_first)  cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k-filter_bytes],0,0)); break;\n         }\n         #undef CASE\n         raw += nk;\n      } else {\n         STBI_ASSERT(img_n+1 == out_n);\n         #define CASE(f) \\\n             case f:     \\\n                for (i=x-1; i >= 1; --i, cur[img_n]=255,raw+=img_n,cur+=out_n,prior+=out_n) \\\n                   for (k=0; k < img_n; 
++k)\n         switch (filter) {\n            CASE(STBI__F_none)         cur[k] = raw[k]; break;\n            CASE(STBI__F_sub)          cur[k] = STBI__BYTECAST(raw[k] + cur[k-out_n]); break;\n            CASE(STBI__F_up)           cur[k] = STBI__BYTECAST(raw[k] + prior[k]); break;\n            CASE(STBI__F_avg)          cur[k] = STBI__BYTECAST(raw[k] + ((prior[k] + cur[k-out_n])>>1)); break;\n            CASE(STBI__F_paeth)        cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k-out_n],prior[k],prior[k-out_n])); break;\n            CASE(STBI__F_avg_first)    cur[k] = STBI__BYTECAST(raw[k] + (cur[k-out_n] >> 1)); break;\n            CASE(STBI__F_paeth_first)  cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k-out_n],0,0)); break;\n         }\n         #undef CASE\n      }\n   }\n\n   // we make a separate pass to expand bits to pixels; for performance,\n   // this could run two scanlines behind the above code, so it won't\n   // interfere with filtering but will still be in the cache.\n   if (depth < 8) {\n      for (j=0; j < y; ++j) {\n         stbi_uc *cur = a->out + stride*j;\n         stbi_uc *in  = a->out + stride*j + x*out_n - img_width_bytes;\n         // unpack 1/2/4-bit into an 8-bit buffer. allows us to keep the common 8-bit path optimal at minimal cost for 1/2/4-bit\n         // png guarantees byte alignment; if width is not a multiple of 8/4/2 we'll decode dummy trailing data that will be skipped in the later loop\n         stbi_uc scale = (color == 0) ? stbi__depth_scale_table[depth] : 1; // scale grayscale values to 0..255 range\n\n         // note that the final byte might overshoot and write more data than desired.\n         // we can allocate enough data that this never writes out of memory, but it\n         // could also overwrite the next scanline. can it overwrite non-empty data\n         // on the next scanline? 
yes, consider 1-pixel-wide scanlines with 1-bit-per-pixel.\n         // so we need to explicitly clamp the final ones\n\n         if (depth == 4) {\n            for (k=x*img_n; k >= 2; k-=2, ++in) {\n               *cur++ = scale * ((*in >> 4)       );\n               *cur++ = scale * ((*in     ) & 0x0f);\n            }\n            if (k > 0) *cur++ = scale * ((*in >> 4)       );\n         } else if (depth == 2) {\n            for (k=x*img_n; k >= 4; k-=4, ++in) {\n               *cur++ = scale * ((*in >> 6)       );\n               *cur++ = scale * ((*in >> 4) & 0x03);\n               *cur++ = scale * ((*in >> 2) & 0x03);\n               *cur++ = scale * ((*in     ) & 0x03);\n            }\n            if (k > 0) *cur++ = scale * ((*in >> 6)       );\n            if (k > 1) *cur++ = scale * ((*in >> 4) & 0x03);\n            if (k > 2) *cur++ = scale * ((*in >> 2) & 0x03);\n         } else if (depth == 1) {\n            for (k=x*img_n; k >= 8; k-=8, ++in) {\n               *cur++ = scale * ((*in >> 7)       );\n               *cur++ = scale * ((*in >> 6) & 0x01);\n               *cur++ = scale * ((*in >> 5) & 0x01);\n               *cur++ = scale * ((*in >> 4) & 0x01);\n               *cur++ = scale * ((*in >> 3) & 0x01);\n               *cur++ = scale * ((*in >> 2) & 0x01);\n               *cur++ = scale * ((*in >> 1) & 0x01);\n               *cur++ = scale * ((*in     ) & 0x01);\n            }\n            if (k > 0) *cur++ = scale * ((*in >> 7)       );\n            if (k > 1) *cur++ = scale * ((*in >> 6) & 0x01);\n            if (k > 2) *cur++ = scale * ((*in >> 5) & 0x01);\n            if (k > 3) *cur++ = scale * ((*in >> 4) & 0x01);\n            if (k > 4) *cur++ = scale * ((*in >> 3) & 0x01);\n            if (k > 5) *cur++ = scale * ((*in >> 2) & 0x01);\n            if (k > 6) *cur++ = scale * ((*in >> 1) & 0x01);\n         }\n         if (img_n != out_n) {\n            // insert alpha = 255\n            stbi_uc *cur = a->out + stride*j;\n            int 
i;\n            if (img_n == 1) {\n               for (i=x-1; i >= 0; --i) {\n                  cur[i*2+1] = 255;\n                  cur[i*2+0] = cur[i];\n               }\n            } else {\n               STBI_ASSERT(img_n == 3);\n               for (i=x-1; i >= 0; --i) {\n                  cur[i*4+3] = 255;\n                  cur[i*4+2] = cur[i*3+2];\n                  cur[i*4+1] = cur[i*3+1];\n                  cur[i*4+0] = cur[i*3+0];\n               }\n            }\n         }\n      }\n   }\n\n   return 1;\n}\n\nstatic int stbi__create_png_image(stbi__png *a, stbi_uc *image_data, stbi__uint32 image_data_len, int out_n, int depth, int color, int interlaced)\n{\n   stbi_uc *final;\n   int p;\n   if (!interlaced)\n      return stbi__create_png_image_raw(a, image_data, image_data_len, out_n, a->s->img_x, a->s->img_y, depth, color);\n\n   // de-interlacing\n   final = (stbi_uc *) stbi__malloc(a->s->img_x * a->s->img_y * out_n);\n   for (p=0; p < 7; ++p) {\n      int xorig[] = { 0,4,0,2,0,1,0 };\n      int yorig[] = { 0,0,4,0,2,0,1 };\n      int xspc[]  = { 8,8,4,4,2,2,1 };\n      int yspc[]  = { 8,8,8,4,4,2,2 };\n      int i,j,x,y;\n      // pass1_x[4] = 0, pass1_x[5] = 1, pass1_x[12] = 1\n      x = (a->s->img_x - xorig[p] + xspc[p]-1) / xspc[p];\n      y = (a->s->img_y - yorig[p] + yspc[p]-1) / yspc[p];\n      if (x && y) {\n         stbi__uint32 img_len = ((((a->s->img_n * x * depth) + 7) >> 3) + 1) * y;\n         if (!stbi__create_png_image_raw(a, image_data, image_data_len, out_n, x, y, depth, color)) {\n            STBI_FREE(final);\n            return 0;\n         }\n         for (j=0; j < y; ++j) {\n            for (i=0; i < x; ++i) {\n               int out_y = j*yspc[p]+yorig[p];\n               int out_x = i*xspc[p]+xorig[p];\n               memcpy(final + out_y*a->s->img_x*out_n + out_x*out_n,\n                      a->out + (j*x+i)*out_n, out_n);\n            }\n         }\n         STBI_FREE(a->out);\n         image_data += img_len;\n         
image_data_len -= img_len;\n      }\n   }\n   a->out = final;\n\n   return 1;\n}\n\nstatic int stbi__compute_transparency(stbi__png *z, stbi_uc tc[3], int out_n)\n{\n   stbi__context *s = z->s;\n   stbi__uint32 i, pixel_count = s->img_x * s->img_y;\n   stbi_uc *p = z->out;\n\n   // compute color-based transparency, assuming we've\n   // already got 255 as the alpha value in the output\n   STBI_ASSERT(out_n == 2 || out_n == 4);\n\n   if (out_n == 2) {\n      for (i=0; i < pixel_count; ++i) {\n         p[1] = (p[0] == tc[0] ? 0 : 255);\n         p += 2;\n      }\n   } else {\n      for (i=0; i < pixel_count; ++i) {\n         if (p[0] == tc[0] && p[1] == tc[1] && p[2] == tc[2])\n            p[3] = 0;\n         p += 4;\n      }\n   }\n   return 1;\n}\n\nstatic int stbi__expand_png_palette(stbi__png *a, stbi_uc *palette, int len, int pal_img_n)\n{\n   stbi__uint32 i, pixel_count = a->s->img_x * a->s->img_y;\n   stbi_uc *p, *temp_out, *orig = a->out;\n\n   p = (stbi_uc *) stbi__malloc(pixel_count * pal_img_n);\n   if (p == NULL) return stbi__err(\"outofmem\", \"Out of memory\");\n\n   // between here and free(out) below, exiting would leak\n   temp_out = p;\n\n   if (pal_img_n == 3) {\n      for (i=0; i < pixel_count; ++i) {\n         int n = orig[i]*4;\n         p[0] = palette[n  ];\n         p[1] = palette[n+1];\n         p[2] = palette[n+2];\n         p += 3;\n      }\n   } else {\n      for (i=0; i < pixel_count; ++i) {\n         int n = orig[i]*4;\n         p[0] = palette[n  ];\n         p[1] = palette[n+1];\n         p[2] = palette[n+2];\n         p[3] = palette[n+3];\n         p += 4;\n      }\n   }\n   STBI_FREE(a->out);\n   a->out = temp_out;\n\n   STBI_NOTUSED(len);\n\n   return 1;\n}\n\nstatic int stbi__unpremultiply_on_load = 0;\nstatic int stbi__de_iphone_flag = 0;\n\nSTBIDEF void stbi_set_unpremultiply_on_load(int flag_true_if_should_unpremultiply)\n{\n   stbi__unpremultiply_on_load = flag_true_if_should_unpremultiply;\n}\n\nSTBIDEF void 
stbi_convert_iphone_png_to_rgb(int flag_true_if_should_convert)\n{\n   stbi__de_iphone_flag = flag_true_if_should_convert;\n}\n\nstatic void stbi__de_iphone(stbi__png *z)\n{\n   stbi__context *s = z->s;\n   stbi__uint32 i, pixel_count = s->img_x * s->img_y;\n   stbi_uc *p = z->out;\n\n   if (s->img_out_n == 3) {  // convert bgr to rgb\n      for (i=0; i < pixel_count; ++i) {\n         stbi_uc t = p[0];\n         p[0] = p[2];\n         p[2] = t;\n         p += 3;\n      }\n   } else {\n      STBI_ASSERT(s->img_out_n == 4);\n      if (stbi__unpremultiply_on_load) {\n         // convert bgr to rgb and unpremultiply\n         for (i=0; i < pixel_count; ++i) {\n            stbi_uc a = p[3];\n            stbi_uc t = p[0];\n            if (a) {\n               p[0] = p[2] * 255 / a;\n               p[1] = p[1] * 255 / a;\n               p[2] =  t   * 255 / a;\n            } else {\n               p[0] = p[2];\n               p[2] = t;\n            }\n            p += 4;\n         }\n      } else {\n         // convert bgr to rgb\n         for (i=0; i < pixel_count; ++i) {\n            stbi_uc t = p[0];\n            p[0] = p[2];\n            p[2] = t;\n            p += 4;\n         }\n      }\n   }\n}\n\n#define STBI__PNG_TYPE(a,b,c,d)  (((a) << 24) + ((b) << 16) + ((c) << 8) + (d))\n\nstatic int stbi__parse_png_file(stbi__png *z, int scan, int req_comp)\n{\n   stbi_uc palette[1024], pal_img_n=0;\n   stbi_uc has_trans=0, tc[3];\n   stbi__uint32 ioff=0, idata_limit=0, i, pal_len=0;\n   int first=1,k,interlace=0, color=0, depth=0, is_iphone=0;\n   stbi__context *s = z->s;\n\n   z->expanded = NULL;\n   z->idata = NULL;\n   z->out = NULL;\n\n   if (!stbi__check_png_header(s)) return 0;\n\n   if (scan == STBI__SCAN_type) return 1;\n\n   for (;;) {\n      stbi__pngchunk c = stbi__get_chunk_header(s);\n      switch (c.type) {\n         case STBI__PNG_TYPE('C','g','B','I'):\n            is_iphone = 1;\n            stbi__skip(s, c.length);\n            break;\n         case 
STBI__PNG_TYPE('I','H','D','R'): {\n            int comp,filter;\n            if (!first) return stbi__err(\"multiple IHDR\",\"Corrupt PNG\");\n            first = 0;\n            if (c.length != 13) return stbi__err(\"bad IHDR len\",\"Corrupt PNG\");\n            s->img_x = stbi__get32be(s); if (s->img_x > (1 << 24)) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n            s->img_y = stbi__get32be(s); if (s->img_y > (1 << 24)) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n            depth = stbi__get8(s);  if (depth != 1 && depth != 2 && depth != 4 && depth != 8)  return stbi__err(\"1/2/4/8-bit only\",\"PNG not supported: 1/2/4/8-bit only\");\n            color = stbi__get8(s);  if (color > 6)         return stbi__err(\"bad ctype\",\"Corrupt PNG\");\n            if (color == 3) pal_img_n = 3; else if (color & 1) return stbi__err(\"bad ctype\",\"Corrupt PNG\");\n            comp  = stbi__get8(s);  if (comp) return stbi__err(\"bad comp method\",\"Corrupt PNG\");\n            filter= stbi__get8(s);  if (filter) return stbi__err(\"bad filter method\",\"Corrupt PNG\");\n            interlace = stbi__get8(s); if (interlace>1) return stbi__err(\"bad interlace method\",\"Corrupt PNG\");\n            if (!s->img_x || !s->img_y) return stbi__err(\"0-pixel image\",\"Corrupt PNG\");\n            if (!pal_img_n) {\n               s->img_n = (color & 2 ? 3 : 1) + (color & 4 ? 
1 : 0);\n               if ((1 << 30) / s->img_x / s->img_n < s->img_y) return stbi__err(\"too large\", \"Image too large to decode\");\n               if (scan == STBI__SCAN_header) return 1;\n            } else {\n               // if paletted, then pal_n is our final components, and\n               // img_n is # components to decompress/filter.\n               s->img_n = 1;\n               if ((1 << 30) / s->img_x / 4 < s->img_y) return stbi__err(\"too large\",\"Corrupt PNG\");\n               // if SCAN_header, have to scan to see if we have a tRNS\n            }\n            break;\n         }\n\n         case STBI__PNG_TYPE('P','L','T','E'):  {\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (c.length > 256*3) return stbi__err(\"invalid PLTE\",\"Corrupt PNG\");\n            pal_len = c.length / 3;\n            if (pal_len * 3 != c.length) return stbi__err(\"invalid PLTE\",\"Corrupt PNG\");\n            for (i=0; i < pal_len; ++i) {\n               palette[i*4+0] = stbi__get8(s);\n               palette[i*4+1] = stbi__get8(s);\n               palette[i*4+2] = stbi__get8(s);\n               palette[i*4+3] = 255;\n            }\n            break;\n         }\n\n         case STBI__PNG_TYPE('t','R','N','S'): {\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (z->idata) return stbi__err(\"tRNS after IDAT\",\"Corrupt PNG\");\n            if (pal_img_n) {\n               if (scan == STBI__SCAN_header) { s->img_n = 4; return 1; }\n               if (pal_len == 0) return stbi__err(\"tRNS before PLTE\",\"Corrupt PNG\");\n               if (c.length > pal_len) return stbi__err(\"bad tRNS len\",\"Corrupt PNG\");\n               pal_img_n = 4;\n               for (i=0; i < c.length; ++i)\n                  palette[i*4+3] = stbi__get8(s);\n            } else {\n               if (!(s->img_n & 1)) return stbi__err(\"tRNS with alpha\",\"Corrupt PNG\");\n               if (c.length 
!= (stbi__uint32) s->img_n*2) return stbi__err(\"bad tRNS len\",\"Corrupt PNG\");\n               has_trans = 1;\n               for (k=0; k < s->img_n; ++k)\n                  tc[k] = (stbi_uc) (stbi__get16be(s) & 255) * stbi__depth_scale_table[depth]; // non 8-bit images will be larger\n            }\n            break;\n         }\n\n         case STBI__PNG_TYPE('I','D','A','T'): {\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (pal_img_n && !pal_len) return stbi__err(\"no PLTE\",\"Corrupt PNG\");\n            if (scan == STBI__SCAN_header) { s->img_n = pal_img_n; return 1; }\n            if ((int)(ioff + c.length) < (int)ioff) return 0;\n            if (ioff + c.length > idata_limit) {\n               stbi_uc *p;\n               if (idata_limit == 0) idata_limit = c.length > 4096 ? c.length : 4096;\n               while (ioff + c.length > idata_limit)\n                  idata_limit *= 2;\n               p = (stbi_uc *) STBI_REALLOC(z->idata, idata_limit); if (p == NULL) return stbi__err(\"outofmem\", \"Out of memory\");\n               z->idata = p;\n            }\n            if (!stbi__getn(s, z->idata+ioff,c.length)) return stbi__err(\"outofdata\",\"Corrupt PNG\");\n            ioff += c.length;\n            break;\n         }\n\n         case STBI__PNG_TYPE('I','E','N','D'): {\n            stbi__uint32 raw_len, bpl;\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (scan != STBI__SCAN_load) return 1;\n            if (z->idata == NULL) return stbi__err(\"no IDAT\",\"Corrupt PNG\");\n            // initial guess for decoded data size to avoid unnecessary reallocs\n            bpl = (s->img_x * depth + 7) / 8; // bytes per line, per component\n            raw_len = bpl * s->img_y * s->img_n /* pixels */ + s->img_y /* filter mode per row */;\n            z->expanded = (stbi_uc *) stbi_zlib_decode_malloc_guesssize_headerflag((char *) z->idata, ioff, raw_len, (int *) 
&raw_len, !is_iphone);\n            if (z->expanded == NULL) return 0; // zlib should set error\n            STBI_FREE(z->idata); z->idata = NULL;\n            if ((req_comp == s->img_n+1 && req_comp != 3 && !pal_img_n) || has_trans)\n               s->img_out_n = s->img_n+1;\n            else\n               s->img_out_n = s->img_n;\n            if (!stbi__create_png_image(z, z->expanded, raw_len, s->img_out_n, depth, color, interlace)) return 0;\n            if (has_trans)\n               if (!stbi__compute_transparency(z, tc, s->img_out_n)) return 0;\n            if (is_iphone && stbi__de_iphone_flag && s->img_out_n > 2)\n               stbi__de_iphone(z);\n            if (pal_img_n) {\n               // pal_img_n == 3 or 4\n               s->img_n = pal_img_n; // record the actual colors we had\n               s->img_out_n = pal_img_n;\n               if (req_comp >= 3) s->img_out_n = req_comp;\n               if (!stbi__expand_png_palette(z, palette, pal_len, s->img_out_n))\n                  return 0;\n            }\n            STBI_FREE(z->expanded); z->expanded = NULL;\n            return 1;\n         }\n\n         default:\n            // if critical, fail\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if ((c.type & (1 << 29)) == 0) {\n               #ifndef STBI_NO_FAILURE_STRINGS\n               // not threadsafe\n               static char invalid_chunk[] = \"XXXX PNG chunk not known\";\n               invalid_chunk[0] = STBI__BYTECAST(c.type >> 24);\n               invalid_chunk[1] = STBI__BYTECAST(c.type >> 16);\n               invalid_chunk[2] = STBI__BYTECAST(c.type >>  8);\n               invalid_chunk[3] = STBI__BYTECAST(c.type >>  0);\n               #endif\n               return stbi__err(invalid_chunk, \"PNG not supported: unknown PNG chunk type\");\n            }\n            stbi__skip(s, c.length);\n            break;\n      }\n      // end of PNG chunk, read and skip CRC\n      
stbi__get32be(s);\n   }\n}\n\nstatic unsigned char *stbi__do_png(stbi__png *p, int *x, int *y, int *n, int req_comp)\n{\n   unsigned char *result=NULL;\n   if (req_comp < 0 || req_comp > 4) return stbi__errpuc(\"bad req_comp\", \"Internal error\");\n   if (stbi__parse_png_file(p, STBI__SCAN_load, req_comp)) {\n      result = p->out;\n      p->out = NULL;\n      if (req_comp && req_comp != p->s->img_out_n) {\n         result = stbi__convert_format(result, p->s->img_out_n, req_comp, p->s->img_x, p->s->img_y);\n         p->s->img_out_n = req_comp;\n         if (result == NULL) return result;\n      }\n      *x = p->s->img_x;\n      *y = p->s->img_y;\n      if (n) *n = p->s->img_out_n;\n   }\n   STBI_FREE(p->out);      p->out      = NULL;\n   STBI_FREE(p->expanded); p->expanded = NULL;\n   STBI_FREE(p->idata);    p->idata    = NULL;\n\n   return result;\n}\n\nstatic unsigned char *stbi__png_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__png p;\n   p.s = s;\n   return stbi__do_png(&p, x,y,comp,req_comp);\n}\n\nstatic int stbi__png_test(stbi__context *s)\n{\n   int r;\n   r = stbi__check_png_header(s);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic int stbi__png_info_raw(stbi__png *p, int *x, int *y, int *comp)\n{\n   if (!stbi__parse_png_file(p, STBI__SCAN_header, 0)) {\n      stbi__rewind( p->s );\n      return 0;\n   }\n   if (x) *x = p->s->img_x;\n   if (y) *y = p->s->img_y;\n   if (comp) *comp = p->s->img_n;\n   return 1;\n}\n\nstatic int stbi__png_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   stbi__png p;\n   p.s = s;\n   return stbi__png_info_raw(&p, x, y, comp);\n}\n#endif\n\n// Microsoft/Windows BMP image\n\n#ifndef STBI_NO_BMP\nstatic int stbi__bmp_test_raw(stbi__context *s)\n{\n   int r;\n   int sz;\n   if (stbi__get8(s) != 'B') return 0;\n   if (stbi__get8(s) != 'M') return 0;\n   stbi__get32le(s); // discard filesize\n   stbi__get16le(s); // discard reserved\n   stbi__get16le(s); // discard reserved\n   
stbi__get32le(s); // discard data offset\n   sz = stbi__get32le(s);\n   r = (sz == 12 || sz == 40 || sz == 56 || sz == 108 || sz == 124);\n   return r;\n}\n\nstatic int stbi__bmp_test(stbi__context *s)\n{\n   int r = stbi__bmp_test_raw(s);\n   stbi__rewind(s);\n   return r;\n}\n\n\n// returns 0..31 for the highest set bit\nstatic int stbi__high_bit(unsigned int z)\n{\n   int n=0;\n   if (z == 0) return -1;\n   if (z >= 0x10000) n += 16, z >>= 16;\n   if (z >= 0x00100) n +=  8, z >>=  8;\n   if (z >= 0x00010) n +=  4, z >>=  4;\n   if (z >= 0x00004) n +=  2, z >>=  2;\n   if (z >= 0x00002) n +=  1, z >>=  1;\n   return n;\n}\n\nstatic int stbi__bitcount(unsigned int a)\n{\n   a = (a & 0x55555555) + ((a >>  1) & 0x55555555); // max 2\n   a = (a & 0x33333333) + ((a >>  2) & 0x33333333); // max 4\n   a = (a + (a >> 4)) & 0x0f0f0f0f; // max 8 per 4, now 8 bits\n   a = (a + (a >> 8)); // max 16 per 8 bits\n   a = (a + (a >> 16)); // max 32 per 8 bits\n   return a & 0xff;\n}\n\nstatic int stbi__shiftsigned(int v, int shift, int bits)\n{\n   int result;\n   int z=0;\n\n   if (shift < 0) v <<= -shift;\n   else v >>= shift;\n   result = v;\n\n   z = bits;\n   while (z < 8) {\n      result += v >> z;\n      z += bits;\n   }\n   return result;\n}\n\nstatic stbi_uc *stbi__bmp_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi_uc *out;\n   unsigned int mr=0,mg=0,mb=0,ma=0, fake_a=0;\n   stbi_uc pal[256][4];\n   int psize=0,i,j,compress=0,width;\n   int bpp, flip_vertically, pad, target, offset, hsz;\n   if (stbi__get8(s) != 'B' || stbi__get8(s) != 'M') return stbi__errpuc(\"not BMP\", \"Corrupt BMP\");\n   stbi__get32le(s); // discard filesize\n   stbi__get16le(s); // discard reserved\n   stbi__get16le(s); // discard reserved\n   offset = stbi__get32le(s);\n   hsz = stbi__get32le(s);\n   if (hsz != 12 && hsz != 40 && hsz != 56 && hsz != 108 && hsz != 124) return stbi__errpuc(\"unknown BMP\", \"BMP type not supported: unknown\");\n   if (hsz == 12) {\n    
  s->img_x = stbi__get16le(s);\n      s->img_y = stbi__get16le(s);\n   } else {\n      s->img_x = stbi__get32le(s);\n      s->img_y = stbi__get32le(s);\n   }\n   if (stbi__get16le(s) != 1) return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n   bpp = stbi__get16le(s);\n   if (bpp == 1) return stbi__errpuc(\"monochrome\", \"BMP type not supported: 1-bit\");\n   flip_vertically = ((int) s->img_y) > 0;\n   s->img_y = abs((int) s->img_y);\n   if (hsz == 12) {\n      if (bpp < 24)\n         psize = (offset - 14 - 24) / 3;\n   } else {\n      compress = stbi__get32le(s);\n      if (compress == 1 || compress == 2) return stbi__errpuc(\"BMP RLE\", \"BMP type not supported: RLE\");\n      stbi__get32le(s); // discard sizeof\n      stbi__get32le(s); // discard hres\n      stbi__get32le(s); // discard vres\n      stbi__get32le(s); // discard colorsused\n      stbi__get32le(s); // discard max important\n      if (hsz == 40 || hsz == 56) {\n         if (hsz == 56) {\n            stbi__get32le(s);\n            stbi__get32le(s);\n            stbi__get32le(s);\n            stbi__get32le(s);\n         }\n         if (bpp == 16 || bpp == 32) {\n            mr = mg = mb = 0;\n            if (compress == 0) {\n               if (bpp == 32) {\n                  mr = 0xffu << 16;\n                  mg = 0xffu <<  8;\n                  mb = 0xffu <<  0;\n                  ma = 0xffu << 24;\n                  fake_a = 1; // @TODO: check for cases like alpha value is all 0 and switch it to 255\n                  STBI_NOTUSED(fake_a);\n               } else {\n                  mr = 31u << 10;\n                  mg = 31u <<  5;\n                  mb = 31u <<  0;\n               }\n            } else if (compress == 3) {\n               mr = stbi__get32le(s);\n               mg = stbi__get32le(s);\n               mb = stbi__get32le(s);\n               // not documented, but generated by photoshop and handled by mspaint\n               if (mr == mg && mg == mb) {\n                  // ?!?!?\n      
            return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n               }\n            } else\n               return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n         }\n      } else {\n         STBI_ASSERT(hsz == 108 || hsz == 124);\n         mr = stbi__get32le(s);\n         mg = stbi__get32le(s);\n         mb = stbi__get32le(s);\n         ma = stbi__get32le(s);\n         stbi__get32le(s); // discard color space\n         for (i=0; i < 12; ++i)\n            stbi__get32le(s); // discard color space parameters\n         if (hsz == 124) {\n            stbi__get32le(s); // discard rendering intent\n            stbi__get32le(s); // discard offset of profile data\n            stbi__get32le(s); // discard size of profile data\n            stbi__get32le(s); // discard reserved\n         }\n      }\n      if (bpp < 16)\n         psize = (offset - 14 - hsz) >> 2;\n   }\n   s->img_n = ma ? 4 : 3;\n   if (req_comp && req_comp >= 3) // we can directly decode 3 or 4\n      target = req_comp;\n   else\n      target = s->img_n; // if they want monochrome, we'll post-convert\n   out = (stbi_uc *) stbi__malloc(target * s->img_x * s->img_y);\n   if (!out) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   if (bpp < 16) {\n      int z=0;\n      if (psize == 0 || psize > 256) { STBI_FREE(out); return stbi__errpuc(\"invalid\", \"Corrupt BMP\"); }\n      for (i=0; i < psize; ++i) {\n         pal[i][2] = stbi__get8(s);\n         pal[i][1] = stbi__get8(s);\n         pal[i][0] = stbi__get8(s);\n         if (hsz != 12) stbi__get8(s);\n         pal[i][3] = 255;\n      }\n      stbi__skip(s, offset - 14 - hsz - psize * (hsz == 12 ? 
3 : 4));\n      if (bpp == 4) width = (s->img_x + 1) >> 1;\n      else if (bpp == 8) width = s->img_x;\n      else { STBI_FREE(out); return stbi__errpuc(\"bad bpp\", \"Corrupt BMP\"); }\n      pad = (-width)&3;\n      for (j=0; j < (int) s->img_y; ++j) {\n         for (i=0; i < (int) s->img_x; i += 2) {\n            int v=stbi__get8(s),v2=0;\n            if (bpp == 4) {\n               v2 = v & 15;\n               v >>= 4;\n            }\n            out[z++] = pal[v][0];\n            out[z++] = pal[v][1];\n            out[z++] = pal[v][2];\n            if (target == 4) out[z++] = 255;\n            if (i+1 == (int) s->img_x) break;\n            v = (bpp == 8) ? stbi__get8(s) : v2;\n            out[z++] = pal[v][0];\n            out[z++] = pal[v][1];\n            out[z++] = pal[v][2];\n            if (target == 4) out[z++] = 255;\n         }\n         stbi__skip(s, pad);\n      }\n   } else {\n      int rshift=0,gshift=0,bshift=0,ashift=0,rcount=0,gcount=0,bcount=0,acount=0;\n      int z = 0;\n      int easy=0;\n      stbi__skip(s, offset - 14 - hsz);\n      if (bpp == 24) width = 3 * s->img_x;\n      else if (bpp == 16) width = 2*s->img_x;\n      else /* bpp = 32 and pad = 0 */ width=0;\n      pad = (-width) & 3;\n      if (bpp == 24) {\n         easy = 1;\n      } else if (bpp == 32) {\n         if (mb == 0xff && mg == 0xff00 && mr == 0x00ff0000 && ma == 0xff000000)\n            easy = 2;\n      }\n      if (!easy) {\n         if (!mr || !mg || !mb) { STBI_FREE(out); return stbi__errpuc(\"bad masks\", \"Corrupt BMP\"); }\n         // right shift amt to put high bit in position #7\n         rshift = stbi__high_bit(mr)-7; rcount = stbi__bitcount(mr);\n         gshift = stbi__high_bit(mg)-7; gcount = stbi__bitcount(mg);\n         bshift = stbi__high_bit(mb)-7; bcount = stbi__bitcount(mb);\n         ashift = stbi__high_bit(ma)-7; acount = stbi__bitcount(ma);\n      }\n      for (j=0; j < (int) s->img_y; ++j) {\n         if (easy) {\n            for (i=0; i < (int) 
s->img_x; ++i) {\n               unsigned char a;\n               out[z+2] = stbi__get8(s);\n               out[z+1] = stbi__get8(s);\n               out[z+0] = stbi__get8(s);\n               z += 3;\n               a = (easy == 2 ? stbi__get8(s) : 255);\n               if (target == 4) out[z++] = a;\n            }\n         } else {\n            for (i=0; i < (int) s->img_x; ++i) {\n               stbi__uint32 v = (bpp == 16 ? (stbi__uint32) stbi__get16le(s) : stbi__get32le(s));\n               int a;\n               out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mr, rshift, rcount));\n               out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mg, gshift, gcount));\n               out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mb, bshift, bcount));\n               a = (ma ? stbi__shiftsigned(v & ma, ashift, acount) : 255);\n               if (target == 4) out[z++] = STBI__BYTECAST(a);\n            }\n         }\n         stbi__skip(s, pad);\n      }\n   }\n   if (flip_vertically) {\n      stbi_uc t;\n      for (j=0; j < (int) s->img_y>>1; ++j) {\n         stbi_uc *p1 = out +      j     *s->img_x*target;\n         stbi_uc *p2 = out + (s->img_y-1-j)*s->img_x*target;\n         for (i=0; i < (int) s->img_x*target; ++i) {\n            t = p1[i], p1[i] = p2[i], p2[i] = t;\n         }\n      }\n   }\n\n   if (req_comp && req_comp != target) {\n      out = stbi__convert_format(out, target, req_comp, s->img_x, s->img_y);\n      if (out == NULL) return out; // stbi__convert_format frees input on failure\n   }\n\n   *x = s->img_x;\n   *y = s->img_y;\n   if (comp) *comp = s->img_n;\n   return out;\n}\n#endif\n\n// Targa Truevision - TGA\n// by Jonathan Dummer\n#ifndef STBI_NO_TGA\nstatic int stbi__tga_info(stbi__context *s, int *x, int *y, int *comp)\n{\n    int tga_w, tga_h, tga_comp;\n    int sz;\n    stbi__get8(s);                   // discard Offset\n    sz = stbi__get8(s);              // color type\n    if( sz > 1 ) {\n        stbi__rewind(s);\n        return 0;   
   // only RGB or indexed allowed\n    }\n    sz = stbi__get8(s);              // image type\n    // only RGB or grey allowed, +/- RLE\n    if ((sz != 1) && (sz != 2) && (sz != 3) && (sz != 9) && (sz != 10) && (sz != 11)) return 0;\n    stbi__skip(s,9);\n    tga_w = stbi__get16le(s);\n    if( tga_w < 1 ) {\n        stbi__rewind(s);\n        return 0;   // test width\n    }\n    tga_h = stbi__get16le(s);\n    if( tga_h < 1 ) {\n        stbi__rewind(s);\n        return 0;   // test height\n    }\n    sz = stbi__get8(s);               // bits per pixel\n    // only RGB or RGBA or grey allowed\n    if ((sz != 8) && (sz != 16) && (sz != 24) && (sz != 32)) {\n        stbi__rewind(s);\n        return 0;\n    }\n    tga_comp = sz;\n    if (x) *x = tga_w;\n    if (y) *y = tga_h;\n    if (comp) *comp = tga_comp / 8;\n    return 1;                   // seems to have passed everything\n}\n\nstatic int stbi__tga_test(stbi__context *s)\n{\n   int res;\n   int sz;\n   stbi__get8(s);      //   discard Offset\n   sz = stbi__get8(s);   //   color type\n   if ( sz > 1 ) return 0;   //   only RGB or indexed allowed\n   sz = stbi__get8(s);   //   image type\n   if ( (sz != 1) && (sz != 2) && (sz != 3) && (sz != 9) && (sz != 10) && (sz != 11) ) return 0;   //   only RGB or grey allowed, +/- RLE\n   stbi__get16be(s);      //   discard palette start\n   stbi__get16be(s);      //   discard palette length\n   stbi__get8(s);         //   discard bits per palette color entry\n   stbi__get16be(s);      //   discard x origin\n   stbi__get16be(s);      //   discard y origin\n   if ( stbi__get16be(s) < 1 ) return 0;      //   test width\n   if ( stbi__get16be(s) < 1 ) return 0;      //   test height\n   sz = stbi__get8(s);   //   bits per pixel\n   if ( (sz != 8) && (sz != 16) && (sz != 24) && (sz != 32) )\n      res = 0;\n   else\n      res = 1;\n   stbi__rewind(s);\n   return res;\n}\n\nstatic stbi_uc *stbi__tga_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   //   read in 
the TGA header stuff\n   int tga_offset = stbi__get8(s);\n   int tga_indexed = stbi__get8(s);\n   int tga_image_type = stbi__get8(s);\n   int tga_is_RLE = 0;\n   int tga_palette_start = stbi__get16le(s);\n   int tga_palette_len = stbi__get16le(s);\n   int tga_palette_bits = stbi__get8(s);\n   int tga_x_origin = stbi__get16le(s);\n   int tga_y_origin = stbi__get16le(s);\n   int tga_width = stbi__get16le(s);\n   int tga_height = stbi__get16le(s);\n   int tga_bits_per_pixel = stbi__get8(s);\n   int tga_comp = tga_bits_per_pixel / 8;\n   int tga_inverted = stbi__get8(s);\n   //   image data\n   unsigned char *tga_data;\n   unsigned char *tga_palette = NULL;\n   int i, j;\n   unsigned char raw_data[4];\n   int RLE_count = 0;\n   int RLE_repeating = 0;\n   int read_next_pixel = 1;\n\n   //   do a tiny bit of processing\n   if ( tga_image_type >= 8 )\n   {\n      tga_image_type -= 8;\n      tga_is_RLE = 1;\n   }\n   /* int tga_alpha_bits = tga_inverted & 15; */\n   tga_inverted = 1 - ((tga_inverted >> 5) & 1);\n\n   //   error check\n   if ( //(tga_indexed) ||\n      (tga_width < 1) || (tga_height < 1) ||\n      (tga_image_type < 1) || (tga_image_type > 3) ||\n      ((tga_bits_per_pixel != 8) && (tga_bits_per_pixel != 16) &&\n      (tga_bits_per_pixel != 24) && (tga_bits_per_pixel != 32))\n      )\n   {\n      return NULL; // we don't report this as a bad TGA because we don't even know if it's TGA\n   }\n\n   //   If I'm paletted, then I'll use the number of bits from the palette\n   if ( tga_indexed )\n   {\n      tga_comp = tga_palette_bits / 8;\n   }\n\n   //   tga info\n   *x = tga_width;\n   *y = tga_height;\n   if (comp) *comp = tga_comp;\n\n   tga_data = (unsigned char*)stbi__malloc( (size_t)tga_width * tga_height * tga_comp );\n   if (!tga_data) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n\n   // skip to the data's starting position (offset usually = 0)\n   stbi__skip(s, tga_offset );\n\n   if ( !tga_indexed && !tga_is_RLE) {\n      for (i=0; i < 
tga_height; ++i) {\n         int y = tga_inverted ? tga_height -i - 1 : i;\n         stbi_uc *tga_row = tga_data + y*tga_width*tga_comp;\n         stbi__getn(s, tga_row, tga_width * tga_comp);\n      }\n   } else  {\n      //   do I need to load a palette?\n      if ( tga_indexed)\n      {\n         //   any data to skip? (offset usually = 0)\n         stbi__skip(s, tga_palette_start );\n         //   load the palette\n         tga_palette = (unsigned char*)stbi__malloc( tga_palette_len * tga_palette_bits / 8 );\n         if (!tga_palette) {\n            STBI_FREE(tga_data);\n            return stbi__errpuc(\"outofmem\", \"Out of memory\");\n         }\n         if (!stbi__getn(s, tga_palette, tga_palette_len * tga_palette_bits / 8 )) {\n            STBI_FREE(tga_data);\n            STBI_FREE(tga_palette);\n            return stbi__errpuc(\"bad palette\", \"Corrupt TGA\");\n         }\n      }\n      //   load the data\n      for (i=0; i < tga_width * tga_height; ++i)\n      {\n         //   if I'm in RLE mode, do I need to get a RLE stbi__pngchunk?\n         if ( tga_is_RLE )\n         {\n            if ( RLE_count == 0 )\n            {\n               //   yep, get the next byte as a RLE command\n               int RLE_cmd = stbi__get8(s);\n               RLE_count = 1 + (RLE_cmd & 127);\n               RLE_repeating = RLE_cmd >> 7;\n               read_next_pixel = 1;\n            } else if ( !RLE_repeating )\n            {\n               read_next_pixel = 1;\n            }\n         } else\n         {\n            read_next_pixel = 1;\n         }\n         //   OK, if I need to read a pixel, do it now\n         if ( read_next_pixel )\n         {\n            //   load however much data we did have\n            if ( tga_indexed )\n            {\n               //   read in 1 byte, then perform the lookup\n               int pal_idx = stbi__get8(s);\n               if ( pal_idx >= tga_palette_len )\n               {\n                  //   invalid index\n        
          pal_idx = 0;\n               }\n               pal_idx *= tga_bits_per_pixel / 8;\n               for (j = 0; j*8 < tga_bits_per_pixel; ++j)\n               {\n                  raw_data[j] = tga_palette[pal_idx+j];\n               }\n            } else\n            {\n               //   read in the data raw\n               for (j = 0; j*8 < tga_bits_per_pixel; ++j)\n               {\n                  raw_data[j] = stbi__get8(s);\n               }\n            }\n            //   clear the reading flag for the next pixel\n            read_next_pixel = 0;\n         } // end of reading a pixel\n\n         // copy data\n         for (j = 0; j < tga_comp; ++j)\n           tga_data[i*tga_comp+j] = raw_data[j];\n\n         //   in case we're in RLE mode, keep counting down\n         --RLE_count;\n      }\n      //   do I need to invert the image?\n      if ( tga_inverted )\n      {\n         for (j = 0; j*2 < tga_height; ++j)\n         {\n            int index1 = j * tga_width * tga_comp;\n            int index2 = (tga_height - 1 - j) * tga_width * tga_comp;\n            for (i = tga_width * tga_comp; i > 0; --i)\n            {\n               unsigned char temp = tga_data[index1];\n               tga_data[index1] = tga_data[index2];\n               tga_data[index2] = temp;\n               ++index1;\n               ++index2;\n            }\n         }\n      }\n      //   clear my palette, if I had one\n      if ( tga_palette != NULL )\n      {\n         STBI_FREE( tga_palette );\n      }\n   }\n\n   // swap RGB\n   if (tga_comp >= 3)\n   {\n      unsigned char* tga_pixel = tga_data;\n      for (i=0; i < tga_width * tga_height; ++i)\n      {\n         unsigned char temp = tga_pixel[0];\n         tga_pixel[0] = tga_pixel[2];\n         tga_pixel[2] = temp;\n         tga_pixel += tga_comp;\n      }\n   }\n\n   // convert to target component count\n   if (req_comp && req_comp != tga_comp)\n      tga_data = stbi__convert_format(tga_data, tga_comp, req_comp, 
tga_width, tga_height);\n\n   //   the things I do to get rid of an error message, and yet keep\n   //   Microsoft's C compilers happy... [8^(\n   tga_palette_start = tga_palette_len = tga_palette_bits =\n         tga_x_origin = tga_y_origin = 0;\n   //   OK, done\n   return tga_data;\n}\n#endif\n\n// *************************************************************************************************\n// Photoshop PSD loader -- PD by Thatcher Ulrich, integration by Nicolas Schulz, tweaked by STB\n\n#ifndef STBI_NO_PSD\nstatic int stbi__psd_test(stbi__context *s)\n{\n   int r = (stbi__get32be(s) == 0x38425053);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic stbi_uc *stbi__psd_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   int   pixelCount;\n   int channelCount, compression;\n   int channel, i, count, len;\n   int w,h;\n   stbi_uc *out;\n\n   // Check identifier\n   if (stbi__get32be(s) != 0x38425053)   // \"8BPS\"\n      return stbi__errpuc(\"not PSD\", \"Corrupt PSD image\");\n\n   // Check file type version.\n   if (stbi__get16be(s) != 1)\n      return stbi__errpuc(\"wrong version\", \"Unsupported version of PSD image\");\n\n   // Skip 6 reserved bytes.\n   stbi__skip(s, 6 );\n\n   // Read the number of channels (R, G, B, A, etc).\n   channelCount = stbi__get16be(s);\n   if (channelCount < 0 || channelCount > 16)\n      return stbi__errpuc(\"wrong channel count\", \"Unsupported number of channels in PSD image\");\n\n   // Read the rows and columns of the image.\n   h = stbi__get32be(s);\n   w = stbi__get32be(s);\n\n   // Make sure the depth is 8 bits.\n   if (stbi__get16be(s) != 8)\n      return stbi__errpuc(\"unsupported bit depth\", \"PSD bit depth is not 8 bit\");\n\n   // Make sure the color mode is RGB.\n   // Valid options are:\n   //   0: Bitmap\n   //   1: Grayscale\n   //   2: Indexed color\n   //   3: RGB color\n   //   4: CMYK color\n   //   7: Multichannel\n   //   8: Duotone\n   //   9: Lab color\n   if (stbi__get16be(s) != 3)\n  
    return stbi__errpuc(\"wrong color format\", \"PSD is not in RGB color format\");\n\n   // Skip the Mode Data.  (It's the palette for indexed color; other info for other modes.)\n   stbi__skip(s,stbi__get32be(s) );\n\n   // Skip the image resources.  (resolution, pen tool paths, etc)\n   stbi__skip(s, stbi__get32be(s) );\n\n   // Skip the reserved data.\n   stbi__skip(s, stbi__get32be(s) );\n\n   // Find out if the data is compressed.\n   // Known values:\n   //   0: no compression\n   //   1: RLE compressed\n   compression = stbi__get16be(s);\n   if (compression > 1)\n      return stbi__errpuc(\"bad compression\", \"PSD has an unknown compression format\");\n\n   // Create the destination image.\n   out = (stbi_uc *) stbi__malloc(4 * w*h);\n   if (!out) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   pixelCount = w*h;\n\n   // Initialize the data to zero.\n   //memset( out, 0, pixelCount * 4 );\n\n   // Finally, the image data.\n   if (compression) {\n      // RLE as used by .PSD and .TIFF\n      // Loop until you get the number of unpacked bytes you are expecting:\n      //     Read the next source byte into n.\n      //     If n is between 0 and 127 inclusive, copy the next n+1 bytes literally.\n      //     Else if n is between -127 and -1 inclusive, copy the next byte -n+1 times.\n      //     Else if n is 128, noop.\n      // Endloop\n\n      // The RLE-compressed data is preceded by a 2-byte data count for each row in the data,\n      // which we're going to just skip.\n      stbi__skip(s, h * channelCount * 2 );\n\n      // Read the RLE data by channel.\n      for (channel = 0; channel < 4; channel++) {\n         stbi_uc *p;\n\n         p = out+channel;\n         if (channel >= channelCount) {\n            // Fill this channel with default data.\n            for (i = 0; i < pixelCount; i++, p += 4)\n               *p = (channel == 3 ? 
255 : 0);\n         } else {\n            // Read the RLE data.\n            count = 0;\n            while (count < pixelCount) {\n               len = stbi__get8(s);\n               if (len == 128) {\n                  // No-op.\n               } else if (len < 128) {\n                  // Copy next len+1 bytes literally.\n                  len++;\n                  count += len;\n                  while (len) {\n                     *p = stbi__get8(s);\n                     p += 4;\n                     len--;\n                  }\n               } else if (len > 128) {\n                  stbi_uc   val;\n                  // Next -len+1 bytes in the dest are replicated from next source byte.\n                  // (Interpret len as a negative 8-bit int.)\n                  len ^= 0x0FF;\n                  len += 2;\n                  val = stbi__get8(s);\n                  count += len;\n                  while (len) {\n                     *p = val;\n                     p += 4;\n                     len--;\n                  }\n               }\n            }\n         }\n      }\n\n   } else {\n      // We're at the raw image data.  It's each channel in order (Red, Green, Blue, Alpha, ...)\n      // where each channel consists of an 8-bit value for each pixel in the image.\n\n      // Read the data by channel.\n      for (channel = 0; channel < 4; channel++) {\n         stbi_uc *p;\n\n         p = out + channel;\n         if (channel >= channelCount) {\n            // Fill this channel with default data.\n            for (i = 0; i < pixelCount; i++, p += 4)\n               *p = channel == 3 ? 
255 : 0;\n         } else {\n            // Read the data.\n            for (i = 0; i < pixelCount; i++, p += 4)\n               *p = stbi__get8(s);\n         }\n      }\n   }\n\n   if (req_comp && req_comp != 4) {\n      out = stbi__convert_format(out, 4, req_comp, w, h);\n      if (out == NULL) return out; // stbi__convert_format frees input on failure\n   }\n\n   if (comp) *comp = 4;\n   *y = h;\n   *x = w;\n\n   return out;\n}\n#endif\n\n// *************************************************************************************************\n// Softimage PIC loader\n// by Tom Seddon\n//\n// See http://softimage.wiki.softimage.com/index.php/INFO:_PIC_file_format\n// See http://ozviz.wasp.uwa.edu.au/~pbourke/dataformats/softimagepic/\n\n#ifndef STBI_NO_PIC\nstatic int stbi__pic_is4(stbi__context *s,const char *str)\n{\n   int i;\n   for (i=0; i<4; ++i)\n      if (stbi__get8(s) != (stbi_uc)str[i])\n         return 0;\n\n   return 1;\n}\n\nstatic int stbi__pic_test_core(stbi__context *s)\n{\n   int i;\n\n   if (!stbi__pic_is4(s,\"\\x53\\x80\\xF6\\x34\"))\n      return 0;\n\n   for(i=0;i<84;++i)\n      stbi__get8(s);\n\n   if (!stbi__pic_is4(s,\"PICT\"))\n      return 0;\n\n   return 1;\n}\n\ntypedef struct\n{\n   stbi_uc size,type,channel;\n} stbi__pic_packet;\n\nstatic stbi_uc *stbi__readval(stbi__context *s, int channel, stbi_uc *dest)\n{\n   int mask=0x80, i;\n\n   for (i=0; i<4; ++i, mask>>=1) {\n      if (channel & mask) {\n         if (stbi__at_eof(s)) return stbi__errpuc(\"bad file\",\"PIC file too short\");\n         dest[i]=stbi__get8(s);\n      }\n   }\n\n   return dest;\n}\n\nstatic void stbi__copyval(int channel,stbi_uc *dest,const stbi_uc *src)\n{\n   int mask=0x80,i;\n\n   for (i=0;i<4; ++i, mask>>=1)\n      if (channel&mask)\n         dest[i]=src[i];\n}\n\nstatic stbi_uc *stbi__pic_load_core(stbi__context *s,int width,int height,int *comp, stbi_uc *result)\n{\n   int act_comp=0,num_packets=0,y,chained;\n   stbi__pic_packet packets[10];\n\n   // this will 
(should...) cater for even some bizarre stuff like having data\n    // for the same channel in multiple packets.\n   do {\n      stbi__pic_packet *packet;\n\n      if (num_packets==sizeof(packets)/sizeof(packets[0]))\n         return stbi__errpuc(\"bad format\",\"too many packets\");\n\n      packet = &packets[num_packets++];\n\n      chained = stbi__get8(s);\n      packet->size    = stbi__get8(s);\n      packet->type    = stbi__get8(s);\n      packet->channel = stbi__get8(s);\n\n      act_comp |= packet->channel;\n\n      if (stbi__at_eof(s))          return stbi__errpuc(\"bad file\",\"file too short (reading packets)\");\n      if (packet->size != 8)  return stbi__errpuc(\"bad format\",\"packet isn't 8bpp\");\n   } while (chained);\n\n   *comp = (act_comp & 0x10 ? 4 : 3); // has alpha channel?\n\n   for(y=0; y<height; ++y) {\n      int packet_idx;\n\n      for(packet_idx=0; packet_idx < num_packets; ++packet_idx) {\n         stbi__pic_packet *packet = &packets[packet_idx];\n         stbi_uc *dest = result+y*width*4;\n\n         switch (packet->type) {\n            default:\n               return stbi__errpuc(\"bad format\",\"packet has bad compression type\");\n\n            case 0: {//uncompressed\n               int x;\n\n               for(x=0;x<width;++x, dest+=4)\n                  if (!stbi__readval(s,packet->channel,dest))\n                     return 0;\n               break;\n            }\n\n            case 1://Pure RLE\n               {\n                  int left=width, i;\n\n                  while (left>0) {\n                     stbi_uc count,value[4];\n\n                     count=stbi__get8(s);\n                     if (stbi__at_eof(s))   return stbi__errpuc(\"bad file\",\"file too short (pure read count)\");\n\n                     if (count > left)\n                        count = (stbi_uc) left;\n\n                     if (!stbi__readval(s,packet->channel,value))  return 0;\n\n                     for(i=0; i<count; ++i,dest+=4)\n              
          stbi__copyval(packet->channel,dest,value);\n                     left -= count;\n                  }\n               }\n               break;\n\n            case 2: {//Mixed RLE\n               int left=width;\n               while (left>0) {\n                  int count = stbi__get8(s), i;\n                  if (stbi__at_eof(s))  return stbi__errpuc(\"bad file\",\"file too short (mixed read count)\");\n\n                  if (count >= 128) { // Repeated\n                     stbi_uc value[4];\n                     int i;\n\n                     if (count==128)\n                        count = stbi__get16be(s);\n                     else\n                        count -= 127;\n                     if (count > left)\n                        return stbi__errpuc(\"bad file\",\"scanline overrun\");\n\n                     if (!stbi__readval(s,packet->channel,value))\n                        return 0;\n\n                     for(i=0;i<count;++i, dest += 4)\n                        stbi__copyval(packet->channel,dest,value);\n                  } else { // Raw\n                     ++count;\n                     if (count>left) return stbi__errpuc(\"bad file\",\"scanline overrun\");\n\n                     for(i=0;i<count;++i, dest+=4)\n                        if (!stbi__readval(s,packet->channel,dest))\n                           return 0;\n                  }\n                  left-=count;\n               }\n               break;\n            }\n         }\n      }\n   }\n\n   return result;\n}\n\nstatic stbi_uc *stbi__pic_load(stbi__context *s,int *px,int *py,int *comp,int req_comp)\n{\n   stbi_uc *result;\n   int i, x,y;\n\n   for (i=0; i<92; ++i)\n      stbi__get8(s);\n\n   x = stbi__get16be(s);\n   y = stbi__get16be(s);\n   if (stbi__at_eof(s))  return stbi__errpuc(\"bad file\",\"file too short (pic header)\");\n   if ((1 << 28) / x < y) return stbi__errpuc(\"too large\", \"Image too large to decode\");\n\n   stbi__get32be(s); //skip `ratio'\n   
stbi__get16be(s); //skip `fields'\n   stbi__get16be(s); //skip `pad'\n\n   // intermediate buffer is RGBA\n   result = (stbi_uc *) stbi__malloc(x*y*4);\n   if (!result) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   memset(result, 0xff, x*y*4);\n\n   if (!stbi__pic_load_core(s,x,y,comp, result)) {\n      STBI_FREE(result);\n      result=0;\n   }\n   *px = x;\n   *py = y;\n   if (req_comp == 0) req_comp = *comp;\n   result=stbi__convert_format(result,4,req_comp,x,y);\n\n   return result;\n}\n\nstatic int stbi__pic_test(stbi__context *s)\n{\n   int r = stbi__pic_test_core(s);\n   stbi__rewind(s);\n   return r;\n}\n#endif\n\n// *************************************************************************************************\n// GIF loader -- public domain by Jean-Marc Lienher -- simplified/shrunk by stb\n\n#ifndef STBI_NO_GIF\ntypedef struct\n{\n   stbi__int16 prefix;\n   stbi_uc first;\n   stbi_uc suffix;\n} stbi__gif_lzw;\n\ntypedef struct\n{\n   int w,h;\n   stbi_uc *out;                 // output buffer (always 4 components)\n   int flags, bgindex, ratio, transparent, eflags;\n   stbi_uc  pal[256][4];\n   stbi_uc lpal[256][4];\n   stbi__gif_lzw codes[4096];\n   stbi_uc *color_table;\n   int parse, step;\n   int lflags;\n   int start_x, start_y;\n   int max_x, max_y;\n   int cur_x, cur_y;\n   int line_size;\n} stbi__gif;\n\nstatic int stbi__gif_test_raw(stbi__context *s)\n{\n   int sz;\n   if (stbi__get8(s) != 'G' || stbi__get8(s) != 'I' || stbi__get8(s) != 'F' || stbi__get8(s) != '8') return 0;\n   sz = stbi__get8(s);\n   if (sz != '9' && sz != '7') return 0;\n   if (stbi__get8(s) != 'a') return 0;\n   return 1;\n}\n\nstatic int stbi__gif_test(stbi__context *s)\n{\n   int r = stbi__gif_test_raw(s);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic void stbi__gif_parse_colortable(stbi__context *s, stbi_uc pal[256][4], int num_entries, int transp)\n{\n   int i;\n   for (i=0; i < num_entries; ++i) {\n      pal[i][2] = stbi__get8(s);\n      pal[i][1] = stbi__get8(s);\n      pal[i][0] = stbi__get8(s);\n      pal[i][3] = transp == 
i ? 0 : 255;\n   }\n}\n\nstatic int stbi__gif_header(stbi__context *s, stbi__gif *g, int *comp, int is_info)\n{\n   stbi_uc version;\n   if (stbi__get8(s) != 'G' || stbi__get8(s) != 'I' || stbi__get8(s) != 'F' || stbi__get8(s) != '8')\n      return stbi__err(\"not GIF\", \"Corrupt GIF\");\n\n   version = stbi__get8(s);\n   if (version != '7' && version != '9')    return stbi__err(\"not GIF\", \"Corrupt GIF\");\n   if (stbi__get8(s) != 'a')                return stbi__err(\"not GIF\", \"Corrupt GIF\");\n\n   stbi__g_failure_reason = \"\";\n   g->w = stbi__get16le(s);\n   g->h = stbi__get16le(s);\n   g->flags = stbi__get8(s);\n   g->bgindex = stbi__get8(s);\n   g->ratio = stbi__get8(s);\n   g->transparent = -1;\n\n   if (comp != 0) *comp = 4;  // can't actually tell whether it's 3 or 4 until we parse the comments\n\n   if (is_info) return 1;\n\n   if (g->flags & 0x80)\n      stbi__gif_parse_colortable(s,g->pal, 2 << (g->flags & 7), -1);\n\n   return 1;\n}\n\nstatic int stbi__gif_info_raw(stbi__context *s, int *x, int *y, int *comp)\n{\n   stbi__gif g;\n   if (!stbi__gif_header(s, &g, comp, 1)) {\n      stbi__rewind( s );\n      return 0;\n   }\n   if (x) *x = g.w;\n   if (y) *y = g.h;\n   return 1;\n}\n\nstatic void stbi__out_gif_code(stbi__gif *g, stbi__uint16 code)\n{\n   stbi_uc *p, *c;\n\n   // recurse to decode the prefixes, since the linked-list is backwards,\n   // and working backwards through an interleaved image would be nasty\n   if (g->codes[code].prefix >= 0)\n      stbi__out_gif_code(g, g->codes[code].prefix);\n\n   if (g->cur_y >= g->max_y) return;\n\n   p = &g->out[g->cur_x + g->cur_y];\n   c = &g->color_table[g->codes[code].suffix * 4];\n\n   if (c[3] >= 128) {\n      p[0] = c[2];\n      p[1] = c[1];\n      p[2] = c[0];\n      p[3] = c[3];\n   }\n   g->cur_x += 4;\n\n   if (g->cur_x >= g->max_x) {\n      g->cur_x = g->start_x;\n      g->cur_y += g->step;\n\n      while (g->cur_y >= g->max_y && g->parse > 0) {\n         g->step = (1 << g->parse) * 
g->line_size;\n         g->cur_y = g->start_y + (g->step >> 1);\n         --g->parse;\n      }\n   }\n}\n\nstatic stbi_uc *stbi__process_gif_raster(stbi__context *s, stbi__gif *g)\n{\n   stbi_uc lzw_cs;\n   stbi__int32 len, code;\n   stbi__uint32 first;\n   stbi__int32 codesize, codemask, avail, oldcode, bits, valid_bits, clear;\n   stbi__gif_lzw *p;\n\n   lzw_cs = stbi__get8(s);\n   if (lzw_cs > 12) return NULL;\n   clear = 1 << lzw_cs;\n   first = 1;\n   codesize = lzw_cs + 1;\n   codemask = (1 << codesize) - 1;\n   bits = 0;\n   valid_bits = 0;\n   for (code = 0; code < clear; code++) {\n      g->codes[code].prefix = -1;\n      g->codes[code].first = (stbi_uc) code;\n      g->codes[code].suffix = (stbi_uc) code;\n   }\n\n   // support no starting clear code\n   avail = clear+2;\n   oldcode = -1;\n\n   len = 0;\n   for(;;) {\n      if (valid_bits < codesize) {\n         if (len == 0) {\n            len = stbi__get8(s); // start new block\n            if (len == 0)\n               return g->out;\n         }\n         --len;\n         bits |= (stbi__int32) stbi__get8(s) << valid_bits;\n         valid_bits += 8;\n      } else {\n         stbi__int32 code = bits & codemask;\n         bits >>= codesize;\n         valid_bits -= codesize;\n         // @OPTIMIZE: is there some way we can accelerate the non-clear path?\n         if (code == clear) {  // clear code\n            codesize = lzw_cs + 1;\n            codemask = (1 << codesize) - 1;\n            avail = clear + 2;\n            oldcode = -1;\n            first = 0;\n         } else if (code == clear + 1) { // end of stream code\n            stbi__skip(s, len);\n            while ((len = stbi__get8(s)) > 0)\n               stbi__skip(s,len);\n            return g->out;\n         } else if (code <= avail) {\n            if (first) return stbi__errpuc(\"no clear code\", \"Corrupt GIF\");\n\n            if (oldcode >= 0) {\n               p = &g->codes[avail++];\n               if (avail > 4096)        return 
stbi__errpuc(\"too many codes\", \"Corrupt GIF\");\n               p->prefix = (stbi__int16) oldcode;\n               p->first = g->codes[oldcode].first;\n               p->suffix = (code == avail) ? p->first : g->codes[code].first;\n            } else if (code == avail)\n               return stbi__errpuc(\"illegal code in raster\", \"Corrupt GIF\");\n\n            stbi__out_gif_code(g, (stbi__uint16) code);\n\n            if ((avail & codemask) == 0 && avail <= 0x0FFF) {\n               codesize++;\n               codemask = (1 << codesize) - 1;\n            }\n\n            oldcode = code;\n         } else {\n            return stbi__errpuc(\"illegal code in raster\", \"Corrupt GIF\");\n         }\n      }\n   }\n}\n\nstatic void stbi__fill_gif_background(stbi__gif *g)\n{\n   int i;\n   stbi_uc *c = g->pal[g->bgindex];\n   // @OPTIMIZE: write a dword at a time\n   for (i = 0; i < g->w * g->h * 4; i += 4) {\n      stbi_uc *p  = &g->out[i];\n      p[0] = c[2];\n      p[1] = c[1];\n      p[2] = c[0];\n      p[3] = c[3];\n   }\n}\n\n// this function is designed to support animated gifs, although stb_image doesn't support it\nstatic stbi_uc *stbi__gif_load_next(stbi__context *s, stbi__gif *g, int *comp, int req_comp)\n{\n   int i;\n   stbi_uc *old_out = 0;\n\n   if (g->out == 0) {\n      if (!stbi__gif_header(s, g, comp,0))     return 0; // stbi__g_failure_reason set by stbi__gif_header\n      g->out = (stbi_uc *) stbi__malloc(4 * g->w * g->h);\n      if (g->out == 0)                      return stbi__errpuc(\"outofmem\", \"Out of memory\");\n      stbi__fill_gif_background(g);\n   } else {\n      // animated-gif-only path\n      if (((g->eflags & 0x1C) >> 2) == 3) {\n         old_out = g->out;\n         g->out = (stbi_uc *) stbi__malloc(4 * g->w * g->h);\n         if (g->out == 0)                   return stbi__errpuc(\"outofmem\", \"Out of memory\");\n         memcpy(g->out, old_out, g->w*g->h*4);\n      }\n   }\n\n   for (;;) {\n      switch (stbi__get8(s)) {\n    
     case 0x2C: /* Image Descriptor */\n         {\n            stbi__int32 x, y, w, h;\n            stbi_uc *o;\n\n            x = stbi__get16le(s);\n            y = stbi__get16le(s);\n            w = stbi__get16le(s);\n            h = stbi__get16le(s);\n            if (((x + w) > (g->w)) || ((y + h) > (g->h)))\n               return stbi__errpuc(\"bad Image Descriptor\", \"Corrupt GIF\");\n\n            g->line_size = g->w * 4;\n            g->start_x = x * 4;\n            g->start_y = y * g->line_size;\n            g->max_x   = g->start_x + w * 4;\n            g->max_y   = g->start_y + h * g->line_size;\n            g->cur_x   = g->start_x;\n            g->cur_y   = g->start_y;\n\n            g->lflags = stbi__get8(s);\n\n            if (g->lflags & 0x40) {\n               g->step = 8 * g->line_size; // first interlaced spacing\n               g->parse = 3;\n            } else {\n               g->step = g->line_size;\n               g->parse = 0;\n            }\n\n            if (g->lflags & 0x80) {\n               stbi__gif_parse_colortable(s,g->lpal, 2 << (g->lflags & 7), g->eflags & 0x01 ? 
g->transparent : -1);\n               g->color_table = (stbi_uc *) g->lpal;\n            } else if (g->flags & 0x80) {\n               for (i=0; i < 256; ++i)  // @OPTIMIZE: stbi__jpeg_reset only the previous transparent\n                  g->pal[i][3] = 255;\n               if (g->transparent >= 0 && (g->eflags & 0x01))\n                  g->pal[g->transparent][3] = 0;\n               g->color_table = (stbi_uc *) g->pal;\n            } else\n               return stbi__errpuc(\"missing color table\", \"Corrupt GIF\");\n\n            o = stbi__process_gif_raster(s, g);\n            if (o == NULL) return NULL;\n\n            if (req_comp && req_comp != 4)\n               o = stbi__convert_format(o, 4, req_comp, g->w, g->h);\n            return o;\n         }\n\n         case 0x21: // Comment Extension.\n         {\n            int len;\n            if (stbi__get8(s) == 0xF9) { // Graphic Control Extension.\n               len = stbi__get8(s);\n               if (len == 4) {\n                  g->eflags = stbi__get8(s);\n                  stbi__get16le(s); // delay\n                  g->transparent = stbi__get8(s);\n               } else {\n                  stbi__skip(s, len);\n                  break;\n               }\n            }\n            while ((len = stbi__get8(s)) != 0)\n               stbi__skip(s, len);\n            break;\n         }\n\n         case 0x3B: // gif stream termination code\n            return (stbi_uc *) s; // using '1' causes warning on some compilers\n\n         default:\n            return stbi__errpuc(\"unknown code\", \"Corrupt GIF\");\n      }\n   }\n}\n\nstatic stbi_uc *stbi__gif_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi_uc *u = 0;\n   stbi__gif g;\n   memset(&g, 0, sizeof(g));\n\n   u = stbi__gif_load_next(s, &g, comp, req_comp);\n   if (u == (stbi_uc *) s) u = 0;  // end of animated gif marker\n   if (u) {\n      *x = g.w;\n      *y = g.h;\n   }\n\n   return u;\n}\n\nstatic int 
stbi__gif_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   return stbi__gif_info_raw(s,x,y,comp);\n}\n#endif\n\n// *************************************************************************************************\n// Radiance RGBE HDR loader\n// originally by Nicolas Schulz\n#ifndef STBI_NO_HDR\nstatic int stbi__hdr_test_core(stbi__context *s)\n{\n   const char *signature = \"#?RADIANCE\\n\";\n   int i;\n   for (i=0; signature[i]; ++i)\n      if (stbi__get8(s) != signature[i])\n         return 0;\n   return 1;\n}\n\nstatic int stbi__hdr_test(stbi__context* s)\n{\n   int r = stbi__hdr_test_core(s);\n   stbi__rewind(s);\n   return r;\n}\n\n#define STBI__HDR_BUFLEN  1024\nstatic char *stbi__hdr_gettoken(stbi__context *z, char *buffer)\n{\n   int len=0;\n   char c = '\\0';\n\n   c = (char) stbi__get8(z);\n\n   while (!stbi__at_eof(z) && c != '\\n') {\n      buffer[len++] = c;\n      if (len == STBI__HDR_BUFLEN-1) {\n         // flush to end of line\n         while (!stbi__at_eof(z) && stbi__get8(z) != '\\n')\n            ;\n         break;\n      }\n      c = (char) stbi__get8(z);\n   }\n\n   buffer[len] = 0;\n   return buffer;\n}\n\nstatic void stbi__hdr_convert(float *output, stbi_uc *input, int req_comp)\n{\n   if ( input[3] != 0 ) {\n      float f1;\n      // Exponent\n      f1 = (float) ldexp(1.0f, input[3] - (int)(128 + 8));\n      if (req_comp <= 2)\n         output[0] = (input[0] + input[1] + input[2]) * f1 / 3;\n      else {\n         output[0] = input[0] * f1;\n         output[1] = input[1] * f1;\n         output[2] = input[2] * f1;\n      }\n      if (req_comp == 2) output[1] = 1;\n      if (req_comp == 4) output[3] = 1;\n   } else {\n      switch (req_comp) {\n         case 4: output[3] = 1; /* fallthrough */\n         case 3: output[0] = output[1] = output[2] = 0;\n                 break;\n         case 2: output[1] = 1; /* fallthrough */\n         case 1: output[0] = 0;\n                 break;\n      }\n   }\n}\n\nstatic float 
*stbi__hdr_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   char buffer[STBI__HDR_BUFLEN];\n   char *token;\n   int valid = 0;\n   int width, height;\n   stbi_uc *scanline;\n   float *hdr_data;\n   int len;\n   unsigned char count, value;\n   int i, j, k, c1,c2, z;\n\n\n   // Check identifier\n   if (strcmp(stbi__hdr_gettoken(s,buffer), \"#?RADIANCE\") != 0)\n      return stbi__errpf(\"not HDR\", \"Corrupt HDR image\");\n\n   // Parse header\n   for(;;) {\n      token = stbi__hdr_gettoken(s,buffer);\n      if (token[0] == 0) break;\n      if (strcmp(token, \"FORMAT=32-bit_rle_rgbe\") == 0) valid = 1;\n   }\n\n   if (!valid)    return stbi__errpf(\"unsupported format\", \"Unsupported HDR format\");\n\n   // Parse width and height\n   // can't use sscanf() if we're not using stdio!\n   token = stbi__hdr_gettoken(s,buffer);\n   if (strncmp(token, \"-Y \", 3))  return stbi__errpf(\"unsupported data layout\", \"Unsupported HDR format\");\n   token += 3;\n   height = (int) strtol(token, &token, 10);\n   while (*token == ' ') ++token;\n   if (strncmp(token, \"+X \", 3))  return stbi__errpf(\"unsupported data layout\", \"Unsupported HDR format\");\n   token += 3;\n   width = (int) strtol(token, NULL, 10);\n\n   *x = width;\n   *y = height;\n\n   if (comp) *comp = 3;\n   if (req_comp == 0) req_comp = 3;\n\n   // Read data\n   hdr_data = (float *) stbi__malloc(height * width * req_comp * sizeof(float));\n   if (!hdr_data) return stbi__errpf(\"outofmem\", \"Out of memory\");\n\n   // Load image data\n   // image data is stored as some number of scan lines\n   if ( width < 8 || width >= 32768) {\n      // Read flat data\n      for (j=0; j < height; ++j) {\n         for (i=0; i < width; ++i) {\n            stbi_uc rgbe[4];\n           main_decode_loop:\n            stbi__getn(s, rgbe, 4);\n            stbi__hdr_convert(hdr_data + j * width * req_comp + i * req_comp, rgbe, req_comp);\n         }\n      }\n   } else {\n      // Read RLE-encoded data\n      scanline = NULL;\n\n      for (j = 0; j < height; ++j) {\n         c1 = 
stbi__get8(s);\n         c2 = stbi__get8(s);\n         len = stbi__get8(s);\n         if (c1 != 2 || c2 != 2 || (len & 0x80)) {\n            // not run-length encoded, so we have to actually use THIS data as a decoded\n            // pixel (note this can't be a valid pixel--one of RGB must be >= 128)\n            stbi_uc rgbe[4];\n            rgbe[0] = (stbi_uc) c1;\n            rgbe[1] = (stbi_uc) c2;\n            rgbe[2] = (stbi_uc) len;\n            rgbe[3] = (stbi_uc) stbi__get8(s);\n            stbi__hdr_convert(hdr_data, rgbe, req_comp);\n            i = 1;\n            j = 0;\n            STBI_FREE(scanline);\n            goto main_decode_loop; // yes, this makes no sense\n         }\n         len <<= 8;\n         len |= stbi__get8(s);\n         if (len != width) { STBI_FREE(hdr_data); STBI_FREE(scanline); return stbi__errpf(\"invalid decoded scanline length\", \"corrupt HDR\"); }\n         if (scanline == NULL) scanline = (stbi_uc *) stbi__malloc(width * 4);\n\n         for (k = 0; k < 4; ++k) {\n            i = 0;\n            while (i < width) {\n               count = stbi__get8(s);\n               if (count > 128) {\n                  // Run\n                  value = stbi__get8(s);\n                  count -= 128;\n                  for (z = 0; z < count; ++z)\n                     scanline[i++ * 4 + k] = value;\n               } else {\n                  // Dump\n                  for (z = 0; z < count; ++z)\n                     scanline[i++ * 4 + k] = stbi__get8(s);\n               }\n            }\n         }\n         for (i=0; i < width; ++i)\n            stbi__hdr_convert(hdr_data+(j*width + i)*req_comp, scanline + i*4, req_comp);\n      }\n      STBI_FREE(scanline);\n   }\n\n   return hdr_data;\n}\n\nstatic int stbi__hdr_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   char buffer[STBI__HDR_BUFLEN];\n   char *token;\n   int valid = 0;\n\n   if (strcmp(stbi__hdr_gettoken(s,buffer), \"#?RADIANCE\") != 0) {\n       stbi__rewind( s );\n     
  return 0;\n   }\n\n   for(;;) {\n      token = stbi__hdr_gettoken(s,buffer);\n      if (token[0] == 0) break;\n      if (strcmp(token, \"FORMAT=32-bit_rle_rgbe\") == 0) valid = 1;\n   }\n\n   if (!valid) {\n       stbi__rewind( s );\n       return 0;\n   }\n   token = stbi__hdr_gettoken(s,buffer);\n   if (strncmp(token, \"-Y \", 3)) {\n       stbi__rewind( s );\n       return 0;\n   }\n   token += 3;\n   *y = (int) strtol(token, &token, 10);\n   while (*token == ' ') ++token;\n   if (strncmp(token, \"+X \", 3)) {\n       stbi__rewind( s );\n       return 0;\n   }\n   token += 3;\n   *x = (int) strtol(token, NULL, 10);\n   *comp = 3;\n   return 1;\n}\n#endif // STBI_NO_HDR\n\n#ifndef STBI_NO_BMP\nstatic int stbi__bmp_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int hsz;\n   if (stbi__get8(s) != 'B' || stbi__get8(s) != 'M') {\n       stbi__rewind( s );\n       return 0;\n   }\n   stbi__skip(s,12);\n   hsz = stbi__get32le(s);\n   if (hsz != 12 && hsz != 40 && hsz != 56 && hsz != 108 && hsz != 124) {\n       stbi__rewind( s );\n       return 0;\n   }\n   if (hsz == 12) {\n      *x = stbi__get16le(s);\n      *y = stbi__get16le(s);\n   } else {\n      *x = stbi__get32le(s);\n      *y = stbi__get32le(s);\n   }\n   if (stbi__get16le(s) != 1) {\n       stbi__rewind( s );\n       return 0;\n   }\n   *comp = stbi__get16le(s) / 8;\n   return 1;\n}\n#endif\n\n#ifndef STBI_NO_PSD\nstatic int stbi__psd_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int channelCount;\n   if (stbi__get32be(s) != 0x38425053) {\n       stbi__rewind( s );\n       return 0;\n   }\n   if (stbi__get16be(s) != 1) {\n       stbi__rewind( s );\n       return 0;\n   }\n   stbi__skip(s, 6);\n   channelCount = stbi__get16be(s);\n   if (channelCount < 0 || channelCount > 16) {\n       stbi__rewind( s );\n       return 0;\n   }\n   *y = stbi__get32be(s);\n   *x = stbi__get32be(s);\n   if (stbi__get16be(s) != 8) {\n       stbi__rewind( s );\n       return 0;\n   }\n   if (stbi__get16be(s) 
!= 3) {\n       stbi__rewind( s );\n       return 0;\n   }\n   *comp = 4;\n   return 1;\n}\n#endif\n\n#ifndef STBI_NO_PIC\nstatic int stbi__pic_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int act_comp=0,num_packets=0,chained;\n   stbi__pic_packet packets[10];\n\n   stbi__skip(s, 92);\n\n   *x = stbi__get16be(s);\n   *y = stbi__get16be(s);\n   if (stbi__at_eof(s))  return 0;\n   if ( (*x) != 0 && (1 << 28) / (*x) < (*y)) {\n       stbi__rewind( s );\n       return 0;\n   }\n\n   stbi__skip(s, 8);\n\n   do {\n      stbi__pic_packet *packet;\n\n      if (num_packets==sizeof(packets)/sizeof(packets[0]))\n         return 0;\n\n      packet = &packets[num_packets++];\n      chained = stbi__get8(s);\n      packet->size    = stbi__get8(s);\n      packet->type    = stbi__get8(s);\n      packet->channel = stbi__get8(s);\n      act_comp |= packet->channel;\n\n      if (stbi__at_eof(s)) {\n          stbi__rewind( s );\n          return 0;\n      }\n      if (packet->size != 8) {\n          stbi__rewind( s );\n          return 0;\n      }\n   } while (chained);\n\n   *comp = (act_comp & 0x10 ? 
4 : 3);\n\n   return 1;\n}\n#endif\n\n// *************************************************************************************************\n// Portable Gray Map and Portable Pixel Map loader\n// by Ken Miller\n//\n// PGM: http://netpbm.sourceforge.net/doc/pgm.html\n// PPM: http://netpbm.sourceforge.net/doc/ppm.html\n//\n// Known limitations:\n//    Does not support comments in the header section\n//    Does not support ASCII image data (formats P2 and P3)\n//    Does not support 16-bit-per-channel\n\n#ifndef STBI_NO_PNM\n\nstatic int      stbi__pnm_test(stbi__context *s)\n{\n   char p, t;\n   p = (char) stbi__get8(s);\n   t = (char) stbi__get8(s);\n   if (p != 'P' || (t != '5' && t != '6')) {\n       stbi__rewind( s );\n       return 0;\n   }\n   return 1;\n}\n\nstatic stbi_uc *stbi__pnm_load(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi_uc *out;\n   if (!stbi__pnm_info(s, (int *)&s->img_x, (int *)&s->img_y, (int *)&s->img_n))\n      return 0;\n   *x = s->img_x;\n   *y = s->img_y;\n   *comp = s->img_n;\n\n   out = (stbi_uc *) stbi__malloc(s->img_n * s->img_x * s->img_y);\n   if (!out) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   stbi__getn(s, out, s->img_n * s->img_x * s->img_y);\n\n   if (req_comp && req_comp != s->img_n) {\n      out = stbi__convert_format(out, s->img_n, req_comp, s->img_x, s->img_y);\n      if (out == NULL) return out; // stbi__convert_format frees input on failure\n   }\n   return out;\n}\n\nstatic int      stbi__pnm_isspace(char c)\n{\n   return c == ' ' || c == '\\t' || c == '\\n' || c == '\\v' || c == '\\f' || c == '\\r';\n}\n\nstatic void     stbi__pnm_skip_whitespace(stbi__context *s, char *c)\n{\n   while (!stbi__at_eof(s) && stbi__pnm_isspace(*c))\n      *c = (char) stbi__get8(s);\n}\n\nstatic int      stbi__pnm_isdigit(char c)\n{\n   return c >= '0' && c <= '9';\n}\n\nstatic int      stbi__pnm_getinteger(stbi__context *s, char *c)\n{\n   int value = 0;\n\n   while (!stbi__at_eof(s) && 
stbi__pnm_isdigit(*c)) {\n      value = value*10 + (*c - '0');\n      *c = (char) stbi__get8(s);\n   }\n\n   return value;\n}\n\nstatic int      stbi__pnm_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int maxv;\n   char c, p, t;\n\n   stbi__rewind( s );\n\n   // Get identifier\n   p = (char) stbi__get8(s);\n   t = (char) stbi__get8(s);\n   if (p != 'P' || (t != '5' && t != '6')) {\n       stbi__rewind( s );\n       return 0;\n   }\n\n   *comp = (t == '6') ? 3 : 1;  // '5' is 1-component .pgm; '6' is 3-component .ppm\n\n   c = (char) stbi__get8(s);\n   stbi__pnm_skip_whitespace(s, &c);\n\n   *x = stbi__pnm_getinteger(s, &c); // read width\n   stbi__pnm_skip_whitespace(s, &c);\n\n   *y = stbi__pnm_getinteger(s, &c); // read height\n   stbi__pnm_skip_whitespace(s, &c);\n\n   maxv = stbi__pnm_getinteger(s, &c);  // read max value\n\n   if (maxv > 255)\n      return stbi__err(\"max value > 255\", \"PPM image not 8-bit\");\n   else\n      return 1;\n}\n#endif\n\nstatic int stbi__info_main(stbi__context *s, int *x, int *y, int *comp)\n{\n   #ifndef STBI_NO_JPEG\n   if (stbi__jpeg_info(s, x, y, comp)) return 1;\n   #endif\n\n   #ifndef STBI_NO_PNG\n   if (stbi__png_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_GIF\n   if (stbi__gif_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_BMP\n   if (stbi__bmp_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PSD\n   if (stbi__psd_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PIC\n   if (stbi__pic_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PNM\n   if (stbi__pnm_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_HDR\n   if (stbi__hdr_info(s, x, y, comp))  return 1;\n   #endif\n\n   // test tga last because it's a crappy test!\n   #ifndef STBI_NO_TGA\n   if (stbi__tga_info(s, x, y, comp))\n       return 1;\n   #endif\n   return stbi__err(\"unknown image type\", \"Image not of any known type, or corrupt\");\n}\n\n#ifndef 
STBI_NO_STDIO\nSTBIDEF int stbi_info(char const *filename, int *x, int *y, int *comp)\n{\n    FILE *f = stbi__fopen(filename, \"rb\");\n    int result;\n    if (!f) return stbi__err(\"can't fopen\", \"Unable to open file\");\n    result = stbi_info_from_file(f, x, y, comp);\n    fclose(f);\n    return result;\n}\n\nSTBIDEF int stbi_info_from_file(FILE *f, int *x, int *y, int *comp)\n{\n   int r;\n   stbi__context s;\n   long pos = ftell(f);\n   stbi__start_file(&s, f);\n   r = stbi__info_main(&s,x,y,comp);\n   fseek(f,pos,SEEK_SET);\n   return r;\n}\n#endif // !STBI_NO_STDIO\n\nSTBIDEF int stbi_info_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__info_main(&s,x,y,comp);\n}\n\nSTBIDEF int stbi_info_from_callbacks(stbi_io_callbacks const *c, void *user, int *x, int *y, int *comp)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) c, user);\n   return stbi__info_main(&s,x,y,comp);\n}\n\n#endif // STB_IMAGE_IMPLEMENTATION\n\n/*\n   revision history:\n      2.06  (2015-04-19) fix bug where PSD returns wrong '*comp' value\n      2.05  (2015-04-19) fix bug in progressive JPEG handling, fix warning\n      2.04  (2015-04-15) try to re-enable SIMD on MinGW 64-bit\n      2.03  (2015-04-12) extra corruption checking (mmozeiko)\n                         stbi_set_flip_vertically_on_load (nguillemot)\n                         fix NEON support; fix mingw support\n      2.02  (2015-01-19) fix incorrect assert, fix warning\n      2.01  (2015-01-17) fix various warnings; suppress SIMD on gcc 32-bit without -msse2\n      2.00b (2014-12-25) fix STBI_MALLOC in progressive JPEG\n      2.00  (2014-12-25) optimize JPG, including x86 SSE2 & NEON SIMD (ryg)\n                         progressive JPEG (stb)\n                         PGM/PPM support (Ken Miller)\n                         STBI_MALLOC,STBI_REALLOC,STBI_FREE\n                         GIF bugfix -- seemingly 
never worked\n                         STBI_NO_*, STBI_ONLY_*\n      1.48  (2014-12-14) fix incorrectly-named assert()\n      1.47  (2014-12-14) 1/2/4-bit PNG support, both direct and paletted (Omar Cornut & stb)\n                         optimize PNG (ryg)\n                         fix bug in interlaced PNG with user-specified channel count (stb)\n      1.46  (2014-08-26)\n              fix broken tRNS chunk (colorkey-style transparency) in non-paletted PNG\n      1.45  (2014-08-16)\n              fix MSVC-ARM internal compiler error by wrapping malloc\n      1.44  (2014-08-07)\n              various warning fixes from Ronny Chevalier\n      1.43  (2014-07-15)\n              fix MSVC-only compiler problem in code changed in 1.42\n      1.42  (2014-07-09)\n              don't define _CRT_SECURE_NO_WARNINGS (affects user code)\n              fixes to stbi__cleanup_jpeg path\n              added STBI_ASSERT to avoid requiring assert.h\n      1.41  (2014-06-25)\n              fix search&replace from 1.36 that messed up comments/error messages\n      1.40  (2014-06-22)\n              fix gcc struct-initialization warning\n      1.39  (2014-06-15)\n              fix to TGA optimization when req_comp != number of components in TGA;\n              fix to GIF loading because BMP wasn't rewinding (whoops, no GIFs in my test suite)\n              add support for BMP version 5 (more ignored fields)\n      1.38  (2014-06-06)\n              suppress MSVC warnings on integer casts truncating values\n              fix accidental rename of 'skip' field of I/O\n      1.37  (2014-06-04)\n              remove duplicate typedef\n      1.36  (2014-06-03)\n              convert to header file single-file library\n              if de-iphone isn't set, load iphone images color-swapped instead of returning NULL\n      1.35  (2014-05-27)\n              various warnings\n              fix broken STBI_SIMD path\n              fix bug where stbi_load_from_file no longer left file pointer in 
correct place\n              fix broken non-easy path for 32-bit BMP (possibly never used)\n              TGA optimization by Arseny Kapoulkine\n      1.34  (unknown)\n              use STBI_NOTUSED in stbi__resample_row_generic(), fix one more leak in tga failure case\n      1.33  (2011-07-14)\n              make stbi_is_hdr work in STBI_NO_HDR (as specified), minor compiler-friendly improvements\n      1.32  (2011-07-13)\n              support for \"info\" function for all supported filetypes (SpartanJ)\n      1.31  (2011-06-20)\n              a few more leak fixes, bug in PNG handling (SpartanJ)\n      1.30  (2011-06-11)\n              added ability to load files via callbacks to accomidate custom input streams (Ben Wenger)\n              removed deprecated format-specific test/load functions\n              removed support for installable file formats (stbi_loader) -- would have been broken for IO callbacks anyway\n              error cases in bmp and tga give messages and don't leak (Raymond Barbiero, grisha)\n              fix inefficiency in decoding 32-bit BMP (David Woo)\n      1.29  (2010-08-16)\n              various warning fixes from Aurelien Pocheville\n      1.28  (2010-08-01)\n              fix bug in GIF palette transparency (SpartanJ)\n      1.27  (2010-08-01)\n              cast-to-stbi_uc to fix warnings\n      1.26  (2010-07-24)\n              fix bug in file buffering for PNG reported by SpartanJ\n      1.25  (2010-07-17)\n              refix trans_data warning (Won Chun)\n      1.24  (2010-07-12)\n              perf improvements reading from files on platforms with lock-heavy fgetc()\n              minor perf improvements for jpeg\n              deprecated type-specific functions so we'll get feedback if they're needed\n              attempt to fix trans_data warning (Won Chun)\n      1.23    fixed bug in iPhone support\n      1.22  (2010-07-10)\n              removed image *writing* support\n              stbi_info support from Jetro Lauha\n  
            GIF support from Jean-Marc Lienher\n              iPhone PNG-extensions from James Brown\n              warning-fixes from Nicolas Schulz and Janez Zemva (i.stbi__err. Janez (U+017D)emva)\n      1.21    fix use of 'stbi_uc' in header (reported by jon blow)\n      1.20    added support for Softimage PIC, by Tom Seddon\n      1.19    bug in interlaced PNG corruption check (found by ryg)\n      1.18  (2008-08-02)\n              fix a threading bug (local mutable static)\n      1.17    support interlaced PNG\n      1.16    major bugfix - stbi__convert_format converted one too many pixels\n      1.15    initialize some fields for thread safety\n      1.14    fix threadsafe conversion bug\n              header-file-only version (#define STBI_HEADER_FILE_ONLY before including)\n      1.13    threadsafe\n      1.12    const qualifiers in the API\n      1.11    Support installable IDCT, colorspace conversion routines\n      1.10    Fixes for 64-bit (don't use \"unsigned long\")\n              optimized upsampling by Fabian \"ryg\" Giesen\n      1.09    Fix format-conversion for PSD code (bad global variables!)\n      1.08    Thatcher Ulrich's PSD code integrated by Nicolas Schulz\n      1.07    attempt to fix C++ warning/errors again\n      1.06    attempt to fix C++ warning/errors again\n      1.05    fix TGA loading to return correct *comp and use good luminance calc\n      1.04    default float alpha is 1, not 255; use 'void *' for stbi_image_free\n      1.03    bugfixes to STBI_NO_STDIO, STBI_NO_HDR\n      1.02    support for (subset of) HDR files, float interface for preferred access to them\n      1.01    fix bug: possible bug in handling right-side up bmps... 
not sure\n              fix bug: the stbi__bmp_load() and stbi__tga_load() functions didn't work at all\n      1.00    interface to zlib that skips zlib header\n      0.99    correct handling of alpha in palette\n      0.98    TGA loader by lonesock; dynamically add loaders (untested)\n      0.97    jpeg errors on too large a file; also catch another malloc failure\n      0.96    fix detection of invalid v value - particleman@mollyrocket forum\n      0.95    during header scan, seek to markers in case of padding\n      0.94    STBI_NO_STDIO to disable stdio usage; rename all #defines the same\n      0.93    handle jpegtran output; verbose errors\n      0.92    read 4,8,16,24,32-bit BMP files of several formats\n      0.91    output 24-bit Windows 3.0 BMP files\n      0.90    fix a few more warnings; bump version number to approach 1.0\n      0.61    bugfixes due to Marc LeBlanc, Christopher Lloyd\n      0.60    fix compiling as c++\n      0.59    fix warnings: merge Dave Moore's -Wall fixes\n      0.58    fix bug: zlib uncompressed mode len/nlen was wrong endian\n      0.57    fix bug: jpg last huffman symbol before marker was >9 bits but less than 16 available\n      0.56    fix bug: zlib uncompressed mode len vs. nlen\n      0.55    fix bug: restart_interval not initialized to 0\n      0.54    allow NULL for 'int *comp'\n      0.53    fix bug in png 3->4; speedup png decoding\n      0.52    png handles req_comp=3,4 directly; minor cleanup; jpeg comments\n      0.51    obey req_comp requests, 1-component jpegs return as 1-component,\n              on 'test' only check type, not whether we support this variant\n      0.50  (2006-11-19)\n              first released version\n*/\n"
  },
  {
    "path": "lightnet/_darknet/stb_image_write.h",
    "content": "/* stb_image_write - v0.98 - public domain - http://nothings.org/stb/stb_image_write.h\n   writes out PNG/BMP/TGA images to C stdio - Sean Barrett 2010\n                            no warranty implied; use at your own risk\n\n\n   Before #including,\n\n       #define STB_IMAGE_WRITE_IMPLEMENTATION\n\n   in the file that you want to have the implementation.\n\n   Will probably not work correctly with strict-aliasing optimizations.\n\nABOUT:\n\n   This header file is a library for writing images to C stdio. It could be\n   adapted to write to memory or a general streaming interface; let me know.\n\n   The PNG output is not optimal; it is 20-50% larger than the file\n   written by a decent optimizing implementation. This library is designed\n   for source code compactness and simplicitly, not optimal image file size\n   or run-time performance.\n\nBUILDING:\n\n   You can #define STBIW_ASSERT(x) before the #include to avoid using assert.h.\n   You can #define STBIW_MALLOC(), STBIW_REALLOC(), and STBIW_FREE() to replace\n   malloc,realloc,free.\n   You can define STBIW_MEMMOVE() to replace memmove()\n\nUSAGE:\n\n   There are four functions, one for each image file format:\n\n     int stbi_write_png(char const *filename, int w, int h, int comp, const void *data, int stride_in_bytes);\n     int stbi_write_bmp(char const *filename, int w, int h, int comp, const void *data);\n     int stbi_write_tga(char const *filename, int w, int h, int comp, const void *data);\n     int stbi_write_hdr(char const *filename, int w, int h, int comp, const void *data);\n\n   Each function returns 0 on failure and non-0 on success.\n\n   The functions create an image file defined by the parameters. The image\n   is a rectangle of pixels stored from left-to-right, top-to-bottom.\n   Each pixel contains 'comp' channels of data stored interleaved with 8-bits\n   per channel, in the following order: 1=Y, 2=YA, 3=RGB, 4=RGBA. (Y is\n   monochrome color.) 
The rectangle is 'w' pixels wide and 'h' pixels tall.\n   The *data pointer points to the first byte of the top-left-most pixel.\n   For PNG, \"stride_in_bytes\" is the distance in bytes from the first byte of\n   a row of pixels to the first byte of the next row of pixels.\n\n   PNG creates output files with the same number of components as the input.\n   The BMP format expands Y to RGB in the file format and does not\n   output alpha.\n\n   PNG supports writing rectangles of data even when the bytes storing rows of\n   data are not consecutive in memory (e.g. sub-rectangles of a larger image),\n   by supplying the stride between the beginning of adjacent rows. The other\n   formats do not. (Thus you cannot write a native-format BMP through the BMP\n   writer, both because it is in BGR order and because it may have padding\n   at the end of the line.)\n\n   HDR expects linear float data. Since the format is always 32-bit rgb(e)\n   data, alpha (if provided) is discarded, and for monochrome data it is\n   replicated across all three channels.\n\nCREDITS:\n\n   PNG/BMP/TGA\n      Sean Barrett\n   HDR\n      Baldur Karlsson\n   TGA monochrome:\n      Jean-Sebastien Guay\n   misc enhancements:\n      Tim Kelsey\n   bugfixes:\n      github:Chribba\n*/\n\n#ifndef INCLUDE_STB_IMAGE_WRITE_H\n#define INCLUDE_STB_IMAGE_WRITE_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nextern int stbi_write_png(char const *filename, int w, int h, int comp, const void  *data, int stride_in_bytes);\nextern int stbi_write_bmp(char const *filename, int w, int h, int comp, const void  *data);\nextern int stbi_write_tga(char const *filename, int w, int h, int comp, const void  *data);\nextern int stbi_write_hdr(char const *filename, int w, int h, int comp, const float *data);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif//INCLUDE_STB_IMAGE_WRITE_H\n\n#ifdef STB_IMAGE_WRITE_IMPLEMENTATION\n\n#include <stdarg.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include 
<math.h>\n\n#if defined(STBIW_MALLOC) && defined(STBIW_FREE) && defined(STBIW_REALLOC)\n// ok\n#elif !defined(STBIW_MALLOC) && !defined(STBIW_FREE) && !defined(STBIW_REALLOC)\n// ok\n#else\n#error \"Must define all or none of STBIW_MALLOC, STBIW_FREE, and STBIW_REALLOC.\"\n#endif\n\n#ifndef STBIW_MALLOC\n#define STBIW_MALLOC(sz)    malloc(sz)\n#define STBIW_REALLOC(p,sz) realloc(p,sz)\n#define STBIW_FREE(p)       free(p)\n#endif\n#ifndef STBIW_MEMMOVE\n#define STBIW_MEMMOVE(a,b,sz) memmove(a,b,sz)\n#endif\n\n\n#ifndef STBIW_ASSERT\n#include <assert.h>\n#define STBIW_ASSERT(x) assert(x)\n#endif\n\ntypedef unsigned int stbiw_uint32;\ntypedef int stb_image_write_test[sizeof(stbiw_uint32)==4 ? 1 : -1];\n\nstatic void writefv(FILE *f, const char *fmt, va_list v)\n{\n   while (*fmt) {\n      switch (*fmt++) {\n         case ' ': break;\n         case '1': { unsigned char x = (unsigned char) va_arg(v, int); fputc(x,f); break; }\n         case '2': { int x = va_arg(v,int); unsigned char b[2];\n                     b[0] = (unsigned char) x; b[1] = (unsigned char) (x>>8);\n                     fwrite(b,2,1,f); break; }\n         case '4': { stbiw_uint32 x = va_arg(v,int); unsigned char b[4];\n                     b[0]=(unsigned char)x; b[1]=(unsigned char)(x>>8);\n                     b[2]=(unsigned char)(x>>16); b[3]=(unsigned char)(x>>24);\n                     fwrite(b,4,1,f); break; }\n         default:\n            STBIW_ASSERT(0);\n            return;\n      }\n   }\n}\n\nstatic void write3(FILE *f, unsigned char a, unsigned char b, unsigned char c)\n{\n   unsigned char arr[3];\n   arr[0] = a, arr[1] = b, arr[2] = c;\n   fwrite(arr, 3, 1, f);\n}\n\nstatic void write_pixels(FILE *f, int rgb_dir, int vdir, int x, int y, int comp, void *data, int write_alpha, int scanline_pad, int expand_mono)\n{\n   unsigned char bg[3] = { 255, 0, 255}, px[3];\n   stbiw_uint32 zero = 0;\n   int i,j,k, j_end;\n\n   if (y <= 0)\n      return;\n\n   if (vdir < 0)\n      j_end = -1, j = 
y-1;\n   else\n      j_end =  y, j = 0;\n\n   for (; j != j_end; j += vdir) {\n      for (i=0; i < x; ++i) {\n         unsigned char *d = (unsigned char *) data + (j*x+i)*comp;\n         if (write_alpha < 0)\n            fwrite(&d[comp-1], 1, 1, f);\n         switch (comp) {\n            case 1: fwrite(d, 1, 1, f);\n                    break;\n            case 2: if (expand_mono)\n                       write3(f, d[0],d[0],d[0]); // monochrome bmp\n                    else\n                       fwrite(d, 1, 1, f);  // monochrome TGA\n                    break;\n            case 4:\n               if (!write_alpha) {\n                  // composite against pink background\n                  for (k=0; k < 3; ++k)\n                     px[k] = bg[k] + ((d[k] - bg[k]) * d[3])/255;\n                  write3(f, px[1-rgb_dir],px[1],px[1+rgb_dir]);\n                  break;\n               }\n               /* FALLTHROUGH */\n            case 3:\n               write3(f, d[1-rgb_dir],d[1],d[1+rgb_dir]);\n               break;\n         }\n         if (write_alpha > 0)\n            fwrite(&d[comp-1], 1, 1, f);\n      }\n      fwrite(&zero,scanline_pad,1,f);\n   }\n}\n\nstatic int outfile(char const *filename, int rgb_dir, int vdir, int x, int y, int comp, int expand_mono, void *data, int alpha, int pad, const char *fmt, ...)\n{\n   FILE *f;\n   if (y < 0 || x < 0) return 0;\n   f = fopen(filename, \"wb\");\n   if (f) {\n      va_list v;\n      va_start(v, fmt);\n      writefv(f, fmt, v);\n      va_end(v);\n      write_pixels(f,rgb_dir,vdir,x,y,comp,data,alpha,pad,expand_mono);\n      fclose(f);\n   }\n   return f != NULL;\n}\n\nint stbi_write_bmp(char const *filename, int x, int y, int comp, const void *data)\n{\n   int pad = (-x*3) & 3;\n   return outfile(filename,-1,-1,x,y,comp,1,(void *) data,0,pad,\n           \"11 4 22 4\" \"4 44 22 444444\",\n           'B', 'M', 14+40+(x*3+pad)*y, 0,0, 14+40,  // file header\n            40, x,y, 1,24, 0,0,0,0,0,0);             // 
bitmap header\n}\n\nint stbi_write_tga(char const *filename, int x, int y, int comp, const void *data)\n{\n   int has_alpha = (comp == 2 || comp == 4);\n   int colorbytes = has_alpha ? comp-1 : comp;\n   int format = colorbytes < 2 ? 3 : 2; // 3 color channels (RGB/RGBA) = 2, 1 color channel (Y/YA) = 3\n   return outfile(filename, -1,-1, x, y, comp, 0, (void *) data, has_alpha, 0,\n                  \"111 221 2222 11\", 0,0,format, 0,0,0, 0,0,x,y, (colorbytes+has_alpha)*8, has_alpha*8);\n}\n\n// *************************************************************************************************\n// Radiance RGBE HDR writer\n// by Baldur Karlsson\n#define stbiw__max(a, b)  ((a) > (b) ? (a) : (b))\n\nvoid stbiw__linear_to_rgbe(unsigned char *rgbe, float *linear)\n{\n   int exponent;\n   float maxcomp = stbiw__max(linear[0], stbiw__max(linear[1], linear[2]));\n\n   if (maxcomp < 1e-32) {\n      rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;\n   } else {\n      float normalize = (float) frexp(maxcomp, &exponent) * 256.0f/maxcomp;\n\n      rgbe[0] = (unsigned char)(linear[0] * normalize);\n      rgbe[1] = (unsigned char)(linear[1] * normalize);\n      rgbe[2] = (unsigned char)(linear[2] * normalize);\n      rgbe[3] = (unsigned char)(exponent + 128);\n   }\n}\n\nvoid stbiw__write_run_data(FILE *f, int length, unsigned char databyte)\n{\n   unsigned char lengthbyte = (unsigned char) (length+128);\n   STBIW_ASSERT(length+128 <= 255);\n   fwrite(&lengthbyte, 1, 1, f);\n   fwrite(&databyte, 1, 1, f);\n}\n\nvoid stbiw__write_dump_data(FILE *f, int length, unsigned char *data)\n{\n   unsigned char lengthbyte = (unsigned char )(length & 0xff);\n   STBIW_ASSERT(length <= 128); // inconsistent with spec but consistent with official code\n   fwrite(&lengthbyte, 1, 1, f);\n   fwrite(data, length, 1, f);\n}\n\nvoid stbiw__write_hdr_scanline(FILE *f, int width, int comp, unsigned char *scratch, const float *scanline)\n{\n   unsigned char scanlineheader[4] = { 2, 2, 0, 0 };\n   unsigned char 
rgbe[4];\n   float linear[3] = {0};\n   int x;\n\n   scanlineheader[2] = (width&0xff00)>>8;\n   scanlineheader[3] = (width&0x00ff);\n\n   /* skip RLE for images too small or large */\n   if (width < 8 || width >= 32768) {\n      for (x=0; x < width; x++) {\n         switch (comp) {\n            case 4: /* fallthrough */\n            case 3: linear[2] = scanline[x*comp + 2];\n                    linear[1] = scanline[x*comp + 1];\n                    linear[0] = scanline[x*comp + 0];\n                    break;\n            case 2: /* fallthrough */\n            case 1: linear[0] = linear[1] = linear[2] = scanline[x*comp + 0];\n                    break;\n         }\n         stbiw__linear_to_rgbe(rgbe, linear);\n         fwrite(rgbe, 4, 1, f);\n      }\n   } else {\n      int c,r;\n      /* encode into scratch buffer */\n      for (x=0; x < width; x++) {\n         switch(comp) {\n            case 4: /* fallthrough */\n            case 3: linear[2] = scanline[x*comp + 2];\n                    linear[1] = scanline[x*comp + 1];\n                    linear[0] = scanline[x*comp + 0];\n                    break;\n            case 2: /* fallthrough */\n            case 1: linear[0] = linear[1] = linear[2] = scanline[x*comp + 0];\n                    break;\n         }\n         stbiw__linear_to_rgbe(rgbe, linear);\n         scratch[x + width*0] = rgbe[0];\n         scratch[x + width*1] = rgbe[1];\n         scratch[x + width*2] = rgbe[2];\n         scratch[x + width*3] = rgbe[3];\n      }\n\n      fwrite(scanlineheader, 4, 1, f);\n\n      /* RLE each component separately */\n      for (c=0; c < 4; c++) {\n         unsigned char *comp = &scratch[width*c];\n\n         x = 0;\n         while (x < width) {\n            // find first run\n            r = x;\n            while (r+2 < width) {\n               if (comp[r] == comp[r+1] && comp[r] == comp[r+2])\n                  break;\n               ++r;\n            }\n            if (r+2 >= width)\n               r = width;\n    
        // dump up to first run\n            while (x < r) {\n               int len = r-x;\n               if (len > 128) len = 128;\n               stbiw__write_dump_data(f, len, &comp[x]);\n               x += len;\n            }\n            // if there's a run, output it\n            if (r+2 < width) { // same test as what we break out of in search loop, so only true if we break'd\n               // find next byte after run\n               while (r < width && comp[r] == comp[x])\n                  ++r;\n               // output run up to r\n               while (x < r) {\n                  int len = r-x;\n                  if (len > 127) len = 127;\n                  stbiw__write_run_data(f, len, comp[x]);\n                  x += len;\n               }\n            }\n         }\n      }\n   }\n}\n\nint stbi_write_hdr(char const *filename, int x, int y, int comp, const float *data)\n{\n   int i;\n   FILE *f;\n   if (y <= 0 || x <= 0 || data == NULL) return 0;\n   f = fopen(filename, \"wb\");\n   if (f) {\n      /* Each component is stored separately. Allocate scratch space for full output scanline. 
*/\n      unsigned char *scratch = (unsigned char *) STBIW_MALLOC(x*4);\n      fprintf(f, \"#?RADIANCE\\n# Written by stb_image_write.h\\nFORMAT=32-bit_rle_rgbe\\n\"      );\n      fprintf(f, \"EXPOSURE=          1.0000000000000\\n\\n-Y %d +X %d\\n\"                 , y, x);\n      for(i=0; i < y; i++)\n         stbiw__write_hdr_scanline(f, x, comp, scratch, data + comp*i*x);\n      STBIW_FREE(scratch);\n      fclose(f);\n   }\n   return f != NULL;\n}\n\n/////////////////////////////////////////////////////////\n// PNG\n\n// stretchy buffer; stbiw__sbpush() == vector<>::push_back() -- stbiw__sbcount() == vector<>::size()\n#define stbiw__sbraw(a) ((int *) (a) - 2)\n#define stbiw__sbm(a)   stbiw__sbraw(a)[0]\n#define stbiw__sbn(a)   stbiw__sbraw(a)[1]\n\n#define stbiw__sbneedgrow(a,n)  ((a)==0 || stbiw__sbn(a)+n >= stbiw__sbm(a))\n#define stbiw__sbmaybegrow(a,n) (stbiw__sbneedgrow(a,(n)) ? stbiw__sbgrow(a,n) : 0)\n#define stbiw__sbgrow(a,n)  stbiw__sbgrowf((void **) &(a), (n), sizeof(*(a)))\n\n#define stbiw__sbpush(a, v)      (stbiw__sbmaybegrow(a,1), (a)[stbiw__sbn(a)++] = (v))\n#define stbiw__sbcount(a)        ((a) ? stbiw__sbn(a) : 0)\n#define stbiw__sbfree(a)         ((a) ? STBIW_FREE(stbiw__sbraw(a)),0 : 0)\n\nstatic void *stbiw__sbgrowf(void **arr, int increment, int itemsize)\n{\n   int m = *arr ? 2*stbiw__sbm(*arr)+increment : increment+1;\n   void *p = STBIW_REALLOC(*arr ? 
stbiw__sbraw(*arr) : 0, itemsize * m + sizeof(int)*2);\n   STBIW_ASSERT(p);\n   if (p) {\n      if (!*arr) ((int *) p)[1] = 0;\n      *arr = (void *) ((int *) p + 2);\n      stbiw__sbm(*arr) = m;\n   }\n   return *arr;\n}\n\nstatic unsigned char *stbiw__zlib_flushf(unsigned char *data, unsigned int *bitbuffer, int *bitcount)\n{\n   while (*bitcount >= 8) {\n      stbiw__sbpush(data, (unsigned char) *bitbuffer);\n      *bitbuffer >>= 8;\n      *bitcount -= 8;\n   }\n   return data;\n}\n\nstatic int stbiw__zlib_bitrev(int code, int codebits)\n{\n   int res=0;\n   while (codebits--) {\n      res = (res << 1) | (code & 1);\n      code >>= 1;\n   }\n   return res;\n}\n\nstatic unsigned int stbiw__zlib_countm(unsigned char *a, unsigned char *b, int limit)\n{\n   int i;\n   for (i=0; i < limit && i < 258; ++i)\n      if (a[i] != b[i]) break;\n   return i;\n}\n\nstatic unsigned int stbiw__zhash(unsigned char *data)\n{\n   stbiw_uint32 hash = data[0] + (data[1] << 8) + (data[2] << 16);\n   hash ^= hash << 3;\n   hash += hash >> 5;\n   hash ^= hash << 4;\n   hash += hash >> 17;\n   hash ^= hash << 25;\n   hash += hash >> 6;\n   return hash;\n}\n\n#define stbiw__zlib_flush() (out = stbiw__zlib_flushf(out, &bitbuf, &bitcount))\n#define stbiw__zlib_add(code,codebits) \\\n      (bitbuf |= (code) << bitcount, bitcount += (codebits), stbiw__zlib_flush())\n#define stbiw__zlib_huffa(b,c)  stbiw__zlib_add(stbiw__zlib_bitrev(b,c),c)\n// default huffman tables\n#define stbiw__zlib_huff1(n)  stbiw__zlib_huffa(0x30 + (n), 8)\n#define stbiw__zlib_huff2(n)  stbiw__zlib_huffa(0x190 + (n)-144, 9)\n#define stbiw__zlib_huff3(n)  stbiw__zlib_huffa(0 + (n)-256,7)\n#define stbiw__zlib_huff4(n)  stbiw__zlib_huffa(0xc0 + (n)-280,8)\n#define stbiw__zlib_huff(n)  ((n) <= 143 ? stbiw__zlib_huff1(n) : (n) <= 255 ? stbiw__zlib_huff2(n) : (n) <= 279 ? stbiw__zlib_huff3(n) : stbiw__zlib_huff4(n))\n#define stbiw__zlib_huffb(n) ((n) <= 143 ? 
stbiw__zlib_huff1(n) : stbiw__zlib_huff2(n))\n\n#define stbiw__ZHASH   16384\n\nunsigned char * stbi_zlib_compress(unsigned char *data, int data_len, int *out_len, int quality)\n{\n   static unsigned short lengthc[] = { 3,4,5,6,7,8,9,10,11,13,15,17,19,23,27,31,35,43,51,59,67,83,99,115,131,163,195,227,258, 259 };\n   static unsigned char  lengtheb[]= { 0,0,0,0,0,0,0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4,  4,  5,  5,  5,  5,  0 };\n   static unsigned short distc[]   = { 1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577, 32768 };\n   static unsigned char  disteb[]  = { 0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13 };\n   unsigned int bitbuf=0;\n   int i,j, bitcount=0;\n   unsigned char *out = NULL;\n   unsigned char **hash_table[stbiw__ZHASH]; // 64KB on the stack!\n   if (quality < 5) quality = 5;\n\n   stbiw__sbpush(out, 0x78);   // DEFLATE 32K window\n   stbiw__sbpush(out, 0x5e);   // FLEVEL = 1\n   stbiw__zlib_add(1,1);  // BFINAL = 1\n   stbiw__zlib_add(1,2);  // BTYPE = 1 -- fixed huffman\n\n   for (i=0; i < stbiw__ZHASH; ++i)\n      hash_table[i] = NULL;\n\n   i=0;\n   while (i < data_len-3) {\n      // hash next 3 bytes of data to be compressed\n      int h = stbiw__zhash(data+i)&(stbiw__ZHASH-1), best=3;\n      unsigned char *bestloc = 0;\n      unsigned char **hlist = hash_table[h];\n      int n = stbiw__sbcount(hlist);\n      for (j=0; j < n; ++j) {\n         if (hlist[j]-data > i-32768) { // if entry lies within window\n            int d = stbiw__zlib_countm(hlist[j], data+i, data_len-i);\n            if (d >= best) best=d,bestloc=hlist[j];\n         }\n      }\n      // when hash table entry is too long, delete half the entries\n      if (hash_table[h] && stbiw__sbn(hash_table[h]) == 2*quality) {\n         STBIW_MEMMOVE(hash_table[h], hash_table[h]+quality, sizeof(hash_table[h][0])*quality);\n         stbiw__sbn(hash_table[h]) = quality;\n      }\n      
stbiw__sbpush(hash_table[h],data+i);\n\n      if (bestloc) {\n         // \"lazy matching\" - check match at *next* byte, and if it's better, do cur byte as literal\n         h = stbiw__zhash(data+i+1)&(stbiw__ZHASH-1);\n         hlist = hash_table[h];\n         n = stbiw__sbcount(hlist);\n         for (j=0; j < n; ++j) {\n            if (hlist[j]-data > i-32767) {\n               int e = stbiw__zlib_countm(hlist[j], data+i+1, data_len-i-1);\n               if (e > best) { // if next match is better, bail on current match\n                  bestloc = NULL;\n                  break;\n               }\n            }\n         }\n      }\n\n      if (bestloc) {\n         int d = (int) (data+i - bestloc); // distance back\n         STBIW_ASSERT(d <= 32767 && best <= 258);\n         for (j=0; best > lengthc[j+1]-1; ++j);\n         stbiw__zlib_huff(j+257);\n         if (lengtheb[j]) stbiw__zlib_add(best - lengthc[j], lengtheb[j]);\n         for (j=0; d > distc[j+1]-1; ++j);\n         stbiw__zlib_add(stbiw__zlib_bitrev(j,5),5);\n         if (disteb[j]) stbiw__zlib_add(d - distc[j], disteb[j]);\n         i += best;\n      } else {\n         stbiw__zlib_huffb(data[i]);\n         ++i;\n      }\n   }\n   // write out final bytes\n   for (;i < data_len; ++i)\n      stbiw__zlib_huffb(data[i]);\n   stbiw__zlib_huff(256); // end of block\n   // pad with 0 bits to byte boundary\n   while (bitcount)\n      stbiw__zlib_add(0,1);\n\n   for (i=0; i < stbiw__ZHASH; ++i)\n      (void) stbiw__sbfree(hash_table[i]);\n\n   {\n      // compute adler32 on input\n      unsigned int i=0, s1=1, s2=0, blocklen = data_len % 5552;\n      int j=0;\n      while (j < data_len) {\n         for (i=0; i < blocklen; ++i) s1 += data[j+i], s2 += s1;\n         s1 %= 65521, s2 %= 65521;\n         j += blocklen;\n         blocklen = 5552;\n      }\n      stbiw__sbpush(out, (unsigned char) (s2 >> 8));\n      stbiw__sbpush(out, (unsigned char) s2);\n      stbiw__sbpush(out, (unsigned char) (s1 >> 8));\n      
stbiw__sbpush(out, (unsigned char) s1);\n   }\n   *out_len = stbiw__sbn(out);\n   // make returned pointer freeable\n   STBIW_MEMMOVE(stbiw__sbraw(out), out, *out_len);\n   return (unsigned char *) stbiw__sbraw(out);\n}\n\nunsigned int stbiw__crc32(unsigned char *buffer, int len)\n{\n   static unsigned int crc_table[256];\n   unsigned int crc = ~0u;\n   int i,j;\n   if (crc_table[1] == 0)\n      for(i=0; i < 256; i++)\n         for (crc_table[i]=i, j=0; j < 8; ++j)\n            crc_table[i] = (crc_table[i] >> 1) ^ (crc_table[i] & 1 ? 0xedb88320 : 0);\n   for (i=0; i < len; ++i)\n      crc = (crc >> 8) ^ crc_table[buffer[i] ^ (crc & 0xff)];\n   return ~crc;\n}\n\n#define stbiw__wpng4(o,a,b,c,d) ((o)[0]=(unsigned char)(a),(o)[1]=(unsigned char)(b),(o)[2]=(unsigned char)(c),(o)[3]=(unsigned char)(d),(o)+=4)\n#define stbiw__wp32(data,v) stbiw__wpng4(data, (v)>>24,(v)>>16,(v)>>8,(v));\n#define stbiw__wptag(data,s) stbiw__wpng4(data, s[0],s[1],s[2],s[3])\n\nstatic void stbiw__wpcrc(unsigned char **data, int len)\n{\n   unsigned int crc = stbiw__crc32(*data - len - 4, len+4);\n   stbiw__wp32(*data, crc);\n}\n\nstatic unsigned char stbiw__paeth(int a, int b, int c)\n{\n   int p = a + b - c, pa = abs(p-a), pb = abs(p-b), pc = abs(p-c);\n   if (pa <= pb && pa <= pc) return (unsigned char) a;\n   if (pb <= pc) return (unsigned char) b;\n   return (unsigned char) c;\n}\n\nunsigned char *stbi_write_png_to_mem(unsigned char *pixels, int stride_bytes, int x, int y, int n, int *out_len)\n{\n   int ctype[5] = { -1, 0, 4, 2, 6 };\n   unsigned char sig[8] = { 137,80,78,71,13,10,26,10 };\n   unsigned char *out,*o, *filt, *zlib;\n   signed char *line_buffer;\n   int i,j,k,p,zlen;\n\n   if (stride_bytes == 0)\n      stride_bytes = x * n;\n\n   filt = (unsigned char *) STBIW_MALLOC((x*n+1) * y); if (!filt) return 0;\n   line_buffer = (signed char *) STBIW_MALLOC(x * n); if (!line_buffer) { STBIW_FREE(filt); return 0; }\n   for (j=0; j < y; ++j) {\n      static int mapping[] = { 0,1,2,3,4 
};\n      static int firstmap[] = { 0,1,0,5,6 };\n      int *mymap = j ? mapping : firstmap;\n      int best = 0, bestval = 0x7fffffff;\n      for (p=0; p < 2; ++p) {\n         for (k= p?best:0; k < 5; ++k) {\n            int type = mymap[k],est=0;\n            unsigned char *z = pixels + stride_bytes*j;\n            for (i=0; i < n; ++i)\n               switch (type) {\n                  case 0: line_buffer[i] = z[i]; break;\n                  case 1: line_buffer[i] = z[i]; break;\n                  case 2: line_buffer[i] = z[i] - z[i-stride_bytes]; break;\n                  case 3: line_buffer[i] = z[i] - (z[i-stride_bytes]>>1); break;\n                  case 4: line_buffer[i] = (signed char) (z[i] - stbiw__paeth(0,z[i-stride_bytes],0)); break;\n                  case 5: line_buffer[i] = z[i]; break;\n                  case 6: line_buffer[i] = z[i]; break;\n               }\n            for (i=n; i < x*n; ++i) {\n               switch (type) {\n                  case 0: line_buffer[i] = z[i]; break;\n                  case 1: line_buffer[i] = z[i] - z[i-n]; break;\n                  case 2: line_buffer[i] = z[i] - z[i-stride_bytes]; break;\n                  case 3: line_buffer[i] = z[i] - ((z[i-n] + z[i-stride_bytes])>>1); break;\n                  case 4: line_buffer[i] = z[i] - stbiw__paeth(z[i-n], z[i-stride_bytes], z[i-stride_bytes-n]); break;\n                  case 5: line_buffer[i] = z[i] - (z[i-n]>>1); break;\n                  case 6: line_buffer[i] = z[i] - stbiw__paeth(z[i-n], 0,0); break;\n               }\n            }\n            if (p) break;\n            for (i=0; i < x*n; ++i)\n               est += abs((signed char) line_buffer[i]);\n            if (est < bestval) { bestval = est; best = k; }\n         }\n      }\n      // when we get here, best contains the filter type, and line_buffer contains the data\n      filt[j*(x*n+1)] = (unsigned char) best;\n      STBIW_MEMMOVE(filt+j*(x*n+1)+1, line_buffer, x*n);\n   }\n   
STBIW_FREE(line_buffer);\n   zlib = stbi_zlib_compress(filt, y*( x*n+1), &zlen, 8); // increase 8 to get smaller but use more memory\n   STBIW_FREE(filt);\n   if (!zlib) return 0;\n\n   // each tag requires 12 bytes of overhead\n   out = (unsigned char *) STBIW_MALLOC(8 + 12+13 + 12+zlen + 12);\n   if (!out) return 0;\n   *out_len = 8 + 12+13 + 12+zlen + 12;\n\n   o=out;\n   STBIW_MEMMOVE(o,sig,8); o+= 8;\n   stbiw__wp32(o, 13); // header length\n   stbiw__wptag(o, \"IHDR\");\n   stbiw__wp32(o, x);\n   stbiw__wp32(o, y);\n   *o++ = 8;\n   *o++ = (unsigned char) ctype[n];\n   *o++ = 0;\n   *o++ = 0;\n   *o++ = 0;\n   stbiw__wpcrc(&o,13);\n\n   stbiw__wp32(o, zlen);\n   stbiw__wptag(o, \"IDAT\");\n   STBIW_MEMMOVE(o, zlib, zlen);\n   o += zlen;\n   STBIW_FREE(zlib);\n   stbiw__wpcrc(&o, zlen);\n\n   stbiw__wp32(o,0);\n   stbiw__wptag(o, \"IEND\");\n   stbiw__wpcrc(&o,0);\n\n   STBIW_ASSERT(o == out + *out_len);\n\n   return out;\n}\n\nint stbi_write_png(char const *filename, int x, int y, int comp, const void *data, int stride_bytes)\n{\n   FILE *f;\n   int len;\n   unsigned char *png = stbi_write_png_to_mem((unsigned char *) data, stride_bytes, x, y, comp, &len);\n   if (!png) return 0;\n   f = fopen(filename, \"wb\");\n   if (!f) { STBIW_FREE(png); return 0; }\n   fwrite(png, 1, len, f);\n   fclose(f);\n   STBIW_FREE(png);\n   return 1;\n}\n#endif // STB_IMAGE_WRITE_IMPLEMENTATION\n\n/* Revision history\n      0.98 (2015-04-08)\n             added STBIW_MALLOC, STBIW_ASSERT etc\n      0.97 (2015-01-18)\n             fixed HDR asserts, rewrote HDR rle logic\n      0.96 (2015-01-17)\n             add HDR output\n             fix monochrome BMP\n      0.95 (2014-08-17)\n\t\t       add monochrome TGA output\n      0.94 (2014-05-31)\n             rename private functions to avoid conflicts with stb_image.h\n      0.93 (2014-05-27)\n             warning fixes\n      0.92 (2010-08-01)\n             casts to unsigned char to fix warnings\n      0.91 (2010-07-17)\n          
   first public release\n      0.90   first internal release\n*/\n"
  },
  {
    "path": "lightnet/_darknet/tree.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include \"tree.h\"\n#include \"utils.h\"\n#include \"data.h\"\n\nvoid change_leaves(tree *t, char *leaf_list)\n{\n    list *llist = get_paths(leaf_list);\n    char **leaves = (char **)list_to_array(llist);\n    int n = llist->size;\n    int i,j;\n    int found = 0;\n    for(i = 0; i < t->n; ++i){\n        t->leaf[i] = 0;\n        for(j = 0; j < n; ++j){\n            if (0==strcmp(t->name[i], leaves[j])){\n                t->leaf[i] = 1;\n                ++found;\n                break;\n            }\n        }\n    }\n    fprintf(stderr, \"Found %d leaves.\\n\", found);\n}\n\nfloat get_hierarchy_probability(float *x, tree *hier, int c, int stride)\n{\n    float p = 1;\n    while(c >= 0){\n        p = p * x[c*stride];\n        c = hier->parent[c];\n    }\n    return p;\n}\n\nvoid hierarchy_predictions(float *predictions, int n, tree *hier, int only_leaves, int stride)\n{\n    int j;\n    for(j = 0; j < n; ++j){\n        int parent = hier->parent[j];\n        if(parent >= 0){\n            predictions[j*stride] *= predictions[parent*stride]; \n        }\n    }\n    if(only_leaves){\n        for(j = 0; j < n; ++j){\n            if(!hier->leaf[j]) predictions[j*stride] = 0;\n        }\n    }\n}\n\nint hierarchy_top_prediction(float *predictions, tree *hier, float thresh, int stride)\n{\n    float p = 1;\n    int group = 0;\n    int i;\n    while(1){\n        float max = 0;\n        int max_i = 0;\n\n        for(i = 0; i < hier->group_size[group]; ++i){\n            int index = i + hier->group_offset[group];\n            float val = predictions[(i + hier->group_offset[group])*stride];\n            if(val > max){\n                max_i = index;\n                max = val;\n            }\n        }\n        if(p*max > thresh){\n            p = p*max;\n            group = hier->child[max_i];\n            if(hier->child[max_i] < 0) return max_i;\n        } else if (group == 0){\n            return max_i;\n        } 
else {\n            return hier->parent[hier->group_offset[group]];\n        }\n    }\n    return 0;\n}\n\ntree *read_tree(char *filename)\n{\n    tree t = {0};\n    FILE *fp = fopen(filename, \"r\");\n    if(!fp) file_error(filename);\n\n    char *line;\n    int last_parent = -1;\n    int group_size = 0;\n    int groups = 0;\n    int n = 0;\n    while((line=fgetl(fp)) != 0){\n        char *id = calloc(256, sizeof(char));\n        int parent = -1;\n        sscanf(line, \"%s %d\", id, &parent);\n        t.parent = realloc(t.parent, (n+1)*sizeof(int));\n        t.parent[n] = parent;\n\n        t.child = realloc(t.child, (n+1)*sizeof(int));\n        t.child[n] = -1;\n\n        t.name = realloc(t.name, (n+1)*sizeof(char *));\n        t.name[n] = id;\n        if(parent != last_parent){\n            ++groups;\n            t.group_offset = realloc(t.group_offset, groups * sizeof(int));\n            t.group_offset[groups - 1] = n - group_size;\n            t.group_size = realloc(t.group_size, groups * sizeof(int));\n            t.group_size[groups - 1] = group_size;\n            group_size = 0;\n            last_parent = parent;\n        }\n        t.group = realloc(t.group, (n+1)*sizeof(int));\n        t.group[n] = groups;\n        if (parent >= 0) {\n            t.child[parent] = groups;\n        }\n        ++n;\n        ++group_size;\n    }\n    ++groups;\n    t.group_offset = realloc(t.group_offset, groups * sizeof(int));\n    t.group_offset[groups - 1] = n - group_size;\n    t.group_size = realloc(t.group_size, groups * sizeof(int));\n    t.group_size[groups - 1] = group_size;\n    t.n = n;\n    t.groups = groups;\n    t.leaf = calloc(n, sizeof(int));\n    int i;\n    for(i = 0; i < n; ++i) t.leaf[i] = 1;\n    for(i = 0; i < n; ++i) if(t.parent[i] >= 0) t.leaf[t.parent[i]] = 0;\n\n    fclose(fp);\n    tree *tree_ptr = calloc(1, sizeof(tree));\n    *tree_ptr = t;\n    return tree_ptr;\n}\n"
  },
  {
    "path": "lightnet/_darknet/tree.h",
    "content": "#ifndef TREE_H\n#define TREE_H\n#include \"darknet.h\"\n\ntree *read_tree(char *filename);\nint hierarchy_top_prediction(float *predictions, tree *hier, float thresh, int stride);\nfloat get_hierarchy_probability(float *x, tree *hier, int c, int stride);\n\n#endif\n"
  },
  {
    "path": "lightnet/_darknet/utils.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <math.h>\n#include <assert.h>\n#include <unistd.h>\n#include <float.h>\n#include <limits.h>\n#include <time.h>\n\n#include \"utils.h\"\n\n\n/*\n// old timing. is it better? who knows!!\ndouble get_wall_time()\n{\n    struct timeval time;\n    if (gettimeofday(&time,NULL)){\n        return 0;\n    }\n    return (double)time.tv_sec + (double)time.tv_usec * .000001;\n}\n*/\n\ndouble what_time_is_it_now()\n{\n    struct timespec now;\n    clock_gettime(CLOCK_REALTIME, &now);\n    return now.tv_sec + now.tv_nsec*1e-9;\n}\n\nint *read_intlist(char *gpu_list, int *ngpus, int d)\n{\n    int *gpus = 0;\n    if(gpu_list){\n        int len = strlen(gpu_list);\n        *ngpus = 1;\n        int i;\n        for(i = 0; i < len; ++i){\n            if (gpu_list[i] == ',') ++*ngpus;\n        }\n        gpus = calloc(*ngpus, sizeof(int));\n        for(i = 0; i < *ngpus; ++i){\n            gpus[i] = atoi(gpu_list);\n            gpu_list = strchr(gpu_list, ',')+1;\n        }\n    } else {\n        gpus = calloc(1, sizeof(float));\n        *gpus = d;\n        *ngpus = 1;\n    }\n    return gpus;\n}\n\nint *read_map(char *filename)\n{\n    int n = 0;\n    int *map = 0;\n    char *str;\n    FILE *file = fopen(filename, \"r\");\n    if(!file) file_error(filename);\n    while((str=fgetl(file))){\n        ++n;\n        map = realloc(map, n*sizeof(int));\n        map[n-1] = atoi(str);\n    }\n    return map;\n}\n\nvoid sorta_shuffle(void *arr, size_t n, size_t size, size_t sections)\n{\n    size_t i;\n    for(i = 0; i < sections; ++i){\n        size_t start = n*i/sections;\n        size_t end = n*(i+1)/sections;\n        size_t num = end-start;\n        shuffle(arr+(start*size), num, size);\n    }\n}\n\nvoid shuffle(void *arr, size_t n, size_t size)\n{\n    size_t i;\n    void *swp = calloc(1, size);\n    for(i = 0; i < n-1; ++i){\n        size_t j = i + rand()/(RAND_MAX / (n-i)+1);\n        memcpy(swp,     
     arr+(j*size), size);\n        memcpy(arr+(j*size), arr+(i*size), size);\n        memcpy(arr+(i*size), swp,          size);\n    }\n    free(swp);\n}\n\nint *random_index_order(int min, int max)\n{\n    int *inds = calloc(max-min, sizeof(int));\n    int i;\n    for(i = 0; i < max-min; ++i){\n        inds[i] = min + i;\n    }\n    for(i = 0; i < max-min-1; ++i){\n        int swap = inds[i];\n        int index = i + rand()%(max-min-i);\n        inds[i] = inds[index];\n        inds[index] = swap;\n    }\n    return inds;\n}\n\nvoid del_arg(int argc, char **argv, int index)\n{\n    int i;\n    for(i = index; i < argc-1; ++i) argv[i] = argv[i+1];\n    argv[i] = 0;\n}\n\nint find_arg(int argc, char* argv[], char *arg)\n{\n    int i;\n    for(i = 0; i < argc; ++i) {\n        if(!argv[i]) continue;\n        if(0==strcmp(argv[i], arg)) {\n            del_arg(argc, argv, i);\n            return 1;\n        }\n    }\n    return 0;\n}\n\nint find_int_arg(int argc, char **argv, char *arg, int def)\n{\n    int i;\n    for(i = 0; i < argc-1; ++i){\n        if(!argv[i]) continue;\n        if(0==strcmp(argv[i], arg)){\n            def = atoi(argv[i+1]);\n            del_arg(argc, argv, i);\n            del_arg(argc, argv, i);\n            break;\n        }\n    }\n    return def;\n}\n\nfloat find_float_arg(int argc, char **argv, char *arg, float def)\n{\n    int i;\n    for(i = 0; i < argc-1; ++i){\n        if(!argv[i]) continue;\n        if(0==strcmp(argv[i], arg)){\n            def = atof(argv[i+1]);\n            del_arg(argc, argv, i);\n            del_arg(argc, argv, i);\n            break;\n        }\n    }\n    return def;\n}\n\nchar *find_char_arg(int argc, char **argv, char *arg, char *def)\n{\n    int i;\n    for(i = 0; i < argc-1; ++i){\n        if(!argv[i]) continue;\n        if(0==strcmp(argv[i], arg)){\n            def = argv[i+1];\n            del_arg(argc, argv, i);\n            del_arg(argc, argv, i);\n            break;\n        }\n    }\n    return def;\n}\n\n\nchar *basecfg(char 
*cfgfile)\n{\n    char *c = cfgfile;\n    char *next;\n    while((next = strchr(c, '/')))\n    {\n        c = next+1;\n    }\n    c = copy_string(c);\n    next = strchr(c, '.');\n    if (next) *next = 0;\n    return c;\n}\n\nint alphanum_to_int(char c)\n{\n    return (c < 58) ? c - 48 : c-87;\n}\nchar int_to_alphanum(int i)\n{\n    if (i == 36) return '.';\n    return (i < 10) ? i + 48 : i + 87;\n}\n\nvoid pm(int M, int N, float *A)\n{\n    int i,j;\n    for(i =0 ; i < M; ++i){\n        printf(\"%d \", i+1);\n        for(j = 0; j < N; ++j){\n            printf(\"%2.4f, \", A[i*N+j]);\n        }\n        printf(\"\\n\");\n    }\n    printf(\"\\n\");\n}\n\nvoid find_replace(char *str, char *orig, char *rep, char *output)\n{\n    char buffer[4096] = {0};\n    char *p;\n\n    sprintf(buffer, \"%s\", str);\n    if(!(p = strstr(buffer, orig))){  // Is 'orig' even in 'str'?\n        sprintf(output, \"%s\", str);\n        return;\n    }\n\n    *p = '\\0';\n\n    sprintf(output, \"%s%s%s\", buffer, rep, p+strlen(orig));\n}\n\nfloat sec(clock_t clocks)\n{\n    return (float)clocks/CLOCKS_PER_SEC;\n}\n\nvoid top_k(float *a, int n, int k, int *index)\n{\n    int i,j;\n    for(j = 0; j < k; ++j) index[j] = -1;\n    for(i = 0; i < n; ++i){\n        int curr = i;\n        for(j = 0; j < k; ++j){\n            if((index[j] < 0) || a[curr] > a[index[j]]){\n                int swap = curr;\n                curr = index[j];\n                index[j] = swap;\n            }\n        }\n    }\n}\n\nvoid error(const char *s)\n{\n    perror(s);\n    assert(0);\n    exit(-1);\n}\n\nunsigned char *read_file(char *filename)\n{\n    FILE *fp = fopen(filename, \"rb\");\n    size_t size;\n\n    fseek(fp, 0, SEEK_END); \n    size = ftell(fp);\n    fseek(fp, 0, SEEK_SET); \n\n    unsigned char *text = calloc(size+1, sizeof(char));\n    fread(text, 1, size, fp);\n    fclose(fp);\n    return text;\n}\n\nvoid malloc_error()\n{\n    fprintf(stderr, \"Malloc error\\n\");\n    exit(-1);\n}\n\nvoid 
file_error(char *s)\n{\n    fprintf(stderr, \"Couldn't open file: %s\\n\", s);\n    exit(0);\n}\n\nlist *split_str(char *s, char delim)\n{\n    size_t i;\n    size_t len = strlen(s);\n    list *l = make_list();\n    list_insert(l, s);\n    for(i = 0; i < len; ++i){\n        if(s[i] == delim){\n            s[i] = '\\0';\n            list_insert(l, &(s[i+1]));\n        }\n    }\n    return l;\n}\n\nvoid strip(char *s)\n{\n    size_t i;\n    size_t len = strlen(s);\n    size_t offset = 0;\n    for(i = 0; i < len; ++i){\n        char c = s[i];\n        if(c==' '||c=='\\t'||c=='\\n') ++offset;\n        else s[i-offset] = c;\n    }\n    s[len-offset] = '\\0';\n}\n\nvoid strip_char(char *s, char bad)\n{\n    size_t i;\n    size_t len = strlen(s);\n    size_t offset = 0;\n    for(i = 0; i < len; ++i){\n        char c = s[i];\n        if(c==bad) ++offset;\n        else s[i-offset] = c;\n    }\n    s[len-offset] = '\\0';\n}\n\nvoid free_ptrs(void **ptrs, int n)\n{\n    int i;\n    for(i = 0; i < n; ++i) free(ptrs[i]);\n    free(ptrs);\n}\n\nchar *fgetl(FILE *fp)\n{\n    if(feof(fp)) return 0;\n    size_t size = 512;\n    char *line = malloc(size*sizeof(char));\n    if(!fgets(line, size, fp)){\n        free(line);\n        return 0;\n    }\n\n    size_t curr = strlen(line);\n\n    while((line[curr-1] != '\\n') && !feof(fp)){\n        if(curr == size-1){\n            size *= 2;\n            line = realloc(line, size*sizeof(char));\n            if(!line) {\n                printf(\"%ld\\n\", size);\n                malloc_error();\n            }\n        }\n        size_t readsize = size-curr;\n        if(readsize > INT_MAX) readsize = INT_MAX-1;\n        fgets(&line[curr], readsize, fp);\n        curr = strlen(line);\n    }\n    if(line[curr-1] == '\\n') line[curr-1] = '\\0';\n\n    return line;\n}\n\nint read_int(int fd)\n{\n    int n = 0;\n    int next = read(fd, &n, sizeof(int));\n    if(next <= 0) return -1;\n    return n;\n}\n\nvoid write_int(int fd, int n)\n{\n    int 
next = write(fd, &n, sizeof(int));\n    if(next <= 0) error(\"write failed\");\n}\n\nint read_all_fail(int fd, char *buffer, size_t bytes)\n{\n    size_t n = 0;\n    while(n < bytes){\n        int next = read(fd, buffer + n, bytes-n);\n        if(next <= 0) return 1;\n        n += next;\n    }\n    return 0;\n}\n\nint write_all_fail(int fd, char *buffer, size_t bytes)\n{\n    size_t n = 0;\n    while(n < bytes){\n        ssize_t next = write(fd, buffer + n, bytes-n);\n        if(next <= 0) return 1;\n        n += next;\n    }\n    return 0;\n}\n\nvoid read_all(int fd, char *buffer, size_t bytes)\n{\n    size_t n = 0;\n    while(n < bytes){\n        int next = read(fd, buffer + n, bytes-n);\n        if(next <= 0) error(\"read failed\");\n        n += next;\n    }\n}\n\nvoid write_all(int fd, char *buffer, size_t bytes)\n{\n    size_t n = 0;\n    while(n < bytes){\n        ssize_t next = write(fd, buffer + n, bytes-n);\n        if(next <= 0) error(\"write failed\");\n        n += next;\n    }\n}\n\n\nchar *copy_string(char *s)\n{\n    char *copy = malloc(strlen(s)+1);\n    strncpy(copy, s, strlen(s)+1);\n    return copy;\n}\n\nlist *parse_csv_line(char *line)\n{\n    list *l = make_list();\n    char *c, *p;\n    int in = 0;\n    for(c = line, p = line; *c != '\\0'; ++c){\n        if(*c == '\"') in = !in;\n        else if(*c == ',' && !in){\n            *c = '\\0';\n            list_insert(l, copy_string(p));\n            p = c+1;\n        }\n    }\n    list_insert(l, copy_string(p));\n    return l;\n}\n\nint count_fields(char *line)\n{\n    int count = 0;\n    int done = 0;\n    char *c;\n    for(c = line; !done; ++c){\n        done = (*c == '\\0');\n        if(*c == ',' || done) ++count;\n    }\n    return count;\n}\n\nfloat *parse_fields(char *line, int n)\n{\n    float *field = calloc(n, sizeof(float));\n    char *c, *p, *end;\n    int count = 0;\n    int done = 0;\n    for(c = line, p = line; !done; ++c){\n        done = (*c == '\\0');\n        if(*c == ',' || 
done){\n            *c = '\\0';\n            field[count] = strtod(p, &end);\n            if(p == c) field[count] = nan(\"\");\n            if(end != c && (end != c-1 || *end != '\\r')) field[count] = nan(\"\"); //DOS file formats!\n            p = c+1;\n            ++count;\n        }\n    }\n    return field;\n}\n\nfloat sum_array(float *a, int n)\n{\n    int i;\n    float sum = 0;\n    for(i = 0; i < n; ++i) sum += a[i];\n    return sum;\n}\n\nfloat mean_array(float *a, int n)\n{\n    return sum_array(a,n)/n;\n}\n\nvoid mean_arrays(float **a, int n, int els, float *avg)\n{\n    int i;\n    int j;\n    memset(avg, 0, els*sizeof(float));\n    for(j = 0; j < n; ++j){\n        for(i = 0; i < els; ++i){\n            avg[i] += a[j][i];\n        }\n    }\n    for(i = 0; i < els; ++i){\n        avg[i] /= n;\n    }\n}\n\nvoid print_statistics(float *a, int n)\n{\n    float m = mean_array(a, n);\n    float v = variance_array(a, n);\n    printf(\"MSE: %.6f, Mean: %.6f, Variance: %.6f\\n\", mse_array(a, n), m, v);\n}\n\nfloat variance_array(float *a, int n)\n{\n    int i;\n    float sum = 0;\n    float mean = mean_array(a, n);\n    for(i = 0; i < n; ++i) sum += (a[i] - mean)*(a[i]-mean);\n    float variance = sum/n;\n    return variance;\n}\n\nint constrain_int(int a, int min, int max)\n{\n    if (a < min) return min;\n    if (a > max) return max;\n    return a;\n}\n\nfloat constrain(float min, float max, float a)\n{\n    if (a < min) return min;\n    if (a > max) return max;\n    return a;\n}\n\nfloat dist_array(float *a, float *b, int n, int sub)\n{\n    int i;\n    float sum = 0;\n    for(i = 0; i < n; i += sub) sum += pow(a[i]-b[i], 2);\n    return sqrt(sum);\n}\n\nfloat mse_array(float *a, int n)\n{\n    int i;\n    float sum = 0;\n    for(i = 0; i < n; ++i) sum += a[i]*a[i];\n    return sqrt(sum/n);\n}\n\nvoid normalize_array(float *a, int n)\n{\n    int i;\n    float mu = mean_array(a,n);\n    float sigma = sqrt(variance_array(a,n));\n    for(i = 0; i < n; ++i){\n    
    a[i] = (a[i] - mu)/sigma;\n    }\n    mu = mean_array(a,n);\n    sigma = sqrt(variance_array(a,n));\n}\n\nvoid translate_array(float *a, int n, float s)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        a[i] += s;\n    }\n}\n\nfloat mag_array(float *a, int n)\n{\n    int i;\n    float sum = 0;\n    for(i = 0; i < n; ++i){\n        sum += a[i]*a[i];   \n    }\n    return sqrt(sum);\n}\n\nvoid scale_array(float *a, int n, float s)\n{\n    int i;\n    for(i = 0; i < n; ++i){\n        a[i] *= s;\n    }\n}\n\nint sample_array(float *a, int n)\n{\n    float sum = sum_array(a, n);\n    scale_array(a, n, 1./sum);\n    float r = rand_uniform(0, 1);\n    int i;\n    for(i = 0; i < n; ++i){\n        r = r - a[i];\n        if (r <= 0) return i;\n    }\n    return n-1;\n}\n\nint max_int_index(int *a, int n)\n{\n    if(n <= 0) return -1;\n    int i, max_i = 0;\n    int max = a[0];\n    for(i = 1; i < n; ++i){\n        if(a[i] > max){\n            max = a[i];\n            max_i = i;\n        }\n    }\n    return max_i;\n}\n\nint max_index(float *a, int n)\n{\n    if(n <= 0) return -1;\n    int i, max_i = 0;\n    float max = a[0];\n    for(i = 1; i < n; ++i){\n        if(a[i] > max){\n            max = a[i];\n            max_i = i;\n        }\n    }\n    return max_i;\n}\n\nint rand_int(int min, int max)\n{\n    if (max < min){\n        int s = min;\n        min = max;\n        max = s;\n    }\n    int r = (rand()%(max - min + 1)) + min;\n    return r;\n}\n\n// From http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform\nfloat rand_normal()\n{\n    static int haveSpare = 0;\n    static double rand1, rand2;\n\n    if(haveSpare)\n    {\n        haveSpare = 0;\n        return sqrt(rand1) * sin(rand2);\n    }\n\n    haveSpare = 1;\n\n    rand1 = rand() / ((double) RAND_MAX);\n    if(rand1 < 1e-100) rand1 = 1e-100;\n    rand1 = -2 * log(rand1);\n    rand2 = (rand() / ((double) RAND_MAX)) * TWO_PI;\n\n    return sqrt(rand1) * cos(rand2);\n}\n\n/*\n   float rand_normal()\n   
{\n   int n = 12;\n   int i;\n   float sum= 0;\n   for(i = 0; i < n; ++i) sum += (float)rand()/RAND_MAX;\n   return sum-n/2.;\n   }\n */\n\nsize_t rand_size_t()\n{\n    return  ((size_t)(rand()&0xff) << 56) | \n        ((size_t)(rand()&0xff) << 48) |\n        ((size_t)(rand()&0xff) << 40) |\n        ((size_t)(rand()&0xff) << 32) |\n        ((size_t)(rand()&0xff) << 24) |\n        ((size_t)(rand()&0xff) << 16) |\n        ((size_t)(rand()&0xff) << 8) |\n        ((size_t)(rand()&0xff) << 0);\n}\n\nfloat rand_uniform(float min, float max)\n{\n    if(max < min){\n        float swap = min;\n        min = max;\n        max = swap;\n    }\n    return ((float)rand()/RAND_MAX * (max - min)) + min;\n}\n\nfloat rand_scale(float s)\n{\n    float scale = rand_uniform(1, s);\n    if(rand()%2) return scale;\n    return 1./scale;\n}\n\nfloat **one_hot_encode(float *a, int n, int k)\n{\n    int i;\n    float **t = calloc(n, sizeof(float*));\n    for(i = 0; i < n; ++i){\n        t[i] = calloc(k, sizeof(float));\n        int index = (int)a[i];\n        t[i][index] = 1;\n    }\n    return t;\n}\n\n"
  },
  {
    "path": "lightnet/_darknet/utils.h",
    "content": "#ifndef UTILS_H\n#define UTILS_H\n#include <stdio.h>\n#include <time.h>\n#include \"darknet.h\"\n#include \"list.h\"\n\n#define TIME(a) \\\n    do { \\\n    double start = what_time_is_it_now(); \\\n    a; \\\n    printf(\"%s took: %f seconds\\n\", #a, what_time_is_it_now() - start); \\\n    } while (0)\n\n#define TWO_PI 6.2831853071795864769252866f\n\ndouble what_time_is_it_now();\nvoid shuffle(void *arr, size_t n, size_t size);\nvoid sorta_shuffle(void *arr, size_t n, size_t size, size_t sections);\nvoid free_ptrs(void **ptrs, int n);\nint alphanum_to_int(char c);\nchar int_to_alphanum(int i);\nint read_int(int fd);\nvoid write_int(int fd, int n);\nvoid read_all(int fd, char *buffer, size_t bytes);\nvoid write_all(int fd, char *buffer, size_t bytes);\nint read_all_fail(int fd, char *buffer, size_t bytes);\nint write_all_fail(int fd, char *buffer, size_t bytes);\nvoid find_replace(char *str, char *orig, char *rep, char *output);\nvoid malloc_error();\nvoid file_error(char *s);\nvoid strip(char *s);\nvoid strip_char(char *s, char bad);\nlist *split_str(char *s, char delim);\nchar *fgetl(FILE *fp);\nlist *parse_csv_line(char *line);\nchar *copy_string(char *s);\nint count_fields(char *line);\nfloat *parse_fields(char *line, int n);\nvoid scale_array(float *a, int n, float s);\nvoid translate_array(float *a, int n, float s);\nfloat constrain(float min, float max, float a);\nint constrain_int(int a, int min, int max);\nfloat rand_uniform(float min, float max);\nfloat rand_scale(float s);\nint rand_int(int min, int max);\nvoid mean_arrays(float **a, int n, int els, float *avg);\nfloat dist_array(float *a, float *b, int n, int sub);\nfloat **one_hot_encode(float *a, int n, int k);\nfloat sec(clock_t clocks);\nvoid print_statistics(float *a, int n);\n\n#endif\n\n"
  },
  {
    "path": "lightnet/about.py",
    "content": "__title__ = 'lightnet'\n__version__ = '0.0.13'\n__summary__ = \"Bringing pjreddie's DarkNet out of the shadows\"\n__uri__ = 'https://explosion.ai'\n__author__ = 'Explosion AI'\n__email__ = 'contact@explosion.ai'\n__license__ = 'MIT'\n"
  },
  {
    "path": "lightnet/cli.py",
    "content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport plac\nimport requests\nimport os\nimport sys\nfrom tqdm import tqdm\nfrom pathlib import Path\n\n\nmodel_paths = {\n    'yolo': 'https://pjreddie.com/media/files/yolo.weights',\n    'tiny-yolo': 'https://pjreddie.com/media/files/tiny-yolo.weights',\n}\n\n@plac.annotations(\n    model=(\"model to download, shortcut or name)\", \"positional\", None, str),\n    direct=(\"force direct download from URL\", \"flag\", \"d\", bool))\ndef download(cmd, model, direct=False):\n    \"\"\"\n    Download model from default download path. Models: tiny-yolo, yolo.\n    \"\"\"\n    if direct:\n        url = model\n        name = model.split('/')[-1]\n    else:\n        url = model_paths[model]\n        name = model + '.weights'\n    out_loc = Path(__file__).parent / 'data' / name\n    download_file(url, out_loc)\n\n\ndef download_file(url, path):\n    r = requests.get(url, stream=True)\n    total_size = int(r.headers.get('content-length', 0))\n    with Path(path).open('wb') as file_:\n        with tqdm(total=total_size//1024, unit_scale=True, unit=\"K\") as pbar:\n            for data in r.iter_content(32*1024):\n                file_.write(data)\n                pbar.update(32)\n"
  },
  {
    "path": "lightnet/data/alexnet.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=1\nheight=227\nwidth=227\nchannels=3\nmomentum=0.9\ndecay=0.0005\nmax_crop=256\n\nlearning_rate=0.01\npolicy=poly\npower=4\nmax_batches=800000\n\nangle=7\nhue = .1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nfilters=96\nsize=11\nstride=4\npad=0\nactivation=relu\n\n[maxpool]\nsize=3\nstride=2\npadding=0\n\n[convolutional]\nfilters=256\nsize=5\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=3\nstride=2\npadding=0\n\n[convolutional]\nfilters=384\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=384\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=3\nstride=2\npadding=0\n\n[connected]\noutput=4096\nactivation=relu\n\n[dropout]\nprobability=.5\n\n[connected]\noutput=4096\nactivation=relu\n\n[dropout]\nprobability=.5\n\n[connected]\noutput=1000\nactivation=linear\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/cifar.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=1\nheight=28\nwidth=28\nchannels=3\nmax_crop=32\nmin_crop=32\n\nhue=.1\nsaturation=.75\nexposure=.75\n\nlearning_rate=0.4\npolicy=poly\npower=4\nmax_batches = 5000\nmomentum=0.9\ndecay=0.0005\n\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[dropout]\nprobability=.5\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[dropout]\nprobability=.5\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[dropout]\nprobability=.5\n\n[convolutional]\nfilters=10\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\n\n"
  },
  {
    "path": "lightnet/data/cifar.test.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=1\nheight=32\nwidth=32\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.4\npolicy=poly\npower=4\nmax_batches = 50000\n\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[dropout]\nprobability=.5\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[dropout]\nprobability=.5\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[dropout]\nprobability=.5\n\n[convolutional]\nfilters=10\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[avgpool]\n\n[softmax]\ngroups=1\ntemperature=3\n\n[cost]\n\n"
  },
  {
    "path": "lightnet/data/coco.names",
    "content": "person\nbicycle\ncar\nmotorbike\naeroplane\nbus\ntrain\ntruck\nboat\ntraffic light\nfire hydrant\nstop sign\nparking meter\nbench\nbird\ncat\ndog\nhorse\nsheep\ncow\nelephant\nbear\nzebra\ngiraffe\nbackpack\numbrella\nhandbag\ntie\nsuitcase\nfrisbee\nskis\nsnowboard\nsports ball\nkite\nbaseball bat\nbaseball glove\nskateboard\nsurfboard\ntennis racket\nbottle\nwine glass\ncup\nfork\nknife\nspoon\nbowl\nbanana\napple\nsandwich\norange\nbroccoli\ncarrot\nhot dog\npizza\ndonut\ncake\nchair\nsofa\npottedplant\nbed\ndiningtable\ntoilet\ntvmonitor\nlaptop\nmouse\nremote\nkeyboard\ncell phone\nmicrowave\noven\ntoaster\nsink\nrefrigerator\nbook\nclock\nvase\nscissors\nteddy bear\nhair drier\ntoothbrush\n"
  },
  {
    "path": "lightnet/data/coco.template",
    "content": "classes= 80\ntrain  = $DATA/coco/trainvalno5k.txt\nvalid = $DATA/coco_val_5k.list\nnames = $HERE/coco.names\nbackup = $BACKUP\neval=coco\n\n"
  },
  {
    "path": "lightnet/data/darknet.cfg",
    "content": "[net]\n# Train\nbatch=1\nsubdivisions=1\n# Test\n# batch=1\n# subdivisions=1\nheight=224\nwidth=224\nchannels=3\nmomentum=0.9\ndecay=0.0005\nmax_crop=320\n\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\n[convolutional]\nbatch_normalize=1\nfilters=16\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\npadding=1\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/darknet19.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=1\nheight=224\nwidth=224\nchannels=3\nmomentum=0.9\ndecay=0.0005\nmax_crop=448\n\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=
3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/darknet19_448.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=4\nheight=448\nwidth=448\nmax_crop=512\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.001\npolicy=poly\npower=4\nmax_batches=100000\n\nangle=7\nhue = .1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=
leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/darknet9000.cfg",
    "content": "[net]\n# Training\n# batch=128\n# subdivisions=4\n# Testing\nbatch = 1\nsubdivisions = 1\nheight=448\nwidth=448\nmax_crop=512\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.001\npolicy=poly\npower=4\nmax_batches=100000\n\nangle=7\nhue=.1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normaliz
e=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=9418\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\ntree=data/9k.tree\n\n[cost]\ntype=masked\n\n"
  },
  {
    "path": "lightnet/data/densenet201.cfg",
    "content": "[net]\n# Training\n# batch=128\n# subdivisions=4\n\n# Testing\nbatch=1\nsubdivisions=1\n\nheight=256\nwidth=256\nmax_crop=448\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nburn_in=1000\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\nangle=7\nhue=.1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstrid
e=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\
nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[c
onvolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n
[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\
n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\
npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1
\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normaliz
e=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[conv
olutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[co
nvolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[route]\nlayers=-1,-3\n\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/extraction.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=1\nheight=224\nwidth=224\nmax_crop=320\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=192\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation
=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/extraction.conv.cfg",
    "content": "[net]\nbatch=1\nsubdivisions=1\nheight=256\nwidth=256\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.5\npolicy=poly\npower=6\nmax_batches=500000\n\n[convolutional]\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=192\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[avgpool]\n\n[connected]\noutput=1000\nactivation=leaky\n\n[softmax]\ngroups=1\n\n"
  },
  {
    "path": "lightnet/data/extraction22k.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=1\nheight=224\nwidth=224\nmax_crop=320\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.01\nmax_batches = 0\npolicy=steps\nsteps=444000,590000,970000\nscales=.5,.2,.1\n\n#policy=sigmoid\n#gamma=.00008\n#step=100000\n#max_batches=200000\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=192\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\na
ctivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[avgpool]\n\n[connected]\noutput=21842\nactivation=leaky\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/go.cfg",
    "content": "[net]\nbatch=512\nsubdivisions=1\nheight=19\nwidth=19\nchannels=1\nmomentum=0.9\ndecay=0.0005\n\nburn_in=1000\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=10000000\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=1\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[reorg]\nextra=1\nstride=1\n\n[softmax]\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/go.test.cfg",
    "content": "[net]\nbatch=1\nsubdivisions=1\nheight=19\nwidth=19\nchannels=1\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.01\npolicy=poly\npower=4\nmax_batches=100000\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\nbatch_normalize=1\n\n[convolutional]\nfilters=1\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[reorg]\nextra=1\nstride=1\n\n[softmax]\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/gru.cfg",
    "content": "[net]\nsubdivisions=1\nbatch = 256\ninputs=256\nmomentum=0.9\ndecay=0.0\ntime_steps=128\nlearning_rate=.002\nadam=1\n\npolicy=constant\npower=4\nmax_batches=1000000\n\n[gru]\noutput = 1024\n\n[gru]\noutput = 1024\n\n[gru]\noutput = 1024\n\n[connected]\noutput=256\nactivation=linear\n\n[softmax]\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/jnet-conv.cfg",
    "content": "[net]\nbatch=1\nsubdivisions=1\nheight=10\nwidth=10\nchannels=3\nlearning_rate=0.01\nmomentum=0.9\ndecay=0.0005\n\n[convolutional]\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nstride=2\nsize=2\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nstride=2\nsize=2\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nstride=2\nsize=2\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nstride=2\nsize=2\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nstride=2\nsize=2\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n"
  },
  {
    "path": "lightnet/data/resnet152.cfg",
    "content": "[net]\n# Training\n# batch=128\n# subdivisions=8\n\n# Testing\nbatch=1\nsubdivisions=1\n\nheight=256\nwidth=256\nmax_crop=448\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nburn_in=1000\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\nangle=7\nhue=.1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=2\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\n
activation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=lea
ky\n\n\n# Conv 4\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=2\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize
=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivati
on=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfi
lters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=lea
ky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=
1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n#Conv 
5\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=2\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n\n\n\n\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/resnet50.cfg",
    "content": "[net]\n# Training\n# batch=128\n# subdivisions=4\n\n# Testing\nbatch=1\nsubdivisions=1\n\nheight=256\nwidth=256\nmax_crop=448\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nburn_in=1000\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\nangle=7\nhue=.1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=2\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\n
activation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n\n# Conv 4\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=2\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4
\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n#Conv 5\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=2\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=2048\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[shortcut]\nfrom=-4\nactivation=leaky\n\n\n\n\n\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/rnn.cfg",
    "content": "[net]\nsubdivisions=1\ninputs=256\nbatch = 1\nmomentum=0.9\ndecay=0.001\nmax_batches = 2000\ntime_steps=1\nlearning_rate=0.1\npolicy=steps\nsteps=1000,1500\nscales=.1,.1\n\n[rnn]\nbatch_normalize=1\noutput = 1024\nhidden=1024\nactivation=leaky\n\n[rnn]\nbatch_normalize=1\noutput = 1024\nhidden=1024\nactivation=leaky\n\n[rnn]\nbatch_normalize=1\noutput = 1024\nhidden=1024\nactivation=leaky\n\n[connected]\noutput=256\nactivation=leaky\n\n[softmax]\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/rnn.train.cfg",
    "content": "[net]\nsubdivisions=1\ninputs=256\nbatch = 128\nmomentum=0.9\ndecay=0.001\nmax_batches = 2000\ntime_steps=576\nlearning_rate=0.1\npolicy=steps\nsteps=1000,1500\nscales=.1,.1\n\n[rnn]\nbatch_normalize=1\noutput = 1024\nhidden=1024\nactivation=leaky\n\n[rnn]\nbatch_normalize=1\noutput = 1024\nhidden=1024\nactivation=leaky\n\n[rnn]\nbatch_normalize=1\noutput = 1024\nhidden=1024\nactivation=leaky\n\n[connected]\noutput=256\nactivation=leaky\n\n[softmax]\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/strided.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=4\nheight=256\nwidth=256\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.01\npolicy=steps\nscales=.1,.1,.1\nsteps=200000,300000,400000\nmax_batches=800000\n\n\n[crop]\ncrop_height=224\ncrop_width=224\nflip=1\nangle=0\nsaturation=1\nexposure=1\nshift=.2\n\n[convolutional]\nfilters=64\nsize=7\nstride=2\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=192\nsize=3\nstride=2\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=3\nstride=2\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=512\nsize=3\nstride=2\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=1024\nsize=3\nstride=2\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=ramp\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=ramp\n\n[maxpool]\nsize=3\nstride=2\n\n[connected]\noutput=4096\nactivation=ramp\n\n[dropout]\nprobability=0.5\n\n[connected]\noutput=1000\nactivation=ramp\n\n[softmax]\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/t1.test.cfg",
    "content": "[net]\nbatch=1\nsubdivisions=1\nheight=224\nwidth=224\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.0005\npolicy=steps\nsteps=200,400,600,20000,30000\nscales=2.5,2,2,.1,.1\nmax_batches = 40000\n\n[convolutional]\nfilters=16\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[connected]\noutput= 1470\nactivation=linear\n\n[detection]\nclasses=20\ncoords=4\nrescore=1\nside=7\nnum=2\nsoftmax=0\nsqrt=1\njitter=.2\n\nobject_scale=1\nnoobject_scale=.5\nclass_scale=1\ncoord_scale=5\n\n"
  },
  {
    "path": "lightnet/data/tiny-yolo-voc.cfg",
    "content": "[net]\nbatch=64\nsubdivisions=8\nwidth=416\nheight=416\nchannels=3\nmomentum=0.9\ndecay=0.0005\nangle=0\nsaturation = 1.5\nexposure = 1.5\nhue=.1\n\nlearning_rate=0.001\nmax_batches = 40200\npolicy=steps\nsteps=-1,100,20000,30000\nscales=.1,10,.1,.1\n\n[convolutional]\nbatch_normalize=1\nfilters=16\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=1\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n###########\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nsize=1\nstride=1\npad=1\nfilters=125\nactivation=linear\n\n[region]\nanchors = 1.08,1.19,  3.42,4.41,  6.63,11.38,  9.42,5.11,  16.62,10.52\nbias_match=1\nclasses=20\ncoords=4\nnum=5\nsoftmax=1\njitter=.2\nrescore=1\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nabsolute=1\nthresh = .6\nrandom=1\n"
  },
  {
    "path": "lightnet/data/tiny-yolo.cfg",
    "content": "[net]\n# Training\n# batch=64\n# subdivisions=2\n# Testing\nbatch=1\nsubdivisions=1\nwidth=416\nheight=416\nchannels=3\nmomentum=0.9\ndecay=0.0005\nangle=0\nsaturation = 1.5\nexposure = 1.5\nhue=.1\n\nlearning_rate=0.001\nburn_in=1000\nmax_batches = 500200\npolicy=steps\nsteps=400000,450000\nscales=.1,.1\n\n[convolutional]\nbatch_normalize=1\nfilters=16\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=1\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n###########\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=512\nactivation=leaky\n\n[convolutional]\nsize=1\nstride=1\npad=1\nfilters=425\nactivation=linear\n\n[region]\nanchors =  0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828\nbias_match=1\nclasses=80\ncoords=4\nnum=5\nsoftmax=1\njitter=.2\nrescore=0\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nabsolute=1\nthresh = .6\nrandom=1\n"
  },
  {
    "path": "lightnet/data/tiny.cfg",
    "content": "[net]\n# Train\nbatch=128\nsubdivisions=1\n# Test\n# batch=1\n# subdivisions=1\nheight=224\nwidth=224\nchannels=3\nmomentum=0.9\ndecay=0.0005\nmax_crop=320\n\nlearning_rate=0.1\npolicy=poly\npower=4\nmax_batches=1600000\n\nangle=7\nhue=.1\nsaturation=.75\nexposure=.75\naspect=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=16\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=16\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=16\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1000\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[avgpool]\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/vgg-16.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=4\nheight=256\nwidth=256\nchannels=3\nlearning_rate=0.00001\nmomentum=0.9\ndecay=0.0005\n\n[crop]\ncrop_height=224\ncrop_width=224\nflip=1\nexposure=1\nsaturation=1\nangle=0\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[connected]\noutput=4096\nactivation=relu\n\n[dropout]\nprobability=.5\n\n[connected]\noutput=4096\nactivation=relu\n\n[dropout]\nprobability=.5\n\n[connected]\noutput=1000\nactivation=linear\n\n[softmax]\ngroups=1\n\n[cost]\ntype=sse\n\n"
  },
  {
    "path": "lightnet/data/vgg-conv.cfg",
    "content": "[net]\nbatch=1\nsubdivisions=1\nwidth=224\nheight=224\nchannels=3\nlearning_rate=0.00001\nmomentum=0.9\ndecay=0.0005\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[convolutional]\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=relu\n\n[maxpool]\nsize=2\nstride=2\n\n"
  },
  {
    "path": "lightnet/data/voc.names",
    "content": "aeroplane\nbicycle\nbird\nboat\nbottle\nbus\ncar\ncat\nchair\ncow\ndiningtable\ndog\nhorse\nmotorbike\nperson\npottedplant\nsheep\nsofa\ntrain\ntvmonitor\n"
  },
  {
    "path": "lightnet/data/writing.cfg",
    "content": "[net]\nbatch=128\nsubdivisions=2\nheight=256\nwidth=256\nchannels=3\nlearning_rate=0.00000001\nmomentum=0.9\ndecay=0.0005\nseen=0\n\n[convolutional]\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=1\nsize=3\nstride=1\npad=1\nactivation=logistic\n\n[cost]\n\n"
  },
  {
    "path": "lightnet/data/yolo-voc.2.0.cfg",
    "content": "[net]\nbatch=64\nsubdivisions=8\nheight=416\nwidth=416\nchannels=3\nmomentum=0.9\ndecay=0.0005\nangle=0\nsaturation = 1.5\nexposure = 1.5\nhue=.1\n\nlearning_rate=0.0001\nmax_batches = 45000\npolicy=steps\nsteps=100,25000,35000\nscales=10,.1,.1\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nacti
vation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n\n#######\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[route]\nlayers=-9\n\n[reorg]\nstride=2\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nsize=1\nstride=1\npad=1\nfilters=125\nactivation=linear\n\n[region]\nanchors = 1.08,1.19,  3.42,4.41,  6.63,11.38,  9.42,5.11,  16.62,10.52\nbias_match=1\nclasses=20\ncoords=4\nnum=5\nsoftmax=1\njitter=.2\nrescore=1\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nabsolute=1\nthresh = .6\nrandom=0\n"
  },
  {
    "path": "lightnet/data/yolo-voc.cfg",
    "content": "[net]\n# Testing\nbatch=1\nsubdivisions=1\n# Training\n# batch=64\n# subdivisions=8\nheight=416\nwidth=416\nchannels=3\nmomentum=0.9\ndecay=0.0005\nangle=0\nsaturation = 1.5\nexposure = 1.5\nhue=.1\n\nlearning_rate=0.001\nburn_in=1000\nmax_batches = 80200\npolicy=steps\nsteps=40000,60000\nscales=.1,.1\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbat
ch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n\n#######\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[route]\nlayers=-9\n\n[convolutional]\nbatch_normalize=1\nsize=1\nstride=1\npad=1\nfilters=64\nactivation=leaky\n\n[reorg]\nstride=2\n\n[route]\nlayers=-1,-4\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nsize=1\nstride=1\npad=1\nfilters=125\nactivation=linear\n\n\n[region]\nanchors =  1.3221, 1.73145, 3.19275, 4.00944, 5.05587, 8.09892, 9.47112, 4.84053, 11.2364, 10.0071\nbias_match=1\nclasses=20\ncoords=4\nnum=5\nsoftmax=1\njitter=.3\nrescore=1\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nabsolute=1\nthresh = .6\nrandom=1\n"
  },
  {
    "path": "lightnet/data/yolo.2.0.cfg",
    "content": "[net]\nbatch=1\nsubdivisions=1\nwidth=416\nheight=416\nchannels=3\nmomentum=0.9\ndecay=0.0005\nangle=0\nsaturation = 1.5\nexposure = 1.5\nhue=.1\n\nlearning_rate=0.001\nmax_batches = 120000\npolicy=steps\nsteps=-1,100,80000,100000\nscales=.1,10,.1,.1\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1
\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n\n#######\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[route]\nlayers=-9\n\n[reorg]\nstride=2\n\n[route]\nlayers=-1,-3\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nsize=1\nstride=1\npad=1\nfilters=425\nactivation=linear\n\n[region]\nanchors = 0.738768,0.874946,  2.42204,2.65704,  4.30971,7.04493,  10.246,4.59428,  12.6868,11.8741\nbias_match=1\nclasses=80\ncoords=4\nnum=5\nsoftmax=1\njitter=.2\nrescore=1\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nabsolute=1\nthresh = .6\nrandom=0\n"
  },
  {
    "path": "lightnet/data/yolo.cfg",
    "content": "[net]\n# Testing\n#batch=1\n#subdivisions=1\n# Training\nbatch=64\nsubdivisions=8\nwidth=608\nheight=608\nchannels=3\nmomentum=0.9\ndecay=0.0005\nangle=0\nsaturation = 1.5\nexposure = 1.5\nhue=.1\n\nlearning_rate=0.001\nburn_in=1000\nmax_batches = 500200\npolicy=steps\nsteps=400000,450000\nscales=.1,.1\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nba
tch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n\n#######\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[route]\nlayers=-9\n\n[convolutional]\nbatch_normalize=1\nsize=1\nstride=1\npad=1\nfilters=64\nactivation=leaky\n\n[reorg]\nstride=2\n\n[route]\nlayers=-1,-4\n\n[convolutional]\nbatch_normalize=1\nsize=3\nstride=1\npad=1\nfilters=1024\nactivation=leaky\n\n[convolutional]\nsize=1\nstride=1\npad=1\nfilters=425\nactivation=linear\n\n\n[region]\nanchors =  0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828\nbias_match=1\nclasses=80\ncoords=4\nnum=5\nsoftmax=1\njitter=.3\nrescore=1\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nabsolute=1\nthresh = .6\nrandom=1\n"
  },
  {
    "path": "lightnet/data/yolo9000.cfg",
    "content": "[net]\n# Testing\nbatch=1\nsubdivisions=1\n# Training\n# batch=64\n# subdivisions=8\nbatch=1\nsubdivisions=1\nheight=544\nwidth=544\nchannels=3\nmomentum=0.9\ndecay=0.0005\n\nlearning_rate=0.001\nburn_in=1000\nmax_batches = 500200\npolicy=steps\nsteps=400000,450000\nscales=.1,.1\n\nhue=.1\nsaturation=.75\nexposure=.75\n\n[convolutional]\nbatch_normalize=1\nfilters=32\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=64\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=128\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=256\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[maxpool]\nsize=2\nstride=2\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[co
nvolutional]\nbatch_normalize=1\nfilters=512\nsize=1\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nbatch_normalize=1\nfilters=1024\nsize=3\nstride=1\npad=1\nactivation=leaky\n\n[convolutional]\nfilters=28269\nsize=1\nstride=1\npad=1\nactivation=linear\n\n[region]\nanchors = 0.77871, 1.14074, 3.00525, 4.31277, 9.22725, 9.61974\nbias_match=1\nclasses=9418\ncoords=4\nnum=3\nsoftmax=1\njitter=.2\nrescore=1\n\nobject_scale=5\nnoobject_scale=1\nclass_scale=1\ncoord_scale=1\n\nthresh = .6\nabsolute=1\nrandom=1\n\ntree=data/9k.tree\nmap = data/coco9k.map\n"
  },
  {
    "path": "lightnet/lightnet.pxd",
    "content": "# Generated by https://github.com/tarruda/python-autopxd\nfrom libc.stdio cimport FILE\nfrom libc.time cimport clock_t\n\ncdef extern from \"pthread.h\":\n    ctypedef int pthread_t\n\n\ncdef extern from \"_darknet/darknet.h\" nogil:\n    ctypedef enum LAYER_TYPE:\n        CONVOLUTIONAL\n        DECONVOLUTIONAL\n        CONNECTED\n        MAXPOOL\n        SOFTMAX\n        DETECTION\n        DROPOUT\n        CROP\n        ROUTE\n        COST\n        NORMALIZATION\n        AVGPOOL\n        LOCAL\n        SHORTCUT\n        ACTIVE\n        RNN\n        GRU\n        LSTM\n        CRNN\n        BATCHNORM\n        NETWORK\n        XNOR\n        REGION\n        REORG\n        BLANK\n\n\ncdef extern from \"_darknet/darknet.h\" nogil:\n    int gpu_index\n\n    cdef struct _metadata_s:\n        int classes\n        char** names\n\n    ctypedef _metadata_s metadata\n\n    metadata get_metadata(char* file)\n\n    cdef struct _tree_s:\n        int* leaf\n        int n\n        int* parent\n        int* child\n        int* group\n        char** name\n        int groups\n        int* group_size\n        int* group_offset\n\n    ctypedef _tree_s tree\n\n    cdef enum ACTIVATION:\n        LOGISTIC\n        RELU\n        RELIE\n        LINEAR\n        RAMP\n        TANH\n        PLSE\n        LEAKY\n        ELU\n        LOGGY\n        STAIR\n        HARDTAN\n        LHTAN\n\n\n    cdef enum BINARY_ACTIVATION:\n        MULT\n        ADD\n        SUB\n        DIV\n\n\n    cdef enum COST_TYPE:\n        SSE\n        MASKED\n        L1\n        SEG\n        SMOOTH\n\n    cdef struct _update_args_s:\n        int batch\n        float learning_rate\n        float momentum\n        float decay\n        int adam\n        float B1\n        float B2\n        float eps\n        int t\n\n    ctypedef _update_args_s update_args\n\n    #ctypedef network network\n\n    cdef enum _learning_rate_policy_e:\n        CONSTANT\n        STEP\n        EXP\n        POLY\n        STEPS\n       
 SIG\n        RANDOM\n\n    ctypedef _learning_rate_policy_e learning_rate_policy\n\n    #ctypedef layer layer\n\n    cdef struct network:\n        int n\n        int batch\n        size_t* seen\n        int* t\n        float epoch\n        int subdivisions\n        layer* layers\n        float* output\n        learning_rate_policy policy\n        float learning_rate\n        float momentum\n        float decay\n        float gamma\n        float scale\n        float power\n        int time_steps\n        int step\n        int max_batches\n        float* scales\n        int* steps\n        int num_steps\n        int burn_in\n        int adam\n        float B1\n        float B2\n        float eps\n        int inputs\n        int outputs\n        int truths\n        int notruth\n        int h\n        int w\n        int c\n        int max_crop\n        int min_crop\n        float max_ratio\n        float min_ratio\n        int center\n        float angle\n        float aspect\n        float exposure\n        float saturation\n        float hue\n        int random\n        int gpu_index\n        tree* hierarchy\n        float* input\n        float* truth\n        float* delta\n        float* workspace\n        int train\n        int index\n        float* cost\n       \n    ctypedef void (*_layer_forward_ft)(layer, network)\n\n    ctypedef void (*_layer_backward_ft)(layer, network)\n\n    ctypedef void (*_layer_update_ft)(layer, update_args)\n\n    ctypedef void (*_layer_forward_gpu_ft)(layer, network)\n\n    ctypedef void (*_layer_backward_gpu_ft)(layer, network)\n\n    ctypedef void (*_layer_update_gpu_ft)(layer, update_args)\n\n    cdef struct layer:\n        LAYER_TYPE type\n        ACTIVATION activation\n        COST_TYPE cost_type\n        _layer_forward_ft forward\n        _layer_backward_ft backward\n        _layer_update_ft update\n        _layer_forward_gpu_ft forward_gpu\n        _layer_backward_gpu_ft backward_gpu\n        _layer_update_gpu_ft update_gpu\n  
      int batch_normalize\n        int shortcut\n        int batch\n        int forced\n        int flipped\n        int inputs\n        int outputs\n        int nweights\n        int nbiases\n        int extra\n        int truths\n        int h\n        int w\n        int c\n        int out_h\n        int out_w\n        int out_c\n        int n\n        int max_boxes\n        int groups\n        int size\n        int side\n        int stride\n        int reverse\n        int flatten\n        int spatial\n        int pad\n        int sqrt\n        int flip\n        int index\n        int binary\n        int xnor\n        int steps\n        int hidden\n        int truth\n        float smooth\n        float dot\n        float angle\n        float jitter\n        float saturation\n        float exposure\n        float shift\n        float ratio\n        float learning_rate_scale\n        int softmax\n        int classes\n        int coords\n        int background\n        int rescore\n        int objectness\n        int does_cost\n        int joint\n        int noadjust\n        int reorg\n        int log\n        int tanh\n        float alpha\n        float beta\n        float kappa\n        float coord_scale\n        float object_scale\n        float noobject_scale\n        float mask_scale\n        float class_scale\n        int bias_match\n        int random\n        float thresh\n        int classfix\n        int absolute\n        int onlyforward\n        int stopbackward\n        int dontload\n        int dontloadscales\n        float temperature\n        float probability\n        float scale\n        char* cweights\n        int* indexes\n        int* input_layers\n        int* input_sizes\n        int* map\n        float* rand\n        float* cost\n        float* state\n        float* prev_state\n        float* forgot_state\n        float* forgot_delta\n        float* state_delta\n        float* combine_cpu\n        float* combine_delta_cpu\n        float* 
concat\n        float* concat_delta\n        float* binary_weights\n        float* biases\n        float* bias_updates\n        float* scales\n        float* scale_updates\n        float* weights\n        float* weight_updates\n        float* delta\n        float* output\n        float* squared\n        float* norms\n        float* spatial_mean\n        float* mean\n        float* variance\n        float* mean_delta\n        float* variance_delta\n        float* rolling_mean\n        float* rolling_variance\n        float* x\n        float* x_norm\n        float* m\n        float* v\n        float* bias_m\n        float* bias_v\n        float* scale_m\n        float* scale_v\n        float* z_cpu\n        float* r_cpu\n        float* h_cpu\n        float* prev_state_cpu\n        float* temp_cpu\n        float* temp2_cpu\n        float* temp3_cpu\n        float* dh_cpu\n        float* hh_cpu\n        float* prev_cell_cpu\n        float* cell_cpu\n        float* f_cpu\n        float* i_cpu\n        float* g_cpu\n        float* o_cpu\n        float* c_cpu\n        float* dc_cpu\n        float* binary_input\n        layer* input_layer\n        layer* self_layer\n        layer* output_layer\n        layer* reset_layer\n        layer* update_layer\n        layer* state_layer\n        layer* input_gate_layer\n        layer* state_gate_layer\n        layer* input_save_layer\n        layer* state_save_layer\n        layer* input_state_layer\n        layer* state_state_layer\n        layer* input_z_layer\n        layer* state_z_layer\n        layer* input_r_layer\n        layer* state_r_layer\n        layer* input_h_layer\n        layer* state_h_layer\n        layer* wz\n        layer* uz\n        layer* wr\n        layer* ur\n        layer* wh\n        layer* uh\n        layer* uo\n        layer* wo\n        layer* uf\n        layer* wf\n        layer* ui\n        layer* wi\n        layer* ug\n        layer* wg\n        tree* softmax_tree\n        size_t workspace_size\n\n  
  void free_layer(layer)\n\n\n    #ctypedef network network\n\n    cdef struct _augment_args_s:\n        int w\n        int h\n        float scale\n        float rad\n        float dx\n        float dy\n        float aspect\n\n    ctypedef _augment_args_s augment_args\n\n    cdef struct _image_s:\n        int w\n        int h\n        int c\n        float* data\n\n    ctypedef _image_s image\n\n    cdef struct _box_s:\n        float x\n        float y\n        float w\n        float h\n\n    ctypedef _box_s box\n\n    cdef struct matrix:\n        int rows\n        int cols\n        float** vals\n\n    ctypedef matrix matrix\n\n    ctypedef struct data:\n        int w\n        int h\n        matrix X\n        matrix y\n        int shallow\n        int* num_boxes\n        box** boxes\n\n    cdef enum _data_type_e:\n        CLASSIFICATION_DATA\n        DETECTION_DATA\n        CAPTCHA_DATA\n        REGION_DATA\n        IMAGE_DATA\n        COMPARE_DATA\n        WRITING_DATA\n        SWAG_DATA\n        TAG_DATA\n        OLD_CLASSIFICATION_DATA\n        STUDY_DATA\n        DET_DATA\n        SUPER_DATA\n        LETTERBOX_DATA\n        REGRESSION_DATA\n        SEGMENTATION_DATA\n        INSTANCE_DATA\n\n    ctypedef _data_type_e data_type\n\n    cdef struct load_args:\n        int threads\n        char** paths\n        char* path\n        int n\n        int m\n        char** labels\n        int h\n        int w\n        int out_w\n        int out_h\n        int nh\n        int nw\n        int num_boxes\n        int min\n        int max\n        int size\n        int classes\n        int background\n        int scale\n        int center\n        int coords\n        float jitter\n        float angle\n        float aspect\n        float saturation\n        float exposure\n        float hue\n        data* d\n        image* im\n        image* resized\n        data_type type\n        tree* hierarchy\n\n    ctypedef load_args load_args\n\n    cdef struct _box_label_s:\n        int 
id\n        float x\n        float y\n        float w\n        float h\n        float left\n        float right\n        float top\n        float bottom\n\n    ctypedef _box_label_s box_label\n\n    network* load_network(char* cfg, char* weights, int clear)\n\n    load_args get_base_args(network* net)\n\n    void free_data(data d)\n\n    cdef struct node:\n        void* val\n        node* next\n        node* prev\n\n    ctypedef node node\n\n    cdef struct list:\n        int size\n        node* front\n        node* back\n\n    ctypedef list list\n\n    pthread_t load_data(load_args args)\n\n\n    data load_data_region(int n, char **paths, int m, int w, int h, int size,\n                          int classes, float jitter, float hue,\n                          float saturation, float exposure)\n\n    data load_data_detection(int n, char **paths, int m, int w, int h, int size,\n                          int classes, float jitter, float hue,\n                          float saturation, float exposure)\n\n    list* read_data_cfg(char* filename)\n\n    list* read_cfg(char* filename)\n\n    unsigned char* read_file(char* filename)\n\n    data resize_data(data orig, int w, int h)\n\n    data* tile_data(data orig, int divs, int size)\n\n    data select_data(data* orig, int* inds)\n\n    float **make_probs(network *net)\n\n    void forward_network(network* net)\n\n    void backward_network(network* net)\n\n    void update_network(network* net)\n\n    void axpy_cpu(int N, float ALPHA, float* X, int INCX, float* Y, int INCY)\n\n    void copy_cpu(int N, float* X, int INCX, float* Y, int INCY)\n\n    void scal_cpu(int N, float ALPHA, float* X, int INCX)\n\n    void normalize_cpu(float* x, float* mean, float* variance, int batch, int filters, int spatial)\n\n    void softmax(float* input, int n, float temp, int stride, float* output)\n\n    int best_3d_shift_r(image a, image b, int min, int max)\n\n    void save_image_png(image im, char* name)\n\n    void get_next_batch(data d, 
int n, int offset, float* X, float* y)\n\n    void grayscale_image_3c(image im)\n\n    void normalize_image(image p)\n\n    void matrix_to_csv(matrix m)\n\n    float train_network_sgd(network* net, data d, int n)\n\n    void rgbgr_image(image im)\n\n    data copy_data(data d)\n\n    data concat_data(data d1, data d2)\n\n    data load_cifar10_data(char* filename)\n\n    float matrix_topk_accuracy(matrix truth, matrix guess, int k)\n\n    void matrix_add_matrix(matrix from_, matrix to)\n\n    void scale_matrix(matrix m, float scale)\n\n    matrix csv_to_matrix(char* filename)\n\n    float* network_accuracies(network* net, data d, int n)\n\n    float train_network_datum(network* net)\n\n    image make_random_image(int w, int h, int c)\n\n    void denormalize_connected_layer(layer l)\n\n    void denormalize_convolutional_layer(layer l)\n\n    void statistics_connected_layer(layer l)\n\n    void rescale_weights(layer l, float scale, float trans)\n\n    void rgbgr_weights(layer l)\n\n    image* get_weights(layer l)\n\n    void demo(char* cfgfile, char* weightfile, float thresh, int cam_index, char* filename, char** names, int classes, int frame_skip, char* prefix, int avg, float hier_thresh, int w, int h, int fps, int fullscreen)\n\n    void get_detection_boxes(layer l, int w, int h, float thresh, float** probs, box* boxes, int only_objectness)\n\n    char* option_find_str(list* l, char* key, char* def_)\n\n    int option_find_int(list* l, char* key, int def_)\n\n    network* parse_network_cfg(char* filename)\n\n    void save_weights(network* net, char* filename)\n\n    void load_weights(network* net, char* filename)\n\n    void save_weights_upto(network* net, char* filename, int cutoff)\n\n    void load_weights_upto(network* net, char* filename, int start, int cutoff)\n\n    void zero_objectness(layer l)\n\n    void get_region_boxes(layer l, int w, int h, int netw, int neth, float thresh, float** probs, box* boxes, float** masks, int only_objectness, int* map, float 
tree_thresh, int relative)\n\n    void free_network(network* net)\n\n    void set_batch_network(network* net, int b)\n\n    void set_temp_network(network* net, float t)\n\n    image load_image(char* filename, int w, int h, int c)\n\n    image load_image_color(char* filename, int w, int h)\n\n    image make_image(int w, int h, int c)\n\n    image resize_image(image im, int w, int h)\n\n    image letterbox_image(image im, int w, int h)\n\n    image crop_image(image im, int dx, int dy, int w, int h)\n\n    image resize_min(image im, int min)\n\n    image resize_max(image im, int max)\n\n    image threshold_image(image im, float thresh)\n\n    image mask_to_rgb(image mask)\n\n    int resize_network(network* net, int w, int h)\n\n    void free_matrix(matrix m)\n\n    void test_resize(char* filename)\n\n    void save_image(image p, char* name)\n\n    void show_image(image p, char* name)\n\n    image copy_image(image p)\n\n    void draw_box_width(image a, int x1, int y1, int x2, int y2, int w, float r, float g, float b)\n\n    float get_current_rate(network* net)\n\n    void composite_3d(char* f1, char* f2, char* out, int delta)\n\n    data load_data_old(char** paths, int n, int m, char** labels, int k, int w, int h)\n\n    size_t get_current_batch(network* net)\n\n    void constrain_image(image im)\n\n    image get_network_image_layer(network* net, int i)\n\n    layer get_network_output_layer(network* net)\n\n    void top_predictions(network* net, int n, int* index)\n\n    void flip_image(image a)\n\n    image float_to_image(int w, int h, int c, float* data)\n\n    void ghost_image(image source, image dest, int dx, int dy)\n\n    float network_accuracy(network* net, data d)\n\n    void random_distort_image(image im, float hue, float saturation, float exposure)\n\n    void fill_image(image m, float s)\n\n    image grayscale_image(image im)\n\n    void rotate_image_cw(image im, int times)\n\n    double what_time_is_it_now()\n\n    image rotate_image(image m, float rad)\n\n 
   void visualize_network(network* net)\n\n    float box_iou(box a, box b)\n\n    void do_nms(box* boxes, float** probs, int total, int classes, float thresh)\n\n    data load_all_cifar10()\n\n    box_label* read_boxes(char* filename, int* n)\n\n    box float_to_box(float* f, int stride)\n\n    void draw_detections(image im, int num, float thresh, box* boxes, float** probs, float** masks, char** names, image** alphabet, int classes)\n\n    matrix network_predict_data(network* net, data test)\n\n    image** load_alphabet()\n\n    image get_network_image(network* net)\n\n    float* network_predict(network* net, float* input)\n\n    int network_width(network* net)\n\n    int network_height(network* net)\n\n    float* network_predict_image(network* net, image im)\n\n    void network_detect(network* net, image im, float thresh, float hier_thresh, float nms, box* boxes, float** probs)\n\n    int num_boxes(network* net)\n\n    box* make_boxes(network* net)\n\n    void reset_network_state(network* net, int b)\n\n    char** get_labels(char* filename)\n\n    void do_nms_sort(box* boxes, float** probs, int total, int classes, float thresh)\n\n    void do_nms_obj(box* boxes, float** probs, int total, int classes, float thresh)\n\n    matrix make_matrix(int rows, int cols)\n\n    void free_image(image m)\n\n    float train_network(network* net, data d)\n\n    pthread_t load_data_in_thread(load_args args)\n\n    void load_data_blocking(load_args args)\n\n    list* get_paths(char* filename)\n\n    void hierarchy_predictions(float* predictions, int n, tree* hier, int only_leaves, int stride)\n\n    void change_leaves(tree* t, char* leaf_list)\n\n    int find_int_arg(int argc, char** argv, char* arg, int def_)\n\n    float find_float_arg(int argc, char** argv, char* arg, float def_)\n\n    int find_arg(int argc, char* argv[1], char* arg)\n\n    char* find_char_arg(int argc, char** argv, char* arg, char* def_)\n\n    char* basecfg(char* cfgfile)\n\n    void find_replace(char* str, 
char* orig, char* rep, char* output)\n\n    void free_ptrs(void** ptrs, int n)\n\n    char* fgetl(FILE* fp)\n\n    void strip(char* s)\n\n    float sec(clock_t clocks)\n\n    void** list_to_array(list* l)\n\n    void top_k(float* a, int n, int k, int* index)\n\n    int* read_map(char* filename)\n\n    void error(char* s)\n\n    int max_index(float* a, int n)\n\n    int max_int_index(int* a, int n)\n\n    int sample_array(float* a, int n)\n\n    int* random_index_order(int min, int max)\n\n    void free_list(list* l)\n\n    float mse_array(float* a, int n)\n\n    float variance_array(float* a, int n)\n\n    float mag_array(float* a, int n)\n\n    float mean_array(float* a, int n)\n\n    float sum_array(float* a, int n)\n\n    void normalize_array(float* a, int n)\n\n    int* read_intlist(char* s, int* n, int d)\n\n    size_t rand_size_t()\n\n    float rand_normal()\n"
  },
  {
    "path": "lightnet/lightnet.pyx",
    "content": "# cython: infer_types=True\n# cython: cdivision=True\nfrom __future__ import print_function\nfrom libc.stdlib cimport calloc, free, rand\nfrom libc.string cimport memcpy, memset\ncimport numpy as np\nimport shutil\nimport tempfile\nimport numpy\nfrom pathlib import Path\nimport json\nimport msgpack\n\nfrom .util import make_temp_dir\n\n\ntry:\n    unicode\nexcept NameError:\n    unicode = str\n\ncdef extern from \"_darknet/utils.h\" nogil:\n    float rand_uniform(float min, float max)\n\ncdef extern from \"_darknet/image.h\" nogil:\n    void place_image(image im, int w, int h, int dx, int dy, image canvas)\n\ncdef extern from \"_darknet/data.h\" nogil:\n    void correct_boxes(box_label *boxes, int n, float dx, float dy, float sx, float sy, int flip)\n    void randomize_boxes(box_label *b, int n)\n\n\ncdef extern from \"_darknet/stb_image.h\" nogil:\n    ctypedef unsigned char stbi_uc\n    stbi_uc *stbi_load_from_memory(const stbi_uc* raw, int length, int *x,\n                                   int *y, int *comp, int req_comp)\n\ncdef class Image:\n    cdef image c\n\n    def __init__(self, float[:, :, ::1] data): \n        self.c = make_image(data.shape[0], data.shape[1], data.shape[2])\n        memcpy(self.c.data, &data[0,0,0], data.size * sizeof(float))\n\n    @property\n    def data(self):\n        cdef np.ndarray array = numpy.zeros((self.c.w, self.c.h, self.c.c),\n                                             dtype='f')\n        memcpy(<float*>array.data, self.c.data, self.c.h*self.c.w*self.c.c*sizeof(float))\n        return array\n\n    @property\n    def width(self):\n        return self.c.w\n\n    @property\n    def height(self):\n        return self.c.h\n\n    @classmethod\n    def from_bytes(cls, bytes raw, int channels=3):\n        cdef stbi_uc* img_data\n        cdef int w, h, c\n        img_data = stbi_load_from_memory(<stbi_uc*>raw, len(raw),\n                        &w, &h, &c, channels)\n        if channels:\n            c = 
channels\n        cdef Image self = Image.blank(w, h, c)\n        cdef int k, j, i, dst_index, src_index\n        for k in range(c):\n            for j in range(h):\n                for i in range(w):\n                    dst_index = i + w*j + w*h*k\n                    src_index = k + c*i + c*w*j\n                    self.c.data[dst_index] = <float>img_data[src_index] / 255.\n        # stb_image allocates with malloc by default, so plain free() releases it\n        free(<void*>img_data)\n        return self\n\n    @classmethod\n    def random(cls, int w, int h, int c):\n        cdef Image self = Image.__new__(cls)\n        self.c = make_random_image(w, h, c)\n        return self\n\n    @classmethod\n    def blank(cls, int w, int h, int c):\n        cdef Image self = Image.__new__(cls)\n        self.c = make_image(w, h, c)\n        return self\n\n    @classmethod\n    def load(cls, path, int w, int h, int c):\n        path = Path(path)\n        if not path.exists():\n            raise IOError(\"Image not found: %s\" % path)\n        cdef Image self = Image.__new__(cls)\n        cdef bytes loc = unicode(path.resolve()).encode('utf8')\n        self.c = load_image(<char*>loc, w, h, c)\n        return self\n    \n    @classmethod\n    def load_color(cls, path, int w=0, int h=0):\n        path = Path(path)\n        if not path.exists():\n            raise IOError(\"Color image not found: %s\" % path)\n        cdef Image self = Image.__new__(cls)\n        cdef bytes loc = unicode(path.resolve()).encode('utf8')\n        self.c = load_image_color(<char*>loc, w, h)\n        return self\n\n    def __dealloc__(self):\n        free_image(self.c)\n\n\ncdef class Boxes:\n    cdef box* c\n    cdef int n\n\n    def __init__(self, int n):\n        self.c = <box*>calloc(n, sizeof(box))\n        self.n = n\n\n    def __dealloc__(self):\n        if self.c != NULL:\n            free(self.c)\n        self.c = NULL\n\n\ndef get_relative_box(size, box):\n    dw = 1./(size[0])\n    dh = 1./(size[1])\n    x = (box[0] + box[1])/2.0 - 1\n    y = (box[2] + box[3])/2.0 - 1\n    w = box[1] - box[0]\n    h = box[3] - box[2]\n    x = x*dw\n    w = w*dw\n    y = y*dh\n    h = h*dh\n    return (x,y,w,h)\n\n\ndef get_absolute_box(size, 
box):\n    raise NotImplementedError\n\n\ncdef class BoxLabels:\n    cdef box_label* c\n    cdef int n\n\n    def __init__(self, int[::1] ids, float[:, ::1] data):\n        assert ids.shape[0] == data.shape[0]\n        assert data.shape[1] == 4\n        self.c = <box_label*>calloc(ids.shape[0], sizeof(box_label))\n        self.n = ids.shape[0]\n        for i in range(ids.shape[0]):\n            self.c[i].id = ids[i]\n        for i in range(data.shape[0]):\n            # Columns are (x, y, w, h), matching from_results below\n            self.c[i].x = data[i, 0]\n            self.c[i].y = data[i, 1]\n            self.c[i].w = data[i, 2]\n            self.c[i].h = data[i, 3]\n            self.c[i].left = self.c[i].x - self.c[i].w/2\n            self.c[i].right = self.c[i].x + self.c[i].w/2\n            self.c[i].top = self.c[i].y - self.c[i].h/2\n            self.c[i].bottom = self.c[i].y + self.c[i].h/2\n\n    def __len__(self):\n        return self.n\n\n    @classmethod\n    def from_results(cls, results):\n        ids = numpy.zeros((len(results),), dtype='i')\n        boxes = numpy.zeros((len(results), 4), dtype='f')\n        for j in range(len(results)):\n            ids[j] = results[j][0]\n            boxes[j, 0] = results[j][3][0]\n            boxes[j, 1] = results[j][3][1]\n            boxes[j, 2] = results[j][3][2]\n            boxes[j, 3] = results[j][3][3]\n        return cls(ids, boxes)\n\n    @classmethod\n    def load(cls, path):\n        cdef bytes loc = unicode(Path(path).resolve()).encode('utf8')\n        cdef BoxLabels self = BoxLabels.__new__(cls)\n        self.c = read_boxes(loc, &self.n)\n        return self\n\n    def __dealloc__(self):\n        if self.c != NULL:\n            free(self.c)\n        self.c = NULL\n\n    @property\n    def x(self):\n        return [self.c[i].x for i in range(self.n)]\n\n    @property\n    def y(self):\n        return [self.c[i].y for i in range(self.n)]\n\n    @property\n    def h(self):\n        return [self.c[i].h for i in range(self.n)]\n\n    @property\n    def w(self):\n        
return [self.c[i].w for i in range(self.n)]\n\n    @property\n    def left(self):\n        return [self.c[i].left for i in range(self.n)]\n\n    @property\n    def right(self):\n        return [self.c[i].right for i in range(self.n)]\n\n    @property\n    def top(self):\n        return [self.c[i].top for i in range(self.n)]\n\n    @property\n    def bottom(self):\n        return [self.c[i].bottom for i in range(self.n)]\n\n    def has_box(self, target):\n        cdef box target_box\n        target_box.x = target['x']\n        target_box.y = target['y']\n        target_box.w = target['w']\n        target_box.h = target['h']\n        cdef int target_id = target['id']\n        cdef box_label lbox\n        cdef box gbox\n        for lbox in self.c[:self.n]:\n            if lbox.id == target_id:\n                gbox.x = lbox.x\n                gbox.y = lbox.y\n                gbox.w = lbox.w\n                gbox.h = lbox.h\n                iou = box_iou(gbox, target_box)\n                if iou >= 0.5:\n                    return 1\n        else:\n            return 0\n\n    def intersection(self, BoxLabels other):\n        output = []\n        cdef box_label b\n        for b in self.c[:self.n]:\n            target = {'id': b.id, 'y': b.y, 'x': b.x, 'w': b.w, 'h': b.h}\n            if other.has_box(target):\n                output.append(target)\n        return output\n    \n    def difference(self, BoxLabels other):\n        output = []\n        cdef box_label b\n        for b in self.c[:self.n]:\n            target = {'id': b.id, 'y': b.y, 'x': b.x, 'w': b.w, 'h': b.h}\n            if not other.has_box(target):\n                output.append(target)\n        return output\n\n\ncdef class DetectionData:\n    cdef data c\n\n    def __init__(self, images, labels, int w, int h, int max_boxes, int classes,\n            float jitter=0.2, float hue=0.1, float saturation=1.5, float exposure=1.5):\n        self.c.shallow = 0\n        cdef int n = len(images)\n        
self.c.X.rows = n\n        self.c.X.vals = <float**>calloc(sizeof(float*), self.c.X.rows)\n        self.c.X.cols = h*w*3\n        self.c.y = make_matrix(n, 5 * max_boxes)\n\n        cdef Image py_image\n        cdef BoxLabels py_boxes\n        cdef float dw, dh, nw, nh, dx, dy\n        cdef float* truth\n        cdef int index = 0\n        cdef float pleft, pright, pwidth, pheight\n        for i, (py_image, py_boxes) in enumerate(zip(images, labels)):\n            self.c.X.vals[i] = _load_data_detection(self.c.y.vals[i],\n                                    py_image.c, py_boxes.c, py_boxes.n, max_boxes,\n                                    classes, jitter, hue, saturation, exposure)\n    \n    def __dealloc__(self):\n        free_data(self.c)\n\n    @property\n    def Xs(self):\n        output = []\n        for i in range(self.c.X.rows):\n            vals = [self.c.X.vals[i][j] for j in range(self.c.X.cols)]\n            output.append(vals)\n        return output\n    \n    @property\n    def ys(self):\n        output = []\n        for i in range(self.c.y.rows):\n            vals = [self.c.y.vals[i][j] for j in range(self.c.y.cols)]\n            output.append(vals)\n        return output\n\n    @property\n    def X_shape(self):\n        return (self.c.X.rows, self.c.X.cols)\n    \n    @property\n    def y_shape(self):\n        return (self.c.y.rows, self.c.y.cols)\n\n\ncdef float* _load_data_detection(float* truths, image orig, box_label* boxes,\n                                 int count, int max_boxes, int num_classes,\n                                 float jitter, float hue, float saturation,\n                                 float exposure) nogil:\n    cdef image sized = make_image(orig.w, orig.h, orig.c)\n    fill_image(sized, .5);\n\n    cdef float dw = jitter * orig.w\n    cdef float dh = jitter * orig.h\n\n    cdef float new_ar = (orig.w + rand_uniform(-dw, dw)) / (orig.h + rand_uniform(-dh, dh));\n    cdef float scale = rand_uniform(.25, 2)\n\n    cdef 
float nw, nh\n\n    if new_ar < 1:\n        nh = scale * orig.h\n        nw = nh * new_ar\n    else:\n        nw = scale * orig.w\n        nh = nw / new_ar\n\n    cdef float dx = rand_uniform(0, orig.w - nw)\n    cdef float dy = rand_uniform(0, orig.h - nh)\n\n    place_image(orig, <int>nw, <int>nh, <int>dx, <int>dy, sized)\n\n    random_distort_image(sized, hue, saturation, exposure)\n\n    cdef int flip = rand()%2\n    if flip:\n        flip_image(sized)\n\n    _fill_truth_detection(boxes, count, max_boxes, truths, num_classes, flip,\n        -dx/orig.w, -dy/orig.h, nw/orig.w, nh/orig.h)\n    return sized.data\n\ncdef void _fill_truth_detection(box_label* boxes, int count, int num_boxes,\n                                float *truth, int classes, int flip,\n                                float dx, float dy, float sx, float sy) nogil:\n    randomize_boxes(boxes, count)\n    correct_boxes(boxes, count, dx, dy, sx, sy, flip)\n    if count > num_boxes:\n        count = num_boxes\n    cdef float x, y, w, h\n    cdef int id, i\n\n    for i in range(count):\n        x =  boxes[i].x\n        y =  boxes[i].y\n        w =  boxes[i].w\n        h =  boxes[i].h\n        id = boxes[i].id\n\n        if ((w < .001 or h < .001)):\n            continue\n\n        truth[i*5+0] = x\n        truth[i*5+1] = y\n        truth[i*5+2] = w\n        truth[i*5+3] = h\n        truth[i*5+4] = id\n\n\ncdef class Metadata:\n    cdef metadata c\n    cdef public object backup_dir\n\n    def __init__(self, template_path):\n        template_path = Path(template_path)\n        if not template_path.exists():\n            raise IOError(\"Metadata template not found: %s\" % template_path)\n        with template_path.open('r', encoding='utf8') as file_:\n            text = file_.read()\n        data_dir = Path(__file__).parent / 'data'\n        self.backup_dir = tempfile.mkdtemp()\n        text = text.replace('$DATA', str(data_dir.resolve()))\n        text = text.replace('$HERE', 
str(data_dir.resolve()))\n        text = text.replace('$BACKUP', self.backup_dir)\n        out_loc = Path(str(template_path).replace('.template', '.data'))\n        with out_loc.open('w', encoding='utf8') as file_:\n            file_.write(text)\n        cdef bytes loc = unicode(out_loc.resolve()).encode('utf8')\n        self.c = get_metadata(<char*>loc)\n\n    def __dealloc__(self):\n        if self.c.names != NULL:\n            free_ptrs(<void**>self.c.names, self.c.classes)\n            self.c.names = NULL\n            shutil.rmtree(self.backup_dir)\n\n\ncdef class Network:\n    cdef network* c\n    cdef public object cfg\n    cdef readonly object names\n\n    def __init__(self):\n        self.c = NULL\n\n    def __dealloc__(self):\n        if self.c != NULL:\n            free_network(self.c)\n            self.c = NULL\n\n    @property\n    def num_classes(self):\n        return self.c.layers[self.c.n-1].classes\n    \n    @property\n    def num_boxes(self):\n        return num_boxes(self.c)\n    \n    @property\n    def max_boxes(self):\n        return self.c.layers[self.c.n - 1].max_boxes\n\n    @property\n    def side(self):\n        return self.c.layers[self.c.n - 1].side\n\n    @property\n    def width(self):\n        return network_width(self.c)\n    \n    @property\n    def height(self):\n        return network_height(self.c)\n\n    @classmethod\n    def load(cls, name, *, path=None, names=None, int clear=0):\n        if path is None:\n            path = Path(__file__).parent / 'data'\n        path = Path(path)\n        if not path.exists():\n            raise IOError(\"Data path not found: %s\" % path)\n        cfg_path = path / '{name}.cfg'.format(name=name)\n        if name == 'yolo.2.0':\n            weights_path = path / 'yolo.weights'\n        else:\n            weights_path = path / '{name}.weights'.format(name=name)\n        if not cfg_path.exists():\n            raise IOError(\"Config file not found: %s\" % cfg_path)\n        if not 
weights_path.exists():\n            raise IOError(\"Weights file not found: %s\" % weights_path)\n        cdef Network self = Network.__new__(cls)\n        cdef bytes cfg = unicode(cfg_path.resolve()).encode('utf8')\n        cdef bytes weights = unicode(weights_path.resolve()).encode('utf8')\n        self.cfg = cfg\n        self.c = load_network(<char*>cfg, <char*>weights, clear)\n        # TODO: Fix this hard-coding...\n        with (path / 'coco.names').open('r', encoding='utf8') as file_:\n            self.names = file_.read().strip().split('\\n')\n        return self\n\n    def __call__(self, Image image, \n            float thresh=.5, float hier_thresh=.5, float nms=.45):\n        return self._detect(image, thresh, hier_thresh, nms)\n\n    def update(self, images, labels):\n        assert len(images) != 0\n        assert len(labels) == len(images)\n        cdef DetectionData data\n        data = DetectionData(images, labels,\n                self.width, self.height, self.max_boxes, self.num_classes)\n        cdef float loss = 0.\n        cdef int prev_batch_size = self.c.batch\n        set_batch_network(self.c, data.c.X.rows)\n        resize_network(self.c, self.c.w, self.c.h)\n        self.c.train = 1\n        get_next_batch(data.c, self.c.batch, 0, self.c.input, self.c.truth)\n        forward_network(self.c)\n        backward_network(self.c)\n        loss += self.c.cost[0]\n        update_network(self.c)\n        self.c.train = 0\n        set_batch_network(self.c, prev_batch_size)\n        resize_network(self.c, self.c.w, self.c.h)\n        return loss\n\n    def evaluate(self, images, labels,\n            float thresh=.5, float hier_thresh=.5, float nms=.45):\n        cdef Image image\n        scores = {'tp': 0., 'fn': 0., 'fp': 0.}\n        for i, image in enumerate(images):\n            raw_guesses = self._detect(image, thresh=thresh,\n                                       hier_thresh=hier_thresh, nms=nms)\n            guesses = 
BoxLabels.from_results(raw_guesses)\n            scores['tp'] += len(labels[i].intersection(guesses))\n            scores['fn'] += len(labels[i].difference(guesses))\n            scores['fp'] += len(guesses.difference(labels[i]))\n        scores['p'] = scores['tp'] / (scores['tp'] + scores['fp'] + 1e-12)\n        scores['r'] = scores['tp'] / (scores['tp'] + scores['fn'] + 1e-12)\n        p = scores['p']\n        r = scores['r']\n        scores['f'] =  2 * ((p * r) / (p + r + 1e-12))\n        return scores\n\n    def _detect(self, Image image,\n            float thresh=.5, float hier_thresh=.5, float nms=.45):\n        num = num_boxes(self.c)\n        cdef Boxes boxes = Boxes(num)\n        cdef float** probs = make_probs(self.c)\n        network_detect(self.c, image.c, thresh, hier_thresh, nms, boxes.c, probs)\n        res = []\n        cdef int j, i\n        for j in range(num):\n            for i in range(len(self.names)):\n                if probs[j][i] > 0.:\n                    res.append((i, self.names[i], probs[j][i],\n                               (boxes.c[j].x, boxes.c[j].y,\n                                boxes.c[j].w, boxes.c[j].h)))\n        res = sorted(res, key=lambda x: -x[2])\n        free_ptrs(<void**>probs, num)\n        return res\n\n    def to_bytes(self):\n        with make_temp_dir() as temp_dir:\n            self.to_disk(temp_dir)\n            temp_dir = Path(temp_dir)\n            msg = {}\n            with (temp_dir / 'weights').open('rb') as file_:\n                msg[b'weights'] = file_.read()\n            with (temp_dir / 'cfg').open('rb') as file_:\n                msg[b'cfg'] = file_.read()\n            with (temp_dir / 'names').open('rb') as file_:\n                msg[b'names'] = file_.read()\n        return msgpack.dumps(msg)\n\n    def from_bytes(self, b):\n        msg = msgpack.loads(b)\n        with make_temp_dir() as temp_dir:\n            temp_dir = Path(temp_dir)\n            with (temp_dir / 'weights').open('wb') as 
file_:\n                file_.write(msg[b'weights'])\n            with (temp_dir / 'cfg').open('wb') as file_:\n                file_.write(msg[b'cfg'])\n            with (temp_dir / 'names').open('wb') as file_:\n                file_.write(msg[b'names'])\n            self.from_disk(temp_dir)\n        return self\n\n    def to_disk(self, path):\n        path = Path(path)\n        cdef bytes weights_loc = unicode(path / 'weights').encode('utf8')\n        save_weights(self.c, <char*>weights_loc)\n        with (path / 'cfg').open('wb') as file_:\n            file_.write(self.cfg)\n        with (path / 'names').open('w', encoding='utf8') as file_:\n            file_.write('\\n'.join(self.names))\n\n    def from_disk(self, path):\n        path = Path(path)\n        if not path.exists():\n            raise IOError(\"Model path not found: %s\" % path)\n        if not path.is_dir():\n            raise IOError(\"Model path not directory: %s\" % path)\n        if not (path / 'weights').exists():\n            raise IOError(\"Weights path not found: %s\" % (path/'weights'))\n        if not (path / 'cfg').exists():\n            raise IOError(\"Config path not found: %s\" % (path/'cfg'))\n        cdef bytes weights_loc = path2bytes(path / 'weights')\n        cdef bytes cfg_loc = path2bytes(path / 'cfg')\n        self.c = load_network(cfg_loc, weights_loc, 0)\n        self.cfg = (path / 'cfg').open('rb').read()\n        with (path / 'names').open('r', encoding='utf8') as file_:\n            self.names = file_.read().strip().split('\\n')\n        return self\n\n\ncpdef bytes path2bytes(path):\n    path = Path(path)\n    if not path.exists():\n        raise IOError(\"Data path not found: %s\" % path)\n    return unicode(path.resolve()).encode('utf8')\n\n\ndef train(bytes cfgfile_, bytes weightfile_, bytes train_images_, bytes backup_directory_):\n    cdef char* cfgfile = cfgfile_\n    cdef char* weightfile = weightfile_\n    cdef char* train_images = train_images_\n    cdef char* 
backup_directory = backup_directory_\n    cdef network* net = load_network(cfgfile, weightfile, 0)\n    cdef char* base = basecfg(cfgfile)\n    print(\"Learning Rate: %f, Momentum: %f, Decay: %f\\n\" % \n            (net.learning_rate, net.momentum, net.decay))\n    cdef int imgs = net.batch * net.subdivisions\n    cdef int i = net.seen[0]/imgs\n\n    cdef layer l = net.layers[net.n - 1]\n\n    cdef int side = l.side\n    cdef int classes = l.classes\n    cdef float jitter = l.jitter\n\n    cdef list *plist = get_paths(train_images)\n    cdef char **paths = <char**>list_to_array(plist)\n\n    cdef load_args args\n    # Zero-initialise so the 'args.* == 0' default checks below are well-defined\n    memset(&args, 0, sizeof(args))\n    args.w = net.w\n    args.h = net.h\n    args.paths = paths\n    args.coords = l.coords\n    args.n = imgs\n    args.m = plist.size\n    args.classes = classes\n    args.jitter = jitter\n    args.num_boxes = l.max_boxes\n    args.type = DETECTION_DATA\n    args.angle = net.angle\n    args.exposure = net.exposure\n    args.saturation = net.saturation\n    args.hue = net.hue\n\n    cdef float loss\n    if args.exposure == 0:\n        args.exposure = 1\n    if args.saturation == 0:\n        args.saturation = 1\n    if args.aspect == 0:\n        args.aspect = 1\n\n    cdef data train = load_data_detection(args.n, args.paths, args.m, args.w, args.h,\n                        args.num_boxes, args.classes, args.jitter,\n                        args.hue, args.saturation, args.exposure)\n\n    loss = train_network(net, train)\n    print(loss)\n    while get_current_batch(net) < net.max_batches:\n        args.d = &train\n        load_data_blocking(args)\n        loss = train_network(net, train)\n        print(loss)\n"
  },
  {
    "path": "lightnet/util.py",
    "content": "from contextlib import contextmanager\nfrom tempfile import mkdtemp\nimport shutil\n\n\n@contextmanager\ndef make_temp_dir():\n    path = mkdtemp()\n    yield path\n    shutil.rmtree(path)\n"
  },
  {
    "path": "requirements.txt",
    "content": "pathlib\nnumpy\nplac\nrequests\nmsgpack-python\n"
  },
  {
    "path": "setup.py",
    "content": "#!/usr/bin/env python\nimport shutil\nimport io\nimport os\nimport json\nimport distutils.command.build_ext\nimport subprocess\nimport sys\nfrom setuptools import Extension, setup\nimport platform\nimport numpy\n\ntry:\n    import cython\n    use_cython = True\nexcept ImportError:\n    use_cython = False\n\n\nclass ExtensionBuilder(distutils.command.build_ext.build_ext):\n    def build_extensions(self):\n        if use_cython:\n            subprocess.check_call([sys.executable, 'bin/cythonize.py'],\n                                   env=os.environ)\n\n        cuda_include_dir = '/usr/local/cuda/include/'\n        if 'CUDA_INCLUDE_DIR' in os.environ:\n            cuda_include_dir = os.environ['CUDA_INCLUDE_DIR']\n\n        cuda_library_dir = '/usr/local/cuda/lib64/'\n        if 'CUDA_LIBRARY_DIR' in os.environ:\n            cuda_library_dir = os.environ['CUDA_LIBRARY_DIR']\n\n        use_gpu = False\n        if 'GPU' in os.environ and os.environ['GPU'] == '1':\n            use_gpu = True\n\n        use_cudnn = False\n        if 'CUDNN' in os.environ and os.environ['CUDNN'] == '1':\n            use_gpu = True\n            use_cudnn = True\n\n        make_command = ['make']\n        if use_gpu:\n            make_command.append('GPU=1')\n        if use_cudnn:\n            make_command.append('CUDNN=1')\n\n        darknet_dir = os.path.join(PWD, 'lightnet', '_darknet')\n        subprocess.check_call(make_command, cwd=darknet_dir)\n\n        for e in self.extensions:\n            e.include_dirs.append(numpy.get_include())\n            e.undef_macros.append(\"FORTIFY_SOURCE\")\n            e.extra_compile_args.append(\"-DCBLAS\")\n            e.extra_compile_args.append('-g')\n            e.library_dirs.append(darknet_dir)\n            e.extra_link_args.append('-g')\n            e.extra_link_args.append('-ldarknet')\n            if use_gpu:\n                e.include_dirs.append(cuda_include_dir)\n                e.library_dirs.append(cuda_library_dir)\n  
              e.extra_link_args.append('-lcuda')\n                e.extra_link_args.append('-lcudart')\n                e.extra_link_args.append('-lcublas')\n                e.extra_link_args.append('-lcurand')\n                e.extra_link_args.append('-lstdc++')\n            if use_cudnn:\n                e.extra_link_args.append('-lcudnn')\n            if sys.platform == 'darwin':\n                e.extra_compile_args.append('-D__APPLE__')\n                e.extra_link_args.append('-lblas')\n            else:\n                e.extra_link_args.append('-lopenblas')\n        distutils.command.build_ext.build_ext.build_extensions(self)\n\n\ndef get_c_sources(start_dir):\n    c_sources = []\n    excludes = []\n    for path, subdirs, files in os.walk(start_dir):\n        for exc in excludes:\n            if exc in path:\n                break\n        else:\n            for name in files:\n                if name.endswith('.c'):\n                    c_sources.append(os.path.join(path, name))\n    return c_sources\n\n\nPWD = os.path.dirname(os.path.abspath(__file__))\nINCLUDE = os.path.join(PWD, 'lightnet', '_darknet')\n\nc_files = get_c_sources(os.path.join(PWD, 'lightnet', '_darknet'))\n\nwith io.open(os.path.join(PWD, 'lightnet', 'about.py'), encoding='utf8') as f:\n    about = {}\n    exec(f.read(), about)\n\nwith io.open(os.path.join(PWD, 'README.rst'), encoding='utf8') as f:\n    readme = f.read()\n\nsetup(\n    setup_requires=['numpy'],\n    install_requires=['numpy', 'plac', 'requests', 'pathlib', 'tqdm',\n                      'msgpack-python'],\n    ext_modules=[\n        Extension('lightnet.lightnet', ['lightnet/lightnet.c']),\n    ],\n    cmdclass={'build_ext': ExtensionBuilder},\n    package_data={'': ['*.json', '*.pyx', '*.pxd', '_darknet/*.h',\n                       'data/*.cfg', 'data/*.template', 'data/*.names'] + c_files},\n\n    name=about['__title__'],\n    zip_safe=False,\n    packages=['lightnet'],\n    version=about['__version__'],\n    author=about['__author__'],\n    author_email=about['__email__'],\n    url=about['__uri__'],\n    license=about['__license__'],\n    description=about['__summary__'],\n    long_description=readme,\n    classifiers=[\n        'Development Status :: 4 - Beta',\n        'Environment :: Console',\n        'Intended Audience :: Developers',\n        'Intended Audience :: Information Technology',\n        'License :: OSI Approved :: MIT License',\n        'Operating System :: POSIX :: Linux',\n        'Operating System :: MacOS :: MacOS X',\n        'Programming Language :: Cython',\n        'Programming Language :: Python :: 2.6',\n        'Programming Language :: Python :: 2.7',\n        'Programming Language :: Python :: 3.3',\n        'Programming Language :: Python :: 3.4',\n        'Programming Language :: Python :: 3.5',\n        'Programming Language :: Python :: 3.6',\n        'Topic :: Scientific/Engineering'\n    ],\n)\n"
  },
  {
    "path": "tests/test_boxes.py",
    "content": "import pytest\nimport numpy\nfrom lightnet.lightnet import BoxLabels\n\n@pytest.fixture\ndef ids():\n    return numpy.asarray([0, 2], dtype='i')\n\n@pytest.fixture\ndef xywh():\n    return numpy.asarray(\n        [[2., 2., 2., 2.],\n        [1., 1., 1., 1.]], dtype='f')\n\n\ndef test_BoxLabels_init(ids, xywh):\n    labels = BoxLabels(ids, xywh)\n    assert labels.x == list(xywh[:, 0])\n    assert labels.y == list(xywh[:, 1])\n    assert labels.h == list(xywh[:, 2])\n    assert labels.w == list(xywh[:, 3])\n    print(labels.left)\n    print(labels.right)\n    print(labels.top)\n    print(labels.bottom)\n"
  },
  {
    "path": "tests/test_image.py",
    "content": "from pathlib import Path\nfrom numpy.testing import assert_equal\n\nfrom lightnet.lightnet import Image\n\ndef test_make_image():\n    img = Image.blank(10, 10, 10)\n    img2 = Image.blank(100, 10, 10)\n\ndef test_random_image():\n    img = Image.random(10, 10, 10)\n    img2 = Image.random(100, 10, 10)\n\ndef test_image_from_bytes():\n    path = Path(\"tests/COCO_val2014_000000000042.jpg\")\n    loaded = Image.load_color(path)\n    with path.open('rb') as file_:\n        raw = file_.read()\n    made = Image.from_bytes(raw)\n    assert_equal(made.data, loaded.data)\n\n"
  },
  {
    "path": "tests/test_network.py",
    "content": "from __future__ import unicode_literals\nfrom lightnet import Network, Image, BoxLabels\nfrom lightnet.lightnet import DetectionData\nimport numpy\nimport pytest\nfrom pathlib import Path\n\n@pytest.fixture\ndef ids_xywh():\n    return (numpy.asarray([16], dtype='i'),\n            numpy.asarray([[0.606688, 0.341381, 0.544156, 0.510000]], dtype='f'))\n\n@pytest.fixture\ndef image():\n    return Image.load_color(Path(\"tests/COCO_val2014_000000000042.jpg\"))\n\n@pytest.fixture\ndef box_labels(ids_xywh):\n    ids, xywh = ids_xywh\n    return BoxLabels(ids, xywh)\n \ndef test_init():\n    nn = Network()\n\ndef test_load():\n    net = Network.load(\"tiny-yolo\")\n\n@pytest.mark.xfail\ndef test_from_disk(image):\n    net = Network().from_disk(Path('lightnet/data/tiny-yolo').resolve())\n    results = net(image)\n    assert results is not None\n\ndef test_to_from_bytes(image):\n    net = Network.load('tiny-yolo')\n    data = net.to_bytes()\n    results = net(image)\n    loaded = Network().from_bytes(data)\n    results2 = net(image)\n    assert results == results2\n\n\ndef test_detect(image):\n    net = Network.load(\"tiny-yolo\")\n    result = net(image)\n \ndef test_box_labels(box_labels):\n    pass\n \n\ndef test_detection_data(image, box_labels):\n    net = Network.load(\"tiny-yolo\")\n    data = DetectionData([image], [box_labels],\n                         net.width, net.height, net.max_boxes, net.num_classes)\n    #assert data.X_shape == (1, net.width * net.height * 3)\n    #assert data.y_shape == (1, net.num_boxes * 5)\n\ndef test_update(image, box_labels):\n    net = Network.load(\"tiny-yolo\")\n    for i in range(10):\n        loss = net.update([image], [box_labels])\n        print(loss)\n\n\ndef test_evaluate(image, box_labels):\n    net = Network.load(\"tiny-yolo\")\n    acc = net.evaluate([image], [box_labels])\n    assert 'fp' in acc\n    assert 'fn' in acc\n    assert 'tp' in acc\n    assert 'r' in acc\n    assert 'p' in acc\n    assert 'f' in 
acc\n"
  }
]