Full Code of mlpc-ucsd/LETR for AI

Repository: mlpc-ucsd/LETR
Branch: master
Commit: 6022fbd9df65
Files: 61
Total size: 2.3 MB

Directory structure:
LETR/

├── .gitignore
├── LICENSE
├── README.md
├── evaluation/
│   ├── eval-aph-post-wireframe.py
│   ├── eval-aph-post-york.py
│   ├── eval-aph-score-wireframe.py
│   ├── eval-aph-score-york.py
│   ├── eval-fscore-wireframe.py
│   ├── eval-fscore-york.py
│   ├── eval-sAP-wireframe.py
│   ├── eval-sAP-york.py
│   ├── lcnn/
│   │   ├── __init__.py
│   │   ├── box.py
│   │   ├── config.py
│   │   ├── datasets.py
│   │   ├── metric.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── hourglass_pose.py
│   │   │   ├── line_vectorizer.py
│   │   │   └── multitask_learner.py
│   │   ├── postprocess.py
│   │   ├── trainer.py
│   │   └── utils.py
│   ├── matlab/
│   │   ├── correspondPixels.mexa64
│   │   ├── correspondPixels.mexmaci64
│   │   ├── correspondPixels.mexw64
│   │   └── eval_release.m
│   └── process.py
├── helper/
│   ├── gdrive-download.sh
│   ├── wireframe.py
│   ├── wireframe_eval.py
│   ├── york.py
│   └── york_eval.py
├── script/
│   ├── evaluation/
│   │   ├── eval_aph_wireframe.sh
│   │   ├── eval_aph_york.sh
│   │   ├── eval_stage1.sh
│   │   ├── eval_stage2.sh
│   │   └── eval_stage2_focal.sh
│   └── train/
│       ├── a0_train_stage1_res50.sh
│       ├── a1_train_stage1_res101.sh
│       ├── a2_train_stage2_res50.sh
│       ├── a3_train_stage2_res101.sh
│       ├── a4_train_stage2_focal_res50.sh
│       └── a5_train_stage2_focal_res101.sh
└── src/
    ├── args.py
    ├── datasets/
    │   ├── __init__.py
    │   ├── coco.py
    │   └── transforms.py
    ├── demo_letr.ipynb
    ├── engine.py
    ├── main.py
    ├── models/
    │   ├── __init__.py
    │   ├── backbone.py
    │   ├── letr.py
    │   ├── letr_stack.py
    │   ├── matcher.py
    │   ├── multi_head_attention.py
    │   ├── position_encoding.py
    │   └── transformer.py
    └── util/
        ├── __init__.py
        └── misc.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.nfs*
*.pyc
.dumbo.json
.DS_Store
.*.swp
*.pth
**/__pycache__/**
.ipynb_checkpoints/
*.tmp
*.pkl
**/.mypy_cache/*
.mypy_cache/*
not_tracked_dir/
.vscode
logs/
data
evaluation/data
exp


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# LETR: Line Segment Detection Using Transformers without Edges

## Introduction 
This repository contains the official code and pretrained models for [Line Segment Detection Using Transformers without Edges](https://arxiv.org/abs/2101.01909). [Yifan Xu*](https://yfxu.com/), [Weijian Xu*](https://weijianxu.com/), [David Cheung](https://github.com/sawsa307), and [Zhuowen Tu](https://pages.ucsd.edu/~ztu/). CVPR2021 (**Oral**)

In this paper, we present a joint end-to-end line segment detection algorithm using Transformers that is free of post-processing and heuristics-guided intermediate processing (edge/junction/region detection). Our method, named LinE segment TRansformers (LETR), takes advantage of integrated tokenized queries, a self-attention mechanism, and an encoding-decoding strategy within Transformers, skipping the standard heuristic designs for edge element detection and perceptual grouping. We equip Transformers with a multi-scale encoder/decoder strategy to perform fine-grained line segment detection under a direct endpoint distance loss. This loss term is particularly suitable for detecting geometric structures such as line segments, which are not conveniently represented by standard bounding boxes. The Transformers learn to gradually refine line segments through layers of self-attention.
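The direct endpoint distance loss mentioned above can be illustrated with a minimal NumPy sketch (a hedged illustration only, not the repository's implementation; the actual loss and matcher live in `src/models/letr.py` and `src/models/matcher.py`, and the function name here is ours):

```python
import numpy as np

def endpoint_l1_loss(pred, gt):
    """Mean L1 distance between predicted and ground-truth segment endpoints.

    pred, gt: arrays of shape (N, 2, 2) -- N segments, 2 endpoints, (x, y).
    A segment is the same regardless of endpoint order, so we take the
    cheaper of the two orderings per segment (the real model resolves
    assignment with bipartite matching instead).
    """
    direct = np.abs(pred - gt).sum(axis=(1, 2))             # same endpoint order
    flipped = np.abs(pred - gt[:, ::-1]).sum(axis=(1, 2))   # swapped endpoints
    return np.minimum(direct, flipped).mean()

pred = np.array([[[0.0, 0.0], [1.0, 1.0]]])
gt = np.array([[[1.0, 1.0], [0.0, 0.0]]])   # same segment, endpoints swapped
print(endpoint_l1_loss(pred, gt))           # -> 0.0
```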

<img src="figures/pipeline.svg" alt="Model Pipeline" width="720" />


## Changelog
05/07/2021: Code for the LETR Basic Usage [Demo](https://github.com/mlpc-ucsd/LETR/blob/master/src/demo_letr.ipynb) is released. 

04/30/2021: Code and pre-trained checkpoint for LETR are released. 

## Results and Checkpoints


| Name | sAP10 | sAP15 | sF10 | sF15 | URL|
| --- | --- | --- | --- | --- |--- |
| Wireframe | 65.6 | 68.0 | 66.1 | 67.4 | [LETR-R101](https://vcl.ucsd.edu/letr/checkpoints/res101/res101_stage2_focal.zip) |
| YorkUrban | 29.6 | 32.0 | 40.5 | 42.1 | [LETR-R50](https://vcl.ucsd.edu/letr/checkpoints/res50/res50_stage2_focal.zip) |

## Reproducing Results

### Step1: Code Preparation
```bash
git clone https://github.com/mlpc-ucsd/LETR.git
```

### Step2: Environment Installation

```bash
mkdir -p data
mkdir -p evaluation/data
mkdir -p exp


conda create -n letr python anaconda
conda activate letr
conda install -c pytorch pytorch torchvision
conda install cython scipy
pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
pip install docopt
```

### Step3: Data Preparation
To reproduce our results, you need to process two datasets, [ShanghaiTech](https://github.com/huangkuns/wireframe) and [YorkUrban](https://www.elderlab.yorku.ca/resources/york-urban-line-segment-database-information/). The scripts ./helper/wireframe.py and ./helper/york.py, both modified from the code in [L-CNN](https://github.com/zhou13/lcnn), process the raw downloaded data.

- ShanghaiTech Train Data
    - To Download (modified from [L-CNN](https://github.com/zhou13/lcnn))
        ```bash
        cd data
        bash ../helper/gdrive-download.sh 1BRkqyi5CKPQF6IYzj_dQxZFQl0OwbzOf wireframe_raw.tar.xz
        tar xf wireframe_raw.tar.xz
        rm wireframe_raw.tar.xz
        python ../helper/wireframe.py ./wireframe_raw ./wireframe_processed

        ```
- YorkUrban Train Data
    - To Download  
        ```bash
        cd data
        wget https://www.dropbox.com/sh/qgsh2audfi8aajd/AAAQrKM0wLe_LepwlC1rzFMxa/YorkUrbanDB.zip
        unzip YorkUrbanDB.zip 
        python ../helper/york.py ./YorkUrbanDB ./york_processed
        
        ```
- Processed Evaluation Data
    ```bash
    bash ./helper/gdrive-download.sh 1T4_6Nb5r4yAXre3lf-zpmp3RbmyP1t9q ./evaluation/data/wireframe.tar.xz
    bash ./helper/gdrive-download.sh 1ijOXv0Xw1IaNDtp1uBJt5Xb3mMj99Iw2 ./evaluation/data/york.tar.xz
    tar -vxf ./evaluation/data/wireframe.tar.xz -C ./evaluation/data/.
    tar -vxf ./evaluation/data/york.tar.xz -C ./evaluation/data/.
    rm ./evaluation/data/wireframe.tar.xz
    rm ./evaluation/data/york.tar.xz
    ```

### Step4: Train Script Examples
1. Train a coarse-model (a.k.a. stage1 model).
    ```bash
    # Usage: bash script/*/*.sh [exp name]
    bash script/train/a0_train_stage1_res50.sh  res50_stage1 # LETR-R50  
    bash script/train/a1_train_stage1_res101.sh res101_stage1 # LETR-R101 
    ```

2. Train a fine-model (a.k.a. stage2 model).
    ```bash
    # Usage: bash script/*/*.sh [exp name]
    bash script/train/a2_train_stage2_res50.sh  res50_stage2  # LETR-R50
    bash script/train/a3_train_stage2_res101.sh res101_stage2 # LETR-R101 
    ```

3. Fine-tune the fine-model with focal loss (a.k.a. stage2_focal model).
    ```bash
    # Usage: bash script/*/*.sh [exp name]
    bash script/train/a4_train_stage2_focal_res50.sh   res50_stage2_focal # LETR-R50
    bash script/train/a5_train_stage2_focal_res101.sh  res101_stage2_focal # LETR-R101 
    ```

### Step5: Evaluation

1. Evaluate models.
    ```bash
    # Evaluate sAP^10, sAP^15, sF^10, sF^15 (both Wireframe and YorkUrban datasets).
    bash script/evaluation/eval_stage1.sh [exp name]
    bash script/evaluation/eval_stage2.sh [exp name]
    bash script/evaluation/eval_stage2_focal.sh [exp name]
    ```
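The sAP metric evaluated above scores each predicted segment by endpoint proximity to an unclaimed ground-truth segment. As a rough sketch of that matching step (a hedged NumPy illustration under our own simplifying assumptions, with a made-up function name; the authoritative implementations are `evaluation/eval-sAP-wireframe.py` and `evaluation/eval-sAP-york.py`):

```python
import numpy as np

def sap_tp_fp(preds, scores, gts, thresh=10.0):
    """Mark predictions TP/FP in the style of structural AP (sAP).

    preds: (N, 2, 2), gts: (M, 2, 2) in the 128x128 evaluation space.
    A prediction matches a GT segment when the summed squared endpoint
    distance (under the better endpoint ordering) is below `thresh`;
    each GT can be claimed at most once, highest-scoring prediction first.
    """
    tp = np.zeros(len(preds), dtype=bool)
    if len(gts) == 0:
        return tp
    used = np.zeros(len(gts), dtype=bool)
    for i in np.argsort(-scores):                            # best score first
        d1 = ((preds[i] - gts) ** 2).sum(axis=(1, 2))        # same endpoint order
        d2 = ((preds[i] - gts[:, ::-1]) ** 2).sum(axis=(1, 2))  # swapped endpoints
        d = np.minimum(d1, d2)
        d[used] = np.inf                                     # GT claimed once
        j = int(np.argmin(d))
        if d[j] < thresh:
            used[j] = True
            tp[i] = True
    return tp
```

The TP/FP flags, accumulated over the test set and sorted by score, yield the precision-recall curve from which sAP^10 / sAP^15 are computed (with `thresh` set to 10 or 15).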

### Citation

If you use this code for your research, please cite our paper:
```
@InProceedings{Xu_2021_CVPR,
    author    = {Xu, Yifan and Xu, Weijian and Cheung, David and Tu, Zhuowen},
    title     = {Line Segment Detection Using Transformers Without Edges},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4257-4266}
}
```
### Acknowledgments

This code is based on the implementation of [**DETR: End-to-End Object Detection with Transformers**](https://github.com/facebookresearch/detr). 


================================================
FILE: evaluation/eval-aph-post-wireframe.py
================================================
#!/usr/bin/env python3
"""Post-processing the output of neural network
Usage:
    post.py [options] <input-dir> <output-dir>
    post.py ( -h | --help )

Examples:
    post.py logs/logname/npz/000336000  result/logname

Arguments:
   input-dir                         Directory that stores the npz
   output-dir                        Output directory

Options:
   -h --help                         Show this screen.
   --plot                            Generate images besides npz files
   --thresholds=<thresholds>         A comma-separated list for thresholding
                                     [default: 0.006,0.010,0.015]
"""

import glob
import math
import os
import os.path as osp
import sys

import cv2
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from docopt import docopt

from lcnn.postprocess import postprocess
from lcnn.utils import parmap

PLTOPTS = {"color": "#33FFFF", "s": 1.2, "edgecolors": "none", "zorder": 5}
cmap = plt.get_cmap("jet")
norm = mpl.colors.Normalize(vmin=0.92, vmax=1.02)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])


def c(x):
    return sm.to_rgba(x)


def imshow(im):
    plt.close()
    sizes = im.shape
    height = float(sizes[0])
    width = float(sizes[1])

    fig = plt.figure()
    fig.set_size_inches(width / height, 1, forward=False)
    ax = plt.Axes(fig, [0.0, 0.0, 1.0, 1.0])
    ax.set_axis_off()
    fig.add_axes(ax)
    plt.xlim([-0.5, sizes[1] - 0.5])
    plt.ylim([sizes[0] - 0.5, -0.5])
    plt.imshow(im)


def main():
    args = docopt(__doc__)

    files = sorted(glob.glob(osp.join(args["<input-dir>"], "*.npz")))
    inames = sorted(glob.glob("data/wireframe/valid-images/*.jpg"))
    gts = sorted(glob.glob("data/wireframe/valid/*.npz"))
    prefix = args["<output-dir>"]

    inputs = list(zip(files, inames, gts))
    thresholds = list(map(float, args["--thresholds"].split(",")))


    def handle(allname):
        fname, iname, gtname = allname
        print("Processing", fname)
        im = cv2.imread(iname)
        with np.load(fname) as f:
            lines = f["lines"]
            scores = f["score"]
        with np.load(gtname) as f:
            gtlines = f["lpos"][:, :, :2]
        gtlines[:, :, 0] *= im.shape[0] / 128
        gtlines[:, :, 1] *= im.shape[1] / 128
        for i in range(1, len(lines)):
            if (lines[i] == lines[0]).all():
                lines = lines[:i]
                scores = scores[:i]
                break

        lines[:, :, 0] *= im.shape[0] / 128
        lines[:, :, 1] *= im.shape[1] / 128
        diag = (im.shape[0] ** 2 + im.shape[1] ** 2) ** 0.5

        for threshold in thresholds:
            nlines, nscores = postprocess(lines, scores, diag * threshold, 0, False)

            outdir = osp.join(prefix, f"{threshold:.3f}".replace(".", "_"))
            os.makedirs(outdir, exist_ok=True)
            npz_name = osp.join(outdir, osp.split(fname)[-1])

            if args["--plot"]:
                # plot gt
                imshow(im[:, :, ::-1])
                for (a, b) in gtlines:
                    plt.plot([a[1], b[1]], [a[0], b[0]], c="orange", linewidth=0.5)
                    plt.scatter(a[1], a[0], **PLTOPTS)
                    plt.scatter(b[1], b[0], **PLTOPTS)
                plt.savefig(npz_name.replace(".npz", ".png"), dpi=500, bbox_inches=0)

                thres = [0.96, 0.97, 0.98, 0.99]
                for i, t in enumerate(thres):
                    imshow(im[:, :, ::-1])
                    for (a, b), s in zip(nlines[nscores > t], nscores[nscores > t]):
                        plt.plot([a[1], b[1]], [a[0], b[0]], c=c(s), linewidth=0.5)
                        plt.scatter(a[1], a[0], **PLTOPTS)
                        plt.scatter(b[1], b[0], **PLTOPTS)
                    plt.savefig(
                        npz_name.replace(".npz", f"_{i}.png"), dpi=500, bbox_inches=0
                    )

            nlines[:, :, 0] *= 128 / im.shape[0]
            nlines[:, :, 1] *= 128 / im.shape[1]
            np.savez_compressed(npz_name, lines=nlines, score=nscores)

    parmap(handle, inputs, 12)


if __name__ == "__main__":
    main()


================================================
FILE: evaluation/eval-aph-post-york.py
================================================
#!/usr/bin/env python3
"""Post-processing the output of neural network
Usage:
    post.py [options] <input-dir> <output-dir>
    post.py ( -h | --help )

Examples:
    post.py logs/logname/npz/000336000  result/logname

Arguments:
   input-dir                         Directory that stores the npz
   output-dir                        Output directory

Options:
   -h --help                         Show this screen.
   --plot                            Generate images besides npz files
   --thresholds=<thresholds>         A comma-separated list for thresholding
                                     [default: 0.006,0.010,0.015]
"""

import glob
import math
import os
import os.path as osp
import sys

import cv2
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from docopt import docopt

from lcnn.postprocess import postprocess
from lcnn.utils import parmap

PLTOPTS = {"color": "#33FFFF", "s": 1.2, "edgecolors": "none", "zorder": 5}
cmap = plt.get_cmap("jet")
norm = mpl.colors.Normalize(vmin=0.92, vmax=1.02)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])


def c(x):
    return sm.to_rgba(x)


def imshow(im):
    plt.close()
    sizes = im.shape
    height = float(sizes[0])
    width = float(sizes[1])

    fig = plt.figure()
    fig.set_size_inches(width / height, 1, forward=False)
    ax = plt.Axes(fig, [0.0, 0.0, 1.0, 1.0])
    ax.set_axis_off()
    fig.add_axes(ax)
    plt.xlim([-0.5, sizes[1] - 0.5])
    plt.ylim([sizes[0] - 0.5, -0.5])
    plt.imshow(im)


def main():
    args = docopt(__doc__)

    files = sorted(glob.glob(osp.join(args["<input-dir>"], "*.npz")))
    inames = sorted(glob.glob("data/york/valid_copy/*.jpg"))
    gts = sorted(glob.glob("data/york/valid_copy/*.npz"))
    print('len(files):{} len(inames):{} len(gts):{}'.format(len(files), len(inames), len(gts)))
    
    prefix = args["<output-dir>"]

    inputs = list(zip(files, inames, gts))
    thresholds = list(map(float, args["--thresholds"].split(",")))


    def handle(allname):
        fname, iname, gtname = allname
        print("Processing", fname)
        im = cv2.imread(iname)
        with np.load(fname) as f:
            lines = f["lines"]
            scores = f["score"]
        with np.load(gtname) as f:
            gtlines = f["lpos"][:, :, :2]
        gtlines[:, :, 0] *= im.shape[0] / 128
        gtlines[:, :, 1] *= im.shape[1] / 128
        for i in range(1, len(lines)):
            if (lines[i] == lines[0]).all():
                lines = lines[:i]
                scores = scores[:i]
                break

        lines[:, :, 0] *= im.shape[0] / 128
        lines[:, :, 1] *= im.shape[1] / 128
        diag = (im.shape[0] ** 2 + im.shape[1] ** 2) ** 0.5

        for threshold in thresholds:
            nlines, nscores = postprocess(lines, scores, diag * threshold, 0, False)

            outdir = osp.join(prefix, f"{threshold:.3f}".replace(".", "_"))
            os.makedirs(outdir, exist_ok=True)
            npz_name = osp.join(outdir, osp.split(fname)[-1])

            if args["--plot"]:
                # plot gt
                imshow(im[:, :, ::-1])
                for (a, b) in gtlines:
                    plt.plot([a[1], b[1]], [a[0], b[0]], c="orange", linewidth=0.5)
                    plt.scatter(a[1], a[0], **PLTOPTS)
                    plt.scatter(b[1], b[0], **PLTOPTS)
                plt.savefig(npz_name.replace(".npz", ".png"), dpi=500, bbox_inches=0)

                thres = [0.96, 0.97, 0.98, 0.99]
                for i, t in enumerate(thres):
                    imshow(im[:, :, ::-1])
                    for (a, b), s in zip(nlines[nscores > t], nscores[nscores > t]):
                        plt.plot([a[1], b[1]], [a[0], b[0]], c=c(s), linewidth=0.5)
                        plt.scatter(a[1], a[0], **PLTOPTS)
                        plt.scatter(b[1], b[0], **PLTOPTS)
                    plt.savefig(
                        npz_name.replace(".npz", f"_{i}.png"), dpi=500, bbox_inches=0
                    )

            nlines[:, :, 0] *= 128 / im.shape[0]
            nlines[:, :, 1] *= 128 / im.shape[1]
            np.savez_compressed(npz_name, lines=nlines, score=nscores)

    parmap(handle, inputs, 12)


if __name__ == "__main__":
    main()

================================================
FILE: evaluation/eval-aph-score-wireframe.py
================================================
#!/usr/bin/env python3
"""Evaluate APH for LCNN
Usage:
    eval-APH.py <src> <dst>
    eval-APH.py (-h | --help )

Examples:
    ./eval-APH.py post/RUN-ITERATION/0_010 post/RUN-ITERATION/0_010-APH

Arguments:
    <src>                Source directory that stores preprocessed npz
    <dst>                Temporary output directory

Options:
   -h --help             Show this screen.
"""

import os
import glob
import os.path as osp
import subprocess

import numpy as np
import scipy.io as sio
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import interpolate
from docopt import docopt

mpl.rcParams.update({"font.size": 18})
plt.rcParams["font.family"] = "Times New Roman"
del mpl.font_manager.weight_dict["roman"]
mpl.font_manager._rebuild()

image_path = "data/wireframe/valid-images/"
line_gt_path = "data/wireframe/valid/"
output_size = 128


def main():
    args = docopt(__doc__)
    src_dir = args["<src>"]
    tar_dir = args["<dst>"]
    print(src_dir, tar_dir)
    output_file = osp.join(tar_dir, "result.mat")
    target_dir = osp.join(tar_dir, "mat")
    os.makedirs(target_dir, exist_ok=True)
    print(f"intermediate matlab results will be saved at: {target_dir}")

    file_list = glob.glob(osp.join(src_dir, "*.npz"))
    # Old threshold: thresh = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.97, 0.99, 0.995, 0.999, 0.9995, 0.9999]
    thresh = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.525, 0.55, 0.575, 0.6, 0.625, 0.65, 0.675, 0.7, 0.8, 0.9, 0.95, 0.97, 0.99, 0.995, 0.999, 0.9995, 0.9999]
    for t in thresh:
        for fname in file_list:
            name = fname.split("/")[-1].split(".")[0]
            mat_name = name + ".mat"
            npz = np.load(fname)
            lines = npz["lines"].reshape(-1, 4)
            scores = npz["score"]
            for j in range(len(scores) - 1):
                if scores[j + 1] == scores[0]:  # Cyclic lines/scores. Choose the first cycle.
                    lines = lines[: j + 1]
                    scores = scores[: j + 1]
                    break
            idx = np.where(scores > t)[0]  # Only choose the lines which scores > specified threshold.
            os.makedirs(osp.join(target_dir, str(t)), exist_ok=True)
            sio.savemat(osp.join(target_dir, str(t), mat_name), {"lines": lines[idx]})

    cmd = "matlab -nodisplay -nodesktop "
    cmd += '-r "dbstop if error; '
    cmd += "eval_release('{:s}', '{:s}', '{:s}', '{:s}', {:d}); quit;\"".format(
        image_path, line_gt_path, output_file, target_dir, output_size
    )
    print("Running:\n{}".format(cmd))
    os.environ["MATLABPATH"] = "matlab/"
    subprocess.call(cmd, shell=True)

    mat = sio.loadmat(output_file)
    tps = mat["sumtp"]
    fps = mat["sumfp"]
    N = mat["sumgt"]
    
    # Old way of F^H and AP^H:
    # --------------
    # rcs = sorted(list((tps / N)[:, 0]))
    # prs = sorted(list((tps / np.maximum(tps + fps, 1e-9))[:, 0]))[::-1]  # FIXME: Why using sorted?

    # print(
    #     "f measure is: ",
    #     (2 * np.array(prs) * np.array(rcs) / (np.array(prs) + np.array(rcs))).max(),
    # )

    # recall = np.concatenate(([0.0], rcs, [1.0]))
    # precision = np.concatenate(([0.0], prs, [0.0]))

    # for i in range(precision.size - 1, 0, -1):
    #     precision[i - 1] = max(precision[i - 1], precision[i])
    # i = np.where(recall[1:] != recall[:-1])[0]
    # print("AP is: ", np.sum((recall[i + 1] - recall[i]) * precision[i + 1]))
    # --------------


    # Our way of F^H and AP^H:
    # --------------
    rcs = list((tps / N)[:, 0])
    prs = list((tps / np.maximum(tps + fps, 1e-9))[:, 0])  # No sorting.

    print(
        "f measure is: ",
        (2 * np.array(prs) * np.array(rcs) / (np.array(prs) + np.array(rcs) + 1e-9)).max(),
    )

    recall = np.concatenate(([0.0], rcs[::-1], [1.0]))  # Reverse order.
    precision = np.concatenate(([0.0], prs[::-1], [0.0]))

    for i in range(precision.size - 1, 0, -1):
        precision[i - 1] = max(precision[i - 1], precision[i])
    i = np.where(recall[1:] != recall[:-1])[0]
    print("AP is: ", np.sum((recall[i + 1] - recall[i]) * precision[i + 1]))
    # --------------
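The AP computation above (pad the PR curve, take the monotone upper envelope of precision, integrate precision over the recall steps) can be sketched on toy numbers. The function name and all values below are invented for illustration:

```python
import numpy as np

# Toy sketch of the AP computation used above (values invented): pad the
# PR curve, enforce a non-increasing precision envelope, then sum the
# area of each step where recall changes.
def average_precision(rcs, prs):
    recall = np.concatenate(([0.0], rcs, [1.0]))
    precision = np.concatenate(([0.0], prs, [0.0]))
    # Upper envelope: sweep right-to-left so precision never increases.
    for i in range(precision.size - 1, 0, -1):
        precision[i - 1] = max(precision[i - 1], precision[i])
    # Integrate precision over the recall steps.
    idx = np.where(recall[1:] != recall[:-1])[0]
    return np.sum((recall[idx + 1] - recall[idx]) * precision[idx + 1])

print(average_precision(np.array([0.2, 0.5, 0.8]), np.array([1.0, 0.8, 0.6])))
```

With these toy inputs the step areas are 0.2·1.0 + 0.3·0.8 + 0.3·0.6, i.e. 0.62.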




    # Old way of visualization:
    # --------------
    # f = interpolate.interp1d(rcs, prs, kind="cubic", bounds_error=False)  # FIXME: May still have issue.
    # x = np.arange(0, 1, 0.01) * rcs[-1]
    # y = f(x)
    # plt.plot(x, y, linewidth=3, label="L-CNN")

    # f_scores = np.linspace(0.2, 0.8, num=8)
    # for f_score in f_scores:
    #     x = np.linspace(0.01, 1)
    #     y = f_score * x / (2 * x - f_score)
    #     l, = plt.plot(x[y >= 0], y[y >= 0], color="green", alpha=0.3)
    #     plt.annotate("f={0:0.1}".format(f_score), xy=(0.9, y[45] + 0.02), alpha=0.4)

    # plt.grid(True)
    # plt.axis([0.0, 1.0, 0.0, 1.0])
    # plt.xticks(np.arange(0, 1.0, step=0.1))
    # plt.xlabel("Recall")
    # plt.ylabel("Precision")
    # plt.yticks(np.arange(0, 1.0, step=0.1))
    # plt.legend(loc=3)
    # plt.title("PR Curve for APH")
    # plt.savefig("apH.pdf", format="pdf", bbox_inches="tight")
    # plt.savefig("apH.svg", format="svg", bbox_inches="tight")
    # plt.show()
    # --------------


    # Our way of visualization:
    # --------------
    reversed_rcs = rcs[::-1]
    reversed_prs = prs[::-1]
    filtered_rcs = []
    filtered_prs = []
    for i in range(len(reversed_rcs)):
        if reversed_rcs[i] == 0.0 and reversed_prs[i] == 0.0:  # Skip empty entries.
            continue
        filtered_rcs.append(reversed_rcs[i])
        filtered_prs.append(reversed_prs[i])

    f = interpolate.interp1d(filtered_rcs, filtered_prs, kind="cubic", bounds_error=False)  # FIXME: May still have issue.
    x = np.arange(0, 1, 0.01) * filtered_rcs[-1]
    y = f(x)
    plt.plot(x, y, linewidth=3, label="Current")

    f_scores = np.linspace(0.2, 0.8, num=8)
    for f_score in f_scores:
        x = np.linspace(0.01, 1)
        y = f_score * x / (2 * x - f_score)
        l, = plt.plot(x[y >= 0], y[y >= 0], color="green", alpha=0.3)
        plt.annotate("f={0:0.1}".format(f_score), xy=(0.9, y[45] + 0.02), alpha=0.4)

    plt.grid(True)
    plt.axis([0.0, 1.0, 0.0, 1.0])
    plt.xticks(np.arange(0, 1.0, step=0.1))
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.yticks(np.arange(0, 1.0, step=0.1))
    plt.legend(loc=3)
    plt.title("PR Curve for APH")
    # plt.savefig("apH.pdf", format="pdf", bbox_inches="tight")
    # plt.savefig("apH.svg", format="svg", bbox_inches="tight")
    plt.show()
    # --------------
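The iso-F guide curves drawn above come from solving F = 2·p·r / (p + r) for precision, giving p = F·r / (2·r − F). A quick self-contained check with invented numbers:

```python
# Solve F = 2*p*r / (p + r) for precision at a fixed F and recall.
def iso_precision(f, r):
    return f * r / (2 * r - f)

p = iso_precision(0.5, 0.8)        # precision needed for F = 0.5 at recall 0.8
f_back = 2 * p * 0.8 / (p + 0.8)   # recompute F from (p, r); should recover 0.5
print(round(f_back, 6))
```

This is why each green curve in the plot traces all (recall, precision) pairs sharing one F value.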


if __name__ == "__main__":
    # import debugpy
    # print("Enabling attach starts.")
    # debugpy.listen(address=('0.0.0.0', 9310))
    # debugpy.wait_for_client()
    # print("Enabling attach ends.")
    
    plt.tight_layout()
    main()


================================================
FILE: evaluation/eval-aph-score-york.py
================================================
#!/usr/bin/env python3
"""Evaluate APH for LCNN
Usage:
    eval-APH.py <src> <dst>
    eval-APH.py (-h | --help )

Examples:
    ./eval-APH.py post/RUN-ITERATION/0_010 post/RUN-ITERATION/0_010-APH

Arguments:
    <src>                Source directory that stores preprocessed npz
    <dst>                Temporary output directory

Options:
   -h --help             Show this screen.
"""

import os
import glob
import os.path as osp
import subprocess

import numpy as np
import scipy.io as sio
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import interpolate
from docopt import docopt

mpl.rcParams.update({"font.size": 18})
plt.rcParams["font.family"] = "Times New Roman"
try:
    # Force a font-cache rebuild so the family change takes effect (older matplotlib).
    del mpl.font_manager.weight_dict["roman"]
    mpl.font_manager._rebuild()
except (AttributeError, KeyError):
    pass  # Newer matplotlib removed these private hooks; no rebuild is needed there.

image_path = "data/york/valid-images/"
line_gt_path = "data/york/valid_copy/"
output_size = 128


def main():
    args = docopt(__doc__)
    src_dir = args["<src>"]
    tar_dir = args["<dst>"]

    output_file = osp.join(tar_dir, "result.mat")
    target_dir = osp.join(tar_dir, "mat")
    os.makedirs(target_dir, exist_ok=True)
    print(f"intermediate matlab results will be saved at: {target_dir}")

    file_list = glob.glob(osp.join(src_dir, "*.npz"))
    thresh = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.525, 0.55, 0.575, 0.6, 0.625, 0.65, 0.675, 0.7, 0.8, 0.9, 0.95, 0.97, 0.99, 0.995, 0.999, 0.9995, 0.9999]
    for t in thresh:
        for fname in file_list:
            name = osp.splitext(osp.basename(fname))[0]  # Portable basename extraction.
            mat_name = name + ".mat"
            npz = np.load(fname)
            lines = npz["lines"].reshape(-1, 4)
            scores = npz["score"]
            for j in range(len(scores) - 1):
                if scores[j + 1] == scores[0]:  # Predictions repeat cyclically; keep only the first cycle.
                    lines = lines[: j + 1]
                    scores = scores[: j + 1]
                    break
            idx = np.where(scores > t)[0]  # Keep only lines whose score exceeds the threshold.
            os.makedirs(osp.join(target_dir, str(t)), exist_ok=True)
            sio.savemat(osp.join(target_dir, str(t), mat_name), {"lines": lines[idx]})

    cmd = "matlab -nodisplay -nodesktop "
    cmd += '-r "dbstop if error; '
    cmd += "eval_release('{:s}', '{:s}', '{:s}', '{:s}', {:d}); quit;\"".format(
        image_path, line_gt_path, output_file, target_dir, output_size
    )
    print("Running:\n{}".format(cmd))
    os.environ["MATLABPATH"] = "matlab/"
    subprocess.call(cmd, shell=True)

    mat = sio.loadmat(output_file)
    tps = mat["sumtp"]
    fps = mat["sumfp"]
    N = mat["sumgt"]
    
    # Old way of F^H and AP^H:
    # --------------
    # rcs = sorted(list((tps / N)[:, 0]))
    # prs = sorted(list((tps / np.maximum(tps + fps, 1e-9))[:, 0]))[::-1]  # FIXME: Why using sorted?

    # print(
    #     "f measure is: ",
    #     (2 * np.array(prs) * np.array(rcs) / (np.array(prs) + np.array(rcs))).max(),
    # )

    # recall = np.concatenate(([0.0], rcs, [1.0]))
    # precision = np.concatenate(([0.0], prs, [0.0]))

    # for i in range(precision.size - 1, 0, -1):
    #     precision[i - 1] = max(precision[i - 1], precision[i])
    # i = np.where(recall[1:] != recall[:-1])[0]
    # print("AP is: ", np.sum((recall[i + 1] - recall[i]) * precision[i + 1]))
    # --------------


    # Our way of F^H and AP^H:
    # --------------
    rcs = list((tps / N)[:, 0])
    prs = list((tps / np.maximum(tps + fps, 1e-9))[:, 0])  # No sorting.

    print(
        "f measure is: ",
        (2 * np.array(prs) * np.array(rcs) / (np.array(prs) + np.array(rcs) + 1e-9)).max(),
    )

    recall = np.concatenate(([0.0], rcs[::-1], [1.0]))  # Reverse order.
    precision = np.concatenate(([0.0], prs[::-1], [0.0]))

    for i in range(precision.size - 1, 0, -1):
        precision[i - 1] = max(precision[i - 1], precision[i])
    i = np.where(recall[1:] != recall[:-1])[0]
    print("AP is: ", np.sum((recall[i + 1] - recall[i]) * precision[i + 1]))
    # --------------




    # Old way of visualization:
    # --------------
    # f = interpolate.interp1d(rcs, prs, kind="cubic", bounds_error=False)  # FIXME: May still have issue.
    # x = np.arange(0, 1, 0.01) * rcs[-1]
    # y = f(x)
    # plt.plot(x, y, linewidth=3, label="L-CNN")

    # f_scores = np.linspace(0.2, 0.8, num=8)
    # for f_score in f_scores:
    #     x = np.linspace(0.01, 1)
    #     y = f_score * x / (2 * x - f_score)
    #     l, = plt.plot(x[y >= 0], y[y >= 0], color="green", alpha=0.3)
    #     plt.annotate("f={0:0.1}".format(f_score), xy=(0.9, y[45] + 0.02), alpha=0.4)

    # plt.grid(True)
    # plt.axis([0.0, 1.0, 0.0, 1.0])
    # plt.xticks(np.arange(0, 1.0, step=0.1))
    # plt.xlabel("Recall")
    # plt.ylabel("Precision")
    # plt.yticks(np.arange(0, 1.0, step=0.1))
    # plt.legend(loc=3)
    # plt.title("PR Curve for APH")
    # plt.savefig("apH.pdf", format="pdf", bbox_inches="tight")
    # plt.savefig("apH.svg", format="svg", bbox_inches="tight")
    # plt.show()
    # --------------


    # Our way of visualization:
    # --------------
    reversed_rcs = rcs[::-1]
    reversed_prs = prs[::-1]
    filtered_rcs = []
    filtered_prs = []
    for i in range(len(reversed_rcs)):
        if reversed_rcs[i] == 0.0 and reversed_prs[i] == 0.0:  # Skip empty entries.
            continue
        filtered_rcs.append(reversed_rcs[i])
        filtered_prs.append(reversed_prs[i])

    f = interpolate.interp1d(filtered_rcs, filtered_prs, kind="cubic", bounds_error=False)  # FIXME: May still have issue.
    x = np.arange(0, 1, 0.01) * filtered_rcs[-1]
    y = f(x)
    plt.plot(x, y, linewidth=3, label="Current")

    f_scores = np.linspace(0.2, 0.8, num=8)
    for f_score in f_scores:
        x = np.linspace(0.01, 1)
        y = f_score * x / (2 * x - f_score)
        l, = plt.plot(x[y >= 0], y[y >= 0], color="green", alpha=0.3)
        plt.annotate("f={0:0.1}".format(f_score), xy=(0.9, y[45] + 0.02), alpha=0.4)

    plt.grid(True)
    plt.axis([0.0, 1.0, 0.0, 1.0])
    plt.xticks(np.arange(0, 1.0, step=0.1))
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.yticks(np.arange(0, 1.0, step=0.1))
    plt.legend(loc=3)
    plt.title("PR Curve for APH")
    # plt.savefig("apH.pdf", format="pdf", bbox_inches="tight")
    # plt.savefig("apH.svg", format="svg", bbox_inches="tight")
    plt.show()
    # --------------


if __name__ == "__main__":
    # import debugpy
    # print("Enabling attach starts.")
    # debugpy.listen(address=('0.0.0.0', 9310))
    # debugpy.wait_for_client()
    # print("Enabling attach ends.")
    
    plt.tight_layout()
    main()


================================================
FILE: evaluation/eval-fscore-wireframe.py
================================================
#!/usr/bin/env python3
"""Evaluate the F-score at thresholds 5, 10, 15 for LCNN
Usage:
    eval-fscore-wireframe.py <path>...
    eval-fscore-wireframe.py (-h | --help )

Examples:
    python eval-fscore-wireframe.py logs/*/npz/000*

Arguments:
    <path>                           One or more directories from train.py

Options:
   -h --help                         Show this screen.
"""

import os
import sys
import glob
import os.path as osp

import numpy as np
import scipy.io
import matplotlib as mpl
import matplotlib.pyplot as plt
from docopt import docopt

import lcnn.utils
import lcnn.metric

GT_val = "evaluation/data/wireframe/valid/*.npz"
GT_train = "evaluation/data/wireframe/train/*_0_label.npz"


def f_score(tp, fp):
    recall = tp  # tp is already cumulative TP normalized by the ground-truth count.
    precision = tp / np.maximum(tp + fp, 1e-9)

    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))

    Fscore = (2 * precision * recall / (precision + recall + 1e-9)).max()
    return Fscore
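The function above takes cumulative recall (`tp`) and cumulative normalized false positives (`fp`) and returns the best F-score found along the PR curve. A minimal self-contained sketch, with all values invented:

```python
import numpy as np

# Minimal sketch of the best-F computation (values invented): tp is the
# cumulative recall and fp the cumulative false positives, both already
# divided by the number of ground-truth lines.
def best_f_score(tp, fp):
    recall = tp
    precision = tp / np.maximum(tp + fp, 1e-9)
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))
    # Harmonic mean of precision and recall at every operating point; keep the best.
    return (2 * precision * recall / (precision + recall + 1e-9)).max()

print(best_f_score(np.array([0.25, 0.5, 0.5]), np.array([0.0, 0.0, 0.25])))
```

Here the best operating point is recall 0.5 at precision 1.0, giving F ≈ 2/3.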

def line_score(path, threshold=5):
    preds = sorted(glob.glob(path))
    
    if path.split('/')[-2].split('_')[1] == 'train':
        gts = sorted(glob.glob(GT_train))
        gts = gts[:501]
    else:
        gts = sorted(glob.glob(GT_val))
    n_gt = 0
    lcnn_tp, lcnn_fp, lcnn_scores = [], [], []
    for pred_name, gt_name in zip(preds, gts):
        with np.load(pred_name) as fpred:
            lcnn_line = fpred["lines"][:, :, :2]
            lcnn_score = fpred["score"]
        with np.load(gt_name) as fgt:
            gt_line = fgt["lpos"][:, :, :2]
        n_gt += len(gt_line)

        for i in range(len(lcnn_line)):
            if i > 0 and (lcnn_line[i] == lcnn_line[0]).all():
                lcnn_line = lcnn_line[:i]
                lcnn_score = lcnn_score[:i]
                break

        tp, fp = lcnn.metric.msTPFP(lcnn_line, gt_line, threshold)
        lcnn_tp.append(tp)
        lcnn_fp.append(fp)
        lcnn_scores.append(lcnn_score)

    lcnn_tp = np.concatenate(lcnn_tp)
    lcnn_fp = np.concatenate(lcnn_fp)
    lcnn_scores = np.concatenate(lcnn_scores)
    lcnn_index = np.argsort(-lcnn_scores)
    lcnn_tp = np.cumsum(lcnn_tp[lcnn_index]) / n_gt
    lcnn_fp = np.cumsum(lcnn_fp[lcnn_index]) / n_gt

    return f_score(lcnn_tp, lcnn_fp)


if __name__ == "__main__":
    args = docopt(__doc__)

    def work(path):
        print(f"Working on {path}")
        return [100 * line_score(f"{path}/*.npz", t) for t in [5, 10, 15]]

    dirs = sorted(sum([glob.glob(p) for p in args["<path>"]], []))
    results = lcnn.utils.parmap(work, dirs)

    for d, msAP in zip(dirs, results):
        print(f"{d}: {msAP[0]:2.1f} {msAP[1]:2.1f} {msAP[2]:2.1f}")


================================================
FILE: evaluation/eval-fscore-york.py
================================================
#!/usr/bin/env python3
"""Evaluate the F-score at thresholds 5, 10, 15 for LCNN
Usage:
    eval-fscore-york.py <path>...
    eval-fscore-york.py (-h | --help )

Examples:
    python eval-fscore-york.py logs/*/npz/000*

Arguments:
    <path>                           One or more directories from train.py

Options:
   -h --help                         Show this screen.
"""

import os
import sys
import glob
import os.path as osp

import numpy as np
import scipy.io
import matplotlib as mpl
import matplotlib.pyplot as plt
from docopt import docopt
import lcnn.utils
import lcnn.metric

GT = "evaluation/data/york/valid/*.npz"


def f_score(tp, fp):
    recall = tp  # tp is already cumulative TP normalized by the ground-truth count.
    precision = tp / np.maximum(tp + fp, 1e-9)

    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))

    Fscore = (2 * precision * recall / (precision + recall + 1e-9)).max()
    return Fscore

def line_score(path, threshold=5):
    preds = sorted(glob.glob(path))
    gts = sorted(glob.glob(GT))

    n_gt = 0
    lcnn_tp, lcnn_fp, lcnn_scores = [], [], []
    for pred_name, gt_name in zip(preds, gts):
        with np.load(pred_name) as fpred:
            lcnn_line = fpred["lines"][:, :, :2]
            lcnn_score = fpred["score"]
        with np.load(gt_name) as fgt:
            gt_line = fgt["lpos"][:, :, :2]
        n_gt += len(gt_line)

        # DETR NEEDS TO SORT CONF
        #score_idx = np.argsort(-lcnn_score)
        #lcnn_line = lcnn_line[score_idx]
        #lcnn_score = lcnn_score[score_idx]

        for i in range(len(lcnn_line)):
            if i > 0 and (lcnn_line[i] == lcnn_line[0]).all():
                lcnn_line = lcnn_line[:i]
                lcnn_score = lcnn_score[:i]
                break

        tp, fp = lcnn.metric.msTPFP(lcnn_line, gt_line, threshold)
        lcnn_tp.append(tp)
        lcnn_fp.append(fp)
        lcnn_scores.append(lcnn_score)

    lcnn_tp = np.concatenate(lcnn_tp)
    lcnn_fp = np.concatenate(lcnn_fp)
    lcnn_scores = np.concatenate(lcnn_scores)
    lcnn_index = np.argsort(-lcnn_scores)
    lcnn_tp = np.cumsum(lcnn_tp[lcnn_index]) / n_gt
    lcnn_fp = np.cumsum(lcnn_fp[lcnn_index]) / n_gt

    return f_score(lcnn_tp, lcnn_fp)


if __name__ == "__main__":
    args = docopt(__doc__)

    def work(path):
        print(f"Working on {path}")
        return [100 * line_score(f"{path}/*.npz", t) for t in [5, 10, 15]]

    dirs = sorted(sum([glob.glob(p) for p in args["<path>"]], []))
    results = lcnn.utils.parmap(work, dirs)

    for d, msAP in zip(dirs, results):
        print(f"{d}: {msAP[0]:2.1f} {msAP[1]:2.1f} {msAP[2]:2.1f}")


================================================
FILE: evaluation/eval-sAP-wireframe.py
================================================
#!/usr/bin/env python3
"""Evaluate sAP5, sAP10, sAP15 for LCNN
Usage:
    eval-sAP-wireframe.py <path>...
    eval-sAP-wireframe.py (-h | --help )

Examples:
    python eval-sAP-wireframe.py logs/*/npz/000*

Arguments:
    <path>                           One or more directories from train.py

Options:
   -h --help                         Show this screen.
"""

import os
import sys
import glob
import os.path as osp

import numpy as np
import scipy.io
import matplotlib as mpl
import matplotlib.pyplot as plt
from docopt import docopt

import lcnn.utils
import lcnn.metric

GT_val = "evaluation/data/wireframe/valid/*.npz"
GT_train = "evaluation/data/wireframe/train/*_0_label.npz"

def line_score(path, threshold=5):
    preds = sorted(glob.glob(path))

    if path.split('/')[-2].split('_')[1] == 'train':
        gts = sorted(glob.glob(GT_train))
        gts = gts[:501]
    else:
        gts = sorted(glob.glob(GT_val))



    n_gt = 0
    lcnn_tp, lcnn_fp, lcnn_scores = [], [], []
    for pred_name, gt_name in zip(preds, gts):
        with np.load(pred_name) as fpred:
            lcnn_line = fpred["lines"][:, :, :2]
            lcnn_score = fpred["score"]
        with np.load(gt_name) as fgt:
            gt_line = fgt["lpos"][:, :, :2]
        n_gt += len(gt_line)

        for i in range(len(lcnn_line)):
            if i > 0 and (lcnn_line[i] == lcnn_line[0]).all():
                lcnn_line = lcnn_line[:i]
                lcnn_score = lcnn_score[:i]
                break

        tp, fp = lcnn.metric.msTPFP(lcnn_line, gt_line, threshold)
        lcnn_tp.append(tp)
        lcnn_fp.append(fp)
        lcnn_scores.append(lcnn_score)

    lcnn_tp = np.concatenate(lcnn_tp)
    lcnn_fp = np.concatenate(lcnn_fp)
    lcnn_scores = np.concatenate(lcnn_scores)
    lcnn_index = np.argsort(-lcnn_scores)
    lcnn_tp = np.cumsum(lcnn_tp[lcnn_index]) / n_gt
    lcnn_fp = np.cumsum(lcnn_fp[lcnn_index]) / n_gt

    return lcnn.metric.ap(lcnn_tp, lcnn_fp)
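The pooling-and-ranking step above (sort all predictions across images by descending confidence, then accumulate TP/FP flags normalized by the ground-truth count before computing AP) can be illustrated with invented values:

```python
import numpy as np

# Toy sketch of the ranking step (values invented): predictions from all
# images are pooled, sorted by descending confidence, and TP/FP flags are
# accumulated and divided by the total number of ground-truth lines.
scores = np.array([0.9, 0.4, 0.7])
tp = np.array([1, 0, 1])      # per-prediction hit flags, as msTPFP would produce
fp = 1 - tp                   # everything that is not a hit is a false positive
n_gt = 4                      # total ground-truth lines across the toy set

order = np.argsort(-scores)               # highest confidence first
cum_tp = np.cumsum(tp[order]) / n_gt      # recall after each ranked prediction
cum_fp = np.cumsum(fp[order]) / n_gt      # normalized false positives
print(cum_tp.tolist(), cum_fp.tolist())
```

The resulting monotone `cum_tp`/`cum_fp` arrays are exactly what `lcnn.metric.ap` integrates.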


if __name__ == "__main__":
    args = docopt(__doc__)

    def work(path):
        print(f"Working on {path}")
        return [100 * line_score(f"{path}/*.npz", t) for t in [5, 10, 15]]

    dirs = sorted(sum([glob.glob(p) for p in args["<path>"]], []))
    results = lcnn.utils.parmap(work, dirs)

    for d, msAP in zip(dirs, results):
        print(f"{d}: {msAP[0]:2.1f} {msAP[1]:2.1f} {msAP[2]:2.1f}")


================================================
FILE: evaluation/eval-sAP-york.py
================================================
#!/usr/bin/env python3
"""Evaluate sAP5, sAP10, sAP15 for LCNN
Usage:
    eval-sAP-york.py <path>...
    eval-sAP-york.py (-h | --help )

Examples:
    python eval-sAP-york.py logs/*/npz/000*

Arguments:
    <path>                           One or more directories from train.py

Options:
   -h --help                         Show this screen.
"""

import os
import sys
import glob
import os.path as osp

import numpy as np
import scipy.io
import matplotlib as mpl
import matplotlib.pyplot as plt
from docopt import docopt

import lcnn.utils
import lcnn.metric

GT = "evaluation/data/york/valid/*.npz"


def line_score(path, threshold=5):
    preds = sorted(glob.glob(path))
    gts = sorted(glob.glob(GT))




    n_gt = 0
    lcnn_tp, lcnn_fp, lcnn_scores = [], [], []
    for pred_name, gt_name in zip(preds, gts):
        with np.load(pred_name) as fpred:
            lcnn_line = fpred["lines"][:, :, :2]
            lcnn_score = fpred["score"]
        with np.load(gt_name) as fgt:
            gt_line = fgt["lpos"][:, :, :2]
        n_gt += len(gt_line)

        # DETR NEEDS TO SORT CONF
        #score_idx = np.argsort(-lcnn_score)
        #lcnn_line = lcnn_line[score_idx]
        #lcnn_score = lcnn_score[score_idx]

        for i in range(len(lcnn_line)):
            if i > 0 and (lcnn_line[i] == lcnn_line[0]).all():
                lcnn_line = lcnn_line[:i]
                lcnn_score = lcnn_score[:i]
                break

        tp, fp = lcnn.metric.msTPFP(lcnn_line, gt_line, threshold)
        lcnn_tp.append(tp)
        lcnn_fp.append(fp)
        lcnn_scores.append(lcnn_score)

    lcnn_tp = np.concatenate(lcnn_tp)
    lcnn_fp = np.concatenate(lcnn_fp)
    lcnn_scores = np.concatenate(lcnn_scores)
    lcnn_index = np.argsort(-lcnn_scores)
    lcnn_tp = np.cumsum(lcnn_tp[lcnn_index]) / n_gt
    lcnn_fp = np.cumsum(lcnn_fp[lcnn_index]) / n_gt

    return lcnn.metric.ap(lcnn_tp, lcnn_fp)


if __name__ == "__main__":
    args = docopt(__doc__)

    def work(path):
        print(f"Working on {path}")
        return [100 * line_score(f"{path}/*.npz", t) for t in [5, 10, 15]]

    dirs = sorted(sum([glob.glob(p) for p in args["<path>"]], []))
    results = lcnn.utils.parmap(work, dirs)

    for d, msAP in zip(dirs, results):
        print(f"{d}: {msAP[0]:2.1f} {msAP[1]:2.1f} {msAP[2]:2.1f}")


================================================
FILE: evaluation/lcnn/__init__.py
================================================
import lcnn.models
import lcnn.trainer
import lcnn.datasets
import lcnn.config


================================================
FILE: evaluation/lcnn/box.py
================================================
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
#
# Copyright (c) 2017-2019 - Chris Griffith - MIT License
"""
Improved dictionary access through dot notation with additional tools.
"""
import string
import sys
import json
import re
import copy
from keyword import kwlist
import warnings

try:
    from collections.abc import Iterable, Mapping, Callable
except ImportError:
    from collections import Iterable, Mapping, Callable

yaml_support = True

try:
    import yaml
except ImportError:
    try:
        import ruamel.yaml as yaml
    except ImportError:
        yaml = None
        yaml_support = False

if sys.version_info >= (3, 0):
    basestring = str
else:
    from io import open

__all__ = ['Box', 'ConfigBox', 'BoxList', 'SBox',
           'BoxError', 'BoxKeyError']
__author__ = 'Chris Griffith'
__version__ = '3.2.4'

BOX_PARAMETERS = ('default_box', 'default_box_attr', 'conversion_box',
                  'frozen_box', 'camel_killer_box', 'box_it_up',
                  'box_safe_prefix', 'box_duplicates', 'ordered_box')

_first_cap_re = re.compile('(.)([A-Z][a-z]+)')
_all_cap_re = re.compile('([a-z0-9])([A-Z])')


class BoxError(Exception):
    """Non standard dictionary exceptions"""


class BoxKeyError(BoxError, KeyError, AttributeError):
    """Key does not exist"""


# Abstract converter functions for use in any Box class


def _to_json(obj, filename=None,
             encoding="utf-8", errors="strict", **json_kwargs):
    json_dump = json.dumps(obj,
                           ensure_ascii=False, **json_kwargs)
    if filename:
        with open(filename, 'w', encoding=encoding, errors=errors) as f:
            f.write(json_dump if sys.version_info >= (3, 0) else
                    json_dump.decode("utf-8"))
    else:
        return json_dump


def _from_json(json_string=None, filename=None,
               encoding="utf-8", errors="strict", multiline=False, **kwargs):
    if filename:
        with open(filename, 'r', encoding=encoding, errors=errors) as f:
            if multiline:
                data = [json.loads(line.strip(), **kwargs) for line in f
                        if line.strip() and not line.strip().startswith("#")]
            else:
                data = json.load(f, **kwargs)
    elif json_string:
        data = json.loads(json_string, **kwargs)
    else:
        raise BoxError('from_json requires a string or filename')
    return data


def _to_yaml(obj, filename=None, default_flow_style=False,
             encoding="utf-8", errors="strict",
             **yaml_kwargs):
    if filename:
        with open(filename, 'w',
                  encoding=encoding, errors=errors) as f:
            yaml.dump(obj, stream=f,
                      default_flow_style=default_flow_style,
                      **yaml_kwargs)
    else:
        return yaml.dump(obj,
                         default_flow_style=default_flow_style,
                         **yaml_kwargs)


def _from_yaml(yaml_string=None, filename=None,
               encoding="utf-8", errors="strict",
               **kwargs):
    if filename:
        with open(filename, 'r',
                  encoding=encoding, errors=errors) as f:
            data = yaml.load(f, **kwargs)
    elif yaml_string:
        data = yaml.load(yaml_string, **kwargs)
    else:
        raise BoxError('from_yaml requires a string or filename')
    return data


# Helper functions


def _safe_key(key):
    try:
        return str(key)
    except UnicodeEncodeError:
        return key.encode("utf-8", "ignore")


def _safe_attr(attr, camel_killer=False, replacement_char='x'):
    """Convert a key into something that is accessible as an attribute"""
    allowed = string.ascii_letters + string.digits + '_'

    attr = _safe_key(attr)

    if camel_killer:
        attr = _camel_killer(attr)

    attr = attr.replace(' ', '_')

    out = ''
    for character in attr:
        out += character if character in allowed else "_"
    out = out.strip("_")

    try:
        int(out[0])
    except (ValueError, IndexError):
        pass
    else:
        out = '{0}{1}'.format(replacement_char, out)

    if out in kwlist:
        out = '{0}{1}'.format(replacement_char, out)

    return re.sub('_+', '_', out)


def _camel_killer(attr):
    """
    CamelKiller, qu'est-ce que c'est?

    Taken from http://stackoverflow.com/a/1176023/3244542
    """
    try:
        attr = str(attr)
    except UnicodeEncodeError:
        attr = attr.encode("utf-8", "ignore")

    s1 = _first_cap_re.sub(r'\1_\2', attr)
    s2 = _all_cap_re.sub(r'\1_\2', s1)
    return re.sub('_+', '_', s2.casefold() if hasattr(s2, 'casefold') else
                  s2.lower())


def _recursive_tuples(iterable, box_class, recreate_tuples=False, **kwargs):
    out_list = []
    for i in iterable:
        if isinstance(i, dict):
            out_list.append(box_class(i, **kwargs))
        elif isinstance(i, list) or (recreate_tuples and isinstance(i, tuple)):
            out_list.append(_recursive_tuples(i, box_class,
                                              recreate_tuples, **kwargs))
        else:
            out_list.append(i)
    return tuple(out_list)


def _conversion_checks(item, keys, box_config, check_only=False,
                       pre_check=False):
    """
    Internal use for checking if a duplicate safe attribute already exists

    :param item: Item to see if a dup exists
    :param keys: Keys to check against
    :param box_config: Easier to pass in than ask for specfic items
    :param check_only: Don't bother doing the conversion work
    :param pre_check: Need to add the item to the list of keys to check
    :return: the original unmodified key, if exists and not check_only
    """
    if box_config['box_duplicates'] != 'ignore':
        if pre_check:
            keys = list(keys) + [item]

        key_list = [(k,
                     _safe_attr(k, camel_killer=box_config['camel_killer_box'],
                                replacement_char=box_config['box_safe_prefix']
                                )) for k in keys]
        if len(key_list) > len(set(x[1] for x in key_list)):
            seen = set()
            dups = set()
            for x in key_list:
                if x[1] in seen:
                    dups.add("{0}({1})".format(x[0], x[1]))
                seen.add(x[1])
            if box_config['box_duplicates'].startswith("warn"):
                warnings.warn('Duplicate conversion attributes exist: '
                              '{0}'.format(dups))
            else:
                raise BoxError('Duplicate conversion attributes exist: '
                               '{0}'.format(dups))
    if check_only:
        return
    # This way will be slower for warnings, as it will have double work
    # But faster for the default 'ignore'
    for k in keys:
        if item == _safe_attr(k, camel_killer=box_config['camel_killer_box'],
                              replacement_char=box_config['box_safe_prefix']):
            return k


def _get_box_config(cls, kwargs):
    return {
        # Internal use only
        '__converted': set(),
        '__box_heritage': kwargs.pop('__box_heritage', None),
        '__created': False,
        '__ordered_box_values': [],
        # Can be changed by user after box creation
        'default_box': kwargs.pop('default_box', False),
        'default_box_attr': kwargs.pop('default_box_attr', cls),
        'conversion_box': kwargs.pop('conversion_box', True),
        'box_safe_prefix': kwargs.pop('box_safe_prefix', 'x'),
        'frozen_box': kwargs.pop('frozen_box', False),
        'camel_killer_box': kwargs.pop('camel_killer_box', False),
        'modify_tuples_box': kwargs.pop('modify_tuples_box', False),
        'box_duplicates': kwargs.pop('box_duplicates', 'ignore'),
        'ordered_box': kwargs.pop('ordered_box', False)
    }


class Box(dict):
    """
    Improved dictionary access through dot notation with additional tools.

    :param default_box: Similar to defaultdict, return a default value
    :param default_box_attr: Specify the default replacement.
        WARNING: If this is not the default 'Box', it will not be recursive
    :param frozen_box: After creation, the box cannot be modified
    :param camel_killer_box: Convert CamelCase to snake_case
    :param conversion_box: Check for near matching keys as attributes
    :param modify_tuples_box: Recreate incoming tuples with dicts into Boxes
    :param box_it_up: Recursively create all Boxes from the start
    :param box_safe_prefix: Conversion box prefix for unsafe attributes
    :param box_duplicates: "ignore", "error" or "warn" when duplicates exists
        in a conversion_box
    :param ordered_box: Preserve the order of keys entered into the box
    """

    _protected_keys = dir({}) + ['to_dict', 'tree_view', 'to_json', 'to_yaml',
                                 'from_yaml', 'from_json']

    def __new__(cls, *args, **kwargs):
        """
        Due to the way pickling works in python 3, we need to make sure
        the box config is created as early as possible.
        """
        obj = super(Box, cls).__new__(cls, *args, **kwargs)
        obj._box_config = _get_box_config(cls, kwargs)
        return obj

    def __init__(self, *args, **kwargs):
        self._box_config = _get_box_config(self.__class__, kwargs)
        if self._box_config['ordered_box']:
            self._box_config['__ordered_box_values'] = []
        if (not self._box_config['conversion_box'] and
                self._box_config['box_duplicates'] != "ignore"):
            raise BoxError('box_duplicates are only for conversion_boxes')
        if len(args) == 1:
            if isinstance(args[0], basestring):
                raise ValueError('Cannot extrapolate Box from string')
            if isinstance(args[0], Mapping):
                for k, v in args[0].items():
                    if v is args[0]:
                        v = self
                    self[k] = v
                    self.__add_ordered(k)
            elif isinstance(args[0], Iterable):
                for k, v in args[0]:
                    self[k] = v
                    self.__add_ordered(k)

            else:
                raise ValueError('First argument must be mapping or iterable')
        elif args:
            raise TypeError('Box expected at most 1 argument, '
                            'got {0}'.format(len(args)))

        box_it = kwargs.pop('box_it_up', False)
        for k, v in kwargs.items():
            if args and isinstance(args[0], Mapping) and v is args[0]:
                v = self
            self[k] = v
            self.__add_ordered(k)

        if (self._box_config['frozen_box'] or box_it or
                self._box_config['box_duplicates'] != 'ignore'):
            self.box_it_up()

        self._box_config['__created'] = True

    def __add_ordered(self, key):
        if (self._box_config['ordered_box'] and
                key not in self._box_config['__ordered_box_values']):
            self._box_config['__ordered_box_values'].append(key)

    def box_it_up(self):
        """
        Perform value lookup for all items in current dictionary,
        generating all sub Box objects, while also running `box_it_up` on
        any of those sub box objects.
        """
        for k in self:
            _conversion_checks(k, self.keys(), self._box_config,
                               check_only=True)
            if self[k] is not self and hasattr(self[k], 'box_it_up'):
                self[k].box_it_up()

    def __hash__(self):
        if self._box_config['frozen_box']:
            hashing = 54321
            for item in self.items():
                hashing ^= hash(item)
            return hashing
        raise TypeError("unhashable type: 'Box'")

    def __dir__(self):
        allowed = string.ascii_letters + string.digits + '_'
        kill_camel = self._box_config['camel_killer_box']
        items = set(dir(dict) + ['to_dict', 'to_json',
                                 'from_json', 'box_it_up'])
        # Only show items accessible by dot notation
        for key in self.keys():
            key = _safe_key(key)
            if (' ' not in key and key[0] not in string.digits and
                    key not in kwlist):
                for letter in key:
                    if letter not in allowed:
                        break
                else:
                    items.add(key)

        for key in self.keys():
            key = _safe_key(key)
            if key not in items:
                if self._box_config['conversion_box']:
                    key = _safe_attr(key, camel_killer=kill_camel,
                                     replacement_char=self._box_config[
                                         'box_safe_prefix'])
                    if key:
                        items.add(key)
            if kill_camel:
                snake_key = _camel_killer(key)
                if snake_key:
                    items.remove(key)
                    items.add(snake_key)

        if yaml_support:
            items.add('to_yaml')
            items.add('from_yaml')

        return list(items)

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            if isinstance(default, dict) and not isinstance(default, Box):
                return Box(default)
            if isinstance(default, list) and not isinstance(default, BoxList):
                return BoxList(default)
            return default

    def copy(self):
        return self.__class__(super(self.__class__, self).copy())

    def __copy__(self):
        return self.__class__(super(self.__class__, self).copy())

    def __deepcopy__(self, memodict=None):
        out = self.__class__()
        memodict = memodict or {}
        memodict[id(self)] = out
        for k, v in self.items():
            out[copy.deepcopy(k, memodict)] = copy.deepcopy(v, memodict)
        return out

    def __setstate__(self, state):
        self._box_config = state['_box_config']
        self.__dict__.update(state)

    def __getitem__(self, item, _ignore_default=False):
        try:
            value = super(Box, self).__getitem__(item)
        except KeyError as err:
            if item == '_box_config':
                raise BoxKeyError('_box_config should only exist as an '
                                  'attribute and is never defaulted')
            if self._box_config['default_box'] and not _ignore_default:
                return self.__get_default(item)
            raise BoxKeyError(str(err))
        else:
            return self.__convert_and_store(item, value)

    def keys(self):
        if self._box_config['ordered_box']:
            return self._box_config['__ordered_box_values']
        return super(Box, self).keys()

    def values(self):
        return [self[x] for x in self.keys()]

    def items(self):
        return [(x, self[x]) for x in self.keys()]

    def __get_default(self, item):
        default_value = self._box_config['default_box_attr']
        if default_value is self.__class__:
            return self.__class__(__box_heritage=(self, item),
                                  **self.__box_config())
        elif isinstance(default_value, Callable):
            return default_value()
        elif hasattr(default_value, 'copy'):
            return default_value.copy()
        return default_value

    def __box_config(self):
        out = {}
        for k, v in self._box_config.copy().items():
            if not k.startswith("__"):
                out[k] = v
        return out

    def __convert_and_store(self, item, value):
        if item in self._box_config['__converted']:
            return value
        if isinstance(value, dict) and not isinstance(value, Box):
            value = self.__class__(value, __box_heritage=(self, item),
                                   **self.__box_config())
            self[item] = value
        elif isinstance(value, list) and not isinstance(value, BoxList):
            if self._box_config['frozen_box']:
                value = _recursive_tuples(value, self.__class__,
                                          recreate_tuples=self._box_config[
                                              'modify_tuples_box'],
                                          __box_heritage=(self, item),
                                          **self.__box_config())
            else:
                value = BoxList(value, __box_heritage=(self, item),
                                box_class=self.__class__,
                                **self.__box_config())
            self[item] = value
        elif (self._box_config['modify_tuples_box'] and
              isinstance(value, tuple)):
            value = _recursive_tuples(value, self.__class__,
                                      recreate_tuples=True,
                                      __box_heritage=(self, item),
                                      **self.__box_config())
            self[item] = value
        self._box_config['__converted'].add(item)
        return value

    def __create_lineage(self):
        if (self._box_config['__box_heritage'] and
                self._box_config['__created']):
            past, item = self._box_config['__box_heritage']
            if not past[item]:
                past[item] = self
            self._box_config['__box_heritage'] = None

    def __getattr__(self, item):
        try:
            try:
                value = self.__getitem__(item, _ignore_default=True)
            except KeyError:
                value = object.__getattribute__(self, item)
        except AttributeError as err:
            if item == "__getstate__":
                raise AttributeError(item)
            if item == '_box_config':
                raise BoxError('_box_config key must exist')
            kill_camel = self._box_config['camel_killer_box']
            if self._box_config['conversion_box'] and item:
                k = _conversion_checks(item, self.keys(), self._box_config)
                if k:
                    return self.__getitem__(k)
            if kill_camel:
                for k in self.keys():
                    if item == _camel_killer(k):
                        return self.__getitem__(k)
            if self._box_config['default_box']:
                return self.__get_default(item)
            raise BoxKeyError(str(err))
        else:
            if item == '_box_config':
                return value
            return self.__convert_and_store(item, value)

    def __setitem__(self, key, value):
        if (key != '_box_config' and self._box_config['__created'] and
                self._box_config['frozen_box']):
            raise BoxError('Box is frozen')
        if self._box_config['conversion_box']:
            _conversion_checks(key, self.keys(), self._box_config,
                               check_only=True, pre_check=True)
        super(Box, self).__setitem__(key, value)
        self.__add_ordered(key)
        self.__create_lineage()

    def __setattr__(self, key, value):
        if (key != '_box_config' and self._box_config['frozen_box'] and
                self._box_config['__created']):
            raise BoxError('Box is frozen')
        if key in self._protected_keys:
            raise AttributeError("Key name '{0}' is protected".format(key))
        if key == '_box_config':
            return object.__setattr__(self, key, value)
        try:
            object.__getattribute__(self, key)
        except (AttributeError, UnicodeEncodeError):
            if (key not in self.keys() and
                    (self._box_config['conversion_box'] or
                     self._box_config['camel_killer_box'])):
                if self._box_config['conversion_box']:
                    k = _conversion_checks(key, self.keys(),
                                           self._box_config)
                    self[key if not k else k] = value
                elif self._box_config['camel_killer_box']:
                    for each_key in self:
                        if key == _camel_killer(each_key):
                            self[each_key] = value
                            break
            else:
                self[key] = value
        else:
            object.__setattr__(self, key, value)
        self.__add_ordered(key)
        self.__create_lineage()

    def __delitem__(self, key):
        if self._box_config['frozen_box']:
            raise BoxError('Box is frozen')
        super(Box, self).__delitem__(key)
        if (self._box_config['ordered_box'] and
                key in self._box_config['__ordered_box_values']):
            self._box_config['__ordered_box_values'].remove(key)

    def __delattr__(self, item):
        if self._box_config['frozen_box']:
            raise BoxError('Box is frozen')
        if item == '_box_config':
            raise BoxError('"_box_config" is protected')
        if item in self._protected_keys:
            raise AttributeError("Key name '{0}' is protected".format(item))
        try:
            object.__getattribute__(self, item)
        except AttributeError:
            del self[item]
        else:
            object.__delattr__(self, item)
        if (self._box_config['ordered_box'] and
                item in self._box_config['__ordered_box_values']):
            self._box_config['__ordered_box_values'].remove(item)

    def pop(self, key, *args):
        if args:
            if len(args) != 1:
                raise BoxError('pop() takes only one optional'
                               ' argument "default"')
            try:
                item = self[key]
            except KeyError:
                return args[0]
            else:
                del self[key]
                return item
        try:
            item = self[key]
        except KeyError:
            raise BoxKeyError('{0}'.format(key))
        else:
            del self[key]
            return item

    def clear(self):
        self._box_config['__ordered_box_values'] = []
        super(Box, self).clear()

    def popitem(self):
        try:
            key = next(self.__iter__())
        except StopIteration:
            raise BoxKeyError('Empty box')
        return key, self.pop(key)

    def __repr__(self):
        return '<Box: {0}>'.format(str(self.to_dict()))

    def __str__(self):
        return str(self.to_dict())

    def __iter__(self):
        for key in self.keys():
            yield key

    def __reversed__(self):
        for key in reversed(list(self.keys())):
            yield key

    def to_dict(self):
        """
        Turn the Box and sub Boxes back into a native
        python dictionary.

        :return: python dictionary of this Box
        """
        out_dict = dict(self)
        for k, v in out_dict.items():
            if v is self:
                out_dict[k] = out_dict
            elif hasattr(v, 'to_dict'):
                out_dict[k] = v.to_dict()
            elif hasattr(v, 'to_list'):
                out_dict[k] = v.to_list()
        return out_dict

    def update(self, item=None, **kwargs):
        if not item:
            item = kwargs
        iter_over = item.items() if hasattr(item, 'items') else item
        for k, v in iter_over:
            if isinstance(v, dict):
                # Box objects must be created in case they are already
                # in the `converted` box_config set
                v = self.__class__(v)
                if k in self and isinstance(self[k], dict):
                    self[k].update(v)
                    continue
            if isinstance(v, list):
                v = BoxList(v)
            try:
                self.__setattr__(k, v)
            except (AttributeError, TypeError):
                self.__setitem__(k, v)

    def setdefault(self, item, default=None):
        if item in self:
            return self[item]

        if isinstance(default, dict):
            default = self.__class__(default)
        if isinstance(default, list):
            default = BoxList(default)
        self[item] = default
        return default

    def to_json(self, filename=None,
                encoding="utf-8", errors="strict", **json_kwargs):
        """
        Transform the Box object into a JSON string.

        :param filename: If provided will save to file
        :param encoding: File encoding
        :param errors: How to handle encoding errors
        :param json_kwargs: additional arguments to pass to json.dump(s)
        :return: string of JSON or return of `json.dump`
        """
        return _to_json(self.to_dict(), filename=filename,
                        encoding=encoding, errors=errors, **json_kwargs)

    @classmethod
    def from_json(cls, json_string=None, filename=None,
                  encoding="utf-8", errors="strict", **kwargs):
        """
        Transform a json object string into a Box object. If the incoming
        json is a list, you must use BoxList.from_json.

        :param json_string: string to pass to `json.loads`
        :param filename: filename to open and pass to `json.load`
        :param encoding: File encoding
        :param errors: How to handle encoding errors
        :param kwargs: parameters to pass to `Box()` or `json.loads`
        :return: Box object from json data
        """
        bx_args = {}
        for arg in kwargs.copy():
            if arg in BOX_PARAMETERS:
                bx_args[arg] = kwargs.pop(arg)

        data = _from_json(json_string, filename=filename,
                          encoding=encoding, errors=errors, **kwargs)

        if not isinstance(data, dict):
            raise BoxError('json data not returned as a dictionary, '
                           'but rather a {0}'.format(type(data).__name__))
        return cls(data, **bx_args)

    if yaml_support:
        def to_yaml(self, filename=None, default_flow_style=False,
                    encoding="utf-8", errors="strict",
                    **yaml_kwargs):
            """
            Transform the Box object into a YAML string.

            :param filename:  If provided will save to file
            :param default_flow_style: False will recursively dump dicts
            :param encoding: File encoding
            :param errors: How to handle encoding errors
            :param yaml_kwargs: additional arguments to pass to yaml.dump
            :return: string of YAML or return of `yaml.dump`
            """
            return _to_yaml(self.to_dict(), filename=filename,
                            default_flow_style=default_flow_style,
                            encoding=encoding, errors=errors, **yaml_kwargs)

        @classmethod
        def from_yaml(cls, yaml_string=None, filename=None,
                      encoding="utf-8", errors="strict",
                      loader=yaml.SafeLoader, **kwargs):
            """
            Transform a yaml object string into a Box object.

            :param yaml_string: string to pass to `yaml.load`
            :param filename: filename to open and pass to `yaml.load`
            :param encoding: File encoding
            :param errors: How to handle encoding errors
            :param loader: YAML Loader, defaults to SafeLoader
            :param kwargs: parameters to pass to `Box()` or `yaml.load`
            :return: Box object from yaml data
            """
            bx_args = {}
            for arg in kwargs.copy():
                if arg in BOX_PARAMETERS:
                    bx_args[arg] = kwargs.pop(arg)

            data = _from_yaml(yaml_string=yaml_string, filename=filename,
                              encoding=encoding, errors=errors,
                              Loader=loader, **kwargs)
            if not isinstance(data, dict):
                raise BoxError('yaml data not returned as a dictionary, '
                               'but rather a {0}'.format(type(data).__name__))
            return cls(data, **bx_args)

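The `Box` class above layers attribute access, lazy conversion, and result caching on top of `dict`. A minimal standalone sketch of just that core pattern (the class name `DotDict` is hypothetical; the real `Box` additionally handles safe-key conversion, freezing, ordering, and default boxes):

```python
# Minimal sketch of the dot-access pattern Box implements: a dict subclass
# whose __getattr__ falls back to item lookup, wrapping nested dicts on
# first access and caching the wrapped value (cf. __convert_and_store).
class DotDict(dict):
    def __getattr__(self, item):
        try:
            value = self[item]
        except KeyError:
            raise AttributeError(item)
        if isinstance(value, dict) and not isinstance(value, DotDict):
            value = DotDict(value)
            self[item] = value  # cache the converted sub-dict
        return value

    def __setattr__(self, key, value):
        self[key] = value


d = DotDict({"model": {"depth": 4}})
print(d.model.depth)        # 4
d.model.width = 8
print(d["model"]["width"])  # 8
```

Because the converted sub-dict is written back into the parent, repeated attribute access returns the same object, which is also why `Box` tracks already-converted keys in its `__converted` set.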

class BoxList(list):
    """
    Drop-in replacement for list that converts added objects to Box or
    BoxList objects as necessary.
    """

    def __init__(self, iterable=None, box_class=Box, **box_options):
        self.box_class = box_class
        self.box_options = box_options
        self.box_org_ref = id(iterable) if iterable else 0
        if iterable:
            for x in iterable:
                self.append(x)
        if box_options.get('frozen_box'):
            def frozen(*args, **kwargs):
                raise BoxError('BoxList is frozen')

            for method in ['append', 'extend', 'insert', 'pop',
                           'remove', 'reverse', 'sort']:
                self.__setattr__(method, frozen)

    def __delitem__(self, key):
        if self.box_options.get('frozen_box'):
            raise BoxError('BoxList is frozen')
        super(BoxList, self).__delitem__(key)

    def __setitem__(self, key, value):
        if self.box_options.get('frozen_box'):
            raise BoxError('BoxList is frozen')
        super(BoxList, self).__setitem__(key, value)

    def append(self, p_object):
        if isinstance(p_object, dict):
            try:
                p_object = self.box_class(p_object, **self.box_options)
            except AttributeError as err:
                if 'box_class' in self.__dict__:
                    raise err
        elif isinstance(p_object, list):
            try:
                p_object = (self if id(p_object) == self.box_org_ref else
                            BoxList(p_object))
            except AttributeError as err:
                if 'box_org_ref' in self.__dict__:
                    raise err
        super(BoxList, self).append(p_object)

    def extend(self, iterable):
        for item in iterable:
            self.append(item)

    def insert(self, index, p_object):
        if isinstance(p_object, dict):
            p_object = self.box_class(p_object, **self.box_options)
        elif isinstance(p_object, list):
            p_object = (self if id(p_object) == self.box_org_ref else
                        BoxList(p_object))
        super(BoxList, self).insert(index, p_object)

    def __repr__(self):
        return "<BoxList: {0}>".format(self.to_list())

    def __str__(self):
        return str(self.to_list())

    def __copy__(self):
        return BoxList((x for x in self),
                       self.box_class,
                       **self.box_options)

    def __deepcopy__(self, memodict=None):
        out = self.__class__()
        memodict = memodict or {}
        memodict[id(self)] = out
        for k in self:
            out.append(copy.deepcopy(k, memodict))
        return out

    def __hash__(self):
        if self.box_options.get('frozen_box'):
            hashing = 98765
            hashing ^= hash(tuple(self))
            return hashing
        raise TypeError("unhashable type: 'BoxList'")

    def to_list(self):
        new_list = []
        for x in self:
            if x is self:
                new_list.append(new_list)
            elif isinstance(x, Box):
                new_list.append(x.to_dict())
            elif isinstance(x, BoxList):
                new_list.append(x.to_list())
            else:
                new_list.append(x)
        return new_list

    def to_json(self, filename=None,
                encoding="utf-8", errors="strict",
                multiline=False, **json_kwargs):
        """
        Transform the BoxList object into a JSON string.

        :param filename: If provided will save to file
        :param encoding: File encoding
        :param errors: How to handle encoding errors
        :param multiline: Put each item in the list onto its own line
        :param json_kwargs: additional arguments to pass to json.dump(s)
        :return: string of JSON or return of `json.dump`
        """
        if filename and multiline:
            lines = [_to_json(item, filename=False, encoding=encoding,
                              errors=errors, **json_kwargs) for item in self]
            with open(filename, 'w', encoding=encoding, errors=errors) as f:
                f.write("\n".join(lines).decode('utf-8') if
                        sys.version_info < (3, 0) else "\n".join(lines))
        else:
            return _to_json(self.to_list(), filename=filename,
                            encoding=encoding, errors=errors, **json_kwargs)

    @classmethod
    def from_json(cls, json_string=None, filename=None, encoding="utf-8",
                  errors="strict", multiline=False, **kwargs):
        """
        Transform a json object string into a BoxList object. If the incoming
        json is a dict, you must use Box.from_json.

        :param json_string: string to pass to `json.loads`
        :param filename: filename to open and pass to `json.load`
        :param encoding: File encoding
        :param errors: How to handle encoding errors
        :param multiline: One object per line
        :param kwargs: parameters to pass to `Box()` or `json.loads`
        :return: BoxList object from json data
        """
        bx_args = {}
        for arg in kwargs.copy():
            if arg in BOX_PARAMETERS:
                bx_args[arg] = kwargs.pop(arg)

        data = _from_json(json_string, filename=filename, encoding=encoding,
                          errors=errors, multiline=multiline, **kwargs)

        if not isinstance(data, list):
            raise BoxError('json data not returned as a list, '
                           'but rather a {0}'.format(type(data).__name__))
        return cls(data, **bx_args)

    if yaml_support:
        def to_yaml(self, filename=None, default_flow_style=False,
                    encoding="utf-8", errors="strict",
                    **yaml_kwargs):
            """
            Transform the BoxList object into a YAML string.

            :param filename:  If provided will save to file
            :param default_flow_style: False will recursively dump dicts
            :param encoding: File encoding
            :param errors: How to handle encoding errors
            :param yaml_kwargs: additional arguments to pass to yaml.dump
            :return: string of YAML or return of `yaml.dump`
            """
            return _to_yaml(self.to_list(), filename=filename,
                            default_flow_style=default_flow_style,
                            encoding=encoding, errors=errors, **yaml_kwargs)

        @classmethod
        def from_yaml(cls, yaml_string=None, filename=None,
                      encoding="utf-8", errors="strict",
                      loader=yaml.SafeLoader,
                      **kwargs):
            """
            Transform a yaml object string into a BoxList object.

            :param yaml_string: string to pass to `yaml.load`
            :param filename: filename to open and pass to `yaml.load`
            :param encoding: File encoding
            :param errors: How to handle encoding errors
            :param loader: YAML Loader, defaults to SafeLoader
            :param kwargs: parameters to pass to `BoxList()` or `yaml.load`
            :return: BoxList object from yaml data
            """
            bx_args = {}
            for arg in kwargs.copy():
                if arg in BOX_PARAMETERS:
                    bx_args[arg] = kwargs.pop(arg)

            data = _from_yaml(yaml_string=yaml_string, filename=filename,
                              encoding=encoding, errors=errors,
                              Loader=loader, **kwargs)
            if not isinstance(data, list):
                raise BoxError('yaml data not returned as a list, '
                               'but rather a {0}'.format(type(data).__name__))
            return cls(data, **bx_args)

    def box_it_up(self):
        for v in self:
            if hasattr(v, 'box_it_up') and v is not self:
                v.box_it_up()

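`BoxList.append` above converts incoming dicts and lists on the way in, so nested access stays consistent no matter how items arrive. A plain-Python sketch of that conversion-on-append idea (the class name `AutoList` is hypothetical, and it only converts nested lists; the real `BoxList` also wraps dicts into `Box` and guards against self-referential lists via `box_org_ref`):

```python
# Sketch of BoxList's conversion-on-append: a list subclass that routes
# all construction through append(), wrapping nested lists recursively.
class AutoList(list):
    def __init__(self, iterable=None):
        super().__init__()
        for x in (iterable or []):
            self.append(x)

    def append(self, item):
        if isinstance(item, list) and not isinstance(item, AutoList):
            item = AutoList(item)  # recursively wrap nested lists
        super().append(item)


lst = AutoList([[1, 2], 3])
print(type(lst[0]).__name__)  # AutoList
```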

class ConfigBox(Box):
    """
    Modified box object to add object transforms.

    Allows for built-in transforms like:

    cns = ConfigBox(my_bool='yes', my_int='5', my_list='5,4,3,3,2')

    cns.bool('my_bool') # True
    cns.int('my_int') # 5
    cns.list('my_list', mod=lambda x: int(x)) # [5, 4, 3, 3, 2]
    """

    _protected_keys = dir({}) + ['to_dict', 'bool', 'int', 'float',
                                 'list', 'getboolean', 'to_json', 'to_yaml',
                                 'getfloat', 'getint',
                                 'from_json', 'from_yaml']

    def __getattr__(self, item):
        """Config file keys are stored in lower case, be a little more
        loosey goosey"""
        try:
            return super(ConfigBox, self).__getattr__(item)
        except AttributeError:
            return super(ConfigBox, self).__getattr__(item.lower())

    def __dir__(self):
        return super(ConfigBox, self).__dir__() + ['bool', 'int', 'float',
                                                   'list', 'getboolean',
                                                   'getfloat', 'getint']

    def bool(self, item, default=None):
        """ Return value of key as a boolean

        :param item: key of value to transform
        :param default: value to return if item does not exist
        :return: approximated bool of value
        """
        try:
            item = self.__getattr__(item)
        except AttributeError as err:
            if default is not None:
                return default
            raise err

        if isinstance(item, (bool, int)):
            return bool(item)

        if (isinstance(item, str) and
                item.lower() in ('n', 'no', 'false', 'f', '0')):
            return False

        return bool(item)

    def int(self, item, default=None):
        """ Return value of key as an int

        :param item: key of value to transform
        :param default: value to return if item does not exist
        :return: int of value
        """
        try:
            item = self.__getattr__(item)
        except AttributeError as err:
            if default is not None:
                return default
            raise err
        return int(item)

    def float(self, item, default=None):
        """ Return value of key as a float

        :param item: key of value to transform
        :param default: value to return if item does not exist
        :return: float of value
        """
        try:
            item = self.__getattr__(item)
        except AttributeError as err:
            if default is not None:
                return default
            raise err
        return float(item)

    def list(self, item, default=None, spliter=",", strip=True, mod=None):
        """ Return value of key as a list

        :param item: key of value to transform
        :param mod: function to map against list
        :param default: value to return if item does not exist
        :param spliter: character to split str on
        :param strip: strip surrounding brackets and item whitespace
        :return: list of items
        """
        try:
            item = self.__getattr__(item)
        except AttributeError as err:
            if default is not None:
                return default
            raise err
        if strip:
            item = item.lstrip('[').rstrip(']')
        out = [x.strip() if strip else x for x in item.split(spliter)]
        if mod:
            return list(map(mod, out))
        return out

    # loose configparser compatibility

    def getboolean(self, item, default=None):
        return self.bool(item, default)

    def getint(self, item, default=None):
        return self.int(item, default)

    def getfloat(self, item, default=None):
        return self.float(item, default)

    def __repr__(self):
        return '<ConfigBox: {0}>'.format(str(self.to_dict()))

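The `list()` transform above splits a config string, optionally strips brackets and whitespace, then maps a function over the parts. A standalone sketch of that pipeline (the helper name `coerce_list` is hypothetical; it mirrors the `spliter`/`strip`/`mod` parameters of `ConfigBox.list`):

```python
# Sketch of ConfigBox.list's string-to-list coercion pipeline:
# strip optional brackets, split on a delimiter, strip each part,
# then optionally map a conversion function over the result.
def coerce_list(raw, spliter=",", strip=True, mod=None):
    if strip:
        raw = raw.lstrip('[').rstrip(']')
    out = [x.strip() if strip else x for x in raw.split(spliter)]
    return list(map(mod, out)) if mod else out


print(coerce_list("5, 4, 3", mod=int))  # [5, 4, 3]
print(coerce_list("[a, b]"))            # ['a', 'b']
```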

class SBox(Box):
    """
    ShorthandBox (SBox) allows for property access of `dict`, `json`,
    and `yaml`.
    """
    _protected_keys = dir({}) + ['to_dict', 'tree_view', 'to_json', 'to_yaml',
                                 'json', 'yaml', 'from_yaml', 'from_json',
                                 'dict']

    @property
    def dict(self):
        return self.to_dict()

    @property
    def json(self):
        return self.to_json()

    if yaml_support:
        @property
        def yaml(self):
            return self.to_yaml()

    def __repr__(self):
        return '<ShorthandBox: {0}>'.format(str(self.to_dict()))


================================================
FILE: evaluation/lcnn/config.py
================================================
import numpy as np

from lcnn.box import Box

# C is a dict storing all the configuration
C = Box()

# shortcut for C.model
M = Box()


================================================
FILE: evaluation/lcnn/datasets.py
================================================
import glob
import json
import math
import os
import random

import numpy as np
import numpy.linalg as LA
import torch
from skimage import io
from torch.utils.data import Dataset
from torch.utils.data.dataloader import default_collate

from lcnn.config import M


class WireframeDataset(Dataset):
    def __init__(self, rootdir, split):
        self.rootdir = rootdir
        filelist = glob.glob(f"{rootdir}/{split}/*_label.npz")
        filelist.sort()

        print(f"n{split}:", len(filelist))
        self.split = split
        self.filelist = filelist

    def __len__(self):
        return len(self.filelist)

    def __getitem__(self, idx):
        iname = self.filelist[idx][:-10].replace("_a0", "").replace("_a1", "") + ".png"
        image = io.imread(iname).astype(float)[:, :, :3]
        if "a1" in self.filelist[idx]:
            image = image[:, ::-1, :]
        image = (image - M.image.mean) / M.image.stddev
        image = np.rollaxis(image, 2).copy()

        # npz["jmap"]: [J, H, W]    Junction heat map
        # npz["joff"]: [J, 2, H, W] Junction offset within each pixel
        # npz["lmap"]: [H, W]       Line heat map with anti-aliasing
        # npz["junc"]: [Na, 3]      Junction coordinates
        # npz["Lpos"]: [M, 2]       Positive lines represented with junction indices
        # npz["Lneg"]: [M, 2]       Negative lines represented with junction indices
        # npz["lpos"]: [Np, 2, 3]   Positive lines represented with junction coordinates
        # npz["lneg"]: [Nn, 2, 3]   Negative lines represented with junction coordinates
        #
        # For junc, lpos, and lneg that stores the junction coordinates, the last
        # dimension is (y, x, t), where t represents the type of that junction.
        with np.load(self.filelist[idx]) as npz:
            target = {
                name: torch.from_numpy(npz[name]).float()
                for name in ["jmap", "joff", "lmap"]
            }
            lpos = np.random.permutation(npz["lpos"])[: M.n_stc_posl]
            lneg = np.random.permutation(npz["lneg"])[: M.n_stc_negl]
            npos, nneg = len(lpos), len(lneg)
            lpre = np.concatenate([lpos, lneg], 0)
            for i in range(len(lpre)):
                if random.random() > 0.5:
                    lpre[i] = lpre[i, ::-1]
            ldir = lpre[:, 0, :2] - lpre[:, 1, :2]
            ldir /= np.clip(LA.norm(ldir, axis=1, keepdims=True), 1e-6, None)
            feat = [
                lpre[:, :, :2].reshape(-1, 4) / 128 * M.use_cood,
                ldir * M.use_slop,
                lpre[:, :, 2],
            ]
            feat = np.concatenate(feat, 1)
            meta = {
                "junc": torch.from_numpy(npz["junc"][:, :2]),
                "jtyp": torch.from_numpy(npz["junc"][:, 2]).byte(),
                "Lpos": self.adjacency_matrix(len(npz["junc"]), npz["Lpos"]),
                "Lneg": self.adjacency_matrix(len(npz["junc"]), npz["Lneg"]),
                "lpre": torch.from_numpy(lpre[:, :, :2]),
                "lpre_label": torch.cat([torch.ones(npos), torch.zeros(nneg)]),
                "lpre_feat": torch.from_numpy(feat),
            }

        return torch.from_numpy(image).float(), meta, target

    def adjacency_matrix(self, n, link):
        mat = torch.zeros(n + 1, n + 1, dtype=torch.uint8)
        link = torch.from_numpy(link)
        if len(link) > 0:
            mat[link[:, 0], link[:, 1]] = 1
            mat[link[:, 1], link[:, 0]] = 1
        return mat


def collate(batch):
    return (
        default_collate([b[0] for b in batch]),
        [b[1] for b in batch],
        default_collate([b[2] for b in batch]),
    )
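`adjacency_matrix` above allocates an `(n + 1) x (n + 1)` matrix so that index `n` can serve as a "no match" sentinel: downstream code (e.g. `sample_lines` in `line_vectorizer.py`) assigns unmatched junctions the index `N`, and the extra row/column keeps that lookup a valid index. A minimal NumPy sketch of the same construction (illustrative, not part of the repo):

```python
import numpy as np

def adjacency_matrix(n, link):
    # (n + 1) x (n + 1) so that index n is a valid "no match" sentinel
    # row/column; the matrix is filled symmetrically from the link list.
    mat = np.zeros((n + 1, n + 1), dtype=np.uint8)
    if len(link) > 0:
        link = np.asarray(link)
        mat[link[:, 0], link[:, 1]] = 1
        mat[link[:, 1], link[:, 0]] = 1
    return mat

# three junctions, one line connecting junction 0 and junction 2
A = adjacency_matrix(3, [[0, 2]])
```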


================================================
FILE: evaluation/lcnn/metric.py
================================================
import numpy as np
import numpy.linalg as LA
import matplotlib.pyplot as plt

from lcnn.utils import argsort2d

DX = [0, 0, 1, -1, 1, 1, -1, -1]
DY = [1, -1, 0, 0, 1, -1, 1, -1]


def ap(tp, fp):
    # `tp` and `fp` are cumulative counts already normalized by the number
    # of ground-truth items, so `tp` is directly the recall at each rank.
    recall = tp
    precision = tp / np.maximum(tp + fp, 1e-9)

    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))

    for i in range(precision.size - 1, 0, -1):
        precision[i - 1] = max(precision[i - 1], precision[i])
    i = np.where(recall[1:] != recall[:-1])[0]
    return np.sum((recall[i + 1] - recall[i]) * precision[i + 1])

def fscore(tp, fp):
    recall = tp
    precision = tp / np.maximum(tp + fp, 1e-9)

    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))

    return (2 * np.array(precision) * np.array(recall) / (1e-9 + np.array(precision) + np.array(recall))).max()
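The `ap` helper implements standard interpolated average precision: it expects `tp`/`fp` as cumulative counts divided by the number of ground-truth items, flattens precision into a non-increasing envelope, and integrates it over the recall steps. A self-contained worked example (the tiny inputs are made up for illustration):

```python
import numpy as np

def ap(tp, fp):
    # tp/fp are cumulative counts divided by the number of ground-truth
    # items, so tp is directly the recall at each ranked detection.
    recall = tp
    precision = tp / np.maximum(tp + fp, 1e-9)
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))
    # enforce a monotonically non-increasing precision envelope
    for i in range(precision.size - 1, 0, -1):
        precision[i - 1] = max(precision[i - 1], precision[i])
    i = np.where(recall[1:] != recall[:-1])[0]
    return np.sum((recall[i + 1] - recall[i]) * precision[i + 1])

# 2 GT lines; ranked detections: hit, miss, hit
tp = np.cumsum([1, 0, 1]) / 2   # recall -> [0.5, 0.5, 1.0]
fp = np.cumsum([0, 1, 0]) / 2
score = ap(tp, fp)              # 0.5 * 1.0 + 0.5 * (2/3)
```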

def APJ(vert_pred, vert_gt, max_distance, im_ids):
    if len(vert_pred) == 0:
        return 0

    vert_pred = np.array(vert_pred)
    vert_gt = np.array(vert_gt)

    confidence = vert_pred[:, -1]
    idx = np.argsort(-confidence)
    vert_pred = vert_pred[idx, :]
    im_ids = im_ids[idx]
    n_gt = sum(len(gt) for gt in vert_gt)

    nd = len(im_ids)
    tp, fp = np.zeros(nd, dtype=float), np.zeros(nd, dtype=float)
    hit = [[False for _ in j] for j in vert_gt]

    for i in range(nd):
        gt_juns = vert_gt[im_ids[i]]
        pred_juns = vert_pred[i][:-1]
        if len(gt_juns) == 0:
            continue
        dists = np.linalg.norm((pred_juns[None, :] - gt_juns), axis=1)
        choice = np.argmin(dists)
        dist = np.min(dists)
        if dist < max_distance and not hit[im_ids[i]][choice]:
            tp[i] = 1
            hit[im_ids[i]][choice] = True
        else:
            fp[i] = 1

    tp = np.cumsum(tp) / n_gt
    fp = np.cumsum(fp) / n_gt
    return ap(tp, fp)


def nms_j(heatmap, delta=1):
    heatmap = heatmap.copy()
    disable = np.zeros_like(heatmap, dtype=bool)
    for x, y in argsort2d(heatmap):
        for dx, dy in zip(DX, DY):
            xp, yp = x + dx, y + dy
            if not (0 <= xp < heatmap.shape[0] and 0 <= yp < heatmap.shape[1]):
                continue
            if heatmap[x, y] >= heatmap[xp, yp]:
                disable[xp, yp] = True
    heatmap[disable] *= 0.6
    return heatmap
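`nms_j` damps (multiplies by 0.6 rather than zeroes) every pixel that is not a strict local maximum of its 8-neighborhood, so near-duplicate junction peaks fall below the true peak without vanishing. A vectorized NumPy sketch of the same rule (the repo's version iterates pixels via `argsort2d`; ties damp both pixels in either formulation):

```python
import numpy as np

def nms_j(heatmap, damp=0.6):
    # Damp every pixel for which some 8-neighbor is >= it, i.e. every
    # pixel that is not a strict local maximum of its neighborhood.
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    disable = np.zeros_like(heatmap, dtype=bool)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            neighbor = padded[1 + dx : 1 + dx + h, 1 + dy : 1 + dy + w]
            disable |= neighbor >= heatmap
    out = heatmap.copy()
    out[disable] *= damp
    return out

hm = np.array([[0.1, 0.2, 0.1],
               [0.2, 0.9, 0.2],
               [0.1, 0.2, 0.1]])
out = nms_j(hm)  # only the 0.9 peak survives undamped
```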


def mAPJ(pred, truth, distances, im_ids):
    return sum(APJ(pred, truth, d, im_ids) for d in distances) / len(distances) * 100


def post_jheatmap(heatmap, offset=None, delta=1):
    heatmap = nms_j(heatmap, delta=delta)
    # only select the best 1000 junctions for efficiency
    v0 = argsort2d(-heatmap)[:1000]
    confidence = -np.sort(-heatmap.ravel())[:1000]
    keep_id = np.where(confidence >= 1e-2)[0]
    if len(keep_id) == 0:
        return np.zeros((0, 3))

    confidence = confidence[keep_id]
    if offset is not None:
        v0 = np.array([v + offset[:, v[0], v[1]] for v in v0])
    v0 = v0[keep_id] + 0.5
    v0 = np.hstack((v0, confidence[:, np.newaxis]))
    return v0


def vectorized_wireframe_2d_metric(
    vert_pred, dpth_pred, edge_pred, vert_gt, dpth_gt, edge_gt, threshold
):
    # stage 1: matching
    nd = len(vert_pred)
    sorted_confidence = np.argsort(-vert_pred[:, -1])
    vert_pred = vert_pred[sorted_confidence, :-1]
    dpth_pred = dpth_pred[sorted_confidence]
    d = np.sqrt(
        np.sum(vert_pred ** 2, 1)[:, None]
        + np.sum(vert_gt ** 2, 1)[None, :]
        - 2 * vert_pred @ vert_gt.T
    )
    choice = np.argmin(d, 1)
    dist = np.min(d, 1)

    # stage 2: compute depth metrics: SIL / L1 / L2
    loss_L1 = loss_L2 = 0
    hit = np.zeros_like(dpth_gt, dtype=bool)
    SIL = np.zeros(len(dpth_pred))
    for i in range(nd):
        if dist[i] < threshold and not hit[choice[i]]:
            hit[choice[i]] = True
            loss_L1 += abs(dpth_gt[choice[i]] - dpth_pred[i])
            loss_L2 += (dpth_gt[choice[i]] - dpth_pred[i]) ** 2
            a = np.maximum(-dpth_pred[i], 1e-10)
            b = -dpth_gt[choice[i]]
            SIL[i] = np.log(a) - np.log(b)
        else:
            choice[i] = -1

    n = max(np.sum(hit), 1)
    loss_L1 /= n
    loss_L2 /= n
    loss_SIL = np.sum(SIL ** 2) / n - np.sum(SIL) ** 2 / (n * n)

    # stage 3: compute mAP for edge matching
    edgeset = set([frozenset(e) for e in edge_gt])
    tp = np.zeros(len(edge_pred), dtype=float)
    fp = np.zeros(len(edge_pred), dtype=float)
    for i, (v0, v1, score) in enumerate(sorted(edge_pred, key=lambda e: -e[2])):
        length = LA.norm(vert_gt[v0] - vert_gt[v1])
        if frozenset([choice[v0], choice[v1]]) in edgeset:
            tp[i] = length
        else:
            fp[i] = length
    total_length = LA.norm(
        vert_gt[edge_gt[:, 0]] - vert_gt[edge_gt[:, 1]], axis=1
    ).sum()
    return ap(tp / total_length, fp / total_length), (loss_SIL, loss_L1, loss_L2)
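Both wireframe metric functions build the prediction-to-GT distance matrix with the quadratic expansion `||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b` instead of materializing a `(pred, gt, dim)` difference tensor. A quick sanity check of that identity (with a small clamp to guard the sqrt against rounding, which the repo omits):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 2))   # e.g. predicted vertices
b = rng.normal(size=(7, 2))   # e.g. ground-truth vertices

# expansion: ||a_i - b_j||^2 = ||a_i||^2 + ||b_j||^2 - 2 a_i . b_j
sq = np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :] - 2 * a @ b.T
d = np.sqrt(np.maximum(sq, 0))  # clamp tiny negative rounding residue

# direct broadcasting computation for comparison
d_direct = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
```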


def vectorized_wireframe_3d_metric(
    vert_pred, dpth_pred, edge_pred, vert_gt, dpth_gt, edge_gt, threshold
):
    # stage 1: matching
    nd = len(vert_pred)
    sorted_confidence = np.argsort(-vert_pred[:, -1])
    vert_pred = np.hstack([vert_pred[:, :-1], dpth_pred[:, None]])[sorted_confidence]
    vert_gt = np.hstack([vert_gt[:, :-1], dpth_gt[:, None]])
    d = np.sqrt(
        np.sum(vert_pred ** 2, 1)[:, None]
        + np.sum(vert_gt ** 2, 1)[None, :]
        - 2 * vert_pred @ vert_gt.T
    )
    choice = np.argmin(d, 1)
    dist = np.min(d, 1)
    hit = np.zeros_like(dpth_gt, dtype=bool)
    for i in range(nd):
        if dist[i] < threshold and not hit[choice[i]]:
            hit[choice[i]] = True
        else:
            choice[i] = -1

    # stage 2: compute mAP for edge matching
    edgeset = set([frozenset(e) for e in edge_gt])
    tp = np.zeros(len(edge_pred), dtype=float)
    fp = np.zeros(len(edge_pred), dtype=float)
    for i, (v0, v1, score) in enumerate(sorted(edge_pred, key=lambda e: -e[2])):
        length = LA.norm(vert_gt[v0] - vert_gt[v1])
        if frozenset([choice[v0], choice[v1]]) in edgeset:
            tp[i] = length
        else:
            fp[i] = length
    total_length = LA.norm(
        vert_gt[edge_gt[:, 0]] - vert_gt[edge_gt[:, 1]], axis=1
    ).sum()

    return ap(tp / total_length, fp / total_length)


def msTPFP(line_pred, line_gt, threshold):
    diff = ((line_pred[:, None, :, None] - line_gt[:, None]) ** 2).sum(-1)
    diff = np.minimum(
        diff[:, :, 0, 0] + diff[:, :, 1, 1], diff[:, :, 0, 1] + diff[:, :, 1, 0]
    )
    choice = np.argmin(diff, 1)
    dist = np.min(diff, 1)
    hit = np.zeros(len(line_gt), dtype=bool)
    tp = np.zeros(len(line_pred), dtype=float)
    fp = np.zeros(len(line_pred), dtype=float)
    for i in range(len(line_pred)):
        if dist[i] < threshold and not hit[choice[i]]:
            hit[choice[i]] = True
            tp[i] = 1
        else:
            fp[i] = 1
    return tp, fp


def msAP(line_pred, line_gt, threshold):
    tp, fp = msTPFP(line_pred, line_gt, threshold)
    tp = np.cumsum(tp) / len(line_gt)
    fp = np.cumsum(fp) / len(line_gt)
    return ap(tp, fp)
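`msTPFP` matches each predicted line to its nearest ground-truth line by the sum of squared endpoint distances, taking the minimum over the two endpoint orderings so a reversed line still matches, and greedily claims each GT line at most once. A self-contained sketch of the same matching, with a reversed-endpoint example:

```python
import numpy as np

def ms_tpfp(line_pred, line_gt, threshold):
    # lines are [N, 2, 2] endpoint arrays; diff becomes
    # [pred, gt, pred_endpoint, gt_endpoint] squared distances.
    diff = ((line_pred[:, None, :, None] - line_gt[:, None]) ** 2).sum(-1)
    # min over direct vs. swapped endpoint pairing
    diff = np.minimum(
        diff[:, :, 0, 0] + diff[:, :, 1, 1], diff[:, :, 0, 1] + diff[:, :, 1, 0]
    )
    choice = np.argmin(diff, 1)
    dist = np.min(diff, 1)
    hit = np.zeros(len(line_gt), dtype=bool)
    tp = np.zeros(len(line_pred), dtype=float)
    fp = np.zeros(len(line_pred), dtype=float)
    for i in range(len(line_pred)):
        if dist[i] < threshold and not hit[choice[i]]:
            hit[choice[i]] = True  # each GT line is claimed once
            tp[i] = 1
        else:
            fp[i] = 1
    return tp, fp

gt = np.array([[[0.0, 0.0], [10.0, 0.0]]])   # one GT line
pred = np.array([
    [[10.0, 0.0], [0.0, 0.0]],               # same line, endpoints reversed
    [[0.0, 5.0], [10.0, 5.0]],               # parallel line far away
])
tp, fp = ms_tpfp(pred, gt, threshold=1.0)
```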


================================================
FILE: evaluation/lcnn/models/__init__.py
================================================
# flake8: noqa
from .hourglass_pose import hg
# from .dla import dla169, dla102x, dla102x2


================================================
FILE: evaluation/lcnn/models/hourglass_pose.py
================================================
"""
Hourglass network inserted in the pre-activated Resnet
Use lr=0.01 for current version
(c) Yichao Zhou (LCNN)
(c) YANG, Wei
"""
import torch
import torch.nn as nn
import torch.nn.functional as F

__all__ = ["HourglassNet", "hg"]


class Bottleneck2D(nn.Module):
    expansion = 2

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck2D, self).__init__()

        self.bn1 = nn.BatchNorm2d(inplanes)
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1)
        self.bn3 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 2, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.bn1(x)
        out = self.relu(out)
        out = self.conv1(out)

        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv2(out)

        out = self.bn3(out)
        out = self.relu(out)
        out = self.conv3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual

        return out


class Hourglass(nn.Module):
    def __init__(self, block, num_blocks, planes, depth):
        super(Hourglass, self).__init__()
        self.depth = depth
        self.block = block
        self.hg = self._make_hour_glass(block, num_blocks, planes, depth)

    def _make_residual(self, block, num_blocks, planes):
        layers = []
        for i in range(0, num_blocks):
            layers.append(block(planes * block.expansion, planes))
        return nn.Sequential(*layers)

    def _make_hour_glass(self, block, num_blocks, planes, depth):
        hg = []
        for i in range(depth):
            res = []
            for j in range(3):
                res.append(self._make_residual(block, num_blocks, planes))
            if i == 0:
                res.append(self._make_residual(block, num_blocks, planes))
            hg.append(nn.ModuleList(res))
        return nn.ModuleList(hg)

    def _hour_glass_forward(self, n, x):
        up1 = self.hg[n - 1][0](x)
        low1 = F.max_pool2d(x, 2, stride=2)
        low1 = self.hg[n - 1][1](low1)

        if n > 1:
            low2 = self._hour_glass_forward(n - 1, low1)
        else:
            low2 = self.hg[n - 1][3](low1)
        low3 = self.hg[n - 1][2](low2)
        up2 = F.interpolate(low3, scale_factor=2)
        out = up1 + up2
        return out

    def forward(self, x):
        return self._hour_glass_forward(self.depth, x)


class HourglassNet(nn.Module):
    """Hourglass model from Newell et al ECCV 2016"""

    def __init__(self, block, head, depth, num_stacks, num_blocks, num_classes):
        super(HourglassNet, self).__init__()

        self.inplanes = 64
        self.num_feats = 128
        self.num_stacks = num_stacks
        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3)
        self.bn1 = nn.BatchNorm2d(self.inplanes)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self._make_residual(block, self.inplanes, 1)
        self.layer2 = self._make_residual(block, self.inplanes, 1)
        self.layer3 = self._make_residual(block, self.num_feats, 1)
        self.maxpool = nn.MaxPool2d(2, stride=2)

        # build hourglass modules
        ch = self.num_feats * block.expansion
        # vpts = []
        hg, res, fc, score, fc_, score_ = [], [], [], [], [], []
        for i in range(num_stacks):
            hg.append(Hourglass(block, num_blocks, self.num_feats, depth))
            res.append(self._make_residual(block, self.num_feats, num_blocks))
            fc.append(self._make_fc(ch, ch))
            score.append(head(ch, num_classes))
            # vpts.append(VptsHead(ch))
            # vpts.append(nn.Linear(ch, 9))
            # score.append(nn.Conv2d(ch, num_classes, kernel_size=1))
            # score[i].bias.data[0] += 4.6
            # score[i].bias.data[2] += 4.6
            if i < num_stacks - 1:
                fc_.append(nn.Conv2d(ch, ch, kernel_size=1))
                score_.append(nn.Conv2d(num_classes, ch, kernel_size=1))
        self.hg = nn.ModuleList(hg)
        self.res = nn.ModuleList(res)
        self.fc = nn.ModuleList(fc)
        self.score = nn.ModuleList(score)
        # self.vpts = nn.ModuleList(vpts)
        self.fc_ = nn.ModuleList(fc_)
        self.score_ = nn.ModuleList(score_)

    def _make_residual(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(
                    self.inplanes,
                    planes * block.expansion,
                    kernel_size=1,
                    stride=stride,
                )
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def _make_fc(self, inplanes, outplanes):
        bn = nn.BatchNorm2d(inplanes)
        conv = nn.Conv2d(inplanes, outplanes, kernel_size=1)
        return nn.Sequential(conv, bn, self.relu)

    def forward(self, x):
        out = []
        # out_vps = []
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)

        x = self.layer1(x)
        x = self.maxpool(x)
        x = self.layer2(x)
        x = self.layer3(x)

        for i in range(self.num_stacks):
            y = self.hg[i](x)
            y = self.res[i](y)
            y = self.fc[i](y)
            score = self.score[i](y)
            # pre_vpts = F.adaptive_avg_pool2d(x, (1, 1))
            # pre_vpts = pre_vpts.reshape(-1, 256)
            # vpts = self.vpts[i](x)
            out.append(score)
            # out_vps.append(vpts)
            if i < self.num_stacks - 1:
                fc_ = self.fc_[i](y)
                score_ = self.score_[i](score)
                x = x + fc_ + score_

        return out[::-1], y  # , out_vps[::-1]


def hg(**kwargs):
    model = HourglassNet(
        Bottleneck2D,
        head=kwargs.get("head", lambda c_in, c_out: nn.Conv2d(c_in, c_out, 1)),
        depth=kwargs["depth"],
        num_stacks=kwargs["num_stacks"],
        num_blocks=kwargs["num_blocks"],
        num_classes=kwargs["num_classes"],
    )
    return model


================================================
FILE: evaluation/lcnn/models/line_vectorizer.py
================================================
import itertools
import random
from collections import defaultdict

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

from lcnn.config import M

FEATURE_DIM = 8


class LineVectorizer(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

        lambda_ = torch.linspace(0, 1, M.n_pts0)[:, None]
        self.register_buffer("lambda_", lambda_)
        self.do_static_sampling = M.n_stc_posl + M.n_stc_negl > 0

        self.fc1 = nn.Conv2d(256, M.dim_loi, 1)
        scale_factor = M.n_pts0 // M.n_pts1
        if M.use_conv:
            self.pooling = nn.Sequential(
                nn.MaxPool1d(scale_factor, scale_factor),
                Bottleneck1D(M.dim_loi, M.dim_loi),
            )
            self.fc2 = nn.Sequential(
                nn.ReLU(inplace=True), nn.Linear(M.dim_loi * M.n_pts1 + FEATURE_DIM, 1)
            )
        else:
            self.pooling = nn.MaxPool1d(scale_factor, scale_factor)
            self.fc2 = nn.Sequential(
                nn.Linear(M.dim_loi * M.n_pts1 + FEATURE_DIM, M.dim_fc),
                nn.ReLU(inplace=True),
                nn.Linear(M.dim_fc, M.dim_fc),
                nn.ReLU(inplace=True),
                nn.Linear(M.dim_fc, 1),
            )
        self.loss = nn.BCEWithLogitsLoss(reduction="none")

    def forward(self, input_dict):
        result = self.backbone(input_dict)
        h = result["preds"]
        x = self.fc1(result["feature"])
        n_batch, n_channel, row, col = x.shape

        xs, ys, fs, ps, idx, jcs = [], [], [], [], [0], []
        for i, meta in enumerate(input_dict["meta"]):
            p, label, feat, jc = self.sample_lines(
                meta, h["jmap"][i], h["joff"][i], input_dict["mode"]
            )
            # print("p.shape:", p.shape)
            ys.append(label)
            if input_dict["mode"] == "training" and self.do_static_sampling:
                p = torch.cat([p, meta["lpre"]])
                feat = torch.cat([feat, meta["lpre_feat"]])
                ys.append(meta["lpre_label"])
                del jc
            else:
                jcs.append(jc)
                ps.append(p)
            fs.append(feat)

            p = p[:, 0:1, :] * self.lambda_ + p[:, 1:2, :] * (1 - self.lambda_) - 0.5
            p = p.reshape(-1, 2)  # [N_LINE x N_POINT, 2_XY]
            px, py = p[:, 0].contiguous(), p[:, 1].contiguous()
            px0 = px.floor().clamp(min=0, max=127)
            py0 = py.floor().clamp(min=0, max=127)
            px1 = (px0 + 1).clamp(min=0, max=127)
            py1 = (py0 + 1).clamp(min=0, max=127)
            px0l, py0l, px1l, py1l = px0.long(), py0.long(), px1.long(), py1.long()

            # xp: [N_LINE, N_CHANNEL, N_POINT]
            xp = (
                (
                    x[i, :, px0l, py0l] * (px1 - px) * (py1 - py)
                    + x[i, :, px1l, py0l] * (px - px0) * (py1 - py)
                    + x[i, :, px0l, py1l] * (px1 - px) * (py - py0)
                    + x[i, :, px1l, py1l] * (px - px0) * (py - py0)
                )
                .reshape(n_channel, -1, M.n_pts0)
                .permute(1, 0, 2)
            )
            xp = self.pooling(xp)
            xs.append(xp)
            idx.append(idx[-1] + xp.shape[0])

        x, y = torch.cat(xs), torch.cat(ys)
        f = torch.cat(fs)
        x = x.reshape(-1, M.n_pts1 * M.dim_loi)
        x = torch.cat([x, f], 1)
        x = self.fc2(x).flatten()

        if input_dict["mode"] != "training":
            p = torch.cat(ps)
            s = torch.sigmoid(x)
            b = s > 0.5
            lines = []
            score = []
            for i in range(n_batch):
                p0 = p[idx[i] : idx[i + 1]]
                s0 = s[idx[i] : idx[i + 1]]
                mask = b[idx[i] : idx[i + 1]]
                p0 = p0[mask]
                s0 = s0[mask]
                if len(p0) == 0:
                    lines.append(torch.zeros([1, M.n_out_line, 2, 2], device=p.device))
                    score.append(torch.zeros([1, M.n_out_line], device=p.device))
                else:
                    arg = torch.argsort(s0, descending=True)
                    p0, s0 = p0[arg], s0[arg]
                    lines.append(p0[None, torch.arange(M.n_out_line) % len(p0)])
                    score.append(s0[None, torch.arange(M.n_out_line) % len(s0)])
                for j in range(len(jcs[i])):
                    if len(jcs[i][j]) == 0:
                        jcs[i][j] = torch.zeros([M.n_out_junc, 2], device=p.device)
                    jcs[i][j] = jcs[i][j][
                        None, torch.arange(M.n_out_junc) % len(jcs[i][j])
                    ]
            result["preds"]["lines"] = torch.cat(lines)
            result["preds"]["score"] = torch.cat(score)
            result["preds"]["juncs"] = torch.cat([jcs[i][0] for i in range(n_batch)])
            if len(jcs[i]) > 1:
                result["preds"]["junts"] = torch.cat(
                    [jcs[i][1] for i in range(n_batch)]
                )

        if input_dict["mode"] != "testing":
            y = torch.cat(ys)
            loss = self.loss(x, y)
            lpos_mask, lneg_mask = y, 1 - y
            loss_lpos, loss_lneg = loss * lpos_mask, loss * lneg_mask

            def sum_batch(x):
                xs = [x[idx[i] : idx[i + 1]].sum()[None] for i in range(n_batch)]
                return torch.cat(xs)

            lpos = sum_batch(loss_lpos) / sum_batch(lpos_mask).clamp(min=1)
            lneg = sum_batch(loss_lneg) / sum_batch(lneg_mask).clamp(min=1)
            result["losses"][0]["lpos"] = lpos * M.loss_weight["lpos"]
            result["losses"][0]["lneg"] = lneg * M.loss_weight["lneg"]

        if input_dict["mode"] == "training":
            del result["preds"]

        return result

    def sample_lines(self, meta, jmap, joff, mode):
        with torch.no_grad():
            junc = meta["junc"]  # [N, 2]
            jtyp = meta["jtyp"]  # [N]
            Lpos = meta["Lpos"]
            Lneg = meta["Lneg"]

            n_type = jmap.shape[0]
            jmap = non_maximum_suppression(jmap).reshape(n_type, -1)
            joff = joff.reshape(n_type, 2, -1)
            max_K = M.n_dyn_junc // n_type
            N = len(junc)
            if mode != "training":
                K = min(int((jmap > M.eval_junc_thres).float().sum().item()), max_K)
            else:
                K = min(int(N * 2 + 2), max_K)
            if K < 2:
                K = 2
            device = jmap.device

            # index: [N_TYPE, K]
            score, index = torch.topk(jmap, k=K)
            y = (index // 128).float() + torch.gather(joff[:, 0], 1, index) + 0.5
            x = (index % 128).float() + torch.gather(joff[:, 1], 1, index) + 0.5

            # xy: [N_TYPE, K, 2]
            xy = torch.cat([y[..., None], x[..., None]], dim=-1)
            xy_ = xy[..., None, :]
            del x, y, index

            # dist: [N_TYPE, K, N]
            dist = torch.sum((xy_ - junc) ** 2, -1)
            cost, match = torch.min(dist, -1)

            # xy: [N_TYPE * K, 2]
            # match: [N_TYPE, K]
            for t in range(n_type):
                match[t, jtyp[match[t]] != t] = N
            match[cost > 1.5 * 1.5] = N
            match = match.flatten()

            _ = torch.arange(n_type * K, device=device)
            u, v = torch.meshgrid(_, _)
            u, v = u.flatten(), v.flatten()
            up, vp = match[u], match[v]
            label = Lpos[up, vp]

            if mode == "training":
                c = torch.zeros_like(label, dtype=torch.bool)

                # sample positive lines
                cdx = label.nonzero().flatten()
                if len(cdx) > M.n_dyn_posl:
                    # print("too many positive lines")
                    perm = torch.randperm(len(cdx), device=device)[: M.n_dyn_posl]
                    cdx = cdx[perm]
                c[cdx] = 1

                # sample negative lines
                cdx = Lneg[up, vp].nonzero().flatten()
                if len(cdx) > M.n_dyn_negl:
                    # print("too many negative lines")
                    perm = torch.randperm(len(cdx), device=device)[: M.n_dyn_negl]
                    cdx = cdx[perm]
                c[cdx] = 1

                # sample other (unmatched) lines
                cdx = torch.randint(len(c), (M.n_dyn_othr,), device=device)
                c[cdx] = 1
            else:
                c = (u < v).flatten()

            # sample lines
            u, v, label = u[c], v[c], label[c]
            xy = xy.reshape(n_type * K, 2)
            xyu, xyv = xy[u], xy[v]

            u2v = xyu - xyv
            u2v /= torch.sqrt((u2v ** 2).sum(-1, keepdim=True)).clamp(min=1e-6)
            feat = torch.cat(
                [
                    xyu / 128 * M.use_cood,
                    xyv / 128 * M.use_cood,
                    u2v * M.use_slop,
                    (u[:, None] > K).float(),
                    (v[:, None] > K).float(),
                ],
                1,
            )
            line = torch.cat([xyu[:, None], xyv[:, None]], 1)

            xy = xy.reshape(n_type, K, 2)
            jcs = [xy[i, score[i] > 0.03] for i in range(n_type)]
            return line, label.float(), feat, jcs


def non_maximum_suppression(a):
    ap = F.max_pool2d(a, 3, stride=1, padding=1)
    mask = (a == ap).float().clamp(min=0.0)
    return a * mask
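`non_maximum_suppression` keeps a heatmap pixel only if it equals the maximum of its 3x3 neighborhood, computed via `F.max_pool2d(a, 3, stride=1, padding=1)`. An equivalent NumPy rendering for reference outside PyTorch (single-channel 2-D input assumed):

```python
import numpy as np

def non_maximum_suppression(a):
    # 3x3 max filter with stride 1; pixels below their local max are zeroed.
    h, w = a.shape
    padded = np.pad(a, 1, constant_values=-np.inf)
    pooled = np.full_like(a, -np.inf)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            pooled = np.maximum(
                pooled, padded[1 + dx : 1 + dx + h, 1 + dy : 1 + dy + w]
            )
    return a * (a == pooled)

hm = np.array([[0.1, 0.3, 0.1],
               [0.3, 0.9, 0.3],
               [0.1, 0.3, 0.1]])
out = non_maximum_suppression(hm)  # only the 0.9 peak survives
```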


class Bottleneck1D(nn.Module):
    def __init__(self, inplanes, outplanes):
        super(Bottleneck1D, self).__init__()

        planes = outplanes // 2
        self.op = nn.Sequential(
            nn.BatchNorm1d(inplanes),
            nn.ReLU(inplace=True),
            nn.Conv1d(inplanes, planes, kernel_size=1),
            nn.BatchNorm1d(planes),
            nn.ReLU(inplace=True),
            nn.Conv1d(planes, planes, kernel_size=3, padding=1),
            nn.BatchNorm1d(planes),
            nn.ReLU(inplace=True),
            nn.Conv1d(planes, outplanes, kernel_size=1),
        )

    def forward(self, x):
        return x + self.op(x)


================================================
FILE: evaluation/lcnn/models/multitask_learner.py
================================================
from collections import OrderedDict, defaultdict

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

from lcnn.config import M


class MultitaskHead(nn.Module):
    def __init__(self, input_channels, num_class):
        super(MultitaskHead, self).__init__()

        m = int(input_channels / 4)
        heads = []
        for output_channels in sum(M.head_size, []):
            heads.append(
                nn.Sequential(
                    nn.Conv2d(input_channels, m, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(m, output_channels, kernel_size=1),
                )
            )
        self.heads = nn.ModuleList(heads)
        assert num_class == sum(sum(M.head_size, []))

    def forward(self, x):
        return torch.cat([head(x) for head in self.heads], dim=1)


class MultitaskLearner(nn.Module):
    def __init__(self, backbone):
        super(MultitaskLearner, self).__init__()
        self.backbone = backbone
        head_size = M.head_size
        self.num_class = sum(sum(head_size, []))
        self.head_off = np.cumsum([sum(h) for h in head_size])

    def forward(self, input_dict):
        image = input_dict["image"]
        outputs, feature = self.backbone(image)
        result = {"feature": feature}
        batch, channel, row, col = outputs[0].shape

        T = input_dict["target"].copy()
        n_jtyp = T["jmap"].shape[1]

        # switch to CNHW
        for task in ["jmap"]:
            T[task] = T[task].permute(1, 0, 2, 3)
        for task in ["joff"]:
            T[task] = T[task].permute(1, 2, 0, 3, 4)

        offset = self.head_off
        loss_weight = M.loss_weight
        losses = []
        for stack, output in enumerate(outputs):
            output = output.transpose(0, 1).reshape([-1, batch, row, col]).contiguous()
            jmap = output[0 : offset[0]].reshape(n_jtyp, 2, batch, row, col)
            lmap = output[offset[0] : offset[1]].squeeze(0)
            joff = output[offset[1] : offset[2]].reshape(n_jtyp, 2, batch, row, col)
            if stack == 0:
                result["preds"] = {
                    "jmap": jmap.permute(2, 0, 1, 3, 4).softmax(2)[:, :, 1],
                    "lmap": lmap.sigmoid(),
                    "joff": joff.permute(2, 0, 1, 3, 4).sigmoid() - 0.5,
                }
                if input_dict["mode"] == "testing":
                    return result

            L = OrderedDict()
            L["jmap"] = sum(
                cross_entropy_loss(jmap[i], T["jmap"][i]) for i in range(n_jtyp)
            )
            L["lmap"] = (
                F.binary_cross_entropy_with_logits(lmap, T["lmap"], reduction="none")
                .mean(2)
                .mean(1)
            )
            L["joff"] = sum(
                sigmoid_l1_loss(joff[i, j], T["joff"][i, j], -0.5, T["jmap"][i])
                for i in range(n_jtyp)
                for j in range(2)
            )
            for loss_name in L:
                L[loss_name].mul_(loss_weight[loss_name])
            losses.append(L)
        result["losses"] = losses
        return result


def l2loss(input, target):
    return ((target - input) ** 2).mean(2).mean(1)


def cross_entropy_loss(logits, positive):
    nlogp = -F.log_softmax(logits, dim=0)
    return (positive * nlogp[1] + (1 - positive) * nlogp[0]).mean(2).mean(1)


def sigmoid_l1_loss(logits, target, offset=0.0, mask=None):
    logp = torch.sigmoid(logits) + offset
    loss = torch.abs(logp - target)
    if mask is not None:
        w = mask.mean(2, True).mean(1, True)
        w[w == 0] = 1
        loss = loss * (mask / w)

    return loss.mean(2).mean(1)
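A standalone NumPy sketch of `sigmoid_l1_loss`, mirroring the torch version above (shapes assumed `[batch, H, W]`; the repo passes `offset=-0.5` so predicted junction offsets live in (-0.5, 0.5)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_l1_loss(logits, target, offset=0.0, mask=None):
    # Squash logits into (0, 1), shift by `offset`, take an L1 loss;
    # if a mask is given, weight by the mask normalized per sample so
    # sparse masks do not shrink the loss magnitude.
    logp = sigmoid(logits) + offset
    loss = np.abs(logp - target)
    if mask is not None:
        w = mask.mean(axis=(1, 2), keepdims=True)
        w[w == 0] = 1
        loss = loss * (mask / w)
    return loss.mean(axis=(1, 2))

# zero logits -> sigmoid 0.5 -> shifted to 0.0; loss is mean |target|
logits = np.zeros((1, 2, 2))
target = np.array([[[0.25, -0.25], [0.0, 0.5]]])
out = sigmoid_l1_loss(logits, target, offset=-0.5)
```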


================================================
FILE: evaluation/lcnn/postprocess.py
================================================
import numpy as np


def pline(x1, y1, x2, y2, x, y):
    px = x2 - x1
    py = y2 - y1
    dd = px * px + py * py
    u = ((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd))
    dx = x1 + u * px - x
    dy = y1 + u * py - y
    return dx * dx + dy * dy


def psegment(x1, y1, x2, y2, x, y):
    px = x2 - x1
    py = y2 - y1
    dd = px * px + py * py
    u = max(min(((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd)), 1), 0)
    dx = x1 + u * px - x
    dy = y1 + u * py - y
    return dx * dx + dy * dy


def plambda(x1, y1, x2, y2, x, y):
    px = x2 - x1
    py = y2 - y1
    dd = px * px + py * py
    return ((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd))
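The three helpers above compute, respectively, the squared distance from a point to the *infinite* line (`pline`), the squared distance to the *segment* with the projection parameter clamped to [0, 1] (`psegment`), and the raw projection parameter lambda (`plambda`). A quick standalone check, re-defining the first two locally (with a zero-length-segment guard in `psegment` as well, matching `pline`):

```python
def pline(x1, y1, x2, y2, x, y):
    # squared distance from (x, y) to the infinite line through
    # (x1, y1)-(x2, y2): project onto the line, measure the residual.
    px, py = x2 - x1, y2 - y1
    dd = px * px + py * py
    u = ((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd))
    dx, dy = x1 + u * px - x, y1 + u * py - y
    return dx * dx + dy * dy

def psegment(x1, y1, x2, y2, x, y):
    # same, but clamp the projection parameter to [0, 1] so the distance
    # is measured to the segment, not the infinite line.
    px, py = x2 - x1, y2 - y1
    dd = px * px + py * py
    u = max(min(((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd)), 1), 0)
    dx, dy = x1 + u * px - x, y1 + u * py - y
    return dx * dx + dy * dy

# point past the segment end: line distance stays 1, segment distance grows
d_line = pline(0, 0, 1, 0, 2, 1)    # perpendicular distance 1 -> squared 1.0
d_seg = psegment(0, 0, 1, 0, 2, 1)  # nearest segment point is (1, 0) -> 2.0
```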


def postprocess(lines, scores, threshold=0.01, tol=1e9, do_clip=False):
    nlines, nscores = [], []
    for (p, q), score in zip(lines, scores):
        start, end = 0, 1
        for a, b in nlines:  # nlines: Selected lines.
            if (
                min(
                    max(pline(*p, *q, *a), pline(*p, *q, *b)),
                    max(pline(*a, *b, *p), pline(*a, *b, *q)),
                )
                > threshold ** 2
            ):
                continue
            lambda_a = plambda(*p, *q, *a)
            lambda_b = plambda(*p, *q, *b)
            if lambda_a > lambda_b:
                lambda_a, lambda_b = lambda_b, lambda_a
            lambda_a -= tol
            lambda_b += tol

            # case 1: skip (if not do_clip)
            if start < lambda_a and lambda_b < end:
                continue

            # not intersect
            if lambda_b < start or lambda_a > end:
                continue

            # cover
            if lambda_a <= start and end <= lambda_b:
                start = 10
                break

            # case 2 & 3:
            if lambda_a <= start and start <= lambda_b:
                start = lambda_b
            if lambda_a <= end and end <= lambda_b:
                end = lambda_a

            if start >= end:
                break

        if start >= end:
            continue
        nlines.append(np.array([p + (q - p) * start, p + (q - p) * end]))
        nscores.append(score)
    return np.array(nlines), np.array(nscores)
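
The three helpers above implement the point-to-line geometry that `postprocess` uses to suppress near-duplicate detections: a candidate is dropped when its endpoints lie within `threshold` of an already-selected line and its `plambda` span is covered by it. A standalone sanity check (the helpers are restated compactly so the snippet runs on its own):

```python
def pline(x1, y1, x2, y2, x, y):
    # Squared distance from (x, y) to the infinite line through the endpoints.
    px, py = x2 - x1, y2 - y1
    dd = px * px + py * py
    u = ((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd))
    return (x1 + u * px - x) ** 2 + (y1 + u * py - y) ** 2

def psegment(x1, y1, x2, y2, x, y):
    # Same, but the projection is clamped to the segment (u in [0, 1]).
    px, py = x2 - x1, y2 - y1
    dd = px * px + py * py
    u = max(min(((x - x1) * px + (y - y1) * py) / max(1e-9, float(dd)), 1), 0)
    return (x1 + u * px - x) ** 2 + (y1 + u * py - y) ** 2

def plambda(x1, y1, x2, y2, x, y):
    # Projection parameter along the segment: 0 at (x1, y1), 1 at (x2, y2).
    px, py = x2 - x1, y2 - y1
    return ((x - x1) * px + (y - y1) * py) / max(1e-9, float(px * px + py * py))

# (5, 3) is 3 px above the x-axis segment (0,0)-(10,0): squared distance 9.
assert pline(0, 0, 10, 0, 5, 3) == 9.0
# (12, 0) lies past the right endpoint, so psegment measures to (10, 0).
assert psegment(0, 0, 10, 0, 12, 0) == 4.0
# The midpoint projects to lambda = 0.5.
assert plambda(0, 0, 10, 0, 5, 3) == 0.5
```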


================================================
FILE: evaluation/lcnn/trainer.py
================================================
import atexit
import os
import os.path as osp
import shutil
import signal
import subprocess
import threading
import time
from timeit import default_timer as timer

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn.functional as F
from skimage import io
from tensorboardX import SummaryWriter  # required by Trainer.run_tensorboard

from lcnn.config import C, M
from lcnn.utils import recursive_to


class Trainer(object):
    def __init__(self, device, model, optimizer, train_loader, val_loader, out):
        self.device = device

        self.model = model
        self.optim = optimizer

        self.train_loader = train_loader
        self.val_loader = val_loader
        self.batch_size = C.model.batch_size

        self.validation_interval = C.io.validation_interval

        self.out = out
        if not osp.exists(self.out):
            os.makedirs(self.out)

        self.run_tensorboard()
        time.sleep(1)

        self.epoch = 0
        self.iteration = 0
        self.max_epoch = C.optim.max_epoch
        self.lr_decay_epoch = C.optim.lr_decay_epoch
        self.num_stacks = C.model.num_stacks
        self.mean_loss = self.best_mean_loss = float("inf")

        self.loss_labels = None
        self.avg_metrics = None
        self.metrics = np.zeros(0)

    def run_tensorboard(self):
        board_out = osp.join(self.out, "tensorboard")
        if not osp.exists(board_out):
            os.makedirs(board_out)
        self.writer = SummaryWriter(board_out)
        # Give the TensorBoard subprocess a GPU-free environment instead of
        # mutating os.environ, which would also hide the GPUs from training.
        p = subprocess.Popen(
            ["tensorboard", f"--logdir={board_out}", f"--port={C.io.tensorboard_port}"],
            env={**os.environ, "CUDA_VISIBLE_DEVICES": ""},
        )

        def killme():
            os.kill(p.pid, signal.SIGTERM)

        atexit.register(killme)

    def _loss(self, result):
        losses = result["losses"]
        # Don't move loss label to other place.
        # If I want to change the loss, I just need to change this function.
        if self.loss_labels is None:
            self.loss_labels = ["sum"] + list(losses[0].keys())
            self.metrics = np.zeros([self.num_stacks, len(self.loss_labels)])
            print()
            print(
                "| ".join(
                    ["progress "]
                    + list(map("{:7}".format, self.loss_labels))
                    + ["speed"]
                )
            )
            with open(f"{self.out}/loss.csv", "a") as fout:
                print(",".join(["progress"] + self.loss_labels), file=fout)

        total_loss = 0
        for i in range(self.num_stacks):
            for j, name in enumerate(self.loss_labels):
                if name == "sum":
                    continue
                if name not in losses[i]:
                    assert i != 0
                    continue
                loss = losses[i][name].mean()
                self.metrics[i, 0] += loss.item()
                self.metrics[i, j] += loss.item()
                total_loss += loss
        return total_loss

    def validate(self):
        tprint("Running validation...", " " * 75)
        training = self.model.training
        self.model.eval()

        viz = osp.join(self.out, "viz", f"{self.iteration * M.batch_size_eval:09d}")
        npz = osp.join(self.out, "npz", f"{self.iteration * M.batch_size_eval:09d}")
        os.makedirs(viz, exist_ok=True)
        os.makedirs(npz, exist_ok=True)

        total_loss = 0
        self.metrics[...] = 0
        with torch.no_grad():
            for batch_idx, (image, meta, target) in enumerate(self.val_loader):
                input_dict = {
                    "image": recursive_to(image, self.device),
                    "meta": recursive_to(meta, self.device),
                    "target": recursive_to(target, self.device),
                    "mode": "validation",
                }
                result = self.model(input_dict)

                total_loss += self._loss(result)

                H = result["preds"]
                for i in range(H["jmap"].shape[0]):
                    index = batch_idx * M.batch_size_eval + i
                    np.savez(
                        f"{npz}/{index:06}.npz",
                        **{k: v[i].cpu().numpy() for k, v in H.items()},
                    )
                    if index >= 20:
                        continue
                    self._plot_samples(i, index, H, meta, target, f"{viz}/{index:06}")

        self._write_metrics(len(self.val_loader), total_loss, "validation", True)
        self.mean_loss = total_loss / len(self.val_loader)

        torch.save(
            {
                "iteration": self.iteration,
                "arch": self.model.__class__.__name__,
                "optim_state_dict": self.optim.state_dict(),
                "model_state_dict": self.model.state_dict(),
                "best_mean_loss": self.best_mean_loss,
            },
            osp.join(self.out, "checkpoint_latest.pth"),
        )
        shutil.copy(
            osp.join(self.out, "checkpoint_latest.pth"),
            osp.join(npz, "checkpoint.pth"),
        )
        if self.mean_loss < self.best_mean_loss:
            self.best_mean_loss = self.mean_loss
            shutil.copy(
                osp.join(self.out, "checkpoint_latest.pth"),
                osp.join(self.out, "checkpoint_best.pth"),
            )

        if training:
            self.model.train()

    def train_epoch(self):
        self.model.train()

        time = timer()
        for batch_idx, (image, meta, target) in enumerate(self.train_loader):

            self.optim.zero_grad()
            self.metrics[...] = 0

            input_dict = {
                "image": recursive_to(image, self.device),
                "meta": recursive_to(meta, self.device),
                "target": recursive_to(target, self.device),
                "mode": "training",
            }
            result = self.model(input_dict)

            loss = self._loss(result)
            if np.isnan(loss.item()):
                raise ValueError("loss is nan while training")
            loss.backward()
            self.optim.step()

            if self.avg_metrics is None:
                self.avg_metrics = self.metrics
            else:
                self.avg_metrics = self.avg_metrics * 0.9 + self.metrics * 0.1
            self.iteration += 1
            self._write_metrics(1, loss.item(), "training", do_print=False)

            if self.iteration % 4 == 0:
                tprint(
                    f"{self.epoch:03}/{self.iteration * self.batch_size // 1000:04}k| "
                    + "| ".join(map("{:.5f}".format, self.avg_metrics[0]))
                    + f"| {4 * self.batch_size / (timer() - time):04.1f} "
                )
                time = timer()
            num_images = self.batch_size * self.iteration
            if num_images % self.validation_interval == 0 or num_images == 600:
                self.validate()
                time = timer()

    def _write_metrics(self, size, total_loss, prefix, do_print=False):
        for i, metrics in enumerate(self.metrics):
            for label, metric in zip(self.loss_labels, metrics):
                self.writer.add_scalar(
                    f"{prefix}/{i}/{label}", metric / size, self.iteration
                )
            if i == 0 and do_print:
                csv_str = (
                    f"{self.epoch:03}/{self.iteration * self.batch_size:07},"
                    + ",".join(map("{:.11f}".format, metrics / size))
                )
                prt_str = (
                    f"{self.epoch:03}/{self.iteration * self.batch_size // 1000:04}k| "
                    + "| ".join(map("{:.5f}".format, metrics / size))
                )
                with open(f"{self.out}/loss.csv", "a") as fout:
                    print(csv_str, file=fout)
                pprint(prt_str, " " * 7)
        self.writer.add_scalar(
            f"{prefix}/total_loss", total_loss / size, self.iteration
        )
        return total_loss

    def _plot_samples(self, i, index, result, meta, target, prefix):
        fn = self.val_loader.dataset.filelist[index][:-10].replace("_a0", "") + ".png"
        img = io.imread(fn)
        imshow(img), plt.savefig(f"{prefix}_img.jpg"), plt.close()

        mask_result = result["jmap"][i].cpu().numpy()
        mask_target = target["jmap"][i].cpu().numpy()
        for ch, (ia, ib) in enumerate(zip(mask_target, mask_result)):
            imshow(ia), plt.savefig(f"{prefix}_mask_{ch}a.jpg"), plt.close()
            imshow(ib), plt.savefig(f"{prefix}_mask_{ch}b.jpg"), plt.close()

        line_result = result["lmap"][i].cpu().numpy()
        line_target = target["lmap"][i].cpu().numpy()
        imshow(line_target), plt.savefig(f"{prefix}_line_a.jpg"), plt.close()
        imshow(line_result), plt.savefig(f"{prefix}_line_b.jpg"), plt.close()

        def draw_vecl(lines, sline, juncs, junts, fn):
            imshow(img)
            if len(lines) > 0 and not (lines[0] == 0).all():
                for i, ((a, b), s) in enumerate(zip(lines, sline)):
                    if i > 0 and (lines[i] == lines[0]).all():
                        break
                    plt.plot([a[1], b[1]], [a[0], b[0]], c=c(s), linewidth=4)
            if not (juncs[0] == 0).all():
                for i, j in enumerate(juncs):
                    if i > 0 and (juncs[i] == juncs[0]).all():
                        break
                    plt.scatter(j[1], j[0], c="red", s=64, zorder=100)
            if junts is not None and len(junts) > 0 and not (junts[0] == 0).all():
                for i, j in enumerate(junts):
                    if i > 0 and (junts[i] == junts[0]).all():
                        break
                    plt.scatter(j[1], j[0], c="blue", s=64, zorder=100)
            plt.savefig(fn), plt.close()

        junc = meta[i]["junc"].cpu().numpy() * 4
        jtyp = meta[i]["jtyp"].cpu().numpy()
        juncs = junc[jtyp == 0]
        junts = junc[jtyp == 1]
        rjuncs = result["juncs"][i].cpu().numpy() * 4
        rjunts = None
        if "junts" in result:
            rjunts = result["junts"][i].cpu().numpy() * 4

        lpre = meta[i]["lpre"].cpu().numpy() * 4
        vecl_target = meta[i]["lpre_label"].cpu().numpy()
        vecl_result = result["lines"][i].cpu().numpy() * 4
        score = result["score"][i].cpu().numpy()
        lpre = lpre[vecl_target == 1]

        draw_vecl(lpre, np.ones(lpre.shape[0]), juncs, junts, f"{prefix}_vecl_a.jpg")
        draw_vecl(vecl_result, score, rjuncs, rjunts, f"{prefix}_vecl_b.jpg")

    def train(self):
        plt.rcParams["figure.figsize"] = (24, 24)
        # if self.iteration == 0:
        #     self.validate()
        epoch_size = len(self.train_loader)
        start_epoch = self.iteration // epoch_size
        for self.epoch in range(start_epoch, self.max_epoch):
            if self.epoch == self.lr_decay_epoch:
                self.optim.param_groups[0]["lr"] /= 10
            self.train_epoch()


cmap = plt.get_cmap("jet")
norm = mpl.colors.Normalize(vmin=0.4, vmax=1.0)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])


def c(x):
    return sm.to_rgba(x)


def imshow(im):
    plt.close()
    plt.tight_layout()
    plt.imshow(im)
    plt.colorbar(sm, fraction=0.046)
    plt.xlim([0, im.shape[1]])
    plt.ylim([im.shape[0], 0])


def tprint(*args):
    """Temporarily prints things on the screen"""
    print("\r", end="")
    print(*args, end="")


def pprint(*args):
    """Permanently prints things on the screen"""
    print("\r", end="")
    print(*args)


def _launch_tensorboard(board_out, port, out):
    p = subprocess.Popen(
        ["tensorboard", f"--logdir={board_out}", f"--port={port}"],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": ""},
    )

    def kill():
        os.kill(p.pid, signal.SIGTERM)

    atexit.register(kill)
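
`train_epoch` smooths the per-iteration metrics with an exponential moving average (`avg = avg * 0.9 + new * 0.1`), so the console numbers lag sudden spikes. A minimal sketch of that update rule, with `ema_update` as an illustrative name (the 0.9/0.1 weights are the ones hard-coded in `train_epoch`):

```python
import numpy as np

def ema_update(avg, metrics, decay=0.9):
    """Return the smoothed metrics, seeding with the first observation."""
    if avg is None:
        return metrics
    return avg * decay + metrics * (1 - decay)

avg = None
for m in [np.array([1.0]), np.array([0.0]), np.array([0.0])]:
    avg = ema_update(avg, m)
# After seeding at 1.0, two zero observations decay it: 1.0 -> 0.9 -> 0.81.
assert np.isclose(avg[0], 0.81)
```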


================================================
FILE: evaluation/lcnn/utils.py
================================================
import math
import os.path as osp
import multiprocessing
from timeit import default_timer as timer

import numpy as np
import torch
import matplotlib.pyplot as plt


class benchmark(object):
    def __init__(self, msg, enable=True, fmt="%0.3g"):
        self.msg = msg
        self.fmt = fmt
        self.enable = enable

    def __enter__(self):
        if self.enable:
            self.start = timer()
        return self

    def __exit__(self, *args):
        if self.enable:
            t = timer() - self.start
            print(("%s : " + self.fmt + " seconds") % (self.msg, t))
            self.time = t


def quiver(x, y, ax):
    ax.set_xlim(0, x.shape[1])
    ax.set_ylim(x.shape[0], 0)
    ax.quiver(
        x,
        y,
        units="xy",
        angles="xy",
        scale_units="xy",
        scale=1,
        minlength=0.01,
        width=0.1,
        color="b",
    )


def recursive_to(input, device):
    if isinstance(input, torch.Tensor):
        return input.to(device)
    if isinstance(input, dict):
        for name in input:
            if isinstance(input[name], torch.Tensor):
                input[name] = input[name].to(device)
        return input
    if isinstance(input, list):
        for i, item in enumerate(input):
            input[i] = recursive_to(item, device)
        return input
    raise TypeError(f"recursive_to: unsupported type {type(input)}")


def np_softmax(x, axis=0):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=axis, keepdims=True)


def argsort2d(arr):
    return np.dstack(np.unravel_index(np.argsort(arr.ravel()), arr.shape))[0]


def __parallel_handle(f, q_in, q_out):
    while True:
        i, x = q_in.get()
        if i is None:
            break
        q_out.put((i, f(x)))


def parmap(f, X, nprocs=multiprocessing.cpu_count(), progress_bar=lambda x: x):
    if nprocs == 0:
        nprocs = multiprocessing.cpu_count()
    q_in = multiprocessing.Queue(1)
    q_out = multiprocessing.Queue()

    proc = [
        multiprocessing.Process(target=__parallel_handle, args=(f, q_in, q_out))
        for _ in range(nprocs)
    ]
    for p in proc:
        p.daemon = True
        p.start()

    try:
        sent = [q_in.put((i, x)) for i, x in enumerate(X)]
        [q_in.put((None, None)) for _ in range(nprocs)]
        res = [q_out.get() for _ in progress_bar(range(len(sent)))]
        [p.join() for p in proc]
    except KeyboardInterrupt:
        q_in.close()
        q_out.close()
        raise
    return [x for i, x in sorted(res)]
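
A quick check of the two small numpy helpers above, restated compactly so the snippet is self-contained (input values are illustrative):

```python
import numpy as np

def np_softmax(x, axis=0):
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=axis, keepdims=True)

def argsort2d(arr):
    # (row, col) indices of arr, sorted by ascending value.
    return np.dstack(np.unravel_index(np.argsort(arr.ravel()), arr.shape))[0]

# Equal logits give a uniform distribution.
assert np.allclose(np_softmax(np.array([0.0, 0.0])), [0.5, 0.5])

# Smallest entry (0 at row 1, col 1) comes first, largest (3 at 0, 0) last.
idx = argsort2d(np.array([[3, 1],
                          [2, 0]]))
assert idx.tolist() == [[1, 1], [0, 1], [1, 0], [0, 0]]
```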


================================================
FILE: evaluation/matlab/eval_release.m
================================================
function eval_release(image_path, line_gt_path, output_file, result_path, output_size, mode)

% lineThresh = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.97, 0.99, 0.995, 0.999, 0.9995, 0.9999];
lineThresh = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.525, 0.55, 0.575, 0.6, 0.625, 0.65, 0.675, 0.7, 0.8, 0.9, 0.95, 0.97, 0.99, 0.995, 0.999, 0.9995, 0.9999];

nLineThresh = size(lineThresh, 2);
sumtp = zeros(nLineThresh, 1);
sumfp = zeros(nLineThresh, 1);
sumgt = zeros(nLineThresh, 1);

listing = dir(image_path);
numResults = size(listing, 1);

for index=1:numResults
  filename = listing(index).name;
  if length(filename) == 1 || length(filename) == 2  % '.' and '..' folder.
    continue;
  end
  filename = filename(1:end-4);
  fprintf('processed %d/%d\n', index - 2, numResults - 2)
  gtname = [line_gt_path, '/', filename, '_line.mat'];
  imgname = [image_path, filename, '.jpg'];
  
  I = imread(imgname);
  height = size(I,1);
  width = size(I,2);
  
  % convert GT lines to binary map
  gtlines = load(gtname);
  gtlines = gtlines.lines;
  
  ne = size(gtlines,1);
  edgemap0 = zeros(height, width);
  for k = 1:ne
    x1 = gtlines(k,1);
    x2 = gtlines(k,3);
    y1 = gtlines(k,2);
    y2 = gtlines(k,4);
    
    vn = ceil(sqrt((x1-x2)^2+(y1-y2)^2));
    cur_edge = [linspace(y1,y2,vn).', linspace(x1,x2,vn).'];
    for j = 1:size(cur_edge,1)
      yy = round(cur_edge(j,1));
      xx = round(cur_edge(j,2));
      if yy <= 0
        yy = 1;
      end
      if xx <= 0
        xx = 1;
      end
      edgemap0(yy,xx) = 1;
    end
  end
  
  parfor m=1:nLineThresh
    resultname = [result_path, '/', num2str(lineThresh(m)), '/', filename, '.mat'];  % Support the data from DETR.
    % resultname = [result_path, '/', num2str(lineThresh(m)), '/', sprintf('%06d', index - 3), '.mat'];  % Support the data from LCNN.
    resultlines = load(resultname);
    resultlines = resultlines.lines;
    ne = size(resultlines,1);
    edgemap1 = zeros(height, width);
    for k = 1:ne
      x1 = resultlines(k,2) * width / output_size;
      y1 = resultlines(k,1) * height / output_size;
      x2 = resultlines(k,4) * width / output_size;
      y2 = resultlines(k,3) * height / output_size;

      vn = ceil(sqrt((x1-x2)^2+(y1-y2)^2));
      cur_edge = [linspace(y1,y2,vn).', linspace(x1,x2,vn).'];
      for j = 1:size(cur_edge,1)
        yy = round(cur_edge(j,1) - 0.5);
        xx = round(cur_edge(j,2) - 0.5);
        if yy <= 0
          yy = 1;
        end
        if xx <= 0
          xx = 1;
        end
        if yy > height
          yy = height;
        end
        if xx > width
          xx = width;
        end
        edgemap1(yy,xx) = 1;
      end
    end
    
    [matchE1,matchG1] = correspondPixels(edgemap1,edgemap0,0.01);
    matchE = double(matchE1 > 0);

    sumtp(m, 1) = sumtp(m, 1) + sum(matchE(:));
    sumfp(m, 1) = sumfp(m, 1) + sum(edgemap1(:)) - sum(matchE(:));
    sumgt(m, 1) = sumgt(m, 1) + sum(edgemap0(:));
  end
end
save(output_file, 'sumtp', 'sumfp', 'sumgt');
end
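
`eval_release.m` scores heuristic AP by rasterizing both ground-truth and predicted lines into binary edge maps and matching them with `correspondPixels`. A rough Python sketch of the rasterization step (it mirrors the `linspace`-and-round loops above, with a `+1` on the sample count so short axis-aligned lines are not undersampled; this is an illustration, not the evaluation code):

```python
import numpy as np

def rasterize_lines(lines, height, width):
    """Stamp each (x1, y1, x2, y2) line into a binary edge map."""
    edgemap = np.zeros((height, width), dtype=np.uint8)
    for x1, y1, x2, y2 in lines:
        # Sample roughly one point per pixel of line length.
        vn = int(np.ceil(np.hypot(x2 - x1, y2 - y1))) + 1
        ys = np.linspace(y1, y2, vn)
        xs = np.linspace(x1, x2, vn)
        for y, x in zip(ys, xs):
            yy = min(max(int(round(y)), 0), height - 1)
            xx = min(max(int(round(x)), 0), width - 1)
            edgemap[yy, xx] = 1
    return edgemap

# A horizontal line from (0, 0) to (3, 0) lights up four pixels in row 0.
em = rasterize_lines([(0, 0, 3, 0)], height=4, width=4)
assert em.sum() == 4 and em[0].tolist() == [1, 1, 1, 1]
```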


================================================
FILE: evaluation/process.py
================================================
#!/usr/bin/env python3
"""Process a dataset with the trained neural network
Usage:
    process.py [options] <yaml-config> <checkpoint> <image-dir> <output-dir>
    process.py (-h | --help )

Arguments:
   <yaml-config>                 Path to the yaml hyper-parameter file
   <checkpoint>                  Path to the checkpoint
   <image-dir>                   Path to the directory containing processed images
   <output-dir>                  Path to the output directory

Options:
   -h --help                     Show this screen.
   -d --devices <devices>        Comma-separated GPU devices [default: 0]
   --plot                        Plot the result
"""

import os
import sys
import shlex
import pprint
import random
import os.path as osp
import threading
import subprocess

import yaml
import numpy as np
import torch
import matplotlib as mpl
import skimage.io
import matplotlib.pyplot as plt
from docopt import docopt

import lcnn
from lcnn.utils import recursive_to
from lcnn.config import C, M
from lcnn.datasets import WireframeDataset, collate
from lcnn.models.line_vectorizer import LineVectorizer
from lcnn.models.multitask_learner import MultitaskHead, MultitaskLearner


def main():
    args = docopt(__doc__)
    config_file = args["<yaml-config>"] or "config/wireframe.yaml"
    C.update(C.from_yaml(filename=config_file))
    M.update(C.model)
    pprint.pprint(C, indent=4)

    random.seed(0)
    np.random.seed(0)
    torch.manual_seed(0)

    device_name = "cpu"
    os.environ["CUDA_VISIBLE_DEVICES"] = args["--devices"]
    if torch.cuda.is_available():
        device_name = "cuda"
        torch.backends.cudnn.deterministic = True
        torch.cuda.manual_seed(0)
        print("Let's use", torch.cuda.device_count(), "GPU(s)!")
    else:
        print("CUDA is not available")
    device = torch.device(device_name)

    if M.backbone == "stacked_hourglass":
        model = lcnn.models.hg(
            depth=M.depth,
            head=lambda c_in, c_out: MultitaskHead(c_in, c_out),
            num_stacks=M.num_stacks,
            num_blocks=M.num_blocks,
            num_classes=sum(sum(M.head_size, [])),
        )
    else:
        raise NotImplementedError

    checkpoint = torch.load(args["<checkpoint>"])
    model = MultitaskLearner(model)
    model = LineVectorizer(model)
    model.load_state_dict(checkpoint["model_state_dict"])
    model = model.to(device)
    model.eval()

    loader = torch.utils.data.DataLoader(
        WireframeDataset(args["<image-dir>"], split="valid"),
        shuffle=False,
        batch_size=M.batch_size,
        collate_fn=collate,
        num_workers=C.io.num_workers if os.name != "nt" else 0,
        pin_memory=True,
    )
    os.makedirs(args["<output-dir>"], exist_ok=True)

    for batch_idx, (image, meta, target) in enumerate(loader):
        with torch.no_grad():
            input_dict = {
                "image": recursive_to(image, device),
                "meta": recursive_to(meta, device),
                "target": recursive_to(target, device),
                "mode": "validation",
            }
            H = model(input_dict)["preds"]
            for i in range(H["jmap"].shape[0]):  # handles a smaller final batch
                index = batch_idx * M.batch_size + i
                np.savez(
                    osp.join(args["<output-dir>"], f"{index:06}.npz"),
                    **{k: v[i].cpu().numpy() for k, v in H.items()},
                )
                if not args["--plot"]:
                    continue
                im = image[i].cpu().numpy().transpose(1, 2, 0)
                im = im * M.image.stddev + M.image.mean
                lines = H["lines"][i].cpu().numpy() * 4
                scores = H["score"][i].cpu().numpy()
                if len(lines) > 0 and not (lines[0] == 0).all():
                    for j, ((a, b), s) in enumerate(zip(lines, scores)):
                        if j > 0 and (lines[j] == lines[0]).all():
                            break
                        plt.plot([a[1], b[1]], [a[0], b[0]], c=c(s), linewidth=4)
                plt.show()


cmap = plt.get_cmap("jet")
norm = mpl.colors.Normalize(vmin=0.4, vmax=1.0)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])


def c(x):
    return sm.to_rgba(x)


if __name__ == "__main__":
    main()


================================================
FILE: helper/gdrive-download.sh
================================================
#!/bin/bash
fileid="$1"
filename="$2"
curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${fileid}" > /dev/null
curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=`awk '/download/ {print $NF}' ./cookie`&id=${fileid}" -o ${filename}

rm ./cookie

================================================
FILE: helper/wireframe.py
================================================
#!/usr/bin/env python3
"""Process data for LETR
Usage:
    wireframe.py <src> <dst>
    wireframe.py (-h | --help )

Examples:
    python wireframe.py wireframe_raw wireframe_processed

Arguments:
    <src>                Source directory that stores preprocessed wireframe data
    <dst>                Temporary output directory

Options:
   -h --help             Show this screen.
"""

import json
import cv2
import os
import numpy as np
import math
from docopt import docopt

def main():
    args = docopt(__doc__)
    src_dir = args["<src>"]
    tar_dir = args["<dst>"]

    image_id = 0
    anno_id = 0
    for batch in ["train2017", "val2017"]:
        if batch == "train2017":
            anno_file = os.path.join(src_dir, "train.json")
        else:
            anno_file = os.path.join(src_dir, "valid.json")

        with open(anno_file, "r") as f:
            dataset = json.load(f)

        def handle(data, image_id, anno_id):
            im = cv2.imread(os.path.join(src_dir, "images", data["filename"]))
            anno['images'].append({'file_name': data['filename'], 'height': im.shape[0], 'width': im.shape[1], 'id': image_id})
            lines = np.array(data["lines"]).reshape(-1, 2, 2)
            os.makedirs(os.path.join(tar_dir, batch), exist_ok=True)

            image_path = os.path.join(tar_dir, batch, data['filename'])
            line_set = save_and_process(f"{image_path}", data['filename'], im[::, ::], lines)
            for line in line_set:
                info = {}
                info['id'] = anno_id
                anno_id += 1
                info['image_id'] = image_id
                info['category_id'] = 0
                info['line'] = line
                info['area'] = 1
                anno['annotations'].append(info)

            image_id += 1
            print("Finishing", image_path)
            return anno_id

        anno = {}
        anno['images'] = []
        anno['annotations'] = []
        anno['categories'] = [{'supercategory': "line", "id": 0, "name": "line"}]  # id as int, matching category_id below
        for img_dict in dataset:
            anno_id = handle(img_dict, image_id, anno_id)
            image_id += 1

        os.makedirs(os.path.join(tar_dir, "annotations"), exist_ok=True)
        anno_path = os.path.join(tar_dir, "annotations", f"lines_{batch}.json")
        with open(anno_path, 'w') as outfile:
            json.dump(anno, outfile)

            
def save_and_process(image_path, image_name, image, lines):
    # Change the line format from two endpoints (x, y), (x, y) to (x, y, dx, dy).
    # Endpoint order: the point with the smaller x comes first;
    # ties on x are broken by the smaller y.

    new_lines_pairs = []
    for line in lines: # [ #lines, 2, 2 ]
        p1 = line[0]    # xy
        p2 = line[1]    # xy
        if p1[0] < p2[0]:
            new_lines_pairs.append( [p1[0], p1[1], p2[0]-p1[0], p2[1]-p1[1]] ) 
        elif  p1[0] > p2[0]:
            new_lines_pairs.append( [p2[0], p2[1], p1[0]-p2[0], p1[1]-p2[1]] )
        else:
            if p1[1] < p2[1]:
                new_lines_pairs.append( [p1[0], p1[1], p2[0]-p1[0], p2[1]-p1[1]] )
            else:
                new_lines_pairs.append( [p2[0], p2[1], p1[0]-p2[0], p1[1]-p2[1]] )

    cv2.imwrite(f"{image_path}", image)
    return new_lines_pairs


if __name__ == "__main__":
    main()
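
`save_and_process` canonicalizes each segment so the endpoint with the smaller x (breaking ties by smaller y) comes first, stored as `(x, y, dx, dy)`. A standalone restatement of just that ordering rule, with `to_xydxdy` as an illustrative name:

```python
def to_xydxdy(p1, p2):
    """Order endpoints by (x, then y) and encode the segment as (x, y, dx, dy)."""
    if (p1[0], p1[1]) > (p2[0], p2[1]):
        p1, p2 = p2, p1
    return [p1[0], p1[1], p2[0] - p1[0], p2[1] - p1[1]]

# (1, 7) has the smaller x, so it comes first; (dx, dy) points to (5, 2).
assert to_xydxdy((5, 2), (1, 7)) == [1, 7, 4, -5]
# Equal x: the endpoint with the smaller y comes first.
assert to_xydxdy((3, 9), (3, 1)) == [3, 1, 0, 8]
```

The tuple comparison is equivalent to the chained `if/elif/else` on `p1[0]` and `p1[1]` in `save_and_process`.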

================================================
FILE: helper/wireframe_eval.py
================================================
#!/usr/bin/env python
"""Process Huang's wireframe dataset for L-CNN network
Usage:
    wireframe_eval.py <src> <dst>
    wireframe_eval.py (-h | --help )
Examples:
    python helper/wireframe_eval.py /datadir/wireframe data/wireframe
Arguments:
    <src>                Original data directory of Huang's wireframe dataset
    <dst>                Directory of the output
Options:
   -h --help             Show this screen.
"""

import os
import sys
import json
from itertools import combinations

import cv2
import numpy as np
import skimage.draw
import matplotlib.pyplot as plt
from docopt import docopt
from scipy.ndimage import zoom

try:
    sys.path.append(".")
    sys.path.append("..")
    from lcnn.utils import parmap
except Exception:
    raise


def inrange(v, shape):
    return 0 <= v[0] < shape[0] and 0 <= v[1] < shape[1]


def to_int(x):
    return tuple(map(int, x))


def save_heatmap(prefix, image, lines):
    im_rescale = (512, 512)
    heatmap_scale = (128, 128)

    fy, fx = heatmap_scale[1] / \
        image.shape[0], heatmap_scale[0] / image.shape[1]
    jmap = np.zeros((1,) + heatmap_scale, dtype=np.float32)
    joff = np.zeros((1, 2) + heatmap_scale, dtype=np.float32)
    lmap = np.zeros(heatmap_scale, dtype=np.float32)

    lines[:, :, 0] = np.clip(lines[:, :, 0] * fx, 0, heatmap_scale[0] - 1e-4)
    lines[:, :, 1] = np.clip(lines[:, :, 1] * fy, 0, heatmap_scale[1] - 1e-4)
    lines = lines[:, :, ::-1]

    junc = []
    jids = {}

    def jid(jun):
        jun = tuple(jun[:2])
        if jun in jids:
            return jids[jun]
        jids[jun] = len(junc)
        junc.append(np.array(jun + (0,)))
        return len(junc) - 1

    lnid = []
    lpos, lneg = [], []
    for v0, v1 in lines:
        lnid.append((jid(v0), jid(v1)))
        lpos.append([junc[jid(v0)], junc[jid(v1)]])

        vint0, vint1 = to_int(v0), to_int(v1)
        jmap[0][vint0] = 1
        jmap[0][vint1] = 1
        rr, cc, value = skimage.draw.line_aa(*to_int(v0), *to_int(v1))
        lmap[rr, cc] = np.maximum(lmap[rr, cc], value)

    for v in junc:
        vint = to_int(v[:2])
        joff[0, :, vint[0], vint[1]] = v[:2] - vint - 0.5

    llmap = zoom(lmap, [0.5, 0.5])
    lineset = set([frozenset(l) for l in lnid])
    for i0, i1 in combinations(range(len(junc)), 2):
        if frozenset([i0, i1]) not in lineset:
            v0, v1 = junc[i0], junc[i1]
            vint0, vint1 = to_int(v0[:2] / 2), to_int(v1[:2] / 2)
            rr, cc, value = skimage.draw.line_aa(*vint0, *vint1)
            lneg.append([v0, v1, i0, i1, np.average(
                np.minimum(value, llmap[rr, cc]))])

    assert len(lneg) != 0
    lneg.sort(key=lambda l: -l[-1])

    junc = np.array(junc, dtype=np.float32)
    Lpos = np.array(lnid, dtype=int)  # np.int was removed in NumPy 1.24
    Lneg = np.array([l[2:4] for l in lneg][:4000], dtype=int)
    lpos = np.array(lpos, dtype=np.float32)
    lneg = np.array([l[:2] for l in lneg[:2000]], dtype=np.float32)

    image = cv2.resize(image, im_rescale)

    # plt.subplot(131), plt.imshow(lmap)
    # plt.subplot(132), plt.imshow(image)
    # for i0, i1 in Lpos:
    #     plt.scatter(junc[i0][1] * 4, junc[i0][0] * 4)
    #     plt.scatter(junc[i1][1] * 4, junc[i1][0] * 4)
    #     plt.plot([junc[i0][1] * 4, junc[i1][1] * 4], [junc[i0][0] * 4, junc[i1][0] * 4])
    # plt.subplot(133), plt.imshow(lmap)
    # for i0, i1 in Lneg[:150]:
    #     plt.plot([junc[i0][1], junc[i1][1]], [junc[i0][0], junc[i1][0]])
    # plt.show()

    # For junc, lpos, and lneg that stores the junction coordinates, the last
    # dimension is (y, x, t), where t represents the type of that junction.  In
    # the wireframe dataset, t is always zero.
    np.savez_compressed(
        f"{prefix}_label.npz",
        aspect_ratio=image.shape[1] / image.shape[0],
        jmap=jmap,  # [J, H, W]    Junction heat map
        joff=joff,  # [J, 2, H, W] Junction offset within each pixel
        lmap=lmap,  # [H, W]       Line heat map with anti-aliasing
        junc=junc,  # [Na, 3]      Junction coordinate
        # [M, 2]       Positive lines represented with junction indices
        Lpos=Lpos,
        # [M, 2]       Negative lines represented with junction indices
        Lneg=Lneg,
        # [Np, 2, 3]   Positive lines represented with junction coordinates
        lpos=lpos,
        # [Nn, 2, 3]   Negative lines represented with junction coordinates
        lneg=lneg,
    )
    cv2.imwrite(f"{prefix}.png", image)



def main():
    args = docopt(__doc__)
    data_root = args["<src>"]
    data_output = args["<dst>"]

    os.makedirs(data_output, exist_ok=True)
    for batch in ["train", "valid"]:
        anno_file = os.path.join(data_root, f"{batch}.json")

        with open(anno_file, "r") as f:
            dataset = json.load(f)

        def handle(data):
            im = cv2.imread(os.path.join(
                data_root, "images", data["filename"]))
            prefix = data["filename"].split(".")[0]
            lines = np.array(data["lines"]).reshape(-1, 2, 2)
            os.makedirs(os.path.join(data_output, batch), exist_ok=True)

            lines0 = lines.copy()
            lines1 = lines.copy()
            lines1[:, :, 0] = im.shape[1] - lines1[:, :, 0]
            lines2 = lines.copy()
            lines2[:, :, 1] = im.shape[0] - lines2[:, :, 1]
            lines3 = lines.copy()
            lines3[:, :, 0] = im.shape[1] - lines3[:, :, 0]
            lines3[:, :, 1] = im.shape[0] - lines3[:, :, 1]

            path = os.path.join(data_output, batch, prefix)
            save_heatmap(f"{path}_0", im[::, ::], lines0)
            if batch != "valid":
                save_heatmap(f"{path}_1", im[::, ::-1], lines1)
                save_heatmap(f"{path}_2", im[::-1, ::], lines2)
                save_heatmap(f"{path}_3", im[::-1, ::-1], lines3)
            print("Finishing", os.path.join(data_output, batch, prefix))

        parmap(handle, dataset, 16)


if __name__ == "__main__":
    main()
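The four saves above pair each array flip with the matching endpoint transform: `im[:, ::-1]` mirrors columns, so x maps to `W - x`; `im[::-1]` mirrors rows, so y maps to `H - y`. A minimal self-contained sketch of that invariant, using a small synthetic array in place of a real image (the sizes and the sample line are made up for illustration):

```python
import numpy as np

# Hypothetical 4x6 "image" and one line in the [N, 2, 2] (x, y) layout
# used by wireframe.py.
H, W = 4, 6
im = np.arange(H * W).reshape(H, W)
lines = np.array([[[1.0, 2.0], [5.0, 3.0]]])

# Horizontal flip: columns reverse, and x coordinates become W - x,
# exactly as in the lines1 transform above.
im_h = im[:, ::-1]
lines_h = lines.copy()
lines_h[:, :, 0] = W - lines_h[:, :, 0]

# A pixel at integer (x, y) keeps its value when both transforms are
# applied consistently (the flipped coordinate W - x indexes as W - x - 1).
x, y = 1, 2
assert im[y, x] == im_h[y, W - x - 1]
```

The vertical flip (`lines2`) follows the same pattern on the y coordinate, and `lines3` composes both flips.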


================================================
FILE: helper/york.py
================================================
#!/usr/bin/env python3
"""Process YorkUrban dataset for L-CNN network
Usage:
    york.py <src> <dst>
    york.py (-h | --help )
Examples:
    python helper/york.py /datadir/york data/york
Arguments:
    <src>                Original data directory of YorkUrban
    <dst>                Directory of the output
Options:
   -h --help             Show this screen.
"""

import os
import sys
import glob
import json
import os.path as osp
from itertools import combinations

import cv2
import numpy as np
import skimage.draw
import matplotlib.pyplot as plt
from docopt import docopt
from scipy.io import loadmat
from scipy.ndimage import zoom

def main():
    args = docopt(__doc__)
    src_dir = args["<src>"]
    tar_dir = args["<dst>"]

    os.makedirs(tar_dir, exist_ok=True)
    dataset = sorted(glob.glob(osp.join(src_dir, "*/*.jpg")))
    image_id = 0
    anno_id = 0
    for mode in ["train", "val"]:
        batch = f"{mode}2017"
        os.makedirs(os.path.join(tar_dir, batch), exist_ok=True)

        anno = {}
        anno['images'] = []
        anno['annotations'] = []
        # Use an integer id to match the integer category_id in annotations.
        anno['categories'] = [{'supercategory': "line", "id": 0, "name": "line"}]

        def handle(iname, image_id, anno_id, batch):

            im = cv2.imread(iname)
            filename = iname.split("/")[-1]

            anno['images'].append({'file_name': filename,
                                'height': im.shape[0], 'width': im.shape[1], 'id': image_id})
            mat = loadmat(iname.replace(".jpg", "LinesAndVP.mat"))
            lines = np.array(mat["lines"]).reshape(-1, 2, 2)
            lines = lines.astype('float')
            os.makedirs(os.path.join(tar_dir, batch), exist_ok=True)

            image_path = os.path.join(tar_dir, batch, filename)
            line_set = save_and_process(f"{image_path}", filename, im[::, ::], lines)
            for line in line_set:
                info = {}
                info['id'] = anno_id
                anno_id += 1
                info['image_id'] = image_id
                info['category_id'] = 0
                info['line'] = line
                info['area'] = 1
                anno['annotations'].append(info)

            print(f"Finishing {image_path}")
            return anno_id

        if mode == "val":
            for img in dataset:
                anno_id = handle(img, image_id, anno_id, batch)
                image_id += 1

        os.makedirs(os.path.join(tar_dir, "annotations"), exist_ok=True)
        anno_path = os.path.join(tar_dir, "annotations", f"lines_{batch}.json")
        with open(anno_path, 'w') as outfile:
            json.dump(anno, outfile)


def save_and_process(image_path, image_name, image, lines):
    # Re-encode each line from two endpoints (x1, y1), (x2, y2) to (x, y, dx, dy),
    # anchored at the endpoint with the smaller x coordinate
    # (the smaller y coordinate breaks ties when x is equal).

    new_lines_pairs = []
    for line in lines:  # [ #lines, 2, 2 ]
        p1 = line[0]    # xy
        p2 = line[1]    # xy
        if p1[0] < p2[0]:
            new_lines_pairs.append([p1[0], p1[1], p2[0]-p1[0], p2[1]-p1[1]])
        elif p1[0] > p2[0]:
            new_lines_pairs.append([p2[0], p2[1], p1[0]-p2[0], p1[1]-p2[1]])
        else:
            if p1[1] < p2[1]:
                new_lines_pairs.append(
                    [p1[0], p1[1], p2[0]-p1[0], p2[1]-p1[1]])
            else:
                new_lines_pairs.append(
                    [p2[0], p2[1], p1[0]-p2[0], p1[1]-p2[1]])

    cv2.imwrite(f"{image_path}", image)
    return new_lines_pairs

if __name__ == "__main__":
    main()
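`save_and_process` above re-encodes each segment as `(x, y, dx, dy)`, anchored at the endpoint with the smaller x (smaller y on ties). A standalone sketch of that re-encoding for a single segment (`encode_line` is an illustrative helper, not part of the repo):

```python
import numpy as np

def encode_line(p1, p2):
    """Return [x, y, dx, dy] anchored at the endpoint with smaller x
    (smaller y on ties), mirroring save_and_process in helper/york.py."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    # Lexicographic comparison on (x, y) picks the anchor endpoint.
    if (p1[0], p1[1]) > (p2[0], p2[1]):
        p1, p2 = p2, p1
    return [float(p1[0]), float(p1[1]),
            float(p2[0] - p1[0]), float(p2[1] - p1[1])]

# The anchor is the same regardless of the order the endpoints arrive in.
assert encode_line((4, 1), (2, 3)) == [2.0, 3.0, 2.0, -2.0]
assert encode_line((2, 3), (4, 1)) == [2.0, 3.0, 2.0, -2.0]
```

On a vertical segment (equal x), the endpoint with the smaller y becomes the anchor, matching the tie-break branch above.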

================================================
FILE: helper/york_eval.py
================================================
#!/usr/bin/env python3
"""Process YorkUrban dataset for L-CNN network
Usage:
    york_eval.py <src> <dst>
    york_eval.py (-h | --help )
Examples:
    python helper/york_eval.py /datadir/york data/york
Arguments:
    <src>                Original data directory of YorkUrban
    <dst>                Directory of the output
Options:
   -h --help             Show this screen.
"""

import os
import sys
import glob
import json
import os.path as osp
from itertools import combinations

import cv2
import numpy as np
import skimage.draw
import matplotlib.pyplot as plt
from docopt import docopt
from scipy.io import loadmat
from scipy.ndimage import zoom

sys.path.append(".")
sys.path.append("..")
from lcnn.utils import parmap


def inrange(v, shape):
    return 0 <= v[0] < shape[0] and 0 <= v[1] < shape[1]


def to_int(x):
    return tuple(map(int, x))


def save_heatmap(prefix, image, lines):
    im_rescale = (512, 512)
    heatmap_scale = (128, 128)

    fy, fx = heatmap_scale[1] / image.shape[0], heatmap_scale[0] / image.shape[1]
    jmap = np.zeros((1,) + heatmap_scale, dtype=np.float32)
    joff = np.zeros((1, 2) + heatmap_scale, dtype=np.float32)
    lmap = np.zeros(heatmap_scale, dtype=np.float32)

    lines[:, :, 0] = np.clip(lines[:, :, 0] * fx, 0, heatmap_scale[0] - 1e-4)
    lines[:, :, 1] = np.clip(lines[:, :, 1] * fy, 0, heatmap_scale[1] - 1e-4)
    lines = lines[:, :, ::-1]

    junc = []
    jids = {}

    def jid(jun):
        jun = tuple(jun[:2])
        if jun in jids:
            return jids[jun]
        jids[jun] = len(junc)
        junc.append(np.array(jun + (0,)))
        return len(junc) - 1

    lnid = []
    lpos, lneg = [], []
    for v0, v1 in lines:
        lnid.append((jid(v0), jid(v1)))
        lpos.append([junc[jid(v0)], junc[jid(v1)]])

        vint0, vint1 = to_int(v0), to_int(v1)
        jmap[0][vint0] = 1
        jmap[0][vint1] = 1
        rr, cc, value = skimage.draw.line_aa(*to_int(v0), *to_int(v1))
        lmap[rr, cc] = np.maximum(lmap[rr, cc], value)

    for v in junc:
        vint = to_int(v[:2])
        joff[0, :, vint[0], vint[1]] = v[:2] - vint - 0.5

    llmap = zoom(lmap, [0.5, 0.5])
    lineset = set([frozenset(l) for l in lnid])
    for i0, i1 in combinations(range(len(junc)), 2):
        if frozenset([i0, i1]) not in lineset:
            v0, v1 = junc[i0], junc[i1]
            vint0, vint1 = to_int(v0[:2] / 2), to_int(v1[:2] / 2)
            rr, cc, value = skimage.draw.line_aa(*vint0, *vint1)
            lneg.append([v0, v1, i0, i1, np.average(np.minimum(value, llmap[rr, cc]))])
            # assert np.sum((v0 - v1) ** 2) > 0.01

    assert len(lneg) != 0
    lneg.sort(key=lambda l: -l[-1])

    junc = np.array(junc, dtype=np.float32)
    Lpos = np.array(lnid, dtype=int)  # np.int was removed in NumPy 1.20+
    Lneg = np.array([l[2:4] for l in lneg][:4000], dtype=int)
    lpos = np.array(lpos, dtype=np.float32)
    lneg = np.array([l[:2] for l in lneg[:2000]], dtype=np.float32)

    image = cv2.resize(image, im_rescale)


    np.savez_compressed(
        f"{prefix}_label.npz",
        aspect_ratio=image.shape[1] / image.shape[0],
        jmap=jmap,  # [J, H, W]
        joff=joff,  # [J, 2, H, W]
        lmap=lmap,  # [H, W]
        junc=junc,  # [Na, 3]
        Lpos=Lpos,  # [M, 2]
        Lneg=Lneg,  # [M, 2]
        lpos=lpos,  # [Np, 2, 3]   (y, x, t) for the last dim
        lneg=lneg,  # [Nn, 2, 3]
    )
    cv2.imwrite(f"{prefix}.png", image)



def main():
    args = docopt(__doc__)
    data_root = args["<src>"]
    data_output = args["<dst>"]
    os.makedirs(data_output, exist_ok=True)

    dataset = sorted(glob.glob(osp.join(data_root, "*/*.jpg")))

    def handle(iname):
        prefix = osp.split(iname)[1].replace(".jpg", "")
        im = cv2.imread(iname)
        mat = loadmat(iname.replace(".jpg", "LinesAndVP.mat"))
        lines = np.array(mat["lines"]).reshape(-1, 2, 2)
        path = osp.join(data_output, prefix)
        save_heatmap(f"{path}", im[::, ::], lines)
        print(f"Finishing {path}")

    parmap(handle, dataset)


if __name__ == "__main__":
    main()
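`save_heatmap` above maps endpoint coordinates from image space into the 128×128 heatmap grid and clips to `128 - 1e-4` so that later `int()` truncation can never index row or column 128. A standalone sketch of that rescale-and-clip step (the input image size here is hypothetical):

```python
import numpy as np

heatmap = 128
H, W = 480, 640  # hypothetical input image size
fy, fx = heatmap / H, heatmap / W

# Endpoints in (x, y) image coordinates; the second one sits on the far border.
pts = np.array([[320.0, 240.0], [640.0, 480.0]])
xs = np.clip(pts[:, 0] * fx, 0, heatmap - 1e-4)
ys = np.clip(pts[:, 1] * fy, 0, heatmap - 1e-4)

# int() truncation now always yields a valid index in [0, 127].
assert int(xs[1]) == 127 and int(ys[1]) == 127
assert int(xs[0]) == 64 and int(ys[0]) == 64
```

Without the `- 1e-4` margin, a point exactly on the image border would scale to 128.0 and index out of bounds in `jmap`/`lmap`.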

================================================
FILE: script/evaluation/eval_aph_wireframe.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name and Epoch'
    exit 1
fi

name=$1
epoch=$2

output=exp/$name

cd evaluation

echo "Post-processing"
python eval-aph-post-wireframe.py --plot --thresholds="0.010,0.015" ../$output/benchmark/benchmark_val_$epoch ../$output/post/$epoch

echo "Evaluation AP-H"
mkdir -p ../$output/score/eval-APH-0_010
python eval-aph-score-wireframe.py ../$output/post/$epoch/0_010 ../$output/post/$epoch/0_010-APH | tee ../$output/score/eval-APH-0_010/$epoch.txt

================================================
FILE: script/evaluation/eval_aph_york.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name and Epoch'
    exit 1
fi

name=$1
epoch=$2

output=exp/$name

cd evaluation

echo "Post-processing"
python eval-aph-post-york.py --plot --thresholds="0.010,0.015" ../$output/benchmark/benchmark_york_$epoch ../$output/post_york/$epoch

echo "Evaluation AP-H"
mkdir -p ../$output/score/eval-APH-0_010-york
python eval-aph-score-york.py ../$output/post_york/$epoch/0_010 ../$output/post_york/$epoch/0_010-APH | tee ../$output/score/eval-APH-0_010-york/$epoch.txt

================================================
FILE: script/evaluation/eval_stage1.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi
name=$1
output=exp/$name

epoch=("499")

for ((i=0;i<${#epoch[@]};++i)); do
    mkdir  -p $output/score
    mkdir  -p $output/score/eval-sAP
    mkdir  -p $output/score/eval-fscore
    mkdir  -p $output/score/eval-sAP-york
    mkdir  -p $output/score/eval-fscore-york

    PYTHONPATH=$PYTHONPATH:./src \
    python ./src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --backbone resnet101 --resume $output/checkpoints/checkpoint0${epoch[i]}.pth \
    --batch_size 1 ${@:2}  --num_queries 1000 \
    --eval --benchmark --dataset val --append_word ${epoch[i]} --no_aux_loss

    PYTHONPATH=$PYTHONPATH:./src \
    python ./src/main.py --coco_path data/york_processed \
    --output_dir $output --backbone resnet101 --resume $output/checkpoints/checkpoint0${epoch[i]}.pth \
    --batch_size 1 ${@:2}  --num_queries 1000  \
    --eval --benchmark --dataset val --append_word ${epoch[i]} --no_aux_loss


    python evaluation/eval-sAP-wireframe.py $output/benchmark/benchmark_val_${epoch[i]} | tee $output/score/eval-sAP/${epoch[i]}.txt

    python evaluation/eval-fscore-wireframe.py $output/benchmark/benchmark_val_${epoch[i]} | tee $output/score/eval-fscore/${epoch[i]}.txt

    python evaluation/eval-sAP-york.py $output/benchmark/benchmark_york_${epoch[i]} | tee $output/score/eval-sAP-york/${epoch[i]}.txt

    python evaluation/eval-fscore-york.py $output/benchmark/benchmark_york_${epoch[i]} | tee $output/score/eval-fscore-york/${epoch[i]}.txt


done



================================================
FILE: script/evaluation/eval_stage2.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi
name=$1
output=exp/$name

epoch=("299")

for ((i=0;i<${#epoch[@]};++i)); do
    mkdir  -p $output/score
    mkdir  -p $output/score/eval-sAP
    mkdir  -p $output/score/eval-fscore
    mkdir  -p $output/score/eval-sAP-york
    mkdir  -p $output/score/eval-fscore-york

    PYTHONPATH=$PYTHONPATH:./src \
    python ./src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --LETRpost --backbone resnet101 --resume $output/checkpoints/checkpoint0${epoch[i]}.pth \
    --batch_size 1 ${@:2}  --num_queries 1000 \
    --eval --benchmark --dataset val --append_word ${epoch[i]} --no_aux_loss  

    PYTHONPATH=$PYTHONPATH:./src \
    python ./src/main.py --coco_path data/york_processed \
    --output_dir $output --LETRpost --backbone resnet101 --resume $output/checkpoints/checkpoint0${epoch[i]}.pth \
    --batch_size 1 ${@:2}  --num_queries 1000 \
    --eval --benchmark --dataset val --append_word ${epoch[i]} --no_aux_loss 

    python evaluation/eval-sAP-wireframe.py $output/benchmark/benchmark_val_${epoch[i]} | tee -a $output/score/eval-sAP/${epoch[i]}.txt

    python evaluation/eval-fscore-wireframe.py $output/benchmark/benchmark_val_${epoch[i]} | tee $output/score/eval-fscore/${epoch[i]}.txt

    python evaluation/eval-sAP-york.py $output/benchmark/benchmark_york_${epoch[i]} | tee $output/score/eval-sAP-york/${epoch[i]}.txt

    python evaluation/eval-fscore-york.py $output/benchmark/benchmark_york_${epoch[i]} | tee $output/score/eval-fscore-york/${epoch[i]}.txt


done


================================================
FILE: script/evaluation/eval_stage2_focal.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi
name=$1
output=exp/$name

epoch=("024")

for ((i=0;i<${#epoch[@]};++i)); do
    mkdir  -p $output/score
    mkdir  -p $output/score/eval-sAP
    mkdir  -p $output/score/eval-fscore
    mkdir  -p $output/score/eval-sAP-york
    mkdir  -p $output/score/eval-fscore-york

    PYTHONPATH=$PYTHONPATH:./src \
    python ./src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --LETRpost --backbone resnet50 --resume $output/checkpoints/checkpoint0${epoch[i]}.pth \
    --batch_size 1 ${@:2}  --num_queries 1000 \
    --eval --benchmark --dataset val --append_word ${epoch[i]} --no_aux_loss  

    PYTHONPATH=$PYTHONPATH:./src \
    python ./src/main.py --coco_path data/york_processed \
    --output_dir $output --LETRpost --backbone resnet50 --resume $output/checkpoints/checkpoint0${epoch[i]}.pth \
    --batch_size 1 ${@:2}  --num_queries 1000 \
    --eval --benchmark --dataset val --append_word ${epoch[i]} --no_aux_loss 

    python evaluation/eval-sAP-wireframe.py $output/benchmark/benchmark_val_${epoch[i]} | tee -a $output/score/eval-sAP/${epoch[i]}.txt

    python evaluation/eval-fscore-wireframe.py $output/benchmark/benchmark_val_${epoch[i]} | tee $output/score/eval-fscore/${epoch[i]}.txt

    python evaluation/eval-sAP-york.py $output/benchmark/benchmark_york_${epoch[i]} | tee $output/score/eval-sAP-york/${epoch[i]}.txt

    python evaluation/eval-fscore-york.py $output/benchmark/benchmark_york_${epoch[i]} | tee $output/score/eval-fscore-york/${epoch[i]}.txt


done


================================================
FILE: script/train/a0_train_stage1_res50.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi

# The name of this experiment.
name=$1

# Save logs and models under exp/$name; make a backup of the source.
output=exp/$name
if [ ! -d "$output"  ]; then
    echo "folder does not exist"
    mkdir -p $output/src
    cp -r src/* $output/src/
    cp $0 $output/run.bash

    PYTHONPATH=$PYTHONPATH:./src python -m torch.distributed.launch \
    --master_port=$((1000 + RANDOM % 9999)) --nproc_per_node=8 --use_env  src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --backbone resnet50 --resume  https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth \
    --batch_size 1 --epochs 500 --lr_drop 200 --num_queries 1000  --num_gpus 8   --layer1_num 3 | tee -a $output/history.txt

else
    echo "folder already exists"
fi






================================================
FILE: script/train/a1_train_stage1_res101.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi

# The name of this experiment.
name=$1

# Save logs and models under exp/$name; make a backup of the source.
output=exp/$name
if [ ! -d "$output"  ]; then
    echo "folder does not exist"
    mkdir -p $output/src
    cp -r src/* $output/src/
    cp $0 $output/run.bash

    PYTHONPATH=$PYTHONPATH:./src python -m torch.distributed.launch \
    --master_port=$((1000 + RANDOM % 9999)) --nproc_per_node=4 --use_env  src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --backbone resnet101 --resume  https://dl.fbaipublicfiles.com/detr/detr-r101-2c7b67e5.pth \
    --batch_size 1 --epochs 500 --lr_drop 200 --num_queries 1000  --num_gpus 4   --layer1_num 3 | tee -a $output/history.txt

else
    echo "folder already exists"
fi






================================================
FILE: script/train/a2_train_stage2_res50.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi

# The name of this experiment.
name=$1

# Save logs and models under exp/$name; make a backup of the source.
output=exp/$name
if [ ! -d "$output"  ]; then
    echo "folder does not exist"
    mkdir -p $output/src
    cp -r src/* $output/src/
    cp $0 $output/run.bash

    PYTHONPATH=$PYTHONPATH:./src python -m torch.distributed.launch \
    --master_port=$((1000 + RANDOM % 9999)) --nproc_per_node=8 --use_env  src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --LETRpost --backbone resnet50 --layer1_frozen --frozen_weights exp/res50_stage1/checkpoints/checkpoint.pth --no_opt \
    --batch_size 1 ${@:2} --epochs 300 --lr_drop 120 --num_queries 1000 --num_gpus 8 | tee -a $output/history.txt  

else
    echo "folder already exists"
fi





================================================
FILE: script/train/a3_train_stage2_res101.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi

# The name of this experiment.
name=$1

# Save logs and models under exp/$name; make a backup of the source.
output=exp/$name
if [ ! -d "$output"  ]; then
    echo "folder does not exist"
    mkdir -p $output/src
    cp -r src/* $output/src/
    cp $0 $output/run.bash

    CUDA_VISIBLE_DEVICES=1,3,8,9 PYTHONPATH=$PYTHONPATH:./src python -m torch.distributed.launch \
    --master_port=$((1000 + RANDOM % 9999)) --nproc_per_node=4 --use_env  src/main.py --coco_path data/wireframe_processed \
    --output_dir $output --LETRpost --backbone resnet101 --layer1_frozen --frozen_weights exp/res101_stage1/checkpoints/checkpoint.pth --no_opt \
    --batch_size 1 --epochs 300 --lr_drop 120 --num_queries 1000 --num_gpus 4 | tee -a $output/history.txt  

else
    echo "folder already exists"
fi




================================================
FILE: script/train/a4_train_stage2_focal_res50.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi

# The name of this experiment.
name=$1

# Save logs and models under exp/$name; make a backup of the source.
output=exp/$name
if [ ! -d "$output"  ]; then
    echo "folder does not exist"
    mkdir -p $output/src
    cp -r src/* $output/src/
    cp $0 $output/run.bash

    PYTHONPATH=$PYTHONPATH:./src python -m torch.distributed.launch \
        --master_port=$((1000 + RANDOM % 9999)) --nproc_per_node=8 --use_env  src/main.py --coco_path data/wireframe_processed \
        --output_dir $output  --LETRpost  --backbone resnet50  --layer1_frozen  --resume exp/res50_stage2/checkpoints/checkpoint.pth  \
        --no_opt --batch_size 1  --epochs 25  --lr_drop 25  --num_queries 1000  --num_gpus 8  --lr 1e-5  --label_loss_func focal_loss \
        --label_loss_params '{"gamma":2.0}'  --save_freq 1  |  tee -a $output/history.txt 

else
    echo "folder already exists"
fi





================================================
FILE: script/train/a5_train_stage2_focal_res101.sh
================================================
# Fail the script if there is any failure
set -e

if [[ $# -eq 0 ]] ; then
    echo 'Require Experiment Name'
    exit 1
fi

# The name of this experiment.
name=$1

# Save logs and models under exp/$name; make a backup of the source.
output=exp/$name
if [ ! -d "$output"  ]; then
    echo "folder does not exist"
    mkdir -p $output/src
    cp -r src/* $output/src/
    cp $0 $output/run.bash

    PYTHONPATH=$PYTHONPATH:./src python -m torch.distributed.launch \
        --master_port=$((1000 + RANDOM % 9999)) --nproc_per_node=4 --use_env  src/main.py --coco_path data/wireframe_processed \
        --output_dir $output  --LETRpost  --backbone resnet101  --layer1_frozen  --resume exp/res101_stage2/checkpoints/checkpoint.pth  --no_opt \
        --batch_size 1  --epochs 25  --lr_drop 25  --num_queries 1000  --num_gpus 4  --lr 1e-5  --label_loss_func focal_loss \
        --label_loss_params '{"gamma":2.0}'  --save_freq 1  |  tee -a $output/history.txt 

else
    echo "folder already exists"
fi





================================================
FILE: src/args.py
================================================
import argparse


def get_args_parser():
    parser = argparse.ArgumentParser('Set transformer detector', add_help=False)
    parser.add_argument('--lr', default=1e-4, type=float)
    parser.add_argument('--lr_backbone', default=1e-5, type=float)
    parser.add_argument('--batch_size', default=2, type=int)
    parser.add_argument('--weight_decay', default=1e-4, type=float)
    parser.add_argument('--epochs', default=300, type=int)
    parser.add_argument('--lr_drop', default=200, type=int)
    parser.add_argument('--clip_max_norm', default=0.1, type=float,
                        help='gradient clipping max norm')
    parser.add_argument('--save_freq', default=5, type=int)
    
    parser.add_argument('--benchmark', action='store_true',
                        help="Save line predictions for benchmark evaluation if the flag is provided")
    parser.add_argument('--append_word', default=None, type=str,
                        help="Suffix (e.g. the epoch number) appended to benchmark output names")
    # Model parameters
    # * Backbone
    parser.add_argument('--backbone', default='resnet50', type=str,
                        help="Name of the convolutional backbone to use")
    parser.add_argument('--dilation', action='store_true',
                        help="If true, we replace stride with dilation in the last convolutional block (DC5)")
    parser.add_argument('--position_embedding', default='sine', type=str, choices=('sine', 'learned'),
                        help="Type of positional embedding to use on top of the image features")

    # Load
    parser.add_argument('--layer1_frozen', action='store_true')
    parser.add_argument('--layer2_frozen', action='store_true')

    parser.add_argument('--frozen_weights', default='',
                        help='path to a checkpoint whose weights are loaded and kept frozen')
    parser.add_argument('--resume', default='', help='resume from checkpoint')
    parser.add_argument('--no_opt', action='store_true')

    # Transformer
    parser.add_argument('--LETRpost', action='store_true')
    parser.add_argument('--layer1_num', default=3, type=int)
    parser.add_argument('--layer2_num', default=2, type=int)

    # First Transformer
    parser.add_argument('--enc_layers', default=6, type=int,
                        help="Number of encoding layers in the transformer")
    parser.add_argument('--dec_layers', default=6, type=int,
                        help="Number of decoding layers in the transformer")
    parser.add_argument('--dim_feedforward', default=2048, type=int,
                        help="Intermediate size of the feedforward layers in the transformer blocks")
    parser.add_argument('--hidden_dim', default=256, type=int,
                        help="Size of the embeddings (dimension of the transformer)")
    parser.add_argument('--dropout', default=0.1, type=float,
                        help="Dropout applied in the transformer")
    parser.add_argument('--nheads', default=8, type=int,
                        help="Number of attention heads inside the transformer's attentions")
    parser.add_argument('--num_queries', default=1000, type=int,
                        help="Number of query slots")
    parser.add_argument('--pre_norm', action='store_true')


    # Second Transformer
    parser.add_argument('--second_enc_layers', default=6, type=int,
                        help="Number of encoding layers in the transformer")
    parser.add_argument('--second_dec_layers', default=6, type=int,
                        help="Number of decoding layers in the transformer")
    parser.add_argument('--second_dim_feedforward', default=2048, type=int,
                        help="Intermediate size of the feedforward layers in the transformer blocks")
    parser.add_argument('--second_hidden_dim', default=256, type=int,
                        help="Size of the embeddings (dimension of the transformer)")
    parser.add_argument('--second_dropout', default=0.1, type=float,
                        help="Dropout applied in the transformer")
    parser.add_argument('--second_nheads', default=8, type=int,
                        help="Number of attention heads inside the transformer's attentions")
    parser.add_argument('--second_pre_norm', action='store_true')
    # Loss
    parser.add_argument('--no_aux_loss', dest='aux_loss', action='store_false',
                        help="Disables auxiliary decoding losses (loss at each layer)")
    # * Matcher
    parser.add_argument('--set_cost_class', default=1, type=float,
                        help="Class coefficient in the matching cost")
    parser.add_argument('--set_cost_line', default=5, type=float,
                        help="L1 line coefficient in the matching cost")
    parser.add_argument('--set_cost_point', default=5, type=float,
                        help="L1 endpoint coefficient in the matching cost")

    # * Loss coefficients
    parser.add_argument('--dice_loss_coef', default=1, type=float)
    parser.add_argument('--point_loss_coef', default=5, type=float)
    parser.add_argument('--line_loss_coef', default=5, type=float)
    parser.add_argument('--eos_coef', default=0.1, type=float)
    parser.add_argument('--label_loss_func', default='cross_entropy', type=str)
    parser.add_argument('--label_loss_params', default='{}', type=str)

    # dataset parameters
    parser.add_argument('--dataset_file', default='coco')
    parser.add_argument('--coco_path', type=str)
    parser.add_argument('--coco_panoptic_path', type=str)
    parser.add_argument('--remove_difficult', action='store_true')

    parser.add_argument('--output_dir', default='',
                        help='path where to save, empty for no saving')
    parser.add_argument('--device', default='cuda',
                        help='device to use for training / testing')
    parser.add_argument('--seed', default=42, type=int)
   

    parser.add_argument('--start_epoch', default=0, type=int, metavar='N',
                        help='start epoch')
    parser.add_argument('--num_workers', default=2, type=int)
    parser.add_argument('--num_gpus', default=1, type=int)
    # distributed training parameters
    parser.add_argument('--world_size', default=1, type=int,
                        help='number of distributed processes')
    parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')


    parser.add_argument('--eval', action='store_true')
    parser.add_argument('--dataset', default='train', type=str, choices=('train', 'val'))

    return parser
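Because the parser above is built with `add_help=False`, it is meant to be composed as a parent parser rather than used directly. A self-contained sketch of that composition pattern (`get_demo_parser` is an illustrative stand-in for `get_args_parser`, reproducing only two of its flags):

```python
import argparse

def get_demo_parser():
    # add_help=False lets this parser be embedded in another one
    # without a duplicate -h/--help option.
    p = argparse.ArgumentParser('Set transformer detector', add_help=False)
    p.add_argument('--backbone', default='resnet50', type=str)
    p.add_argument('--num_queries', default=1000, type=int)
    return p

# The wrapping parser inherits all arguments from the parent.
parser = argparse.ArgumentParser('LETR', parents=[get_demo_parser()])
args = parser.parse_args(['--backbone', 'resnet101', '--num_queries', '1000'])
assert args.backbone == 'resnet101' and args.num_queries == 1000
```

The training scripts pass the same flags on the command line (e.g. `--backbone resnet101 --num_queries 1000`).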

================================================
FILE: src/datasets/__init__.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import torch.utils.data
import torchvision

from .coco import build as build_coco


def get_coco_api_from_dataset(dataset):
    for _ in range(10):
        if isinstance(dataset, torch.utils.data.Subset):
            dataset = dataset.dataset
    if isinstance(dataset, torchvision.datasets.CocoDetection):
        return dataset.coco


def build_dataset(image_set, args):

    return build_coco(image_set, args)


================================================
FILE: src/datasets/coco.py
================================================
"""
Modified based on DETR: https://github.com/facebookresearch/detr/blob/master/datasets/coco.py
"""
from pathlib import Path

import torch
import torch.utils.data
import torchvision

import datasets.transforms as T
import math
import numpy as np

class CocoDetection(torchvision.datasets.CocoDetection):
    def __init__(self, img_folder, ann_file, transforms, args):
        super(CocoDetection, self).__init__(img_folder, ann_file)
        self._transforms = transforms
        self.prepare = ConvertCocoPolysToMask() 
        self.args = args

    def __getitem__(self, idx):
        img, target = super(CocoDetection, self).__getitem__(idx)
        image_id = self.ids[idx]
        target = {'image_id': image_id, 'annotations': target}
        img, target = self.prepare(img, target, self.args)
        if self._transforms is not None:
            img, target = self._transforms(img, target)
        return img, target

class ConvertCocoPolysToMask(object):

    def __call__(self, image, target, args):
        w, h = image.size

        image_id = target["image_id"]
        image_id = torch.tensor([image_id])

        anno = target["annotations"]

        anno = [obj for obj in anno]
 
        lines = [obj["line"] for obj in anno]
        lines = torch.as_tensor(lines, dtype=torch.float32).reshape(-1, 4)

        lines[:, 2:] += lines[:, :2] #xyxy

        lines[:, 0::2].clamp_(min=0, max=w)
        lines[:, 1::2].clamp_(min=0, max=h)

        classes = [obj["category_id"] for obj in anno]
        classes = torch.tensor(classes, dtype=torch.int64)

        target = {}
        target["lines"] = lines
        target["labels"] = classes
        target["image_id"] = image_id

        # for conversion to coco api
        area = torch.tensor([obj["area"] for obj in anno])
        iscrowd = torch.tensor([obj["iscrowd"] if "iscrowd" in obj else 0 for obj in anno])
        target["area"] = area
        target["iscrowd"] = iscrowd

        target["orig_size"] = torch.as_tensor([int(h), int(w)])
        target["size"] = torch.as_tensor([int(h), int(w)])

        return image, target
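A minimal standalone sketch (not part of the repo) of the line-annotation conversion performed by `ConvertCocoPolysToMask.__call__` above: judging by the `#xyxy` comment, an annotation stores a line as `[x1, y1, dx, dy]`, so adding the first endpoint to the offset yields absolute endpoints `[x1, y1, x2, y2]`, which are then clamped to the image bounds.

```python
import torch

def convert_lines(raw_lines, w, h):
    lines = torch.as_tensor(raw_lines, dtype=torch.float32).reshape(-1, 4)
    lines[:, 2:] += lines[:, :2]          # offsets -> absolute second endpoint
    lines[:, 0::2].clamp_(min=0, max=w)   # clamp x coords to [0, w]
    lines[:, 1::2].clamp_(min=0, max=h)   # clamp y coords to [0, h]
    return lines

# e.g. a line starting at (10, 20) with offset (100, 5) in a 64x64 image:
# x2 = 110 is clamped to the image width
print(convert_lines([[10, 20, 100, 5]], w=64, h=64))  # -> [[10., 20., 64., 25.]]
```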


def make_coco_transforms(image_set, args):

    normalize = T.Compose([
        T.ToTensor(),
        T.Normalize([0.538, 0.494, 0.453], [0.257, 0.263, 0.273])
    ])

    scales = [480, 512, 544, 576, 608, 640, 672, 680, 690, 704, 736, 768, 788, 800]
    test_size = 1100
    max_size = 1333

    if args.eval:
        return T.Compose([
            T.RandomResize([test_size], max_size=max_size),
            normalize,
        ])
    else:
        if image_set == 'train':
            return T.Compose([
                T.RandomSelect(
                    T.RandomHorizontalFlip(),
                    T.RandomVerticalFlip(),
                ),
                T.RandomSelect(
                    T.RandomResize(scales, max_size=max_size),
                    T.Compose([
                        T.RandomResize([400, 500, 600]),
                        T.RandomSizeCrop(384, 600),
                        T.RandomResize(scales, max_size=max_size),
                    ])
                ),
                T.ColorJitter(),
                normalize,
            ])

        if image_set == 'val':
            return T.Compose([
                T.RandomResize([test_size], max_size=max_size),
                normalize,
            ])

    raise ValueError(f'unknown {image_set}')

def build(image_set, args):
    root = Path(args.coco_path)
    assert root.exists(), f'provided COCO path {root} does not exist'
    mode = 'lines'

    # the same paths are used for both eval and training runs
    PATHS = {
        "train": (root / "train2017", root / "annotations" / f'{mode}_train2017.json'),
        "val": (root / "val2017", root / "annotations" / f'{mode}_val2017.json'),
    }

    img_folder, ann_file = PATHS[image_set]
    dataset = CocoDetection(img_folder, ann_file, transforms=make_coco_transforms(image_set, args), args=args)
    return dataset
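A small sketch of how `build()` above resolves an `image_set` to its image folder and annotation file; the function and the `data/wireframe_processed` path are hypothetical, for illustration only.

```python
from pathlib import Path

def resolve_paths(coco_path, image_set, mode="lines"):
    # mirrors the PATHS dict in build(): COCO-style layout with
    # line annotations named f"{mode}_{split}2017.json"
    root = Path(coco_path)
    paths = {
        "train": (root / "train2017", root / "annotations" / f"{mode}_train2017.json"),
        "val": (root / "val2017", root / "annotations" / f"{mode}_val2017.json"),
    }
    return paths[image_set]

img_folder, ann_file = resolve_paths("data/wireframe_processed", "val")
print(img_folder.name, ann_file.name)  # -> val2017 lines_val2017.json
```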


================================================
FILE: src/datasets/transforms.py
================================================
"""
Transforms and data augmentation for both image + line.
modified based on https://github.com/facebookresearch/detr/blob/master/datasets/transforms.py
"""
import random

import PIL
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F
import numbers
import warnings
from typing import Tuple, List, Optional
from PIL import Image
from torch import Tensor
import math

from util.misc import interpolate
import numpy as np

def crop(image, target, region):
    cropped_image = F.crop(image, *region)

    target = target.copy()
    i, j, h, w = region

    # should we do something wrt the original size?
    target["size"] = torch.tensor([h, w])

    fields = ["labels", "area", "iscrowd"]

    if "lines" in target:
        lines = target["lines"]
        cropped_lines = lines - torch.as_tensor([j, i, j, i])
        
        eps = 1e-12

        # In dataset, we assume the left point has smaller x coord
        remove_x_min = cropped_lines[:, 2] < 0
        remove_x_max = cropped_lines[:, 0] > w
        remove_x = torch.logical_or(remove_x_min, remove_x_max)
        keep_x = ~remove_x

        # there is no assumption on y, so remove lines that have both y coord out of bound
        remove_y_min = torch.logical_and(cropped_lines[:, 1] < 0, cropped_lines[:, 3] < 0)
        remove_y_max = torch.logical_and(cropped_lines[:, 1] > h, cropped_lines[:, 3] > h)
        remove_y = torch.logical_or(remove_y_min, remove_y_max)
        keep_y = ~remove_y

        keep = torch.logical_and(keep_x, keep_y)
        cropped_lines = cropped_lines[keep]
        clamped_lines = torch.zeros_like(cropped_lines)

        # use a distinct loop index so the crop offset `i` unpacked from
        # `region` above is not clobbered
        for idx, line in enumerate(cropped_lines):
            x1, y1, x2, y2 = line
            slope = (y2 - y1) / (x2 - x1 + eps)
            if x1 < 0:
                x1 = 0
                y1 = y2 + (x1 - x2) * slope
            if y1 < 0:
                y1 = 0
                x1 = x2 - (y2 - y1) / slope
            if x2 > w:
                x2 = w
                y2 = y1 + (x2 - x1) * slope
            if y2 > h:
                y2 = h
                x2 = x1 + (y2 - y1) / slope

            clamped_lines[idx, :] = torch.tensor([x1, y1, x2, y2])

        target["lines"] = clamped_lines

        # `keep` only exists when "lines" is present, so filter the
        # remaining fields inside this branch to avoid a NameError
        for field in fields:
            target[field] = target[field][keep]

    return cropped_image, target
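A toy reproduction (assumptions mine) of the keep mask computed in `crop()` above: with endpoints ordered left-to-right, a line is dropped when it lies entirely to the left or right of the crop window, or when both y coordinates fall outside it.

```python
import torch

def keep_mask(lines, h, w):
    # lines: (N, 4) tensor of [x1, y1, x2, y2], with x1 <= x2 by convention
    remove_x = (lines[:, 2] < 0) | (lines[:, 0] > w)
    remove_y = ((lines[:, 1] < 0) & (lines[:, 3] < 0)) | \
               ((lines[:, 1] > h) & (lines[:, 3] > h))
    return ~(remove_x | remove_y)

lines = torch.tensor([[-5., 10., -1., 20.],   # fully left of crop -> drop
                      [ 5., -3., 30., 40.],   # crosses top edge   -> keep
                      [10., 90., 30., 95.]])  # fully below h      -> drop
print(keep_mask(lines, h=80.0, w=80.0))  # -> [False, True, False]
```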


def hflip(image, target):
    flipped_image = F.hflip(image)

    w, h = image.size

    target = target.copy()
    
    if "lines" in target:
        lines = target["lines"]   
        lines = lines[:, [2, 3, 0, 1]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])
        target["lines"] = lines

    return flipped_image, target


def vflip(image, target):
    flipped_image = F.vflip(image)

    w, h = image.size

    target = target.copy()

    if "lines" in target:
        lines = target["lines"]

        # in the dataset, if two points share the same x coord, we assume the first point is the upper point
        lines = lines * torch.as_tensor([1, -1, 1, -1]) + torch.as_tensor([0, h, 0, h])
        vertical_line_idx = (lines[:, 0] == lines[:, 2])
        lines[vertical_line_idx] = torch.index_select(lines[vertical_line_idx], 1, torch.tensor([2,3,0,1]))
        target["lines"] = lines

    return flipped_image, target
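A quick standalone check (my own toy, not repo code) of the `vflip` endpoint arithmetic above: each y maps to `h - y`, and for a vertical line (equal x coordinates) the endpoints are swapped so the first point remains the upper one; plain fancy indexing stands in for `torch.index_select`.

```python
import torch

def vflip_lines(lines, h):
    # y -> h - y for both endpoints
    lines = lines * torch.as_tensor([1., -1., 1., -1.]) + torch.as_tensor([0., h, 0., h])
    # restore "upper point first" for vertical lines
    vertical = lines[:, 0] == lines[:, 2]
    lines[vertical] = lines[vertical][:, [2, 3, 0, 1]]
    return lines

lines = torch.tensor([[3., 1., 3., 5.]])  # vertical line, upper point first
print(vflip_lines(lines, h=10.0))  # -> [[3., 5., 3., 9.]]
```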


def ccw_rotation(image, target):
    rotated_image = F.rotate(image, 90, expand=True)
    w, h = rotated_image.size

    target = target.copy()

    target["size"] = torch.tensor([h, w])

    if "lines" in target:
        lines = target["lines"]
        lines = lines[:, [1, 0, 3, 2]] * torch.as_tensor([1, -1, 1, -1]) + torch.as_tensor([0, h, 0, h])
        # in the dataset, we assume the first point is the left point
        x_switch_idx = lines[:, 0] > lines[:, 2]
        lines[x_switch_idx] = torch.index_select(lines[x_switch_idx], 1, torch.tensor([2, 3, 0, 1]))

        # in the dataset, if two points share the same x coord, we assume the first point is the upper point
        y_switch_idx = torch.logical_and(lines[:, 0] == lines[:, 2], lines[:, 1] > lines[:, 3])
        lines[y_switch_idx] = torch.index_select(lines[y_switch_idx], 1, torch.tensor([2, 3, 0, 1]))

        target["lines"] = lines

    return rotated_image, target


def cw_rotation(image, target):
    rotated_image = F.rotate(image, -90, expand=True)
    w, h = rotated_image.size

    target = target.copy()

    target["size"] = torch.tensor([h, w])

    if "lines" in target:
        lines = target["lines"]
        lines = lines[:, [1, 0, 3, 2]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])

        # in the dataset, we assume the first point is the left point
        x_switch_idx = lines[:, 0] > lines[:, 2]
        lines[x_switch_idx] = torch.index_select(
            lines[x_switch_idx], 1, torch.tensor([2, 3, 0, 1]))

        # in the dataset, if two points share the same x coord, we assume the first point is the upper point
        y_switch_idx = torch.logical_and(
            lines[:, 0] == lines[:, 2], lines[:, 1] > lines[:, 3])
        lines[y_switch_idx] = torch.index_select(
            lines[y_switch_idx], 1, torch.tensor([2, 3, 0, 1]))

        target["lines"] = lines

    return rotated_image, target
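A toy check (my own sketch, not repo code) of the 90° clockwise rotation mapping used above: each point goes `(x, y) -> (w_new - y, x)`, where `w_new` is the rotated image width (the original height), after which endpoints are reordered so the left point comes first.

```python
import torch

def cw_rotate_lines(lines, new_w):
    # swap x/y per endpoint, negate the new x, and shift by the rotated width
    lines = lines[:, [1, 0, 3, 2]] * torch.as_tensor([-1., 1., -1., 1.]) \
            + torch.as_tensor([new_w, 0., new_w, 0.])
    # restore "left point first"
    swap = lines[:, 0] > lines[:, 2]
    lines[swap] = lines[swap][:, [2, 3, 0, 1]]
    return lines

lines = torch.tensor([[1., 2., 4., 6.]])
# (1, 2) -> (10 - 2, 1) = (8, 1); (4, 6) -> (4, 4); reordered left-first
print(cw_rotate_lines(lines, new_w=10.0))  # -> [[4., 4., 8., 1.]]
```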

def resize(image, target, size, max_size=None):
    # size can be min_size (scalar) or (w, h) tuple

    def get_size_with_aspect_ratio(image_size, size, max_size=None):
        w, h = image_size
        if max_size is not None:
            min_original_size = float(min((w, h)))
            max_original_size = float(max((w, h)))
            if max_original_size / min_original_size * size > max_size:
                size = int(round(max_size * min_original_size / max_original_size))

        if (w <= h and w == size) or (h <= w and h == size):
            return (h, w)

        if w < h:
            ow = size
            oh = int(size * h / w)
        else:
            oh = size
            ow = int(size * w / h)

        return (oh, ow)
SYMBOL INDEX (414 symbols across 37 files)

FILE: evaluation/eval-aph-post-wireframe.py
  function c (line 43) | def c(x):
  function imshow (line 47) | def imshow(im):
  function main (line 63) | def main():

FILE: evaluation/eval-aph-post-york.py
  function c (line 43) | def c(x):
  function imshow (line 47) | def imshow(im):
  function main (line 63) | def main():

FILE: evaluation/eval-aph-score-wireframe.py
  function main (line 40) | def main():

FILE: evaluation/eval-aph-score-york.py
  function main (line 40) | def main():

FILE: evaluation/eval-fscore-wireframe.py
  function f_score (line 35) | def f_score(tp, fp):
  function line_score (line 45) | def line_score(path, threshold=5):
  function work (line 87) | def work(path):

FILE: evaluation/eval-fscore-york.py
  function f_score (line 33) | def f_score(tp, fp):
  function line_score (line 43) | def line_score(path, threshold=5):
  function work (line 86) | def work(path):

FILE: evaluation/eval-sAP-wireframe.py
  function line_score (line 34) | def line_score(path, threshold=5):
  function work (line 79) | def work(path):

FILE: evaluation/eval-sAP-york.py
  function line_score (line 34) | def line_score(path, threshold=5):
  function work (line 80) | def work(path):

FILE: evaluation/lcnn/box.py
  class BoxError (line 50) | class BoxError(Exception):
  class BoxKeyError (line 54) | class BoxKeyError(BoxError, KeyError, AttributeError):
  function _to_json (line 61) | def _to_json(obj, filename=None,
  function _from_json (line 73) | def _from_json(json_string=None, filename=None,
  function _to_yaml (line 89) | def _to_yaml(obj, filename=None, default_flow_style=False,
  function _from_yaml (line 104) | def _from_yaml(yaml_string=None, filename=None,
  function _safe_key (line 121) | def _safe_key(key):
  function _safe_attr (line 128) | def _safe_attr(attr, camel_killer=False, replacement_char='x'):
  function _camel_killer (line 157) | def _camel_killer(attr):
  function _recursive_tuples (line 174) | def _recursive_tuples(iterable, box_class, recreate_tuples=False, **kwar...
  function _conversion_checks (line 187) | def _conversion_checks(item, keys, box_config, check_only=False,
  function _get_box_config (line 230) | def _get_box_config(cls, kwargs):
  class Box (line 250) | class Box(dict):
    method __new__ (line 271) | def __new__(cls, *args, **kwargs):
    method __init__ (line 280) | def __init__(self, *args, **kwargs):
    method __add_ordered (line 320) | def __add_ordered(self, key):
    method box_it_up (line 325) | def box_it_up(self):
    method __hash__ (line 337) | def __hash__(self):
    method __dir__ (line 345) | def __dir__(self):
    method get (line 382) | def get(self, key, default=None):
    method copy (line 392) | def copy(self):
    method __copy__ (line 395) | def __copy__(self):
    method __deepcopy__ (line 398) | def __deepcopy__(self, memodict=None):
    method __setstate__ (line 406) | def __setstate__(self, state):
    method __getitem__ (line 410) | def __getitem__(self, item, _ignore_default=False):
    method keys (line 423) | def keys(self):
    method values (line 428) | def values(self):
    method items (line 431) | def items(self):
    method __get_default (line 434) | def __get_default(self, item):
    method __box_config (line 445) | def __box_config(self):
    method __convert_and_store (line 452) | def __convert_and_store(self, item, value):
    method __create_lineage (line 481) | def __create_lineage(self):
    method __getattr__ (line 489) | def __getattr__(self, item):
    method __setitem__ (line 517) | def __setitem__(self, key, value):
    method __setattr__ (line 528) | def __setattr__(self, key, value):
    method __delitem__ (line 558) | def __delitem__(self, key):
    method __delattr__ (line 566) | def __delattr__(self, item):
    method pop (line 583) | def pop(self, key, *args):
    method clear (line 603) | def clear(self):
    method popitem (line 607) | def popitem(self):
    method __repr__ (line 614) | def __repr__(self):
    method __str__ (line 617) | def __str__(self):
    method __iter__ (line 620) | def __iter__(self):
    method __reversed__ (line 624) | def __reversed__(self):
    method to_dict (line 628) | def to_dict(self):
    method update (line 645) | def update(self, item=None, **kwargs):
    method setdefault (line 664) | def setdefault(self, item, default=None):
    method to_json (line 675) | def to_json(self, filename=None,
    method from_json (line 690) | def from_json(cls, json_string=None, filename=None,
    method to_yaml (line 717) | def to_yaml(self, filename=None, default_flow_style=False,
    method from_yaml (line 735) | def from_yaml(cls, yaml_string=None, filename=None,
  class BoxList (line 763) | class BoxList(list):
    method __init__ (line 769) | def __init__(self, iterable=None, box_class=Box, **box_options):
    method __delitem__ (line 784) | def __delitem__(self, key):
    method __setitem__ (line 789) | def __setitem__(self, key, value):
    method append (line 794) | def append(self, p_object):
    method extend (line 810) | def extend(self, iterable):
    method insert (line 814) | def insert(self, index, p_object):
    method __repr__ (line 822) | def __repr__(self):
    method __str__ (line 825) | def __str__(self):
    method __copy__ (line 828) | def __copy__(self):
    method __deepcopy__ (line 833) | def __deepcopy__(self, memodict=None):
    method __hash__ (line 841) | def __hash__(self):
    method to_list (line 848) | def to_list(self):
    method to_json (line 861) | def to_json(self, filename=None,
    method from_json (line 885) | def from_json(cls, json_string=None, filename=None, encoding="utf-8",
    method to_yaml (line 913) | def to_yaml(self, filename=None, default_flow_style=False,
    method from_yaml (line 931) | def from_yaml(cls, yaml_string=None, filename=None,
    method box_it_up (line 959) | def box_it_up(self):
  class ConfigBox (line 965) | class ConfigBox(Box):
    method __getattr__ (line 983) | def __getattr__(self, item):
    method __dir__ (line 991) | def __dir__(self):
    method bool (line 996) | def bool(self, item, default=None):
    method int (line 1019) | def int(self, item, default=None):
    method float (line 1034) | def float(self, item, default=None):
    method list (line 1049) | def list(self, item, default=None, spliter=",", strip=True, mod=None):
    method getboolean (line 1074) | def getboolean(self, item, default=None):
    method getint (line 1077) | def getint(self, item, default=None):
    method getfloat (line 1080) | def getfloat(self, item, default=None):
    method __repr__ (line 1083) | def __repr__(self):
  class SBox (line 1087) | class SBox(Box):
    method dict (line 1097) | def dict(self):
    method json (line 1101) | def json(self):
    method yaml (line 1106) | def yaml(self):
    method __repr__ (line 1109) | def __repr__(self):

FILE: evaluation/lcnn/datasets.py
  class WireframeDataset (line 17) | class WireframeDataset(Dataset):
    method __init__ (line 18) | def __init__(self, rootdir, split):
    method __len__ (line 27) | def __len__(self):
    method __getitem__ (line 30) | def __getitem__(self, idx):
    method adjacency_matrix (line 81) | def adjacency_matrix(self, n, link):
  function collate (line 90) | def collate(batch):

FILE: evaluation/lcnn/metric.py
  function ap (line 11) | def ap(tp, fp):
  function fscore (line 23) | def fscore(tp, fp):
  function APJ (line 32) | def APJ(vert_pred, vert_gt, max_distance, im_ids):
  function nms_j (line 68) | def nms_j(heatmap, delta=1):
  function mAPJ (line 82) | def mAPJ(pred, truth, distances, im_ids):
  function post_jheatmap (line 86) | def post_jheatmap(heatmap, offset=None, delta=1):
  function vectorized_wireframe_2d_metric (line 103) | def vectorized_wireframe_2d_metric(
  function vectorized_wireframe_3d_metric (line 155) | def vectorized_wireframe_3d_metric(
  function msTPFP (line 194) | def msTPFP(line_pred, line_gt, threshold):
  function msAP (line 213) | def msAP(line_pred, line_gt, threshold):

FILE: evaluation/lcnn/models/hourglass_pose.py
  class Bottleneck2D (line 14) | class Bottleneck2D(nn.Module):
    method __init__ (line 17) | def __init__(self, inplanes, planes, stride=1, downsample=None):
    method forward (line 30) | def forward(self, x):
  class Hourglass (line 53) | class Hourglass(nn.Module):
    method __init__ (line 54) | def __init__(self, block, num_blocks, planes, depth):
    method _make_residual (line 60) | def _make_residual(self, block, num_blocks, planes):
    method _make_hour_glass (line 66) | def _make_hour_glass(self, block, num_blocks, planes, depth):
    method _hour_glass_forward (line 77) | def _hour_glass_forward(self, n, x):
    method forward (line 91) | def forward(self, x):
  class HourglassNet (line 95) | class HourglassNet(nn.Module):
    method __init__ (line 98) | def __init__(self, block, head, depth, num_stacks, num_blocks, num_cla...
    method _make_residual (line 137) | def _make_residual(self, block, planes, blocks, stride=1):
    method _make_fc (line 157) | def _make_fc(self, inplanes, outplanes):
    method forward (line 162) | def forward(self, x):
  function hg (line 192) | def hg(**kwargs):

FILE: evaluation/lcnn/models/line_vectorizer.py
  class LineVectorizer (line 15) | class LineVectorizer(nn.Module):
    method __init__ (line 16) | def __init__(self, backbone):
    method forward (line 45) | def forward(self, input_dict):
    method sample_lines (line 152) | def sample_lines(self, meta, jmap, joff, mode):
  function non_maximum_suppression (line 248) | def non_maximum_suppression(a):
  class Bottleneck1D (line 254) | class Bottleneck1D(nn.Module):
    method __init__ (line 255) | def __init__(self, inplanes, outplanes):
    method forward (line 271) | def forward(self, x):

FILE: evaluation/lcnn/models/multitask_learner.py
  class MultitaskHead (line 11) | class MultitaskHead(nn.Module):
    method __init__ (line 12) | def __init__(self, input_channels, num_class):
    method forward (line 28) | def forward(self, x):
  class MultitaskLearner (line 32) | class MultitaskLearner(nn.Module):
    method __init__ (line 33) | def __init__(self, backbone):
    method forward (line 40) | def forward(self, input_dict):
  function l2loss (line 93) | def l2loss(input, target):
  function cross_entropy_loss (line 97) | def cross_entropy_loss(logits, positive):
  function sigmoid_l1_loss (line 102) | def sigmoid_l1_loss(logits, target, offset=0.0, mask=None):

FILE: evaluation/lcnn/postprocess.py
  function pline (line 4) | def pline(x1, y1, x2, y2, x, y):
  function psegment (line 14) | def psegment(x1, y1, x2, y2, x, y):
  function plambda (line 24) | def plambda(x1, y1, x2, y2, x, y):
  function postprocess (line 31) | def postprocess(lines, scores, threshold=0.01, tol=1e9, do_clip=False):

FILE: evaluation/lcnn/trainer.py
  class Trainer (line 23) | class Trainer(object):
    method __init__ (line 24) | def __init__(self, device, model, optimizer, train_loader, val_loader,...
    method run_tensorboard (line 54) | def run_tensorboard(self):
    method _loss (line 69) | def _loss(self, result):
    method validate (line 101) | def validate(self):
    method train_epoch (line 163) | def train_epoch(self):
    method _write_metrics (line 205) | def _write_metrics(self, size, total_loss, prefix, do_print=False):
    method _plot_samples (line 228) | def _plot_samples(self, i, index, result, meta, target, prefix):
    method train (line 281) | def train(self):
  function c (line 299) | def c(x):
  function imshow (line 303) | def imshow(im):
  function tprint (line 312) | def tprint(*args):
  function pprint (line 318) | def pprint(*args):
  function _launch_tensorboard (line 324) | def _launch_tensorboard(board_out, port, out):

FILE: evaluation/lcnn/utils.py
  class benchmark (line 11) | class benchmark(object):
    method __init__ (line 12) | def __init__(self, msg, enable=True, fmt="%0.3g"):
    method __enter__ (line 17) | def __enter__(self):
    method __exit__ (line 22) | def __exit__(self, *args):
  function quiver (line 29) | def quiver(x, y, ax):
  function recursive_to (line 45) | def recursive_to(input, device):
  function np_softmax (line 60) | def np_softmax(x, axis=0):
  function argsort2d (line 66) | def argsort2d(arr):
  function __parallel_handle (line 70) | def __parallel_handle(f, q_in, q_out):
  function parmap (line 78) | def parmap(f, X, nprocs=multiprocessing.cpu_count(), progress_bar=lambda...

FILE: evaluation/process.py
  function main (line 44) | def main():
  function c (line 129) | def c(x):

FILE: helper/wireframe.py
  function main (line 25) | def main():
  function save_and_process (line 77) | def save_and_process(image_path, image_name, image, lines):

FILE: helper/wireframe_eval.py
  function inrange (line 35) | def inrange(v, shape):
  function to_int (line 39) | def to_int(x):
  function save_heatmap (line 43) | def save_heatmap(prefix, image, lines):
  function main (line 195) | def main():

FILE: helper/york.py
  function main (line 30) | def main():
  function save_and_process (line 87) | def save_and_process(image_path, image_name, image, lines):

FILE: helper/york_eval.py
  function inrange (line 38) | def inrange(v, shape):
  function to_int (line 42) | def to_int(x):
  function save_heatmap (line 46) | def save_heatmap(prefix, image, lines):
  function main (line 190) | def main():

FILE: src/args.py
  function get_args_parser (line 4) | def get_args_parser():

FILE: src/datasets/__init__.py
  function get_coco_api_from_dataset (line 8) | def get_coco_api_from_dataset(dataset):
  function build_dataset (line 18) | def build_dataset(image_set, args):

FILE: src/datasets/coco.py
  class CocoDetection (line 14) | class CocoDetection(torchvision.datasets.CocoDetection):
    method __init__ (line 15) | def __init__(self, img_folder, ann_file, transforms, args):
    method __getitem__ (line 21) | def __getitem__(self, idx):
  class ConvertCocoPolysToMask (line 30) | class ConvertCocoPolysToMask(object):
    method __call__ (line 32) | def __call__(self, image, target, args):
  function make_coco_transforms (line 73) | def make_coco_transforms(image_set, args):
  function build (line 116) | def build(image_set, args):

FILE: src/datasets/transforms.py
  function crop (line 21) | def crop(image, target, region):
  function hflip (line 79) | def hflip(image, target):
  function vflip (line 95) | def vflip(image, target):
  function ccw_rotation (line 114) | def ccw_rotation(image, target):
  function cw_rotation (line 138) | def cw_rotation(image, target):
  function resize (line 165) | def resize(image, target, size, max_size=None):
  function pad (line 215) | def pad(image, target, padding):
  class RandomCrop (line 229) | class RandomCrop(object):
    method __init__ (line 230) | def __init__(self, size):
    method __call__ (line 233) | def __call__(self, img, target):
  class RandomSizeCrop (line 238) | class RandomSizeCrop(object):
    method __init__ (line 239) | def __init__(self, min_size: int, max_size: int):
    method __call__ (line 243) | def __call__(self, img: PIL.Image.Image, target: dict):
  class CenterCrop (line 250) | class CenterCrop(object):
    method __init__ (line 251) | def __init__(self, size):
    method __call__ (line 254) | def __call__(self, img, target):
  class RandomHorizontalFlip (line 262) | class RandomHorizontalFlip(object):
    method __init__ (line 263) | def __init__(self, p=0.5):
    method __call__ (line 266) | def __call__(self, img, target):
  class RandomVerticalFlip (line 271) | class RandomVerticalFlip(object):
    method __init__ (line 272) | def __init__(self, p=0.5):
    method __call__ (line 275) | def __call__(self, img, target):
  class RandomCounterClockwiseRotation (line 280) | class RandomCounterClockwiseRotation(object):
    method __init__ (line 281) | def __init__(self, p=0.5):
    method __call__ (line 284) | def __call__(self, img, target):
  class RandomClockwiseRotation (line 289) | class RandomClockwiseRotation(object):
    method __init__ (line 290) | def __init__(self, p=0.5):
    method __call__ (line 293) | def __call__(self, img, target):
  class RandomResize (line 298) | class RandomResize(object):
    method __init__ (line 299) | def __init__(self, sizes, max_size=None):
    method __call__ (line 304) | def __call__(self, img, target=None):
  class RandomPad (line 309) | class RandomPad(object):
    method __init__ (line 310) | def __init__(self, max_pad):
    method __call__ (line 313) | def __call__(self, img, target):
  class RandomErasing (line 318) | class RandomErasing(object):
    method __init__ (line 319) | def __init__(self, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=...
    method get_params (line 343) | def get_params(img: Tensor, scale: Tuple[float, float], ratio: Tuple[f...
    method __call__ (line 377) | def __call__(self, img, target):
  class ColorJitter (line 385) | class ColorJitter(object):
    method __init__ (line 386) | def __init__(self, brightness=0.4, contrast=0.4, saturation=0.4, hue=0...
    method _check_input (line 393) | def _check_input(self, value, name, center=1, bound=(0, float('inf')),...
    method __call__ (line 412) | def __call__(self, img, target):
  class RandomSelect (line 437) | class RandomSelect(object):
    method __init__ (line 438) | def __init__(self, transforms1, transforms2, p=0.5):
    method __call__ (line 443) | def __call__(self, img, target):
  class ToTensor (line 449) | class ToTensor(object):
    method __call__ (line 450) | def __call__(self, img, target):
  class Normalize (line 453) | class Normalize(object):
    method __init__ (line 454) | def __init__(self, mean, std):
    method __call__ (line 458) | def __call__(self, image, target=None):
  class Compose (line 473) | class Compose(object):
    method __init__ (line 474) | def __init__(self, transforms):
    method __call__ (line 477) | def __call__(self, image, target):
    method __repr__ (line 482) | def __repr__(self):

FILE: src/engine.py
  function train_one_epoch (line 18) | def train_one_epoch(model, criterion, postprocessors, data_loader, optim...
  function evaluate (line 77) | def evaluate(model, criterion, postprocessors, data_loader, base_ds, dev...

FILE: src/main.py
  function main (line 20) | def main(args):

FILE: src/models/__init__.py
  function build_model (line 5) | def build_model(args):

FILE: src/models/backbone.py
  class FrozenBatchNorm2d (line 19) | class FrozenBatchNorm2d(torch.nn.Module):
    method __init__ (line 28) | def __init__(self, n):
    method _load_from_state_dict (line 35) | def _load_from_state_dict(self, state_dict, prefix, local_metadata, st...
    method forward (line 45) | def forward(self, x):
  class BackboneBase (line 58) | class BackboneBase(nn.Module):
    method __init__ (line 60) | def __init__(self, backbone: nn.Module, train_backbone: bool, num_chan...
    method forward (line 72) | def forward(self, tensor_list: NestedTensor):
  class Backbone (line 84) | class Backbone(BackboneBase):
    method __init__ (line 86) | def __init__(self, name: str,
  class Joiner (line 97) | class Joiner(nn.Sequential):
    method __init__ (line 98) | def __init__(self, backbone, position_embedding):
    method forward (line 101) | def forward(self, tensor_list: NestedTensor):
  function build_backbone (line 113) | def build_backbone(args):

FILE: src/models/letr.py
  class LETR (line 19) | class LETR(nn.Module):
    method __init__ (line 21) | def __init__(self, backbone, transformer, num_classes, num_queries, ar...
    method forward (line 38) | def forward(self, samples, postprocessors=None, targets=None, criterio...
    method _set_aux_loss (line 57) | def _set_aux_loss(self, outputs_class, outputs_coord):
  class SetCriterion (line 60) | class SetCriterion(nn.Module):
    method __init__ (line 62) | def __init__(self, num_classes, weight_dict, eos_coef, losses, args, m...
    method loss_lines_labels (line 81) | def loss_lines_labels(self, outputs, targets,  num_items,  log=False, ...
    method label_focal_loss (line 104) | def label_focal_loss(self, input, target, weight, gamma=2.0):
    method loss_cardinality (line 124) | def loss_cardinality(self, outputs, targets,  num_items, origin_indice...
    method loss_lines_POST (line 137) | def loss_lines_POST(self, outputs, targets, num_items, origin_indices=...
    method loss_lines (line 158) | def loss_lines(self, outputs, targets, num_items, origin_indices=None):
    method _get_src_permutation_idx (line 173) | def _get_src_permutation_idx(self, indices):
    method _get_tgt_permutation_idx (line 179) | def _get_tgt_permutation_idx(self, indices):
    method get_loss (line 185) | def get_loss(self, loss, outputs, targets, num_items, **kwargs):
    method forward (line 197) | def forward(self, outputs, targets, origin_indices=None):
  class PostProcess_Line (line 245) | class PostProcess_Line(nn.Module):
    method forward (line 249) | def forward(self, outputs, target_sizes, output_type):
  class MLP (line 302) | class MLP(nn.Module):
    method __init__ (line 305) | def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
    method forward (line 311) | def forward(self, x):
  function build (line 317) | def build(args):

FILE: src/models/letr_stack.py
  class LETRstack (line 18) | class LETRstack(nn.Module):
    method __init__ (line 19) | def __init__(self, letr, args):
    method forward (line 48) | def forward(self, samples, postprocessors=None, targets=None, criterio...
    method _set_aux_loss (line 83) | def _set_aux_loss(self, outputs_class, outputs_coord):
    method _set_aux_loss_POST (line 91) | def _set_aux_loss_POST(self, outputs_class, outputs_coord):
  function _expand (line 97) | def _expand(tensor, length: int):
  class MLP (line 100) | class MLP(nn.Module):
    method __init__ (line 103) | def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
    method forward (line 109) | def forward(self, x):
  class Transformer (line 115) | class Transformer(nn.Module):
    method __init__ (line 117) | def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
    method _reset_parameters (line 137) | def _reset_parameters(self):
    method forward (line 142) | def forward(self, src, mask, query_embed, pos_embed):
  class TransformerEncoder (line 155) | class TransformerEncoder(nn.Module):
    method __init__ (line 157) | def __init__(self, encoder_layer, num_layers, norm=None):
    method forward (line 163) | def forward(self, src,
  class TransformerDecoder (line 178) | class TransformerDecoder(nn.Module):
    method __init__ (line 180) | def __init__(self, decoder_layer, num_layers, norm=None, return_interm...
    method forward (line 187) | def forward(self, tgt, memory,
  class TransformerEncoderLayer (line 219) | class TransformerEncoderLayer(nn.Module):
    method __init__ (line 221) | def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
    method with_pos_embed (line 238) | def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    method forward_post (line 241) | def forward_post(self,
    method forward_pre (line 256) | def forward_pre(self, src,
    method forward (line 270) | def forward(self, src,
  class TransformerDecoderLayer (line 279) | class TransformerDecoderLayer(nn.Module):
    method __init__ (line 281) | def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
    method with_pos_embed (line 301) | def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    method forward_post (line 304) | def forward_post(self, tgt, memory,
    method forward_pre (line 327) | def forward_pre(self, tgt, memory,
    method forward (line 350) | def forward(self, tgt, memory,
  function _get_clones (line 364) | def _get_clones(module, N):
  function _get_activation_fn (line 367) | def _get_activation_fn(activation):

FILE: src/models/matcher.py
  class HungarianMatcher_Line (line 8) | class HungarianMatcher_Line(nn.Module):
    method __init__ (line 16) | def __init__(self, cost_class: float = 1, cost_line: float = 1):
    method forward (line 29) | def forward(self, outputs, targets):
  function build_matcher (line 80) | def build_matcher(args, type=None):

FILE: src/models/multi_head_attention.py
  function dropout (line 22) | def dropout(input, p=0.5, training=True, inplace=False):
  function _get_softmax_dim (line 48) | def _get_softmax_dim(name, ndim, stacklevel):
  function softmax (line 58) | def softmax(input, dim=None, _stacklevel=3, dtype=None):
  function linear (line 90) | def linear(input, weight, bias=None):
  function multi_head_attention_forward (line 116) | def multi_head_attention_forward(query: Tensor, key: Tensor, value: Tens...
  class MultiheadAttention (line 381) | class MultiheadAttention(Module):
    method __init__ (line 412) | def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bi...
    method _reset_parameters (line 451) | def _reset_parameters(self):
    method __setstate__ (line 467) | def __setstate__(self, state):
    method forward (line 474) | def forward(self, query, key, value, key_padding_mask=None,

FILE: src/models/position_encoding.py
  class PositionEmbeddingSine (line 12) | class PositionEmbeddingSine(nn.Module):
    method __init__ (line 17) | def __init__(self, num_pos_feats=64, temperature=10000, normalize=Fals...
    method forward (line 28) | def forward(self, tensor_list: NestedTensor):
  class PositionEmbeddingLearned (line 51) | class PositionEmbeddingLearned(nn.Module):
    method __init__ (line 55) | def __init__(self, num_pos_feats=256):
    method reset_parameters (line 61) | def reset_parameters(self):
    method forward (line 65) | def forward(self, tensor_list: NestedTensor):
  function build_position_encoding (line 79) | def build_position_encoding(args):

FILE: src/models/transformer.py
  class Transformer (line 18) | class Transformer(nn.Module):
    method __init__ (line 20) | def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
    method _reset_parameters (line 42) | def _reset_parameters(self):
    method forward (line 47) | def forward(self, src, mask, query_embed, pos_embed):
  class TransformerEncoder (line 64) | class TransformerEncoder(nn.Module):
    method __init__ (line 66) | def __init__(self, encoder_layer, num_layers, norm=None):
    method forward (line 72) | def forward(self, src,
  class TransformerDecoder (line 87) | class TransformerDecoder(nn.Module):
    method __init__ (line 89) | def __init__(self, decoder_layer, num_layers, norm=None, return_interm...
    method forward (line 96) | def forward(self, tgt, memory,
  class TransformerEncoderLayer (line 128) | class TransformerEncoderLayer(nn.Module):
    method __init__ (line 130) | def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
    method with_pos_embed (line 146) | def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    method forward_post (line 149) | def forward_post(self,
    method forward_pre (line 164) | def forward_pre(self, src,
    method forward (line 178) | def forward(self, src,
  class TransformerDecoderLayer (line 187) | class TransformerDecoderLayer(nn.Module):
    method __init__ (line 189) | def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
    method with_pos_embed (line 209) | def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    method forward_post (line 212) | def forward_post(self, tgt, memory,
    method forward_pre (line 235) | def forward_pre(self, tgt, memory,
    method forward (line 258) | def forward(self, tgt, memory,
  function _get_clones (line 272) | def _get_clones(module, N):
  function build_transformer (line 276) | def build_transformer(args):
  function _get_activation_fn (line 289) | def _get_activation_fn(activation):

FILE: src/util/misc.py
  class SmoothedValue (line 26) | class SmoothedValue(object):
    method __init__ (line 31) | def __init__(self, window_size=20, fmt=None):
    method update (line 39) | def update(self, value, n=1):
    method synchronize_between_processes (line 44) | def synchronize_between_processes(self):
    method median (line 58) | def median(self):
    method avg (line 63) | def avg(self):
    method global_avg (line 68) | def global_avg(self):
    method max (line 72) | def max(self):
    method value (line 76) | def value(self):
    method __str__ (line 79) | def __str__(self):
  function all_gather (line 88) | def all_gather(data):
  function reduce_dict (line 131) | def reduce_dict(input_dict, average=True):
  class MetricLogger (line 158) | class MetricLogger(object):
    method __init__ (line 159) | def __init__(self, delimiter="\t"):
    method update (line 163) | def update(self, **kwargs):
    method __getattr__ (line 170) | def __getattr__(self, attr):
    method __str__ (line 178) | def __str__(self):
    method synchronize_between_processes (line 186) | def synchronize_between_processes(self):
    method add_meter (line 190) | def add_meter(self, name, meter):
    method log_every (line 193) | def log_every(self, iterable, print_freq, header=None):
  function get_sha (line 248) | def get_sha():
  function collate_fn (line 268) | def collate_fn(batch):
  function _max_by_axis (line 274) | def _max_by_axis(the_list):
  function nested_tensor_from_tensor_list (line 283) | def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
  function _onnx_nested_tensor_from_tensor_list (line 311) | def _onnx_nested_tensor_from_tensor_list(tensor_list):
  class NestedTensor (line 339) | class NestedTensor(object):
    method __init__ (line 340) | def __init__(self, tensors, mask: Optional[Tensor]):
    method to (line 344) | def to(self, device):
    method decompose (line 355) | def decompose(self):
    method __repr__ (line 358) | def __repr__(self):
  function setup_for_distributed (line 362) | def setup_for_distributed(is_master):
  function is_dist_avail_and_initialized (line 377) | def is_dist_avail_and_initialized():
  function get_world_size (line 385) | def get_world_size():
  function get_rank (line 391) | def get_rank():
  function is_main_process (line 397) | def is_main_process():
  function save_on_master (line 401) | def save_on_master(*args, **kwargs):
  function init_distributed_mode (line 406) | def init_distributed_mode(args):
  function accuracy (line 432) | def accuracy(output, target, topk=(1,)):
  function interpolate (line 450) | def interpolate(input, size=None, scale_factor=None, mode="nearest", ali...
Condensed preview — 61 files, each showing path, character count, and a content snippet. The full structured content totals 2,474K characters.
[
  {
    "path": ".gitignore",
    "chars": 184,
    "preview": ".nfs*\n*.pyc\n.dumbo.json\n.DS_Store\n.*.swp\n*.pth\n**/__pycache__/**\n.ipynb_checkpoints/\n*.tmp\n*.pkl\n**/.mypy_cache/*\n.mypy_"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 5830,
    "preview": "# LETR: Line Segment Detection Using Transformers without Edges\n\n## Introduction \nThis repository contains the official "
  },
  {
    "path": "evaluation/eval-aph-post-wireframe.py",
    "chars": 4161,
    "preview": "#!/usr/bin/env python3\n\"\"\"Post-processing the output of neural network\nUsage:\n    post.py [options] <input-dir> <output-"
  },
  {
    "path": "evaluation/eval-aph-post-york.py",
    "chars": 4254,
    "preview": "#!/usr/bin/env python3\n\"\"\"Post-processing the output of neural network\nUsage:\n    post.py [options] <input-dir> <output-"
  },
  {
    "path": "evaluation/eval-aph-score-wireframe.py",
    "chars": 6828,
    "preview": "#!/usr/bin/env python3\n\"\"\"Evaluate APH for LCNN\nUsage:\n    eval-APH.py <src> <dst>\n    eval-APH.py (-h | --help )\n\nExamp"
  },
  {
    "path": "evaluation/eval-aph-score-york.py",
    "chars": 6692,
    "preview": "#!/usr/bin/env python3\n\"\"\"Evaluate APH for LCNN\nUsage:\n    eval-APH.py <src> <dst>\n    eval-APH.py (-h | --help )\n\nExamp"
  },
  {
    "path": "evaluation/eval-fscore-wireframe.py",
    "chars": 2626,
    "preview": "#!/usr/bin/env python3\n\"\"\"Evaluate sAP5, sAP10, sAP15 for LCNN\nUsage:\n    eval-sAP.py <path>...\n    eval-sAP.py (-h | --"
  },
  {
    "path": "evaluation/eval-fscore-york.py",
    "chars": 2582,
    "preview": "#!/usr/bin/env python3\n\"\"\"Evaluate sAP5, sAP10, sAP15 for LCNN\nUsage:\n    eval-sAP.py <path>...\n    eval-sAP.py (-h | --"
  },
  {
    "path": "evaluation/eval-sAP-wireframe.py",
    "chars": 2344,
    "preview": "#!/usr/bin/env python3\n\"\"\"Evaluate sAP5, sAP10, sAP15 for LCNN\nUsage:\n    eval-sAP.py <path>...\n    eval-sAP.py (-h | --"
  },
  {
    "path": "evaluation/eval-sAP-york.py",
    "chars": 2306,
    "preview": "#!/usr/bin/env python3\n\"\"\"Evaluate sAP5, sAP10, sAP15 for LCNN\nUsage:\n    eval-sAP.py <path>...\n    eval-sAP.py (-h | --"
  },
  {
    "path": "evaluation/lcnn/__init__.py",
    "chars": 79,
    "preview": "import lcnn.models\nimport lcnn.trainer\nimport lcnn.datasets\nimport lcnn.config\n"
  },
  {
    "path": "evaluation/lcnn/box.py",
    "chars": 40406,
    "preview": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n#\n# Copyright (c) 2017-2019 - Chris Griffith - MIT License\n\"\"\"\nImproved di"
  },
  {
    "path": "evaluation/lcnn/config.py",
    "chars": 134,
    "preview": "import numpy as np\n\nfrom lcnn.box import Box\n\n# C is a dict storing all the configuration\nC = Box()\n\n# shortcut for C.mo"
  },
  {
    "path": "evaluation/lcnn/datasets.py",
    "chars": 3656,
    "preview": "import glob\nimport json\nimport math\nimport os\nimport random\n\nimport numpy as np\nimport numpy.linalg as LA\nimport torch\nf"
  },
  {
    "path": "evaluation/lcnn/metric.py",
    "chars": 7101,
    "preview": "import numpy as np\nimport numpy.linalg as LA\nimport matplotlib.pyplot as plt\n\nfrom lcnn.utils import argsort2d\n\nDX = [0,"
  },
  {
    "path": "evaluation/lcnn/models/__init__.py",
    "chars": 91,
    "preview": "# flake8: noqa\nfrom .hourglass_pose import hg\n# from .dla import dla169, dla102x, dla102x2\n"
  },
  {
    "path": "evaluation/lcnn/models/hourglass_pose.py",
    "chars": 6607,
    "preview": "\"\"\"\nHourglass network inserted in the pre-activated Resnet\nUse lr=0.01 for current version\n(c) Yichao Zhou (LCNN)\n(c) YA"
  },
  {
    "path": "evaluation/lcnn/models/line_vectorizer.py",
    "chars": 10110,
    "preview": "import itertools\nimport random\nfrom collections import defaultdict\n\nimport numpy as np\nimport torch\nimport torch.nn as n"
  },
  {
    "path": "evaluation/lcnn/models/multitask_learner.py",
    "chars": 3691,
    "preview": "from collections import OrderedDict, defaultdict\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn."
  },
  {
    "path": "evaluation/lcnn/postprocess.py",
    "chars": 2185,
    "preview": "import numpy as np\n\n\ndef pline(x1, y1, x2, y2, x, y):\n    px = x2 - x1\n    py = y2 - y1\n    dd = px * px + py * py\n    u"
  },
  {
    "path": "evaluation/lcnn/trainer.py",
    "chars": 11838,
    "preview": "import atexit\nimport os\nimport os.path as osp\nimport shutil\nimport signal\nimport subprocess\nimport threading\nimport time"
  },
  {
    "path": "evaluation/lcnn/utils.py",
    "chars": 2533,
    "preview": "import math\nimport os.path as osp\nimport multiprocessing\nfrom timeit import default_timer as timer\n\nimport numpy as np\ni"
  },
  {
    "path": "evaluation/matlab/eval_release.m",
    "chars": 2981,
    "preview": "function eval_release(image_path, line_gt_path, output_file, result_path, output_size, mode)\n\n% lineThresh = [0.5, 0.6, "
  },
  {
    "path": "evaluation/process.py",
    "chars": 4276,
    "preview": "#!/usr/bin/env python3\n\"\"\"Process a dataset with the trained neural network\nUsage:\n    process.py [options] <yaml-config"
  },
  {
    "path": "helper/gdrive-download.sh",
    "chars": 284,
    "preview": "#!/bin/bash\nfileid=\"$1\"\nfilename=\"$2\"\ncurl -c ./cookie -s -L \"https://drive.google.com/uc?export=download&id=${fileid}\" "
  },
  {
    "path": "helper/wireframe.py",
    "chars": 3278,
    "preview": "#!/usr/bin/env python3\n\"\"\"Process data for LETR\nUsage:\n    wireframe.py <src> <dst>\n    wireframe.py (-h | --help )\n\nExa"
  },
  {
    "path": "helper/wireframe_eval.py",
    "chars": 7466,
    "preview": "#!/usr/bin/env python\n\"\"\"Process Huang's wireframe dataset for L-CNN network\nUsage:\n    dataset/wireframe.py <src> <dst>"
  },
  {
    "path": "helper/york.py",
    "chars": 3576,
    "preview": "#!/usr/bin/env python3\n\"\"\"Process YorkUrban dataset for L-CNN network\nUsage:\n    york.py <src> <dst>\n    york.py (-h | -"
  },
  {
    "path": "helper/york_eval.py",
    "chars": 6098,
    "preview": "#!/usr/bin/env python3\n\"\"\"Process YorkUrban dataset for L-CNN network\nUsage:\n    dataset/york.py <src> <dst>\n    dataset"
  },
  {
    "path": "script/evaluation/eval_aph_wireframe.sh",
    "chars": 552,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name and Epoch'\n"
  },
  {
    "path": "script/evaluation/eval_aph_york.sh",
    "chars": 568,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name and Epoch'\n"
  },
  {
    "path": "script/evaluation/eval_stage1.sh",
    "chars": 1605,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/evaluation/eval_stage2.sh",
    "chars": 1629,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/evaluation/eval_stage2_focal.sh",
    "chars": 1628,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/train/a0_train_stage1_res50.sh",
    "chars": 853,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/train/a1_train_stage1_res101.sh",
    "chars": 855,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/train/a2_train_stage2_res50.sh",
    "chars": 872,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/train/a3_train_stage2_res101.sh",
    "chars": 895,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/train/a4_train_stage2_focal_res50.sh",
    "chars": 979,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "script/train/a5_train_stage2_focal_res101.sh",
    "chars": 981,
    "preview": "# Fail the script if there is any failure\nset -e\n\nif [[ $# -eq 0 ]] ; then\n    echo 'Require Experiment Name'\n    exit 1"
  },
  {
    "path": "src/args.py",
    "chars": 6438,
    "preview": "import argparse\n\n\ndef get_args_parser():\n    parser = argparse.ArgumentParser('Set transformer detector', add_help=False"
  },
  {
    "path": "src/datasets/__init__.py",
    "chars": 574,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport torch.utils.data\nimport torchvision\n\nfrom "
  },
  {
    "path": "src/datasets/coco.py",
    "chars": 4152,
    "preview": "\"\"\"\nModified based on Detr: https://github.com/facebookresearch/detr/blob/master/datasets/coco.py\n\"\"\"\nfrom pathlib impor"
  },
  {
    "path": "src/datasets/transforms.py",
    "chars": 16823,
    "preview": "\"\"\"\nTransforms and data augmentation for both image + line.\nmodfied based on https://github.com/facebookresearch/detr/bl"
  },
  {
    "path": "src/demo_letr.ipynb",
    "chars": 2135432,
    "preview": "{\n \"metadata\": {\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_ext"
  },
  {
    "path": "src/engine.py",
    "chars": 6696,
    "preview": "\"\"\"\nTrain and eval functions used in main.py\n\nmodified based on https://github.com/facebookresearch/detr/blob/master/eng"
  },
  {
    "path": "src/main.py",
    "chars": 9559,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport argparse\nimport datetime\nimport json\nimpor"
  },
  {
    "path": "src/models/__init__.py",
    "chars": 143,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom .letr import build\n\n\ndef build_model(args):\n"
  },
  {
    "path": "src/models/backbone.py",
    "chars": 4456,
    "preview": "\"\"\"\nLETR Backbone modules.\nmodified based on https://github.com/facebookresearch/detr/blob/master/models/backbone.py\n\"\"\""
  },
  {
    "path": "src/models/letr.py",
    "chars": 14859,
    "preview": "\"\"\"\nThis file provides coarse stage LETR definition\nModified based on https://github.com/facebookresearch/detr/blob/mast"
  },
  {
    "path": "src/models/letr_stack.py",
    "chars": 15585,
    "preview": "\"\"\"\nThis file provides fine stage LETR definition\n\n\"\"\"\nimport io\nfrom collections import defaultdict\nfrom typing import "
  },
  {
    "path": "src/models/matcher.py",
    "chars": 3763,
    "preview": "\"\"\"\nModules to compute the matching cost and solve the corresponding LSAP.\n\"\"\"\nimport torch\nfrom scipy.optimize import l"
  },
  {
    "path": "src/models/multi_head_attention.py",
    "chars": 27014,
    "preview": "\"\"\"\nThis file provides definition of multi head attention\n\nborrowed from https://pytorch.org/docs/stable/_modules/torch/"
  },
  {
    "path": "src/models/position_encoding.py",
    "chars": 3361,
    "preview": "\"\"\"\nVarious positional encodings for the transformer.\nborrowed from: https://github.com/facebookresearch/detr/blob/maste"
  },
  {
    "path": "src/models/transformer.py",
    "chars": 12169,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nDETR Transformer class.\n\nCopy-paste from torc"
  },
  {
    "path": "src/util/__init__.py",
    "chars": 71,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n"
  },
  {
    "path": "src/util/misc.py",
    "chars": 15254,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\"\"\"\nMisc functions, including distributed helpers"
  }
]
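Each entry in the preview array above follows the same small JSON schema: a file `path`, a `chars` count, and a truncated `preview` snippet. A minimal sketch of filtering and sorting such entries, using a hypothetical inline sample rather than the full downloaded `.json` file:

```python
import json

# A tiny sample in the same shape as GitExtract's condensed preview:
# one object per file with "path", "chars", and a truncated "preview".
# (Paths and sizes here mirror entries shown above; previews are elided.)
sample = """[
  {"path": "src/models/letr.py", "chars": 14859, "preview": "..."},
  {"path": "src/util/misc.py",   "chars": 15254, "preview": "..."},
  {"path": "helper/york.py",     "chars": 3576,  "preview": "..."}
]"""

entries = json.loads(sample)

# Keep only files under src/, largest first, e.g. to prioritize which
# files to feed into a context-limited AI tool.
src_files = sorted(
    (e for e in entries if e["path"].startswith("src/")),
    key=lambda e: e["chars"],
    reverse=True,
)

for e in src_files:
    print(f'{e["path"]}: {e["chars"]} chars')
```

The same pattern applies unchanged to the full downloaded preview file: replace the inline `sample` string with `json.load(open("preview.json"))`.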

// ... and 3 more files omitted from this preview

About this extraction

This page contains the full source code of the mlpc-ucsd/LETR GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 61 files (2.3 MB), approximately 616.1k tokens, and a symbol index of 414 extracted functions, classes, methods, constants, and types. It can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.