Full Code of lartpang/PySODEvalToolkit for AI

Repository: lartpang/PySODEvalToolkit
Branch: master
Commit: a52dbccb90ec
Files: 37
Total size: 166.1 KB

Directory structure:
PySODEvalToolkit/

├── .editorconfig
├── .github/
│   └── workflows/
│       └── stale.yml
├── .gitignore
├── .pre-commit-config.yaml
├── LICENSE
├── eval.py
├── examples/
│   ├── alias_for_plotting.json
│   ├── config_dataset_json_example.json
│   ├── config_method_json_example.json
│   ├── converter_config.yaml
│   ├── rgbd_aliases.yaml
│   ├── single_row_style.yml
│   └── two_row_style.yml
├── metrics/
│   ├── __init__.py
│   ├── draw_curves.py
│   ├── image_metrics.py
│   └── video_metrics.py
├── plot.py
├── pyproject.toml
├── readme.md
├── readme_zh.md
├── requirements.txt
├── tools/
│   ├── append_results.py
│   ├── check_path.py
│   ├── converter.py
│   ├── info_py_to_json.py
│   ├── readme.md
│   └── rename.py
└── utils/
    ├── __init__.py
    ├── generate_info.py
    ├── misc.py
    ├── print_formatter.py
    └── recorders/
        ├── __init__.py
        ├── curve_drawer.py
        ├── excel_recorder.py
        ├── metric_recorder.py
        └── txt_recorder.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .editorconfig
================================================
# https://editorconfig.org/

root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8

# Docstrings and comments use max_line_length = 79
[*.py]
max_line_length = 99

# Use 2 spaces for the HTML files
[*.html]
indent_size = 2

# The JSON files contain newlines inconsistently
[*.json]
indent_size = 2
insert_final_newline = ignore

[**/admin/js/vendor/**]
indent_style = ignore
indent_size = ignore

# Minified JavaScript files shouldn't be changed
[**.min.js]
indent_style = ignore
insert_final_newline = ignore

# Makefiles always use tabs for indentation
[Makefile]
indent_style = tab

# Batch files use tabs for indentation
[*.bat]
indent_style = tab

[docs/**.txt]
max_line_length = 119


================================================
FILE: .github/workflows/stale.yml
================================================
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: 'Close stale issues and PRs'
on:
  schedule:
    - cron: '0 14 * * *'
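    # i.e. run once per day at 14:00 UTC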

jobs:
  stale:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-label: 'no-issue-activity'
          stale-issue-message: 'This issue is stale because it has been open 7 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
          close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
          days-before-stale: 7
          days-before-close: 5

          stale-pr-label: 'no-pr-activity'
          stale-pr-message: 'This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 10 days.'
          days-before-pr-stale: 14
          days-before-pr-close: 10


================================================
FILE: .gitignore
================================================
# Big files
**/*.png
**/*.pdf
**/*.jpg
**/*.bmp
**/*.zip
**/*.7z
**/*.rar
**/*.tar*

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
*.npy

# C extensions
*.so

# Distribution / packaging
.Python
build/
.idea/
.vscode/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

### Example user template

# IntelliJ project files
.idea
*.iml
out
gen

# private files
/output/
/untracked/
/configs/
# /*.py
/*.sh
/*.ps1
/*.bat
/results/rgb_sod.md
/results/htmls/*.html
!/.github/assets/*.jpg


================================================
FILE: .pre-commit-config.yaml
================================================
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.2.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-toml
      - id: check-added-large-files
      - id: fix-encoding-pragma
      - id: mixed-line-ending
  - repo: https://github.com/pycqa/isort
    rev: 5.6.4
    hooks:
      - id: isort
  - repo: https://github.com/psf/black
    rev: 20.8b1
    # Replace by any tag/version: https://github.com/psf/black/tags
    hooks:
      - id: black
        language_version: python3
        # Should be a command that runs python3.6+


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2020 MY_

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: eval.py
================================================
# -*- coding: utf-8 -*-
import argparse
import os
import textwrap
import warnings

from metrics import image_metrics, video_metrics
from utils.generate_info import get_datasets_info, get_methods_info
from utils.recorders import SUPPORTED_METRICS


def get_args():
    parser = argparse.ArgumentParser(
        description=textwrap.dedent(
            r"""
    A Powerful Evaluation Toolkit based on PySODMetrics.

    INCLUDED METRICS (more can be enabled in `utils/recorders/metric_recorder.py`):

    - F-measure-Threshold Curve
    - Precision-Recall Curve
    - MAE
    - weighted F-measure
    - S-measure
    - max/average/adaptive/binary F-measure
    - max/average/adaptive/binary E-measure
    - max/average/adaptive/binary Precision
    - max/average/adaptive/binary Recall
    - max/average/adaptive/binary Sensitivity
    - max/average/adaptive/binary Specificity
    - max/average/adaptive/binary Dice
    - max/average/adaptive/binary IoU

    NOTE:

    - Evaluation is automatically restricted to the intersection of the `pre` and `gt` name lists.
    - Currently supported pre naming rule: `prefix + gt_name_wo_ext + suffix_w_ext`
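      (e.g., given prefix `pre_` and suffix `_sal.png`, the prediction matching gt `0001.png` is `pre_0001_sal.png`)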

    EXAMPLES:

    python eval.py \
        --dataset-json configs/datasets/rgbd_sod.json \
        --method-json \
            configs/methods/json/rgbd_other_methods.json \
            configs/methods/json/rgbd_our_method.json \
        --metric-names sm wfm mae fmeasure em \
        --num-bits 4 \
        --num-workers 4 \
        --metric-npy output/rgbd_metrics.npy \
        --curves-npy output/rgbd_curves.npy \
        --record-txt output/rgbd_results.txt \
        --to-overwrite \
        --record-xlsx output/test-metric.xlsx \
        --include-datasets \
            dataset-name1-from-dataset-json \
            dataset-name2-from-dataset-json \
            dataset-name3-from-dataset-json \
        --include-methods \
            method-name1-from-method-json \
            method-name2-from-method-json \
            method-name3-from-method-json
    """
        ),
        formatter_class=argparse.RawTextHelpFormatter,
    )
    # fmt: off
    parser.add_argument("--dataset-json", required=True, type=str, help="Json file for datasets.")
    parser.add_argument("--method-json", required=True, nargs="+", type=str, help="Json file for methods.")
    parser.add_argument("--metric-npy", type=str, help="Npy file for saving metric results.")
    parser.add_argument("--curves-npy", type=str, help="Npy file for saving curve results.")
    parser.add_argument("--record-txt", type=str, help="Txt file for saving metric results.")
    parser.add_argument("--to-overwrite", action="store_true", help="To overwrite the txt file.")
    parser.add_argument("--record-xlsx", type=str, help="Xlsx file for saving metric results.")
    parser.add_argument("--include-methods", type=str, nargs="+", help="Names of only specific methods you want to evaluate.")
    parser.add_argument("--exclude-methods", type=str, nargs="+", help="Names of some specific methods you do not want to evaluate.")
    parser.add_argument("--include-datasets", type=str, nargs="+", help="Names of only specific datasets you want to evaluate.")
    parser.add_argument("--exclude-datasets", type=str, nargs="+", help="Names of some specific datasets you do not want to evaluate.")
    parser.add_argument("--num-workers", type=int, default=4, help="Number of workers for multi-threading or multi-processing. Default: 4")
    parser.add_argument("--num-bits", type=int, default=3, help="Number of decimal places for showing results. Default: 3")
    parser.add_argument("--metric-names", type=str, nargs="+", default=["sm", "wfm", "mae", "fmeasure", "em", "precision", "recall", "msiou"], choices=SUPPORTED_METRICS, help="Names of metrics")
    parser.add_argument("--data-type", type=str, default="image", choices=["image", "video"], help="Type of data.")

    known_args = parser.parse_known_args()[0]
    if known_args.data_type == "video":
        parser.add_argument("--valid-frame-start", type=int, default=0, help="Valid start index of the frame in each gt video. Defaults to 1, it will skip the first frame. If it is set to None, the code will not skip frames.")
        parser.add_argument("--valid-frame-end", type=int, default=0, help="Valid end index of the frame in each gt video. Defaults to -1, it will skip the last frame. If it is set to 0, the code will not skip frames.")
    # fmt: on
    args = parser.parse_args()

    if args.data_type == "video":
        args.valid_frame_start = max(args.valid_frame_start, 0)
        args.valid_frame_end = min(args.valid_frame_end, 0)
        if args.valid_frame_end == 0:
            args.valid_frame_end = None

    if args.metric_npy:
        os.makedirs(os.path.dirname(args.metric_npy), exist_ok=True)
    if args.curves_npy:
        os.makedirs(os.path.dirname(args.curves_npy), exist_ok=True)
    if args.record_txt:
        os.makedirs(os.path.dirname(args.record_txt), exist_ok=True)
    if args.record_xlsx:
        os.makedirs(os.path.dirname(args.record_xlsx), exist_ok=True)
    if args.to_overwrite and not args.record_txt:
        warnings.warn("--to-overwrite only works with a valid --record-txt")
    return args


def main():
    args = get_args()

    # A dictionary containing the information of all datasets.
    datasets_info = get_datasets_info(
        datastes_info_json=args.dataset_json,
        include_datasets=args.include_datasets,
        exclude_datasets=args.exclude_datasets,
    )
    # A dictionary containing the result information of all methods to be compared.
    methods_info = get_methods_info(
        methods_info_jsons=args.method_json,
        for_drawing=True,
        include_methods=args.include_methods,
        exclude_methods=args.exclude_methods,
    )

    if args.data_type == "image":
        image_metrics.cal_metrics(
            sheet_name="Results",
            to_append=not args.to_overwrite,
            txt_path=args.record_txt,
            xlsx_path=args.record_xlsx,
            methods_info=methods_info,
            datasets_info=datasets_info,
            curves_npy_path=args.curves_npy,
            metrics_npy_path=args.metric_npy,
            num_bits=args.num_bits,
            num_workers=args.num_workers,
            metric_names=args.metric_names,
        )
    else:
        video_metrics.cal_metrics(
            sheet_name="Results",
            to_append=not args.to_overwrite,
            txt_path=args.record_txt,
            xlsx_path=args.record_xlsx,
            methods_info=methods_info,
            datasets_info=datasets_info,
            curves_npy_path=args.curves_npy,
            metrics_npy_path=args.metric_npy,
            num_bits=args.num_bits,
            num_workers=args.num_workers,
            metric_names=args.metric_names,
            return_group=False,
            start_idx=args.valid_frame_start,
            end_idx=args.valid_frame_end,
        )


# Ensure that multiprocessing also works properly on Windows.
if __name__ == "__main__":
    main()
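
# A minimal invocation sketch using the example configs shipped in `examples/`
# (point their placeholder paths at real data first; `output/` is arbitrary):
#
#   python eval.py \
#       --dataset-json examples/config_dataset_json_example.json \
#       --method-json examples/config_method_json_example.json \
#       --metric-names sm wfm mae \
#       --record-txt output/results.txt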


================================================
FILE: examples/alias_for_plotting.json
================================================
{
  "dataset": {
    "Name_In_Json": "Name_In_SubFigure",
    "NJUD": "NJUD",
    "NLPR": "NLPR",
    "DUTRGBD": "DUTRGBD",
    "STEREO1000": "SETERE",
    "RGBD135": "RGBD135",
    "SSD": "SSD",
    "SIP": "SIP"
  },
  "method": {
    "Name_In_Json": "Name_In_Legend",
    "GateNet_2020": "GateNet",
    "MINet_R50_2020": "MINet"
  }
}


================================================
FILE: examples/config_dataset_json_example.json
================================================
{
  "LFSD": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/LFSD/Image",
      "prefix": "some_gt_prefix",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/LFSD/Mask",
      "prefix": "some_gt_prefix",
      "suffix": ".png"
    }
  },
  "NJUD": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/NJUD_FULL/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/NJUD_FULL/Mask",
      "suffix": ".png"
    }
  },
  "NLPR": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/NLPR_FULL/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/NLPR_FULL/Mask",
      "suffix": ".png"
    }
  },
  "RGBD135": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/RGBD135/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/RGBD135/Mask",
      "suffix": ".png"
    }
  },
  "SIP": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/SIP/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/SIP/Mask",
      "suffix": ".png"
    }
  },
  "SSD": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/SSD/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/SSD/Mask",
      "suffix": ".png"
    }
  },
  "STEREO797": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/STEREO797/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/STEREO797/Mask",
      "suffix": ".png"
    }
  },
  "STEREO1000": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/STEREO1000/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/STEREO1000/Mask",
      "suffix": ".png"
    }
  },
  "DUTRGBD": {
    "image": {
      "path": "Path_Of_RGBDSOD_Datasets/DUT-RGBD/Test/Image",
      "suffix": ".jpg"
    },
    "mask": {
      "path": "Path_Of_RGBDSOD_Datasets/DUT-RGBD/Test/Mask",
      "suffix": ".png"
    }
  }
}


================================================
FILE: examples/config_method_json_example.json
================================================
{
  "Method1": {
    "PASCAL-S": {
      "path": "Path_Of_Method1/PASCAL-S/DGRL",
      "prefix": "some_method_prefix",
      "suffix": ".png"
    },
    "ECSSD": {
      "path": "Path_Of_Method1/ECSSD/DGRL",
      "prefix": "some_method_prefix",
      "suffix": ".png"
    },
    "HKU-IS": {
      "path": "Path_Of_Method1/HKU-IS/DGRL",
      "prefix": "some_method_prefix",
      "suffix": ".png"
    },
    "DUT-OMRON": {
      "path": "Path_Of_Method1/DUT-OMRON/DGRL",
      "prefix": "some_method_prefix",
      "suffix": ".png"
    },
    "DUTS-TE": {
      "path": "Path_Of_Method1/DUTS-TE/DGRL",
      "suffix": ".png"
    }
  },
  "Method2": {
    "PASCAL-S": {
      "path": "Path_Of_Method2/pascal",
      "prefix": "pascal_",
      "suffix": ".png"
    },
    "ECSSD": {
      "path": "Path_Of_Method2/ecssd",
      "prefix": "ecssd_",
      "suffix": ".png"
    },
    "HKU-IS": {
      "path": "Path_Of_Method2/hku",
      "prefix": "hku_",
      "suffix": ".png"
    },
    "DUT-OMRON": {
      "path": "Path_Of_Method2/duto",
      "prefix": "duto_",
      "suffix": ".png"
    },
    "DUTS-TE": {
      "path": "Path_Of_Method2/dut_te",
      "prefix": "dut_te_",
      "suffix": ".png"
    }
  },
  "Method3": {
    "PASCAL-S": {
      "path": "Path_Of_Method3/pascal",
      "prefix": "pascal_",
      "suffix": "_fused_sod.png"
    },
    "ECSSD": {
      "path": "Path_Of_Method3/ecssd",
      "prefix": "ecssd_",
      "suffix": "_fused_sod.png"
    },
    "HKU-IS": {
      "path": "Path_Of_Method3/hku",
      "prefix": "hku_",
      "suffix": "_fused_sod.png"
    },
    "DUT-OMRON": {
      "path": "Path_Of_Method3/duto",
      "prefix": "duto_",
      "suffix": "_fused_sod.png"
    },
    "DUTS-TE": {
      "path": "Path_Of_Method3/dut_te",
      "prefix": "dut_te_",
      "suffix": "_fused_sod.png"
    }
  }
}


================================================
FILE: examples/converter_config.yaml
================================================
dataset_names: [
    'NJUD',
    'NLPR',
    'SIP',
    'STEREO1000',
    'SSD',
    'LFSD',
    'RGBD135',
    'DUTRGBD'
]

# Use single quotes so the contents are not escaped
method_names: {
    '2020-ECCV-DANetV19': 'DANet$_{20}$',
    '2020-ECCV-HDFNetR50': 'HDFNet$_{20}$',
    '2022-AAAI-SSLSOD-ImageNet': 'SSLSOD$_{22}$',
}

# Use single quotes so the contents are not escaped
metric_names: {
    'sm': '$S_{m}~\uparrow$',
    'wfm': '$F^{\omega}_{\beta}~\uparrow$',
    'mae': '$MAE~\downarrow$',
    'adpf': '$F^{adp}_{\beta}~\uparrow$',
    'avgf': '$F^{avg}_{\beta}~\uparrow$',
    'maxf': '$F^{max}_{\beta}~\uparrow$',
    'adpe': '$E^{adp}_{m}~\uparrow$',
    'avge': '$E^{avg}_{m}~\uparrow$',
    'maxe': '$E^{max}_{m}~\uparrow$',
}
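
# These aliases are presumably consumed by tools/converter.py to rename
# datasets/methods and to render the LaTeX metric headers in exported tables.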


================================================
FILE: examples/rgbd_aliases.yaml
================================================
dataset: {
    "NJUD": "NJUD",
    "NLPR": "NLPR",
    "SIP": "SIP",
    "STEREO1000": "STEREO1000",
    "RGBD135": "RGBD135",
    "SSD": "SSD",
    "LFSD": "LFSD",
    "DUTRGBD": "DUTRGBD",
}

method: {
    '2020-ECCV-DANetV19': 'DANet$_{20}$',
    '2020-ECCV-HDFNetR50': 'HDFNet$_{20}$',
    '2022-AAAI-SSLSOD-ImageNet': 'SSLSOD$_{22}$',
}


================================================
FILE: examples/single_row_style.yml
================================================
# Based on:
# - https://matplotlib.org/stable/tutorials/introductory/customizing.html#the-default-matplotlibrc-file
# - https://github.com/rougier/scientific-visualization-book/blob/master/code/defaults/mystyle.txt


## ***************************************************************************
## * LINES                                                                   *
## ***************************************************************************
## See https://matplotlib.org/api/artist_api.html#module-matplotlib.lines
## for more information on line properties.
lines.linewidth: 2
lines.markersize: 5


## ***************************************************************************
## * FONT                                                                    *
## ***************************************************************************
## The font properties used by `text.Text`.
## See https://matplotlib.org/api/font_manager_api.html for more information
## on font properties.  The 6 font properties used for font matching are
## given below with their default values.
##
## The font.family property can take either a concrete font name (not supported
## when rendering text with usetex), or one of the following five generic
## values:
##     - 'serif' (e.g., Times),
##     - 'sans-serif' (e.g., Helvetica),
##     - 'cursive' (e.g., Zapf-Chancery),
##     - 'fantasy' (e.g., Western), and
##     - 'monospace' (e.g., Courier).
## Each of these values has a corresponding default list of font names
## (font.serif, etc.); the first available font in the list is used.  Note that
## for font.serif, font.sans-serif, and font.monospace, the first element of
## the list (a DejaVu font) will always be used because DejaVu is shipped with
## Matplotlib and is thus guaranteed to be available; the other entries are
## left as examples of other possible values.
##
## The font.style property has three values: normal (or roman), italic
## or oblique.  The oblique style will be used for italic, if it is not
## present.
##
## The font.variant property has two values: normal or small-caps.  For
## TrueType fonts, which are scalable fonts, small-caps is equivalent
## to using a font size of 'smaller', or about 83%% of the current font
## size.
##
## The font.weight property has effectively 13 values: normal, bold,
## bolder, lighter, 100, 200, 300, ..., 900.  Normal is the same as
## 400, and bold is 700.  bolder and lighter are relative values with
## respect to the current weight.
##
## The font.stretch property has 11 values: ultra-condensed,
## extra-condensed, condensed, semi-condensed, normal, semi-expanded,
## expanded, extra-expanded, ultra-expanded, wider, and narrower.  This
## property is not currently implemented.
##
## The font.size property is the default font size for text, given in points.
## 10 pt is the standard value.
##
## Note that font.size controls default text sizes.  To configure
## special text sizes tick labels, axes, labels, title, etc., see the rc
## settings for axes and ticks.  Special text sizes can be defined
## relative to font.size, using the following values: xx-small, x-small,
## small, medium, large, x-large, xx-large, larger, or smaller
font.family:  sans-serif
font.style:   normal
font.variant: normal
font.weight:  normal
# font.stretch: normal
font.size:    12.0

#font.serif:      DejaVu Serif, Bitstream Vera Serif, Computer Modern Roman, New Century Schoolbook, Century Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times New Roman, Times, Palatino, Charter, serif
font.sans-serif: Trebuchet MS, DejaVu Sans, Bitstream Vera Sans, Computer Modern Sans Serif, Lucida Grande, Verdana, Geneva, Lucid, Arial, Helvetica, Avant Garde, sans-serif
#font.cursive:    Apple Chancery, Textile, Zapf Chancery, Sand, Script MT, Felipa, Comic Neue, Comic Sans MS, cursive
#font.fantasy:    Chicago, Charcoal, Impact, Western, Humor Sans, xkcd, fantasy
#font.monospace:  DejaVu Sans Mono, Bitstream Vera Sans Mono, Computer Modern Typewriter, Andale Mono, Nimbus Mono L, Courier New, Courier, Fixed, Terminal, monospace


## ***************************************************************************
## * AXES                                                                    *
## ***************************************************************************
## Following are default face and edge colors, default tick sizes,
## default font sizes for tick labels, and so on.  See
## https://matplotlib.org/api/axes_api.html#module-matplotlib.axes
axes.linewidth: 1
axes.grid: True
axes.ymargin: 0.1

axes.titlelocation: center  # alignment of the title: {left, right, center}
axes.titlesize:     x-large   # font size of the axes title
axes.titleweight:   bold  # font weight of title
axes.titlecolor:    black    # color of the axes title, auto falls back to text.color as default value

axes.spines.left:   True
axes.spines.bottom: True
axes.spines.right:  False
axes.spines.top:    False

axes.labelsize:     medium  # font size of the x and y labels
axes.labelpad:      2.0     # space between label and axis
axes.labelweight:   normal  # weight of the x and y labels
axes.labelcolor:    black
axes.axisbelow:     True    # draw axis gridlines and ticks:
                            #     - below patches (True)
                            #     - above patches but below lines ('line')
                            #     - above all (False)


## ***************************************************************************
## * TICKS                                                                   *
## ***************************************************************************
## See https://matplotlib.org/api/axis_api.html#matplotlib.axis.Tick
xtick.bottom: True
xtick.top: False
xtick.direction: out
xtick.major.size: 5
xtick.major.width: 1
xtick.minor.size: 3
xtick.minor.width: 0.5
xtick.minor.visible: False
xtick.alignment: center  # alignment of xticks

ytick.left: True
ytick.right: False
ytick.direction: out
ytick.major.size: 5
ytick.major.width: 1
ytick.minor.size: 3
ytick.minor.width: 0.5
ytick.minor.visible: False
ytick.alignment: center_baseline  # alignment of yticks


## ***************************************************************************
## * GRIDS                                                                   *
## ***************************************************************************
grid.color:     black
grid.linewidth: 0.1
grid.alpha:     0.4     # transparency, between 0.0 and 1.0


## ***************************************************************************
## * LEGEND                                                                  *
## ***************************************************************************
legend.fancybox:      True     # if True, use a rounded box for the legend background, else a rectangle
legend.shadow:        False    # if True, give background a shadow effect
legend.numpoints:     1        # the number of marker points in the legend line
legend.scatterpoints: 1        # number of scatter points
legend.markerscale:   1.0      # the relative size of legend markers vs. original
legend.fontsize:      large
legend.framealpha:    0.9

# Dimensions as fraction of font size:
legend.borderpad:     0.4  # border whitespace
legend.labelspacing:  0.5  # the vertical space between the legend entries
legend.handlelength:  2.0  # the length of the legend lines
legend.handleheight:  0.7  # the height of the legend handle
legend.handletextpad: 0.5  # the space between the legend line and legend text
legend.borderaxespad: 0.5  # the border between the axes and legend edge
legend.columnspacing: 0.5  # column separation


## ***************************************************************************
## * FIGURE                                                                  *
## ***************************************************************************
## See https://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure
figure.titlesize:   large     # size of the figure title (``Figure.suptitle()``)
figure.titleweight: normal    # weight of the figure title
figure.figsize:     16,4      # figure size in inches
figure.dpi:         600       # figure dots per inch
figure.facecolor:   white     # figure face color
figure.edgecolor:   white     # figure edge color

# The figure subplot parameters.  All dimensions are a fraction of the figure width and height.
figure.subplot.left:   0.00    # the left side of the subplots of the figure
figure.subplot.right:  1.00    # the right side of the subplots of the figure
figure.subplot.bottom: 0.00    # the bottom of the subplots of the figure
figure.subplot.top:    1.00    # the top of the subplots of the figure
figure.subplot.wspace: 0.10    # the amount of width reserved for space between subplots, expressed as a fraction of the average axis width
figure.subplot.hspace: 0.10    # the amount of height reserved for space between subplots, expressed as a fraction of the average axis height

## Figure layout
figure.autolayout: False  # When True, automatically adjust subplot parameters to make the plot fit the figure using `tight_layout`


## ***************************************************************************
## * IMAGES                                                                  *
## ***************************************************************************
image.interpolation:   antialiased  # see help(imshow) for options
image.cmap:            gray      # A colormap name, gray etc...
image.lut:             256          # the size of the colormap lookup table


## ***************************************************************************
## * SAVING FIGURES                                                          *
## ***************************************************************************
## The default savefig parameters can be different from the display parameters
## e.g., you may want a higher resolution, or to make the figure
## background white
savefig.dpi:       figure      # figure dots per inch or 'figure'
savefig.format:    pdf         # {png, ps, pdf, svg}

## PDF backend params
pdf.compression:    6  # integer from 0 to 9; 0 disables compression (good for debugging)
pdf.fonttype:       3  # Output Type 3 (Type3) or Type 42 (TrueType)


================================================
FILE: examples/two_row_style.yml
================================================
# Based on:
# - https://matplotlib.org/stable/tutorials/introductory/customizing.html#the-default-matplotlibrc-file
# - https://github.com/rougier/scientific-visualization-book/blob/master/code/defaults/mystyle.txt


## ***************************************************************************
## * LINES                                                                   *
## ***************************************************************************
## See https://matplotlib.org/api/artist_api.html#module-matplotlib.lines
## for more information on line properties.
lines.linewidth: 2
lines.markersize: 5


## ***************************************************************************
## * FONT                                                                    *
## ***************************************************************************
## The font properties used by `text.Text`.
## See https://matplotlib.org/api/font_manager_api.html for more information
## on font properties.  The 6 font properties used for font matching are
## given below with their default values.
##
## The font.family property can take either a concrete font name (not supported
## when rendering text with usetex), or one of the following five generic
## values:
##     - 'serif' (e.g., Times),
##     - 'sans-serif' (e.g., Helvetica),
##     - 'cursive' (e.g., Zapf-Chancery),
##     - 'fantasy' (e.g., Western), and
##     - 'monospace' (e.g., Courier).
## Each of these values has a corresponding default list of font names
## (font.serif, etc.); the first available font in the list is used.  Note that
## for font.serif, font.sans-serif, and font.monospace, the first element of
## the list (a DejaVu font) will always be used because DejaVu is shipped with
## Matplotlib and is thus guaranteed to be available; the other entries are
## left as examples of other possible values.
##
## The font.style property has three values: normal (or roman), italic
## or oblique.  The oblique style will be used for italic, if it is not
## present.
##
## The font.variant property has two values: normal or small-caps.  For
## TrueType fonts, which are scalable fonts, small-caps is equivalent
## to using a font size of 'smaller', or about 83%% of the current font
## size.
##
## The font.weight property has effectively 13 values: normal, bold,
## bolder, lighter, 100, 200, 300, ..., 900.  Normal is the same as
## 400, and bold is 700.  bolder and lighter are relative values with
## respect to the current weight.
##
## The font.stretch property has 11 values: ultra-condensed,
## extra-condensed, condensed, semi-condensed, normal, semi-expanded,
## expanded, extra-expanded, ultra-expanded, wider, and narrower.  This
## property is not currently implemented.
##
## The font.size property is the default font size for text, given in points.
## 10 pt is the standard value.
##
## Note that font.size controls default text sizes.  To configure
## special text sizes tick labels, axes, labels, title, etc., see the rc
## settings for axes and ticks.  Special text sizes can be defined
## relative to font.size, using the following values: xx-small, x-small,
## small, medium, large, x-large, xx-large, larger, or smaller
font.family:  sans-serif
font.style:   normal
font.variant: normal
font.weight:  normal
# font.stretch: normal
font.size:    12.0

#font.serif:      DejaVu Serif, Bitstream Vera Serif, Computer Modern Roman, New Century Schoolbook, Century Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times New Roman, Times, Palatino, Charter, serif
font.sans-serif: Trebuchet MS, DejaVu Sans, Bitstream Vera Sans, Computer Modern Sans Serif, Lucida Grande, Verdana, Geneva, Lucid, Arial, Helvetica, Avant Garde, sans-serif
#font.cursive:    Apple Chancery, Textile, Zapf Chancery, Sand, Script MT, Felipa, Comic Neue, Comic Sans MS, cursive
#font.fantasy:    Chicago, Charcoal, Impact, Western, Humor Sans, xkcd, fantasy
#font.monospace:  DejaVu Sans Mono, Bitstream Vera Sans Mono, Computer Modern Typewriter, Andale Mono, Nimbus Mono L, Courier New, Courier, Fixed, Terminal, monospace


## ***************************************************************************
## * AXES                                                                    *
## ***************************************************************************
## Following are default face and edge colors, default tick sizes,
## default font sizes for tick labels, and so on.  See
## https://matplotlib.org/api/axes_api.html#module-matplotlib.axes
axes.linewidth: 1
axes.grid: True
axes.ymargin: 0.1

axes.titlelocation: center  # alignment of the title: {left, right, center}
axes.titlesize:     x-large   # font size of the axes title
axes.titleweight:   bold  # font weight of title
axes.titlecolor:    black    # color of the axes title, auto falls back to text.color as default value

axes.spines.left:   True
axes.spines.bottom: True
axes.spines.right:  False
axes.spines.top:    False

axes.labelsize:     medium  # font size of the x and y labels
axes.labelpad:      2.0     # space between label and axis
axes.labelweight:   normal  # weight of the x and y labels
axes.labelcolor:    black
axes.axisbelow:     True    # draw axis gridlines and ticks:
                            #     - below patches (True)
                            #     - above patches but below lines ('line')
                            #     - above all (False)


## ***************************************************************************
## * TICKS                                                                   *
## ***************************************************************************
## See https://matplotlib.org/api/axis_api.html#matplotlib.axis.Tick
xtick.bottom: True
xtick.top: False
xtick.direction: out
xtick.major.size: 5
xtick.major.width: 1
xtick.minor.size: 3
xtick.minor.width: 0.5
xtick.minor.visible: False
xtick.alignment: center  # alignment of xticks

ytick.left: True
ytick.right: False
ytick.direction: out
ytick.major.size: 5
ytick.major.width: 1
ytick.minor.size: 3
ytick.minor.width: 0.5
ytick.minor.visible: False
ytick.alignment: center_baseline  # alignment of yticks


## ***************************************************************************
## * GRIDS                                                                   *
## ***************************************************************************
grid.color:     black
grid.linewidth: 0.1
grid.alpha:     0.4     # transparency, between 0.0 and 1.0


## ***************************************************************************
## * LEGEND                                                                  *
## ***************************************************************************
legend.fancybox:      True     # if True, use a rounded box for the legend background, else a rectangle
legend.shadow:        False    # if True, give background a shadow effect
legend.numpoints:     1        # the number of marker points in the legend line
legend.scatterpoints: 1        # number of scatter points
legend.markerscale:   1.0      # the relative size of legend markers vs. original
legend.fontsize:      large
legend.framealpha:    1

# Dimensions as fraction of font size:
legend.borderpad:     0.4  # border whitespace
legend.labelspacing:  0.5  # the vertical space between the legend entries
legend.handlelength:  2.0  # the length of the legend lines
legend.handleheight:  0.7  # the height of the legend handle
legend.handletextpad: 0.5  # the space between the legend line and legend text
legend.borderaxespad: 0.5  # the border between the axes and legend edge
legend.columnspacing: 0.5  # column separation


## ***************************************************************************
## * FIGURE                                                                  *
## ***************************************************************************
## See https://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure
figure.titlesize:   large     # size of the figure title (``Figure.suptitle()``)
figure.titleweight: normal    # weight of the figure title
figure.figsize:     16,8      # figure size in inches
figure.dpi:         600       # figure dots per inch
figure.facecolor:   white     # figure face color
figure.edgecolor:   white     # figure edge color

# The figure subplot parameters.  All dimensions are a fraction of the figure width and height.
figure.subplot.left:   0.00    # the left side of the subplots of the figure
figure.subplot.right:  1.00    # the right side of the subplots of the figure
figure.subplot.bottom: 0.00    # the bottom of the subplots of the figure
figure.subplot.top:    1.00    # the top of the subplots of the figure
figure.subplot.wspace: 0.10    # the amount of width reserved for space between subplots, expressed as a fraction of the average axis width
figure.subplot.hspace: 0.10    # the amount of height reserved for space between subplots, expressed as a fraction of the average axis height

## Figure layout
figure.autolayout: False  # When True, automatically adjust subplot parameters to make the plot fit the figure using `tight_layout`


## ***************************************************************************
## * IMAGES                                                                  *
## ***************************************************************************
image.interpolation:   antialiased  # see help(imshow) for options
image.cmap:            gray      # A colormap name, gray etc...
image.lut:             256          # the size of the colormap lookup table


## ***************************************************************************
## * SAVING FIGURES                                                          *
## ***************************************************************************
## The default savefig parameters can be different from the display parameters
## e.g., you may want a higher resolution, or to make the figure
## background white
savefig.dpi:       figure      # figure dots per inch or 'figure'
savefig.format:    pdf         # {png, ps, pdf, svg}

## PDF backend params
pdf.compression:    6  # integer from 0 to 9; 0 disables compression (good for debugging)
pdf.fonttype:       3  # Output Type 3 (Type3) or Type 42 (TrueType)


================================================
FILE: metrics/__init__.py
================================================


================================================
FILE: metrics/draw_curves.py
================================================
# -*- coding: utf-8 -*-

from collections import OrderedDict

import numpy as np
from matplotlib import colors

from utils.recorders import CurveDrawer

# Align the mode with those in GRAYSCALE_METRICS
_YX_AXIS_NAMES = {
    "pr": ("precision", "recall"),
    "fm": ("fmeasure", None),
    "fmeasure": ("fmeasure", None),
    "em": ("em", None),
    "iou": ("iou", None),
    "dice": ("dice", None),
}
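# Each mode maps to the (y, x) data keys stored in the curve npy files; a None
# axis falls back to np.linspace(0, 1, 256), i.e. the 256 thresholds used by
# the threshold-based curves.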


def draw_curves(
    mode: str,
    axes_setting: dict = None,
    curves_npy_path: list = None,
    row_num: int = 1,
    our_methods: list = None,
    method_aliases: OrderedDict = None,
    dataset_aliases: OrderedDict = None,
    style_cfg: dict = None,
    ncol_of_legend: int = 1,
    separated_legend: bool = False,
    sharey: bool = False,
    line_width=3,
    save_name=None,
):
    """A better curve painter!

    Args:
        mode (str): `pr` for PR curves, `fm` for F-measure curves, and `em` for E-measure curves.
        axes_setting (dict, optional): Setting for axes. Defaults to None.
        curves_npy_path (list, optional): Paths of curve npy files. Defaults to None.
        row_num (int, optional): Number of rows. Defaults to 1.
        our_methods (list, optional): Names of our methods. Defaults to None.
        method_aliases (OrderedDict, optional): Aliases of methods. Defaults to None.
        dataset_aliases (OrderedDict, optional): Aliases of datasets. Defaults to None.
        style_cfg (dict, optional): Config file for the style of matplotlib. Defaults to None.
        ncol_of_legend (int, optional): Number of columns for the legend. Defaults to 1.
        separated_legend (bool, optional): Use the separated legend. Defaults to False.
        sharey (bool, optional): Use a shared y-axis. Defaults to False.
        line_width (int, optional): Width of lines. Defaults to 3.
        save_name (str, optional): Name or path (without the extension format). Defaults to None.
    """
    save_name = save_name or mode
    y_axis_name, x_axis_name = _YX_AXIS_NAMES[mode]

    assert curves_npy_path
    if not isinstance(curves_npy_path, (list, tuple)):
        curves_npy_path = [curves_npy_path]

    curves = {}
    unique_method_names_from_npy = []
    for p in curves_npy_path:
        single_curves = np.load(p, allow_pickle=True).item()
        for dataset_name, method_infos in single_curves.items():
            curves.setdefault(dataset_name, {})
            for method_name, method_info in method_infos.items():
                curves[dataset_name][method_name] = method_info
                if method_name not in unique_method_names_from_npy:
                    unique_method_names_from_npy.append(method_name)
    dataset_names_from_npy = list(curves.keys())

    if dataset_aliases is None:
        dataset_aliases = OrderedDict({k: k for k in dataset_names_from_npy})
    else:
        for x in dataset_aliases.keys():
            if x not in dataset_names_from_npy:
                raise ValueError(f"{x} must be contained in\n{dataset_names_from_npy}")

    if method_aliases is not None:
        target_unique_method_names = []
        for x in method_aliases:
            if x in unique_method_names_from_npy:
                target_unique_method_names.append(x)
            # Only consider the name in npy is also in alias config.
            # if x not in unique_method_names_from_npy:
            #     raise ValueError(
            #         f"{x} must be contained in\n{sorted(unique_method_names_from_npy)}"
            #     )
    else:
        method_aliases = {}
        target_unique_method_names = unique_method_names_from_npy

    if our_methods is not None:
        our_methods.reverse()
        for x in our_methods:
            if x not in target_unique_method_names:
                raise ValueError(f"{x} must be contained in\n{target_unique_method_names}")
            # Put our methods into the head of the name list
            target_unique_method_names.pop(target_unique_method_names.index(x))
            target_unique_method_names.insert(0, x)
        # assert len(our_methods) <= len(line_styles)
    else:
        our_methods = []
    num_our_methods = len(our_methods)

    # Give each method a unique color and style. Red is reserved for our
    # methods; white, light colors, and grays are skipped because they are
    # hard to distinguish on the plots.
    color_table = sorted(
        [
            color
            for name, color in colors.cnames.items()
            if name not in ["red", "white"] and not name.startswith("light") and "gray" not in name
        ]
    )
    style_table = ["-", "--", "-.", ":", "."]

    unique_method_settings = OrderedDict()
    for i, method_name in enumerate(target_unique_method_names):
        if i < num_our_methods:
            line_color = "red"
            line_style = style_table[i % len(style_table)]
        else:
            other_idx = i - num_our_methods
            line_color = color_table[other_idx]
            line_style = style_table[other_idx % 2]

        unique_method_settings[method_name] = {
            "line_color": line_color,
            "line_label": method_aliases.get(method_name, method_name),
            "line_style": line_style,
            "line_width": line_width,
        }
    # ensure that our methods are drawn last to avoid being overwritten by other methods
    target_unique_method_names.reverse()

    curve_drawer = CurveDrawer(
        row_num=row_num,
        num_subplots=len(dataset_aliases),
        style_cfg=style_cfg,
        ncol_of_legend=ncol_of_legend,
        separated_legend=separated_legend,
        sharey=sharey,
    )

    for idx, (dataset_name, dataset_alias) in enumerate(dataset_aliases.items()):
        dataset_results = curves[dataset_name]

        for method_name in target_unique_method_names:
            method_setting = unique_method_settings[method_name]

            if method_name not in dataset_results:
                print(f"{method_name} will be skipped for {dataset_name}!")
                continue

            method_results = dataset_results[method_name]

            if y_axis_name is None:
                y_data = np.linspace(0, 1, 256)
            else:
                y_data = method_results[y_axis_name]
                assert isinstance(y_data, (list, tuple)), (method_name, method_results.keys())

            if x_axis_name is None:
                x_data = np.linspace(0, 1, 256)
            else:
                x_data = method_results[x_axis_name]
                assert isinstance(x_data, (list, tuple)), (method_name, method_results.keys())

            curve_drawer.plot_at_axis(idx, method_setting, x_data=x_data, y_data=y_data)
        curve_drawer.set_axis_property(idx, dataset_alias, **axes_setting[mode])
    curve_drawer.save(path=save_name)
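
# A minimal call sketch (hypothetical npy path and aliases, modeled on
# examples/alias_for_plotting.json; the `axes_setting[mode]` dict is forwarded
# to CurveDrawer.set_axis_property, so its keys must match that signature):
#
#   draw_curves(
#       mode="pr",
#       axes_setting={"pr": {}},
#       curves_npy_path=["output/rgbd_curves.npy"],
#       method_aliases=OrderedDict({"GateNet_2020": "GateNet"}),
#       dataset_aliases=OrderedDict({"NJUD": "NJUD", "NLPR": "NLPR"}),
#       save_name="output/pr_curves",
#   )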


================================================
FILE: metrics/image_metrics.py
================================================
# -*- coding: utf-8 -*-

import os
from collections import defaultdict
from functools import partial
from multiprocessing import pool
from threading import RLock as TRLock

import numpy as np
from tqdm import tqdm

from utils.misc import get_gt_pre_with_name, get_name_list, make_dir
from utils.print_formatter import formatter_for_tabulate
from utils.recorders import (
    BINARY_METRIC_MAPPING,
    GRAYSCALE_METRICS,
    BinaryMetricRecorder,
    GrayscaleMetricRecorder,
    MetricExcelRecorder,
    TxtRecorder,
)


class Recorder:
    def __init__(
        self,
        method_names,
        dataset_names,
        metric_names,
        *,
        txt_path,
        to_append,
        xlsx_path,
        sheet_name,
    ):
        self.curves = defaultdict(dict)  # Two curve metrics
        self.metrics = defaultdict(dict)  # Six numerical metrics
        self.method_names = method_names
        self.dataset_names = dataset_names

        self.txt_recorder = None
        if txt_path:
            self.txt_recorder = TxtRecorder(
                txt_path=txt_path,
                to_append=to_append,
                max_method_name_width=max([len(x) for x in method_names]),  # show the full method name
            )

        self.excel_recorder = None
        if xlsx_path:
            excel_metric_names = []
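            # Grayscale metrics expand into max/avg variants (with_dynamic) and
            # an adp variant (with_adaptive); "em" always gets all three.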
            for x in metric_names:
                if x in GRAYSCALE_METRICS:
                    if x == "em":
                        excel_metric_names.append(f"max{x}")
                        excel_metric_names.append(f"avg{x}")
                        excel_metric_names.append(f"adp{x}")
                    else:
                        config = BINARY_METRIC_MAPPING[x]
                        if config["kwargs"]["with_dynamic"]:
                            excel_metric_names.append(f"max{x}")
                            excel_metric_names.append(f"avg{x}")
                        if config["kwargs"]["with_adaptive"]:
                            excel_metric_names.append(f"adp{x}")
                else:
                    excel_metric_names.append(x)

            self.excel_recorder = MetricExcelRecorder(
                xlsx_path=xlsx_path,
                sheet_name=sheet_name,
                row_header=["methods"],
                dataset_names=dataset_names,
                metric_names=excel_metric_names,
            )

    def record(self, method_results, dataset_name, method_name):
        """Record results"""
        method_curves = method_results.get("sequential")
        method_metrics = method_results["numerical"]
        self.curves[dataset_name][method_name] = method_curves
        self.metrics[dataset_name][method_name] = method_metrics

    def export(self):
        """After evaluating all methods, export results to ensure the order of names."""
        for dataset_name in self.dataset_names:
            if dataset_name not in self.metrics:
                continue

            for method_name in self.method_names:
                dataset_results = self.metrics[dataset_name]
                method_results = dataset_results.get(method_name)
                if method_results is None:
                    continue

                if self.txt_recorder:
                    self.txt_recorder.add_row(row_name="Dataset", row_data=dataset_name)
                    self.txt_recorder(method_results=method_results, method_name=method_name)
                if self.excel_recorder:
                    self.excel_recorder(
                        row_data=method_results, dataset_name=dataset_name, method_name=method_name
                    )


def cal_metrics(
    sheet_name: str = "results",
    txt_path: str = "",
    to_append: bool = True,
    xlsx_path: str = "",
    methods_info: dict = None,
    datasets_info: dict = None,
    curves_npy_path: str = "./curves.npy",
    metrics_npy_path: str = "./metrics.npy",
    num_bits: int = 3,
    num_workers: int = 2,
    metric_names: tuple = ("sm", "wfm", "mae", "fmeasure", "em"),
):
    """Save the results of all models on different datasets in a `npy` file in the form of a
    dictionary.

    Args:
        sheet_name (str, optional): The type of the sheet in xlsx file. Defaults to "results".
        txt_path (str, optional): The path of the txt for saving results. Defaults to "".
        to_append (bool, optional): Whether to append results to the original record. Defaults to True.
        xlsx_path (str, optional): The path of the xlsx file for saving results. Defaults to "".
        methods_info (dict, optional): The method information. Defaults to None.
        datasets_info (dict, optional): The dataset information. Defaults to None.
        curves_npy_path (str, optional): The npy file path for saving curve data. Defaults to "./curves.npy".
        metrics_npy_path (str, optional): The npy file path for saving metric values. Defaults to "./metrics.npy".
        num_bits (int, optional): The number of bits used to format results. Defaults to 3.
        num_workers (int, optional): The number of workers of multiprocessing or multithreading. Defaults to 2.
        metric_names (tuple, optional): Names of metrics. Defaults to ("sm", "wfm", "mae", "fmeasure", "em").

    Returns:
        {
          dataset1:{
            method1:[fm, em, p, r],
            method2:[fm, em, p, r],
            .....
          },
          dataset2:{
            method1:[fm, em, p, r],
            method2:[fm, em, p, r],
            .....
          },
          ....
        }

    """
    if all([x in BinaryMetricRecorder.suppoted_metrics for x in metric_names]):
        metric_class = BinaryMetricRecorder
    elif all([x in GrayscaleMetricRecorder.suppoted_metrics for x in metric_names]):
        metric_class = GrayscaleMetricRecorder
    else:
        raise ValueError(metric_names)

    method_names = tuple(methods_info.keys())
    dataset_names = tuple(datasets_info.keys())
    recorder = Recorder(
        method_names=method_names,
        dataset_names=dataset_names,
        metric_names=metric_names,
        txt_path=txt_path,
        to_append=to_append,
        xlsx_path=xlsx_path,
        sheet_name=sheet_name,
    )

    tqdm.set_lock(TRLock())
    procs = pool.ThreadPool(
        processes=num_workers, initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),)
    )
    print(f"Create a {procs}).")

    for dataset_name, dataset_path in datasets_info.items():
        # Get the ground-truth image information.
        gt_info = dataset_path["mask"]
        gt_root = gt_info["path"]
        gt_prefix = gt_info.get("prefix", "")
        gt_suffix = gt_info["suffix"]
        # build the list of ground-truth names
        gt_index_file = dataset_path.get("index_file")
        if gt_index_file:
            gt_name_list = get_name_list(
                data_path=gt_index_file, name_prefix=gt_prefix, name_suffix=gt_suffix
            )
        else:
            gt_name_list = get_name_list(
                data_path=gt_root, name_prefix=gt_prefix, name_suffix=gt_suffix
            )
        gt_info_pair = (gt_root, gt_prefix, gt_suffix)
        assert len(gt_name_list) > 0, "there is no ground truth."

        # ==>> test the intersection between pre and gt for each method <<==
        for method_name, method_info in methods_info.items():
            method_root = method_info["path_dict"]
            method_dataset_info = method_root.get(dataset_name, None)
            if method_dataset_info is None:
                tqdm.write(f"{method_name} does not have results on {dataset_name}")
                continue

            # file name list and extension of the images under the prediction directory
            pre_prefix = method_dataset_info.get("prefix", "")
            pre_suffix = method_dataset_info["suffix"]
            pre_root = method_dataset_info["path"]
            pre_name_list = get_name_list(
                data_path=pre_root, name_prefix=pre_prefix, name_suffix=pre_suffix
            )
            pre_info_pair = (pre_root, pre_prefix, pre_suffix)

            # get the intersection
            eval_name_list = sorted(set(gt_name_list).intersection(pre_name_list))
            if len(eval_name_list) == 0:
                tqdm.write(f"{method_name} does not have results on {dataset_name}")
                continue

            desc = f"[{dataset_name}({len(gt_name_list)}):{method_name}({len(pre_name_list)})]"
            kwargs = dict(
                names=eval_name_list,
                num_bits=num_bits,
                pre_info_pair=pre_info_pair,
                gt_info_pair=gt_info_pair,
                metric_names=metric_names,
                metric_class=metric_class,
                desc=desc,
            )
            callback = partial(recorder.record, dataset_name=dataset_name, method_name=method_name)
            procs.apply_async(func=evaluate, kwds=kwargs, callback=callback)
            # print(" -------------------- [DEBUG] -------------------- ")
            # callback(evaluate(**kwargs), dataset_name=dataset_name, method_name=method_name)
    procs.close()
    procs.join()

    recorder.export()
    if curves_npy_path:
        make_dir(os.path.dirname(curves_npy_path))
        np.save(curves_npy_path, recorder.curves)
        tqdm.write(f"All curves has been saved in {curves_npy_path}")
    if metrics_npy_path:
        make_dir(os.path.dirname(metrics_npy_path))
        np.save(metrics_npy_path, recorder.metrics)
        tqdm.write(f"All metrics has been saved in {metrics_npy_path}")
    formatted_string = formatter_for_tabulate(recorder.metrics, method_names, dataset_names)
    tqdm.write(f"All methods have been evaluated:\n{formatted_string}")


def evaluate(names, num_bits, pre_info_pair, gt_info_pair, metric_class, metric_names, desc=""):
    metric_recorder = metric_class(metric_names=metric_names)
    # https://github.com/tqdm/tqdm#parameters
    # https://github.com/tqdm/tqdm/blob/master/examples/parallel_bars.py
    for name in tqdm(names, total=len(names), desc=desc, ncols=79, lock_args=(False,)):
        gt, pre = get_gt_pre_with_name(
            img_name=name,
            pre_root=pre_info_pair[0],
            pre_prefix=pre_info_pair[1],
            pre_suffix=pre_info_pair[2],
            gt_root=gt_info_pair[0],
            gt_prefix=gt_info_pair[1],
            gt_suffix=gt_info_pair[2],
            to_normalize=False,
        )
        metric_recorder.step(pre=pre, gt=gt, gt_path=os.path.join(gt_info_pair[0], name))

    method_results = metric_recorder.show(num_bits=num_bits, return_ndarray=False)
    return method_results


================================================
FILE: metrics/video_metrics.py
================================================
# -*- coding: utf-8 -*-

import os
from collections import defaultdict
from functools import partial
from multiprocessing import pool
from threading import RLock as TRLock

import numpy as np
from tqdm import tqdm

from utils.misc import (
    get_gt_pre_with_name_and_group,
    get_name_with_group_list,
    make_dir,
)
from utils.print_formatter import formatter_for_tabulate
from utils.recorders import (
    BINARY_METRIC_MAPPING,
    GRAYSCALE_METRICS,
    GroupedMetricRecorder,
    MetricExcelRecorder,
    TxtRecorder,
)


class Recorder:
    def __init__(
        self,
        method_names,
        dataset_names,
        metric_names,
        *,
        txt_path,
        to_append,
        xlsx_path,
        sheet_name,
    ):
        self.curves = defaultdict(dict)  # curve ("sequential") metrics
        self.metrics = defaultdict(dict)  # numerical metrics
        self.method_names = method_names
        self.dataset_names = dataset_names

        self.txt_recorder = None
        if txt_path:
            self.txt_recorder = TxtRecorder(
                txt_path=txt_path,
                to_append=to_append,
                max_method_name_width=max([len(x) for x in method_names]),  # show full method names
            )

        self.excel_recorder = None
        if xlsx_path:
            excel_metric_names = []
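            # e.g., "em" expands into the "maxem", "avgem" and "adpem" columns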
            for x in metric_names:
                if x in GRAYSCALE_METRICS:
                    if x == "em":
                        excel_metric_names.append(f"max{x}")
                        excel_metric_names.append(f"avg{x}")
                        excel_metric_names.append(f"adp{x}")
                    else:
                        config = BINARY_METRIC_MAPPING[x]
                        if config["kwargs"]["with_dynamic"]:
                            excel_metric_names.append(f"max{x}")
                            excel_metric_names.append(f"avg{x}")
                        if config["kwargs"]["with_adaptive"]:
                            excel_metric_names.append(f"adp{x}")
                else:
                    excel_metric_names.append(x)

            self.excel_recorder = MetricExcelRecorder(
                xlsx_path=xlsx_path,
                sheet_name=sheet_name,
                row_header=["methods"],
                dataset_names=dataset_names,
                metric_names=excel_metric_names,
            )

    def record(self, method_results, dataset_name, method_name):
        """Record results"""
        method_curves = method_results.get("sequential")
        method_metrics = method_results["numerical"]
        self.curves[dataset_name][method_name] = method_curves
        self.metrics[dataset_name][method_name] = method_metrics

    def export(self):
        """After evaluating all methods, export results to ensure the order of names."""
        for dataset_name in self.dataset_names:
            if dataset_name not in self.metrics:
                continue

            for method_name in self.method_names:
                dataset_results = self.metrics[dataset_name]
                method_results = dataset_results.get(method_name)
                if method_results is None:
                    continue

                if self.txt_recorder:
                    self.txt_recorder.add_row(row_name="Dataset", row_data=dataset_name)
                    self.txt_recorder(method_results=method_results, method_name=method_name)
                if self.excel_recorder:
                    self.excel_recorder(
                        row_data=method_results, dataset_name=dataset_name, method_name=method_name
                    )


def cal_metrics(
    sheet_name: str = "results",
    txt_path: str = "",
    to_append: bool = True,
    xlsx_path: str = "",
    methods_info: dict = None,
    datasets_info: dict = None,
    curves_npy_path: str = "./curves.npy",
    metrics_npy_path: str = "./metrics.npy",
    num_bits: int = 3,
    num_workers: int = 2,
    metric_names: tuple = ("sm", "wfm", "mae", "avgdice", "avgiou", "adpe", "avge", "maxe"),
    return_group: bool = False,
    start_idx: int = 1,
    end_idx: int = -1,
):
    """Save the results of all models on different datasets in a `npy` file in the form of a
    dictionary.

    Args:
        sheet_name (str, optional): The name of the sheet in the xlsx file. Defaults to "results".
        txt_path (str, optional): The path of the txt for saving results. Defaults to "".
        to_append (bool, optional): Whether to append results to the original record. Defaults to True.
        xlsx_path (str, optional): The path of the xlsx file for saving results. Defaults to "".
        methods_info (dict, optional): The method information. Defaults to None.
        datasets_info (dict, optional): The dataset information. Defaults to None.
        curves_npy_path (str, optional): The npy file path for saving curve data. Defaults to "./curves.npy".
        metrics_npy_path (str, optional): The npy file path for saving metric values. Defaults to "./metrics.npy".
        num_bits (int, optional): The number of decimal places used to format results. Defaults to 3.
        num_workers (int, optional): The number of worker threads used for evaluation. Defaults to 2.
        metric_names (tuple, optional): Names of metrics. Defaults to ("sm", "wfm", "mae", "avgdice", "avgiou", "adpe", "avge", "maxe").
        return_group (bool, optional): Whether to return the grouped results. Defaults to False.
        start_idx (int, optional): The index of the first frame in each gt sequence. Defaults to 1, which skips the first frame. If set to None, no leading frames are skipped.
        end_idx (int, optional): The index of the last frame in each gt sequence. Defaults to -1, which skips the last frame. If set to None, no trailing frames are skipped.

    Returns:
        {
          dataset1:{
            method1:[fm, em, p, r],
            method2:[fm, em, p, r],
            .....
          },
          dataset2:{
            method1:[fm, em, p, r],
            method2:[fm, em, p, r],
            .....
          },
          ....
        }
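
    Example:
        A minimal sketch with placeholder paths; prediction and ground-truth frames
        are matched within each sequence (group), and per-dataset information again
        sits under the "path_dict" key:

            cal_metrics(
                datasets_info={"video1": {"mask": {"path": "gts/video1", "suffix": ".png"}}},
                methods_info={"method1": {"path_dict": {"video1": {"path": "preds/video1", "suffix": ".png"}}}},
                metric_names=("sm", "mae", "avge"),
                start_idx=1,  # skip the first frame of each gt sequence
                end_idx=-1,  # skip the last frame of each gt sequence
            )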

    """
    metric_class = GroupedMetricRecorder

    method_names = tuple(methods_info.keys())
    dataset_names = tuple(datasets_info.keys())
    recorder = Recorder(
        method_names=method_names,
        dataset_names=dataset_names,
        metric_names=metric_names,
        txt_path=txt_path,
        to_append=to_append,
        xlsx_path=xlsx_path,
        sheet_name=sheet_name,
    )

    tqdm.set_lock(TRLock())
    procs = pool.ThreadPool(processes=num_workers, initializer=tqdm.set_lock,
                            initargs=(tqdm.get_lock(),))
    print(f"Create a {procs}).")
    name_sep = "<sep>"

    for dataset_name, dataset_path in datasets_info.items():
        # collect ground-truth image information
        gt_info = dataset_path["mask"]
        gt_root = gt_info["path"]
        gt_prefix = gt_info.get("prefix", "")
        gt_suffix = gt_info["suffix"]
        # build the list of ground-truth names
        gt_index_file = dataset_path.get("index_file")
        if gt_index_file:
            gt_name_list = get_name_with_group_list(
                data_path=gt_index_file, name_prefix=gt_prefix, name_suffix=gt_suffix, sep=name_sep
            )
        else:
            gt_name_list = get_name_with_group_list(
                data_path=gt_root,
                name_prefix=gt_prefix,
                name_suffix=gt_suffix,
                start_idx=start_idx,
                end_idx=end_idx,
                sep=name_sep,
            )
        gt_info_pair = (gt_root, gt_prefix, gt_suffix)
        assert len(gt_name_list) > 0, f"there is not ground truth in {dataset_path}."

        # ==>> test the intersection between pre and gt for each method <<==
        for method_name, method_info in methods_info.items():
            method_root = method_info["path_dict"]
            method_dataset_info = method_root.get(dataset_name, None)
            if method_dataset_info is None:
                tqdm.write(f"{method_name} does not have results on {dataset_name}")
                continue

            # file name list and extension of the images under the prediction directory
            pre_prefix = method_dataset_info.get("prefix", "")
            pre_suffix = method_dataset_info["suffix"]
            pre_root = method_dataset_info["path"]
            pre_name_list = get_name_with_group_list(
                data_path=pre_root, name_prefix=pre_prefix, name_suffix=pre_suffix, sep=name_sep
            )
            pre_info_pair = (pre_root, pre_prefix, pre_suffix)

            # get the intersection
            eval_name_list = sorted(set(gt_name_list).intersection(pre_name_list))
            if len(eval_name_list) == 0:
                tqdm.write(f"{method_name} does not have results on {dataset_name}")
                continue

            desc = f"[{dataset_name}({len(gt_name_list)}):{method_name}({len(pre_name_list)}->{len(eval_name_list)})]"
            kwargs = dict(
                names=eval_name_list,
                num_bits=num_bits,
                pre_info_pair=pre_info_pair,
                gt_info_pair=gt_info_pair,
                metric_names=metric_names,
                metric_class=metric_class,
                return_group=return_group,
                sep=name_sep,
                desc=desc,
            )
            callback = partial(recorder.record, dataset_name=dataset_name, method_name=method_name)
            procs.apply_async(func=evaluate, kwds=kwargs, callback=callback)
            # print(" -------------------- [DEBUG] -------------------- ")
            # callback(evaluate(**kwargs), dataset_name=dataset_name, method_name=method_name)
    procs.close()
    procs.join()

    recorder.export()
    if curves_npy_path:
        make_dir(os.path.dirname(curves_npy_path))
        np.save(curves_npy_path, recorder.curves)
        tqdm.write(f"All curves has been saved in {curves_npy_path}")
    if metrics_npy_path:
        make_dir(os.path.dirname(metrics_npy_path))
        np.save(metrics_npy_path, recorder.metrics)
        tqdm.write(f"All metrics has been saved in {metrics_npy_path}")
    formatted_string = formatter_for_tabulate(recorder.metrics, method_names, dataset_names)
    tqdm.write(f"All methods have been evaluated:\n{formatted_string}")


def evaluate(
    names,
    num_bits,
    pre_info_pair,
    gt_info_pair,
    metric_class,
    metric_names,
    return_group=False,
    sep="<sep>",
    desc="",
):
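    # each name has the form "{group}{sep}{frame_name}"; the group part identifies the sequence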
    group_names = sorted(set([n.split(sep)[0] for n in names]))
    metric_recorder = metric_class(group_names=group_names, metric_names=metric_names)
    # https://github.com/tqdm/tqdm#parameters
    # https://github.com/tqdm/tqdm/blob/master/examples/parallel_bars.py
    tqdm_bar = tqdm(names, total=len(names), desc=desc, ncols=78, lock_args=(False,))
    for name in tqdm_bar:
        group_name = name.split(sep)[0]
        gt, pre = get_gt_pre_with_name_and_group(
            img_name=name,
            pre_root=pre_info_pair[0],
            pre_prefix=pre_info_pair[1],
            pre_suffix=pre_info_pair[2],
            gt_root=gt_info_pair[0],
            gt_prefix=gt_info_pair[1],
            gt_suffix=gt_info_pair[2],
            to_normalize=False,
            sep=sep,
        )
        metric_recorder.step(
            group_name=group_name, pre=pre, gt=gt, gt_path=os.path.join(gt_info_pair[0], name)
        )

    # TODO: the printed format needs further improvement
    method_results = metric_recorder.show(num_bits=num_bits, return_group=return_group)
    return method_results


================================================
FILE: plot.py
================================================
# -*- coding: utf-8 -*-
import argparse
import textwrap

import numpy as np
import yaml

from metrics import draw_curves


def get_args():
    parser = argparse.ArgumentParser(
        description=textwrap.dedent(
            r"""
    INCLUDE:

    - Fm Curve
    - PR Curves

    NOTE:

    - Our method automatically calculates the intersection of `pre` and `gt`.
    - Currently supported pre naming rules: `prefix + gt_name_wo_ext + suffix_w_ext`

    EXAMPLES:

    python plot.py \
        --curves-npys output/rgbd-othermethods-curves.npy output/rgbd-ours-curves.npy \ # use the information from these npy files to draw curves
        --num-rows 1 \ # set the number of rows of the figure to 1
        --style-cfg configs/single_row_style.yml \ # specify the configuration file for the matplotlib style
        --num-col-legend 1 \ # set the number of columns of the legend in the figure to 1
        --mode pr \ # draw `pr` curves
        --our-methods Ours \ # specify the names of our own methods; they must be contained in the npy file
        --save-name ./output/rgbd-pr-curves # save the figure as `./output/rgbd-pr-curves.<ext_name>`, where `ext_name` is specified by the `savefig.format` item in the `--style-cfg`

    python plot.py \
        --curves-npys output/rgbd-othermethods-curves.npy output/rgbd-ours-curves.npy \
        --num-rows 2 \
        --style-cfg configs/two_row_style.yml \
        --num-col-legend 2 \
        --separated-legend \ # use a separated legend
        --mode fm \ # draw `fm` curves
        --our-methods OursV0 OursV1 \ # specify the names of our own methods; they must be contained in the npy file
        --save-name output/rgbd-fm \
        --alias-yaml configs/rgbd_aliases.yaml # aliases for the methods and datasets you want to use
    """
        ),
        formatter_class=argparse.RawTextHelpFormatter,
    )
    # fmt: off
    parser.add_argument("--alias-yaml", type=str, help="Yaml file for datasets and methods alias.")
    parser.add_argument("--style-cfg", type=str, required=True, help="Yaml file for plotting curves.")
    parser.add_argument("--curves-npys", required=True, type=str, nargs="+", help="Npy file for saving curve results.")
    parser.add_argument("--our-methods", type=str, nargs="+", help="Names of our methods for highlighting it.")
    parser.add_argument("--num-rows", type=int, default=1, help="Number of rows for subplots. Default: 1")
    parser.add_argument("--num-col-legend", type=int, default=1, help="Number of columns in the legend. Default: 1")
    parser.add_argument("--mode", type=str, choices=["pr", "fm", "em", "iou", "dice"], default="pr", help="Mode for plotting. Default: pr")
    parser.add_argument("--separated-legend", action="store_true", help="Use the separated legend.")
    parser.add_argument("--sharey", action="store_true", help="Use the shared y-axis.")
    parser.add_argument("--save-name", type=str, help="the exported file path")
    # fmt: on
    args = parser.parse_args()

    return args


def main(args):
    method_aliases = dataset_aliases = None
    if args.alias_yaml:
        with open(args.alias_yaml, mode="r", encoding="utf-8") as f:
            aliases = yaml.safe_load(f)
        method_aliases = aliases.get("method")
        dataset_aliases = aliases.get("dataset")

    # TODO: Better method to set axes_setting
    axes_setting = {
        # pr curve
        "pr": {
            "x_label": "Recall",
            "y_label": "Precision",
            "x_ticks": np.linspace(0.5, 1, 6),
            "y_ticks": np.linspace(0.7, 1, 6),
        },
        # fm curve
        "fm": {
            "x_label": "Threshold",
            "y_label": r"F$_{\beta}$",
            "x_ticks": np.linspace(0, 1, 6),
            "y_ticks": np.linspace(0.6, 1, 6),
        },
        # em curve
        "em": {
            "x_label": "Threshold",
            "y_label": r"E$_{m}$",
            "x_ticks": np.linspace(0, 1, 6),
            "y_ticks": np.linspace(0.7, 1, 6),
        },
        # iou curve
        "iou": {
            "x_label": "Threshold",
            "y_label": "IoU",
            "x_ticks": np.linspace(0, 1, 6),
            "y_ticks": np.linspace(0.4, 1, 6),
        },
        # dice curve
        "dice": {
            "x_label": "Threshold",
            "y_label": "Dice",
            "x_ticks": np.linspace(0, 1, 6),
            "y_ticks": np.linspace(0.4, 1, 6),
        },
    }

    draw_curves.draw_curves(
        mode=args.mode,
        axes_setting=axes_setting,
        curves_npy_path=args.curves_npys,
        row_num=args.num_rows,
        method_aliases=method_aliases,
        dataset_aliases=dataset_aliases,
        style_cfg=args.style_cfg,
        ncol_of_legend=args.num_col_legend,
        separated_legend=args.separated_legend,
        sharey=args.sharey,
        our_methods=args.our_methods,
        save_name=args.save_name,
    )


if __name__ == "__main__":
    args = get_args()
    main(args)


================================================
FILE: pyproject.toml
================================================
[tool.ruff]
# Exclude a variety of commonly ignored directories.
exclude = [
    ".bzr",
    ".direnv",
    ".eggs",
    ".git",
    ".git-rewrite",
    ".hg",
    ".ipynb_checkpoints",
    ".mypy_cache",
    ".nox",
    ".pants.d",
    ".pyenv",
    ".pytest_cache",
    ".pytype",
    ".ruff_cache",
    ".svn",
    ".tox",
    ".venv",
    ".vscode",
    "__pypackages__",
    "_build",
    "buck-out",
    "build",
    "dist",
    "node_modules",
    "site-packages",
    "venv",
]

# Same as Black.
line-length = 99
indent-width = 4

[tool.ruff.lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`)  codes by default.
select = ["E4", "E7", "E9", "F"]
ignore = []

# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []

# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"

[tool.ruff.format]
# Like Black, use double quotes for strings.
quote-style = "double"

# Like Black, indent with spaces, rather than tabs.
indent-style = "space"

# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false

# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"


================================================
FILE: readme.md
================================================

# A Python-based image grayscale/binary segmentation evaluation toolbox.

[中文文档](./readme_zh.md)

## TODO

- More flexible configuration script.
    - [ ] Use a yaml file that [meets matplotlib requirements](https://matplotlib.org/stable/tutorials/introductory/customizing.html#the-default-matplotlibrc-file) to control the drawing format.
    - [ ] Replace the json with a more flexible configuration format, such as yaml or toml.
- [ ] Add test scripts.
- [ ] Add more detailed comments.
- Optimize the code for exporting evaluation results.
    - [x] Implement code to export results to XLSX files.
    - [ ] Optimize the code for exporting to XLSX files.
    - [ ] Consider whether a text format like CSV would be better: it can be opened as a text file and also organized in Excel.
- [ ] Replace `os.path` with `pathlib.Path`.
- [x] Improve the code for grouping data, supporting tasks like CoSOD, Video Binary Segmentation, etc.
- [x] Support concurrency strategy to speed up computation. Retained support for multi-threading, removed the previous multi-process code.
    - [ ] Currently, due to the use of multi-threading, there is an issue with extra log information being written, which needs more optimization.
- [X] Separate USVOS code into another repository [PyDavis16EvalToolbox](https://github.com/lartpang/PyDavis16EvalToolbox).
- [X] Use more rapid and accurate metric code [PySODMetrics](https://github.com/lartpang/PySODMetrics) as the evaluation benchmark.

> [!tip]
> - Some methods provide result names that do not match the original dataset's ground truth names.
>     - [Note] (2021-11-18) Currently, support is provided for both `prefix` and `suffix` names, so users generally do not need to change the names themselves.
>     - [Optional] The provided script tools/rename.py can be used to rename files in bulk. *Please use it carefully to avoid data overwriting.*
>     - [Optional] Other tools, such as `rename` on Linux, and [`Microsoft PowerToys`](https://github.com/microsoft/PowerToys) on Windows.

## Features

- Benefiting from PySODMetrics, it supports a richer set of metrics. For more details, see `utils/recorders/metric_recorder.py`.
    - Supports evaluating *grayscale images*, such as predictions from salient object detection (SOD) and camouflaged object detection (COD) tasks.
        - MAE
        - Emeasure
        - Smeasure
        - Weighted Fmeasure
        - Maximum/Average/Adaptive Fmeasure
        - Maximum/Average/Adaptive Precision
        - Maximum/Average/Adaptive Recall
        - Maximum/Average/Adaptive IoU
        - Maximum/Average/Adaptive Dice
        - Maximum/Average/Adaptive Specificity
        - Maximum/Average/Adaptive BER
        - Fmeasure-Threshold Curve (run `eval.py` with the metric `fmeasure`)
        - Emeasure-Threshold Curve (run `eval.py` with the metric `em`)
        - Precision-Recall Curve (run `eval.py` with the metrics `precision` and `recall`, this is different from previous versions as the calculation of `precision` and `recall` has been separated from `fmeasure`)
    - Supports evaluating *binary images*, such as common binary segmentation tasks.
        - Binary Fmeasure
        - Binary Precision
        - Binary Recall
        - Binary IoU
        - Binary Dice
        - Binary Specificity
        - Binary BER
- Richer functions.
    - Supports evaluating models according to the configuration.
    - Supports drawing `PR curves`, `F-measure curves` and `E-measure curves` based on configuration and evaluation results.
    - Supports exporting results to TXT files.
    - Supports exporting results to XLSX files (re-supported on January 4, 2021).
    - Supports exporting LaTeX table code from generated `.npy` files, and marks the top three methods with different colors.
    - … :>.

## How to Use

### Installing Dependencies

Install the required libraries: `pip install -r requirements.txt`

The metric evaluation is based on another project of mine: [PySODMetrics](https://github.com/lartpang/PySODMetrics). Bug reports are welcome!

### Configuring Paths for Datasets and Method Predictions

This project relies on json files to store data. Examples for dataset and method configurations are provided in `./examples`: `config_dataset_json_example.json` and `config_method_json_example.json`. You can directly modify them for subsequent steps.

> [!note]
> - Please note that since this project relies on OpenCV to read images, ensure that the path strings do not contain non-ASCII characters.
> - Make sure that *the name of the dataset in the dataset configuration file* matches *the name of the dataset in the method configuration file*. After preparing the json files, it is recommended to use the provided `tools/check_path.py` to check if the path information in the json files is correct.

<details>
<summary>
More Details on Configuration
</summary>

Example 1: Dataset Configuration

Note, "image" is not necessary here. The actual evaluation only reads "mask".

```json
{
    "LFSD": {
        "image": {
            "path": "Path_Of_RGBDSOD_Datasets/LFSD/Image",
            "prefix": "some_gt_prefix",
            "suffix": ".jpg"
        },
        "mask": {
            "path": "Path_Of_RGBDSOD_Datasets/LFSD/Mask",
            "prefix": "some_gt_prefix",
            "suffix": ".png"
        }
    }
}
```

Example 2: Method Configuration

```json
{
    "Method1": {
        "PASCAL-S": {
            "path": "Path_Of_Method1/PASCAL-S",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "ECSSD": {
            "path": "Path_Of_Method1/ECSSD",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "HKU-IS": {
            "path": "Path_Of_Method1/HKU-IS",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "DUT-OMRON": {
            "path": "Path_Of_Method1/DUT-OMRON",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "DUTS-TE": {
            "path": "Path_Of_Method1/DUTS-TE",
            "suffix": ".png"
        }
    }
}
```

Here, `path` represents the directory where the image data is stored, while `prefix` and `suffix` refer to the parts of the file names of the predicted images and the ground-truth images *outside their shared part*.

During the evaluation process, the matching of method predictions and dataset ground truths is based on the shared part of the file names. Their naming patterns are preset as `[prefix]+[shared-string]+[suffix]`. For example, if there are predicted images like `method1_00001.jpg`, `method1_00002.jpg`, `method1_00003.jpg` and ground truth images `gt_00001.png`, `gt_00002.png`, `gt_00003.png`, then we can configure it as follows (a runnable sketch of this matching rule appears after Example 4):

Example 3: Dataset Configuration

```json
{
    "dataset1": {
        "mask": {
            "path": "path/Mask",
            "prefix": "gt_",
            "suffix": ".png"
        }
    }
}
```

Example 4: Method Configuration

```json
{
    "method1": {
        "dataset1": {
            "path": "path/dataset1",
            "prefix": "method1_",
            "suffix": ".jpg"
        }
    }
}
```
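
To make the matching rule concrete, here is a minimal Python sketch of it (the file names below are hypothetical; `tools/check_path.py` applies the same prefix/suffix stripping when validating your configs):

```python
pred_names = ["method1_00001.jpg", "method1_00002.jpg", "method1_00003.jpg"]
gt_names = ["gt_00001.png", "gt_00002.png", "gt_00004.png"]

def shared_part(name: str, prefix: str, suffix: str) -> str:
    # strip the configured prefix and suffix to recover the shared string
    return name[len(prefix):-len(suffix)] if suffix else name[len(prefix):]

preds = {shared_part(n, "method1_", ".jpg") for n in pred_names}
gts = {shared_part(n, "gt_", ".png") for n in gt_names}
print(sorted(preds & gts))  # -> ['00001', '00002']; only these pairs are evaluated
```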

</details>

### Running the Evaluation

- Once all the previous steps are correctly completed, you can begin the evaluation. For usage of the evaluation script, refer to the output of the command `python eval.py --help`.
- Add configuration options according to your needs and execute the command. If there are no exceptions, it will generate result files with the specified filenames.
  - If the output files are not specified, the results are directly printed, as detailed in the help information of `eval.py`.
  - If `--curves-npy` is specified, the metric information needed for plotting will be saved in the corresponding `.npy` file (a loading sketch follows this list).
- [Optional] You can use `tools/converter.py` to directly export the LaTeX table code from the generated npy files.
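
If you want to inspect the exported `.npy` files yourself, note that they store plain pickled dictionaries keyed by dataset and method. A minimal loading sketch (the path below is a placeholder):

```python
import numpy as np

# curves.npy / metrics.npy hold a nested dict: {dataset_name: {method_name: results}}
data = np.load("output/rgb_sod/curves.npy", allow_pickle=True).item()
for dataset_name, methods in data.items():
    print(dataset_name, list(methods.keys()))
```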

### Plotting Curves for Grayscale Image Evaluation

You can use `plot.py` to read the `.npy` files and draw the `PR`, `F-measure`, and `E-measure` curves for the specified methods and datasets as needed. The usage of this script can be seen in the output of `python plot.py --help`. Add configuration items as per your requirements and execute the command.

Most importantly, set the `figure.figsize` item in the style configuration file to values that match the number of subplots.

### A Basic Execution Process

Here I'll use the RGB SOD configuration in my local configs folder as an example (necessary modifications should be made according to the actual situation).

```shell
# Check Configuration Files
python tools/check_path.py --method-jsons configs/methods/rgb-sod/rgb_sod_methods.json --dataset-jsons configs/datasets/rgb_sod.json

# After ensuring there's nothing unreasonable in the output information, you can begin the evaluation with the following commands:
# --dataset-json: Set `configs/datasets/rgb_sod.json` as dataset configuration file
# --method-json: Set `configs/methods/rgb-sod/rgb_sod_methods.json` as method configuration file
# --metric-npy: Set `output/rgb_sod/metrics.npy` to store the metrics information in npy format
# --curves-npy: Set `output/rgb_sod/curves.npy` to store the curves information in npy format
# --record-txt: Set `output/rgb_sod/results.txt` to store the results information in text format
# --record-xlsx: Set `output/rgb_sod/results.xlsx` to store the results information in Excel format
# --metric-names: Specify `fmeasure em precision recall` as the metrics to be calculated
# --include-methods: Specify the methods from `configs/methods/rgb-sod/rgb_sod_methods.json` to be evaluated
# --include-datasets: Specify the datasets from `configs/datasets/rgb_sod.json` to be evaluated
python eval.py --dataset-json configs/datasets/rgb_sod.json --method-json configs/methods/rgb-sod/rgb_sod_methods.json --metric-npy output/rgb_sod/metrics.npy --curves-npy output/rgb_sod/curves.npy --record-txt output/rgb_sod/results.txt --record-xlsx output/rgb_sod/results.xlsx --metric-names sm wfm mae fmeasure em precision recall --include-methods MINet_R50_2020 GateNet_2020 --include-datasets PASCAL-S ECSSD

# Once you've obtained the curve data file, which in this case is the 'output/rgb_sod/curves.npy' file, you can start drawing the plot.

# For a simple example, after executing the command below, the result will be saved as 'output/rgb_sod/simple_curve_pr.pdf':
# --style-cfg: Specify the style configuration file `examples/single_row_style.yml`. Since there are only a few subplots, you can directly use a single-row configuration.
# --num-rows: The number of rows of subplots in the figure.
# --curves-npys: Use the curve data file `output/rgb_sod/curves.npy` to draw the plot.
# --mode: Use `pr` to draw the `pr` curve, `em` to draw the `E-measure` curve, and `fm` to draw the `F-measure` curve.
# --save-name: Just provide the image save path without the file extension; the code will append the file extension as specified by the `savefig.format` in the `--style-cfg` you designated earlier.
# --alias-yaml: A yaml file that specifies the method and dataset aliases to be used in the plot.
python plot.py --style-cfg examples/single_row_style.yml --num-rows 1 --curves-npys output/rgb_sod/curves.npy --mode pr --save-name output/rgb_sod/simple_curve_pr --alias-yaml configs/rgb_aliases.yaml

# More complex examples, after executing the command below, the result will be saved as 'output/rgb_sod/complex_curve_pr.pdf'.

# --style-cfg: Specify the style configuration file `examples/single_row_style.yml`. Since there are only a few subplots, you can directly use a single-row configuration.
# --num-rows: The number of rows of subplots in the figure.
# --curves-npys: Use the curve data file `output/rgb_sod/curves.npy` to draw the plot.
# --our-methods: The specified method, `MINet_R50_2020`, is highlighted with a bold red solid line in the plot.
# --num-col-legend: The number of columns in the legend.
# --mode: Use `pr` to draw the `pr` curve, `em` to draw the `E-measure` curve, and `fm` to draw the `F-measure` curve.
# --separated-legend: Draw a shared single legend.
# --sharey: Share the y-axis, which will only display the scale value on the first graph in each row.
# --save-name: Just provide the image save path without the file extension; the code will append the file extension as specified by the `savefig.format` in the `--style-cfg` you designated earlier.
python plot.py --style-cfg examples/single_row_style.yml --num-rows 1 --curves-npys output/rgb_sod/curves.npy --our-methods MINet_R50_2020 --num-col-legend 1 --mode pr --separated-legend --sharey --save-name output/rgb_sod/complex_curve_pr
```

## Corresponding Results

**Precision-Recall Curve**:

![PRCurves](https://user-images.githubusercontent.com/26847524/227249768-a41ef076-6355-4b96-a291-fc0e071d9d35.jpg)

**F-measure Curve**:

![fm-curves](https://user-images.githubusercontent.com/26847524/227249746-f61d7540-bb73-464d-bccf-9a36323dec47.jpg)

**E-measure Curve**:

![em-curves](https://user-images.githubusercontent.com/26847524/227249727-8323d5cf-ddd7-427b-8152-b8f47781c4e3.jpg)

## Programming Reference

* `openpyxl` library: <https://www.cnblogs.com/programmer-tlh/p/10461353.html>
* `re` module: <https://www.cnblogs.com/shenjianping/p/11647473.html>

## Relevant Literature

```text
@inproceedings{Fmeasure,
    title={Frequency-tuned salient region detection},
    author={Achanta, Radhakrishna and Hemami, Sheila and Estrada, Francisco and S{\"u}sstrunk, Sabine},
    booktitle=CVPR,
    number={CONF},
    pages={1597--1604},
    year={2009}
}

@inproceedings{MAE,
    title={Saliency filters: Contrast based filtering for salient region detection},
    author={Perazzi, Federico and Kr{\"a}henb{\"u}hl, Philipp and Pritch, Yael and Hornung, Alexander},
    booktitle=CVPR,
    pages={733--740},
    year={2012}
}

@inproceedings{Smeasure,
    title={Structure-measure: A new way to eval foreground maps},
    author={Fan, Deng-Ping and Cheng, Ming-Ming and Liu, Yun and Li, Tao and Borji, Ali},
    booktitle=ICCV,
    pages={4548--4557},
    year={2017}
}

@inproceedings{Emeasure,
    title="Enhanced-alignment Measure for Binary Foreground Map Evaluation",
    author="Deng-Ping {Fan} and Cheng {Gong} and Yang {Cao} and Bo {Ren} and Ming-Ming {Cheng} and Ali {Borji}",
    booktitle=IJCAI,
    pages="698--704",
    year={2018}
}

@inproceedings{wFmeasure,
  title={How to eval foreground maps?},
  author={Margolin, Ran and Zelnik-Manor, Lihi and Tal, Ayellet},
  booktitle=CVPR,
  pages={248--255},
  year={2014}
}
```


================================================
FILE: readme_zh.md
================================================

# 基于 Python 的图像灰度/二值分割测评工具箱

## 一些规划

- 更灵活的配置脚本.
    - [ ] 使用[符合 matplotlib 要求](https://matplotlib.org/stable/tutorials/introductory/customizing.html#the-default-matplotlibrc-file)的 yaml 文件来控制绘图格式.
    - [ ] 使用更加灵活的配置文件格式, 例如 yaml 或者 toml 替换 json.
- [ ] 添加测试脚本.
- [ ] 添加更详细的注释.
- 优化导出评估结果的代码.
    - [x] 实现导出结果到 XLSX 文件的代码.
    - [ ] 优化导出到 XLSX 文件的代码.
    - [ ] 是否应该使用 CSV 这样的文本格式更好些? 既可以当做文本文件打开, 亦可使用 Excel 来进行整理.
- [ ] 使用 `pathlib.Path` 替换 `os.path`.
- [x] 完善关于分组数据的代码, 即 CoSOD、Video Binary Segmentation 等任务的支持.
- [x] 支持并发策略加速计算. 目前保留了多线程支持, 剔除了之前的多进程代码.
    - [ ] 目前由于多线程的使用, 存在提示信息额外写入的问题, 有待优化.
- [X] 剥离 USVOS 代码到另一个仓库 [PyDavis16EvalToolbox](https://github.com/lartpang/PyDavis16EvalToolbox).
- [X] 使用更加快速和准确的指标代码 [PySODMetrics](https://github.com/lartpang/PySODMetrics) 作为评估基准.

> [!tip]
> - 一些方法提供的结果名字与原始数据集真值的名字不一致
>     - [注意] (2021-11-18) 当前同时提供了对名称前缀与后缀的支持, 所以基本不用用户自己改名字了.
>     - [可选] 可以使用提供的脚本 `tools/rename.py` 来批量修改文件名.**请小心使用, 以避免数据被覆盖.**
>     - [可选] 其他的工具: 例如 Linux 上的 `rename`, Windows 上的 [`Microsoft PowerToys`](https://github.com/microsoft/PowerToys)

## 特性

- 受益于 PySODMetrics, 从而获得了更加丰富的指标的支持. 更多细节可见 `utils/recorders/metric_recorder.py`.
    - 支持评估*灰度图像*, 例如来自显著性目标检测任务的预测.
        - MAE
        - Emeasure
        - Smeasure
        - Weighted Fmeasure
        - Maximum/Average/Adaptive Fmeasure
        - Maximum/Average/Adaptive Precision
        - Maximum/Average/Adaptive Recall
        - Maximum/Average/Adaptive IoU
        - Maximum/Average/Adaptive Dice
        - Maximum/Average/Adaptive Specificity
        - Maximum/Average/Adaptive BER
        - Fmeasure-Threshold Curve (执行 `eval.py` 请指定指标 `fmeasure`)
        - Emeasure-Threshold Curve (执行 `eval.py` 请指定指标 `em`)
        - Precision-Recall Curve (执行 `eval.py` 请指定指标 `precision` 和 `recall`,这一点不同于以前的版本,因为 `precision` 和 `recall` 的计算被从 `fmeasure` 中独立出来了)
    - 支持评估*二值图像*, 例如常见的二值分割任务.
        - Binary Fmeasure
        - Binary Precision
        - Binary Recall
        - Binary IoU
        - Binary Dice
        - Binary Specificity
        - Binary BER
- 更丰富的功能.
    - 支持根据配置评估模型.
    - 支持根据配置和评估结果绘制 PR 曲线、F-measure 曲线和 E-measure 曲线.
    - 支持导出结果到 TXT 文件中.
    - 支持导出结果到 XLSX 文件 (2021 年 01 月 04 日重新提供支持).
    - 支持从生成的 `.npy` 文件导出 LaTeX 表格代码, 同时支持对最优的前三个方法用不同颜色进行标记.
    - … :>.

## 使用方法

### 安装依赖

安装相关依赖库: `pip install -r requirements.txt` .

其中指标库是我的另一个项目: [PySODMetrics](https://github.com/lartpang/PySODMetrics), 欢迎捉 BUG!

### 配置数据集与方法预测的路径信息

本项目依赖于 json 文件存放数据, `./examples` 中已经提供了数据集和方法配置的例子: `config_dataset_json_example.json` 和 `config_method_json_example.json`, 可以直接修改它们用于后续步骤.

> [!note]
> - 请注意, 由于本项目依赖于 OpenCV 读取图片, 所以请确保路径字符串不包含非 ASCII 字符.
> - 请务必确保*数据集配置文件中数据集的名字*和*方法配置文件中数据集的名字*一致. 准备好 json 文件后, 建议使用提供的 `tools/check_path.py` 来检查下 json 文件中的路径信息是否正常.

<details>
<summary>
关于配置的更多细节
</summary>

例子 1: 数据集配置

注意, 这里的 "image" 并不是必要的. 实际评估仅仅读取 "mask".

```json
{
    "LFSD": {
        "image": {
            "path": "Path_Of_RGBDSOD_Datasets/LFSD/Image",
            "prefix": "some_gt_prefix",
            "suffix": ".jpg"
        },
        "mask": {
            "path": "Path_Of_RGBDSOD_Datasets/LFSD/Mask",
            "prefix": "some_gt_prefix",
            "suffix": ".png"
        }
    }
}
```

例子 2: 方法配置

```json
{
    "Method1": {
        "PASCAL-S": {
            "path": "Path_Of_Method1/PASCAL-S",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "ECSSD": {
            "path": "Path_Of_Method1/ECSSD",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "HKU-IS": {
            "path": "Path_Of_Method1/HKU-IS",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "DUT-OMRON": {
            "path": "Path_Of_Method1/DUT-OMRON",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "DUTS-TE": {
            "path": "Path_Of_Method1/DUTS-TE",
            "suffix": ".png"
        }
    }
}
```

这里 `path` 表示存放图像数据的目录. 而 `prefix` 和 `suffix` 表示实际预测图像和真值图像*名字中除去共有部分外*的前缀与后缀内容.

评估过程中, 方法预测和数据集真值匹配的方式是基于文件名字的共有部分. 二者的名字模式预设为 `[prefix]+[shared-string]+[suffix]` . 例如假如有这样的预测图像 `method1_00001.jpg` , `method1_00002.jpg` , `method1_00003.jpg` 和真值图像 `gt_00001.png` , `gt_00002.png` , `gt_00003.png` . 则我们可以配置如下:

例子 3: 数据集配置

```json
{
    "dataset1": {
        "mask": {
            "path": "path/Mask",
            "prefix": "gt_",
            "suffix": ".png"
        }
    }
}
```

例子 4: 方法配置

```json
{
    "method1": {
        "dataset1": {
            "path": "path/dataset1",
            "prefix": "method1_",
            "suffix": ".jpg"
        }
    }
}
```

</details>

### 执行评估过程

- 前述步骤一切正常后, 可以开始评估了. 评估脚本用法可参考命令 `python eval.py --help` 的输出.
- 根据自己需求添加配置项并执行即可. 如无异常, 会生成指定文件名的结果文件.
  - 如果不指定所有的文件, 那么就直接输出结果, 具体可见 `eval.py` 的帮助信息.
  - 如指定 `--curves-npy`, 绘图相关的指标信息将会保存到对应的 `.npy` 文件中.
- [可选] 可以使用 `tools/converter.py` 直接从生成的 npy 文件中导出 latex 表格代码.

### 为灰度图像的评估绘制曲线

可以使用 `plot.py` 来读取 `.npy` 文件按需对指定方法和数据集的结果整理并绘制 `PR` , `F-measure` 和 `E-measure` 曲线. 该脚本用法可见 `python plot.py --help` 的输出. 按照自己需求添加配置项并执行即可.

最基本的一条是请按照子图数量, 合理地指定配置文件中的 `figure.figsize` 项的数值.

### 一个基本的执行流程

这里以我自己本地的 configs 文件夹中的 RGB SOD 的配置 (需要根据实际情况进行必要的修改) 为例.

```shell
# 检查配置文件
python tools/check_path.py --method-jsons configs/methods/rgb-sod/rgb_sod_methods.json --dataset-jsons configs/datasets/rgb_sod.json

# 在输出信息中没有不合理的地方后,开始进行评估
# --dataset-json 数据集配置文件 configs/datasets/rgb_sod.json
# --method-json 方法配置文件 configs/methods/rgb-sod/rgb_sod_methods.json
# --metric-npy 输出评估结果数据到 output/rgb_sod/metrics.npy
# --curves-npy 输出曲线数据到 output/rgb_sod/curves.npy
# --record-txt 输出评估结果文本到 output/rgb_sod/results.txt
# --record-xlsx 输出评估结果到excel文档 output/rgb_sod/results.xlsx
# --metric-names 所有结果仅包含给定指标的信息, 涉及到曲线的四个指标分别为 fmeasure em precision recall
# --include-methods 评估过程仅包含 configs/methods/rgb-sod/rgb_sod_methods.json 中的给定方法
# --include-datasets 评估过程仅包含 configs/datasets/rgb_sod.json 中的给定数据集
python eval.py --dataset-json configs/datasets/rgb_sod.json --method-json configs/methods/rgb-sod/rgb_sod_methods.json --metric-npy output/rgb_sod/metrics.npy --curves-npy output/rgb_sod/curves.npy --record-txt output/rgb_sod/results.txt --record-xlsx output/rgb_sod/results.xlsx --metric-names sm wfm mae fmeasure em precision recall --include-methods MINet_R50_2020 GateNet_2020 --include-datasets PASCAL-S ECSSD

# 得到曲线数据文件,即这里的 output/rgb_sod/curves.npy 文件后,就可以开始绘制图像了

# 简单的例子,下面指令执行后,结果保存为 output/rgb_sod/simple_curve_pr.pdf
# --style-cfg 使用图像风格配置文件 examples/single_row_style.yml,这里子图较少,直接使用单行的配置
# --num-rows 图像子图都位于一行
# --curves-npys 将使用曲线数据文件 output/rgb_sod/curves.npy 来绘图
# --mode pr: 绘制是pr曲线;fm: 绘制的是fm曲线
# --save-name 图像保存路径,只需写出名字,代码会加上由前面指定的 --style-cfg 中的 `savefig.format` 项指定的格式后缀名
# --alias-yaml: 使用 yaml 文件指定绘图中使用的方法别名和数据集别名
python plot.py --style-cfg examples/single_row_style.yml --num-rows 1 --curves-npys output/rgb_sod/curves.npy --mode pr --save-name output/rgb_sod/simple_curve_pr --alias-yaml configs/rgb_aliases.yaml

# 复杂的例子,下面指令执行后,结果保存为 output/rgb_sod/complex_curve_pr.pdf

# --style-cfg 使用图像风格配置文件 examples/single_row_style.yml,这里子图较少,直接使用单行的配置
# --num-rows 图像子图都位于一行
# --curves-npys 将使用曲线数据文件 output/rgb_sod/curves.npy 来绘图
# --our-methods 在图中使用红色实线加粗标注指定的方法 MINet_R50_2020
# --num-col-legend 图像子图图示中信息的列数
# --mode pr: 绘制是pr曲线;fm: 绘制的是fm曲线
# --separated-legend 使用共享的单个图示
# --sharey 使用共享的 y 轴刻度,这将仅在每行的第一个图上显示刻度值
# --save-name 图像保存路径,只需写出名字,代码会加上由前面指定的 --style-cfg 中的 `savefig.format` 项指定的格式后缀名
python plot.py --style-cfg examples/single_row_style.yml --num-rows 1 --curves-npys output/rgb_sod/curves.npy --our-methods MINet_R50_2020 --num-col-legend 1 --mode pr --separated-legend --sharey --save-name output/rgb_sod/complex_curve_pr
```

## 绘图示例

**Precision-Recall Curve**:

![PRCurves](https://user-images.githubusercontent.com/26847524/227249768-a41ef076-6355-4b96-a291-fc0e071d9d35.jpg)

**F-measure Curve**:

![fm-curves](https://user-images.githubusercontent.com/26847524/227249746-f61d7540-bb73-464d-bccf-9a36323dec47.jpg)

**E-measure Curve**:

![em-curves](https://user-images.githubusercontent.com/26847524/227249727-8323d5cf-ddd7-427b-8152-b8f47781c4e3.jpg)

## 编程参考

- `openpyxl` 库: <https://www.cnblogs.com/programmer-tlh/p/10461353.html>
- `re` 模块: <https://www.cnblogs.com/shenjianping/p/11647473.html>

## 相关文献

```text
@inproceedings{Fmeasure,
    title={Frequency-tuned salient region detection},
    author={Achanta, Radhakrishna and Hemami, Sheila and Estrada, Francisco and S{\"u}sstrunk, Sabine},
    booktitle=CVPR,
    number={CONF},
    pages={1597--1604},
    year={2009}
}

@inproceedings{MAE,
    title={Saliency filters: Contrast based filtering for salient region detection},
    author={Perazzi, Federico and Kr{\"a}henb{\"u}hl, Philipp and Pritch, Yael and Hornung, Alexander},
    booktitle=CVPR,
    pages={733--740},
    year={2012}
}

@inproceedings{Smeasure,
    title={Structure-measure: A new way to eval foreground maps},
    author={Fan, Deng-Ping and Cheng, Ming-Ming and Liu, Yun and Li, Tao and Borji, Ali},
    booktitle=ICCV,
    pages={4548--4557},
    year={2017}
}

@inproceedings{Emeasure,
    title="Enhanced-alignment Measure for Binary Foreground Map Evaluation",
    author="Deng-Ping {Fan} and Cheng {Gong} and Yang {Cao} and Bo {Ren} and Ming-Ming {Cheng} and Ali {Borji}",
    booktitle=IJCAI,
    pages="698--704",
    year={2018}
}

@inproceedings{wFmeasure,
  title={How to eval foreground maps?},
  author={Margolin, Ran and Zelnik-Manor, Lihi and Tal, Ayellet},
  booktitle=CVPR,
  pages={248--255},
  year={2014}
}
```


================================================
FILE: requirements.txt
================================================
# Automatically generated by https://github.com/damnever/pigar.

matplotlib
numpy
opencv-python
openpyxl
pysodmetrics==1.4.2 # Our metric library
PyYAML==6.0
tabulate
tqdm


================================================
FILE: tools/append_results.py
================================================
# -*- coding: utf-8 -*-
import argparse

import numpy as np


def get_args():
    parser = argparse.ArgumentParser(
        description="""A simple tool for merging two npy file.
    Patch the method items corresponding to the `--method-names` and `--dataset-names` of `--new-npy` into `--old-npy`,
    and output the whole container to `--out-npy`.
    """
    )
    parser.add_argument("--old-npy", type=str, required=True)
    parser.add_argument("--new-npy", type=str, required=True)
    parser.add_argument("--method-names", type=str, nargs="+")
    parser.add_argument("--dataset-names", type=str, nargs="+")
    parser.add_argument("--out-npy", type=str, required=True)
    args = parser.parse_args()
    return args


def main():
    args = get_args()
    new_npy: dict = np.load(args.new_npy, allow_pickle=True).item()
    old_npy: dict = np.load(args.old_npy, allow_pickle=True).item()

    for dataset_name, methods_info in new_npy.items():
        if args.dataset_names and dataset_name not in args.dataset_names:
            continue

        print(f"[PROCESSING INFORMATION ABOUT DATASET {dataset_name}...]")
        old_methods_info = old_npy.get(dataset_name)
        if not old_methods_info:
            raise KeyError(f"{old_npy} doesn't contain the information about {dataset_name}.")

        print(f"OLD_NPY: {list(old_methods_info.keys())}")
        print(f"NEW_NPY: {list(methods_info.keys())}")

        for method_name, method_info in methods_info.items():
            if args.method_names and method_name not in args.method_names:
                continue

            if method_name not in old_npy[dataset_name]:
                old_methods_info[method_name] = method_info
        print(f"MERGED_NPY: {list(old_methods_info.keys())}")

    np.save(file=args.out_npy, arr=old_npy)
    print(f"THE MERGED_NPY IS SAVED INTO {args.out_npy}")


if __name__ == "__main__":
    main()


================================================
FILE: tools/check_path.py
================================================
# -*- coding: utf-8 -*-

import argparse
import json
import os
from collections import OrderedDict

parser = argparse.ArgumentParser(description="A simple tool for checking your json config file.")
parser.add_argument(
    "-m", "--method-jsons", nargs="+", required=True, help="The json file about all methods."
)
parser.add_argument(
    "-d", "--dataset-jsons", nargs="+", required=True, help="The json file about all datasets."
)
args = parser.parse_args()

for method_json, dataset_json in zip(args.method_jsons, args.dataset_jsons):
    with open(method_json, encoding="utf-8", mode="r") as f:
        methods_info = json.load(f, object_hook=OrderedDict)  # load preserving key order
    with open(dataset_json, encoding="utf-8", mode="r") as f:
        datasets_info = json.load(f, object_hook=OrderedDict)  # load preserving key order

    total_msgs = []
    for method_name, method_info in methods_info.items():
        print(f"Checking for {method_name} ...")
        for dataset_name, results_info in method_info.items():
            if results_info is None:
                continue

            dataset_mask_info = datasets_info[dataset_name]["mask"]
            mask_path = dataset_mask_info["path"]
            mask_suffix = dataset_mask_info["suffix"]

            dir_path = results_info["path"]
            file_prefix = results_info.get("prefix", "")
            file_suffix = results_info["suffix"]

            if not os.path.exists(dir_path):
                total_msgs.append(f"{dir_path} 不存在")
                continue
            elif not os.path.isdir(dir_path):
                total_msgs.append(f"{dir_path} 不是正常的文件夹路径")
                continue
            else:
                pred_names = [
                    name[len(file_prefix) : -len(file_suffix)]
                    for name in os.listdir(dir_path)
                    if name.startswith(file_prefix) and name.endswith(file_suffix)
                ]
                if len(pred_names) == 0:
                    total_msgs.append(f"{dir_path} 中不包含前缀为{file_prefix}且后缀为{file_suffix}的文件")
                    continue

            mask_names = [
                name[: -len(mask_suffix)]
                for name in os.listdir(mask_path)
                if name.endswith(mask_suffix)
            ]
            intersection_names = set(mask_names).intersection(set(pred_names))
            if len(intersection_names) == 0:
                total_msgs.append(f"{dir_path} 中数据名字与真值 {mask_path} 不匹配")
            elif len(intersection_names) != len(mask_names):
                difference_names = set(mask_names).difference(pred_names)
                total_msgs.append(
                    f"{dir_path} has {len(pred_names)} predictions but {len(mask_names)} ground truths"
                    f" ({len(difference_names)} ground truths unmatched)"
                )

    if total_msgs:
        print(*total_msgs, sep="\n")
    else:
        print(f"{method_json} & {dataset_json} 基本正常")


================================================
FILE: tools/converter.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/8/25
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang

import argparse
import re
from itertools import chain

import numpy as np
import yaml

# fmt: off
parser = argparse.ArgumentParser(description="A useful and convenient tool to convert your .npy results into the table code in latex.")
parser.add_argument("-i", "--result-file", required=True, nargs="+", action="extend", help="The path of the *_metrics.npy file.")
parser.add_argument("-o", "--tex-file", required=True, type=str, help="The path of the exported tex file.")
parser.add_argument("-c", "--config-file", type=str, help="The path of the customized config yaml file.")
parser.add_argument("--contain-table-env", action="store_true", help="Whether to containe the table env in the exported code.")
parser.add_argument("--num-bits", type=int, default=3, help="Number of valid digits.")
parser.add_argument("--transpose", action="store_true", help="Whether to transpose the table.")
# fmt: on
args = parser.parse_args()

arg_head = f"%% Generated by: {vars(args)}"


def update_dict(parent_dict, sub_dict):
    """Recursively merge sub_dict into parent_dict (in place)."""
    for sub_k, sub_v in sub_dict.items():
        if sub_k in parent_dict and isinstance(sub_v, dict) and isinstance(parent_dict[sub_k], dict):
            update_dict(parent_dict=parent_dict[sub_k], sub_dict=sub_v)
        else:
            # overwrite only this key instead of dumping the whole sub_dict into the parent
            parent_dict[sub_k] = sub_v


results = {}
for result_file in args.result_file:
    result = np.load(file=result_file, allow_pickle=True).item()
    for dataset_name, method_infos in result.items():
        results.setdefault(dataset_name, {})
        for method_name, method_info in method_infos.items():
            new_method_info = {}
            for metric_name, metric_result in method_info.items():
                if "fmeasure" in metric_name:
                    metric_name = metric_name.replace("fmeasure", "f")
                new_method_info[metric_name] = metric_result
            results[dataset_name][method_name] = new_method_info

IMPOSSIBLE_UP_BOUND = 1
IMPOSSIBLE_DOWN_BOUND = 0

# load the data
dataset_names = sorted(list(results.keys()))
metric_names = ["SM", "wFm", "MAE", "adpE", "avgE", "maxE", "adpF", "avgF", "maxF"]
method_names = sorted(list(set(chain(*[list(results[n].keys()) for n in dataset_names]))))

if args.config_file is not None:
    assert args.config_file.endswith((".yaml", ".yml"))
    with open(args.config_file, mode="r", encoding="utf-8") as f:
        cfg = yaml.safe_load(f)

    if "dataset_names" not in cfg:
        print("`dataset_names` not in the config file, use the default config.")
    else:
        dataset_names = cfg["dataset_names"]
    if "metric_names" not in cfg:
        print("`metric_names` not in the config file, use the default config.")
    else:
        metric_names = cfg["metric_names"]
    if "method_names" not in cfg:
        print("`method_names` not in the config file, use the default config.")
    else:
        method_names = cfg["method_names"]

print(
    f"CONFIG INFORMATION:"
    f"\n- DATASETS ({len(dataset_names)}): {dataset_names}]"
    f"\n- METRICS ({len(metric_names)}): {metric_names}"
    f"\n- METHODS ({len(method_names)}): {method_names}"
)

if isinstance(metric_names, (list, tuple)):
    ori_metric_names = metric_names
elif isinstance(metric_names, dict):
    ori_metric_names, metric_names = list(zip(*list(metric_names.items())))
else:
    raise NotImplementedError

if isinstance(method_names, (list, tuple)):
    ori_method_names = method_names
elif isinstance(method_names, dict):
    ori_method_names, method_names = list(zip(*list(method_names.items())))
else:
    raise NotImplementedError

# organize the table
ori_columns = []
column_for_index = []
for dataset_idx, dataset_name in enumerate(dataset_names):
    for metric_idx, ori_metric_name in enumerate(ori_metric_names):
        filled_value = (
            IMPOSSIBLE_UP_BOUND if ori_metric_name.lower() == "mae" else IMPOSSIBLE_DOWN_BOUND
        )
        filled_dict = {k: filled_value for k in ori_metric_names}
        ori_column = []
        for method_name in ori_method_names:
            method_result = results[dataset_name].get(method_name, filled_dict)
            if ori_metric_name not in method_result:
                raise KeyError(
                    f"{ori_metric_name} must be contained in {list(method_result.keys())}"
                )
            ori_column.append(method_result[ori_metric_name])

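        # negate lower-is-better columns (MAE) so that ranking by descending value always picks the best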
        column_for_index.append([x * round(1 - filled_value * 2) for x in ori_column])
        ori_columns.append(ori_column)

style_templates = dict(
    method_row_body="& {method_name}",
    method_column_body=" {method_name}",
    dataset_row_body="& \\multicolumn{{{num_metrics}}}{{c}}{{\\textbf{{{dataset_name}}}}}",
    dataset_column_body="\\multirow{{-{num_metrics}}}{{*}}{{\\rotatebox{{90}}{{\\textbf{{{dataset_name}}}}}}}",
    dataset_head=" ",
    metric_body="& {metric_name}",
    metric_row_head=" ",
    metric_column_head="& ",
    body=[
        "& \\first{{{txt:.03f}}}",  # style for top1
        "& \\second{{{txt:.03f}}}",  # style for top2
        "& \\third{{{txt:.03f}}}",  # style for top3
        "& {txt:.03f}",  # style for other
    ],
)


# Rank the values and apply cell styles
def replace_cell(ori_value, k):
    if ori_value == IMPOSSIBLE_UP_BOUND or ori_value == IMPOSSIBLE_DOWN_BOUND:
        new_value = "& "
    else:
        new_value = style_templates["body"][k].format(txt=ori_value)
    return new_value


for col, ori_col in zip(column_for_index, ori_columns):
    col_array = np.array(col).reshape(-1).round(args.num_bits)
    sorted_col_array = np.sort(np.unique(col_array), axis=-1)[-3:][::-1]
    # [top1_idxes, top2_idxes, top3_idxes]
    top_k_idxes = [np.argwhere(col_array == x).tolist() for x in sorted_col_array]
    for k, idxes in enumerate(top_k_idxes):
        for row_idx in idxes:
            ori_col[row_idx[0]] = replace_cell(ori_col[row_idx[0]], k)

    for idx, x in enumerate(ori_col):
        if not isinstance(x, str):
            ori_col[idx] = replace_cell(x, -1)

# Build the table header
num_datasets = len(dataset_names)
num_metrics = len(metric_names)
num_methods = len(method_names)

# Build the leading columns first, then assemble the leading rows as a whole
latex_table_head = []
latex_table_tail = []


def remove_latex_chars_out_of_mathenv(string: str):
    string_splits = string.split("$")  # 'abcd$efg$hij$' -> ['abcd', 'efg', 'hij', '']
    for i, s in enumerate(string_splits):
        if i % 2 == 0:
            string_splits[i] = re.sub(pattern=r"_", repl=r"-", string=s)
    string = "$".join(string_splits)
    return string
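# For example (illustrative): remove_latex_chars_out_of_mathenv("Our_Method$F_{\\beta}$")
# returns "Our-Method$F_{\\beta}$": underscores outside $...$ are replaced with hyphens,
# while those inside math segments are preserved.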


method_names = [remove_latex_chars_out_of_mathenv(x) for x in method_names]
dataset_names = [remove_latex_chars_out_of_mathenv(x) for x in dataset_names]
if not args.transpose:
    dataset_row = (
        [style_templates["dataset_head"]]
        + [
            style_templates["dataset_row_body"].format(num_metrics=num_metrics, dataset_name=x)
            for x in dataset_names
        ]
        + [r"\\"]
    )
    metric_row = (
        [style_templates["metric_row_head"]]
        + [style_templates["metric_body"].format(metric_name=x) for x in metric_names]
        * num_datasets
        + [r"\\"]
    )
    additional_rows = [dataset_row, metric_row]

    # Build the first column
    method_column = [
        style_templates["method_column_body"].format(method_name=x) for x in method_names
    ]
    additional_columns = [method_column]

    columns = additional_columns + ori_columns
    rows = [list(row) + [r"\\"] for row in zip(*columns)]
    rows = additional_rows + rows

    if args.contain_table_env:
        column_style = "|".join([f"*{num_metrics}{{c}}"] * len(dataset_names))
        latex_table_head = [
            f"\\begin{{tabular}}{{l|{column_style}}}\n",
            "\\toprule[2pt]",
        ]
else:
    dataset_column = []
    for x in dataset_names:
        blank_cells = [" "] * (num_metrics - 1)
        dataset_cell = [
            style_templates["dataset_column_body"].format(num_metrics=num_metrics, dataset_name=x)
        ]
        dataset_column.extend(blank_cells + dataset_cell)
    metric_column = [
        style_templates["metric_body"].format(metric_name=x) for x in metric_names
    ] * num_datasets
    additional_columns = [dataset_column, metric_column]

    method_row = (
        [style_templates["dataset_head"], style_templates["metric_column_head"]]
        + [style_templates["method_row_body"].format(method_name=x) for x in method_names]
        + [r"\\"]
    )
    additional_rows = [method_row]

    additional_columns = [list(x) for x in zip(*additional_columns)]
    rows = [cells + row + [r"\\"] for cells, row in zip(additional_columns, ori_columns)]
    rows = additional_rows + rows

    if args.contain_table_env:
        column_style = "".join([f"*{{{num_methods}}}{{c}}"])
        latex_table_head = [
            f"\\begin{{tabular}}{{cc|{column_style}}}\n",
            "\\toprule[2pt]",
        ]

if args.contain_table_env:
    latex_table_tail = [
        "\\bottomrule[2pt]\n",
        "\\end{tabular}",
    ]

rows = [arg_head, latex_table_head] + rows + [latex_table_tail]

with open(args.tex_file, mode="w", encoding="utf-8") as f:
    for row in rows:
        f.write("".join(row) + "\n")


================================================
FILE: tools/info_py_to_json.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/3/14
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang
import argparse
import ast
import json
import os
import sys
from importlib import import_module


def validate_py_syntax(filename):
    with open(filename, "r") as f:
        content = f.read()
    try:
        ast.parse(content)
    except SyntaxError as e:
        raise SyntaxError(f"There are syntax errors in config file {filename}: {e}")


def convert_py_to_json(source_config_root, target_config_root):
    if not os.path.isdir(source_config_root):
        raise NotADirectoryError(source_config_root)
    if not os.path.exists(target_config_root):
        os.makedirs(target_config_root)
    else:
        if not os.path.isdir(target_config_root):
            raise NotADirectoryError(target_config_root)

    sys.path.insert(0, source_config_root)
    source_config_files = os.listdir(source_config_root)
    for source_config_file in source_config_files:
        source_config_path = os.path.join(source_config_root, source_config_file)
        if not (os.path.isfile(source_config_path) and source_config_path.endswith(".py")):
            continue
        validate_py_syntax(source_config_path)
        print(source_config_path)

        temp_module_name = os.path.splitext(source_config_file)[0]
        mod = import_module(temp_module_name)

        total_dict = {}
        for name, value in mod.__dict__.items():
            if not name.startswith("_") and isinstance(value, dict):
                total_dict[name] = value

        # delete imported module
        del sys.modules[temp_module_name]

        with open(
            os.path.join(target_config_root, os.path.basename(temp_module_name) + ".json"),
            encoding="utf-8",
            mode="w",
        ) as f:
            json.dump(total_dict, f, indent=2)


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", "--source-py-root", required=True, type=str)
    parser.add_argument("-o", "--target-json-root", required=True, type=str)
    args = parser.parse_args()
    return args


if __name__ == "__main__":
    args = get_args()
    convert_py_to_json(
        source_config_root=args.source_py_root, target_config_root=args.target_json_root
    )


================================================
FILE: tools/readme.md
================================================
# Useful tools

## `append_results.py`

Merge a newly generated npy file with an old npy file into a single new npy file.

```shell
$ python append_results.py --help
usage: append_results.py [-h] --old-npy OLD_NPY --new-npy NEW_NPY [--method-names METHOD_NAMES [METHOD_NAMES ...]]
                         [--dataset-names DATASET_NAMES [DATASET_NAMES ...]] --out-npy OUT_NPY

A simple tool for merging two npy files. Patch the method items corresponding to the `--method-names` and `--dataset-names` of `--new-npy`
into `--old-npy`, and output the whole container to `--out-npy`.

optional arguments:
  -h, --help            show this help message and exit
  --old-npy OLD_NPY
  --new-npy NEW_NPY
  --method-names METHOD_NAMES [METHOD_NAMES ...]
  --dataset-names DATASET_NAMES [DATASET_NAMES ...]
  --out-npy OUT_NPY
```

Use case:

For RGB SOD data, I have already generated an npy file containing the results of a batch of methods: `old_rgb_sod_curves.npy`.
After re-evaluating one method, I obtained a new npy file (the names differ, so nothing is overwritten): `new_rgb_sod_curves.npy`.
Now I want to combine the two results into one file: `finalnew_rgb_sod_curves.npy`.
This can be done with the following command.

```shell
python tools/append_results.py --old-npy output/old_rgb_sod_curves.npy \
                               --new-npy output/new_rgb_sod_curves.npy \
                               --out-npy output/finalnew_rgb_sod_curves.npy
```
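
The merge itself is conceptually simple. As a rough sketch (assuming, as elsewhere in this toolkit, that each npy file stores a `{dataset: {method: metrics}}` dict; the real script additionally filters `--new-npy` by `--method-names` and `--dataset-names`):

```python
import numpy as np

# Illustrative sketch only; see tools/append_results.py for the actual logic.
old = np.load("output/old_rgb_sod_curves.npy", allow_pickle=True).item()
new = np.load("output/new_rgb_sod_curves.npy", allow_pickle=True).item()
for dataset_name, method_infos in new.items():
    old.setdefault(dataset_name, {}).update(method_infos)
np.save("output/finalnew_rgb_sod_curves.npy", old)
```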


## `converter.py`

Export the information in the generated `*_metrics.npy` files as LaTeX table code.

You can write your own config following `examples/converter_config.yaml` in the examples folder to generate exactly the LaTeX table code you need.

```shell
$ python converter.py --help
usage: converter.py [-h] -i RESULT_FILE [RESULT_FILE ...] -o TEX_FILE [-c CONFIG_FILE] [--contain-table-env] [--transpose]

A useful and convenient tool to convert your .npy results into the table code in latex.

optional arguments:
  -h, --help            show this help message and exit
  -i RESULT_FILE [RESULT_FILE ...], --result-file RESULT_FILE [RESULT_FILE ...]
                        The path of the *_metrics.npy file.
  -o TEX_FILE, --tex-file TEX_FILE
                        The path of the exported tex file.
  -c CONFIG_FILE, --config-file CONFIG_FILE
                        The path of the customized config yaml file.
  --contain-table-env   Whether to contain the table env in the exported code.
  --transpose           Whether to transpose the table.
```

An example is shown below.

```shell
$ python tools/converter.py -i output/your_metrics_1.npy output/your_metrics_2.npy -o output/your_metrics.tex -c ./examples/converter_config.yaml --transpose --contain-table-env
```

This command reads data from multiple npy files (if you only have one, simply pass that single file), processes it, and exports the result to the specified tex file. The given config file selects the datasets, metrics, and methods to include.

In addition, the exported table code uses the transposed (vertical) layout and includes the `tabular` environment of the table.
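
For reference, a minimal config of the kind accepted via `-c` might look as follows (illustrative names only; per the loading code in `converter.py`, `metric_names` and `method_names` may be either plain lists or mappings from the names stored in the npy file to the names displayed in the table):

```yaml
dataset_names:   # must match the dataset keys stored in the npy files
  - LFSD
  - NJUD
metric_names:    # mapping form: stored name -> displayed name
  SM: "$S_{m}$"
  MAE: "$M$"
method_names:    # list form: used as-is
  - MethodA
  - MethodB
```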

## `check_path.py`

Check for problems by matching the information in the json config files against the actual paths on your system.

```shell
$ python check_path.py --help
usage: check_path.py [-h] -m METHOD_JSONS [METHOD_JSONS ...] -d DATASET_JSONS [DATASET_JSONS ...]

A simple tool for checking your json config file.

optional arguments:
  -h, --help            show this help message and exit
  -m METHOD_JSONS [METHOD_JSONS ...], --method-jsons METHOD_JSONS [METHOD_JSONS ...]
                        The json file about all methods.
  -d DATASET_JSONS [DATASET_JSONS ...], --dataset-jsons DATASET_JSONS [DATASET_JSONS ...]
```

An example is shown below. Make sure that the file paths after `-m` and `-d` correspond one to one; the code uses `zip` to iterate over the two lists in parallel to pair up the files.

```shell
$ python check_path.py -m ../configs/methods/json/rgb_sod_methods.json ../configs/methods/json/rgbd_sod_methods.json \
                       -d ../configs/datasets/json/rgb_sod.json ../configs/datasets/json/rgbd_sod.json
```

## `info_py_to_json.py`

Convert python-based config files into more portable json files.

```shell
$ python info_py_to_json.py --help
usage: info_py_to_json.py [-h] -i SOURCE_PY_ROOT -o TARGET_JSON_ROOT

optional arguments:
  -h, --help            show this help message and exit
  -i SOURCE_PY_ROOT, --source-py-root SOURCE_PY_ROOT
  -o TARGET_JSON_ROOT, --target-json-root TARGET_JSON_ROOT
```

There are two required options: the input directory `-i` holding the python config files, and the output directory `-o` where the generated json files will be stored.

Each python file in the input directory is imported, and by default every dict object whose name does not start with `_` is collected as the config info of a dataset or a method.

Finally, all of this information is gathered into one complete dict and dumped directly to a json file in the output directory.
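
As a hypothetical example, an input file `rgb_sod.py` containing

```python
# rgb_sod.py (hypothetical input; the dict contents depend on your own config schema)
ECSSD = {"root": "data/ECSSD", "image": "Image", "mask": "Mask"}
_private = {"ignored": True}  # skipped: its name starts with "_"
```

would be exported as `rgb_sod.json` containing only `{"ECSSD": {...}}`.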

## `rename.py`

Batch renaming.

Read through the code before using it, and use it with care to avoid losses caused by overwritten files. An illustrative invocation is sketched below.
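
A minimal sketch of such a call, with purely illustrative patterns and paths (see the `__main__` block of `tools/rename.py` for the template it fills in):

```python
# Hypothetical example: rename "xxx_sal.png" to "xxx.png" in place.
rename_all_files(
    src_pattern=r"(.*)_sal\.png",  # regex matched against the path relative to src_dir
    dst_pattern=r"\1.png",
    src_name="*/*.png",            # glob pattern joined with src_dir
    src_dir="output/preds",
    dst_dir=None,                  # None: modify the original files in place
)
```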


================================================
FILE: tools/rename.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/4/24
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang

import glob
import os
import re
import shutil


def path_join(base_path, sub_path):
    if sub_path.startswith(os.sep):
        sub_path = sub_path[len(os.sep) :]
    return os.path.join(base_path, sub_path)


def rename_all_files(src_pattern, dst_pattern, src_name, src_dir, dst_dir=None):
    """
    :param src_pattern: 匹配原始数据名字的正则表达式
    :param dst_pattern: 对应的修改后的字符式
    :param src_dir: 存放原始数据的文件夹路径,可以组合src_name来构造路径模式,使用glob进行数据搜索
    :param src_name: glob类型的模式
    :param dst_dir: 存放修改后数据的文件夹路径,默认为None,表示直接修改原始数据
    """
    assert os.path.isdir(src_dir)

    if dst_dir is None:
        dst_dir = src_dir
        rename_func = os.replace
    else:
        assert os.path.isdir(dst_dir)
        if dst_dir == src_dir:
            rename_func = os.replace
        else:
            rename_func = shutil.copy
    print(f"将会使用 {rename_func.__name__} 来修改数据")

    src_dir = os.path.abspath(src_dir)
    dst_dir = os.path.abspath(dst_dir)
    src_data_paths = glob.glob(path_join(src_dir, src_name))

    print(f"开始替换 {src_dir} 中的数据")
    for idx, src_data_path in enumerate(src_data_paths, start=1):
        src_name_w_dir_name = src_data_path[len(src_dir) + 1 :]
        dst_name_w_dir_name = re.sub(src_pattern, repl=dst_pattern, string=src_name_w_dir_name)
        if dst_name_w_dir_name == src_name_w_dir_name:
            continue
        dst_data_path = path_join(dst_dir, dst_name_w_dir_name)

        dst_data_dir = os.path.dirname(dst_data_path)
        if not os.path.exists(dst_data_dir):
            print(f"{idx}: {dst_data_dir} 不存在,新建一下")
            os.makedirs(dst_data_dir)
        rename_func(src=src_data_path, dst=dst_data_path)
        print(f"{src_data_path} -> {dst_data_path}")

    print("OK...")


if __name__ == "__main__":
    rename_all_files(
        src_pattern=r"",
        dst_pattern="",
        src_name="*/*.png",
        src_dir="",
        dst_dir="",
    )


================================================
FILE: utils/__init__.py
================================================


================================================
FILE: utils/generate_info.py
================================================
# -*- coding: utf-8 -*-

import json
import os
from collections import OrderedDict

from matplotlib import colors

# matplotlib.colors.cnames provides 148 named colors in total (max = 148)
_COLOR_Genarator = iter(
    sorted(
        [
            color
            for name, color in colors.cnames.items()
            if name not in ["red", "white"] or not name.startswith("light") or "gray" in name
        ]
    )
)


def curve_info_generator():
    # TODO: Line styles `-` and `--` are currently assigned to methods alternately. When a method
    #  only has results on some datasets, on the datasets missing that method the two adjacent
    #  methods may end up with the same style.
    line_style_flag = True

    def _template_generator(
        method_info: dict, method_name: str, line_color: str = None, line_width: int = 2
    ) -> dict:
        nonlocal line_style_flag
        template_info = dict(
            path_dict=method_info,
            curve_setting=dict(
                line_style="-" if line_style_flag else "--",
                line_label=method_name,
                line_width=line_width,
            ),
        )
        if line_color is not None:
            template_info["curve_setting"]["line_color"] = line_color
        else:
            template_info["curve_setting"]["line_color"] = next(_COLOR_Genarator)

        line_style_flag = not line_style_flag
        return template_info

    return _template_generator


def simple_info_generator():
    def _template_generator(method_info: dict, method_name: str) -> dict:
        template_info = dict(path_dict=method_info, label=method_name)
        return template_info

    return _template_generator


def get_valid_elements(source: list, include_elements: list, exclude_elements: list) -> list:
    targeted_elements = []
    if include_elements and not exclude_elements:  # only include_elements is not [] and not None
        for element in include_elements:
            assert element in source, element
            targeted_elements.append(element)

    elif not include_elements and exclude_elements:  # only exclude_elements is not [] and not None
        for element in exclude_elements:
            assert element in source, element
        for element in source:
            if element not in exclude_elements:
                targeted_elements.append(element)

    elif not include_elements and not exclude_elements:
        targeted_elements = source

    else:
        raise ValueError(
            f"include_elements: {include_elements}\nexclude_elements: {exclude_elements}"
        )

    if not targeted_elements:
        print(source, include_elements, exclude_elements)
        raise ValueError("targeted_elements must be a valid and non-empty list.")
    return targeted_elements


def get_methods_info(
    methods_info_jsons: list,
    include_methods: list,
    exclude_methods: list,
    *,
    for_drawing: bool = False,
    our_name: str = "",
) -> OrderedDict:
    """
    在json文件中存储的对应方法的字典的键值会被直接用于绘图

    :param methods_info_jsons: 保存方法信息的json文件,支持多个文件组合使用,按照输入的顺序依此读取
    :param for_drawing: 是否用于绘制曲线图,True会补充一些绘图信息
    :param our_name: 在绘图时,可以通过指定our_name来使用红色加粗实线强调特定方法的曲线
    :param include_methods: 仅返回列表中指定的方法的信息,为None时,返回所有
    :param exclude_methods: 仅返回列表中指定的方法的信息,为None时,返回所有,与include_datasets必须仅有一个非None
    :return: methods_full_info
    """
    if not isinstance(methods_info_jsons, (list, tuple)):
        methods_info_jsons = [methods_info_jsons]

    methods_info = {}
    for f in methods_info_jsons:
        if not os.path.isfile(f):
            raise FileNotFoundError(f"{f} cannot be found!")

        with open(f, encoding="utf-8", mode="r") as fp:
            methods_info.update(json.load(fp, object_hook=OrderedDict))  # load preserving order

    if our_name:
        assert our_name in methods_info, f"{our_name} is not in json file."

    targeted_methods = get_valid_elements(
        source=list(methods_info.keys()),
        include_elements=include_methods,
        exclude_elements=exclude_methods,
    )
    if our_name and our_name in targeted_methods:
        targeted_methods.pop(targeted_methods.index(our_name))
        targeted_methods.insert(0, our_name)

    if for_drawing:
        info_generator = curve_info_generator()
    else:
        info_generator = simple_info_generator()

    methods_full_info = []
    for method_name in targeted_methods:
        method_path = methods_info[method_name]

        if for_drawing and our_name and our_name == method_name:
            method_info = info_generator(method_path, method_name, line_color="red", line_width=3)
        else:
            method_info = info_generator(method_path, method_name)
        methods_full_info.append((method_name, method_info))
    return OrderedDict(methods_full_info)
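# A usage sketch (illustrative paths and names, not part of the original file):
#   methods_info = get_methods_info(
#       methods_info_jsons=["configs/methods/json/rgb_sod_methods.json"],
#       include_methods=None,
#       exclude_methods=None,
#       for_drawing=True,
#       our_name="OurMethod",  # must be a key in the json file
#   )
# Each value then holds `path_dict` plus a `curve_setting` dict
# (line_color/line_style/line_label/line_width) ready for CurveDrawer.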


def get_datasets_info(
    datastes_info_json: str, include_datasets: list, exclude_datasets: list
) -> OrderedDict:
    """
    在json文件中存储的所有数据集的信息会被直接导出到一个字典中

    :param datastes_info_json: 保存方法信息的json文件
    :param include_datasets: 指定读取信息的数据集名字,为None时,读取所有
    :param exclude_datasets: 排除读取信息的数据集名字,为None时,读取所有,与include_datasets必须仅有一个非None
    :return: datastes_full_info
    """

    assert os.path.isfile(datastes_info_json), datastes_info_json
    with open(datastes_info_json, encoding="utf-8", mode="r") as f:
        datasets_info = json.load(f, object_hook=OrderedDict)  # 有序载入

    targeted_datasets = get_valid_elements(
        source=list(datasets_info.keys()),
        include_elements=include_datasets,
        exclude_elements=exclude_datasets,
    )

    datasets_full_info = []
    for dataset_name in targeted_datasets:
        data_path = datasets_info[dataset_name]

        datasets_full_info.append((dataset_name, data_path))
    return OrderedDict(datasets_full_info)


================================================
FILE: utils/misc.py
================================================
# -*- coding: utf-8 -*-
import glob
import os
import re

import cv2
import numpy as np


def get_ext(path_list):
    ext_list = list(set([os.path.splitext(p)[1] for p in path_list]))
    if len(ext_list) != 1:
        if ".png" in ext_list:
            ext = ".png"
        elif ".jpg" in ext_list:
            ext = ".jpg"
        elif ".bmp" in ext_list:
            ext = ".bmp"
        else:
            raise NotImplementedError
        print(f"预测文件夹中包含多种扩展名,这里仅使用{ext}")
    else:
        ext = ext_list[0]
    return ext


def get_name_list_and_suffix(data_path: str) -> tuple:
    name_list = []
    if os.path.isfile(data_path):
        print(f" ++>> {data_path} is a file. <<++ ")
        with open(data_path, mode="r", encoding="utf-8") as file:
            line = file.readline()
            while line:
                img_name = os.path.basename(line.split()[0])
                file_ext = os.path.splitext(img_name)[1]
                name_list.append(os.path.splitext(img_name)[0])
                line = file.readline()
        if file_ext == "":
            # default to png
            file_ext = ".png"
    else:
        print(f" ++>> {data_path} is a folder. <<++ ")
        data_list = os.listdir(data_path)
        file_ext = get_ext(data_list)
        name_list = [os.path.splitext(f)[0] for f in data_list if f.endswith(file_ext)]
    name_list = list(set(name_list))
    return name_list, file_ext


def get_name_list(data_path: str, name_prefix: str = "", name_suffix: str = "") -> list:
    if os.path.isfile(data_path):
        assert data_path.endswith((".txt", ".lst"))
        data_list = []
        with open(data_path, encoding="utf-8", mode="r") as f:
            line = f.readline().strip()
            while line:
                data_list.append(line)
                line = f.readline().strip()
    else:
        data_list = os.listdir(data_path)

    name_list = data_list
    if not name_prefix and not name_suffix:
        name_list = [os.path.splitext(f)[0] for f in name_list]
    else:
        name_list = [
            f[len(name_prefix) : -len(name_suffix)]
            for f in name_list
            if f.startswith(name_prefix) and f.endswith(name_suffix)
        ]

    name_list = list(set(name_list))
    return name_list


def get_number_from_tail(string):
    tail_number = re.findall(pattern=r"\d+$", string=string)[0]
    return int(tail_number)


def get_name_with_group_list(
    data_path: str,
    name_prefix: str = "",
    name_suffix: str = "",
    start_idx: int = 0,
    end_idx: int = None,
    sep: str = "<sep>",
):
    """get file names with the group name

    Args:
        data_path (str): The path of data.
        name_prefix (str, optional): The prefix of the file name. Defaults to "".
        name_suffix (str, optional): The suffix of the file name. Defaults to "".
        start_idx (int, optional): The index of the first frame in each group. Defaults to 0, i.e. no frames are skipped at the start.
        end_idx (int, optional): The index of the last frame in each group. Defaults to None, i.e. no frames are skipped at the end.
        sep (str, optional): The returned name is a string containing group_name and file_name separated by `sep`.

    Raises:
        NotImplementedError: Undefined.

    Returns:
        list: The list of names, each combining the group name and the file name with `sep`.
    """
    name_list = []
    if os.path.isfile(data_path):
        # such a setting has not been encountered yet
        raise NotImplementedError
    else:
        if "*" in data_path:  # for VCOD
            group_paths = glob.glob(data_path, recursive=False)
            group_name_start_idx = data_path.find("*")
            for group_path in group_paths:
                group_name = group_path[group_name_start_idx:].split(os.sep)[0]

                file_names = sorted(
                    [
                        n[len(name_prefix) : -len(name_suffix)]
                        for n in os.listdir(group_path)
                        if n.startswith(name_prefix) and n.endswith(name_suffix)
                    ],
                    key=lambda item: get_number_from_tail(item),
                )

                for file_name in file_names[start_idx:end_idx]:
                    name_list.append(f"{group_name}{sep}{file_name}")

        else:  # for CoSOD
            group_names = os.listdir(data_path)
            group_paths = [os.path.join(data_path, n) for n in group_names]
            for group_path in group_paths:
                group_name = os.path.basename(group_path)

                file_names = sorted(
                    [
                        n[len(name_prefix) : -len(name_suffix)]
                        for n in os.listdir(group_path)
                        if n.startswith(name_prefix) and n.endswith(name_suffix)
                    ],
                    key=lambda item: get_number_from_tail(item),
                )

                for file_name in file_names[start_idx:end_idx]:
                    name_list.append(f"{group_name}{sep}{file_name}")
    name_list = sorted(set(name_list))  # deduplicate and sort
    return name_list


def get_list_with_suffix(dataset_path: str, suffix: str):
    name_list = []
    if os.path.isfile(dataset_path):
        print(f" ++>> {dataset_path} is a file. <<++ ")
        with open(dataset_path, mode="r", encoding="utf-8") as file:
            line = file.readline()
            while line:
                img_name = os.path.basename(line.split()[0])
                name_list.append(os.path.splitext(img_name)[0])
                line = file.readline()
    else:
        print(f" ++>> {dataset_path} is a folder. <<++ ")
        name_list = [
            os.path.splitext(f)[0] for f in os.listdir(dataset_path) if f.endswith(suffix)
        ]
    name_list = list(set(name_list))
    return name_list


def make_dir(path):
    if not os.path.exists(path):
        print(f"`{path}` does not exist,we will create it.")
        os.makedirs(path)
    else:
        assert os.path.isdir(path), f"`{path}` should be a folder"
        print(f"`{path}`已存在")


def imread_with_checking(path, for_color: bool = True) -> np.ndarray:
    assert os.path.exists(path=path) and os.path.isfile(path=path), path
    if for_color:
        data = cv2.imread(path, flags=cv2.IMREAD_COLOR)
        data = cv2.cvtColor(data, cv2.COLOR_BGR2RGB)
    else:
        data = cv2.imread(path, flags=cv2.IMREAD_GRAYSCALE)
    return data


def get_gt_pre_with_name(
    img_name: str,
    gt_root: str,
    pre_root: str,
    *,
    gt_prefix: str = "",
    pre_prefix: str = "",
    gt_suffix: str = ".png",
    pre_suffix: str = "",
    to_normalize: bool = False,
):
    img_path = os.path.join(pre_root, pre_prefix + img_name + pre_suffix)
    gt_path = os.path.join(gt_root, gt_prefix + img_name + gt_suffix)

    pre = imread_with_checking(img_path, for_color=False)
    gt = imread_with_checking(gt_path, for_color=False)

    if pre.shape != gt.shape:
        pre = cv2.resize(pre, dsize=gt.shape[::-1], interpolation=cv2.INTER_LINEAR).astype(
            np.uint8
        )

    if to_normalize:
        gt = normalize_array(gt, to_binary=True, max_eq_255=True)
        pre = normalize_array(pre, to_binary=False, max_eq_255=True)
    return gt, pre


def get_gt_pre_with_name_and_group(
    img_name: str,
    gt_root: str,
    pre_root: str,
    *,
    gt_prefix: str = "",
    pre_prefix: str = "",
    gt_suffix: str = ".png",
    pre_suffix: str = "",
    to_normalize: bool = False,
    interpolation: int = cv2.INTER_CUBIC,
    sep: str = "<sep>",
):
    group_name, file_name = img_name.split(sep)
    if "*" in gt_root:
        gt_root = gt_root.replace("*", group_name)
    else:
        gt_root = os.path.join(gt_root, group_name)
    if "*" in pre_root:
        pre_root = pre_root.replace("*", group_name)
    else:
        pre_root = os.path.join(pre_root, group_name)
    img_path = os.path.join(pre_root, pre_prefix + file_name + pre_suffix)
    gt_path = os.path.join(gt_root, gt_prefix + file_name + gt_suffix)

    pre = imread_with_checking(img_path, for_color=False)
    gt = imread_with_checking(gt_path, for_color=False)

    if pre.shape != gt.shape:
        pre = cv2.resize(pre, dsize=gt.shape[::-1], interpolation=interpolation).astype(np.uint8)

    if to_normalize:
        gt = normalize_array(gt, to_binary=True, max_eq_255=True)
        pre = normalize_array(pre, to_binary=False, max_eq_255=True)
    return gt, pre


def normalize_array(
    data: np.ndarray, to_binary: bool = False, max_eq_255: bool = True
) -> np.ndarray:
    if max_eq_255:
        data = data / 255
    # else: data is in [0, 1]
    if to_binary:
        data = (data > 0.5).astype(np.uint8)
    else:
        if data.max() != data.min():
            data = (data - data.min()) / (data.max() - data.min())
        data = data.astype(np.float32)
    return data


def get_valid_key_name(data_dict: dict, key_name: str) -> str:
    if data_dict.get(key_name.lower(), "keyerror") == "keyerror":
        key_name = key_name.upper()
    else:
        key_name = key_name.lower()
    return key_name


def get_target_key(target_dict: dict, key: str) -> str:
    """
    from the keys of the target_dict, get the valid key name corresponding to the `key`
    if there is not a valid name, return None
    """
    target_keys = {k.lower(): k for k in target_dict.keys()}
    return target_keys.get(key.lower(), None)


def colored_print(msg: str, mode: str = "general"):
    """
    为不同类型的字符串消息的打印提供一些显示格式的定制

    :param msg: 要输出的字符串消息
    :param mode: 对应的字符串打印模式,目前支持 general/warning/error
    :return:
    """
    if mode == "general":
        msg = msg
    elif mode == "warning":
        msg = f"\033[5;31m{msg}\033[0m"
    elif mode == "error":
        msg = f"\033[1;31m{msg}\033[0m"
    else:
        raise ValueError(f"{mode} is invalid mode.")
    print(msg)


class ColoredPrinter:
    """
    为不同类型的字符串消息的打印提供一些显示格式的定制
    """

    @staticmethod
    def info(msg):
        print(msg)

    @staticmethod
    def warn(msg):
        msg = f"\033[5;31m{msg}\033[0m"
        print(msg)

    @staticmethod
    def error(msg):
        msg = f"\033[1;31m{msg}\033[0m"
        print(msg)


def update_info(source_info: dict, new_info: dict):
    for name, info in source_info.items():
        if name in new_info:
            if isinstance(info, dict):
                update_info(source_info=info, new_info=new_info[name])
            else:  # int, float, list, tuple
                info = new_info[name]
            source_info[name] = info
    return source_info
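# For example (illustrative):
#   update_info({"a": 1, "b": {"c": 2}}, {"b": {"c": 3}, "d": 4})
#   -> {"a": 1, "b": {"c": 3}}; keys absent from source_info (here "d") are ignored.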


================================================
FILE: utils/print_formatter.py
================================================
# -*- coding: utf-8 -*-

from tabulate import tabulate


def print_formatter(
    results: dict, method_name_length=10, metric_name_length=5, metric_value_length=5
):
    dataset_regions = []
    for dataset_name, dataset_metrics in results.items():
        dataset_head_row = f" Dataset: {dataset_name} "
        dataset_region = [dataset_head_row]
        for method_name, metric_info in dataset_metrics.items():
            showed_method_name = clip_string(
                method_name, max_length=method_name_length, mode="left"
            )
            method_row_head = f"{showed_method_name} "
            method_row_body = []
            for metric_name, metric_value in metric_info.items():
                showed_metric_name = clip_string(
                    metric_name, max_length=metric_name_length, mode="right"
                )
                showed_value_string = clip_string(
                    str(metric_value), max_length=metric_value_length, mode="left"
                )
                method_row_body.append(f"{showed_metric_name}: {showed_value_string}")
            method_row = method_row_head + ", ".join(method_row_body)
            dataset_region.append(method_row)
        dataset_region_string = "\n".join(dataset_region)
        dataset_regions.append(dataset_region_string)
    dividing_line = "\n" + "-" * len(dataset_region[-1]) + "\n"  # 直接使用最后一个数据集区域的最后一行数据的长度作为分割线的长度
    formatted_string = dividing_line.join(dataset_regions)
    return formatted_string


def clip_string(string: str, max_length: int, padding_char: str = " ", mode: str = "left"):
    assert isinstance(max_length, int), f"{max_length} must be `int` type"

    real_length = len(string)
    if real_length <= max_length:
        padding_length = max_length - real_length
        if mode == "left":
            clipped_string = string + f"{padding_char}" * padding_length
        elif mode == "center":
            left_padding_str = f"{padding_char}" * (padding_length // 2)
            right_padding_str = f"{padding_char}" * (padding_length - padding_length // 2)
            clipped_string = left_padding_str + string + right_padding_str
        elif mode == "right":
            clipped_string = f"{padding_char}" * padding_length + string
        else:
            raise NotImplementedError
    else:
        clipped_string = string[:max_length]

    return clipped_string


def formatter_for_tabulate(
    results: dict,
    method_names: tuple,
    dataset_names: tuple,
    dataset_titlefmt: str = "Dataset: {}",
    method_name_length=None,
    metric_value_length=None,
    tablefmt="github",
):
    """
    tabulate format:

    ::

        table = [["spam",42],["eggs",451],["bacon",0]]
        headers = ["item", "qty"]
        print(tabulate(table, headers, tablefmt="github"))

        | item   | qty   |
        |--------|-------|
        | spam   | 42    |
        | eggs   | 451   |
        | bacon  | 0     |

    What this function does:
        Build a tabulate-formatted table for each dataset and join the tables with newlines.
    """
    all_tables = []
    for dataset_name in dataset_names:
        dataset_metrics = results[dataset_name]
        all_tables.append(dataset_titlefmt.format(dataset_name))

        table = []
        headers = ["methods"]
        for method_name in method_names:
            metric_info = dataset_metrics.get(method_name)
            if metric_info is None:
                continue

            if method_name_length:
                method_name = clip_string(method_name, max_length=method_name_length, mode="left")
            method_row = [method_name]

            for metric_name, metric_value in metric_info.items():
                if metric_value_length:
                    metric_value = clip_string(
                        str(metric_value), max_length=metric_value_length, mode="center"
                    )
                if metric_name not in headers:
                    headers.append(metric_name)
                method_row.append(metric_value)
            table.append(method_row)
        all_tables.append(tabulate(table, headers, tablefmt=tablefmt))

    formatted_string = "\n".join(all_tables)
    return formatted_string


================================================
FILE: utils/recorders/__init__.py
================================================
# -*- coding: utf-8 -*-
from .curve_drawer import CurveDrawer
from .excel_recorder import MetricExcelRecorder
from .metric_recorder import (
    BINARY_METRIC_MAPPING,
    GRAYSCALE_METRICS,
    SUPPORTED_METRICS,
    BinaryMetricRecorder,
    GrayscaleMetricRecorder,
    GroupedMetricRecorder,
)
from .txt_recorder import TxtRecorder


================================================
FILE: utils/recorders/curve_drawer.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/1/4
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang
import math
import os

import matplotlib.pyplot as plt
import numpy as np


class CurveDrawer(object):
    def __init__(
        self,
        row_num,
        num_subplots,
        style_cfg=None,
        ncol_of_legend=1,
        separated_legend=False,
        sharey=False,
    ):
        """A better wrapper of matplotlib for me.

        Args:
            row_num (int): Number of rows.
            num_subplots (int): Number of subplots.
            style_cfg (str, optional): Style yaml file path for matplotlib. Defaults to None.
            ncol_of_legend (int, optional): Number of columns of the legend. Defaults to 1.
            separated_legend (bool, optional): Use the separated legend. Defaults to False.
            sharey (bool, optional): Use a shared y-axis. Defaults to False.
        """
        if style_cfg is not None:
            assert os.path.isfile(style_cfg)
            plt.style.use(style_cfg)

        self.ncol_of_legend = ncol_of_legend
        self.separated_legend = separated_legend
        if self.separated_legend:
            num_subplots += 1
        self.num_subplots = num_subplots
        self.sharey = sharey

        fig, axes = plt.subplots(
            nrows=row_num, ncols=math.ceil(self.num_subplots / row_num), sharey=self.sharey
        )
        self.fig = fig
        self.axes = axes
        if isinstance(self.axes, np.ndarray):
            self.axes = self.axes.flatten()
        else:
            self.axes = [self.axes]

        self.init_subplots()
        self.dummy_data = {}

    def init_subplots(self):
        for ax in self.axes:
            ax.set_axis_off()

    def plot_at_axis(self, idx, method_curve_setting, x_data, y_data):
        """
        :param method_curve_setting:  {
                "line_color": "color"(str),
                "line_style": "style"(str),
                "line_label": "label"(str),
                "line_width": width(int),
            }
        """
        assert isinstance(idx, int) and 0 <= idx < self.num_subplots
        self.axes[idx].plot(
            x_data,
            y_data,
            linewidth=method_curve_setting["line_width"],
            label=method_curve_setting["line_label"],
            color=method_curve_setting["line_color"],
            linestyle=method_curve_setting["line_style"],
        )

        if self.separated_legend:
            self.dummy_data[method_curve_setting["line_label"]] = method_curve_setting

    def set_axis_property(
        self, idx, title=None, x_label=None, y_label=None, x_ticks=None, y_ticks=None
    ):
        ax = self.axes[idx]

        ax.set_axis_on()

        # give plot a title
        ax.set_title(title)

        # make axis labels
        ax.set_xlabel(x_label)
        ax.set_ylabel(y_label)

        # settings for the axis ticks
        x_ticks = [] if x_ticks is None else x_ticks
        y_ticks = [] if y_ticks is None else y_ticks
        ax.set_xlim((min(x_ticks), max(x_ticks)))
        ax.set_ylim((min(y_ticks), max(y_ticks)))
        ax.set_xticks(x_ticks)
        ax.set_yticks(y_ticks)
        ax.set_xticklabels(labels=[f"{x:.2f}" for x in x_ticks])
        ax.set_yticklabels(labels=[f"{y:.2f}" for y in y_ticks])

    def _plot(self):
        if self.sharey:
            for ax in self.axes[1:]:
                ax.set_ylabel(None)
                ax.tick_params(bottom=True, top=False, left=False, right=False)

        if self.separated_legend:
            # settings for the legend axis
            for method_label, method_info in self.dummy_data.items():
                self.plot_at_axis(
                    idx=self.num_subplots - 1, method_curve_setting=method_info, x_data=0, y_data=0
                )
            ax = self.axes[self.num_subplots - 1]
            ax.set_axis_off()
            ax.legend(loc=10, ncol=self.ncol_of_legend, facecolor="white", edgecolor="white")
        else:
            # settings for the legends of all common subplots.
            for ax in self.axes:
                # loc=0 would let matplotlib choose the best position automatically
                ax.legend(loc=3, ncol=self.ncol_of_legend, facecolor="white", edgecolor="white")

    def show(self):
        self._plot()
        plt.tight_layout()
        plt.show()

    def save(self, path):
        self._plot()
        plt.tight_layout()
        plt.savefig(path)
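
# A usage sketch (illustrative values, not part of the original file):
#   drawer = CurveDrawer(row_num=1, num_subplots=2, separated_legend=True)
#   drawer.plot_at_axis(
#       0,
#       dict(line_color="blue", line_style="-", line_label="MethodA", line_width=2),
#       x_data=recall, y_data=precision,  # e.g. 1D arrays from the npy curve files
#   )
#   drawer.set_axis_property(
#       0, title="PR curve", x_label="Recall", y_label="Precision",
#       x_ticks=np.linspace(0, 1, 6), y_ticks=np.linspace(0, 1, 6),
#   )
#   drawer.save("pr_curves.png")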


================================================
FILE: utils/recorders/excel_recorder.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/1/3
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang

import os
import re

from openpyxl import Workbook, load_workbook
from openpyxl.utils import get_column_letter
from openpyxl.worksheet.worksheet import Worksheet


# Thanks:
# - Python_Openpyxl: https://www.cnblogs.com/programmer-tlh/p/10461353.html
# - Python之re模块: https://www.cnblogs.com/shenjianping/p/11647473.html
class _BaseExcelRecorder(object):
    def __init__(self, xlsx_path: str):
        """
        提供写xlsx文档功能的基础类。主要基于openpyxl实现了一层更方便的封装。

        :param xlsx_path: xlsx文档的路径。
        """
        self.xlsx_path = xlsx_path
        if not os.path.exists(self.xlsx_path):
            print("We have created a new excel file!!!")
            self._initial_xlsx()
        else:
            print("Excel file has existed!")

    def _initial_xlsx(self):
        Workbook().save(self.xlsx_path)

    def load_sheet(self, sheet_name: str):
        wb = load_workbook(self.xlsx_path)
        if sheet_name not in wb.sheetnames:
            wb.create_sheet(title=sheet_name, index=0)
        sheet = wb[sheet_name]

        return wb, sheet

    def append_row(self, sheet: Worksheet, row_data):
        assert isinstance(row_data, (tuple, list))
        sheet.append(row_data)

    def insert_row(self, sheet: Worksheet, row_data, row_id, min_col=1, interval=0):
        assert isinstance(row_id, int) and isinstance(min_col, int) and row_id > 0 and min_col > 0
        assert isinstance(row_data, (tuple, list)), row_data

        num_elements = len(row_data)
        row_data = iter(row_data)
        for row in sheet.iter_rows(
            min_row=row_id,
            max_row=row_id,
            min_col=min_col,
            max_col=min_col + (interval + 1) * (num_elements - 1),
        ):
            for i, cell in enumerate(row):
                if i % (interval + 1) == 0:
                    sheet.cell(row=row_id, column=cell.column, value=next(row_data))

    @staticmethod
    def merge_region(sheet: Worksheet, min_row, max_row, min_col, max_col):
        assert max_row >= min_row > 0 and max_col >= min_col > 0

        merged_region = (
            f"{get_column_letter(min_col)}{min_row}:{get_column_letter(max_col)}{max_row}"
        )
        sheet.merge_cells(merged_region)

    @staticmethod
    def get_col_id_with_row_id(sheet: Worksheet, col_name: str, row_id):
        """
        从指定行中寻找特定的列名,并返回对应的列序号
        """
        assert isinstance(row_id, int) and row_id > 0

        for cell in sheet[row_id]:
            if cell.value == col_name:
                return cell.column
        raise ValueError(f"In row {row_id}, there is not the column {col_name}!")

    def get_row_id_with_col_name(self, sheet: Worksheet, row_name: str, col_name: str):
        """
        从指定列名字的一列中寻找指定行,返回对应的row_id, col_id, is_new_row
        """
        is_new_row = True
        col_id = self.get_col_id_with_row_id(sheet=sheet, col_name=col_name, row_id=1)

        row_id = 0
        for cell in sheet[get_column_letter(col_id)]:
            row_id = cell.row
            if cell.value == row_name:
                return (row_id, col_id), not is_new_row
        return (row_id + 1, col_id), is_new_row

    @staticmethod
    def get_row_id_with_col_id(sheet: Worksheet, row_name: str, col_id: int):
        """
        从指定序号的一列中寻找指定行
        """
        assert isinstance(col_id, int) and col_id > 0

        is_new_row = True
        row_id = 0
        for cell in sheet[get_column_letter(col_id)]:
            row_id = cell.row
            if cell.value == row_name:
                return row_id, not is_new_row
        return row_id + 1, is_new_row

    @staticmethod
    def format_string_with_config(string: str, repalce_config: dict = None):
        assert repalce_config is not None
        if repalce_config.get("lower"):
            string = string.lower()
        elif repalce_config.get("upper"):
            string = string.upper()
        elif repalce_config.get("title"):
            string = string.title()

        sub_rule = repalce_config.get("replace")
        if sub_rule:
            string = re.sub(pattern=sub_rule[0], repl=sub_rule[1], string=string)
        return string


class MetricExcelRecorder(_BaseExcelRecorder):
    def __init__(
        self,
        xlsx_path: str,
        sheet_name: str = None,
        repalce_config=None,
        row_header=None,
        dataset_names=None,
        metric_names=None,
    ):
        """
        向xlsx文档写数据的类

        :param xlsx_path: 对应的xlsx文档路径
        :param sheet_name: 要写入数据对应的sheet名字
            默认为 `results`
        :param repalce_config: 用于替换对应数据字典的键的模式,会被用于re.sub来进行替换
            默认为 dict(lower=True, replace=(r"[_-]", ""))
        :param row_header: 用于指定表格工作表左上角的内容,这里默认为 `["methods", "num_data"]`
        :param dataset_names: 对应的数据集名称列表
            默认为rgb sod的数据集合 ["pascals", "ecssd", "hkuis", "dutste", "dutomron"]
        :param metric_names: 对应指标名称列表
            默认为 ["smeasure","wfmeasure","mae","adpfm","meanfm","maxfm","adpem","meanem","maxem"]
        """
        super().__init__(xlsx_path=xlsx_path)
        if sheet_name is None:
            sheet_name = "results"

        if repalce_config is None:
            self.repalce_config = dict(lower=True, replace=(r"[_-]", ""))
        else:
            self.repalce_config = repalce_config

        if row_header is None:
            row_header = ["methods", "num_data"]
        self.row_header = [
            self.format_string_with_config(s, self.repalce_config) for s in row_header
        ]
        if dataset_names is None:
            dataset_names = ["pascals", "ecssd", "hkuis", "dutste", "dutomron"]
        self.dataset_names = [
            self.format_string_with_config(s, self.repalce_config) for s in dataset_names
        ]
        if metric_names is None:
            metric_names = [
                "smeasure",
                "wfmeasure",
                "mae",
                "adpfm",
                "meanfm",
                "maxfm",
                "adpem",
                "meanem",
                "maxem",
            ]
        self.metric_names = [
            self.format_string_with_config(s, self.repalce_config) for s in metric_names
        ]

        self.sheet_name = sheet_name
        self._initial_table()

    def _initial_table(self):
        wb, sheet = self.load_sheet(sheet_name=self.sheet_name)

        # insert the row_header
        self.insert_row(sheet=sheet, row_data=self.row_header, row_id=1, min_col=1)
        # merge the cells of the row_header
        for col_id in range(len(self.row_header)):
            self.merge_region(
                sheet=sheet, min_row=1, max_row=2, min_col=col_id + 1, max_col=col_id + 1
            )

        # insert the dataset names
        self.insert_row(
            sheet=sheet,
            row_data=self.dataset_names,
            row_id=1,
            min_col=len(self.row_header) + 1,
            interval=len(self.metric_names) - 1,
        )

        # insert the metric names
        start_col = len(self.row_header) + 1
        for i in range(len(self.dataset_names)):
            self.insert_row(
                sheet=sheet,
                row_data=self.metric_names,
                row_id=2,
                min_col=start_col + i * len(self.metric_names),
            )
        wb.save(self.xlsx_path)

    def _format_row_data(self, row_data: dict) -> list:
        row_data = {
            self.format_string_with_config(k, self.repalce_config): v for k, v in row_data.items()
        }
        return [row_data[n] for n in self.metric_names]

    def __call__(self, row_data: dict, dataset_name: str, method_name: str):
        dataset_name = self.format_string_with_config(dataset_name, self.repalce_config)
        assert (
            dataset_name in self.dataset_names
        ), f"{dataset_name} is not contained in {self.dataset_names}"

        # 1. load the worksheet
        wb, sheet = self.load_sheet(sheet_name=self.sheet_name)
        # 2. check whether method_name already exists: if so, locate its row/column; otherwise use a new row
        dataset_col_start_id = self.get_col_id_with_row_id(
            sheet=sheet, col_name=dataset_name, row_id=1
        )
        (method_row_id, method_col_id), is_new_row = self.get_row_id_with_col_name(
            sheet=sheet, row_name=method_name, col_name="methods"
        )
        # 3. insert the method name at the corresponding position
        if is_new_row:
            sheet.cell(row=method_row_id, column=method_col_id, value=method_name)
        # 4. format the metric values and insert them into the sheet
        row_data = self._format_row_data(row_data=row_data)
        self.insert_row(
            sheet=sheet, row_data=row_data, row_id=method_row_id, min_col=dataset_col_start_id
        )
        # 5. save the workbook
        wb.save(self.xlsx_path)
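
# A usage sketch (illustrative values, not part of the original file):
#   recorder = MetricExcelRecorder(xlsx_path="output/results.xlsx")
#   recorder(
#       row_data={"smeasure": 0.9, "wfmeasure": 0.85, "mae": 0.03,
#                 "adpfm": 0.88, "meanfm": 0.87, "maxfm": 0.89,
#                 "adpem": 0.92, "meanem": 0.91, "maxem": 0.93},
#       dataset_name="ECSSD",   # normalized to "ecssd", one of the default dataset_names
#       method_name="MethodA",
#   )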


================================================
FILE: utils/recorders/metric_recorder.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/1/4
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang
from collections import OrderedDict

import numpy as np
import py_sod_metrics


def ndarray_to_basetype(data):
    """
    将单独的ndarray,或者tuple,list或者dict中的ndarray转化为基本数据类型,
    即列表(.tolist())和python标量
    """

    def _to_list_or_scalar(item):
        listed_item = item.tolist()
        if isinstance(listed_item, list) and len(listed_item) == 1:
            listed_item = listed_item[0]
        return listed_item

    if isinstance(data, (tuple, list)):
        results = [_to_list_or_scalar(item) for item in data]
    elif isinstance(data, dict):
        results = {k: _to_list_or_scalar(item) for k, item in data.items()}
    else:
        assert isinstance(data, np.ndarray)
        results = _to_list_or_scalar(data)
    return results


def round_w_zero_padding(x, bit_width):
    x = str(round(x, bit_width))
    x += "0" * (bit_width - len(x.split(".")[-1]))
    return x


INDIVADUAL_METRIC_MAPPING = {
    "mae": py_sod_metrics.MAE,
    # "fm": py_sod_metrics.Fmeasure,
    "em": py_sod_metrics.Emeasure,
    "sm": py_sod_metrics.Smeasure,
    "wfm": py_sod_metrics.WeightedFmeasure,
    "msiou": py_sod_metrics.MSIoU,
}

# fmt: off
gray_metric_kwargs = dict(with_dynamic=True, with_adaptive=True, with_binary=False, sample_based=True)
binary_metric_kwargs = dict(with_dynamic=False, with_adaptive=False, with_binary=True, sample_based=False)
BINARY_METRIC_MAPPING = {
    "fmeasure": {"handler": py_sod_metrics.FmeasureHandler, "kwargs": dict(**gray_metric_kwargs, beta=0.3)},
    "f1": {"handler": py_sod_metrics.FmeasureHandler, "kwargs": dict(**gray_metric_kwargs, beta=1)},
    "precision": {"handler": py_sod_metrics.PrecisionHandler, "kwargs": gray_metric_kwargs},
    "recall": {"handler": py_sod_metrics.RecallHandler, "kwargs": gray_metric_kwargs},
    "iou": {"handler": py_sod_metrics.IOUHandler, "kwargs": gray_metric_kwargs},
    "dice": {"handler": py_sod_metrics.DICEHandler, "kwargs": gray_metric_kwargs},
    "specificity": {"handler": py_sod_metrics.SpecificityHandler, "kwargs": gray_metric_kwargs},
    #
    "bif1": {"handler": py_sod_metrics.FmeasureHandler, "kwargs": dict(**binary_metric_kwargs, beta=1)},
    "biprecision": {"handler": py_sod_metrics.PrecisionHandler, "kwargs": binary_metric_kwargs},
    "birecall": {"handler": py_sod_metrics.RecallHandler, "kwargs": binary_metric_kwargs},
    "biiou": {"handler": py_sod_metrics.IOUHandler, "kwargs": binary_metric_kwargs},
    "bioa": {"handler": py_sod_metrics.OverallAccuracyHandler, "kwargs": binary_metric_kwargs},
    "bikappa": {"handler": py_sod_metrics.KappaHandler, "kwargs": binary_metric_kwargs},
}
GRAYSCALE_METRICS = ["em"] + [k for k in BINARY_METRIC_MAPPING.keys() if not k.startswith("bi")]
SUPPORTED_METRICS = ["mae", "em", "sm", "wfm", "msiou"] + sorted(BINARY_METRIC_MAPPING.keys())
# fmt: on


class GrayscaleMetricRecorder:
    # 'fm' is replaced by 'fmeasure' in BINARY_METRIC_MAPPING
    suppoted_metrics = ["mae", "em", "sm", "wfm", "msiou"] + sorted(
        [k for k in BINARY_METRIC_MAPPING.keys() if not k.startswith("bi")]
    )

    def __init__(self, metric_names=None):
        """
        用于统计各种指标的类
        """
        if not metric_names:
            metric_names = self.suppoted_metrics
        assert all(
            [m in self.suppoted_metrics for m in metric_names]
        ), f"Only support: {self.suppoted_metrics}"

        self.metric_objs = {}
        has_existed = False
        for metric_name in metric_names:
            if metric_name in INDIVADUAL_METRIC_MAPPING:
                self.metric_objs[metric_name] = INDIVADUAL_METRIC_MAPPING[metric_name]()
            else:  # metric_name in BINARY_METRIC_MAPPING
                if not has_existed:  # only init once
                    self.metric_objs["fmeasurev2"] = py_sod_metrics.FmeasureV2()
                    has_existed = True
                metric_handler = BINARY_METRIC_MAPPING[metric_name]
                self.metric_objs["fmeasurev2"].add_handler(
                    handler_name=metric_name,
                    metric_handler=metric_handler["handler"](**metric_handler["kwargs"]),
                )

    def step(self, pre: np.ndarray, gt: np.ndarray, gt_path: str):
        assert pre.shape == gt.shape, (pre.shape, gt.shape, gt_path)
        assert pre.dtype == gt.dtype == np.uint8, (pre.dtype, gt.dtype, gt_path)

        for m_obj in self.metric_objs.values():
            m_obj.step(pre, gt)

    def show(self, num_bits: int = 3, return_ndarray: bool = False) -> dict:
        """
        返回指标计算结果:

        - 曲线数据(sequential)
        - 数值指标(numerical)
        """
        sequential_results = {}
        numerical_results = {}
        for m_name, m_obj in self.metric_objs.items():
            info = m_obj.get_results()
            if m_name == "fmeasurev2":
                for _name, results in info.items():
                    dynamic_results = results.get("dynamic")
                    adaptive_results = results.get("adaptive")
                    if dynamic_results is not None:
                        sequential_results[_name] = np.flip(dynamic_results)
                        numerical_results[f"max{_name}"] = dynamic_results.max()
                        numerical_results[f"avg{_name}"] = dynamic_results.mean()
                    if adaptive_results is not None:
                        numerical_results[f"adp{_name}"] = adaptive_results
            else:
                results = info[m_name]
                if m_name in ("wfm", "sm", "mae", "msiou"):
                    numerical_results[m_name] = results
                elif m_name == "em":
                    sequential_results[m_name] = np.flip(results["curve"])
                    numerical_results.update(
                        {
                            "maxem": results["curve"].max(),
                            "avgem": results["curve"].mean(),
                            "adpem": results["adp"],
                        }
                    )
                else:
                    raise NotImplementedError(m_name)

        if num_bits is not None and isinstance(num_bits, int):
            numerical_results = {k: v.round(num_bits) for k, v in numerical_results.items()}
        if not return_ndarray:
            sequential_results = ndarray_to_basetype(sequential_results)
            numerical_results = ndarray_to_basetype(numerical_results)
        return {"sequential": sequential_results, "numerical": numerical_results}


class BinaryMetricRecorder:
    suppoted_metrics = sorted([k for k in BINARY_METRIC_MAPPING.keys() if k.startswith("bi")])

    def __init__(self, metric_names=("bif1", "biprecision", "birecall", "biiou", "bioa")):
        """
        用于统计各种指标的类
        """
        if not metric_names:
            metric_names = self.suppoted_metrics
        assert all(
            [m in self.suppoted_metrics for m in metric_names]
        ), f"Only support: {self.suppoted_metrics}"

        self.metric_objs = {"fmeasurev2": py_sod_metrics.FmeasureV2()}
        for metric_name in metric_names:
            # metric_name in BINARY_CLASSIFICATION_METRIC_MAPPING
            metric_handler = BINARY_METRIC_MAPPING[metric_name]
            self.metric_objs["fmeasurev2"].add_handler(
                handler_name=metric_name,
                metric_handler=metric_handler["handler"](**metric_handler["kwargs"]),
            )

    def step(self, pre: np.ndarray, gt: np.ndarray, gt_path: str):
        assert pre.shape == gt.shape, (pre.shape, gt.shape, gt_path)
        assert pre.dtype == gt.dtype == np.uint8, (pre.dtype, gt.dtype, gt_path)

        for m_name, m_obj in self.metric_objs.items():
            m_obj.step(pre, gt, normalize=True)

    def show(self, num_bits: int = 3, return_ndarray: bool = False) -> dict:
        numerical_results = {}
        for m_name, m_obj in self.metric_objs.items():
            info = m_obj.get_results()
            assert m_name == "fmeasurev2"
            for _name, results in info.items():
                binary_results = results.get("binary")
                if binary_results is not None:
                    numerical_results[_name] = binary_results

        if num_bits is not None and isinstance(num_bits, int):
            numerical_results = {k: v.round(num_bits) for k, v in numerical_results.items()}
        if not return_ndarray:
            numerical_results = ndarray_to_basetype(numerical_results)
        return {"numerical": numerical_results}


class GroupedMetricRecorder:
    def __init__(
        self, group_names=None, metric_names=("sm", "wfm", "mae", "fmeasure", "em", "iou", "dice")
    ):
        self.group_names = group_names
        self.metric_names = metric_names
        self.zero()

    def zero(self):
        self.metric_recorders = {}
        if self.group_names is not None:
            self.metric_recorders.update(
                {
                    n: GrayscaleMetricRecorder(metric_names=self.metric_names)
                    for n in self.group_names
                }
            )

    def step(self, group_name: str, pre: np.ndarray, gt: np.ndarray, gt_path: str):
        if group_name not in self.metric_recorders:
            self.metric_recorders[group_name] = GrayscaleMetricRecorder(
                metric_names=self.metric_names
            )
        self.metric_recorders[group_name].step(pre, gt, gt_path)

    def show(self, num_bits: int = 3, return_group: bool = False):
        groups_metrics = {
            n: r.show(num_bits=None, return_ndarray=True) for n, r in self.metric_recorders.items()
        }

        results = {}  # gather each metric's per-group values into lists, keyed by type and name
        for group_name, group_metrics in groups_metrics.items():
            for metric_type, metric_group in group_metrics.items():
                # metric_type is either "sequential" or "numerical"
                results.setdefault(metric_type, {})
                for metric_name, metric_array in metric_group.items():
                    results[metric_type].setdefault(metric_name, []).append(metric_array)

        numerical_results = {}
        sequential_results = {}
        for metric_type, metric_group in results.items():
            for metric_name, metric_arrays in metric_group.items():
                metric_array = np.mean(np.vstack(metric_arrays), axis=0)  # average over all groups

                if metric_name in BINARY_METRIC_MAPPING or metric_name == "em":
                    if metric_type == "sequential":
                        numerical_results[f"max{metric_name}"] = metric_array.max()
                        numerical_results[f"avg{metric_name}"] = metric_array.mean()
                        sequential_results[metric_name] = metric_array
                else:
                    if metric_type == "numerical":
                        if metric_name.startswith(("max", "avg")):
                            # max*/avg* values (maxfm, avgfm, maxem, avgem) are recomputed
                            # above from the group-averaged curves, so skip the per-group ones
                            continue
                        numerical_results[metric_name] = metric_array

        sequential_results = ndarray_to_basetype(sequential_results)
        if not return_group:
            numerical_results = {k: v.round(num_bits) for k, v in numerical_results.items()}
            numerical_results = ndarray_to_basetype(numerical_results)
            numerical_results = self.sort_results(numerical_results)
            return {"sequential": sequential_results, "numerical": numerical_results}
        else:
            group_numerical_results = {}
            for group_name, group_metric in groups_metrics.items():
                group_metric = {k: v.round(num_bits) for k, v in group_metric["numerical"].items()}
                group_metric = ndarray_to_basetype(group_metric)
                group_numerical_results[group_name] = self.sort_results(group_metric)
            return {"sequential": sequential_results, "numerical": group_numerical_results}

    def sort_results(self, results: dict) -> OrderedDict:
        """for a single group of metrics"""
        sorted_results = OrderedDict()
        all_keys = sorted(results.keys(), key=lambda item: item[::-1])
        for name in self.metric_names:
            for key in all_keys:
                if key.endswith(name):
                    sorted_results[key] = results[key]
        return sorted_results
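
Usage note (editorial): the recorders in this file share a step/show protocol. Below is a minimal driver sketch, not part of the repository; the directory layout and the read_uint8_gray helper are hypothetical, and it assumes predictions and ground truths load as same-shaped uint8 arrays, as the assertions in step() require.

# Hypothetical driver sketch; paths and the helper are illustrative only.
import cv2
import numpy as np

# assumption: the repository root is on sys.path
from utils.recorders.metric_recorder import GrayscaleMetricRecorder, GroupedMetricRecorder

def read_uint8_gray(path: str) -> np.ndarray:
    # assumption: every image on disk is readable as a single-channel 8-bit map
    return cv2.imread(path, cv2.IMREAD_GRAYSCALE)

recorder = GrayscaleMetricRecorder(metric_names=("sm", "mae", "em"))
for name in ("0001.png", "0002.png"):  # illustrative file names
    pre = read_uint8_gray(f"predictions/{name}")  # hypothetical layout
    gt = read_uint8_gray(f"ground_truths/{name}")
    recorder.step(pre=pre, gt=gt, gt_path=f"ground_truths/{name}")

# By default show() rounds to num_bits and converts numpy values into plain
# Python types, returning {"sequential": ..., "numerical": ...}.
print(recorder.show(num_bits=3)["numerical"])

GroupedMetricRecorder keeps one GrayscaleMetricRecorder per group (e.g. per scene or sub-dataset) and averages each metric across groups in show(). A minimal sketch with synthetic data, again editorial and illustrative:

# Hypothetical sketch; group names and arrays are synthetic.
rng = np.random.default_rng(0)
grouped = GroupedMetricRecorder(metric_names=("sm", "mae", "em"))
for group_name in ("indoor", "outdoor"):
    for _ in range(3):
        pre = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        gt = np.where(rng.random((64, 64)) > 0.5, 255, 0).astype(np.uint8)
        grouped.step(group_name=group_name, pre=pre, gt=gt, gt_path="<synthetic>")

print(grouped.show(num_bits=3)["numerical"])                     # averaged over groups
print(grouped.show(num_bits=3, return_group=True)["numerical"])  # keyed by group name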


================================================
FILE: utils/recorders/txt_recorder.py
================================================
# -*- coding: utf-8 -*-
# @Time    : 2021/1/4
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang

from datetime import datetime


class TxtRecorder:
    def __init__(self, txt_path, to_append=True, max_method_name_width=10):
        """
        用于向txt文档写数据的类。

        :param txt_path: txt文档路径
        :param to_append: 是否要继续使用之前的文档,如果没有就重新创建
        :param max_method_name_width: 方法字符串的最大长度
        """
        self.txt_path = txt_path
        self.max_method_name_width = max_method_name_width
        mode = "a" if to_append else "w"
        with open(txt_path, mode=mode, encoding="utf-8") as f:
            f.write(f"\n ========>> Date: {datetime.now()} <<======== \n")
        self.row_names = []

    def add_row(self, row_name, row_data, row_start_str="", row_end_str="\n"):
        self.row_names.append(row_name)
        with open(self.txt_path, mode="a", encoding="utf-8") as f:
            f.write(f"{row_start_str} ========>> {row_name}: {row_data} <<======== {row_end_str}")

    def __call__(
        self,
        method_results: dict,
        method_name: str = "",
        row_start_str="",
        row_end_str="\n",
        value_width=6,
    ):
        msg = row_start_str
        if len(method_name) > self.max_method_name_width:
            method_name = method_name[: self.max_method_name_width - 3] + "..."
        else:
            method_name += " " * (self.max_method_name_width - len(method_name))
        msg += f"[{method_name}] "
        for metric_name, metric_value in method_results.items():
            assert isinstance(metric_value, float)
            msg += f"{metric_name}: "
            real_width = len(str(metric_value))
            if value_width > real_width:
                # pad with trailing spaces
                msg += f"{metric_value}" + " " * (value_width - real_width)
            else:
                # the actual value is wider than the limit, so keep a rounded approximation
                # round to the given number of digits; since all values lie in 0~1, the
                # leading `0.` is excluded from the width passed to round
                msg += f"{round(metric_value, ndigits=value_width - 2)}"
            msg += " "
        msg += row_end_str
        with open(self.txt_path, mode="a", encoding="utf-8") as f:
            f.write(msg)
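
Usage note (editorial): a minimal sketch of TxtRecorder, not part of the repository; the path, method name, and metric values are illustrative.

# Hypothetical sketch; all values are illustrative.
from utils.recorders.txt_recorder import TxtRecorder  # assumes repo root on sys.path

recorder = TxtRecorder(txt_path="results.txt", to_append=False, max_method_name_width=10)
recorder.add_row(row_name="Dataset", row_data="NJUD")
# Each call appends one aligned row: the method name is clipped/padded to 10
# characters and each value is padded or rounded to fit value_width characters.
recorder(
    method_results={"sm": 0.912, "mae": 0.034},  # values must be floats
    method_name="MyVeryLongMethod",  # clipped to "MyVeryL..."
    value_width=6,
)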
================================================
SYMBOL INDEX (98 symbols across 16 files)
================================================

FILE: eval.py
  function get_args (line 12) | def get_args():
  function main (line 110) | def main():

FILE: metrics/draw_curves.py
  function draw_curves (line 21) | def draw_curves(

FILE: metrics/image_metrics.py
  class Recorder (line 24) | class Recorder:
    method __init__ (line 25) | def __init__(
    method record (line 76) | def record(self, method_results, dataset_name, method_name):
    method export (line 83) | def export(self):
  function cal_metrics (line 104) | def cal_metrics(
  function evaluate (line 246) | def evaluate(names, num_bits, pre_info_pair, gt_info_pair, metric_class,...

FILE: metrics/video_metrics.py
  class Recorder (line 27) | class Recorder:
    method __init__ (line 28) | def __init__(
    method record (line 79) | def record(self, method_results, dataset_name, method_name):
    method export (line 86) | def export(self):
  function cal_metrics (line 107) | def cal_metrics(
  function evaluate (line 257) | def evaluate(

FILE: plot.py
  function get_args (line 11) | def get_args():
  function main (line 67) | def main(args):

FILE: tools/append_results.py
  function get_args (line 7) | def get_args():
  function main (line 23) | def main():

FILE: tools/converter.py
  function update_dict (line 27) | def update_dict(parent_dict, sub_dict):
  function replace_cell (line 136) | def replace_cell(ori_value, k):
  function remove_latex_chars_out_of_mathenv (line 167) | def remove_latex_chars_out_of_mathenv(string: str):

FILE: tools/info_py_to_json.py
  function validate_py_syntax (line 13) | def validate_py_syntax(filename):
  function convert_py_to_json (line 22) | def convert_py_to_json(source_config_root, target_config_root):
  function get_args (line 59) | def get_args():

FILE: tools/rename.py
  function path_join (line 12) | def path_join(base_path, sub_path):
  function rename_all_files (line 18) | def rename_all_files(src_pattern, dst_pattern, src_name, src_dir, dst_di...

FILE: utils/generate_info.py
  function curve_info_generator (line 21) | def curve_info_generator():
  function simple_info_generator (line 49) | def simple_info_generator():
  function get_valid_elements (line 57) | def get_valid_elements(source: list, include_elements: list, exclude_ele...
  function get_methods_info (line 85) | def get_methods_info(
  function get_datasets_info (line 143) | def get_datasets_info(

FILE: utils/misc.py
  function get_ext (line 10) | def get_ext(path_list):
  function get_name_list_and_suffix (line 27) | def get_name_list_and_suffix(data_path: str) -> tuple:
  function get_name_list (line 50) | def get_name_list(data_path: str, name_prefix: str = "", name_suffix: st...
  function get_number_from_tail (line 76) | def get_number_from_tail(string):
  function get_name_with_group_list (line 81) | def get_name_with_group_list(
  function get_list_with_suffix (line 149) | def get_list_with_suffix(dataset_path: str, suffix: str):
  function make_dir (line 168) | def make_dir(path):
  function imread_with_checking (line 177) | def imread_with_checking(path, for_color: bool = True) -> np.ndarray:
  function get_gt_pre_with_name (line 187) | def get_gt_pre_with_name(
  function get_gt_pre_with_name_and_group (line 215) | def get_gt_pre_with_name_and_group(
  function normalize_array (line 252) | def normalize_array(
  function get_valid_key_name (line 267) | def get_valid_key_name(data_dict: dict, key_name: str) -> str:
  function get_target_key (line 275) | def get_target_key(target_dict: dict, key: str) -> str:
  function colored_print (line 284) | def colored_print(msg: str, mode: str = "general"):
  class ColoredPrinter (line 303) | class ColoredPrinter:
    method info (line 309) | def info(msg):
    method warn (line 313) | def warn(msg):
    method error (line 318) | def error(msg):
  function update_info (line 323) | def update_info(source_info: dict, new_info: dict):

FILE: utils/print_formatter.py
  function print_formatter (line 6) | def print_formatter(
  function clip_string (line 36) | def clip_string(string: str, max_length: int, padding_char: str = " ", m...
  function formatter_for_tabulate (line 58) | def formatter_for_tabulate(

FILE: utils/recorders/curve_drawer.py
  class CurveDrawer (line 12) | class CurveDrawer(object):
    method __init__ (line 13) | def __init__(
    method init_subplots (line 56) | def init_subplots(self):
    method plot_at_axis (line 60) | def plot_at_axis(self, idx, method_curve_setting, x_data, y_data):
    method set_axis_property (line 82) | def set_axis_property(
    method _plot (line 106) | def _plot(self):
    method show (line 127) | def show(self):
    method save (line 132) | def save(self, path):

FILE: utils/recorders/excel_recorder.py
  class _BaseExcelRecorder (line 17) | class _BaseExcelRecorder(object):
    method __init__ (line 18) | def __init__(self, xlsx_path: str):
    method _initial_xlsx (line 31) | def _initial_xlsx(self):
    method load_sheet (line 34) | def load_sheet(self, sheet_name: str):
    method append_row (line 42) | def append_row(self, sheet: Worksheet, row_data):
    method insert_row (line 46) | def insert_row(self, sheet: Worksheet, row_data, row_id, min_col=1, in...
    method merge_region (line 63) | def merge_region(sheet: Worksheet, min_row, max_row, min_col, max_col):
    method get_col_id_with_row_id (line 72) | def get_col_id_with_row_id(sheet: Worksheet, col_name: str, row_id):
    method get_row_id_with_col_name (line 83) | def get_row_id_with_col_name(self, sheet: Worksheet, row_name: str, co...
    method get_row_id_with_col_id (line 98) | def get_row_id_with_col_id(sheet: Worksheet, row_name: str, col_id: int):
    method format_string_with_config (line 113) | def format_string_with_config(string: str, repalce_config: dict = None):
  class MetricExcelRecorder (line 128) | class MetricExcelRecorder(_BaseExcelRecorder):
    method __init__ (line 129) | def __init__(
    method _initial_table (line 190) | def _initial_table(self):
    method _format_row_data (line 221) | def _format_row_data(self, row_data: dict) -> list:
    method __call__ (line 227) | def __call__(self, row_data: dict, dataset_name: str, method_name: str):

FILE: utils/recorders/metric_recorder.py
  function ndarray_to_basetype (line 11) | def ndarray_to_basetype(data):
  function round_w_zero_padding (line 33) | def round_w_zero_padding(x, bit_width):
  class GrayscaleMetricRecorder (line 72) | class GrayscaleMetricRecorder:
    method __init__ (line 78) | def __init__(self, metric_names=None):
    method step (line 103) | def step(self, pre: np.ndarray, gt: np.ndarray, gt_path: str):
    method show (line 110) | def show(self, num_bits: int = 3, return_ndarray: bool = False) -> dict:
  class BinaryMetricRecorder (line 155) | class BinaryMetricRecorder:
    method __init__ (line 158) | def __init__(self, metric_names=("bif1", "biprecision", "birecall", "b...
    method step (line 177) | def step(self, pre: np.ndarray, gt: np.ndarray, gt_path: str):
    method show (line 184) | def show(self, num_bits: int = 3, return_ndarray: bool = False) -> dict:
  class GroupedMetricRecorder (line 201) | class GroupedMetricRecorder:
    method __init__ (line 202) | def __init__(
    method zero (line 209) | def zero(self):
    method step (line 219) | def step(self, group_name: str, pre: np.ndarray, gt: np.ndarray, gt_pa...
    method show (line 226) | def show(self, num_bits: int = 3, return_group: bool = False):
    method sort_results (line 271) | def sort_results(self, results: dict) -> OrderedDict:

FILE: utils/recorders/txt_recorder.py
  class TxtRecorder (line 9) | class TxtRecorder:
    method __init__ (line 10) | def __init__(self, txt_path, to_append=True, max_method_name_width=10):
    method add_row (line 25) | def add_row(self, row_name, row_data, row_start_str="", row_end_str="\...
    method __call__ (line 30) | def __call__(
