Full Code of sartorius-research/LIVECell for AI

Repository: sartorius-research/LIVECell
Branch: main
Commit: 1915b2511dca
Files: 27
Total size: 132.0 KB

Directory structure:
LIVECell/
├── LICENSE
├── README.md
├── _config.yml
├── code/
│   ├── Fluorescence cell count evaluation.ipynb
│   ├── coco_evaluation.py
│   ├── coco_evaluation_resnest.py
│   └── preprocessing.py
└── model/
    ├── README.md
    ├── anchor_based/
    │   ├── a172_config.yaml
    │   ├── bt474_config.yaml
    │   ├── bv2_config.yaml
    │   ├── huh7_config.yaml
    │   ├── livecell_config.yaml
    │   ├── mcf7_config.yaml
    │   ├── shsy5y_config.yaml
    │   ├── skbr3_config.yaml
    │   └── skov3_config.yaml
    └── anchor_free/
        ├── Base-CenterMask-VoVNet.yaml
        ├── a172_config.yaml
        ├── bt474_config.yaml
        ├── bv2_config.yaml
        ├── huh7_config.yaml
        ├── livecell_config.yaml
        ├── mcf7_config.yaml
        ├── shsy5y_config.yaml
        ├── skbr3_config.yaml
        └── skov3_config.yaml

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2021 Sartorius AG

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# LIVECell dataset

This document contains instructions on how to access the data associated with the submitted
manuscript "LIVECell - A large-scale dataset for label-free live cell segmentation" by Edlund et al. 2021.

## Background
Light microscopy is a cheap, accessible, non-invasive modality that when combined with well-established
protocols of two-dimensional cell culture facilitates high-throughput quantitative imaging to study biological
phenomena. Accurate segmentation of individual cells enables exploration of complex biological questions, but
this requires sophisticated image-processing pipelines due to the low contrast and high object density.
Deep learning-based methods are considered state-of-the-art for most computer vision problems but require vast
amounts of annotated data, for which there is no suitable resource available in the field of label-free cellular
imaging. To address this gap we present LIVECell, a high-quality, manually annotated and expert-validated dataset
that is the largest of its kind to date, consisting of over 1.6 million cells from a diverse set of cell morphologies
and culture densities. To further demonstrate its utility, we provide convolutional neural network-based models
trained and evaluated on LIVECell.

## How to access LIVECell

All images in LIVECell are available following [this link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/images.zip) (requires 1.3 GB). Annotations for the different experiments are linked below.
For more details regarding benchmarks and how to use our models, see [this link](model/README.md).

### LIVECell-wide train and evaluate

| Annotation set             | URL           |
| -------------------------- |:-------------:|
| Training set    | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell/livecell_coco_train.json)   |
| Validation set  | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell/livecell_coco_val.json) |
| Test set        | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell/livecell_coco_test.json) |

### Single cell-type experiments


| Cell Type      | Training set  | Validation set | Test set |
| ---------------|:-------------:|:--------------:|:--------:|
| A172           | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/a172/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/a172/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/a172/test.json) |
| BT474          | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/bt474/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/bt474/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/bt474/test.json) |
| BV-2           | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/bv2/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/bv2/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/bv2/test.json) |
| Huh7           | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/huh7/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/huh7/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/huh7/test.json) |
| MCF7           | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/mcf7/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/mcf7/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/mcf7/test.json) |
| SH-SHY5Y       | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/shsy5y/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/shsy5y/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/shsy5y/test.json) |
| SkBr3          | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/skbr3/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/skbr3/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/skbr3/test.json) |
| SK-OV-3        | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/skov3/train.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/skov3/val.json) | [link](//livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_single_cells/skov3/test.json) |


### Dataset size experiments

| Split      | URL   |
| ---------- |:-----:|
| 2 %       | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_dataset_size_split/0_train2percent.json) |
| 4 %       | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_dataset_size_split/1_train4percent.json)|
| 5 %       | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_dataset_size_split/2_train5percent.json)|
| 25 %      | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_dataset_size_split/3_train25percent.json)|
| 50 %      | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell_dataset_size_split/4_train50percent.json)|


### Comparison to fluorescence-based object counts
The images and a corresponding json-file with the object count per image are available, together with the raw
fluorescence images the counts are based on.

| Cell Type    | Images | Counts | Fluorescent images
| ------------ |:------:|:----------:| :-----: |
| A549         | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/nuclear_count_benchmark/A549.zip) | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/nuclear_count_benchmark/A549_counts.json) | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/nuclear_count_benchmark/A549_fluorescent_images.zip) 
| A172         | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/nuclear_count_benchmark/A172.zip) | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/nuclear_count_benchmark/A172_counts.json) | [link](http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/nuclear_count_benchmark/A172_fluorescent_images.zip) 


### Download all of LIVECell

The LIVECell dataset and trained models are stored in an Amazon Web Services (AWS) S3-bucket. The easiest way to
download the dataset is with an AWS IAM-user and the
[AWS-CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html), running the following in the folder
you would like to download the dataset to:
```
aws s3 sync s3://livecell-dataset .
```

If you do not have an AWS IAM-user, the procedure is a little more involved. We can use `curl` to request the
bucket listing (S3's ListObjectsV2 API) and save the XML response to `files.xml`:

```
curl "http://livecell-dataset.s3.eu-central-1.amazonaws.com/?list-type=2" > files.xml
```

Note that S3 returns at most 1,000 keys per response; pass the returned `NextContinuationToken` as the
`continuation-token` query parameter to fetch subsequent pages.

We then extract the object URLs from `files.xml` using `grep`:

```
grep -oPm1 "(?<=<Key>)[^<]+" files.xml | sed -e 's/^/http:\/\/livecell-dataset.s3.eu-central-1.amazonaws.com\//' > urls.txt
```

Then download the files you like using `wget`.
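For example, `wget --input-file=urls.txt` downloads everything listed. Below is a minimal Python alternative,
a sketch assuming `urls.txt` was produced by the `grep` step above; it mirrors the bucket's key structure locally:

```python
import os
import urllib.request

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    # The S3 key is the path portion after the bucket hostname
    local_path = url.split(".amazonaws.com/", 1)[1]
    if not local_path or local_path.endswith("/"):
        continue  # skip directory-marker keys
    os.makedirs(os.path.dirname(local_path) or ".", exist_ok=True)
    if not os.path.exists(local_path):
        urllib.request.urlretrieve(url, local_path)
```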

## File structure
The top-level structure of the files is arranged like:
```
/livecell-dataset/
    ├── LIVECell_dataset_2021  
    |       ├── annotations/
    |       ├── models/
    |       ├── nuclear_count_benchmark/	
    |       └── images.zip  
    ├── README.md  
    └── LICENSE
```

### LIVECell_dataset_2021/images
The images of the LIVECell-dataset are stored in `/livecell-dataset/LIVECell_dataset_2021/images.zip` along with
their annotations in `/livecell-dataset/LIVECell_dataset_2021/annotations/`.

Within `images.zip`, the training/validation-set and test-set images are kept completely separate to
facilitate fair comparison between studies. The images require 1.3 GB of disk space unzipped and are arranged like:
```
images/
    ├── livecell_test_images
    |       └── <Cell Type>
    |               └── <Cell Type>_Phase_<Well>_<Location>_<Timestamp>_<Crop>.tif
    └── livecell_train_val_images
            └── <Cell Type>
```
Where `<Cell Type>` is one of the eight cell types in LIVECell (A172, BT474, BV2, Huh7, MCF7, SHSY5Y, SkBr3, SKOV3),
`<Well>` is the location in the 96-well plate used to culture the cells, `<Location>` indicates the position in the well
where the image was acquired, `<Timestamp>` is the time passed from the beginning of the experiment to image acquisition,
and `<Crop>` is the index of the crop of the original larger image. An example image name is `A172_Phase_C7_1_02d16h00m_2.tif`:
an image of A172 cells grown in well C7, acquired at position 1 two days and 16 hours after
experiment start (crop position 2).
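
A small sketch parsing this naming convention with a regular expression (assuming all names follow the
example format exactly, with two-digit day/hour/minute fields):

```python
import re

NAME_PATTERN = re.compile(
    r"(?P<cell_type>[^_]+)_Phase_(?P<well>[^_]+)_(?P<location>\d+)"
    r"_(?P<timestamp>\d{2}d\d{2}h\d{2}m)_(?P<crop>\d+)\.tif"
)

parts = NAME_PATTERN.match("A172_Phase_C7_1_02d16h00m_2.tif").groupdict()
# {'cell_type': 'A172', 'well': 'C7', 'location': '1',
#  'timestamp': '02d16h00m', 'crop': '2'}
```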

### LIVECell_dataset_2021/annotations/
The annotations of LIVECell are prepared for all tasks along with the training/validation/test splits used for all
experiments in the paper. The annotations require 2.1 GB of disk space and are arranged like:

```
annotations/
    ├── LIVECell
    |       └── livecell_coco_<train/val/test>.json
    ├── LIVECell_single_cells
    |       └── <Cell Type>
    |               └── <train/val/test>.json
    └── LIVECell_dataset_size_split
            └── <Split>_train<Percentage>percent.json
```

*  `annotations/LIVECell` contains the annotations used for the LIVECell-wide train and evaluate task.
*  `annotations/LIVECell_single_cells` contains the annotations used for Single cell type train and evaluate as well
   as the Single cell type transferability tasks.
*  `annotations/LIVECell_dataset_size_split` contains the annotations used to investigate the impact of training set
   scale.

All annotations are in [Microsoft COCO Object Detection-format](https://cocodataset.org/#format-data), and can for
instance be parsed by the Python package [`pycocotools`](https://pypi.org/project/pycocotools/).
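
For example, a minimal sketch of parsing the LIVECell-wide training annotations with `pycocotools`
(assuming `livecell_coco_train.json` has been downloaded to the working directory):

```python
from pycocotools.coco import COCO

coco = COCO("livecell_coco_train.json")
print(f"{len(coco.getImgIds())} images, {len(coco.getAnnIds())} annotations")

# Decode the segmentation mask of one annotation into a binary numpy array
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
mask = coco.annToMask(anns[0])  # shape (height, width), values in {0, 1}
```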

### models/
All models trained and evaluated for tasks associated with LIVECell are made available for wider use. The models
are trained using [detectron2](https://github.com/facebookresearch/detectron2), Facebook's framework for
object detection and instance segmentation. The models require 15 GB of disk space and are arranged like:

```
models/
   └── Anchor_<free/based>
            ├── ALL/
            |    └──<Model>.pth
            └── <Cell Type>/
                 └──<Model>.pth
       
```

Where each `<Model>.pth` is a binary file containing the model weights.

### configs/
 
The config files for each model can be found in the [LIVECell github repo](https://github.com/sartorius-research/LIVECell)

```
LIVECell
    └── Anchor_<free/based>
            ├── livecell_config.yaml
            ├── a172_config.yaml
            ├── bt474_config.yaml
            ├── bv2_config.yaml
            ├── huh7_config.yaml
            ├── mcf7_config.yaml
            ├── shsy5y_config.yaml
            ├── skbr3_config.yaml
            └── skov3_config.yaml
```

Where each config file can be used to reproduce the corresponding training run, or combined with our model weights
for inference; for more info see the [usage section](model/README.md#Usage).

### nuclear_count_benchmark/
The label-free images are stored in a zip-archive and the corresponding fluorescence-based object counts
in a json-file, arranged as below:

```
nuclear_count_benchmark/
    ├── A172.zip
    ├── A172_counts.json
    ├── A172_fluorescent_images.zip
    ├── A549.zip
    ├── A549_counts.json 
    └── A549_fluorescent_images.zip
```

The json files have the following format:

```
{
    "<filename>": "<count>"
}
```
Where `<filename>` points to one of the images in the zip-archive, and `<count>` refers to the object count
according to the fluorescent nuclear labels.
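
A minimal sketch of reading such a file (assuming `A172_counts.json` from the table above; the counts are
stored as strings per the format above, hence the cast):

```python
import json

with open("A172_counts.json") as f:
    counts = {filename: int(count) for filename, count in json.load(f).items()}
```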

## LICENSE
All images, annotations and models associated with LIVECell are published under the
Attribution-NonCommercial 4.0 International ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)) license.

All software source code associated with LIVECell is published under the MIT License.


================================================
FILE: _config.yml
================================================
theme: jekyll-theme-cayman
title: LIVECell
description: A large-scale dataset for label-free live cell segmentation


================================================
FILE: code/Fluorescence cell count evaluation.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# LIVECell Fluorescence cell count benchmark\n",
    "\n",
    "This notebook contains a reference implementation of the evaluation of the fluorescence cell count benchmark in \"LIVECell - A large-scale dataset for label-free live cell segmentation\" by Edlund et. al. Given data of predicted and fluorescence-based cell count, the evaluation consists of two parts:\n",
    "\n",
    "1. R2 between predicted and fluorescence-based counts in images with fewer than 1600 cells per image (roughly corresponding to full confluency).\n",
    "2. The point which the linear relationship breaks. This test works by comparing the residuals of a linear vs. a non-linear regression model of the fluorescence-based counts as a function of the predicted ones."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import ipywidgets\n",
    "from IPython.core.display import display\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from sklearn.metrics import r2_score\n",
    "from scipy import stats\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.neighbors import KNeighborsRegressor"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First we define our functions.\n",
    "\n",
    "1. `get_counts_from_excel_file` reads the counts from the specific Excel-file format we used for the manuscripts. This is preferrably replaced by whatever format you like.\n",
    "2. `linearity_cutoff_test` contains the test for when linearity breaks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_counts_from_excel_file(sheet_name, excel_file):\n",
    "    \"\"\" Load data from Excel-file and flatten to 1D-arrays. \"\"\"\n",
    "    \n",
    "    sheet = excel_file.parse(sheet_name, index_col=1)\n",
    "    sheet = sheet.rename(columns={sheet.columns[0]: 'time'})\n",
    "\n",
    "    nc_cols = [col for col in sheet.columns if 'Image' in col]\n",
    "    model_cols = [col for col in sheet.columns if not col in nc_cols and col != 'time']\n",
    "\n",
    "    nc_flat = sheet[nc_cols].values.flatten()\n",
    "    model_flat = sheet[model_cols].values.flatten()\n",
    "\n",
    "    nc_is_nan = np.isnan(nc_flat)\n",
    "    model_is_nan = np.isnan(model_flat)\n",
    "    any_is_nan = nc_is_nan | model_is_nan\n",
    "\n",
    "    nc_flat = nc_flat[~any_is_nan]\n",
    "    model_flat = model_flat[~any_is_nan]\n",
    "    return nc_flat, model_flat\n",
    "\n",
    "\n",
    "def linearity_cutoff_test(\n",
    "    fluorescence_counts,\n",
    "    prediction_counts,\n",
    "    start_threshold = 500,\n",
    "    increment = 1,\n",
    "    p_cutoff = 1e-5, \n",
    "    n_neighbors=5\n",
    "):\n",
    "    \"\"\" Test when linearity breaks. \n",
    "    \n",
    "    While the maximum number of objects per image is increased incrementally, \n",
    "    the fluorescence-based counts are regressed as a function of the predicted\n",
    "    counts using linear regression and KNN-regression (default 5 neighbors). \n",
    "    \n",
    "    Then the null hypothesis of equally sized residuals is tested using a \n",
    "    Levene's test. If the null hypothesis is rejected, the fit is considered\n",
    "    non-linear. \n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    fluorescence_counts : array\n",
    "        1D-array of ints containing fluorescence-based counts\n",
    "    prediction_counts : array\n",
    "        1D-array ints containing predicted counts\n",
    "    start_threshold : int\n",
    "        Maximum number of objects per image to start incrementing from (default 500)\n",
    "    increment : int\n",
    "        Number of objects per image to increment with (default 1)\n",
    "    p_cutoff : float\n",
    "        p-value cutoff to reject null hypothesis (default 1E-5)\n",
    "    n_neighbors : int\n",
    "        Number of neighbors in KNN-regression.\n",
    "        \n",
    "    Returns\n",
    "    -------\n",
    "    int\n",
    "        Number of objects per image where null hypothesis was first rejected.\n",
    "    \"\"\"\n",
    "\n",
    "    for test_threshold in range(start_threshold, int(nc_flat.max()), increment):\n",
    "        below_test_threshold = fluorescence_counts < test_threshold\n",
    "        y = fluorescence_counts[below_test_threshold]\n",
    "\n",
    "        prediction_counts_2d = np.atleast_2d(prediction_counts[below_test_threshold]).T\n",
    "        linear_model = LinearRegression().fit(prediction_counts_2d, y)\n",
    "        knn_model = KNeighborsRegressor(n_neighbors).fit(prediction_counts_2d, y)\n",
    "        linear_pred_nc = linear_model.predict(prediction_counts_2d)\n",
    "        knn_pred_nc = knn_model.predict(prediction_counts_2d)\n",
    "\n",
    "        knn_residal = (y - knn_pred_nc)\n",
    "        linear_residual = (y - linear_pred_nc)\n",
    "        test_result = stats.levene(knn_residal, linear_residual)\n",
    "        if test_result.pvalue < p_cutoff:\n",
    "            break\n",
    "            \n",
    "    return test_threshold"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pick file to analyze."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "596c505bc775437995efc34f96ed561d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "FileUpload(value={}, accept='.xlsx', description='Upload')"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "uploader = ipywidgets.FileUpload(accept='.xlsx', multiple=False)\n",
    "display(uploader)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Run tests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A549 - Anchor-free model\n",
      "R2 below 1600 objects = 0.980\n",
      "Linearity break, n objects = 2031\n",
      "\n",
      "A549 - Anchor-based model\n",
      "R2 below 1600 objects = 0.985\n",
      "Linearity break, n objects = 1403\n",
      "\n",
      "A172 - Anchor-free model\n",
      "R2 below 1600 objects = 0.942\n",
      "Linearity break, n objects = 1948\n",
      "\n",
      "A172 - Anchor-based model\n",
      "R2 below 1600 objects = 0.977\n",
      "Linearity break, n objects = 1328\n",
      "\n"
     ]
    }
   ],
   "source": [
    "if not uploader.value:\n",
    "    print('Pick file using file-picker first')\n",
    "else:\n",
    "    first_key = next(key for key in uploader.value)\n",
    "    excel_file = pd.ExcelFile(uploader.value[first_key]['content'], engine='openpyxl')\n",
    "    sheet_names = excel_file.sheet_names\n",
    "\n",
    "    threshold = 1600\n",
    "\n",
    "    for sheet_name in sheet_names:\n",
    "        cell_type, model_name = sheet_name.split('-', 1)\n",
    "        print(f'{cell_type} - {model_name} model')\n",
    "        nc_flat, model_flat = get_counts_from_excel_file(sheet_name, excel_file)\n",
    "\n",
    "        below_threshold = nc_flat < threshold\n",
    "        r2 = r2_score(nc_flat[below_threshold], model_flat[below_threshold])\n",
    "        linearity_cutoff = linearity_cutoff_test(nc_flat, model_flat)\n",
    "        print(f'R2 below {threshold} objects = {r2:.3f}')\n",
    "        print(f'Linearity break, n objects = {linearity_cutoff}')\n",
    "        print()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
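
For reference, a self-contained sketch of part 1 of this evaluation on synthetic counts (hypothetical data,
not LIVECell measurements), mirroring the notebook's R2 computation:

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
fluorescence = rng.uniform(100, 2500, size=500)
# Simulate a model whose counts track fluorescence linearly, then saturate
predicted = np.where(fluorescence < 1500, fluorescence,
                     1500 + 0.2 * (fluorescence - 1500))
predicted += rng.normal(scale=25.0, size=predicted.shape)

below = fluorescence < 1600
print(f"R2 below 1600 objects = {r2_score(fluorescence[below], predicted[below]):.3f}")
```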


================================================
FILE: code/coco_evaluation.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates.
# Modified by Sangrok Lee and Youngwan Lee (ETRI), 2020.
# We modify COCOEvaluator for adopting mask_score in mask evaluation.
# Modified by Christoffer Edlund (Sartorius), 2022. All Rights Reserved.
# Modified COCOEvaluator to support more than 100 detections in the evaluation and added
# evaluation of multiple IoU levels.

import types
import contextlib
import copy
import io
import itertools
import json
import logging
import numpy as np
import os
import pickle
from collections import OrderedDict
import pycocotools.mask as mask_util
import torch
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from tabulate import tabulate

import detectron2.utils.comm as comm
from detectron2.config import CfgNode
from detectron2.data import MetadataCatalog
from detectron2.data.datasets.coco import convert_to_coco_json
from detectron2.evaluation.fast_eval_api import COCOeval_opt
from detectron2.structures import Boxes, BoxMode, pairwise_iou
from detectron2.utils.file_io import PathManager
from detectron2.utils.logger import create_small_table

from detectron2.evaluation.evaluator import DatasetEvaluator


class COCOEvaluator(DatasetEvaluator):
    """
    Evaluate AR for object proposals, AP for instance detection/segmentation, AP
    for keypoint detection outputs using COCO's metrics.
    See http://cocodataset.org/#detection-eval and
    http://cocodataset.org/#keypoints-eval to understand its metrics.
    In addition to COCO, this evaluator is able to support any bounding box detection,
    instance segmentation, or keypoint detection dataset.
    """

    def __init__(
        self,
        dataset_name,
        tasks=None,
        distributed=True,
        output_dir=None,
        *,
        use_fast_impl=True,
        kpt_oks_sigmas=(),
    ):
        """
        Args:
            dataset_name (str): name of the dataset to be evaluated.
                It must have either the following corresponding metadata:
                    "json_file": the path to the COCO format annotation
                Or it must be in detectron2's standard dataset format
                so it can be converted to COCO format automatically.
            tasks (tuple[str]): tasks that can be evaluated under the given
                configuration. A task is one of "bbox", "segm", "keypoints".
                By default, will infer this automatically from predictions.
            distributed (True): if True, will collect results from all ranks and run evaluation
                in the main process.
                Otherwise, will only evaluate the results in the current process.
            output_dir (str): optional, an output directory to dump all
                results predicted on the dataset. The dump contains two files:
                1. "instances_predictions.pth" a file in torch serialization
                   format that contains all the raw original predictions.
                2. "coco_instances_results.json" a json file in COCO's result
                   format.
            use_fast_impl (bool): use a fast but **unofficial** implementation to compute AP.
                Although the results should be very close to the official implementation in COCO
                API, it is still recommended to compute results with the official API for use in
                papers. The faster implementation also uses more RAM.
            kpt_oks_sigmas (list[float]): The sigmas used to calculate keypoint OKS.
                See http://cocodataset.org/#keypoints-eval
                When empty, it will use the defaults in COCO.
                Otherwise it should be the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS.
        """
        self._logger = logging.getLogger(__name__)
        self._distributed = distributed
        self._output_dir = output_dir
        self._use_fast_impl = use_fast_impl

        if tasks is not None and isinstance(tasks, CfgNode):
            kpt_oks_sigmas = (
                tasks.TEST.KEYPOINT_OKS_SIGMAS if not kpt_oks_sigmas else kpt_oks_sigmas
            )
            self._logger.warn(
                "COCO Evaluator instantiated using config, this is deprecated behavior."
                " Please pass in explicit arguments instead."
            )
            self._tasks = None  # Inferring it from predictions should be better
        else:
            self._tasks = tasks

        self._cpu_device = torch.device("cpu")

        self._metadata = MetadataCatalog.get(dataset_name)
        if not hasattr(self._metadata, "json_file"):
            self._logger.info(
                f"'{dataset_name}' is not registered by `register_coco_instances`."
                " Therefore trying to convert it to COCO format ..."
            )

            cache_path = os.path.join(output_dir, f"{dataset_name}_coco_format.json")
            self._metadata.json_file = cache_path
            convert_to_coco_json(dataset_name, cache_path)

        json_file = PathManager.get_local_path(self._metadata.json_file)
        with contextlib.redirect_stdout(io.StringIO()):
            self._coco_api = COCO(json_file)

        # Test set json files do not contain annotations (evaluation must be
        # performed using the COCO evaluation server).
        self._do_evaluation = "annotations" in self._coco_api.dataset
        if self._do_evaluation:
            self._kpt_oks_sigmas = kpt_oks_sigmas

    def reset(self):
        self._predictions = []

    def process(self, inputs, outputs):
        """
        Args:
            inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
                It is a list of dict. Each dict corresponds to an image and
                contains keys like "height", "width", "file_name", "image_id".
            outputs: the outputs of a COCO model. It is a list of dicts with key
                "instances" that contains :class:`Instances`.
        """
        for input, output in zip(inputs, outputs):
            prediction = {"image_id": input["image_id"]}

            if "instances" in output:
                instances = output["instances"].to(self._cpu_device)
                prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
            if "proposals" in output:
                prediction["proposals"] = output["proposals"].to(self._cpu_device)
            if len(prediction) > 1:
                self._predictions.append(prediction)

    def evaluate(self, img_ids=None):
        """
        Args:
            img_ids: a list of image IDs to evaluate on. Default to None for the whole dataset
        """
        if self._distributed:
            comm.synchronize()
            predictions = comm.gather(self._predictions, dst=0)
            predictions = list(itertools.chain(*predictions))

            if not comm.is_main_process():
                return {}
        else:
            predictions = self._predictions

        if len(predictions) == 0:
            self._logger.warning("[COCOEvaluator] Did not receive valid predictions.")
            return {}

        if self._output_dir:
            PathManager.mkdirs(self._output_dir)
            file_path = os.path.join(self._output_dir, "instances_predictions.pth")
            with PathManager.open(file_path, "wb") as f:
                torch.save(predictions, f)

        self._results = OrderedDict()
        if "proposals" in predictions[0]:
            self._eval_box_proposals(predictions)
        if "instances" in predictions[0]:
            self._eval_predictions(predictions, img_ids=img_ids)
        # Copy so the caller can do whatever with results
        return copy.deepcopy(self._results)

    def _tasks_from_predictions(self, predictions):
        """
        Get COCO API "tasks" (i.e. iou_type) from COCO-format predictions.
        """
        tasks = {"bbox"}
        for pred in predictions:
            if "segmentation" in pred:
                tasks.add("segm")
            if "keypoints" in pred:
                tasks.add("keypoints")
        return sorted(tasks)

    def _eval_predictions(self, predictions, img_ids=None):
        """
        Evaluate predictions. Fill self._results with the metrics of the tasks.
        """
        self._logger.info("Preparing results for COCO format ...")
        coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
        tasks = self._tasks or self._tasks_from_predictions(coco_results)

        # unmap the category ids for COCO
        if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
            dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id
            all_contiguous_ids = list(dataset_id_to_contiguous_id.values())
            num_classes = len(all_contiguous_ids)
            assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1

            reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()}
            for result in coco_results:
                category_id = result["category_id"]
                assert category_id < num_classes, (
                    f"A prediction has class={category_id}, "
                    f"but the dataset only has {num_classes} classes and "
                    f"predicted class id should be in [0, {num_classes - 1}]."
                )
                result["category_id"] = reverse_id_mapping[category_id]

        if self._output_dir:
            file_path = os.path.join(self._output_dir, "coco_instances_results.json")
            self._logger.info("Saving results to {}".format(file_path))
            with PathManager.open(file_path, "w") as f:
                f.write(json.dumps(coco_results))
                f.flush()

        if not self._do_evaluation:
            self._logger.info("Annotations are not available for evaluation.")
            return

        self._logger.info(
            "Evaluating predictions with {} COCO API...".format(
                "unofficial" if self._use_fast_impl else "official"
            )
        )
        for task in sorted(tasks):
            coco_eval = (
                _evaluate_predictions_on_coco(
                    self._coco_api,
                    coco_results,
                    task,
                    kpt_oks_sigmas=self._kpt_oks_sigmas,
                    use_fast_impl=self._use_fast_impl,
                    img_ids=img_ids,
                )
                if len(coco_results) > 0
                else None  # cocoapi does not handle empty results very well
            )

            res = self._derive_coco_results(
                coco_eval, task, class_names=self._metadata.get("thing_classes")
            )
            self._results[task] = res

    def _eval_box_proposals(self, predictions):
        """
        Evaluate the box proposals in predictions.
        Fill self._results with the metrics for "box_proposals" task.
        """
        if self._output_dir:
            # Saving generated box proposals to file.
            # Predicted box_proposals are in XYXY_ABS mode.
            bbox_mode = BoxMode.XYXY_ABS.value
            ids, boxes, objectness_logits = [], [], []
            for prediction in predictions:
                ids.append(prediction["image_id"])
                boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
                objectness_logits.append(prediction["proposals"].objectness_logits.numpy())

            proposal_data = {
                "boxes": boxes,
                "objectness_logits": objectness_logits,
                "ids": ids,
                "bbox_mode": bbox_mode,
            }
            with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
                pickle.dump(proposal_data, f)

        if not self._do_evaluation:
            self._logger.info("Annotations are not available for evaluation.")
            return

        self._logger.info("Evaluating bbox proposals ...")
        res = {}
        areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
        for limit in [100, 1000]:
            for area, suffix in areas.items():
                stats = _evaluate_box_proposals(predictions, self._coco_api, area=area, limit=limit)
                key = "AR{}@{:d}".format(suffix, limit)
                res[key] = float(stats["ar"].item() * 100)
        self._logger.info("Proposal metrics: \n" + create_small_table(res))
        self._results["box_proposals"] = res

    def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
        """
        Derive the desired score numbers from summarized COCOeval.
        Args:
            coco_eval (None or COCOEval): None represents no predictions from model.
            iou_type (str):
            class_names (None or list[str]): if provided, will use it to predict
                per-category AP.
        Returns:
            a dict of {metric name: score}
        """

        metrics = {
            "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
            "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
            "keypoints": ["AP", "AP50", "AP75", "APm", "APl"],
        }[iou_type]

        if coco_eval is None:
            self._logger.warn("No predictions from the model!")
            return {metric: float("nan") for metric in metrics}

        # the standard metrics
        results = {
            metric: float(coco_eval.stats[idx] * 100 if coco_eval.stats[idx] >= 0 else "nan")
            for idx, metric in enumerate(metrics)
        }
        self._logger.info(
            "Evaluation results for {}: \n".format(iou_type) + create_small_table(results)
        )
        if not np.isfinite(sum(results.values())):
            self._logger.info("Some metrics cannot be computed and is shown as NaN.")

        if class_names is None or len(class_names) <= 1:
            return results
        # Compute per-category AP
        # from https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L222-L252 # noqa
        precisions = coco_eval.eval["precision"]
        # precision has dims (iou, recall, cls, area range, max dets)
        assert len(class_names) == precisions.shape[2]

        results_per_category = []
        for idx, name in enumerate(class_names):
            # area range index 0: all area ranges
            # max dets index -1: typically 100 per image
            precision = precisions[:, :, idx, 0, -1]
            precision = precision[precision > -1]
            ap = np.mean(precision) if precision.size else float("nan")
            results_per_category.append(("{}".format(name), float(ap * 100)))

        # tabulate it
        N_COLS = min(6, len(results_per_category) * 2)
        results_flatten = list(itertools.chain(*results_per_category))
        results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)])
        table = tabulate(
            results_2d,
            tablefmt="pipe",
            floatfmt=".3f",
            headers=["category", "AP"] * (N_COLS // 2),
            numalign="left",
        )
        self._logger.info("Per-category {} AP: \n".format(iou_type) + table)

        results.update({"AP-" + name: ap for name, ap in results_per_category})
        return results


def instances_to_coco_json(instances, img_id):
    """
    Dump an "Instances" object to a COCO-format json that's used for evaluation.
    Args:
        instances (Instances):
        img_id (int): the image id
    Returns:
        list[dict]: list of json annotations in COCO format.
    """
    num_instance = len(instances)
    if num_instance == 0:
        return []

    boxes = instances.pred_boxes.tensor.numpy()
    boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
    boxes = boxes.tolist()
    scores = instances.scores.tolist()
    classes = instances.pred_classes.tolist()

    has_mask = instances.has("pred_masks")
    has_mask_scores = instances.has("mask_scores")
    if has_mask:
        # use RLE to encode the masks, because storing them raw takes too much
        # memory since this evaluator stores outputs of the entire dataset
        rles = [
            mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
            for mask in instances.pred_masks
        ]
        for rle in rles:
            # "counts" is an array encoded by mask_util as a byte-stream. Python3's
            # json writer which always produces strings cannot serialize a bytestream
            # unless you decode it. Thankfully, utf-8 works out (which is also what
            # the pycocotools/_mask.pyx does).
            rle["counts"] = rle["counts"].decode("utf-8")

        if has_mask_scores:
            mask_scores = instances.mask_scores.tolist()

    has_keypoints = instances.has("pred_keypoints")
    if has_keypoints:
        keypoints = instances.pred_keypoints

    results = []
    for k in range(num_instance):
        result = {
            "image_id": img_id,
            "category_id": classes[k],
            "bbox": boxes[k],
            "score": scores[k],
        }
        if has_mask:
            result["segmentation"] = rles[k]
            if has_mask_scores:
                result["mask_score"] = mask_scores[k]
        if has_keypoints:
            # In COCO annotations,
            # keypoints coordinates are pixel indices.
            # However our predictions are floating point coordinates.
            # Therefore we subtract 0.5 to be consistent with the annotation format.
            # This is the inverse of data loading logic in `datasets/coco.py`.
            keypoints[k][:, :2] -= 0.5
            result["keypoints"] = keypoints[k].flatten().tolist()
        results.append(result)
    return results


# inspired from Detectron:
# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=None, area="all", limit=None):
    """
    Evaluate detection proposal recall metrics. This function is a much
    faster alternative to the official COCO API recall evaluation code. However,
    it produces slightly different results.
    """
    # Record max overlap value for each gt box
    # Return vector of overlap values
    areas = {
        "all": 0,
        "small": 1,
        "medium": 2,
        "large": 3,
        "96-128": 4,
        "128-256": 5,
        "256-512": 6,
        "512-inf": 7,
    }
    area_ranges = [
        [0 ** 2, 1e5 ** 2],  # all
        [0 ** 2, 32 ** 2],  # small
        [32 ** 2, 96 ** 2],  # medium
        [96 ** 2, 1e5 ** 2],  # large
        [96 ** 2, 128 ** 2],  # 96-128
        [128 ** 2, 256 ** 2],  # 128-256
        [256 ** 2, 512 ** 2],  # 256-512
        [512 ** 2, 1e5 ** 2],
    ]  # 512-inf
    assert area in areas, "Unknown area range: {}".format(area)
    area_range = area_ranges[areas[area]]
    gt_overlaps = []
    num_pos = 0

    for prediction_dict in dataset_predictions:
        predictions = prediction_dict["proposals"]

        # sort predictions in descending order
        # TODO maybe remove this and make it explicit in the documentation
        inds = predictions.objectness_logits.sort(descending=True)[1]
        predictions = predictions[inds]

        ann_ids = coco_api.getAnnIds(imgIds=prediction_dict["image_id"])
        anno = coco_api.loadAnns(ann_ids)
        gt_boxes = [
            BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS)
            for obj in anno
            if obj["iscrowd"] == 0
        ]
        gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4)  # guard against no boxes
        gt_boxes = Boxes(gt_boxes)
        gt_areas = torch.as_tensor([obj["area"] for obj in anno if obj["iscrowd"] == 0])

        if len(gt_boxes) == 0 or len(predictions) == 0:
            continue

        valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
        gt_boxes = gt_boxes[valid_gt_inds]

        num_pos += len(gt_boxes)

        if len(gt_boxes) == 0:
            continue

        if limit is not None and len(predictions) > limit:
            predictions = predictions[:limit]

        overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)

        _gt_overlaps = torch.zeros(len(gt_boxes))
        for j in range(min(len(predictions), len(gt_boxes))):
            # find which proposal box maximally covers each gt box
            # and get the iou amount of coverage for each gt box
            max_overlaps, argmax_overlaps = overlaps.max(dim=0)

            # find which gt box is 'best' covered (i.e. 'best' = most iou)
            gt_ovr, gt_ind = max_overlaps.max(dim=0)
            assert gt_ovr >= 0
            # find the proposal box that covers the best covered gt box
            box_ind = argmax_overlaps[gt_ind]
            # record the iou coverage of this gt box
            _gt_overlaps[j] = overlaps[box_ind, gt_ind]
            assert _gt_overlaps[j] == gt_ovr
            # mark the proposal box and the gt box as used
            overlaps[box_ind, :] = -1
            overlaps[:, gt_ind] = -1

        # append recorded iou coverage level
        gt_overlaps.append(_gt_overlaps)
    gt_overlaps = (
        torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32)
    )
    gt_overlaps, _ = torch.sort(gt_overlaps)

    if thresholds is None:
        step = 0.05
        thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
    recalls = torch.zeros_like(thresholds)
    # compute recall for each iou threshold
    for i, t in enumerate(thresholds):
        recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
    # ar = 2 * np.trapz(recalls, thresholds)
    ar = recalls.mean()
    return {
        "ar": ar,
        "recalls": recalls,
        "thresholds": thresholds,
        "gt_overlaps": gt_overlaps,
        "num_pos": num_pos,
    }


def _evaluate_predictions_on_coco(
    coco_gt, coco_results, iou_type, kpt_oks_sigmas=None, use_fast_impl=True, img_ids=None
):
    """
    Evaluate the coco results using COCOEval API.
    """
    assert len(coco_results) > 0

    if iou_type == "segm":
        coco_results = copy.deepcopy(coco_results)
        # When evaluating mask AP, if the results contain bbox, cocoapi will
        # use the box area as the area of the instance, instead of the mask area.
        # This leads to a different definition of small/medium/large.
        # We remove the bbox field to let mask AP use mask area.
        has_mask_scores = "mask_score" in coco_results[0]

        for c in coco_results:
            c.pop("bbox", None)
            if has_mask_scores:
                c["score"] = c["mask_score"]
                del c["mask_score"]

    coco_dt = coco_gt.loadRes(coco_results)
    coco_eval = (COCOeval_opt if use_fast_impl else COCOeval)(coco_gt, coco_dt, iou_type)
    if img_ids is not None:
        coco_eval.params.imgIds = img_ids

    if iou_type == "keypoints":
        # Use the COCO default keypoint OKS sigmas unless overrides are specified
        if kpt_oks_sigmas:
            assert hasattr(coco_eval.params, "kpt_oks_sigmas"), "pycocotools is too old!"
            coco_eval.params.kpt_oks_sigmas = np.array(kpt_oks_sigmas)
        # COCOAPI requires every detection and every gt to have keypoints, so
        # we just take the first entry from both
        num_keypoints_dt = len(coco_results[0]["keypoints"]) // 3
        num_keypoints_gt = len(next(iter(coco_gt.anns.values()))["keypoints"]) // 3
        num_keypoints_oks = len(coco_eval.params.kpt_oks_sigmas)
        assert num_keypoints_oks == num_keypoints_dt == num_keypoints_gt, (
            f"[COCOEvaluator] Prediction contain {num_keypoints_dt} keypoints. "
            f"Ground truth contains {num_keypoints_gt} keypoints. "
            f"The length of cfg.TEST.KEYPOINT_OKS_SIGMAS is {num_keypoints_oks}. "
            "They have to agree with each other. For meaning of OKS, please refer to "
            "http://cocodataset.org/#keypoints-eval."
        )


    # Inserted code to increase the number of detections possible /Christoffer:

    def summarize_2(self, all_prec=False):
            '''
            Compute and display summary metrics for evaluation results.
            Note this function can *only* be applied on the default parameter setting
            '''

            print("In method")
            def _summarize(ap=1, iouThr=None, areaRng='all', maxDets=2000):
                p = self.params
                iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'
                titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
                typeStr = '(AP)' if ap == 1 else '(AR)'
                iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \
                    if iouThr is None else '{:0.2f}'.format(iouThr)

                aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]
                mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]
                if ap == 1:
                    # dimension of precision: [TxRxKxAxM]
                    s = self.eval['precision']
                    # IoU
                    if iouThr is not None:
                        t = np.where(iouThr == p.iouThrs)[0]
                        s = s[t]
                    s = s[:, :, :, aind, mind]
                else:
                    # dimension of recall: [TxKxAxM]
                    s = self.eval['recall']
                    if iouThr is not None:
                        t = np.where(iouThr == p.iouThrs)[0]
                        s = s[t]
                    s = s[:, :, aind, mind]
                if len(s[s > -1]) == 0:
                    mean_s = -1
                else:
                    mean_s = np.mean(s[s > -1])
                print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))
                return mean_s

            def _summarizeDets():

                stats = np.zeros((12,))
                stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
                stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])
                stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2])
                stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2])
                stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2])
                stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2])
                stats[6] = _summarize(0, maxDets=self.params.maxDets[0])
                stats[7] = _summarize(0, maxDets=self.params.maxDets[1])
                stats[8] = _summarize(0, maxDets=self.params.maxDets[2])
                stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2])
                stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2])
                stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2])
                return stats


            def _summarizeKps():
                stats = np.zeros((10,))
                stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
                stats[1] = _summarize(1, maxDets=self.params.maxDets[2], iouThr=.5)
                stats[2] = _summarize(1, maxDets=self.params.maxDets[2], iouThr=.75)
                stats[3] = _summarize(1, maxDets=self.params.maxDets[2], areaRng='medium')
                stats[4] = _summarize(1, maxDets=self.params.maxDets[2], areaRng='large')
                stats[5] = _summarize(0, maxDets=self.params.maxDets[2])
                stats[6] = _summarize(0, maxDets=self.params.maxDets[2], iouThr=.5)
                stats[7] = _summarize(0, maxDets=self.params.maxDets[2], iouThr=.75)
                stats[8] = _summarize(0, maxDets=self.params.maxDets[2], areaRng='medium')
                stats[9] = _summarize(0, maxDets=self.params.maxDets[2], areaRng='large')
                return stats

            if not self.eval:
                raise Exception('Please run accumulate() first')
            iouType = self.params.iouType
            if iouType == 'segm' or iouType == 'bbox':
                summarize = _summarizeDets
            elif iouType == 'keypoints':
                summarize = _summarizeKps
            self.stats = summarize()

    # LIVECell-specific evaluation settings: evaluate class-agnostically,
    # raise the per-image detection cap from COCO's default of 100 to 2000
    # (confluent cultures contain far more cells than 100), and redefine the
    # small/medium/large area ranges on a cell scale (18 and 31 pixel side lengths).
    coco_eval.params.catIds = [1]
    coco_eval.params.useCats = 0
    coco_eval.params.maxDets = [100, 500, 2000]

    coco_eval.params.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 18 ** 2], [18 ** 2, 31 ** 2], [31 ** 2, 1e5 ** 2]]
    coco_eval.params.areaRngLbl = ['all', 'small', 'medium', 'large']

    print(f"Size parameters: {coco_eval.params.areaRng}")

    coco_eval.summarize = types.MethodType(summarize_2, coco_eval)

    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()

    """
      Added code to produce precision and recall for all iou levels / Chris
    """
    precisions = coco_eval.eval['precision']
    recalls = coco_eval.eval['recall']

    # precision dims: IoU threshold | recall | categories | area ranges | max dets
    pre_per_iou = [precisions[iou_idx, :, :, 0, -1].mean() for iou_idx in range(precisions.shape[0])]
    rec_per_iou = [recalls[iou_idx, :, 0, -1].mean() for iou_idx in range(recalls.shape[0])]

    print(f"Precision and Recall per IoU \n IoU: {coco_eval.params.iouThrs}")
    print(f"Pre: {np.round(np.array(pre_per_iou), 4)}")
    print(f"Rec: {np.round(np.array(rec_per_iou), 4)}")


    return coco_eval
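
A hypothetical usage sketch of this evaluator with detectron2's standard evaluation loop. The dataset
name, annotation/image paths, and weight filename below are placeholders, and it assumes the repo's
anchor-based configs follow detectron2's standard config schema:

```python
from detectron2.config import get_cfg
from detectron2.data import build_detection_test_loader
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import inference_on_dataset

from coco_evaluation import COCOEvaluator  # the evaluator defined in this file

# Register the LIVECell test split (paths are placeholders)
register_coco_instances(
    "livecell_test", {}, "livecell_coco_test.json", "images/livecell_test_images"
)

cfg = get_cfg()
cfg.merge_from_file("model/anchor_based/livecell_config.yaml")
cfg.MODEL.WEIGHTS = "LIVECell_anchor_based.pth"  # downloaded LIVECell weights
cfg.DATASETS.TEST = ("livecell_test",)

predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("livecell_test", output_dir="./eval_output")
loader = build_detection_test_loader(cfg, "livecell_test")
print(inference_on_dataset(predictor.model, loader, evaluator))
```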


================================================
FILE: code/coco_evaluation_resnest.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# Modified by Sangrok Lee and Youngwan Lee (ETRI), 2020. All Rights Reserved.
# Modified by Christoffer Edlund (Sartorius), 2020. All Rights Reserved.
import types
import contextlib
import copy
import io
import itertools
import json
import logging
import numpy as np
import os
import pickle
from collections import OrderedDict
import pycocotools.mask as mask_util
import torch
from fvcore.common.file_io import PathManager
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from tabulate import tabulate

import detectron2.utils.comm as comm
from detectron2.data import MetadataCatalog
from detectron2.data.datasets.coco import convert_to_coco_json
from detectron2.structures import Boxes, BoxMode, pairwise_iou
from detectron2.utils.logger import create_small_table

from detectron2.evaluation.evaluator import DatasetEvaluator
#from detectron2.evaluation.fast_eval_api import COCOeval_opt
_use_fast_impl = True
try:
    from fast_coco_eval import COCOeval_fast as COCOeval_opt
except ImportError:
    print(f"Could not find fast coco implementation")
    _use_fast_impl = False

class COCOEvaluator(DatasetEvaluator):
    """
    Evaluate object proposal, instance detection/segmentation, keypoint detection
    outputs using COCO's metrics and APIs.
    """

    def __init__(self, dataset_name, cfg, distributed, output_dir=None):
        """
        Args:
            dataset_name (str): name of the dataset to be evaluated.
                It must have either the following corresponding metadata:
                    "json_file": the path to the COCO format annotation
                Or it must be in detectron2's standard dataset format
                so it can be converted to COCO format automatically.
            cfg (CfgNode): config instance
            distributed (True): if True, will collect results from all ranks for evaluation.
                Otherwise, will evaluate the results in the current process.
            output_dir (str): optional, an output directory to dump all
                results predicted on the dataset. The dump contains two files:
                1. "instance_predictions.pth" a file in torch serialization
                   format that contains all the raw original predictions.
                2. "coco_instances_results.json" a json file in COCO's result
                   format.
        """
        print("__init__")
        self._tasks = self._tasks_from_config(cfg)
        self._distributed = distributed
        self._output_dir = output_dir

        self._cpu_device = torch.device("cpu")
        self._logger = logging.getLogger(__name__)

        self._metadata = MetadataCatalog.get(dataset_name)
        if not hasattr(self._metadata, "json_file"):
            self._logger.warning(
                f"json_file was not found in MetaDataCatalog for '{dataset_name}'."
                " Trying to convert it to COCO format ..."
            )

            cache_path = os.path.join(output_dir, f"{dataset_name}_coco_format.json")
            self._metadata.json_file = cache_path
            convert_to_coco_json(dataset_name, cache_path)

        json_file = PathManager.get_local_path(self._metadata.json_file)
        with contextlib.redirect_stdout(io.StringIO()):
            self._coco_api = COCO(json_file)

        self._kpt_oks_sigmas = cfg.TEST.KEYPOINT_OKS_SIGMAS
        # Test set json files do not contain annotations (evaluation must be
        # performed using the COCO evaluation server).
        self._do_evaluation = "annotations" in self._coco_api.dataset

    def reset(self):
        self._predictions = []

    def _tasks_from_config(self, cfg):
        """
        Returns:
            tuple[str]: tasks that can be evaluated under the given configuration.
        """
        print("_tasks_from_config")
        tasks = ("bbox",)
        if cfg.MODEL.MASK_ON:
            tasks = tasks + ("segm",)
        if cfg.MODEL.KEYPOINT_ON:
            tasks = tasks + ("keypoints",)
        print(f"tasks: {tasks}")
        return tasks

    def process(self, inputs, outputs):
        """
        Args:
            inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
                It is a list of dict. Each dict corresponds to an image and
                contains keys like "height", "width", "file_name", "image_id".
            outputs: the outputs of a COCO model. It is a list of dicts with key
                "instances" that contains :class:`Instances`.
        """

        for input, output in zip(inputs, outputs):
            prediction = {"image_id": input["image_id"]}

            # TODO this is ugly
            if "instances" in output:
                instances = output["instances"].to(self._cpu_device)
                prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
            if "proposals" in output:
                prediction["proposals"] = output["proposals"].to(self._cpu_device)
            self._predictions.append(prediction)

    def evaluate(self):

        print("evaluate")

        if self._distributed:
            comm.synchronize()
            predictions = comm.gather(self._predictions, dst=0)
            predictions = list(itertools.chain(*predictions))

            if not comm.is_main_process():
                return {}
        else:
            predictions = self._predictions

        if len(predictions) == 0:
            self._logger.warning("[COCOEvaluator] Did not receive valid predictions.")
            return {}

        if self._output_dir:
            PathManager.mkdirs(self._output_dir)
            file_path = os.path.join(self._output_dir, "instances_predictions.pth")
            with PathManager.open(file_path, "wb") as f:
                torch.save(predictions, f)

        self._results = OrderedDict()
        if "proposals" in predictions[0]:
            self._eval_box_proposals(predictions)
        if "instances" in predictions[0]:
            self._eval_predictions(set(self._tasks), predictions)
        # Copy so the caller can do whatever with results
        return copy.deepcopy(self._results)

    def _eval_predictions(self, tasks, predictions):
        """
        Evaluate predictions on the given tasks.
        Fill self._results with the metrics of the tasks.
        """

        print("_eval_predictions")
        print(f"use_fast_impl: {_use_fast_impl}")

        self._logger.info("Preparing results for COCO format ...")
        coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))

        # unmap the category ids for COCO
        if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
            reverse_id_mapping = {
                v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
            }
            for result in coco_results:
                category_id = result["category_id"]
                assert (
                        category_id in reverse_id_mapping
                ), "A prediction has category_id={}, which is not available in the dataset.".format(
                    category_id
                )
                result["category_id"] = reverse_id_mapping[category_id]

        if self._output_dir:
            file_path = os.path.join(self._output_dir, "coco_instances_results.json")
            self._logger.info("Saving results to {}".format(file_path))
            with PathManager.open(file_path, "w") as f:
                f.write(json.dumps(coco_results))
                f.flush()

        if not self._do_evaluation:
            self._logger.info("Annotations are not available for evaluation.")
            return

        self._logger.info("Evaluating predictions ...")
        for task in sorted(tasks):
            coco_eval = (
                _evaluate_predictions_on_coco(
                    self._coco_api, coco_results, task, kpt_oks_sigmas=self._kpt_oks_sigmas
                )
                if len(coco_results) > 0
                else None  # cocoapi does not handle empty results very well
            )

            res = self._derive_coco_results(
                coco_eval, task, class_names=self._metadata.get("thing_classes")
            )
            self._results[task] = res

    def _eval_box_proposals(self, predictions):
        """
        Evaluate the box proposals in predictions.
        Fill self._results with the metrics for "box_proposals" task.
        """
        print("_eval_box_proposals")
        if self._output_dir:
            # Saving generated box proposals to file.
            # Predicted box_proposals are in XYXY_ABS mode.
            bbox_mode = BoxMode.XYXY_ABS.value
            ids, boxes, objectness_logits = [], [], []
            for prediction in predictions:
                ids.append(prediction["image_id"])
                boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
                objectness_logits.append(prediction["proposals"].objectness_logits.numpy())

            proposal_data = {
                "boxes": boxes,
                "objectness_logits": objectness_logits,
                "ids": ids,
                "bbox_mode": bbox_mode,
            }
            with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
                pickle.dump(proposal_data, f)

        if not self._do_evaluation:
            self._logger.info("Annotations are not available for evaluation.")
            return

        self._logger.info("Evaluating bbox proposals ...")
        res = {}
        areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
        for limit in [100, 1000]:
            for area, suffix in areas.items():
                stats = _evaluate_box_proposals(predictions, self._coco_api, area=area, limit=limit)
                key = "AR{}@{:d}".format(suffix, limit)
                res[key] = float(stats["ar"].item() * 100)
        self._logger.info("Proposal metrics: \n" + create_small_table(res))
        self._results["box_proposals"] = res

    def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
        """
        Derive the desired score numbers from summarized COCOeval.
        Args:
            coco_eval (None or COCOEval): None represents no predictions from model.
            iou_type (str):
            class_names (None or list[str]): if provided, will use it to predict
                per-category AP.
        Returns:
            a dict of {metric name: score}
        """
        print("_derive_coco_results")

        metrics = {
            "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
            "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
            "keypoints": ["AP", "AP50", "AP75", "APm", "APl"],
        }[iou_type]

        if coco_eval is None:
            self._logger.warn("No predictions from the model!")
            return {metric: float("nan") for metric in metrics}

        # the standard metrics
        results = {
            metric: float(coco_eval.stats[idx] * 100 if coco_eval.stats[idx] >= 0 else "nan")
            for idx, metric in enumerate(metrics)
        }
        self._logger.info(
            "Evaluation results for {}: \n".format(iou_type) + create_small_table(results)
        )
        if not np.isfinite(sum(results.values())):
            self._logger.info("Note that some metrics cannot be computed.")

        if class_names is None or len(class_names) <= 1:
            return results
        # Compute per-category AP
        # from https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L222-L252 # noqa
        precisions = coco_eval.eval["precision"]
        # precision has dims (iou, recall, cls, area range, max dets)
        assert len(class_names) == precisions.shape[2]

        results_per_category = []
        for idx, name in enumerate(class_names):
            # area range index 0: all area ranges
            # max dets index -1: typically 100 per image
            precision = precisions[:, :, idx, 0, -1]
            precision = precision[precision > -1]
            ap = np.mean(precision) if precision.size else float("nan")
            results_per_category.append(("{}".format(name), float(ap * 100)))

        # tabulate it
        N_COLS = min(6, len(results_per_category) * 2)
        results_flatten = list(itertools.chain(*results_per_category))
        results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)])
        table = tabulate(
            results_2d,
            tablefmt="pipe",
            floatfmt=".3f",
            headers=["category", "AP"] * (N_COLS // 2),
            numalign="left",
        )
        self._logger.info("Per-category {} AP: \n".format(iou_type) + table)

        results.update({"AP-" + name: ap for name, ap in results_per_category})
        return results


def instances_to_coco_json(instances, img_id):
    """
    Dump an "Instances" object to a COCO-format json that's used for evaluation.
    Args:
        instances (Instances):
        img_id (int): the image id
    Returns:
        list[dict]: list of json annotations in COCO format.
    """

    num_instance = len(instances)
    if num_instance == 0:
        return []

    boxes = instances.pred_boxes.tensor.numpy()
    boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
    boxes = boxes.tolist()
    scores = instances.scores.tolist()
    classes = instances.pred_classes.tolist()

    has_mask = instances.has("pred_masks")
    has_mask_scores = instances.has("mask_scores")
    if has_mask:
        # use RLE to encode the masks, because storing them raw takes too much
        # memory given that this evaluator stores outputs for the entire dataset
        rles = [
            mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
            for mask in instances.pred_masks
        ]
        for rle in rles:
            # "counts" is an array encoded by mask_util as a byte-stream. Python3's
            # json writer which always produces strings cannot serialize a bytestream
            # unless you decode it. Thankfully, utf-8 works out (which is also what
            # the pycocotools/_mask.pyx does).
            rle["counts"] = rle["counts"].decode("utf-8")

        if has_mask_scores:
            mask_scores = instances.mask_scores.tolist()

    has_keypoints = instances.has("pred_keypoints")
    if has_keypoints:
        keypoints = instances.pred_keypoints

    results = []
    for k in range(num_instance):
        result = {
            "image_id": img_id,
            "category_id": classes[k],
            "bbox": boxes[k],
            "score": scores[k],
        }
        if has_mask:
            result["segmentation"] = rles[k]
            if has_mask_scores:
                result["mask_score"] = mask_scores[k]

        if has_keypoints:
            # In COCO annotations,
            # keypoints coordinates are pixel indices.
            # However our predictions are floating point coordinates.
            # Therefore we subtract 0.5 to be consistent with the annotation format.
            # This is the inverse of data loading logic in `datasets/coco.py`.
            keypoints[k][:, :2] -= 0.5
            result["keypoints"] = keypoints[k].flatten().tolist()
        results.append(result)
    return results


# inspired from Detectron:
# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=None, area="all", limit=None):
    """
    Evaluate detection proposal recall metrics. This function is a much
    faster alternative to the official COCO API recall evaluation code. However,
    it produces slightly different results.
    """
#    print("_evaluate_box_proposals")
    # Record max overlap value for each gt box
    # Return vector of overlap values
    areas = {
        "all": 0,
        "small": 1,
        "medium": 2,
        "large": 3,
        "96-128": 4,
        "128-256": 5,
        "256-512": 6,
        "512-inf": 7,
    }


    area_ranges = [
        [0 ** 2, 1e5 ** 2],  # all
        [0 ** 2, 18 ** 2],  # small org: 0 - 32
        [18 ** 2, 31 ** 2],  # medium org: 32 - 96
        [31 ** 2, 1e5 ** 2],  # large org: 96 - 1e5
        [31 ** 2, 128 ** 2],  # org: 96-128
        [128 ** 2, 256 ** 2],  # 128-256
        [256 ** 2, 512 ** 2],  # 256-512
        [512 ** 2, 1e5 ** 2],
    ]  # 512-inf

    """
    area_ranges = [
        [0 ** 2, 1e5 ** 2],  # all
        [0 ** 2, 28 ** 2],  # small org: 0 - 32
        [28 ** 2, 94 ** 2],  # medium org: 32 - 96
        [94 ** 2, 1e5 ** 2],  # large org: 96 - 1e5 - our 64
        [94 ** 2, 128 ** 2],  #  org: 96-128
        [128 ** 2, 256 ** 2],  # 128-256
        [256 ** 2, 512 ** 2],  # 256-512
        [512 ** 2, 1e5 ** 2],
    ]  # 512-inf
    """
    assert area in areas, "Unknown area range: {}".format(area)
    area_range = area_ranges[areas[area]]
    gt_overlaps = []
    num_pos = 0

    for prediction_dict in dataset_predictions:
        predictions = prediction_dict["proposals"]

        # sort predictions in descending order
        # TODO maybe remove this and make it explicit in the documentation
        inds = predictions.objectness_logits.sort(descending=True)[1]
        predictions = predictions[inds]

        ann_ids = coco_api.getAnnIds(imgIds=prediction_dict["image_id"])
        anno = coco_api.loadAnns(ann_ids)
        gt_boxes = [
            BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS)
            for obj in anno
            if obj["iscrowd"] == 0
        ]
        gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4)  # guard against no boxes
        gt_boxes = Boxes(gt_boxes)
        gt_areas = torch.as_tensor([obj["area"] for obj in anno if obj["iscrowd"] == 0])

        if len(gt_boxes) == 0 or len(predictions) == 0:
            continue

        valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
        gt_boxes = gt_boxes[valid_gt_inds]

        num_pos += len(gt_boxes)

        if len(gt_boxes) == 0:
            continue

        if limit is not None and len(predictions) > limit:
            predictions = predictions[:limit]

        overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)

        _gt_overlaps = torch.zeros(len(gt_boxes))
        for j in range(min(len(predictions), len(gt_boxes))):
            # find which proposal box maximally covers each gt box
            # and get the iou amount of coverage for each gt box
            max_overlaps, argmax_overlaps = overlaps.max(dim=0)

            # find which gt box is 'best' covered (i.e. 'best' = most iou)
            gt_ovr, gt_ind = max_overlaps.max(dim=0)
            assert gt_ovr >= 0
            # find the proposal box that covers the best covered gt box
            box_ind = argmax_overlaps[gt_ind]
            # record the iou coverage of this gt box
            _gt_overlaps[j] = overlaps[box_ind, gt_ind]
            assert _gt_overlaps[j] == gt_ovr
            # mark the proposal box and the gt box as used
            overlaps[box_ind, :] = -1
            overlaps[:, gt_ind] = -1

        # append recorded iou coverage level
        gt_overlaps.append(_gt_overlaps)
    gt_overlaps = torch.cat(gt_overlaps, dim=0)
    gt_overlaps, _ = torch.sort(gt_overlaps)

    if thresholds is None:
        step = 0.05
        thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
    recalls = torch.zeros_like(thresholds)
    # compute recall for each iou threshold
    for i, t in enumerate(thresholds):
        recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
    # ar = 2 * np.trapz(recalls, thresholds)
    ar = recalls.mean()
    return {
        "ar": ar,
        "recalls": recalls,
        "thresholds": thresholds,
        "gt_overlaps": gt_overlaps,
        "num_pos": num_pos,
    }


def _evaluate_predictions_on_coco(coco_gt, coco_results, iou_type, kpt_oks_sigmas=None, use_fast_impl=False):
    """
    Evaluate the coco results using COCOEval API.
    """
#    print("_evaluate_predictions_on_coco")
    assert len(coco_results) > 0

    # Code inserted to increase the maximum number of detections per image / Christoffer:

    def summarize_2(self, all_prec=False):
            '''
            Compute and display summary metrics for evaluation results.
            Note this function can *only* be applied on the default parameter setting
            '''

            print("In method")
            def _summarize(ap=1, iouThr=None, areaRng='all', maxDets=2000):
                p = self.params
                iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'
                titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
                typeStr = '(AP)' if ap == 1 else '(AR)'
                iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \
                    if iouThr is None else '{:0.2f}'.format(iouThr)

                aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]
                mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]
                if ap == 1:
                    # dimension of precision: [TxRxKxAxM]
                    s = self.eval['precision']
                    # IoU
                    if iouThr is not None:
                        t = np.where(iouThr == p.iouThrs)[0]
                        s = s[t]
                    s = s[:, :, :, aind, mind]
                else:
                    # dimension of recall: [TxKxAxM]
                    s = self.eval['recall']
                    if iouThr is not None:
                        t = np.where(iouThr == p.iouThrs)[0]
                        s = s[t]
                    s = s[:, :, aind, mind]
                if len(s[s > -1]) == 0:
                    mean_s = -1
                else:
                    mean_s = np.mean(s[s > -1])
                print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))
                return mean_s

            def _summarizeDets():

                stats = np.zeros((12,))
                stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
                stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])
                stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2])
                stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2])
                stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2])
                stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2])
                stats[6] = _summarize(0, maxDets=self.params.maxDets[0])
                stats[7] = _summarize(0, maxDets=self.params.maxDets[1])
                stats[8] = _summarize(0, maxDets=self.params.maxDets[2])
                stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2])
                stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2])
                stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2])
                return stats


            def _summarizeKps():
                stats = np.zeros((10,))
                stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
                stats[1] = _summarize(1, maxDets=self.params.maxDets[2], iouThr=.5)
                stats[2] = _summarize(1, maxDets=self.params.maxDets[2], iouThr=.75)
                stats[3] = _summarize(1, maxDets=self.params.maxDets[2], areaRng='medium')
                stats[4] = _summarize(1, maxDets=self.params.maxDets[2], areaRng='large')
                stats[5] = _summarize(0, maxDets=self.params.maxDets[2])
                stats[6] = _summarize(0, maxDets=self.params.maxDets[2], iouThr=.5)
                stats[7] = _summarize(0, maxDets=self.params.maxDets[2], iouThr=.75)
                stats[8] = _summarize(0, maxDets=self.params.maxDets[2], areaRng='medium')
                stats[9] = _summarize(0, maxDets=self.params.maxDets[2], areaRng='large')
                return stats

            if not self.eval:
                raise Exception('Please run accumulate() first')
            iouType = self.params.iouType
            if iouType == 'segm' or iouType == 'bbox':
                summarize = _summarizeDets
            elif iouType == 'keypoints':
                summarize = _summarizeKps
            self.stats = summarize()


    if iou_type == "segm":
        coco_results = copy.deepcopy(coco_results)
        # When evaluating mask AP, if the results contain bbox, cocoapi will
        # use the box area as the area of the instance, instead of the mask area.
        # This leads to a different definition of small/medium/large.
        # We remove the bbox field to let mask AP use mask area.
        # We also replace `score` with `mask_score` when using mask scoring.
        has_mask_scores = "mask_score" in coco_results[0]

        for c in coco_results:
            c.pop("bbox", None)
            if has_mask_scores:
                c["score"] = c["mask_score"]
                del c["mask_score"]

    coco_dt = coco_gt.loadRes(coco_results)
    coco_eval = (COCOeval_opt if _use_fast_impl else COCOeval)(coco_gt, coco_dt, iou_type)
    # Use the COCO default keypoint OKS sigmas unless overrides are specified
    if kpt_oks_sigmas:
        coco_eval.params.kpt_oks_sigmas = np.array(kpt_oks_sigmas)

    if iou_type == "keypoints":
        num_keypoints = len(coco_results[0]["keypoints"]) // 3
        assert len(coco_eval.params.kpt_oks_sigmas) == num_keypoints, (
            "[COCOEvaluator] The length of cfg.TEST.KEYPOINT_OKS_SIGMAS (default: 17) "
            "must be equal to the number of keypoints. However the prediction has {} "
            "keypoints! For more information please refer to "
            "http://cocodataset.org/#keypoints-eval.".format(num_keypoints)
        )

    coco_eval.params.catIds = [1]
    coco_eval.params.useCats = 0
    coco_eval.params.maxDets = [100, 500, 2000]

    coco_eval.params.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 18 ** 2], [18 ** 2, 31 ** 2], [31 ** 2, 1e5 ** 2]]
    coco_eval.params.areaRngLbl = ['all', 'small', 'medium', 'large']
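    # Note: the COCO default small/medium/large area cut-offs (32**2 and 96**2)
    # are replaced by 18**2 and 31**2 here, presumably to better match the
    # smaller footprint of individual cells.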

    print(f"Size parameters: {coco_eval.params.areaRng}")

    coco_eval.summarize = types.MethodType(summarize_2, coco_eval)

    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()

    """
    Added code to produce precision and recall for all iou levels / Chris
    """
    precisions = coco_eval.eval['precision']
    recalls = coco_eval.eval['recall']

    # precision dims: [IoU thresholds x recall levels x categories x area ranges x max dets]
    # recall dims:    [IoU thresholds x categories x area ranges x max dets]
    pre_per_iou = [precisions[iou_idx, :, :, 0, -1].mean() for iou_idx in range(precisions.shape[0])]
    rec_per_iou = [recalls[iou_idx, :, 0, -1].mean() for iou_idx in range(recalls.shape[0])]

    print(f"Precision and Recall per iou: {coco_eval.params.iouThrs}")
    print(np.round(np.array(pre_per_iou), 4))
    print(np.round(np.array(rec_per_iou), 4))

    return coco_eval


================================================
FILE: code/preprocessing.py
================================================
import cv2
import numpy as np

#this function is designed to adapt images acquired with other light microscopy modalities
#in order to enable inference with LIVECell trained models

#input_image = uint8 numpy array
#magnification_downsample_factor = downsample factor needed to make image effective 10X
#   for 40x images, use 0.25
#   for 20x images, use 0.5
#   for 10x images, use 1

def preprocess(input_image, magnification_downsample_factor=1.0): 
    #internal variables
    #   median_radius_raw = used in the background illumination pattern estimation. 
    #       this radius should be larger than the radius of a single cell
    #   target_median = 128 -- LIVECell phase contrast images all center around a 128 intensity
    median_radius_raw = 75
    target_median = 128.0
    
    #large median filter kernel size is dependent on resize factor, and must also be odd
    median_radius = round(median_radius_raw*magnification_downsample_factor)
    if median_radius%2==0:
        median_radius=median_radius+1

    #scale so the median image intensity is 128
    input_median = np.median(input_image)
    intensity_scale = target_median/input_median
    output_image = input_image.astype('float')*intensity_scale

    #define dimensions of the downsampled image
    dims = input_image.shape
    y = int(dims[0]*magnification_downsample_factor)
    x = int(dims[1]*magnification_downsample_factor)

    #resize the image to account for different magnifications
    output_image = cv2.resize(output_image, (x,y), interpolation = cv2.INTER_AREA)
    
    #clip here to regular 0-255 range to avoid any odd median filter results
    output_image[output_image > 255] = 255
    output_image[output_image < 0] = 0

    #estimate background illumination pattern using the large median filter
    background = cv2.medianBlur(output_image.astype('uint8'), median_radius)
    output_image = output_image.astype('float')/background.astype('float')*target_median

    #clipping for Zernike phase halo artifacts
    output_image[output_image > 180] = 180
    output_image[output_image < 70] = 70
    output_image = output_image.astype('uint8')

    return output_image

def preprocess_fluorescence(input_image, bInvert=True, magnification_downsample_factor=1.0): 
    #invert to bring background up to 128
    img = (255-input_image)/2
    if not bInvert:
        img = 255-img
    output_image = preprocess(img, magnification_downsample_factor=magnification_downsample_factor) 
    return output_image
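
#example usage (illustrative file name; assumes a uint8 image acquired at 20x):
#   img = cv2.imread('fluorescence_20x.tif', cv2.IMREAD_GRAYSCALE)
#   out = preprocess_fluorescence(img, bInvert=True, magnification_downsample_factor=0.5)
#the result centers the background around the 128 intensity level that
#LIVECell-trained models expect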

================================================
FILE: model/README.md
================================================
# Usage

The anchor-free models used in this benchmark are based on the [centermask2](https://github.com/youngwanLEE/centermask2#evaluation) architecture and the anchor-based models are 
based on the [detectron2-ResNeSt](https://github.com/chongruo/detectron2-ResNeSt/blob/resnest/GETTING_STARTED.md) architecture, 
both of which are built upon the [detectron2](https://github.com/facebookresearch/detectron2) library.

The models in the LIVECell paper were trained on 8 Nvidia V100 GPUs.
To help others reproduce our results and use the models for further research, we provide pre-trained models and config files.

<table class="tg">
  <tr>
    <th class="tg-0pky">Architecture</th>
    <th class="tg-0pky">Dataset</th>
    <th class="tg-0pky">Box mAP%</th>
    <th class="tg-0pky">Mask mAP%</th>
    <th class="tg-0pky">download</th>
  </tr>
  <tr>
    <td rowspan="9" class="tg-0pky">Anchor free</td>
    <td class="tg-0pky">LIVECell</td>
    <td class="tg-0pky">48.45</td>
    <td class="tg-0pky">47.78</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/livecell_config.yaml">config</a> | <a href="http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/ALL/LIVECell_anchor_free_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">A172</td>
    <td class="tg-0pky">31.49</td>
    <td class="tg-0pky">34.57</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/a172_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/A172/LIVECell_anchor_free_a172_model.pth">model </a> 
  </tr>
   <tr>
    <td class="tg-0pky">BT-474</td>
    <td class="tg-0pky">42.12</td>
    <td class="tg-0pky">42.60</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/bt474_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/BT474/LIVECell_anchor_free_bt474_model.pth ">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">BV-2</td>
    <td class="tg-0pky">42.62</td>
    <td class="tg-0pky">45.69</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/bv2_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/BV2/LIVECell_anchor_free_bv2_model.pth">model </a> 
  </tr>
   <tr>
    <td class="tg-0pky">Huh7</td>
    <td class="tg-0pky">42.44</td>
    <td class="tg-0pky">45.85</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/huh7_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/HUH7/LIVECell_anchor_free_huh7_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">MCF7</td>
    <td class="tg-0pky">36.53</td>
    <td class="tg-0pky">37.30 </td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/mcf7_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/MCF7/LIVECell_anchor_free_mcf7_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">SH-SY5Y</td>
    <td class="tg-0pky">25.20</td>
    <td class="tg-0pky">23.91</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/shsy5y_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/SHSY5Y/LIVECell_anchor_free_shsy5y_model.pth">model </a>
  </tr>
  <tr>
    <td class="tg-0pky">SkBr3</td>
    <td class="tg-0pky">64.35</td>
    <td class="tg-0pky">65.85</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/skbr3_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/SKBR3/LIVECell_anchor_free_skbr3_model.pth">model </a>
  </tr>
  <tr>
    <td class="tg-0pky">SK-OV-3</td>
    <td class="tg-0pky">46.43</td>
    <td class="tg-0pky">49.39</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_free/skov3_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/SKOV3/LIVECell_anchor_free_skov3_model.pth">model </a>
  </tr>
  
   <tr>
    <td rowspan="9" class="tg-0pky">Anchor based</td>
    <td class="tg-0pky">LIVECell</td>
    <td class="tg-0pky">48.43</td>
    <td class="tg-0pky">47.89</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/livecell_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/ALL/LIVECell_anchor_based_model.pth">model </a>
  </tr>
  <tr>
    <td class="tg-0pky">A172</td>
    <td class="tg-0pky">36.37</td>
    <td class="tg-0pky">38.02</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/a172_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/A172/LIVECell_anchor_based_a172_model.pth">model </a> 
  </tr>
   <tr>
    <td class="tg-0pky">BT-474</td>
    <td class="tg-0pky">43.25</td>
    <td class="tg-0pky">43.00</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/bt474_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/BT474/LIVECell_anchor_based_bt474_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">BV-2</td>
    <td class="tg-0pky">54.36</td>
    <td class="tg-0pky">52.60</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/bv2_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/BV2/LIVECell_anchor_based_bv2_model.pth">model </a> 
  </tr>
   <tr>
    <td class="tg-0pky">Huh7</td>
    <td class="tg-0pky">52.79</td>
    <td class="tg-0pky">51.83</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/huh7_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/HUH7/LIVECell_anchor_based_huh7_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">MCF7</td>
    <td class="tg-0pky">37.53</td>
    <td class="tg-0pky">37.94 </td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/mcf7_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/MCF7/LIVECell_anchor_based_mcf7_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">SH-SY5Y</td>
    <td class="tg-0pky">27.87</td>
    <td class="tg-0pky">24.92</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/shsy5y_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/SHSY5Y/LIVECell_anchor_based_shsy5y_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">SkBr3</td>
    <td class="tg-0pky">64.41</td>
    <td class="tg-0pky">65.39</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/skbr3_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/SKBR3/LIVECell_anchor_based_skbr3_model.pth">model </a> 
  </tr>
  <tr>
    <td class="tg-0pky">SK-OV-3</td>
    <td class="tg-0pky">53.29</td>
    <td class="tg-0pky">54.12</td>
    <td class="tg-0lax"><a href="https://github.com/sartorius-research/LIVECell/blob/main/model/anchor_based/skov3_config.yaml">config</a> | <a href="https://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_based/SKOV3/LIVECell_anchor_based_skov3_model.pth">model </a> 
  </tr>
</table>

The box and mask AP presented here are derived by training on either the whole LIVECell dataset or a 
cell-type-specific subset, and then evaluating on the corresponding test dataset.

To use our fully trained models, download them from our S3 bucket and use them together with the appropriate config file as 
described below in the [training and evaluation section](#training-and-evaluation).



# Installation

The installation takes approximately 30 minutes, and it is recommended to set up separate virtual environments for the anchor-free and the anchor-based models if one wants to use both of them. The anchor-based models use a modified version of detectron2 that keeps the same package name, which creates conflicts.

## Requirements:

- Linux or macOS with Python ≥ 3.6
- PyTorch ≥ 1.3
- torchvision that matches the PyTorch installation. You can install them together at pytorch.org to make sure of this.
- OpenCV, optional, needed by demo and visualization
- pycocotools: `pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'`

## Anchor-free model (detectron2 + centermask2)

Build from source:
````sh
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
````

Or, to install it from a local clone:
````sh
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2
````


Or, if you are on macOS:
````sh
CC=clang CXX=clang++ python -m pip install -e detectron2
````


To install a pre-built detectron2 for different torch and CUDA versions, and for further information, 
see the detectron2 [install document](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md).

Retrieve the centermask2 code:
````sh
git clone https://github.com/youngwanLEE/centermask2.git
````

For further information on installation and usage, see the [centermask2 documentation](https://github.com/youngwanLEE/centermask2#evaluation).

## Anchor-based (detectron2-ResNeSt)
Do not install detectron2 as outlined above; if it is already installed, uninstall it or create a new virtual environment where it is not installed.
This is because the ResNeSt code has cloned detectron2, made changes to the code, and still uses detectron2 as the package name, which 
creates conflicts with the original detectron2 code. 

Retrieve and install the detectron2-ResNeSt code:
```sh
git clone https://github.com/chongruo/detectron2-ResNeSt
python -m pip install -e detectron2-ResNeSt
```
For further information on installation and usage, see the [detectron2-ResNeSt documentation](https://github.com/chongruo/detectron2-ResNeSt/blob/resnest/GETTING_STARTED.md).


### Common Installation Issues
- "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available":
  CUDA was not found when building detectron2. Make sure that
  ```sh
  python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
  ```
  prints valid outputs at the time you build detectron2.

# Training and evaluation
### Register LIVECell dataset
Using a custom dataset such as LIVECell together with the detectron2 code base is done by first registering the dataset
via the detectron2 python API, as described in the [detectron2 dataset tutorial](https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html).
In practice, this can be done by adding the following code to the `train_net.py` file in the cloned centermask2 repo:

````python
from detectron2.data.datasets import register_coco_instances
register_coco_instances("dataset_name", {}, "/path/to/coco/annotations.json", "/path/to/image/dir")
````

Here `"dataset_name"` is the name of your dataset and determines which dataset your config file refers to.
By default, the config files point to datasets named *TRAIN* and *TEST*, so registering your test dataset as *TEST* will work directly with the
provided config files; for other names, make sure to update your config file accordingly.
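
For example, to register both a training and a test split (the file paths below are illustrative placeholders, not files shipped with this repo):

````python
from detectron2.data.datasets import register_coco_instances

# "TRAIN" and "TEST" match the dataset names used in the provided config files;
# replace the paths with wherever the LIVECell annotations and images are stored.
register_coco_instances("TRAIN", {}, "/path/to/livecell_coco_train.json", "/path/to/images")
register_coco_instances("TEST", {}, "/path/to/livecell_coco_test.json", "/path/to/images")
````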

- In the config file, change the dataset entries to the names used when registering the datasets.
- Set the output directory in the config file to where the models and results should be saved.

### Train
To train a model, change the OUTPUT_DIR entry in the config file to where the models and checkpoints should be saved.
Make sure you have followed the previous step and registered a TRAIN and TEST dataset, cd into 
the cloned directory (centermask2 or detectron2-ResNeSt), and run the following command:

````sh
python tools/train_net.py --num-gpus 8  --config-file your_config.yaml
````
This trains a model on the dataset defined in *your_config.yaml* using 8 GPUs.

To fine-tune a model on your own dataset, set MODEL.WEIGHTS in the config file to point at one of our weight files.
For instance, to fine-tune our centermask2 model:
````yaml
MODEL:
  WEIGHTS: "http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/ALL/LIVECell_anchor_free_model.pth"
````
 
### Evaluate
To evaluate a model, make sure to register a TEST dataset and point to it in your config file, cd into 
the cloned directory (centermask2 or detectron2-ResNeSt), then run the following command:
````sh
python train_net.py  --config-file <your_config.yaml> --eval-only MODEL.WEIGHTS </path/to/checkpoint_file.pth>
````

This will evaluate the model defined in `your_config.yaml` with the weights saved in `/path/to/checkpoint_file.pth`.

To evaluate one of our models, like the centermask2 (anchor-free) model, you can point MODEL.WEIGHTS directly at the URL of the weight 
file:


````sh
python train_net.py  --config-file livecell_config.yaml --eval-only MODEL.WEIGHTS http://livecell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/models/Anchor_free/ALL/LIVECell_anchor_free_model.pth
````

#### Evaluation script
The original evaluation script available in the centermask2 and detectron2 repos assumes there are no more than 100
detections in an image. In our case an image can contain thousands of annotations, so the AP evaluation would be off. We 
therefore provide the `coco_evaluation.py` evaluation script in the [code](../code) folder. 

To use this script, go into the `train_net.py` file and remove (or comment out) the current import of `COCOEvaluator`.
Then import `COCOEvaluator` from the provided `coco_evaluation.py` file instead. This will make the AP evaluation
support up to 2000 instances in one image, as sketched below.
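
As a minimal sketch (the exact original import line may differ between the centermask2 and detectron2-ResNeSt versions of `train_net.py`, and this assumes `coco_evaluation.py` is on the Python path):

````python
# from detectron2.evaluation import COCOEvaluator  # original import: remove or comment out

# LIVECell evaluator with maxDets raised to [100, 500, 2000]:
from coco_evaluation import COCOEvaluator
````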
 
The evaluation script will take approximately 30 minutes to run on our test dataset with a Tesla V100 GPU.
The output of the evaluation will appear in the terminal, beginning with information about the environment, data and 
architecture used. Then it will start evaluating all the images and summarize the results in the following manner:
 
````text
.
.
.
[11/18 17:19:06 d2.evaluation.evaluator]: Inference done 1557/1564. 0.1733 s / img. ETA=0:00:06
[11/18 17:19:11 d2.evaluation.evaluator]: Inference done 1561/1564. 0.1734 s / img. ETA=0:00:02
[11/18 17:19:14 d2.evaluation.evaluator]: Total inference time: 0:22:23.437057 (0.861730 s / img per device, on 1 devices)
[11/18 17:19:14 d2.evaluation.evaluator]: Total inference pure compute time: 0:04:30 (0.173426 s / img per device, on 1 devices)
Loading and preparing results...
DONE (t=1.12s)
creating index...
index created!
Size parameters: [[0, 10000000000.0], [0, 324], [324, 961], [961, 10000000000.0]]
Running per image evaluation...
Evaluate annotation type *bbox*
COCOeval_opt.evaluate() finished in 119.67 seconds.
Accumulating evaluation results...
COCOeval_opt.accumulate() finished in 5.86 seconds.
In method
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=2000 ] = 0.485
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=2000 ] = 0.830
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=2000 ] = 0.504
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.483
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.494
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.507
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.212
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=500 ] = 0.480
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=2000 ] = 0.569
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.531
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.602
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.672
Loading and preparing results...
DONE (t=11.04s)
creating index...
index created!
Size parameters: [[0, 10000000000.0], [0, 324], [324, 961], [961, 10000000000.0]]
Running per image evaluation...
Evaluate annotation type *segm*
COCOeval_opt.evaluate() finished in 135.80 seconds.
Accumulating evaluation results...
COCOeval_opt.accumulate() finished in 5.78 seconds.
In method
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=2000 ] = 0.478
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=2000 ] = 0.816
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=2000 ] = 0.509
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.451
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.491
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.570
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.210
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=500 ] = 0.470
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=2000 ] = 0.547
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.516
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.565
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.649
[11/18 17:25:07 d2.engine.defaults]: Evaluation results for cell_phase_test in csv format:
[11/18 17:25:07 d2.evaluation.testing]: copypaste: Task: bbox
[11/18 17:25:07 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[11/18 17:25:07 d2.evaluation.testing]: copypaste: 48.4529,82.9806,50.4426,48.3240,49.4476,50.7434
[11/18 17:25:07 d2.evaluation.testing]: copypaste: Task: segm
[11/18 17:25:07 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[11/18 17:25:07 d2.evaluation.testing]: copypaste: 47.7810,81.6260,50.8958,45.1110,49.0684,56.9874

````
 
For further details on training, testing and inference, 
visit the [centermask2](https://github.com/youngwanLEE/centermask2#evaluation) or 
[detectron2-ResNeSt](https://github.com/chongruo/detectron2-ResNeSt/blob/resnest/GETTING_STARTED.md) docs

## One-shot usage
For the LIVECell experiments with zero-shot learning on EVICAN and Cellpose, the input images were preprocessed using the 
preprocessing script `preprocessing.py` found in the [code folder](../code), as sketched below.
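
A minimal sketch of applying it to a single image (the file name and magnification here are illustrative assumptions):

````python
import cv2
from preprocessing import preprocess

# Hypothetical 20x grayscale uint8 image; downsampling by a factor of 0.5
# makes it an effective 10x image, matching the LIVECell acquisition settings.
image = cv2.imread("example_20x.tif", cv2.IMREAD_GRAYSCALE)
livecell_like = preprocess(image, magnification_downsample_factor=0.5)
cv2.imwrite("example_20x_preprocessed.png", livecell_like)
````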


================================================
FILE: model/anchor_based/a172_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all in feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/bt474_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all in feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/bv2_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all in feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/huh7_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all in feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/livecell_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each input feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all input feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/mcf7_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each input feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all input feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/shsy5y_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each input feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all input feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/skbr3_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each input feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all input feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2



================================================
FILE: model/anchor_based/skov3_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  WEIGHTS: "https://s3.us-west-1.wasabisys.com/resnest/detectron/mask_cascade_rcnn_ResNeSt_200_FPN_dcn_syncBN_all_tricks_3x-e1901134.pth"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
    FREEZE_AT: 0
  MASK_ON: True
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
    DEPTH: 200
    STRIDE_IN_1X1: False
    RADIX: 2
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: True
    DEFORM_NUM_GROUPS: 2
    NORM: "SyncBN"

  FPN:
    NORM: "SyncBN"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[4], [9], [17], [31], [64], [127]]  # One size for each input feature map
    ASPECT_RATIOS: [[0.25, 0.5, 1.0, 2.0, 4.0]]  # Five aspect ratios (same for all input feature maps)
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 512
    NAME: CascadeROIHeads
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "SyncBN"
  RPN:
    IN_FEATURES: ["p2" ,"p2", "p3", "p4", "p5", "p6"]
    BATCH_SIZE_PER_IMAGE: 256
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (17500, 20000)
  MAX_ITER: 30000
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)

  CROP:
    ENABLED: False
  FORMAT: "BGR"
TEST:
  DETECTIONS_PER_IMAGE: 3000 # 1000
  EVAL_PERIOD: 500
  PRECISE_BN:
    ENABLED: False
  AUG:
    ENABLED: False
OUTPUT_DIR: "PATH/TO/SAVE/RESULTS" # PATH TO SAVE THE OUTPUT RESULTS
DATALOADER:
  NUM_WORKERS: 8
VERSION: 2
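
These anchor-based configs rely on ResNeSt-specific keys (RADIX and the
deformable-convolution settings) that a stock detectron2 config schema does
not define, so they are meant for a detectron2 build that registers those
keys, such as the ResNeSt detectron2 fork the pretrained WEIGHTS appear to
come from. Given such an environment and the registered datasets, training
follows the usual detectron2 pattern; a hedged sketch:

    # get_cfg() must come from a build that understands RESNETS.RADIX etc.;
    # stock detectron2 would reject that key when merging this file.
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultTrainer

    cfg = get_cfg()
    cfg.merge_from_file("model/anchor_based/livecell_config.yaml")
    cfg.DATASETS.TRAIN = ("livecell_train",)  # registered names (assumed)
    cfg.DATASETS.TEST = ("livecell_test",)
    cfg.OUTPUT_DIR = "./output"               # checkpoints and logs land here

    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()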



================================================
FILE: model/anchor_free/Base-CenterMask-VoVNet.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  VOVNET:
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  FCOS:
    POST_NMS_TOPK_TEST: 50
  # PIXEL_MEAN: [102.9801, 115.9465, 122.7717]
  MASK_ON: True
  MASKIOU_ON: True
  ROI_HEADS:
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)
SOLVER:
  CHECKPOINT_PERIOD: 10000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01  # Note that RetinaNet uses a different default learning rate
  STEPS: (60000, 80000)
  MAX_ITER: 90000
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)


================================================
FILE: model/anchor_free/a172_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (26000, 28000)
  MAX_ITER: 30000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 1000
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/bt474_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (8000, 9000)
  MAX_ITER: 10000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/bv2_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (8000, 9000)
  MAX_ITER: 10000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/huh7_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (8000, 9000)
  MAX_ITER: 10000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/livecell_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (80000, 90000)
  MAX_ITER: 100000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 2500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""


================================================
FILE: model/anchor_free/mcf7_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (8000, 9000)
  MAX_ITER: 10000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/shsy5y_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (18000, 19000)
  MAX_ITER: 20000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01 
  CHECKPOINT_PERIOD: 250
  WARMUP_FACTOR: 0.00005
  WARMUP_ITERS: 5000
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/skbr3_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (19000, 19500)
  MAX_ITER: 20000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""

================================================
FILE: model/anchor_free/skov3_config.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_fcos_vovnet_fpn_backbone"
    FREEZE_AT: 0
  WEIGHTS: "https://www.dropbox.com/s/1mlv31coewx8trd/vovnet99_ese_detectron2.pth?dl=1"
  VOVNET:
    CONV_BODY: "V-99-eSE"
    OUT_FEATURES: ["stage3", "stage4", "stage5"]
  FPN:
    IN_FEATURES: ["stage3", "stage4", "stage5"]
  PROPOSAL_GENERATOR:
    NAME: "FCOS"  
  MASK_ON: True
  MASKIOU_ON: True
  FCOS:
    NUM_CLASSES: 1
    PRE_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 4000
    POST_NMS_TOPK_TRAIN: 15000
    POST_NMS_TOPK_TEST: 3000
  RPN:
    BATCH_SIZE_PER_IMAGE: 3000
    POST_NMS_TOPK_TEST: 3000
    POST_NMS_TOPK_TRAIN: 3000
  ROI_HEADS:
    NUM_CLASSES: 1
    BATCH_SIZE_PER_IMAGE: 3000
    NAME: "CenterROIHeads"
    IN_FEATURES: ["p3", "p4", "p5"]
  ROI_MASK_HEAD:
    NAME: "SpatialAttentionMaskHead"
    ASSIGN_CRITERION: "ratio"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
  RETINANET:
    NUM_CLASSES: 1
    TOPK_CANDIDATES_TEST: 3000
  PIXEL_MEAN: [128, 128, 128]
  PIXEL_STD: [11.578, 11.578, 11.578]
SOLVER:
  STEPS: (8000, 9000)
  MAX_ITER: 10000
  IMS_PER_BATCH: 16
  BASE_LR: 0.01
  CHECKPOINT_PERIOD: 500
DATASETS:
  TRAIN: ("TRAIN",)  #REPLACE TRAIN WITH THE REGISTERED NAME 
  TEST: ("TEST",)    #REPLACE TRAIN WITH THE REGISTERED NAME
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
INPUT:
  MIN_SIZE_TRAIN: (440, 480, 520, 560, 580, 620)
TEST:
  DETECTIONS_PER_IMAGE: 2000 # 1000
DATALOADER:
  NUM_WORKERS: 12
OUTPUT_DIR: ""
================================================
SYMBOL INDEX (26 symbols across 3 files)
================================================

FILE: code/coco_evaluation.py
  class COCOEvaluator (line 37) | class COCOEvaluator(DatasetEvaluator):
    method __init__ (line 47) | def __init__(
    method reset (line 125) | def reset(self):
    method process (line 128) | def process(self, inputs, outputs):
    method evaluate (line 148) | def evaluate(self, img_ids=None):
    method _tasks_from_predictions (line 181) | def _tasks_from_predictions(self, predictions):
    method _eval_predictions (line 193) | def _eval_predictions(self, predictions, img_ids=None):
    method _eval_box_proposals (line 253) | def _eval_box_proposals(self, predictions):
    method _derive_coco_results (line 292) | def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
  function instances_to_coco_json (line 359) | def instances_to_coco_json(instances, img_id):
  function _evaluate_box_proposals (line 427) | def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=No...
  function _evaluate_predictions_on_coco (line 538) | def _evaluate_predictions_on_coco(

FILE: code/coco_evaluation_resnest.py
  class COCOEvaluator (line 37) | class COCOEvaluator(DatasetEvaluator):
    method __init__ (line 43) | def __init__(self, dataset_name, cfg, distributed, output_dir=None):
    method reset (line 89) | def reset(self):
    method _tasks_from_config (line 92) | def _tasks_from_config(self, cfg):
    method process (line 106) | def process(self, inputs, outputs):
    method evaluate (line 127) | def evaluate(self):
    method _eval_predictions (line 159) | def _eval_predictions(self, tasks, predictions):
    method _eval_box_proposals (line 211) | def _eval_box_proposals(self, predictions):
    method _derive_coco_results (line 251) | def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
  function instances_to_coco_json (line 319) | def instances_to_coco_json(instances, img_id):
  function _evaluate_box_proposals (line 389) | def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=No...
  function _evaluate_predictions_on_coco (line 514) | def _evaluate_predictions_on_coco(coco_gt, coco_results, iou_type, kpt_o...

FILE: code/preprocessing.py
  function preprocess (line 13) | def preprocess(input_image, magnification_downsample_factor=1.0):
  function preprocess_fluorescence (line 54) | def preprocess_fluorescence(input_image, bInvert=True, magnification_dow...
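
The two helpers in code/preprocessing.py adapt images acquired with other
light-microscopy modalities to resemble LIVECell's phase-contrast input. A
hedged usage sketch based on the signatures indexed above (the input path,
and the assumption that both functions return an 8-bit numpy image, are
illustrative):

    import cv2
    from preprocessing import preprocess, preprocess_fluorescence

    raw = cv2.imread("other_modality_image.tif", cv2.IMREAD_GRAYSCALE)  # assumed input
    adapted = preprocess(raw, magnification_downsample_factor=1.0)
    fluor = preprocess_fluorescence(raw, bInvert=True)
    cv2.imwrite("adapted.png", adapted)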