Showing preview only (345K chars total). Download the full file or copy to clipboard to get everything.
Repository: Confusezius/Revisiting_Deep_Metric_Learning_PyTorch
Branch: master
Commit: efddbf23ccbe
Files: 73
Total size: 325.4 KB
Directory structure:
gitextract_la23kpcg/
├── .gitignore
├── LICENSE
├── README.md
├── Result_Evaluations.py
├── Sample_Runs/
│ └── ICML2020_RevisitDML_SampleRuns.sh
├── architectures/
│ ├── __init__.py
│ ├── bninception.py
│ ├── googlenet.py
│ └── resnet50.py
├── batchminer/
│ ├── __init__.py
│ ├── distance.py
│ ├── intra_random.py
│ ├── lifted.py
│ ├── npair.py
│ ├── parametric.py
│ ├── random.py
│ ├── random_distance.py
│ ├── rho_distance.py
│ ├── semihard.py
│ └── softhard.py
├── criteria/
│ ├── __init__.py
│ ├── adversarial_separation.py
│ ├── angular.py
│ ├── arcface.py
│ ├── contrastive.py
│ ├── histogram.py
│ ├── lifted.py
│ ├── margin.py
│ ├── multisimilarity.py
│ ├── npair.py
│ ├── proxynca.py
│ ├── quadruplet.py
│ ├── snr.py
│ ├── softmax.py
│ ├── softtriplet.py
│ └── triplet.py
├── datasampler/
│ ├── __init__.py
│ ├── class_random_sampler.py
│ ├── d2_coreset_sampler.py
│ ├── disthist_batchmatch_sampler.py
│ ├── fid_batchmatch_sampler.py
│ ├── greedy_coreset_sampler.py
│ ├── random_sampler.py
│ └── samplers.py
├── datasets/
│ ├── __init__.py
│ ├── basic_dataset_scaffold.py
│ ├── cars196.py
│ ├── cub200.py
│ └── stanford_online_products.py
├── evaluation/
│ └── __init__.py
├── main.py
├── metrics/
│ ├── __init__.py
│ ├── c_f1.py
│ ├── c_mAP_1000.py
│ ├── c_mAP_c.py
│ ├── c_mAP_lim.py
│ ├── c_nmi.py
│ ├── c_recall.py
│ ├── compute_stack.py
│ ├── dists.py
│ ├── e_recall.py
│ ├── f1.py
│ ├── mAP.py
│ ├── mAP_1000.py
│ ├── mAP_c.py
│ ├── mAP_lim.py
│ ├── nmi.py
│ └── rho_spectrum.py
├── parameters.py
├── toy_experiments/
│ └── toy_example_diagonal_lines.py
└── utilities/
├── __init__.py
├── logger.py
└── misc.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
__pycache__
*.pyc
Training_Results
wandb
diva_main.py
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2020 Karsten Roth
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Deep Metric Learning Research in PyTorch
---
## What can I find here?
This repository contains all code and implementations used in:
```
Revisiting Training Strategies and Generalization Performance in Deep Metric Learning
```
accepted to **ICML 2020**.
**Link**: https://arxiv.org/abs/2002.08473
The code is meant to serve as a research starting point in Deep Metric Learning.
By implementing key baselines under a consistent setting and logging a vast set of metrics, it should be easier to ensure that method gains are not due to implementational variations, while better understanding driving factors.
It is set up in a modular way to allow for fast and detailed prototyping, but with key elements written in a way that allows the code to be directly copied into other pipelines. In addition, multiple training and test metrics are logged in W&B to allow for easy and large-scale evaluation.
Finally, please find a public W&B repo with key runs performed in the paper here: https://app.wandb.ai/confusezius/RevisitDML.
**Contact**: Karsten Roth, karsten.rh1@gmail.com
*Suggestions are always welcome!*
---
## Some Notes:
If you use this code in your research, please cite
```
@misc{roth2020revisiting,
title={Revisiting Training Strategies and Generalization Performance in Deep Metric Learning},
author={Karsten Roth and Timo Milbich and Samarth Sinha and Prateek Gupta and Björn Ommer and Joseph Paul Cohen},
year={2020},
eprint={2002.08473},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
This repository contains (in parts) code that has been adapted from:
* https://github.com/idstcv/SoftTriple
* https://github.com/bnu-wangxun/Deep_Metric
* https://github.com/valerystrizh/pytorch-histogram-loss
* https://github.com/Confusezius/Deep-Metric-Learning-Baselines
Make sure to also check out the following repo with a great plug-and-play implementation of DML methods:
* https://github.com/KevinMusgrave/pytorch-metric-learning
---
**[All implemented methods and metrics are listed at the bottom!](#-implemented-methods)**
---
## Paper-related Information
#### Reproduce results from our paper **[Revisiting Training Strategies and Generalization Performance in Deep Metric Learning](https://arxiv.org/pdf/2002.08473.pdf)**
* *ALL* standardized Runs that were used are available in `Revisit_Runs.sh`.
* These runs are also logged in this public W&B repo: https://app.wandb.ai/confusezius/RevisitDML.
* All Runs and their respective metrics can be downloaded and evaluated to generate the plots in our paper by following `Result_Evaluations.py`. This also allows for potential introspection of other relations. It also converts results directly into Latex-table format with mean and standard deviations.
* To utilize different batch-creation methods, simply set the flag `--data_sampler` to the method of choice. Allowed flags are listed in `datasampler/__init__.py`.
* To use the proposed spectral regularization for tuple-based methods, set `--batch_mining rho_distance` with flip probability `--miner_rho_distance_cp e.g. 0.2`.
* A script to run the toy experiments in the paper is provided in `toy_experiments`.
**Note**: There may be small deviations in results based on the Hardware (e.g. between P100 and RTX GPUs) and Software (different PyTorch/Cuda versions) used to run these experiments, but they should be covered in the standard deviations reported in the paper.
---
## How to use this Repo
### Requirements:
* PyTorch 1.2.0+ & Faiss-Gpu
* Python 3.6+
* pretrainedmodels, torchvision 0.3.0+
An exemplary setup of a virtual environment containing everything needed:
```
(1) wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
(2) bash Miniconda3-latest-Linux-x86_64.sh (say yes to append path to bashrc)
(3) source .bashrc
(4) conda create -n DL python=3.6
(5) conda activate DL
(6) conda install matplotlib scipy scikit-learn scikit-image tqdm pandas pillow
(7) conda install pytorch torchvision faiss-gpu cudatoolkit=10.0 -c pytorch
(8) pip install wandb pretrainedmodels
(9) Run the scripts!
```
### Datasets:
Data for
* CUB200-2011 (http://www.vision.caltech.edu/visipedia/CUB-200.html)
* CARS196 (https://ai.stanford.edu/~jkrause/cars/car_dataset.html)
* Stanford Online Products (http://cvgl.stanford.edu/projects/lifted_struct/)
can be downloaded either from the respective project sites or directly via Dropbox:
* CUB200-2011 (1.08 GB): https://www.dropbox.com/s/tjhf7fbxw5f9u0q/cub200.tar?dl=0
* CARS196 (1.86 GB): https://www.dropbox.com/s/zi2o92hzqekbmef/cars196.tar?dl=0
* SOP (2.84 GB): https://www.dropbox.com/s/fu8dgxulf10hns9/online_products.tar?dl=0
**The latter ensures that the folder structure is already consistent with this pipeline and the dataloaders**.
Otherwise, please make sure that the datasets have the following internal structure:
* For CUB200-2011/CARS196:
```
cub200/cars196
└───images
| └───001.Black_footed_Albatross
| │ Black_Footed_Albatross_0001_796111
| │ ...
| ...
```
* For Stanford Online Products:
```
online_products
└───images
| └───bicycle_final
| │ 111085122871_0.jpg
| ...
|
└───Info_Files
| │ bicycle.txt
| │ ...
```
Assuming your folder is placed in e.g. `<$datapath/cub200>`, pass `$datapath` as input to `--source`.
### Training:
Training is done by using `main.py` and setting the respective flags, all of which are listed and explained in `parameters.py`. A vast set of exemplary runs is provided in `Revisit_Runs.sh`.
**[I.]** **A basic sample run using default parameters would like this**:
```
python main.py --loss margin --batch_mining distance --log_online \
--project DML_Project --group Margin_with_Distance --seed 0 \
--gpu 0 --bs 112 --data_sampler class_random --samples_per_class 2 \
--arch resnet50_frozen_normalize --source $datapath --n_epochs 150 \
--lr 0.00001 --embed_dim 128 --evaluate_on_gpu
```
The purpose of each flag explained:
* `--loss <loss_name>`: Name of the training objective used. See folder `criteria` for implementations of these methods.
* `--batch_mining <batchminer_name>`: Name of the batch-miner to use (for tuple-based ranking methods). See folder `batch_mining` for implementations of these methods.
* `--log_online`: Log metrics online via either W&B (Default) or CometML. Regardless, plots, weights and parameters are all stored offline as well.
* `--project`, `--group`: Project name as well as name of the run. Different seeds will be logged into the same `--group` online. The group as well as the used seed also define the local savename.
* `--seed`, `--gpu`, `--source`: Basic Parameters setting the training seed, the used GPU and the path to the parent folder containing the respective Datasets.
* `--arch`: The utilized backbone, e.g. ResNet50. You can append `_frozen` and `_normalize` to the name to ensure that BatchNorm layers are frozen and embeddings are normalized, respectively.
* `--data_sampler`, `--samples_per_class`: How to construct a batch. The default method, `class_random`, selects classes at random and places `<samples_per_class>` samples into the batch until the batch is filled.
* `--lr`, `--n_epochs`, `--bs` ,`--embed_dim`: Learning rate, number of training epochs, the batchsize and the embedding dimensionality.
* `--evaluate_on_gpu`: If set, all metrics are computed using the gpu - requires Faiss-GPU and may need additional GPU memory.
#### Some Notes:
* During training, metrics listed in `--evaluation_metrics` will be logged for both training and validation/test set. If you do not care about detailed training metric logging, simply set the flag `--no_train_metrics`. A checkpoint is saved for improvements in metrics listed in `--storage_metrics` on training, validation or test sets. Detailed information regarding the available metrics can be found at the bottom of this `README`.
* If one wishes to use a training/validation split, simply set `--use_tv_split` and `--tv_split_perc <train/val split percentage>`.
**[II.]** **Advanced Runs**:
```
python main.py --loss margin --batch_mining distance --loss_margin_beta 0.6 --miner_distance_lower_cutoff 0.5 ... (basic parameters)
```
* To use specific parameters that are loss, batchminer or e.g. datasampler-related, simply set the respective flag.
* For structure and ease of use, parameters relating to a specifc loss function/batchminer etc. are marked as e.g. `--loss_<lossname>_<parameter_name>`, see `parameters.py`.
* However, every parameter can be called from every class, as all parameters are stored in a shared namespace that is passed to all methods. This makes it easy to create novel fusion losses and the likes.
### Evaluating Results with W&B
Here some information on using W&B (highly encouraged!)
* Create an account here (free): https://wandb.ai
* After the account is set, make sure to include your API key in `parameters.py` under `--wandb_key`.
* To make sure that W&B data can be stored, ensure to run `wandb on` in the folder pointed to by `--save_path`.
* When data is logged online to W&B, one can use `Result_Evaluations.py` to download all data, create named metric and correlation plots and output a summary in the form of a latex-ready table with mean and standard deviations of all metrics. **This ensures that there are no errors between computed and reported results.**
### Creating custom methods:
1. **Create custom objectives**: Simply take a look at e.g. `criteria/margin.py`, and ensure that the used methods has the following properties:
* Inherit from `torch.nn.Module` and define a custom `forward()` function.
* When using trainable parameters, make sure to either provide a `self.lr` to set the learning rate of the loss-specific parameters, or set `self.optim_dict_list`, which is a list containing optimization dictionaries passed to the optimizer (see e.g `criteria/proxynca.py`). If both are set, `self.optim_dict_list` has priority.
* Depending on the loss, remember to set the variables `ALLOWED_MINING_OPS = None or list of allowed mining operations`, `REQUIRES_BATCHMINER = False or True`, `REQUIRES_OPTIM = False or True` to denote if the method needs a batchminer or optimization of internal parameters.
2. **Create custom batchminer**: Simply take a look at e.g. `batch_mining/distance.py` - The miner needs to be a class with a defined `__call__()`-function, taking in a batch and labels and returning e.g. a list of triplets.
3. **Create custom datasamplers**:Simply take a look at e.g. `datasampler/class_random_sampler.py`. The sampler needs to inherit from `torch.utils.data.sampler.Sampler` and has to provide a `__iter__()` and a `__len__()` function. It has to yield a set of indices that are used to create the batch.
---
# Implemented Methods
For a detailed explanation of everything, please refer to the supplementary of our paper!
### DML criteria
* **Angular** [[Deep Metric Learning with Angular Loss](https://arxiv.org/pdf/1708.01682.pdf)] `--loss angular`
* **ArcFace** [[ArcFace: Additive Angular Margin Loss for Deep Face Recognition](https://arxiv.org/pdf/1801.07698.pdf)] `--loss arcface`
* **Contrastive** [[Dimensionality Reduction by Learning an Invariant Mapping](http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf)] `--loss contrastive`
* **Generalized Lifted Structure** [[In Defense of the Triplet Loss for Person Re-Identification](https://arxiv.org/abs/1703.07737)] `--loss lifted`
* **Histogram** [[Learning Deep Embeddings with Histogram Loss](https://arxiv.org/pdf/1611.00822.pdf)] `--loss histogram`
* **Marginloss** [[Sampling Matters in Deep Embeddings Learning](https://arxiv.org/abs/1706.07567)] `--loss margin`
* **MultiSimilarity** [[Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning](https://arxiv.org/abs/1904.06627)] `--loss multisimilarity`
* **N-Pair** [[Improved Deep Metric Learning with Multi-class N-pair Loss Objective](https://papers.nips.cc/paper/6200-improved-deep-metric-learning-with-multi-class-n-pair-loss-objective)] `--loss npair`
* **ProxyNCA** [[No Fuss Distance Metric Learning using Proxies](https://arxiv.org/pdf/1703.07464.pdf)] `--loss proxynca`
* **Quadruplet** [[Beyond triplet loss: a deep quadruplet network for person re-identification](https://arxiv.org/abs/1704.01719)] `--loss quadruplet`
* **Signal-to-Noise Ratio (SNR)** [[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/pdf/1904.02616.pdf)] `--loss snr`
* **SoftTriple** [[SoftTriple Loss: Deep Metric Learning Without Triplet Sampling](https://arxiv.org/abs/1909.05235)] `--loss softtriplet`
* **Normalized Softmax** [[Classification is a Strong Baseline for Deep Metric Learning](https://arxiv.org/abs/1811.12649)] `--loss softmax`
* **Triplet** [[Facenet: A unified embedding for face recognition and clustering](https://arxiv.org/abs/1503.03832)] `--loss triplet`
### DML batchminer
* **Random** [[Facenet: A unified embedding for face recognition and clustering](https://arxiv.org/abs/1503.03832)] `--batch_mining random`
* **Semihard** [[Facenet: A unified embedding for face recognition and clustering](https://arxiv.org/abs/1503.03832)] `--batch_mining semihard`
* **Softhard** [https://github.com/Confusezius/Deep-Metric-Learning-Baselines] `--batch_mining softhard`
* **Distance-based** [[Sampling Matters in Deep Embeddings Learning](https://arxiv.org/abs/1706.07567)] `--batch_mining distance`
* **Rho-Distance** [[Revisiting Training Strategies and Generalization Performance in Deep Metric Learning](https://arxiv.org/abs/2002.08473)] `--batch_mining rho_distance`
* **Parametric** [[PADS: Policy-Adapted Sampling for Visual Similarity Learning](https://arxiv.org/abs/2003.11113)] `--batch_mining parametric`
### Architectures
* **ResNet50** [[Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)] e.g. `--arch resnet50_frozen_normalize`.
* **Inception-BN** [[Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)] e.g. `--arch bninception_normalize_frozen`.
* **GoogLeNet** (torchvision variant w/ BN) [[Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)] e.g. `--arch googlenet`.
### Datasets
* **CUB200-2011** [[Caltech-UCSD Birds-200-2011](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html)] `--dataset cub200`.
* **CARS196** [[Cars Dataset](https://ai.stanford.edu/~jkrause/cars/car_dataset.html)] `--dataset cars196`.
* **Stanford Online Products** [[Deep Metric Learning via Lifted Structured Feature Embedding](https://cvgl.stanford.edu/projects/lifted_struct/)] `--dataset online_products`.
### Evaluation Metrics
**Metrics based on Euclidean Distances**
* **Recall@k**: Include R@1 e.g. with `e_recall@1` into the list of evaluation metrics `--evaluation_metrics`.
* **Normalized Mutual Information (NMI)**: Include with `nmi`.
* **F1**: include with `f1`.
* **mAP (class-averaged)**: Include standard mAP at Recall with `mAP_lim`. You may also include `mAP_1000` for mAP limited to Recall@1000, and `mAP_c` limited to mAP at Recall@Max_Num_Samples_Per_Class. Note that all of these are heavily correlated.
**Metrics based on Cosine Similarities** *(not included by default)*
* **Cosine Recall@k**: Cosine-Similarity variant of Recall@k. Include with `c_recall@k` in `--evaluation_metrics`.
* **Cosine Normalized Mutual Information (NMI)**: Include with `c_nmi`.
* **Cosine F1**: include with `c_f1`.
* **Cosine mAP (class-averaged)**: Include cosine similarity mAP at Recall variants with `c_mAP_lim`. You may also include `c_mAP_1000` for mAP limited to Recall@1000, and `c_mAP_c` limited to mAP at Recall@Max_Num_Samples_Per_Class.
**Embedding Space Metrics**
* **Spectral Variance**: This metric refers to the spectral decay metric used in our ICML paper. Include it with `rho_spectrum@1`. To exclude the `k` largest spectral values for a more robust estimate, simply include `rho_spectrum@k+1`. Adding `rho_spectrum@0` logs the whole singular value distribution, and `rho_spectrum@-1` computes KL(q,p) instead of KL(p,q).
* **Mean Intraclass Distance**: Include the mean intraclass distance via `dists@intra`.
* **Mean Interclass Distance**: Include the mean interlcass distance via `dists@inter`.
* **Ratio Intra- to Interclass Distance**: Include the ratio of distances via `dists@intra_over_inter`.
================================================
FILE: Result_Evaluations.py
================================================
"""
This scripts downloads and evaluates W&B run data to produce plots and tables used in the original paper.
"""
import numpy as np
import wandb
import matplotlib.pyplot as plt
def get_data(project):
from tqdm import tqdm
api = wandb.Api()
# Project is specified by <entity/project-name>
runs = api.runs(project)
info_list = []
# history_list = []
for run in tqdm(runs, desc='Downloading data...'):
config = {k:v for k,v in run.config.items() if not k.startswith('_')}
info_dict = {'metrics':run.history(), 'config':config}
info_list.append((run.name,info_dict))
return info_list
all_df = get_data("confusezius/RevisitDML")
names_to_check = list(np.unique(['_s'.join(x[0].split('_s')[:-1]) for x in all_df]))
metrics = ['Test: discriminative_e_recall: e_recall@1', 'Test: discriminative_e_recall: e_recall@2', \
'Test: discriminative_e_recall: e_recall@4', 'Test: discriminative_nmi: nmi', \
'Test: discriminative_f1: f1', 'Test: discriminative_mAP: mAP']
metric_names = ['R@1', 'R@2', 'R@4', 'NMI', 'F1', 'mAP']
idxs = {x:[i for i,y in enumerate(all_df) if x=='_s'.join(y[0].split('_s')[:-1])] for x in names_to_check}
vals = {}
for group, runs in idxs.items():
if 'CUB' in group:
min_len = 40
elif 'CAR' in group:
min_len = 40
elif 'SOP' in group:
min_len = 40
vals[group] = {metric_name:[] for metric_name in metric_names}
vals[group]['Max_Epoch'] = []
vals[group]['Intra_over_Inter'] = []
vals[group]['Intra'] = []
vals[group]['Inter'] = []
vals[group]['Rho1'] = []
vals[group]['Rho2'] = []
vals[group]['Rho3'] = []
vals[group]['Rho4'] = []
for i,run in enumerate(runs):
name, data = all_df[run]
for metric,metric_name in zip(metrics, metric_names):
if len(data['metrics']):
sub_data = list(data['metrics'][metric])
if len(sub_data)>min_len:
vals[group][metric_name].append(np.nanmax(sub_data))
if metric_name=='R@1':
r_argmax = np.nanargmax(sub_data)
vals[group]['Max_Epoch'].append(r_argmax)
vals[group]['Intra_over_Inter'].append(data['metrics']['Train: discriminative_dists: dists@intra_over_inter'][r_argmax])
vals[group]['Intra'].append(data['metrics']['Train: discriminative_dists: dists@intra'][r_argmax])
vals[group]['Inter'].append(data['metrics']['Train: discriminative_dists: dists@inter'][r_argmax])
vals[group]['Rho1'].append(data['metrics']['Train: discriminative_rho_spectrum: rho_spectrum@-1'][r_argmax])
vals[group]['Rho2'].append(data['metrics']['Train: discriminative_rho_spectrum: rho_spectrum@1'][r_argmax])
vals[group]['Rho3'].append(data['metrics']['Train: discriminative_rho_spectrum: rho_spectrum@2'][r_argmax])
vals[group]['Rho4'].append(data['metrics']['Train: discriminative_rho_spectrum: rho_spectrum@10'][r_argmax])
vals[group] = {metric_name:(np.mean(metric_vals),np.std(metric_vals)) for metric_name, metric_vals in vals[group].items()}
###
cub_vals = {key:item for key,item in vals.items() if 'CUB' in key}
car_vals = {key:item for key,item in vals.items() if 'CAR' in key}
sop_vals = {key:item for key,item in vals.items() if 'SOP' in key}
##########
def name_filter(n):
n = '_'.join(n.split('_')[1:])
return n
def name_adjust(n, prep='', app='', for_plot=True):
if 'Margin_b06_Distance' in n:
t = 'Margin (D), \\beta=0.6' if for_plot else 'Margin (D, \\beta=0.6)'
elif 'Margin_b12_Distance' in n:
t = 'Margin (D), \\beta=1.2' if for_plot else 'Margin (D, \\beta=1.2)'
elif 'ArcFace' in n:
t = 'ArcFace'
elif 'Histogram' in n:
t = 'Histogram'
elif 'SoftTriple' in n:
t = 'SoftTriple'
elif 'Contrastive' in n:
t = 'Contrastive (D)'
elif 'Triplet_Distance' in n:
t = 'Triplet (D)'
elif 'Quadruplet_Distance' in n:
t = 'Quadruplet (D)'
elif 'SNR_Distance' in n:
t = 'SNR (D)'
elif 'Triplet_Random' in n:
t = 'Triplet (R)'
elif 'Triplet_Semihard' in n:
t = 'Triplet (S)'
elif 'Triplet_Softhard' in n:
t = 'Triplet (H)'
elif 'Softmax' in n:
t = 'Softmax'
elif 'MS' in n:
t = 'Multisimilarity'
else:
t = '_'.join(n.split('_')[1:])
if for_plot:
t = r'${0}$'.format(t)
return prep+t+app
########
def single_table(vals):
print_str = ''
for name,metrics in vals.items():
prep = 'R-' if 'reg_' in name else ''
name = name_adjust(name, for_plot=False, prep=prep)
add = '{0} & ${1:2.2f}\\pm{2:2.2f}$ & ${3:2.2f}\\pm{4:2.2f}$ & ${5:2.2f}\\pm{6:2.2f}$ & ${7:2.2f}\\pm{8:2.2f}$ & ${9:2.2f}\\pm{10:2.2f}$ & ${11:2.2f}\\pm{12:2.2f}$'.format(name,
metrics['R@1'][0]*100, metrics['R@1'][1]*100,
metrics['R@2'][0]*100, metrics['R@2'][1]*100,
metrics['F1'][0]*100, metrics['F1'][1]*100,
metrics['mAP'][0]*100, metrics['mAP'][1]*100,
metrics['NMI'][0]*100, metrics['NMI'][1]*100,
metrics['Max_Epoch'][0], metrics['Max_Epoch'][1])
print_str += add
print_str += '\\'
print_str += '\\'
print_str += '\n'
return print_str
print(single_table(cub_vals))
print(single_table(car_vals))
print(single_table(sop_vals))
########
def shared_table():
cub_names, car_names, sop_names = list(cub_vals.keys()), list(car_vals.keys()), list(sop_vals.keys())
cub_names = [name_adjust(n, for_plot=False, prep='R-' if 'reg_' in n else '') for n in cub_names]
cub_vals_2 = {name_adjust(n, for_plot=False, prep='R-' if 'reg_' in n else ''):item for n,item in cub_vals.items()}
car_names = [name_adjust(n, for_plot=False, prep='R-' if 'reg_' in n else '') for n in car_names]
car_vals_2 = {name_adjust(n, for_plot=False, prep='R-' if 'reg_' in n else ''):item for n,item in car_vals.items()}
sop_names = [name_adjust(n, for_plot=False, prep='R-' if 'reg_' in n else '') for n in sop_names]
sop_vals_2 = {name_adjust(n, for_plot=False, prep='R-' if 'reg_' in n else ''):item for n,item in sop_vals.items()}
cub_vvals, car_vvals, sop_vvals = list(cub_vals.values()), list(car_vals.values()), list(sop_vals.values())
unique_names = np.unique(np.concatenate([cub_names, car_names, sop_names], axis=0).reshape(-1))
unique_names = sorted([x for x in unique_names if 'R-' not in x]) + sorted([x for x in unique_names if 'R-' in x])
print_str = ''
for name in unique_names:
cub_rm, cub_rs = ('{0:2.2f}'.format(cub_vals_2[name]['R@1'][0]*100), '{0:2.2f}'.format(cub_vals_2[name]['R@1'][1]*100)) if name in cub_vals_2 else ('-', '-')
cub_nm, cub_ns = ('{0:2.2f}'.format(cub_vals_2[name]['NMI'][0]*100), '{0:2.2f}'.format(cub_vals_2[name]['NMI'][1]*100)) if name in cub_vals_2 else ('-', '-')
car_rm, car_rs = ('{0:2.2f}'.format(car_vals_2[name]['R@1'][0]*100), '{0:2.2f}'.format(car_vals_2[name]['R@1'][1]*100)) if name in car_vals_2 else ('-', '-')
car_nm, car_ns = ('{0:2.2f}'.format(car_vals_2[name]['NMI'][0]*100), '{0:2.2f}'.format(car_vals_2[name]['NMI'][1]*100)) if name in car_vals_2 else ('-', '-')
sop_rm, sop_rs = ('{0:2.2f}'.format(sop_vals_2[name]['R@1'][0]*100), '{0:2.2f}'.format(sop_vals_2[name]['R@1'][1]*100)) if name in sop_vals_2 else ('-', '-')
sop_nm, sop_ns = ('{0:2.2f}'.format(sop_vals_2[name]['NMI'][0]*100), '{0:2.2f}'.format(sop_vals_2[name]['NMI'][1]*100)) if name in sop_vals_2 else ('-', '-')
add = '{0} & ${1}\\pm{2}$ & ${3}\\pm{4}$ & ${5}\\pm{6}$ & ${7}\\pm{8}$ & ${9}\\pm{10}$ & ${11}\\pm{12}$'.format(name,
cub_rm, cub_rs,
cub_nm, cub_ns,
car_rm, car_rs,
car_nm, car_ns,
sop_rm, sop_rs,
sop_nm, sop_ns)
print_str += add
print_str += '\\'
print_str += '\\'
print_str += '\n'
return print_str
print(shared_table())
"""==================================================="""
def give_basic_metr(vals, key='CUB'):
if key=='CUB':
Basic = sorted(list(filter(lambda x: '{}_'.format(key) in x, list(vals.keys()))))
elif key=='CARS':
Basic = sorted(list(filter(lambda x: 'CARS_' in x, list(vals.keys()))))
elif key=='SOP':
Basic = sorted(list(filter(lambda x: 'SOP_' in x, list(vals.keys()))))
basic_recall = np.array([vals[k]['R@1'][0] for k in Basic])
basic_recall_err = np.array([vals[k]['R@1'][1] for k in Basic])
#
basic_recall2 = np.array([vals[k]['R@2'][0] for k in Basic])
basic_recall4 = np.array([vals[k]['R@4'][0] for k in Basic])
basic_nmi = np.array([vals[k]['NMI'][0] for k in Basic])
basic_f1 = np.array([vals[k]['F1'][0] for k in Basic])
basic_map = np.array([vals[k]['mAP'][0] for k in Basic])
mets = [basic_recall, basic_recall2, basic_recall4, basic_nmi, basic_f1, basic_map]
return Basic, mets, basic_recall, basic_recall_err
def give_reg_metr(vals, key='CUB'):
if key=='CUB':
RhoReg = sorted(list(filter(lambda x: '{}reg_'.format(key) in x, list(vals.keys()))))
elif key=='CARS':
RhoReg = sorted(list(filter(lambda x: 'CARreg_' in x, list(vals.keys()))))
elif key=='SOP':
RhoReg = sorted(list(filter(lambda x: 'SOPreg_' in x, list(vals.keys()))))
rho_recall = np.array([vals[k]['R@1'][0] for k in RhoReg])
rho_recall_err = np.array([vals[k]['R@1'][1] for k in RhoReg])
#
rho_recall2 = np.array([vals[k]['R@2'][0] for k in RhoReg])
rho_recall4 = np.array([vals[k]['R@4'][0] for k in RhoReg])
rho_nmi = np.array([vals[k]['NMI'][0] for k in RhoReg])
rho_f1 = np.array([vals[k]['F1'][0] for k in RhoReg])
rho_map = np.array([vals[k]['mAP'][0] for k in RhoReg])
mets = [rho_recall, rho_recall2, rho_recall4, rho_nmi, rho_f1, rho_map]
return RhoReg, mets, rho_recall, rho_recall_err
cub_basic_names, cub_mets, cub_basic_recall, cub_basic_recall_err = give_basic_metr(cub_vals, key='CUB')
car_basic_names, car_mets, car_basic_recall, car_basic_recall_err = give_basic_metr(car_vals, key='CARS')
sop_basic_names, sop_mets, sop_basic_recall, sop_basic_recall_err = give_basic_metr(sop_vals, key='SOP')
cub_reg_names, cub_reg_mets, cub_reg_recall, cub_reg_recall_err = give_reg_metr(cub_vals, key='CUB')
car_reg_names, car_reg_mets, car_reg_recall, car_reg_recall_err = give_reg_metr(car_vals, key='CARS')
sop_reg_names, sop_reg_mets, sop_reg_recall, sop_reg_recall_err = give_reg_metr(sop_vals, key='SOP')
"""============================================================="""
# def produce_plot(basic_recall, basic_recall_err, BasicLosses, vals, ylim=[0.58, 0.635]):
#
# intra = np.array([vals[k]['Intra'][0] for k in BasicLosses])
# inter = np.array([vals[k]['Inter'][0] for k in BasicLosses])
# ratio = np.array([vals[k]['Intra_over_Inter'][0] for k in BasicLosses])
# rho1 = np.array([vals[k]['Rho1'][0] for k in BasicLosses])
# rho2 = np.array([vals[k]['Rho2'][0] for k in BasicLosses])
# rho3 = np.array([vals[k]['Rho3'][0] for k in BasicLosses])
# rho4 = np.array([vals[k]['Rho4'][0] for k in BasicLosses])
#
# def comp(met):
# sort = np.argsort(met)
# corr = np.corrcoef(met[sort],basic_recall[sort])[0,1]
# m,b = np.polyfit(met[sort], basic_recall[sort], 1)
# lim = [np.min(met)*0.9, np.max(met)*1.1]
# x = np.linspace(lim[0], lim[1], 50)
# linfit = m*x + b
# return sort, corr, linfit, x, lim
#
# intra_sort, intra_corr, intra_linfit, intra_x, intra_lim = comp(intra)
# inter_sort, inter_corr, inter_linfit, inter_x, inter_lim = comp(inter)
# ratio_sort, ratio_corr, ratio_linfit, ratio_x, ratio_lim = comp(ratio)
# rho1_sort, rho1_corr, rho1_linfit, rho1_x, rho1_lim = comp(rho1)
# rho2_sort, rho2_corr, rho2_linfit, rho2_x, rho2_lim = comp(rho2)
# rho3_sort, rho3_corr, rho3_linfit, rho3_x, rho3_lim = comp(rho3)
# rho4_sort, rho4_corr, rho4_linfit, rho4_x, rho4_lim = comp(rho4)
#
#
#
# f,ax = plt.subplots(1,4)
# # f,ax = plt.subplots(1,7)
# colors = np.array([np.random.rand(3,) for _ in range(len(basic_recall))])
# for i in range(len(colors)):
# ax[0].errorbar(intra[intra_sort][i], basic_recall[intra_sort][i], yerr=basic_recall_err[intra_sort][i], fmt='o', color=colors[intra_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# ax[1].errorbar(inter[inter_sort][i], basic_recall[inter_sort][i], yerr=basic_recall_err[inter_sort][i], fmt='o', color=colors[inter_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# ax[2].errorbar(ratio[ratio_sort][i], basic_recall[ratio_sort][i], yerr=basic_recall_err[ratio_sort][i], fmt='o', color=colors[ratio_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# # ax[3].errorbar(rho1[rho1_sort][i], basic_recall[rho1_sort][i], yerr=basic_recall_err[rho1_sort][i], fmt='o', color=colors[rho1_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# # ax[4].errorbar(rho2[rho2_sort][i], basic_recall[rho2_sort][i], yerr=basic_recall_err[rho2_sort][i], fmt='o', color=colors[rho2_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# ax[3].errorbar(rho3[rho3_sort][i], basic_recall[rho3_sort][i], yerr=basic_recall_err[rho3_sort][i], fmt='o', color=colors[rho3_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# # ax[6].errorbar(rho4[rho4_sort][i], basic_recall[rho4_sort][i], yerr=basic_recall_err[rho4_sort][i], fmt='o', color=colors[rho4_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
# ax[1].set_yticks([])
# ax[2].set_yticks([])
# ax[3].set_yticks([])
# # ax[4].set_yticks([])
# # ax[5].set_yticks([])
# # ax[6].set_yticks([])
# ax[0].plot(intra_x, intra_linfit, 'k--', alpha=0.5, linewidth=3)
# ax[1].plot(inter_x, inter_linfit, 'k--', alpha=0.5, linewidth=3)
# ax[2].plot(ratio_x, ratio_linfit, 'k--', alpha=0.5, linewidth=3)
# # ax[3].plot(rho1_x, rho1_linfit, 'k--', alpha=0.5, linewidth=3)
# ax[3].plot(rho2_x, rho2_linfit, 'k--', alpha=0.5, linewidth=3)
# # ax[5].plot(rho3_x, rho3_linfit, 'k--', alpha=0.5, linewidth=3)
# # ax[6].plot(rho4_x, rho4_linfit, 'k--', alpha=0.5, linewidth=3)
# ax[0].text('Correlation: {0:2.2f}'.format(intra_corr), fontsize=18)
# ax[1].text('Correlation: {0:2.2f}'.format(inter_corr), fontsize=18)
# ax[2].text('Correlation: {0:2.2f}'.format(ratio_corr), fontsize=18)
# # ax[3].text('Correlation: {0:2.2f}'.format(rho1_corr), fontsize=18)
# ax[3].text('Correlation: {0:2.2f}'.format(rho2_corr), fontsize=18)
# # ax[5].set_title('Correlation: {0:2.2f}'.format(rho3_corr), fontsize=18)
# # ax[6].set_title('Correlation: {0:2.2f}'.format(rho4_corr), fontsize=18)
# ax[0].set_title(r'$\pi_{intra}$', fontsize=18)
# ax[1].set_title(r'$\pi_{inter}$', fontsize=18)
# ax[2].set_title(r'$\pi_{ratio}$', fontsize=18)
# ax[3].set_title(r'$\rho(\Phi)$', fontsize=18)
# ax[0].set_ylabel('Recall Performance', fontsize=18)
# for a in ax.reshape(-1):
# a.tick_params(axis='both', which='major', labelsize=16)
# a.tick_params(axis='both', which='minor', labelsize=16)
# a.set_ylim(ylim)
# f.set_size_inches(22,8)
# f.tight_layout()
# produce_plot(cub_basic_recall, cub_basic_recall_err, cub_basic_names, cub_vals, ylim=[0.581,0.635])
# produce_plot(car_basic_recall, car_basic_recall_err, car_basic_names, car_vals, ylim=[0.70,0.82])
# produce_plot(sop_basic_recall, sop_basic_recall_err, sop_basic_names, sop_vals, ylim=[0.67,0.79])
def full_rel_plot():
recallss= [cub_basic_recall, car_basic_recall, sop_basic_recall]
rerrss = [cub_basic_recall_err, car_basic_recall_err, sop_basic_recall_err]
namess = [cub_basic_names, car_basic_names, sop_basic_names]
valss = [cub_vals, car_vals, sop_vals]
ylims = [[0.581, 0.638],[0.70,0.82],[0.67,0.79]]
f,axes = plt.subplots(3,4)
for k,(ax, recalls, rerrs, names, vals, ylim) in enumerate(zip(axes, recallss, rerrss, namess, valss, ylims)):
col = 'red' if k==3 else 'gray'
intra = np.array([vals[k]['Intra'][0] for k in names])
inter = np.array([vals[k]['Inter'][0] for k in names])
ratio = np.array([vals[k]['Intra_over_Inter'][0] for k in names])
rho = np.array([vals[k]['Rho3'][0] for k in names])
def comp(met):
sort = np.argsort(met)
corr = np.corrcoef(met[sort],recalls[sort])[0,1]
m,b = np.polyfit(met[sort], recalls[sort], 1)
lim = [np.min(met)*0.9, np.max(met)*1.1]
x = np.linspace(lim[0], lim[1], 50)
linfit = m*x + b
return sort, corr, linfit, x, lim
intra_sort, intra_corr, intra_linfit, intra_x, intra_lim = comp(intra)
inter_sort, inter_corr, inter_linfit, inter_x, inter_lim = comp(inter)
ratio_sort, ratio_corr, ratio_linfit, ratio_x, ratio_lim = comp(ratio)
rho_sort, rho_corr, rho_linfit, rho_x, rho_lim = comp(rho)
# f,ax = plt.subplots(1,7)
colors = np.array([np.random.rand(3,) for _ in range(len(recalls))])
for i in range(len(colors)):
ax[0].errorbar(intra[intra_sort][i], recalls[intra_sort][i], yerr=rerrs[intra_sort][i], fmt='o', color=colors[intra_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
ax[1].errorbar(inter[inter_sort][i], recalls[inter_sort][i], yerr=rerrs[inter_sort][i], fmt='o', color=colors[inter_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
ax[2].errorbar(ratio[ratio_sort][i], recalls[ratio_sort][i], yerr=rerrs[ratio_sort][i], fmt='o', color=colors[ratio_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
ax[3].errorbar(rho[rho_sort][i], recalls[rho_sort][i], yerr=rerrs[rho_sort][i], fmt='o', color=colors[rho_sort][i], ecolor='gray', elinewidth=3, capsize=0, label='Basic Criteria', markersize=8)
ax[1].set_yticks([])
ax[2].set_yticks([])
ax[3].set_yticks([])
ax[0].plot(intra_x, intra_linfit, 'k--', alpha=0.5, linewidth=3)
ax[1].plot(inter_x, inter_linfit, 'k--', alpha=0.5, linewidth=3)
ax[2].plot(ratio_x, ratio_linfit, 'k--', alpha=0.5, linewidth=3)
ax[3].plot(rho_x, rho_linfit, 'r--', alpha=0.5, linewidth=3)
ax[0].text(intra_lim[1]-0.7*(intra_lim[1]-intra_lim[0]),ylim[0]+0.05*(ylim[1]-ylim[0]),'Corr: {0:1.2f}'.format(intra_corr), bbox=dict(facecolor='gray', alpha=0.5), fontsize=26)
ax[1].text(inter_lim[1]-0.7*(inter_lim[1]-inter_lim[0]),ylim[0]+0.05*(ylim[1]-ylim[0]),'Corr: {0:1.2f}'.format(inter_corr), bbox=dict(facecolor='gray', alpha=0.5), fontsize=26)
ax[2].text(ratio_lim[1]-0.7*(ratio_lim[1]-ratio_lim[0]),ylim[0]+0.05*(ylim[1]-ylim[0]),'Corr: {0:1.2f}'.format(ratio_corr), bbox=dict(facecolor='gray', alpha=0.5), fontsize=26)
ax[3].text(rho_lim[1]-0.7*(rho_lim[1]-rho_lim[0]),ylim[0]+0.05*(ylim[1]-ylim[0]),'Corr: {0:1.2f}'.format(rho_corr), bbox=dict(facecolor='red', alpha=0.5), fontsize=26)
if k==0:
ax[0].set_title(r'$\pi_{intra}$', fontsize=26)
ax[1].set_title(r'$\pi_{inter}$', fontsize=26)
ax[2].set_title(r'$\pi_{ratio}$', fontsize=26)
ax[3].set_title(r'$\rho(\Phi)$', fontsize=26, color='red')
if k==0:
ax[0].set_ylabel('CUB200-2011 R@1', fontsize=23)
elif k==1:
ax[0].set_ylabel('CARS196 R@1', fontsize=23)
elif k==2:
ax[0].set_ylabel('SOP R@1', fontsize=23)
for a in ax.reshape(-1):
a.tick_params(axis='both', which='major', labelsize=20)
a.tick_params(axis='both', which='minor', labelsize=20)
a.set_ylim(ylim)
f.set_size_inches(21,15)
f.tight_layout()
f.savefig('comp_metric_relation.pdf')
f.savefig('comp_metric_relation.png')
full_rel_plot()
"""================================================"""
import itertools as it
cub_corr_mat = np.corrcoef(cub_mets)
f,ax = plt.subplots(1,3)
ax[0].imshow(cub_corr_mat, vmin=0, vmax=1, cmap='plasma')
corr_x = [0,1,2,3,4,5]
ax[0].set_xticklabels(metric_names)
ax[0].set_yticklabels(metric_names)
ax[0].set_xticks(corr_x)
ax[0].set_yticks(corr_x)
ax[0].set_xlim([-0.5,5.5])
ax[0].set_ylim([-0.5,5.5])
cs = list(it.product(corr_x, corr_x))
for c in cs:
ax[0].text(c[0]-0.2, c[1]-0.11, '{0:1.2f}'.format(cub_corr_mat[c[0], c[1]]), fontsize=18)
ax[0].tick_params(axis='both', which='major', labelsize=18)
ax[0].tick_params(axis='both', which='minor', labelsize=18)
car_corr_mat = np.corrcoef(car_mets)
ax[1].imshow(car_corr_mat, vmin=0, vmax=1, cmap='plasma')
corr_x = [0,1,2,3,4,5]
ax[1].set_xticklabels(metric_names)
ax[1].set_yticklabels(metric_names)
ax[1].set_xticks(corr_x)
ax[1].set_yticks(corr_x)
ax[1].set_xlim([-0.5,5.5])
ax[1].set_ylim([-0.5,5.5])
cs = list(it.product(corr_x, corr_x))
for c in cs:
ax[1].text(c[0]-0.2, c[1]-0.11, '{0:1.2f}'.format(car_corr_mat[c[0], c[1]]), fontsize=18)
ax[1].tick_params(axis='both', which='major', labelsize=18)
ax[1].tick_params(axis='both', which='minor', labelsize=18)
sop_corr_mat = np.corrcoef(sop_mets)
ax[2].imshow(sop_corr_mat, vmin=0, vmax=1, cmap='plasma')
corr_x = [0,1,2,3,4,5]
ax[2].set_xticklabels(metric_names)
ax[2].set_yticklabels(metric_names)
ax[2].set_xticks(corr_x)
ax[2].set_yticks(corr_x)
ax[2].set_xlim([-0.5,5.5])
ax[2].set_ylim([-0.5,5.5])
cs = list(it.product(corr_x, corr_x))
for c in cs:
ax[2].text(c[0]-0.2, c[1]-0.11, '{0:1.2f}'.format(sop_corr_mat[c[0], c[1]]), fontsize=18)
ax[2].tick_params(axis='both', which='major', labelsize=18)
ax[2].tick_params(axis='both', which='minor', labelsize=18)
ax[0].set_title('CUB200-2011', fontsize=22)
ax[1].set_title('CARS196', fontsize=22)
ax[2].set_title('Stanford Online Products', fontsize=22)
f.set_size_inches(22,8)
f.tight_layout()
f.savefig('metric_correlation_matrix.pdf')
f.savefig('metric_correlation_matrix.png')
"""=================================================="""
####
recallss, valss = [cub_basic_recall, car_basic_recall, sop_basic_recall], [cub_vals, car_vals, sop_vals]
errss = [cub_basic_recall_err, car_basic_recall_err, sop_basic_recall_err]
namess = [cub_basic_names, car_basic_names, sop_basic_names]
reg_recallss, reg_valss = [cub_reg_recall, car_reg_recall, sop_reg_recall], [cub_vals, car_vals, sop_vals]
reg_errss = [cub_reg_recall_err, car_reg_recall_err, sop_reg_recall_err]
reg_namess = [cub_reg_names, car_reg_names, sop_reg_names]
####
def plot(vals, recalls, errs, names, reg_vals=None, reg_recalls=None, reg_errs=None, reg_names=None, xlab=None, ylab=None, xlim=[0,1], ylim=[0,1], savename=None):
from adjustText import adjust_text
f, ax = plt.subplots(1)
texts = []
rho = np.array([vals[k]['Rho3'][0] for k in names])
adj_names = names
nnames = []
for n in adj_names:
nnames.append(name_adjust(n, prep='', app=''))
print(nnames)
ax.errorbar(rho, recalls, yerr=errs,color='deepskyblue',fmt='o',ecolor='deepskyblue',elinewidth=5,capsize=0,markersize=16,mec='k')
recalls = np.array(recalls)
for rho_v, rec_v, n in zip(rho, recalls, nnames):
r = ax.text(rho_v, rec_v, n, fontsize=17, va='top', ha='left')
# r = ax.text(rho_v, rec_v, n, fontsize=15, bbox=dict(facecolor='gray', alpha=0.5), va='left', ha='left')
texts.append(r)
if reg_names is not None:
rho = np.array([vals[k]['Rho3'][0] for k in reg_names])
adj_names = ['_'.join(x.split('_')[1:]) for x in reg_names]
nnames = []
for n in adj_names:
nnames.append(name_adjust(n, prep='R-', app=''))
ax.errorbar(rho, reg_recalls, yerr=reg_errs,color='orange',fmt='o',ecolor='gray',elinewidth=5,capsize=0,markersize=16,mec='k')
for rho_v, rec_v, n in zip(rho, reg_recalls, nnames):
r = ax.text(rho_v, rec_v, n, fontsize=17, va='top', ha='left', color='chocolate')
texts.append(r)
ax.tick_params(axis='both', which='major', labelsize=20)
ax.tick_params(axis='both', which='minor', labelsize=20)
if xlab is not None:
ax.set_xlabel(xlab, fontsize=20)
if ylab is not None:
ax.set_ylabel(ylab, fontsize=20)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.grid()
f.set_size_inches(25,5)
f.tight_layout()
adjust_text(texts, arrowprops=dict(arrowstyle="->", color='k', lw=1))
f.savefig('{}.png'.format(savename))
f.savefig('{}.pdf'.format(savename))
plot(valss[0], recallss[0], errss[0], namess[0], reg_valss[0], reg_recallss[0], reg_errss[0], reg_namess[0], xlab=r'$\rho(\Phi)$', ylab=r'$CUB200-2011, R@1$', xlim=[0,0.59], ylim=[0.58, 0.66], savename='Detailed_Rel_Recall_Rho_CUB')
plot(valss[1], recallss[1], errss[1], namess[1], reg_valss[1], reg_recallss[1], reg_errss[1], reg_namess[1], xlab=r'$\rho(\Phi)$', ylab=r'$CARS196, R@1$', xlim=[0,0.59], ylim=[0.7, 0.84], savename='Detailed_Rel_Recall_Rho_CAR')
plot(valss[2], recallss[2], errss[2], namess[2], reg_valss[2], reg_recallss[2], reg_errss[2], reg_namess[2], xlab=r'$\rho(\Phi)$', ylab=r'$SOP, R@1$', xlim=[0,0.59], ylim=[0.67, 0.81], savename='Detailed_Rel_Recall_Rho_SOP')
"""=================================================="""
#### First Page Figure
plt.style.use('seaborn')
total_recall = np.array(cub_basic_recall.tolist() + cub_reg_recall.tolist())
total_err = np.array(cub_basic_recall_err.tolist() + cub_reg_recall_err.tolist())
total_names = np.array(cub_basic_names+cub_reg_names)
sort_idx = np.argsort(total_recall)
f, ax = plt.subplots(1)
basic_label, reg_label = False, False
for i,idx in enumerate(sort_idx):
if 'reg_' not in total_names[idx]:
if basic_label:
ax.barh(i,total_recall[idx], xerr=total_err[idx], color='orange', alpha=0.6)
else:
ax.barh(i,total_recall[idx], xerr=total_err[idx], color='orange', alpha=0.6, label='Basic DML Criteria')
basic_label = True
ax.text(0.5703,i-0.2,name_adjust(total_names[idx]), fontsize=17)
else:
if reg_label:
ax.barh(i,total_recall[idx], xerr=total_err[idx], color='forestgreen', alpha=0.8)
else:
ax.barh(i,total_recall[idx], xerr=total_err[idx], color='forestgreen', alpha=0.8, label='Regularized Variant')
reg_label = True
ax.text(0.5703,i-0.2,name_adjust(total_names[idx], prep='R-'), fontsize=17)
ax.legend(fontsize=20)
ax.set_yticks([])
ax.set_yticklabels([])
ax.set_xticks([0.58, 0.6, 0.62, 0.64])
ax.tick_params(axis='both', which='major', labelsize=22)
ax.tick_params(axis='both', which='minor', labelsize=22)
ax.set_title('CUB200-2011, R@1', fontsize=20)
ax.set_ylim([-0.5,22.5])
ax.set_xlim([0.57, 0.655])
f.set_size_inches(15,8)
f.tight_layout()
f.savefig('FirstPage.png')
f.savefig('FirstPage.pdf')
================================================
FILE: Sample_Runs/ICML2020_RevisitDML_SampleRuns.sh
================================================
python main.py --kernels 6 --source /home/karsten_dl/Dropbox/Projects/Datasets --n_epochs 150 --seed 0 --gpu 1 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
"""============= Baseline Runs --- CUB200-2011 ===================="""
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Npair --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Npair --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Npair --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Npair --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Npair --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_GenLifted --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_GenLifted --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_GenLifted --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_GenLifted --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_GenLifted --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ProxyNCA --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ProxyNCA --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ProxyNCA --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ProxyNCA --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ProxyNCA --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Histogram --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Histogram --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Histogram --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Histogram --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Histogram --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Contrastive --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Contrastive --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Contrastive --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Contrastive --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Contrastive --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SoftTriple --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SoftTriple --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SoftTriple --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SoftTriple --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SoftTriple --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Angular --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Angular --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Angular --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Angular --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Angular --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ArcFace --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ArcFace --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ArcFace --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ArcFace --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_ArcFace --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Random --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Random --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Random --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Random --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Random --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Semihard --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Semihard --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Semihard --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Semihard --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Semihard --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Softhard --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Softhard --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Softhard --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Softhard --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Softhard --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Triplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Quadruplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Quadruplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Quadruplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Quadruplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Quadruplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b06_Distance --loss_margin_beta 0.6 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b06_Distance --loss_margin_beta 0.6 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b06_Distance --loss_margin_beta 0.6 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b06_Distance --loss_margin_beta 0.6 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b06_Distance --loss_margin_beta 0.6 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b12_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b12_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b12_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b12_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Margin_b12_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SNR_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SNR_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SNR_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SNR_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_SNR_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_MS --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_MS --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_MS --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_MS --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_MS --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Softmax --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Softmax --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Softmax --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Softmax --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUB_Softmax --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
### Specturm-Regularized Ranking Losses
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Contrastive --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Contrastive --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Contrastive --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Contrastive --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Contrastive --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b12_Distance_3 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b12_Distance_3 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b12_Distance_3 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b12_Distance_3 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Margin_b12_Distance_3 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Triplet_Distance_3 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Triplet_Distance_3 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Triplet_Distance_3 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Triplet_Distance_3 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_Triplet_Distance_3 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.4
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_SNR_Distance _3 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.3
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_SNR_Distance _3 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.3
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_SNR_Distance _3 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.3
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_SNR_Distance _3 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.3
python main.py --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CUBreg_SNR_Distance _3 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.3
"""============= Baseline Runs --- CARS196 ===================="""
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Npair --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Npair --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Npair --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Npair --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Npair --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_GenLifted --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_GenLifted --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_GenLifted --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_GenLifted --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_GenLifted --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ProxyNCA --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ProxyNCA --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ProxyNCA --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ProxyNCA --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ProxyNCA --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss proxynca --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Histogram --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Histogram --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Histogram --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Histogram --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Histogram --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 65
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Contrastive --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Contrastive --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Contrastive --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Contrastive --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Contrastive --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SoftTriple --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SoftTriple --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SoftTriple --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SoftTriple --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SoftTriple --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss softtriplet --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Angular --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Angular --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Angular --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Angular --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Angular --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ArcFace --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ArcFace --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ArcFace --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ArcFace --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_ArcFace --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Random --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Random --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Random --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Random --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Random --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Semihard --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Semihard --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Semihard --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Semihard --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Semihard --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Softhard --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Softhard --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Softhard --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Softhard --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Softhard --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Triplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Quadruplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Quadruplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Quadruplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Quadruplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Quadruplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b06_Distance --loss_margin_beta 0.6 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b06_Distance --loss_margin_beta 0.6 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b06_Distance --loss_margin_beta 0.6 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b06_Distance --loss_margin_beta 0.6 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b06_Distance --loss_margin_beta 0.6 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b12_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b12_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b12_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b12_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Margin_b12_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SNR_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SNR_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SNR_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SNR_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_SNR_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_MS --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_MS --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_MS --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_MS --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_MS --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Softmax --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Softmax --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Softmax --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Softmax --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARS_Softmax --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize
### Specturm-Regularized Ranking Losses
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Contrastive --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Contrastive --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Contrastive --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Contrastive --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Contrastive --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b12_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b12_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b12_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b12_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Margin_b12_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Triplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Triplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Triplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Triplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_Triplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_SNR_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_SNR_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_SNR_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_SNR_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
python main.py --dataset cars196 --kernels 6 --source $datapath --n_epochs 150 --log_online --project RevisitDML --group CARreg_SNR_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.35
"""============= Baseline Runs --- Online Products ===================="""
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Npair --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Npair --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Npair --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Npair --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Npair --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss npair --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_GenLifted --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_GenLifted --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_GenLifted --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_GenLifted --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_GenLifted --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss lifted --batch_mining lifted --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Histogram --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 11
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Histogram --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 11
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Histogram --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 11
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Histogram --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 11
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Histogram --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss histogram --arch resnet50_frozen_normalize --loss_histogram_nbins 11
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Contrastive --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Contrastive --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Contrastive --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Contrastive --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Contrastive --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Angular --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Angular --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Angular --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Angular --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Angular --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss angular --batch_mining npair --arch resnet50_frozen
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_ArcFace --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_ArcFace --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_ArcFace --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_ArcFace --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_ArcFace --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss arcface --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Random --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Random --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Random --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Random --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Random --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining random --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Semihard --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Semihard --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Semihard --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Semihard --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Semihard --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining semihard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Softhard --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Softhard --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Softhard --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Softhard --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Softhard --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining softhard --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Triplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Quadruplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Quadruplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Quadruplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Quadruplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Quadruplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss quadruplet --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b06_Distance --loss_margin_beta 0.6 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b06_Distance --loss_margin_beta 0.6 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b06_Distance --loss_margin_beta 0.6 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b06_Distance --loss_margin_beta 0.6 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b06_Distance --loss_margin_beta 0.6 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b12_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b12_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b12_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b12_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Margin_b12_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_SNR_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_SNR_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_SNR_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_SNR_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_SNR_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining distance --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_MS --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_MS --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_MS --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_MS --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_MS --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss multisimilarity --arch resnet50_frozen_normalize
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Softmax --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize --loss_softmax_lr 0.002
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Softmax --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize --loss_softmax_lr 0.002
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Softmax --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize --loss_softmax_lr 0.002
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Softmax --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize --loss_softmax_lr 0.002
python main.py --dataset online_products --kernels 2 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOP_Softmax --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss softmax --batch_mining distance --arch resnet50_frozen_normalize --loss_softmax_lr 0.002
### Specturm-Regularized Ranking Losses
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Contrastive --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Contrastive --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Contrastive --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Contrastive --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Contrastive --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss contrastive --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b06_Distance --loss_margin_beta 0.6 --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b12_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b12_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b12_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b12_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Margin_b12_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss margin --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Triplet_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Triplet_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Triplet_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Triplet_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_Triplet_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss triplet --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_SNR_Distance --seed 0 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_SNR_Distance --seed 1 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_SNR_Distance --seed 2 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_SNR_Distance --seed 3 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
python main.py --dataset online_products --kernels 6 --source $datapath --n_epochs 100 --log_online --project RevisitDML --group SOPreg_SNR_Distance --seed 4 --gpu 0 --bs 112 --samples_per_class 2 --loss snr --batch_mining rho_distance --arch resnet50_frozen_normalize --miner_rho_distance_cp 0.15
================================================
FILE: architectures/__init__.py
================================================
import architectures.resnet50
import architectures.googlenet
import architectures.bninception
def select(arch, opt):
if 'resnet50' in arch:
return resnet50.Network(opt)
if 'googlenet' in arch:
return googlenet.Network(opt)
if 'bninception' in arch:
return bninception.Network(opt)
================================================
FILE: architectures/bninception.py
================================================
"""
The network architectures and weights are adapted and used from the great https://github.com/Cadene/pretrained-models.pytorch.
"""
import torch, torch.nn as nn, torch.nn.functional as F
import pretrainedmodels as ptm
"""============================================================="""
class Network(torch.nn.Module):
def __init__(self, opt, return_embed_dict=False):
super(Network, self).__init__()
self.pars = opt
self.model = ptm.__dict__['bninception'](num_classes=1000, pretrained='imagenet')
self.model.last_linear = torch.nn.Linear(self.model.last_linear.in_features, opt.embed_dim)
if '_he' in opt.arch:
torch.nn.init.kaiming_normal_(self.model.last_linear.weight, mode='fan_out')
torch.nn.init.constant_(self.model.last_linear.bias, 0)
if 'frozen' in opt.arch:
for module in filter(lambda m: type(m) == nn.BatchNorm2d, self.model.modules()):
module.eval()
module.train = lambda _: None
self.return_embed_dict = return_embed_dict
self.pool_base = torch.nn.AdaptiveAvgPool2d(1)
self.pool_aux = torch.nn.AdaptiveMaxPool2d(1) if 'double' in opt.arch else None
self.name = opt.arch
self.out_adjust = None
def forward(self, x, warmup=False, **kwargs):
x = self.model.features(x)
y = self.pool_base(x)
if self.pool_aux is not None:
y += self.pool_aux(x)
if warmup:
y,x = y.detach(), x.detach()
z = self.model.last_linear(y.view(len(x),-1))
if 'normalize' in self.name:
z = F.normalize(z, dim=-1)
if self.out_adjust and not self.training:
z = self.out_adjust(z)
return z,(y,x)
def functional_forward(self, x):
pass
================================================
FILE: architectures/googlenet.py
================================================
"""
The network architectures and weights are adapted and used from the great https://github.com/Cadene/pretrained-models.pytorch.
"""
import torch, torch.nn as nn
import torchvision.models as mod
"""============================================================="""
class Network(torch.nn.Module):
def __init__(self, opt):
super(Network, self).__init__()
self.pars = opt
self.model = mod.googlenet(pretrained=True)
self.model.last_linear = torch.nn.Linear(self.model.fc.in_features, opt.embed_dim)
self.model.fc = self.model.last_linear
self.name = opt.arch
def forward(self, x):
x = self.model(x)
if not 'normalize' in self.pars.arch:
return x
return torch.nn.functional.normalize(x, dim=-1)
================================================
FILE: architectures/resnet50.py
================================================
"""
The network architectures and weights are adapted and used from the great https://github.com/Cadene/pretrained-models.pytorch.
"""
import torch, torch.nn as nn
import pretrainedmodels as ptm
"""============================================================="""
class Network(torch.nn.Module):
def __init__(self, opt):
super(Network, self).__init__()
self.pars = opt
self.model = ptm.__dict__['resnet50'](num_classes=1000, pretrained='imagenet' if not opt.not_pretrained else None)
self.name = opt.arch
if 'frozen' in opt.arch:
for module in filter(lambda m: type(m) == nn.BatchNorm2d, self.model.modules()):
module.eval()
module.train = lambda _: None
self.model.last_linear = torch.nn.Linear(self.model.last_linear.in_features, opt.embed_dim)
self.layer_blocks = nn.ModuleList([self.model.layer1, self.model.layer2, self.model.layer3, self.model.layer4])
self.out_adjust = None
def forward(self, x, **kwargs):
x = self.model.maxpool(self.model.relu(self.model.bn1(self.model.conv1(x))))
for layerblock in self.layer_blocks:
x = layerblock(x)
no_avg_feat = x
x = self.model.avgpool(x)
enc_out = x = x.view(x.size(0),-1)
x = self.model.last_linear(x)
if 'normalize' in self.pars.arch:
x = torch.nn.functional.normalize(x, dim=-1)
if self.out_adjust and not self.train:
x = self.out_adjust(x)
return x, (enc_out, no_avg_feat)
================================================
FILE: batchminer/__init__.py
================================================
from batchminer import random_distance, intra_random
from batchminer import lifted, rho_distance, softhard, npair, parametric, random, semihard, distance
BATCHMINING_METHODS = {'random':random,
'semihard':semihard,
'softhard':softhard,
'distance':distance,
'rho_distance':rho_distance,
'npair':npair,
'parametric':parametric,
'lifted':lifted,
'random_distance': random_distance,
'intra_random': intra_random}
def select(batchminername, opt):
#####
if batchminername not in BATCHMINING_METHODS: raise NotImplementedError('Batchmining {} not available!'.format(batchminername))
batchmine_lib = BATCHMINING_METHODS[batchminername]
return batchmine_lib.BatchMiner(opt)
================================================
FILE: batchminer/distance.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.lower_cutoff = opt.miner_distance_lower_cutoff
self.upper_cutoff = opt.miner_distance_upper_cutoff
self.name = 'distance'
def __call__(self, batch, labels, tar_labels=None, return_distances=False, distances=None):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
bs, dim = batch.shape
if distances is None:
distances = self.pdist(batch.detach()).clamp(min=self.lower_cutoff)
sel_d = distances.shape[-1]
positives, negatives = [],[]
labels_visited = []
anchors = []
tar_labels = labels if tar_labels is None else tar_labels
for i in range(bs):
neg = tar_labels!=labels[i]; pos = tar_labels==labels[i]
anchors.append(i)
q_d_inv = self.inverse_sphere_distances(dim, bs, distances[i], tar_labels, labels[i])
negatives.append(np.random.choice(sel_d,p=q_d_inv))
if np.sum(pos)>0:
#Sample positives randomly
if np.sum(pos)>1: pos[i] = 0
positives.append(np.random.choice(np.where(pos)[0]))
#Sample negatives by distance
sampled_triplets = [[a,p,n] for a,p,n in zip(anchors, positives, negatives)]
if return_distances:
return sampled_triplets, distances
else:
return sampled_triplets
def inverse_sphere_distances(self, dim, bs, anchor_to_all_dists, labels, anchor_label):
dists = anchor_to_all_dists
#negated log-distribution of distances of unit sphere in dimension <dim>
log_q_d_inv = ((2.0 - float(dim)) * torch.log(dists) - (float(dim-3) / 2) * torch.log(1.0 - 0.25 * (dists.pow(2))))
log_q_d_inv[np.where(labels==anchor_label)[0]] = 0
q_d_inv = torch.exp(log_q_d_inv - torch.max(log_q_d_inv)) # - max(log) for stability
q_d_inv[np.where(labels==anchor_label)[0]] = 0
### NOTE: Cutting of values with high distances made the results slightly worse. It can also lead to
# errors where there are no available negatives (for high samples_per_class cases).
# q_d_inv[np.where(dists.detach().cpu().numpy()>self.upper_cutoff)[0]] = 0
q_d_inv = q_d_inv/q_d_inv.sum()
return q_d_inv.detach().cpu().numpy()
def pdist(self, A):
prod = torch.mm(A, A.t())
norm = prod.diag().unsqueeze(1).expand_as(prod)
res = (norm + norm.t() - 2 * prod).clamp(min = 0)
return res.sqrt()
================================================
FILE: batchminer/intra_random.py
================================================
import numpy as np, torch
import itertools as it
import random
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.name = 'random'
def __call__(self, batch, labels):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
unique_classes = np.unique(labels)
indices = np.arange(len(batch))
class_dict = {i:indices[labels==i] for i in unique_classes}
sampled_triplets = []
for cls in np.random.choice(list(class_dict.keys()), len(labels), replace=True):
a,p,n = np.random.choice(class_dict[cls], 3, replace=True)
sampled_triplets.append((a,p,n))
return sampled_triplets
================================================
FILE: batchminer/lifted.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.name = 'lifted'
def __call__(self, batch, labels):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
###
anchors, positives, negatives = [], [], []
list(range(len(batch)))
for i in range(len(batch)):
anchor = i
pos = labels==labels[anchor]
###
if np.sum(pos)>1:
anchors.append(anchor)
positive_set = np.where(pos)[0]
positive_set = positive_set[positive_set!=anchor]
positives.append(positive_set)
###
negatives = []
for anchor,positive_set in zip(anchors, positives):
neg_idxs = [i for i in range(len(batch)) if i not in [anchor]+list(positive_set)]
negative_set = np.arange(len(batch))[neg_idxs]
negatives.append(negative_set)
return anchors, positives, negatives
================================================
FILE: batchminer/npair.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.name = 'npair'
def __call__(self, batch, labels):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
anchors, positives, negatives = [],[],[]
for i in range(len(batch)):
anchor = i
pos = labels==labels[anchor]
if np.sum(pos)>1:
anchors.append(anchor)
avail_positive = np.where(pos)[0]
avail_positive = avail_positive[avail_positive!=anchor]
positive = np.random.choice(avail_positive)
positives.append(positive)
###
negatives = []
for anchor,positive in zip(anchors, positives):
neg_idxs = [i for i in range(len(batch)) if i not in [anchor, positive] and labels[i] != labels[anchor]]
# neg_idxs = [i for i in range(len(batch)) if i not in [anchor, positive]]
negative_set = np.arange(len(batch))[neg_idxs]
negatives.append(negative_set)
return anchors, positives, negatives
================================================
FILE: batchminer/parametric.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.mode = opt.miner_parametric_mode
self.n_support = opt.miner_parametric_n_support
self.support_lim = opt.miner_parametric_support_lim
self.name = 'parametric'
###
self.set_sample_distr()
def __call__(self, batch, labels):
bs = batch.shape[0]
sample_distr = self.sample_distr
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
###
distances = self.pdist(batch.detach())
p_assigns = np.sum((distances.cpu().numpy().reshape(-1)>self.support[1:-1].reshape(-1,1)).T,axis=1).reshape(distances.shape)
outside_support_lim = (distances.cpu().numpy().reshape(-1)<self.support_lim[0]) * (distances.cpu().numpy().reshape(-1)>self.support_lim[1])
outside_support_lim = outside_support_lim.reshape(distances.shape)
sample_ps = sample_distr[p_assigns]
sample_ps[outside_support_lim] = 0
###
anchors, labels_visited = [], []
positives, negatives = [],[]
###
for i in range(bs):
neg = labels!=labels[i]; pos = labels==labels[i]
if np.sum(pos)>1:
anchors.append(i)
#Sample positives randomly
pos[i] = 0
positives.append(np.random.choice(np.where(pos)[0]))
#Sample negatives by distance
sample_p = sample_ps[i][neg]
sample_p = sample_p/sample_p.sum()
negatives.append(np.random.choice(np.arange(bs)[neg],p=sample_p))
sampled_triplets = [[a,p,n] for a,p,n in zip(anchors, positives, negatives)]
return sampled_triplets
def pdist(self, A, eps=1e-4):
prod = torch.mm(A, A.t())
norm = prod.diag().unsqueeze(1).expand_as(prod)
res = (norm + norm.t() - 2 * prod).clamp(min = 0)
return res.clamp(min = eps).sqrt()
def set_sample_distr(self):
self.support = np.linspace(self.support_lim[0], self.support_lim[1], self.n_support)
if self.mode == 'uniform':
self.sample_distr = np.array([1.] * (self.n_support-1))
if self.mode == 'hards':
self.sample_distr = self.support.copy()
self.sample_distr[self.support<=0.5] = 1
self.sample_distr[self.support>0.5] = 0
if self.mode == 'semihards':
self.sample_distr = self.support.copy()
from IPython import embed; embed()
self.sample_distr[(self.support<=0.7) * (self.support>=0.3)] = 1
self.sample_distr[(self.support<0.3) * (self.support>0.7)] = 0
if self.mode == 'veryhards':
self.sample_distr = self.support.copy()
self.sample_distr[self.support<=0.3] = 1
self.sample_distr[self.support>0.3] = 0
self.sample_distr = np.clip(self.sample_distr, 1e-15, 1)
self.sample_distr = self.sample_distr/self.sample_distr.sum()
================================================
FILE: batchminer/random.py
================================================
import numpy as np, torch
import itertools as it
import random
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.name = 'random'
def __call__(self, batch, labels):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
unique_classes = np.unique(labels)
indices = np.arange(len(batch))
class_dict = {i:indices[labels==i] for i in unique_classes}
sampled_triplets = [list(it.product([x],[x],[y for y in unique_classes if x!=y])) for x in unique_classes]
sampled_triplets = [x for y in sampled_triplets for x in y]
sampled_triplets = [[x for x in list(it.product(*[class_dict[j] for j in i])) if x[0]!=x[1]] for i in sampled_triplets]
sampled_triplets = [x for y in sampled_triplets for x in y]
#NOTE: The number of possible triplets is given by #unique_classes*(2*(samples_per_class-1)!)*(#unique_classes-1)*samples_per_class
sampled_triplets = random.sample(sampled_triplets, batch.shape[0])
return sampled_triplets
================================================
FILE: batchminer/random_distance.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.lower_cutoff = opt.miner_distance_lower_cutoff
self.upper_cutoff = opt.miner_distance_upper_cutoff
self.name = 'distance'
def __call__(self, batch, labels):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
labels = labels[np.random.choice(len(labels), len(labels), replace=False)]
bs = batch.shape[0]
distances = self.pdist(batch.detach()).clamp(min=self.lower_cutoff)
positives, negatives = [],[]
labels_visited = []
anchors = []
for i in range(bs):
neg = labels!=labels[i]; pos = labels==labels[i]
if np.sum(pos)>1:
anchors.append(i)
q_d_inv = self.inverse_sphere_distances(batch, distances[i], labels, labels[i])
#Sample positives randomly
pos[i] = 0
positives.append(np.random.choice(np.where(pos)[0]))
#Sample negatives by distance
negatives.append(np.random.choice(bs,p=q_d_inv))
sampled_triplets = [[a,p,n] for a,p,n in zip(anchors, positives, negatives)]
return sampled_triplets
def inverse_sphere_distances(self, batch, anchor_to_all_dists, labels, anchor_label):
dists = anchor_to_all_dists
bs,dim = len(dists),batch.shape[-1]
#negated log-distribution of distances of unit sphere in dimension <dim>
log_q_d_inv = ((2.0 - float(dim)) * torch.log(dists) - (float(dim-3) / 2) * torch.log(1.0 - 0.25 * (dists.pow(2))))
log_q_d_inv[np.where(labels==anchor_label)[0]] = 0
q_d_inv = torch.exp(log_q_d_inv - torch.max(log_q_d_inv)) # - max(log) for stability
q_d_inv[np.where(labels==anchor_label)[0]] = 0
### NOTE: Cutting of values with high distances made the results slightly worse. It can also lead to
# errors where there are no available negatives (for high samples_per_class cases).
# q_d_inv[np.where(dists.detach().cpu().numpy()>self.upper_cutoff)[0]] = 0
q_d_inv = q_d_inv/q_d_inv.sum()
return q_d_inv.detach().cpu().numpy()
def pdist(self, A):
prod = torch.mm(A, A.t())
norm = prod.diag().unsqueeze(1).expand_as(prod)
res = (norm + norm.t() - 2 * prod).clamp(min = 0)
return res.sqrt()
================================================
FILE: batchminer/rho_distance.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.lower_cutoff = opt.miner_rho_distance_lower_cutoff
self.upper_cutoff = opt.miner_rho_distance_upper_cutoff
self.contrastive_p = opt.miner_rho_distance_cp
self.name = 'rho_distance'
def __call__(self, batch, labels, return_distances=False):
if isinstance(labels, torch.Tensor): labels = labels.detach().cpu().numpy()
bs = batch.shape[0]
distances = self.pdist(batch.detach()).clamp(min=self.lower_cutoff)
positives, negatives = [],[]
labels_visited = []
anchors = []
for i in range(bs):
neg = labels!=labels[i]; pos = labels==labels[i]
use_contr = np.random.choice(2, p=[1-self.contrastive_p, self.contrastive_p])
if np.sum(pos)>1:
anchors.append(i)
if use_contr:
positives.append(i)
#Sample negatives by distance
pos[i] = 0
negatives.append(np.random.choice(np.where(pos)[0]))
else:
q_d_inv = self.inverse_sphere_distances(batch, distances[i], labels, labels[i])
#Sample positives randomly
pos[i] = 0
positives.append(np.random.choice(np.where(pos)[0]))
#Sample negatives by distance
negatives.append(np.random.choice(bs,p=q_d_inv))
sampled_triplets = [[a,p,n] for a,p,n in zip(anchors, positives, negatives)]
self.push_triplets = np.sum([m[1]==m[2] for m in labels[sampled_triplets]])
if return_distances:
return sampled_triplets, distances
else:
return sampled_triplets
def inverse_sphere_distances(self, batch, anchor_to_all_dists, labels, anchor_label):
dists = anchor_to_all_dists
bs,dim = len(dists),batch.shape[-1]
#negated log-distribution of distances of unit sphere in dimension <dim>
log_q_d_inv = ((2.0 - float(dim)) * torch.log(dists) - (float(dim-3) / 2) * torch.log(1.0 - 0.25 * (dists.pow(2))))
log_q_d_inv[np.where(labels==anchor_label)[0]] = 0
q_d_inv = torch.exp(log_q_d_inv - torch.max(log_q_d_inv)) # - max(log) for stability
q_d_inv[np.where(labels==anchor_label)[0]] = 0
### NOTE: Cutting of values with high distances made the results slightly worse. It can also lead to
# errors where there are no available negatives (for high samples_per_class cases).
# q_d_inv[np.where(dists.detach().cpu().numpy()>self.upper_cutoff)[0]] = 0
q_d_inv = q_d_inv/q_d_inv.sum()
return q_d_inv.detach().cpu().numpy()
def pdist(self, A, eps=1e-4):
prod = torch.mm(A, A.t())
norm = prod.diag().unsqueeze(1).expand_as(prod)
res = (norm + norm.t() - 2 * prod).clamp(min = 0)
return res.clamp(min = eps).sqrt()
================================================
FILE: batchminer/semihard.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.name = 'semihard'
self.margin = vars(opt)['loss_'+opt.loss+'_margin']
def __call__(self, batch, labels, return_distances=False):
if isinstance(labels, torch.Tensor): labels = labels.detach().numpy()
bs = batch.size(0)
#Return distance matrix for all elements in batch (BSxBS)
distances = self.pdist(batch.detach()).detach().cpu().numpy()
positives, negatives = [], []
anchors = []
for i in range(bs):
l, d = labels[i], distances[i]
neg = labels!=l; pos = labels==l
anchors.append(i)
pos[i] = 0
p = np.random.choice(np.where(pos)[0])
positives.append(p)
#Find negatives that violate tripet constraint semi-negatives
neg_mask = np.logical_and(neg,d>d[p])
neg_mask = np.logical_and(neg_mask,d<self.margin+d[p])
if neg_mask.sum()>0:
negatives.append(np.random.choice(np.where(neg_mask)[0]))
else:
negatives.append(np.random.choice(np.where(neg)[0]))
sampled_triplets = [[a, p, n] for a, p, n in zip(anchors, positives, negatives)]
if return_distances:
return sampled_triplets, distances
else:
return sampled_triplets
def pdist(self, A):
prod = torch.mm(A, A.t())
norm = prod.diag().unsqueeze(1).expand_as(prod)
res = (norm + norm.t() - 2 * prod).clamp(min = 0)
return res.clamp(min = 0).sqrt()
================================================
FILE: batchminer/softhard.py
================================================
import numpy as np, torch
class BatchMiner():
def __init__(self, opt):
self.par = opt
self.name = 'softhard'
def __call__(self, batch, labels, return_distances=False):
if isinstance(labels, torch.Tensor): labels = labels.detach().numpy()
bs = batch.size(0)
#Return distance matrix for all elements in batch (BSxBS)
distances = self.pdist(batch.detach()).detach().cpu().numpy()
positives, negatives = [], []
anchors = []
for i in range(bs):
l, d = labels[i], distances[i]
neg = labels!=l; pos = labels==l
if np.sum(pos)>1:
anchors.append(i)
#1 for batchelements with label l
#0 for current anchor
pos[i] = False
#Find negatives that violate triplet constraint in a hard fashion
neg_mask = np.logical_and(neg,d<d[np.where(pos)[0]].max())
#Find positives that violate triplet constraint in a hard fashion
pos_mask = np.logical_and(pos,d>d[np.where(neg)[0]].min())
if pos_mask.sum()>0:
positives.append(np.random.choice(np.where(pos_mask)[0]))
else:
positives.append(np.random.choice(np.where(pos)[0]))
if neg_mask.sum()>0:
negatives.append(np.random.choice(np.where(neg_mask)[0]))
else:
negatives.append(np.random.choice(np.where(neg)[0]))
sampled_triplets = [[a, p, n] for a, p, n in zip(anchors, positives, negatives)]
if return_distances:
return sampled_triplets, distances
else:
return sampled_triplets
def pdist(self, A):
prod = torch.mm(A, A.t())
norm = prod.diag().unsqueeze(1).expand_as(prod)
res = (norm + norm.t() - 2 * prod).clamp(min = 0)
return res.clamp(min = 0).sqrt()
================================================
FILE: criteria/__init__.py
================================================
### Standard DML criteria
from criteria import triplet, margin, proxynca, npair
from criteria import lifted, contrastive, softmax
from criteria import angular, snr, histogram, arcface
from criteria import softtriplet, multisimilarity, quadruplet
### Non-Standard Criteria
from criteria import adversarial_separation
### Basic Libs
import copy
"""================================================================================================="""
def select(loss, opt, to_optim, batchminer=None):
#####
losses = {'triplet': triplet,
'margin':margin,
'proxynca':proxynca,
'npair':npair,
'angular':angular,
'contrastive':contrastive,
'lifted':lifted,
'snr':snr,
'multisimilarity':multisimilarity,
'histogram':histogram,
'softmax':softmax,
'softtriplet':softtriplet,
'arcface':arcface,
'quadruplet':quadruplet,
'adversarial_separation':adversarial_separation}
if loss not in losses: raise NotImplementedError('Loss {} not implemented!'.format(loss))
loss_lib = losses[loss]
if loss_lib.REQUIRES_BATCHMINER:
if batchminer is None:
raise Exception('Loss {} requires one of the following batch mining methods: {}'.format(loss, loss_lib.ALLOWED_MINING_OPS))
else:
if batchminer.name not in loss_lib.ALLOWED_MINING_OPS:
raise Exception('{}-mining not allowed for {}-loss!'.format(batchminer.name, loss))
loss_par_dict = {'opt':opt}
if loss_lib.REQUIRES_BATCHMINER:
loss_par_dict['batchminer'] = batchminer
criterion = loss_lib.Criterion(**loss_par_dict)
if loss_lib.REQUIRES_OPTIM:
if hasattr(criterion,'optim_dict_list') and criterion.optim_dict_list is not None:
to_optim += criterion.optim_dict_list
else:
to_optim += [{'params':criterion.parameters(), 'lr':criterion.lr}]
return criterion, to_optim
================================================
FILE: criteria/adversarial_separation.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = list(batchminer.BATCHMINING_METHODS.keys())
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = True
### MarginLoss with trainable class separation margin beta. Runs on Mini-batches as well.
class Criterion(torch.nn.Module):
def __init__(self, opt):
"""
Args:
margin: Triplet Margin.
nu: Regularisation Parameter for beta values if they are learned.
beta: Class-Margin values.
n_classes: Number of different classes during training.
"""
super().__init__()
####
self.embed_dim = opt.embed_dim
self.proj_dim = opt.diva_decorrnet_dim
self.directions = opt.diva_decorrelations
self.weights = opt.diva_rho_decorrelation
self.name = 'adversarial_separation'
#Projection network
self.regressors = nn.ModuleDict()
for direction in self.directions:
self.regressors[direction] = torch.nn.Sequential(torch.nn.Linear(self.embed_dim, self.proj_dim), torch.nn.ReLU(), torch.nn.Linear(self.proj_dim, self.embed_dim)).to(torch.float).to(opt.device)
#Learning Rate for Projection Network
self.lr = opt.diva_decorrnet_lr
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, feature_dict):
#Apply gradient reversal on input embeddings.
adj_feature_dict = {key:torch.nn.functional.normalize(grad_reverse(features),dim=-1) for key, features in feature_dict.items()}
#Project one embedding to the space of the other (with normalization), then compute the correlation.
sim_loss = 0
for weight, direction in zip(self.weights, self.directions):
source, target = direction.split('-')
sim_loss += -1.*weight*torch.mean(torch.mean((adj_feature_dict[target]*torch.nn.functional.normalize(self.regressors[direction](adj_feature_dict[source]),dim=-1))**2,dim=-1))
return sim_loss
### Gradient Reversal Layer
class GradRev(torch.autograd.Function):
"""
Implements an autograd class to flip gradients during backward pass.
"""
def forward(self, x):
"""
Container which applies a simple identity function.
Input:
x: any torch tensor input.
"""
return x.view_as(x)
def backward(self, grad_output):
"""
Container to reverse gradient signal during backward pass.
Input:
grad_output: any computed gradient.
"""
return (grad_output * -1.)
### Gradient reverse function
def grad_reverse(x):
"""
Applies gradient reversal on input.
Input:
x: any torch tensor input.
"""
return GradRev()(x)
================================================
FILE: criteria/angular.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = ['npair']
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.tan_angular_margin = np.tan(np.pi/180*opt.loss_angular_alpha)
self.lam = opt.loss_angular_npair_ang_weight
self.l2_weight = opt.loss_angular_npair_l2
self.batchminer = batchminer
self.name = 'angular'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
####NOTE: Normalize Angular Loss, but dont normalize npair loss!
anchors, positives, negatives = self.batchminer(batch, labels)
anchors, positives, negatives = batch[anchors], batch[positives], batch[negatives]
n_anchors, n_positives, n_negatives = F.normalize(anchors, dim=1), F.normalize(positives, dim=1), F.normalize(negatives, dim=-1)
is_term1 = 4*self.tan_angular_margin**2*(n_anchors + n_positives)[:,None,:].bmm(n_negatives.permute(0,2,1))
is_term2 = 2*(1+self.tan_angular_margin**2)*n_anchors[:,None,:].bmm(n_positives[:,None,:].permute(0,2,1))
is_term1 = is_term1.view(is_term1.shape[0], is_term1.shape[-1])
is_term2 = is_term2.view(-1, 1)
inner_sum_ang = is_term1 - is_term2
angular_loss = torch.mean(torch.log(torch.sum(torch.exp(inner_sum_ang), dim=1) + 1))
inner_sum_npair = anchors[:,None,:].bmm((negatives - positives[:,None,:]).permute(0,2,1))
inner_sum_npair = inner_sum_npair.view(inner_sum_npair.shape[0], inner_sum_npair.shape[-1])
npair_loss = torch.mean(torch.log(torch.sum(torch.exp(inner_sum_npair.clamp(max=50,min=-50)), dim=1) + 1))
loss = npair_loss + self.lam*angular_loss + self.l2_weight*torch.mean(torch.norm(batch, p=2, dim=1))
return loss
================================================
FILE: criteria/arcface.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = None
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = True
### This implementation follows the pseudocode provided in the original paper.
class Criterion(torch.nn.Module):
def __init__(self, opt):
super(Criterion, self).__init__()
self.par = opt
####
self.angular_margin = opt.loss_arcface_angular_margin
self.feature_scale = opt.loss_arcface_feature_scale
self.class_map = torch.nn.Parameter(torch.Tensor(opt.n_classes, opt.embed_dim))
stdv = 1. / np.sqrt(self.class_map.size(1))
self.class_map.data.uniform_(-stdv, stdv)
self.name = 'arcface'
self.lr = opt.loss_arcface_lr
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
bs, labels = len(batch), labels.to(self.par.device)
class_map = torch.nn.functional.normalize(self.class_map, dim=1)
#Note that the similarity becomes the cosine for normalized embeddings. Denoted as 'fc7' in the paper pseudocode.
cos_similarity = batch.mm(class_map.T).clamp(min=1e-10, max=1-1e-10)
pick = torch.zeros(bs, self.par.n_classes).bool().to(self.par.device)
pick[torch.arange(bs), labels] = 1
original_target_logit = cos_similarity[pick]
theta = torch.acos(original_target_logit)
marginal_target_logit = torch.cos(theta + self.angular_margin)
class_pred = self.feature_scale * (cos_similarity + pick * (marginal_target_logit-original_target_logit).unsqueeze(1))
loss = torch.nn.CrossEntropyLoss()(class_pred, labels)
return loss
================================================
FILE: criteria/contrastive.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = list(batchminer.BATCHMINING_METHODS.keys())
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.pos_margin = opt.loss_contrastive_pos_margin
self.neg_margin = opt.loss_contrastive_neg_margin
self.batchminer = batchminer
self.name = 'contrastive'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
sampled_triplets = self.batchminer(batch, labels)
anchors = [triplet[0] for triplet in sampled_triplets]
positives = [triplet[1] for triplet in sampled_triplets]
negatives = [triplet[2] for triplet in sampled_triplets]
pos_dists = torch.mean(F.relu(nn.PairwiseDistance(p=2)(batch[anchors,:], batch[positives,:]) - self.pos_margin))
neg_dists = torch.mean(F.relu(self.neg_margin - nn.PairwiseDistance(p=2)(batch[anchors,:], batch[negatives,:])))
loss = pos_dists + neg_dists
return loss
================================================
FILE: criteria/histogram.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = None
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = False
#NOTE: This implementation follows: https://github.com/valerystrizh/pytorch-histogram-loss
class Criterion(torch.nn.Module):
def __init__(self, opt):
"""
Args:
margin: Triplet Margin.
"""
super(Criterion, self).__init__()
self.par = opt
self.nbins = opt.loss_histogram_nbins
self.bin_width = 2/(self.nbins - 1)
# We require a numpy and torch support as parts of the computation require numpy.
self.support = np.linspace(-1,1,self.nbins).reshape(-1,1)
self.support_torch = torch.linspace(-1,1,self.nbins).reshape(-1,1).to(opt.device)
self.name = 'histogram'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
#The original paper utilizes similarities instead of distances.
similarity = batch.mm(batch.T)
bs = labels.size()[0]
### We create a equality matrix for labels occuring in the batch
label_eqs = (labels.repeat(bs, 1) == labels.view(-1, 1).repeat(1, bs))
### Because the similarity matrix is symmetric, we will only utilise the upper triangular.
### These values are indexed by sim_inds
sim_inds = torch.triu(torch.ones(similarity.size()), 1).bool().to(self.par.device)
### For the upper triangular similarity matrix, we want to know where our positives/anchors and negatives are:
pos_inds = label_eqs[sim_inds].repeat(self.nbins, 1)
neg_inds = ~label_eqs[sim_inds].repeat(self.nbins, 1)
###
n_pos = pos_inds[0].sum()
n_neg = neg_inds[0].sum()
### Extract upper triangular from the similarity matrix. (produces a one-dim vector)
unique_sim = similarity[sim_inds].view(1, -1)
### We broadcast this vector to each histogram bin. Each bin entry requires a different summation in self.histogram()
unique_sim_rep = unique_sim.repeat(self.nbins, 1)
### This assigns bin-values for float-similarities. The conversion to numpy is important to avoid rounding errors in torch.
assigned_bin_values = ((unique_sim_rep.detach().cpu().numpy() + 1) / self.bin_width).astype(int) * self.bin_width - 1
### We now compute the histogram over distances
hist_pos_sim = self.histogram(unique_sim_rep, assigned_bin_values, pos_inds, n_pos)
hist_neg_sim = self.histogram(unique_sim_rep, assigned_bin_values, neg_inds, n_neg)
### Compute the CDF for the positive similarity histogram
hist_pos_rep = hist_pos_sim.view(-1, 1).repeat(1, hist_pos_sim.size()[0])
hist_pos_inds = torch.tril(torch.ones(hist_pos_rep.size()), -1).bool()
hist_pos_rep[hist_pos_inds] = 0
hist_pos_cdf = hist_pos_rep.sum(0)
loss = torch.sum(hist_neg_sim * hist_pos_cdf)
return loss
def histogram(self, unique_sim_rep, assigned_bin_values, idxs, n_elem):
"""
Compute the histogram over similarities.
Args:
unique_sim_rep: torch tensor of shape nbins x n_unique_neg_similarities.
assigned_bin_values: Bin value for each similarity value in unique_sim_rep.
idxs: positive/negative entry indices in unique_sim_rep
n_elem: number of elements in unique_sim_rep.
"""
# Cloning is required because we change the similarity matrix in-place, but need it for the
# positive AND negative histogram. Note that clone() allows for backprop.
usr = unique_sim_rep.clone()
# For each bin (and its lower neighbour bin) we find the distance values that belong.
indsa = torch.tensor((assigned_bin_values==(self.support-self.bin_width) ) & idxs.detach().cpu().numpy())
indsb = torch.tensor((assigned_bin_values==self.support) & idxs.detach().cpu().numpy())
# Set all irrelevant similarities to 0
usr[~(indsb|indsa)]=0
#
usr[indsa] = (usr - self.support_torch + self.bin_width)[indsa] / self.bin_width
usr[indsb] = (-usr + self.support_torch + self.bin_width)[indsb] / self.bin_width
return usr.sum(1)/n_elem
================================================
FILE: criteria/lifted.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = ['lifted']
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.margin = opt.loss_lifted_neg_margin
self.l2_weight = opt.loss_lifted_l2
self.batchminer = batchminer
self.name = 'lifted'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
anchors, positives, negatives = self.batchminer(batch, labels)
loss = []
for anchor, positive_set, negative_set in zip(anchors, positives, negatives):
anchor, positive_set, negative_set = batch[anchor, :].view(1,-1), batch[positive_set, :].view(1,len(positive_set),-1), batch[negative_set, :].view(1,len(negative_set),-1)
pos_term = torch.logsumexp(nn.PairwiseDistance(p=2)(anchor[:,:,None], positive_set.permute(0,2,1)), dim=1)
neg_term = torch.logsumexp(self.margin - nn.PairwiseDistance(p=2)(anchor[:,:,None], negative_set.permute(0,2,1)), dim=1)
loss.append(F.relu(pos_term + neg_term))
loss = torch.mean(torch.stack(loss)) + self.l2_weight*torch.mean(torch.norm(batch, p=2, dim=1))
return loss
================================================
FILE: criteria/margin.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = list(batchminer.BATCHMINING_METHODS.keys())
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = True
### MarginLoss with trainable class separation margin beta. Runs on Mini-batches as well.
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.n_classes = opt.n_classes
self.margin = opt.loss_margin_margin
self.nu = opt.loss_margin_nu
self.beta_constant = opt.loss_margin_beta_constant
self.beta_val = opt.loss_margin_beta
if opt.loss_margin_beta_constant:
self.beta = opt.loss_margin_beta
else:
self.beta = torch.nn.Parameter(torch.ones(opt.n_classes)*opt.loss_margin_beta)
self.batchminer = batchminer
self.name = 'margin'
self.lr = opt.loss_margin_beta_lr
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
sampled_triplets = self.batchminer(batch, labels)
if len(sampled_triplets):
d_ap, d_an = [],[]
for triplet in sampled_triplets:
train_triplet = {'Anchor': batch[triplet[0],:], 'Positive':batch[triplet[1],:], 'Negative':batch[triplet[2]]}
pos_dist = ((train_triplet['Anchor']-train_triplet['Positive']).pow(2).sum()+1e-8).pow(1/2)
neg_dist = ((train_triplet['Anchor']-train_triplet['Negative']).pow(2).sum()+1e-8).pow(1/2)
d_ap.append(pos_dist)
d_an.append(neg_dist)
d_ap, d_an = torch.stack(d_ap), torch.stack(d_an)
if self.beta_constant:
beta = self.beta
else:
beta = torch.stack([self.beta[labels[triplet[0]]] for triplet in sampled_triplets]).to(torch.float).to(d_ap.device)
pos_loss = torch.nn.functional.relu(d_ap-beta+self.margin)
neg_loss = torch.nn.functional.relu(beta-d_an+self.margin)
pair_count = torch.sum((pos_loss>0.)+(neg_loss>0.)).to(torch.float).to(d_ap.device)
if pair_count == 0.:
loss = torch.sum(pos_loss+neg_loss)
else:
loss = torch.sum(pos_loss+neg_loss)/pair_count
if self.nu:
beta_regularization_loss = torch.sum(beta)
loss += self.nu * beta_regularisation_loss.to(torch.float).to(d_ap.device)
else:
loss = torch.tensor(0.).to(torch.float).to(batch.device)
return loss
================================================
FILE: criteria/multisimilarity.py
================================================
import torch, torch.nn as nn
"""================================================================================================="""
ALLOWED_MINING_OPS = None
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = False
class Criterion(torch.nn.Module):
def __init__(self, opt):
super(Criterion, self).__init__()
self.n_classes = opt.n_classes
self.pos_weight = opt.loss_multisimilarity_pos_weight
self.neg_weight = opt.loss_multisimilarity_neg_weight
self.margin = opt.loss_multisimilarity_margin
self.thresh = opt.loss_multisimilarity_thresh
self.name = 'multisimilarity'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
similarity = batch.mm(batch.T)
loss = []
for i in range(len(batch)):
pos_idxs = labels==labels[i]
pos_idxs[i] = 0
neg_idxs = labels!=labels[i]
anchor_pos_sim = similarity[i][pos_idxs]
anchor_neg_sim = similarity[i][neg_idxs]
### This part doesn't really work, especially when you dont have a lot of positives in the batch...
neg_idxs = (anchor_neg_sim + self.margin) > torch.min(anchor_pos_sim)
pos_idxs = (anchor_pos_sim - self.margin) < torch.max(anchor_neg_sim)
if not torch.sum(neg_idxs) or not torch.sum(pos_idxs):
continue
anchor_neg_sim = anchor_neg_sim[neg_idxs]
anchor_pos_sim = anchor_pos_sim[pos_idxs]
pos_term = 1./self.pos_weight * torch.log(1+torch.sum(torch.exp(-self.pos_weight* (anchor_pos_sim - self.thresh))))
neg_term = 1./self.neg_weight * torch.log(1+torch.sum(torch.exp(self.neg_weight * (anchor_neg_sim - self.thresh))))
loss.append(pos_term + neg_term)
loss = torch.mean(torch.stack(loss))
return loss
================================================
FILE: criteria/npair.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = ['npair']
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
"""
Args:
"""
super(Criterion, self).__init__()
self.pars = opt
self.l2_weight = opt.loss_npair_l2
self.batchminer = batchminer
self.name = 'npair'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
anchors, positives, negatives = self.batchminer(batch, labels)
##
loss = 0
if 'bninception' in self.pars.arch:
### clamping/value reduction to avoid initial overflow for high embedding dimensions!
batch = batch/4
for anchor, positive, negative_set in zip(anchors, positives, negatives):
a_embs, p_embs, n_embs = batch[anchor:anchor+1], batch[positive:positive+1], batch[negative_set]
inner_sum = a_embs[:,None,:].bmm((n_embs - p_embs[:,None,:]).permute(0,2,1))
inner_sum = inner_sum.view(inner_sum.shape[0], inner_sum.shape[-1])
loss = loss + torch.mean(torch.log(torch.sum(torch.exp(inner_sum), dim=1) + 1))/len(anchors)
loss = loss + self.l2_weight*torch.mean(torch.norm(batch, p=2, dim=1))/len(anchors)
return loss
================================================
FILE: criteria/proxynca.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = None
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = True
class Criterion(torch.nn.Module):
def __init__(self, opt):
"""
Args:
opt: Namespace containing all relevant parameters.
"""
super(Criterion, self).__init__()
####
self.num_proxies = opt.n_classes
self.embed_dim = opt.embed_dim
self.proxies = torch.nn.Parameter(torch.randn(self.num_proxies, self.embed_dim)/8)
self.class_idxs = torch.arange(self.num_proxies)
self.name = 'proxynca'
self.optim_dict_list = [{'params':self.proxies, 'lr':opt.lr * opt.loss_proxynca_lrmulti}]
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
#Empirically, multiplying the embeddings during the computation of the loss seem to allow for more stable training;
#Acts as a temperature in the NCA objective.
batch = 3*torch.nn.functional.normalize(batch, dim=1)
proxies = 3*torch.nn.functional.normalize(self.proxies, dim=1)
#Group required proxies
pos_proxies = torch.stack([proxies[pos_label:pos_label+1,:] for pos_label in labels])
neg_proxies = torch.stack([torch.cat([self.class_idxs[:class_label],self.class_idxs[class_label+1:]]) for class_label in labels])
neg_proxies = torch.stack([proxies[neg_labels,:] for neg_labels in neg_proxies])
#Compute Proxy-distances
dist_to_neg_proxies = torch.sum((batch[:,None,:]-neg_proxies).pow(2),dim=-1)
dist_to_pos_proxies = torch.sum((batch[:,None,:]-pos_proxies).pow(2),dim=-1)
#Compute final proxy-based NCA loss
loss = torch.mean(dist_to_pos_proxies[:,0] + torch.logsumexp(-dist_to_neg_proxies, dim=1))
return loss
================================================
FILE: criteria/quadruplet.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = list(batchminer.BATCHMINING_METHODS.keys())
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.batchminer = batchminer
self.name = 'quadruplet'
self.margin_alpha_1 = opt.loss_quadruplet_margin_alpha_1
self.margin_alpha_2 = opt.loss_quadruplet_margin_alpha_2
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def triplet_distance(self, anchor, positive, negative):
return torch.nn.functional.relu(torch.norm(anchor-positive, p=2, dim=-1)-torch.norm(anchor-negative, p=2, dim=-1)+self.margin_alpha_1)
def quadruplet_distance(self, anchor, positive, negative, fourth_negative):
return torch.nn.functional.relu(torch.norm(anchor-positive, p=2, dim=-1)-torch.norm(negative-fourth_negative, p=2, dim=-1)+self.margin_alpha_2)
def forward(self, batch, labels, **kwargs):
sampled_triplets = self.batchminer(batch, labels)
anchors = np.array([triplet[0] for triplet in sampled_triplets]).reshape(-1,1)
positives = np.array([triplet[1] for triplet in sampled_triplets]).reshape(-1,1)
negatives = np.array([triplet[2] for triplet in sampled_triplets]).reshape(-1,1)
fourth_negatives = negatives!=negatives.T
fourth_negatives = [np.random.choice(np.arange(len(batch))[idxs]) for idxs in fourth_negatives]
triplet_loss = self.triplet_distance(batch[anchors,:],batch[positives,:],batch[negatives,:])
quadruplet_loss = self.quadruplet_distance(batch[anchors,:],batch[positives,:],batch[negatives,:],batch[fourth_negatives,:])
return torch.mean(triplet_loss) + torch.mean(quadruplet_loss)
================================================
FILE: criteria/snr.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = list(batchminer.BATCHMINING_METHODS.keys())
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
### This implements the Signal-To-Noise Ratio Triplet Loss
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.margin = opt.loss_snr_margin
self.reg_lambda = opt.loss_snr_reg_lambda
self.batchminer = batchminer
if self.batchminer.name=='distance': self.reg_lambda = 0
self.name = 'snr'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
sampled_triplets = self.batchminer(batch, labels)
anchors = [triplet[0] for triplet in sampled_triplets]
positives = [triplet[1] for triplet in sampled_triplets]
negatives = [triplet[2] for triplet in sampled_triplets]
pos_snr = torch.var(batch[anchors,:]-batch[positives,:], dim=1)/torch.var(batch[anchors,:], dim=1)
neg_snr = torch.var(batch[anchors,:]-batch[negatives,:], dim=1)/torch.var(batch[anchors,:], dim=1)
reg_loss = torch.mean(torch.abs(torch.sum(batch[anchors,:],dim=1)))
snr_loss = torch.nn.functional.relu(pos_snr - neg_snr + self.margin)
snr_loss = torch.sum(snr_loss)/torch.sum(snr_loss>0)
loss = snr_loss + self.reg_lambda * reg_loss
return loss
================================================
FILE: criteria/softmax.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = None
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = True
### This Implementation follows: https://github.com/azgo14/classification_metric_learning
class Criterion(torch.nn.Module):
def __init__(self, opt):
super(Criterion, self).__init__()
self.par = opt
self.temperature = opt.loss_softmax_temperature
self.class_map = torch.nn.Parameter(torch.Tensor(opt.n_classes, opt.embed_dim))
stdv = 1. / np.sqrt(self.class_map.size(1))
self.class_map.data.uniform_(-stdv, stdv)
self.name = 'softmax'
self.lr = opt.loss_softmax_lr
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
class_mapped_batch = torch.nn.functional.linear(batch, torch.nn.functional.normalize(self.class_map, dim=1))
loss = torch.nn.CrossEntropyLoss()(class_mapped_batch/self.temperature, labels.to(torch.long).to(self.par.device))
return loss
================================================
FILE: criteria/softtriplet.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = None
REQUIRES_BATCHMINER = False
REQUIRES_OPTIM = True
### This implementation follows https://github.com/idstcv/SoftTriple
class Criterion(torch.nn.Module):
def __init__(self, opt):
super(Criterion, self).__init__()
####
self.par = opt
self.n_classes = opt.n_classes
####
self.n_centroids = opt.loss_softtriplet_n_centroids
self.margin_delta = opt.loss_softtriplet_margin_delta
self.gamma = opt.loss_softtriplet_gamma
self.lam = opt.loss_softtriplet_lambda
self.reg_weight = opt.loss_softtriplet_reg_weight
####
self.reg_norm = self.n_classes*self.n_centroids*(self.n_centroids-1)
self.reg_indices = torch.zeros((self.n_classes*self.n_centroids, self.n_classes*self.n_centroids), dtype=torch.bool).to(opt.device)
for i in range(0, self.n_classes):
for j in range(0, self.n_centroids):
self.reg_indices[i*self.n_centroids+j, i*self.n_centroids+j+1:(i+1)*self.n_centroids] = 1
####
self.intra_class_centroids = torch.nn.Parameter(torch.Tensor(opt.embed_dim, self.n_classes*self.n_centroids))
stdv = 1. / np.sqrt(self.intra_class_centroids.size(1))
self.intra_class_centroids.data.uniform_(-stdv, stdv)
self.name = 'softtriplet'
self.lr = opt.lr*opt.loss_softtriplet_lr
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def forward(self, batch, labels, **kwargs):
bs = batch.size(0)
intra_class_centroids = torch.nn.functional.normalize(self.intra_class_centroids, dim=1)
similarities_to_centroids = batch.mm(intra_class_centroids).reshape(-1, self.n_classes, self.n_centroids)
soft_weight_over_centroids = torch.nn.Softmax(dim=1)(self.gamma*similarities_to_centroids)
per_class_embed = torch.sum(soft_weight_over_centroids * similarities_to_centroids, dim=2)
margin_delta = torch.zeros(per_class_embed.shape).to(self.par.device)
margin_delta[torch.arange(0, bs), labels] = self.margin_delta
centroid_classification_loss = torch.nn.CrossEntropyLoss()(self.lam*(per_class_embed-margin_delta), labels.to(torch.long).to(self.par.device))
inter_centroid_similarity = intra_class_centroids.T.mm(intra_class_centroids)
regularisation_loss = torch.sum(torch.sqrt(2.00001-2*inter_centroid_similarity[self.reg_indices]))/self.reg_norm
return centroid_classification_loss + self.reg_weight * regularisation_loss
================================================
FILE: criteria/triplet.py
================================================
import numpy as np
import torch, torch.nn as nn, torch.nn.functional as F
import batchminer
"""================================================================================================="""
ALLOWED_MINING_OPS = list(batchminer.BATCHMINING_METHODS.keys())
REQUIRES_BATCHMINER = True
REQUIRES_OPTIM = False
### Standard Triplet Loss, finds triplets in Mini-batches.
class Criterion(torch.nn.Module):
def __init__(self, opt, batchminer):
super(Criterion, self).__init__()
self.margin = opt.loss_triplet_margin
self.batchminer = batchminer
self.name = 'triplet'
####
self.ALLOWED_MINING_OPS = ALLOWED_MINING_OPS
self.REQUIRES_BATCHMINER = REQUIRES_BATCHMINER
self.REQUIRES_OPTIM = REQUIRES_OPTIM
def triplet_distance(self, anchor, positive, negative):
return torch.nn.functional.relu((anchor-positive).pow(2).sum()-(anchor-negative).pow(2).sum()+self.margin)
def forward(self, batch, labels, **kwargs):
if isinstance(labels, torch.Tensor): labels = labels.cpu().numpy()
sampled_triplets = self.batchminer(batch, labels)
loss = torch.stack([self.triplet_distance(batch[triplet[0],:],batch[triplet[1],:],batch[triplet[2],:]) for triplet in sampled_triplets])
return torch.mean(loss)
================================================
FILE: datasampler/__init__.py
================================================
import datasampler.class_random_sampler
import datasampler.random_sampler
import datasampler.greedy_coreset_sampler
import datasampler.fid_batchmatch_sampler
import datasampler.disthist_batchmatch_sampler
import datasampler.d2_coreset_sampler
def select(sampler, opt, image_dict, image_list=None, **kwargs):
if 'batchmatch' in sampler:
if sampler=='disthist_batchmatch':
sampler_lib = disthist_batchmatch_sampler
elif sampler=='fid_batchmatch':
sampler_lib = spc_fid_batchmatch_sampler
elif 'random' in sampler:
if 'class' in sampler:
sampler_lib = class_random_sampler
elif 'full' in sampler:
sampler_lib = random_sampler
elif 'coreset' in sampler:
if 'greedy' in sampler:
sampler_lib = greedy_coreset_sampler
elif 'd2' in sampler:
sampler_lib = d2_coreset_sampler
else:
raise Exception('Minibatch sampler <{}> not available!'.format(sampler))
sampler = sampler_lib.Sampler(opt,image_dict=image_dict,image_list=image_list)
return sampler
gitextract_la23kpcg/
├── .gitignore
├── LICENSE
├── README.md
├── Result_Evaluations.py
├── Sample_Runs/
│ └── ICML2020_RevisitDML_SampleRuns.sh
├── architectures/
│ ├── __init__.py
│ ├── bninception.py
│ ├── googlenet.py
│ └── resnet50.py
├── batchminer/
│ ├── __init__.py
│ ├── distance.py
│ ├── intra_random.py
│ ├── lifted.py
│ ├── npair.py
│ ├── parametric.py
│ ├── random.py
│ ├── random_distance.py
│ ├── rho_distance.py
│ ├── semihard.py
│ └── softhard.py
├── criteria/
│ ├── __init__.py
│ ├── adversarial_separation.py
│ ├── angular.py
│ ├── arcface.py
│ ├── contrastive.py
│ ├── histogram.py
│ ├── lifted.py
│ ├── margin.py
│ ├── multisimilarity.py
│ ├── npair.py
│ ├── proxynca.py
│ ├── quadruplet.py
│ ├── snr.py
│ ├── softmax.py
│ ├── softtriplet.py
│ └── triplet.py
├── datasampler/
│ ├── __init__.py
│ ├── class_random_sampler.py
│ ├── d2_coreset_sampler.py
│ ├── disthist_batchmatch_sampler.py
│ ├── fid_batchmatch_sampler.py
│ ├── greedy_coreset_sampler.py
│ ├── random_sampler.py
│ └── samplers.py
├── datasets/
│ ├── __init__.py
│ ├── basic_dataset_scaffold.py
│ ├── cars196.py
│ ├── cub200.py
│ └── stanford_online_products.py
├── evaluation/
│ └── __init__.py
├── main.py
├── metrics/
│ ├── __init__.py
│ ├── c_f1.py
│ ├── c_mAP_1000.py
│ ├── c_mAP_c.py
│ ├── c_mAP_lim.py
│ ├── c_nmi.py
│ ├── c_recall.py
│ ├── compute_stack.py
│ ├── dists.py
│ ├── e_recall.py
│ ├── f1.py
│ ├── mAP.py
│ ├── mAP_1000.py
│ ├── mAP_c.py
│ ├── mAP_lim.py
│ ├── nmi.py
│ └── rho_spectrum.py
├── parameters.py
├── toy_experiments/
│ └── toy_example_diagonal_lines.py
└── utilities/
├── __init__.py
├── logger.py
└── misc.py
SYMBOL INDEX (260 symbols across 66 files)
FILE: Result_Evaluations.py
function get_data (line 9) | def get_data(project):
function name_filter (line 80) | def name_filter(n):
function name_adjust (line 84) | def name_adjust(n, prep='', app='', for_plot=True):
function single_table (line 123) | def single_table(vals):
function shared_table (line 148) | def shared_table():
function give_basic_metr (line 190) | def give_basic_metr(vals, key='CUB'):
function give_reg_metr (line 212) | def give_reg_metr(vals, key='CUB'):
function full_rel_plot (line 326) | def full_rel_plot():
function plot (line 481) | def plot(vals, recalls, errs, names, reg_vals=None, reg_recalls=None, re...
FILE: architectures/__init__.py
function select (line 5) | def select(arch, opt):
FILE: architectures/bninception.py
class Network (line 10) | class Network(torch.nn.Module):
method __init__ (line 11) | def __init__(self, opt, return_embed_dict=False):
method forward (line 35) | def forward(self, x, warmup=False, **kwargs):
method functional_forward (line 49) | def functional_forward(self, x):
FILE: architectures/googlenet.py
class Network (line 12) | class Network(torch.nn.Module):
method __init__ (line 13) | def __init__(self, opt):
method forward (line 24) | def forward(self, x):
FILE: architectures/resnet50.py
class Network (line 12) | class Network(torch.nn.Module):
method __init__ (line 13) | def __init__(self, opt):
method forward (line 33) | def forward(self, x, **kwargs):
FILE: batchminer/__init__.py
function select (line 16) | def select(batchminername, opt):
FILE: batchminer/distance.py
class BatchMiner (line 6) | class BatchMiner():
method __init__ (line 7) | def __init__(self, opt):
method __call__ (line 13) | def __call__(self, batch, labels, tar_labels=None, return_distances=Fa...
method inverse_sphere_distances (line 48) | def inverse_sphere_distances(self, dim, bs, anchor_to_all_dists, label...
method pdist (line 66) | def pdist(self, A):
FILE: batchminer/intra_random.py
class BatchMiner (line 5) | class BatchMiner():
method __init__ (line 6) | def __init__(self, opt):
method __call__ (line 10) | def __call__(self, batch, labels):
FILE: batchminer/lifted.py
class BatchMiner (line 3) | class BatchMiner():
method __init__ (line 4) | def __init__(self, opt):
method __call__ (line 8) | def __call__(self, batch, labels):
FILE: batchminer/npair.py
class BatchMiner (line 2) | class BatchMiner():
method __init__ (line 3) | def __init__(self, opt):
method __call__ (line 7) | def __call__(self, batch, labels):
FILE: batchminer/parametric.py
class BatchMiner (line 4) | class BatchMiner():
method __init__ (line 5) | def __init__(self, opt):
method __call__ (line 17) | def __call__(self, batch, labels):
method pdist (line 58) | def pdist(self, A, eps=1e-4):
method set_sample_distr (line 65) | def set_sample_distr(self):
FILE: batchminer/random.py
class BatchMiner (line 5) | class BatchMiner():
method __init__ (line 6) | def __init__(self, opt):
method __call__ (line 10) | def __call__(self, batch, labels):
FILE: batchminer/random_distance.py
class BatchMiner (line 4) | class BatchMiner():
method __init__ (line 5) | def __init__(self, opt):
method __call__ (line 11) | def __call__(self, batch, labels):
method inverse_sphere_distances (line 38) | def inverse_sphere_distances(self, batch, anchor_to_all_dists, labels,...
method pdist (line 57) | def pdist(self, A):
FILE: batchminer/rho_distance.py
class BatchMiner (line 4) | class BatchMiner():
method __init__ (line 5) | def __init__(self, opt):
method __call__ (line 13) | def __call__(self, batch, labels, return_distances=False):
method inverse_sphere_distances (line 50) | def inverse_sphere_distances(self, batch, anchor_to_all_dists, labels,...
method pdist (line 69) | def pdist(self, A, eps=1e-4):
FILE: batchminer/semihard.py
class BatchMiner (line 4) | class BatchMiner():
method __init__ (line 5) | def __init__(self, opt):
method __call__ (line 10) | def __call__(self, batch, labels, return_distances=False):
method pdist (line 43) | def pdist(self, A):
FILE: batchminer/softhard.py
class BatchMiner (line 4) | class BatchMiner():
method __init__ (line 5) | def __init__(self, opt):
method __call__ (line 9) | def __call__(self, batch, labels, return_distances=False):
method pdist (line 50) | def pdist(self, A):
FILE: criteria/__init__.py
function select (line 13) | def select(loss, opt, to_optim, batchminer=None):
FILE: criteria/adversarial_separation.py
class Criterion (line 12) | class Criterion(torch.nn.Module):
method __init__ (line 13) | def __init__(self, opt):
method forward (line 49) | def forward(self, feature_dict):
class GradRev (line 62) | class GradRev(torch.autograd.Function):
method forward (line 66) | def forward(self, x):
method backward (line 75) | def backward(self, grad_output):
function grad_reverse (line 85) | def grad_reverse(x):
FILE: criteria/angular.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method forward (line 29) | def forward(self, batch, labels, **kwargs):
FILE: criteria/arcface.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt):
method forward (line 36) | def forward(self, batch, labels, **kwargs):
FILE: criteria/contrastive.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method forward (line 27) | def forward(self, batch, labels, **kwargs):
FILE: criteria/histogram.py
class Criterion (line 12) | class Criterion(torch.nn.Module):
method __init__ (line 13) | def __init__(self, opt):
method forward (line 36) | def forward(self, batch, labels, **kwargs):
method histogram (line 81) | def histogram(self, unique_sim_rep, assigned_bin_values, idxs, n_elem):
FILE: criteria/lifted.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method forward (line 28) | def forward(self, batch, labels, **kwargs):
FILE: criteria/margin.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method forward (line 39) | def forward(self, batch, labels, **kwargs):
FILE: criteria/multisimilarity.py
class Criterion (line 10) | class Criterion(torch.nn.Module):
method __init__ (line 11) | def __init__(self, opt):
method forward (line 28) | def forward(self, batch, labels, **kwargs):
FILE: criteria/npair.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method forward (line 29) | def forward(self, batch, labels, **kwargs):
FILE: criteria/proxynca.py
class Criterion (line 12) | class Criterion(torch.nn.Module):
method __init__ (line 13) | def __init__(self, opt):
method forward (line 39) | def forward(self, batch, labels, **kwargs):
FILE: criteria/quadruplet.py
class Criterion (line 10) | class Criterion(torch.nn.Module):
method __init__ (line 11) | def __init__(self, opt, batchminer):
method triplet_distance (line 27) | def triplet_distance(self, anchor, positive, negative):
method quadruplet_distance (line 30) | def quadruplet_distance(self, anchor, positive, negative, fourth_negat...
method forward (line 33) | def forward(self, batch, labels, **kwargs):
FILE: criteria/snr.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method forward (line 29) | def forward(self, batch, labels, **kwargs):
FILE: criteria/softmax.py
class Criterion (line 12) | class Criterion(torch.nn.Module):
method __init__ (line 13) | def __init__(self, opt):
method forward (line 33) | def forward(self, batch, labels, **kwargs):
FILE: criteria/softtriplet.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt):
method forward (line 50) | def forward(self, batch, labels, **kwargs):
FILE: criteria/triplet.py
class Criterion (line 11) | class Criterion(torch.nn.Module):
method __init__ (line 12) | def __init__(self, opt, batchminer):
method triplet_distance (line 24) | def triplet_distance(self, anchor, positive, negative):
method forward (line 27) | def forward(self, batch, labels, **kwargs):
FILE: datasampler/__init__.py
function select (line 9) | def select(sampler, opt, image_dict, image_list=None, **kwargs):
FILE: datasampler/class_random_sampler.py
class Sampler (line 12) | class Sampler(torch.utils.data.sampler.Sampler):
method __init__ (line 16) | def __init__(self, opt, image_dict, image_list, **kwargs):
method __iter__ (line 35) | def __iter__(self):
method __len__ (line 48) | def __len__(self):
FILE: datasampler/d2_coreset_sampler.py
class Sampler (line 12) | class Sampler(torch.utils.data.sampler.Sampler):
method __init__ (line 16) | def __init__(self, opt, image_dict, image_list):
method __iter__ (line 38) | def __iter__(self):
method precompute_indices (line 43) | def precompute_indices(self):
method replace_storage_entries (line 72) | def replace_storage_entries(self, embeddings, indices):
method create_storage (line 75) | def create_storage(self, dataloader, model, device):
method d2_coreset (line 90) | def d2_coreset(self, calls, pos):
method __len__ (line 144) | def __len__(self):
FILE: datasampler/disthist_batchmatch_sampler.py
class Sampler (line 12) | class Sampler(torch.utils.data.sampler.Sampler):
method __init__ (line 16) | def __init__(self, opt, image_dict, image_list):
method __iter__ (line 39) | def __iter__(self):
method precompute_indices (line 56) | def precompute_indices(self):
method replace_storage_entries (line 69) | def replace_storage_entries(self, embeddings, indices):
method create_storage (line 72) | def create_storage(self, dataloader, model, device):
method spc_batchfinder (line 87) | def spc_batchfinder(self, n_samples):
method get_distmat (line 99) | def get_distmat(self, arr):
method disthist_match (line 105) | def disthist_match(self, calls, pos):
method __len__ (line 164) | def __len__(self):
FILE: datasampler/fid_batchmatch_sampler.py
class Sampler (line 12) | class Sampler(torch.utils.data.sampler.Sampler):
method __init__ (line 16) | def __init__(self, opt, image_dict, image_list):
method __iter__ (line 38) | def __iter__(self):
method precompute_indices (line 54) | def precompute_indices(self):
method replace_storage_entries (line 67) | def replace_storage_entries(self, embeddings, indices):
method create_storage (line 70) | def create_storage(self, dataloader, model, device):
method spc_batchfinder (line 85) | def spc_batchfinder(self, n_samples):
method spc_fid_match (line 97) | def spc_fid_match(self, calls, pos):
method __len__ (line 147) | def __len__(self):
FILE: datasampler/greedy_coreset_sampler.py
class Sampler (line 12) | class Sampler(torch.utils.data.sampler.Sampler):
method __init__ (line 16) | def __init__(self, opt, image_dict, image_list):
method __iter__ (line 39) | def __iter__(self):
method precompute_indices (line 44) | def precompute_indices(self):
method replace_storage_entries (line 75) | def replace_storage_entries(self, embeddings, indices):
method create_storage (line 78) | def create_storage(self, dataloader, model, device):
method full_storage_update (line 93) | def full_storage_update(self, dataloader, model, device):
method greedy_coreset (line 111) | def greedy_coreset(self, calls, pos):
method __len__ (line 155) | def __len__(self):
FILE: datasampler/random_sampler.py
class Sampler (line 12) | class Sampler(torch.utils.data.sampler.Sampler):
method __init__ (line 16) | def __init__(self, opt, image_dict, image_list=None):
method __iter__ (line 28) | def __iter__(self):
method __len__ (line 40) | def __len__(self):
FILE: datasampler/samplers.py
function sampler_parse_args (line 8) | def sampler_parse_args(parser):
class AdvancedSampler (line 18) | class AdvancedSampler(torch.utils.data.sampler.Sampler):
method __init__ (line 22) | def __init__(self, method='class_random', random_subset_perc=0.1, batc...
method create_storage (line 34) | def create_storage(self, dataloader, model, device):
method update_storage (line 54) | def update_storage(self, embeddings, indices):
method __iter__ (line 58) | def __iter__(self):
method __len__ (line 100) | def __len__(self):
method pdistsq (line 103) | def pdistsq(self, A):
method greedy_coreset (line 108) | def greedy_coreset(self, A, samples):
method presample_infobatch (line 127) | def presample_infobatch(self, classes, A, samples):
FILE: datasets/__init__.py
function select (line 6) | def select(dataset, opt, data_path):
FILE: datasets/basic_dataset_scaffold.py
class BaseDataset (line 9) | class BaseDataset(Dataset):
method __init__ (line 10) | def __init__(self, image_dict, opt, is_validation=False):
method init_setup (line 50) | def init_setup(self):
method ensure_3dim (line 72) | def ensure_3dim(self, img):
method __getitem__ (line 78) | def __getitem__(self, idx):
method __len__ (line 88) | def __len__(self):
FILE: datasets/cars196.py
function Give (line 5) | def Give(opt, datapath):
FILE: datasets/cub200.py
function Give (line 4) | def Give(opt, datapath):
FILE: datasets/stanford_online_products.py
function Give (line 6) | def Give(opt, datapath):
FILE: evaluation/__init__.py
function evaluate (line 7) | def evaluate(dataset, LOG, metric_computer, dataloaders, model, opt, eva...
function set_checkpoint (line 68) | def set_checkpoint(model, opt, progress_saver, savepath, aux=None):
function recover_closest_standard (line 82) | def recover_closest_standard(feature_matrix_all, image_paths, save_path,...
FILE: metrics/__init__.py
function select (line 12) | def select(metricname, opt):
class MetricComputer (line 61) | class MetricComputer():
method __init__ (line 62) | def __init__(self, metric_names, opt):
method compute_standard (line 69) | def compute_standard(self, opt, model, dataloader, evaltypes, device, ...
FILE: metrics/c_f1.py
class Metric (line 5) | class Metric():
method __init__ (line 6) | def __init__(self, **kwargs):
method __call__ (line 10) | def __call__(self, target_labels, computed_cluster_labels_cosine, feat...
FILE: metrics/c_mAP_1000.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features_cosine):
FILE: metrics/c_mAP_c.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features_cosine):
FILE: metrics/c_mAP_lim.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features_cosine):
FILE: metrics/c_nmi.py
class Metric (line 3) | class Metric():
method __init__ (line 4) | def __init__(self, **kwargs):
method __call__ (line 8) | def __call__(self, target_labels, computed_cluster_labels_cosine):
FILE: metrics/c_recall.py
class Metric (line 3) | class Metric():
method __init__ (line 4) | def __init__(self, k, **kwargs):
method __call__ (line 9) | def __call__(self, target_labels, k_closest_classes_cosine, **kwargs):
FILE: metrics/dists.py
class Metric (line 6) | class Metric():
method __init__ (line 7) | def __init__(self, mode, **kwargs):
method __call__ (line 12) | def __call__(self, features, target_labels):
FILE: metrics/e_recall.py
class Metric (line 3) | class Metric():
method __init__ (line 4) | def __init__(self, k, **kwargs):
method __call__ (line 9) | def __call__(self, target_labels, k_closest_classes, **kwargs):
FILE: metrics/f1.py
class Metric (line 5) | class Metric():
method __init__ (line 6) | def __init__(self, **kwargs):
method __call__ (line 10) | def __call__(self, target_labels, computed_cluster_labels, features, c...
FILE: metrics/mAP.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features):
FILE: metrics/mAP_1000.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features):
FILE: metrics/mAP_c.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features):
FILE: metrics/mAP_lim.py
class Metric (line 7) | class Metric():
method __init__ (line 8) | def __init__(self, **kwargs):
method __call__ (line 12) | def __call__(self, target_labels, features):
FILE: metrics/nmi.py
class Metric (line 3) | class Metric():
method __init__ (line 4) | def __init__(self, **kwargs):
method __call__ (line 8) | def __call__(self, target_labels, computed_cluster_labels):
FILE: metrics/rho_spectrum.py
class Metric (line 6) | class Metric():
method __init__ (line 7) | def __init__(self, embed_dim, mode, **kwargs):
method __call__ (line 13) | def __call__(self, features):
FILE: parameters.py
function basic_training_parameters (line 5) | def basic_training_parameters(parser):
function wandb_parameters (line 61) | def wandb_parameters(parser):
function loss_specific_parameters (line 73) | def loss_specific_parameters(parser):
function batchmining_specific_parameters (line 142) | def batchmining_specific_parameters(parser):
function batch_creation_parameters (line 154) | def batch_creation_parameters(parser):
FILE: toy_experiments/toy_example_diagonal_lines.py
class Backbone (line 50) | class Backbone(nn.Module):
method __init__ (line 51) | def __init__(self):
method forward (line 55) | def forward(self, x):
function train (line 67) | def train(net2train, p_switch=0):
function get_embeds (line 119) | def get_embeds(net):
FILE: utilities/logger.py
class CSV_Writer (line 8) | class CSV_Writer():
method __init__ (line 9) | def __init__(self, save_path):
method log (line 14) | def log(self, group, segments, content):
class InfoPlotter (line 30) | class InfoPlotter():
method __init__ (line 31) | def __init__(self, save_path, title='Training Log', figsize=(25,19)):
method make_plot (line 37) | def make_plot(self, base_title, title_append, sub_plots, sub_plots_data):
function set_logging (line 64) | def set_logging(opt):
class Progress_Saver (line 89) | class Progress_Saver():
method __init__ (line 90) | def __init__(self):
method log (line 93) | def log(self, segment, content, group=None):
class LOGGER (line 104) | class LOGGER():
method __init__ (line 105) | def __init__(self, opt, sub_loggers=[], prefix=None, start_new=True, l...
method update (line 139) | def update(self, *sub_loggers, all=False):
FILE: utilities/misc.py
function gimme_params (line 9) | def gimme_params(model):
function gimme_save_string (line 16) | def gimme_save_string(opt):
class DataParallel (line 33) | class DataParallel(nn.Module):
method __init__ (line 34) | def __init__(self, model, device_ids, dim):
method forward (line 39) | def forward(self, x):
Condensed preview — 73 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (344K chars).
[
{
"path": ".gitignore",
"chars": 54,
"preview": "__pycache__\n*.pyc\nTraining_Results\nwandb\ndiva_main.py\n"
},
{
"path": "LICENSE",
"chars": 1069,
"preview": "MIT License\n\nCopyright (c) 2020 Karsten Roth\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
},
{
"path": "README.md",
"chars": 16418,
"preview": "# Deep Metric Learning Research in PyTorch\n\n---\n## What can I find here?\n\nThis repository contains all code and implemen"
},
{
"path": "Result_Evaluations.py",
"chars": 28889,
"preview": "\"\"\"\nThis scripts downloads and evaluates W&B run data to produce plots and tables used in the original paper.\n\"\"\"\nimport"
},
{
"path": "Sample_Runs/ICML2020_RevisitDML_SampleRuns.sh",
"chars": 87966,
"preview": "python main.py --kernels 6 --source /home/karsten_dl/Dropbox/Projects/Datasets --n_epochs 150 --seed 0 --gpu 1 --bs 112 "
},
{
"path": "architectures/__init__.py",
"chars": 318,
"preview": "import architectures.resnet50\nimport architectures.googlenet\nimport architectures.bninception\n\ndef select(arch, opt):\n "
},
{
"path": "architectures/bninception.py",
"chars": 1819,
"preview": "\"\"\"\nThe network architectures and weights are adapted and used from the great https://github.com/Cadene/pretrained-model"
},
{
"path": "architectures/googlenet.py",
"chars": 794,
"preview": "\"\"\"\nThe network architectures and weights are adapted and used from the great https://github.com/Cadene/pretrained-model"
},
{
"path": "architectures/resnet50.py",
"chars": 1567,
"preview": "\"\"\"\nThe network architectures and weights are adapted and used from the great https://github.com/Cadene/pretrained-model"
},
{
"path": "batchminer/__init__.py",
"chars": 893,
"preview": "from batchminer import random_distance, intra_random\nfrom batchminer import lifted, rho_distance, softhard, npair, param"
},
{
"path": "batchminer/distance.py",
"chars": 2771,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\nclass BatchMiner():\n de"
},
{
"path": "batchminer/intra_random.py",
"chars": 743,
"preview": "import numpy as np, torch\nimport itertools as it\nimport random\n\nclass BatchMiner():\n def __init__(self, opt):\n "
},
{
"path": "batchminer/lifted.py",
"chars": 1054,
"preview": "import numpy as np, torch\n\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n self"
},
{
"path": "batchminer/npair.py",
"chars": 1166,
"preview": "import numpy as np, torch\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n self."
},
{
"path": "batchminer/parametric.py",
"chars": 3126,
"preview": "import numpy as np, torch\n\n\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n sel"
},
{
"path": "batchminer/random.py",
"chars": 1097,
"preview": "import numpy as np, torch\nimport itertools as it\nimport random\n\nclass BatchMiner():\n def __init__(self, opt):\n "
},
{
"path": "batchminer/random_distance.py",
"chars": 2515,
"preview": "import numpy as np, torch\n\n\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n sel"
},
{
"path": "batchminer/rho_distance.py",
"chars": 3093,
"preview": "import numpy as np, torch\n\n\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n sel"
},
{
"path": "batchminer/semihard.py",
"chars": 1655,
"preview": "import numpy as np, torch\n\n\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n sel"
},
{
"path": "batchminer/softhard.py",
"chars": 1982,
"preview": "import numpy as np, torch\n\n\nclass BatchMiner():\n def __init__(self, opt):\n self.par = opt\n sel"
},
{
"path": "criteria/__init__.py",
"chars": 2049,
"preview": "### Standard DML criteria\nfrom criteria import triplet, margin, proxynca, npair\nfrom criteria import lifted, contrastive"
},
{
"path": "criteria/adversarial_separation.py",
"chars": 3114,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\n\"\"\"======================="
},
{
"path": "criteria/angular.py",
"chars": 2210,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\n\"\"\"======================="
},
{
"path": "criteria/arcface.py",
"chars": 1985,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/contrastive.py",
"chars": 1421,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/histogram.py",
"chars": 4620,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/lifted.py",
"chars": 1600,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/margin.py",
"chars": 2892,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/multisimilarity.py",
"chars": 2058,
"preview": "import torch, torch.nn as nn\n\n\n\n\"\"\"====================================================================================="
},
{
"path": "criteria/npair.py",
"chars": 1662,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\n\"\"\"======================="
},
{
"path": "criteria/proxynca.py",
"chars": 2172,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\n\"\"\"======================="
},
{
"path": "criteria/quadruplet.py",
"chars": 2104,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\"\"\"========================="
},
{
"path": "criteria/snr.py",
"chars": 1706,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/softmax.py",
"chars": 1320,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/softtriplet.py",
"chars": 2897,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "criteria/triplet.py",
"chars": 1344,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport batchminer\n\n\"\"\"========================"
},
{
"path": "datasampler/__init__.py",
"chars": 1095,
"preview": "import datasampler.class_random_sampler\nimport datasampler.random_sampler\nimport datasampler.greedy_coreset_sampler\nimpo"
},
{
"path": "datasampler/class_random_sampler.py",
"chars": 1490,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\n\n\n\n\"\"\"===="
},
{
"path": "datasampler/d2_coreset_sampler.py",
"chars": 5824,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\nfrom scipy"
},
{
"path": "datasampler/disthist_batchmatch_sampler.py",
"chars": 7215,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\nfrom scipy"
},
{
"path": "datasampler/fid_batchmatch_sampler.py",
"chars": 6035,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\nfrom scipy"
},
{
"path": "datasampler/greedy_coreset_sampler.py",
"chars": 6339,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\nfrom scipy"
},
{
"path": "datasampler/random_sampler.py",
"chars": 1429,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\n\n\n\n\"\"\"===="
},
{
"path": "datasampler/samplers.py",
"chars": 8158,
"preview": "import numpy as np\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom tqdm import tqdm\nimport random\n\n\n\"\"\"====="
},
{
"path": "datasets/__init__.py",
"chars": 563,
"preview": "import datasets.cub200\nimport datasets.cars196\nimport datasets.stanford_online_products\n\n\ndef select(dataset, opt, data_"
},
{
"path": "datasets/basic_dataset_scaffold.py",
"chars": 3476,
"preview": "from torch.utils.data import Dataset\nimport torchvision.transforms as transforms\nimport numpy as np\nfrom PIL import Imag"
},
{
"path": "datasets/cars196.py",
"chars": 3594,
"preview": "from datasets.basic_dataset_scaffold import BaseDataset\nimport os\n\n\ndef Give(opt, datapath):\n image_sourcepath = dat"
},
{
"path": "datasets/cub200.py",
"chars": 3737,
"preview": "from datasets.basic_dataset_scaffold import BaseDataset\nimport os\n\ndef Give(opt, datapath):\n image_sourcepath = data"
},
{
"path": "datasets/stanford_online_products.py",
"chars": 6048,
"preview": "from datasets.basic_dataset_scaffold import BaseDataset\nimport os, numpy as np\nimport pandas as pd\n\n\ndef Give(opt, datap"
},
{
"path": "evaluation/__init__.py",
"chars": 4698,
"preview": "import faiss, matplotlib.pyplot as plt, os, numpy as np, torch\nfrom PIL import Image\n\n\n\n#######################\ndef eval"
},
{
"path": "main.py",
"chars": 12220,
"preview": "\"\"\"===================================================================================================\"\"\"\n##############"
},
{
"path": "metrics/__init__.py",
"chars": 9624,
"preview": "from metrics import e_recall, nmi, f1, mAP, mAP_c, mAP_1000, mAP_lim\nfrom metrics import dists, rho_spectrum\nfrom metric"
},
{
"path": "metrics/c_f1.py",
"chars": 3253,
"preview": "import numpy as np\nfrom scipy.special import comb, binom\nimport torch\n\nclass Metric():\n def __init__(self, **kwargs):"
},
{
"path": "metrics/c_mAP_1000.py",
"chars": 1537,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/c_mAP_c.py",
"chars": 1550,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/c_mAP_lim.py",
"chars": 1607,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/c_nmi.py",
"chars": 399,
"preview": "from sklearn import metrics\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires = ['kmeans_nearest_"
},
{
"path": "metrics/c_recall.py",
"chars": 496,
"preview": "import numpy as np\n\nclass Metric():\n def __init__(self, k, **kwargs):\n self.k = k\n self.requires"
},
{
"path": "metrics/compute_stack.py",
"chars": 1,
"preview": "\n"
},
{
"path": "metrics/dists.py",
"chars": 2407,
"preview": "from scipy.spatial import distance\nfrom sklearn.preprocessing import normalize\nimport numpy as np\nimport torch\n\nclass Me"
},
{
"path": "metrics/e_recall.py",
"chars": 475,
"preview": "import numpy as np\n\nclass Metric():\n def __init__(self, k, **kwargs):\n self.k = k\n self.requires"
},
{
"path": "metrics/f1.py",
"chars": 3132,
"preview": "import numpy as np\nfrom scipy.special import comb, binom\nimport torch\n\nclass Metric():\n def __init__(self, **kwargs):"
},
{
"path": "metrics/mAP.py",
"chars": 1491,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/mAP_1000.py",
"chars": 1479,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/mAP_c.py",
"chars": 1492,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/mAP_lim.py",
"chars": 1550,
"preview": "import torch\nimport numpy as np\nimport faiss\n\n\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires "
},
{
"path": "metrics/nmi.py",
"chars": 376,
"preview": "from sklearn import metrics\n\nclass Metric():\n def __init__(self, **kwargs):\n self.requires = ['kmeans_nearest'"
},
{
"path": "metrics/rho_spectrum.py",
"chars": 1105,
"preview": "from scipy.spatial import distance\nfrom sklearn.preprocessing import normalize\nimport numpy as np\n\n\nclass Metric():\n "
},
{
"path": "parameters.py",
"chars": 15740,
"preview": "import argparse, os\n\n\n#######################################\ndef basic_training_parameters(parser):\n ##### Dataset-r"
},
{
"path": "toy_experiments/toy_example_diagonal_lines.py",
"chars": 9910,
"preview": "import os, numpy as np, matplotlib.pyplot as plt\nimport torch, torch.nn as nn, torchvision as tv\nimport numpy as np\nimpo"
},
{
"path": "utilities/__init__.py",
"chars": 0,
"preview": ""
},
{
"path": "utilities/logger.py",
"chars": 8168,
"preview": "import datetime, csv, os, numpy as np\nfrom matplotlib import pyplot as plt\nimport pickle as pkl\nfrom utilities.misc impo"
},
{
"path": "utilities/misc.py",
"chars": 1377,
"preview": "\"\"\"=============================================================================================================\"\"\"\n####"
}
]
About this extraction
This page contains the full source code of the Confusezius/Revisiting_Deep_Metric_Learning_PyTorch GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 73 files (325.4 KB), approximately 88.1k tokens, and a symbol index with 260 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.