Repository: fartashf/vsepp
Branch: master
Commit: abe382fd9c75
Files: 8
Total size: 64.3 KB
Directory structure:
gitextract_rkazkyvw/
├── .gitignore
├── LICENSE
├── README.md
├── data.py
├── evaluation.py
├── model.py
├── train.py
└── vocab.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
*.pyc
*.swp
*.ipynb_checkpoints
*.json
*.pth.tar
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# Improving Visual-Semantic Embeddings with Hard Negatives
Code for the image-caption retrieval methods from
**[VSE++: Improving Visual-Semantic Embeddings with Hard Negatives](https://arxiv.org/abs/1707.05612)**
*, F. Faghri, D. J. Fleet, J. R. Kiros, S. Fidler, Proceedings of the British Machine Vision Conference (BMVC), 2018. (BMVC Spotlight)*
## Dependencies
We recommend using Anaconda to manage the following packages.
* Python 2.7 (for Python 3, check out the `python3` branch)
* [PyTorch](http://pytorch.org/) (>0.2) (for PyTorch 0.4.1, check out the `pytorch4.1` branch)
* [NumPy](http://www.numpy.org/) (>1.12.1)
* [TensorBoard](https://github.com/TeamHG-Memex/tensorboard_logger)
* [pycocotools](https://github.com/cocodataset/cocoapi)
* [torchvision](https://github.com/pytorch/vision)
* [matplotlib](https://matplotlib.org/)
* Punkt Sentence Tokenizer:
```python
import nltk
nltk.download()
# at the interactive downloader prompt, enter: d punkt
```
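Alternatively, the Punkt models can be downloaded non-interactively:
```python
import nltk
nltk.download('punkt')
```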
## Download data
Download the dataset files and pre-trained models. We use splits produced by [Andrej Karpathy](http://cs.stanford.edu/people/karpathy/deepimagesent/). The precomputed image features are from [here](https://github.com/ryankiros/visual-semantic-embedding/) and [here](https://github.com/ivendrov/order-embedding). To use full image encoders, download the images from their original sources [here](http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html), [here](http://shannon.cs.illinois.edu/DenotationGraph/) and [here](http://mscoco.org/).
```bash
wget http://www.cs.toronto.edu/~faghri/vsepp/vocab.tar
wget http://www.cs.toronto.edu/~faghri/vsepp/data.tar
wget http://www.cs.toronto.edu/~faghri/vsepp/runs.tar
```
We refer to the path of the extracted `data.tar` files as `$DATA_PATH` and the
extracted `runs.tar` files as `$RUN_PATH`. Extract `vocab.tar` to the `./vocab`
directory.
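For example, a sketch assuming each archive was downloaded to the repository
root and unpacks into a directory of the same name (verify after extracting):
```bash
tar -xf vocab.tar                            # creates ./vocab
tar -xf data.tar && export DATA_PATH="$PWD/data"
tar -xf runs.tar && export RUN_PATH="$PWD/runs"
```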
*Update: The vocabulary was originally built using all splits (including
test-set captions). Please see issue #29 for details, and consider excluding
test-set captions if you build on this project.*
## Evaluate pre-trained models
```bash
python -c "\
from vocab import Vocabulary
import evaluation
evaluation.evalrank('$RUN_PATH/coco_vse++/model_best.pth.tar', data_path='$DATA_PATH', split='test')"
```
To do cross-validation on MSCOCO, pass `fold5=True` with a model trained using
`--data_name coco`.
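For example, a sketch where `$RUN_PATH/coco_full` is a hypothetical run
directory holding a model trained with `--data_name coco`:
```bash
python -c "\
from vocab import Vocabulary
import evaluation
evaluation.evalrank('$RUN_PATH/coco_full/model_best.pth.tar', data_path='$DATA_PATH', split='test', fold5=True)"
```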
## Training new models
Run `train.py`:
```bash
python train.py --data_path "$DATA_PATH" --data_name coco_precomp \
    --logger_name runs/coco_vse++ --max_violation
```
Arguments used to train pre-trained models:
| Method | Arguments |
| :-------: | :-------: |
| VSE0 | `--no_imgnorm` |
| VSE++ | `--max_violation` |
| Order0 | `--measure order --use_abs --margin .05 --learning_rate .001` |
| Order++ | `--measure order --max_violation` |
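For instance, a sketch of an Order++ run on precomputed COCO features,
combining the table's flags with the training command above (the logger path
is illustrative):
```bash
python train.py --data_path "$DATA_PATH" --data_name coco_precomp \
    --logger_name runs/coco_order++ --measure order --max_violation
```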
## Reference
If you found this code useful, please cite the following paper:
@inproceedings{faghri2018vse++,
  title={VSE++: Improving Visual-Semantic Embeddings with Hard Negatives},
  author={Faghri, Fartash and Fleet, David J. and Kiros, Jamie Ryan and Fidler, Sanja},
  booktitle={Proceedings of the British Machine Vision Conference ({BMVC})},
  url={https://github.com/fartashf/vsepp},
  year={2018}
}
## License
[Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
================================================
FILE: data.py
================================================
import torch
import torch.utils.data as data
import torchvision.transforms as transforms
import os
import nltk
from PIL import Image
from pycocotools.coco import COCO
import numpy as np
import json as jsonmod
def get_paths(path, name='coco', use_restval=False):
"""
Returns paths to images and annotations for the given datasets. For MSCOCO
indices are also returned to control the data split being used.
The indices are extracted from the Karpathy et al. splits using this
snippet:
>>> import json
>>> D = json.load(open('dataset_coco.json', 'r'))
>>> A=[]
>>> for i in range(len(D['images'])):
... if D['images'][i]['split'] == 'val':
... A+=D['images'][i]['sentids'][:5]
...
:param name: Dataset name, one of 'coco', 'f8k' or 'f30k'.
:param use_restval: If True, the `restval` data is included in train.
"""
roots = {}
ids = {}
if 'coco' == name:
imgdir = os.path.join(path, 'images')
capdir = os.path.join(path, 'annotations')
roots['train'] = {
'img': os.path.join(imgdir, 'train2014'),
'cap': os.path.join(capdir, 'captions_train2014.json')
}
roots['val'] = {
'img': os.path.join(imgdir, 'val2014'),
'cap': os.path.join(capdir, 'captions_val2014.json')
}
roots['test'] = {
'img': os.path.join(imgdir, 'val2014'),
'cap': os.path.join(capdir, 'captions_val2014.json')
}
roots['trainrestval'] = {
'img': (roots['train']['img'], roots['val']['img']),
'cap': (roots['train']['cap'], roots['val']['cap'])
}
ids['train'] = np.load(os.path.join(capdir, 'coco_train_ids.npy'))
ids['val'] = np.load(os.path.join(capdir, 'coco_dev_ids.npy'))[:5000]
ids['test'] = np.load(os.path.join(capdir, 'coco_test_ids.npy'))
ids['trainrestval'] = (
ids['train'],
np.load(os.path.join(capdir, 'coco_restval_ids.npy')))
if use_restval:
roots['train'] = roots['trainrestval']
ids['train'] = ids['trainrestval']
elif 'f8k' == name:
imgdir = os.path.join(path, 'images')
cap = os.path.join(path, 'dataset_flickr8k.json')
roots['train'] = {'img': imgdir, 'cap': cap}
roots['val'] = {'img': imgdir, 'cap': cap}
roots['test'] = {'img': imgdir, 'cap': cap}
ids = {'train': None, 'val': None, 'test': None}
elif 'f30k' == name:
imgdir = os.path.join(path, 'images')
cap = os.path.join(path, 'dataset_flickr30k.json')
roots['train'] = {'img': imgdir, 'cap': cap}
roots['val'] = {'img': imgdir, 'cap': cap}
roots['test'] = {'img': imgdir, 'cap': cap}
ids = {'train': None, 'val': None, 'test': None}
return roots, ids
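# Sketch of the returned structure for name='coco' (paths abbreviated):
#   roots['train'] = {'img': '<path>/images/train2014',
#                     'cap': '<path>/annotations/captions_train2014.json'}
#   ids['train'] = numpy array of Karpathy-split annotation ids
# For 'f8k'/'f30k' every split shares one json file and the ids are None.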
class CocoDataset(data.Dataset):
"""COCO Custom Dataset compatible with torch.utils.data.DataLoader."""
def __init__(self, root, json, vocab, transform=None, ids=None):
"""
Args:
root: image directory.
json: coco annotation file path.
vocab: vocabulary wrapper.
transform: transformer for image.
"""
self.root = root
# when using `restval`, two json files are needed
if isinstance(json, tuple):
self.coco = (COCO(json[0]), COCO(json[1]))
else:
self.coco = (COCO(json),)
self.root = (root,)
# if ids provided by get_paths, use split-specific ids
if ids is None:
self.ids = list(self.coco[0].anns.keys())  # self.coco is a tuple of COCO objects
else:
self.ids = ids
# if `restval` data is to be used, record the break point for ids
if isinstance(self.ids, tuple):
self.bp = len(self.ids[0])
self.ids = list(self.ids[0]) + list(self.ids[1])
else:
self.bp = len(self.ids)
self.vocab = vocab
self.transform = transform
def __getitem__(self, index):
"""This function returns a tuple that is further passed to collate_fn
"""
vocab = self.vocab
root, caption, img_id, path, image = self.get_raw_item(index)
if self.transform is not None:
image = self.transform(image)
# Convert caption (string) to word ids.
tokens = nltk.tokenize.word_tokenize(
str(caption).lower().decode('utf-8'))
caption = []
caption.append(vocab('<start>'))
caption.extend([vocab(token) for token in tokens])
caption.append(vocab('<end>'))
target = torch.Tensor(caption)
return image, target, index, img_id
def get_raw_item(self, index):
if index < self.bp:
coco = self.coco[0]
root = self.root[0]
else:
coco = self.coco[1]
root = self.root[1]
ann_id = self.ids[index]
caption = coco.anns[ann_id]['caption']
img_id = coco.anns[ann_id]['image_id']
path = coco.loadImgs(img_id)[0]['file_name']
image = Image.open(os.path.join(root, path)).convert('RGB')
return root, caption, img_id, path, image
def __len__(self):
return len(self.ids)
class FlickrDataset(data.Dataset):
"""
Dataset loader for Flickr30k and Flickr8k full datasets.
"""
def __init__(self, root, json, split, vocab, transform=None):
self.root = root
self.vocab = vocab
self.split = split
self.transform = transform
self.dataset = jsonmod.load(open(json, 'r'))['images']
self.ids = []
for i, d in enumerate(self.dataset):
if d['split'] == split:
self.ids += [(i, x) for x in range(len(d['sentences']))]
def __getitem__(self, index):
"""This function returns a tuple that is further passed to collate_fn
"""
vocab = self.vocab
root = self.root
ann_id = self.ids[index]
img_id = ann_id[0]
caption = self.dataset[img_id]['sentences'][ann_id[1]]['raw']
path = self.dataset[img_id]['filename']
image = Image.open(os.path.join(root, path)).convert('RGB')
if self.transform is not None:
image = self.transform(image)
# Convert caption (string) to word ids.
tokens = nltk.tokenize.word_tokenize(
str(caption).lower().decode('utf-8'))
caption = []
caption.append(vocab('<start>'))
caption.extend([vocab(token) for token in tokens])
caption.append(vocab('<end>'))
target = torch.Tensor(caption)
return image, target, index, img_id
def __len__(self):
return len(self.ids)
class PrecompDataset(data.Dataset):
"""
Load precomputed captions and image features
Possible options: f8k, f30k, coco, 10crop
"""
def __init__(self, data_path, data_split, vocab):
self.vocab = vocab
loc = data_path + '/'
# Captions
self.captions = []
with open(loc+'%s_caps.txt' % data_split, 'rb') as f:
for line in f:
self.captions.append(line.strip())
# Image features
self.images = np.load(loc+'%s_ims.npy' % data_split)
self.length = len(self.captions)
# rkiros data repeats each image for its 5 captions, so caption indices map
# to image indices by dividing by 5; 10crop data has no such redundancy
if self.images.shape[0] != self.length:
self.im_div = 5
else:
self.im_div = 1
# the development set for coco is large and so validation would be slow
if data_split == 'dev':
self.length = 5000
def __getitem__(self, index):
# handle the image redundancy
img_id = index/self.im_div
image = torch.Tensor(self.images[img_id])
caption = self.captions[index]
vocab = self.vocab
# Convert caption (string) to word ids.
tokens = nltk.tokenize.word_tokenize(
str(caption).lower().decode('utf-8'))
caption = []
caption.append(vocab('<start>'))
caption.extend([vocab(token) for token in tokens])
caption.append(vocab('<end>'))
target = torch.Tensor(caption)
return image, target, index, img_id
def __len__(self):
return self.length
def collate_fn(data):
"""Build mini-batch tensors from a list of (image, caption) tuples.
Args:
data: list of (image, caption, index, img_id) tuples.
- image: torch tensor of shape (3, crop_size, crop_size).
- caption: torch tensor of shape (?); variable length.
Returns:
images: torch tensor of shape (batch_size, 3, crop_size, crop_size).
targets: torch tensor of shape (batch_size, padded_length).
lengths: list; valid length for each padded caption.
ids: tuple; dataset indices of the items in the batch.
# Sort a data list by caption length
data.sort(key=lambda x: len(x[1]), reverse=True)
images, captions, ids, img_ids = zip(*data)
# Merge images (convert tuple of 3D tensor to 4D tensor)
images = torch.stack(images, 0)
# Merge captions (convert tuple of 1D tensors to a 2D tensor)
lengths = [len(cap) for cap in captions]
targets = torch.zeros(len(captions), max(lengths)).long()
for i, cap in enumerate(captions):
end = lengths[i]
targets[i, :end] = cap[:end]
return images, targets, lengths, ids
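# Illustration (hypothetical lengths): a batch with captions of token lengths
# [4, 7, 5] is sorted to [7, 5, 4]; `targets` then has shape (3, 7) with zero
# padding, and `lengths` = [7, 5, 4] lets the text encoder pack the batch.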
def get_loader_single(data_name, split, root, json, vocab, transform,
batch_size=100, shuffle=True,
num_workers=2, ids=None, collate_fn=collate_fn):
"""Returns torch.utils.data.DataLoader for custom coco dataset."""
if 'coco' in data_name:
# COCO custom dataset
dataset = CocoDataset(root=root,
json=json,
vocab=vocab,
transform=transform, ids=ids)
elif 'f8k' in data_name or 'f30k' in data_name:
dataset = FlickrDataset(root=root,
split=split,
json=json,
vocab=vocab,
transform=transform)
# Data loader
data_loader = torch.utils.data.DataLoader(dataset=dataset,
batch_size=batch_size,
shuffle=shuffle,
pin_memory=True,
num_workers=num_workers,
collate_fn=collate_fn)
return data_loader
def get_precomp_loader(data_path, data_split, vocab, opt, batch_size=100,
shuffle=True, num_workers=2):
"""Returns torch.utils.data.DataLoader for custom coco dataset."""
dset = PrecompDataset(data_path, data_split, vocab)
data_loader = torch.utils.data.DataLoader(dataset=dset,
batch_size=batch_size,
shuffle=shuffle,
pin_memory=True,
num_workers=num_workers,
collate_fn=collate_fn)
return data_loader
def get_transform(data_name, split_name, opt):
normalizer = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
t_list = []
if split_name == 'train':
t_list = [transforms.RandomResizedCrop(opt.crop_size),
transforms.RandomHorizontalFlip()]
elif split_name == 'val':
t_list = [transforms.Resize(256), transforms.CenterCrop(224)]
elif split_name == 'test':
t_list = [transforms.Resize(256), transforms.CenterCrop(224)]
t_end = [transforms.ToTensor(), normalizer]
transform = transforms.Compose(t_list + t_end)
return transform
def get_loaders(data_name, vocab, crop_size, batch_size, workers, opt):
dpath = os.path.join(opt.data_path, data_name)
if opt.data_name.endswith('_precomp'):
train_loader = get_precomp_loader(dpath, 'train', vocab, opt,
batch_size, True, workers)
val_loader = get_precomp_loader(dpath, 'dev', vocab, opt,
batch_size, False, workers)
else:
# Build Dataset Loader
roots, ids = get_paths(dpath, data_name, opt.use_restval)
transform = get_transform(data_name, 'train', opt)
train_loader = get_loader_single(opt.data_name, 'train',
roots['train']['img'],
roots['train']['cap'],
vocab, transform, ids=ids['train'],
batch_size=batch_size, shuffle=True,
num_workers=workers,
collate_fn=collate_fn)
transform = get_transform(data_name, 'val', opt)
val_loader = get_loader_single(opt.data_name, 'val',
roots['val']['img'],
roots['val']['cap'],
vocab, transform, ids=ids['val'],
batch_size=batch_size, shuffle=False,
num_workers=workers,
collate_fn=collate_fn)
return train_loader, val_loader
def get_test_loader(split_name, data_name, vocab, crop_size, batch_size,
workers, opt):
dpath = os.path.join(opt.data_path, data_name)
if opt.data_name.endswith('_precomp'):
test_loader = get_precomp_loader(dpath, split_name, vocab, opt,
batch_size, False, workers)
else:
# Build Dataset Loader
roots, ids = get_paths(dpath, data_name, opt.use_restval)
transform = get_transform(data_name, split_name, opt)
test_loader = get_loader_single(opt.data_name, split_name,
roots[split_name]['img'],
roots[split_name]['cap'],
vocab, transform, ids=ids[split_name],
batch_size=batch_size, shuffle=False,
num_workers=workers,
collate_fn=collate_fn)
return test_loader
================================================
FILE: evaluation.py
================================================
from __future__ import print_function
import os
import pickle
import numpy
from data import get_test_loader
import time
import numpy as np
from vocab import Vocabulary # NOQA
import torch
from model import VSE, order_sim
from collections import OrderedDict
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=0):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / (.0001 + self.count)
def __str__(self):
"""String representation for logging
"""
# for values that should be recorded exactly e.g. iteration number
if self.count == 0:
return str(self.val)
# for stats
return '%.4f (%.4f)' % (self.val, self.avg)
class LogCollector(object):
"""A collection of logging objects that can change from train to val"""
def __init__(self):
# to keep the order of logged variables deterministic
self.meters = OrderedDict()
def update(self, k, v, n=0):
# create a new meter if previously not recorded
if k not in self.meters:
self.meters[k] = AverageMeter()
self.meters[k].update(v, n)
def __str__(self):
"""Concatenate the meters in one log line
"""
s = ''
for i, (k, v) in enumerate(self.meters.iteritems()):
if i > 0:
s += ' '
s += k + ' ' + str(v)
return s
def tb_log(self, tb_logger, prefix='', step=None):
"""Log using tensorboard
"""
for k, v in self.meters.iteritems():
tb_logger.log_value(prefix + k, v.val, step=step)
def encode_data(model, data_loader, log_step=10, logging=print):
"""Encode all images and captions loadable by `data_loader`
"""
batch_time = AverageMeter()
val_logger = LogCollector()
# switch to evaluate mode
model.val_start()
end = time.time()
# numpy array to keep all the embeddings
img_embs = None
cap_embs = None
for i, (images, captions, lengths, ids) in enumerate(data_loader):
# make sure val logger is used
model.logger = val_logger
# compute the embeddings
img_emb, cap_emb = model.forward_emb(images, captions, lengths,
volatile=True)
# initialize the numpy arrays given the size of the embeddings
if img_embs is None:
img_embs = np.zeros((len(data_loader.dataset), img_emb.size(1)))
cap_embs = np.zeros((len(data_loader.dataset), cap_emb.size(1)))
# preserve the embeddings by copying from gpu and converting to numpy
img_embs[ids] = img_emb.data.cpu().numpy().copy()
cap_embs[ids] = cap_emb.data.cpu().numpy().copy()
# measure accuracy and record loss
model.forward_loss(img_emb, cap_emb)
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % log_step == 0:
logging('Test: [{0}/{1}]\t'
'{e_log}\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
.format(
i, len(data_loader), batch_time=batch_time,
e_log=str(model.logger)))
del images, captions
return img_embs, cap_embs
def evalrank(model_path, data_path=None, split='dev', fold5=False):
"""
Evaluate a trained model on either dev or test. If `fold5=True`, 5 fold
cross-validation is done (only for MSCOCO). Otherwise, the full data is
used for evaluation.
"""
# load model and options
checkpoint = torch.load(model_path)
opt = checkpoint['opt']
if data_path is not None:
opt.data_path = data_path
# load vocabulary used by the model
with open(os.path.join(opt.vocab_path,
'%s_vocab.pkl' % opt.data_name), 'rb') as f:
vocab = pickle.load(f)
opt.vocab_size = len(vocab)
# construct model
model = VSE(opt)
# load model state
model.load_state_dict(checkpoint['model'])
print('Loading dataset')
data_loader = get_test_loader(split, opt.data_name, vocab, opt.crop_size,
opt.batch_size, opt.workers, opt)
print('Computing results...')
img_embs, cap_embs = encode_data(model, data_loader)
print('Images: %d, Captions: %d' %
(img_embs.shape[0] / 5, cap_embs.shape[0]))
if not fold5:
# no cross-validation, full evaluation
r, rt = i2t(img_embs, cap_embs, measure=opt.measure, return_ranks=True)
ri, rti = t2i(img_embs, cap_embs,
measure=opt.measure, return_ranks=True)
ar = (r[0] + r[1] + r[2]) / 3
ari = (ri[0] + ri[1] + ri[2]) / 3
rsum = r[0] + r[1] + r[2] + ri[0] + ri[1] + ri[2]
print("rsum: %.1f" % rsum)
print("Average i2t Recall: %.1f" % ar)
print("Image to text: %.1f %.1f %.1f %.1f %.1f" % r)
print("Average t2i Recall: %.1f" % ari)
print("Text to image: %.1f %.1f %.1f %.1f %.1f" % ri)
else:
# 5fold cross-validation, only for MSCOCO
results = []
for i in range(5):
r, rt0 = i2t(img_embs[i * 5000:(i + 1) * 5000],
cap_embs[i * 5000:(i + 1) *
5000], measure=opt.measure,
return_ranks=True)
print("Image to text: %.1f, %.1f, %.1f, %.1f, %.1f" % r)
ri, rti0 = t2i(img_embs[i * 5000:(i + 1) * 5000],
cap_embs[i * 5000:(i + 1) *
5000], measure=opt.measure,
return_ranks=True)
if i == 0:
rt, rti = rt0, rti0
print("Text to image: %.1f, %.1f, %.1f, %.1f, %.1f" % ri)
ar = (r[0] + r[1] + r[2]) / 3
ari = (ri[0] + ri[1] + ri[2]) / 3
rsum = r[0] + r[1] + r[2] + ri[0] + ri[1] + ri[2]
print("rsum: %.1f ar: %.1f ari: %.1f" % (rsum, ar, ari))
results += [list(r) + list(ri) + [rsum, ar, ari]]
print("-----------------------------------")
print("Mean metrics: ")
mean_metrics = tuple(np.array(results).mean(axis=0).flatten())
print("rsum: %.1f" % (mean_metrics[10] * 6))
print("Average i2t Recall: %.1f" % mean_metrics[11])
print("Image to text: %.1f %.1f %.1f %.1f %.1f" %
mean_metrics[:5])
print("Average t2i Recall: %.1f" % mean_metrics[12])
print("Text to image: %.1f %.1f %.1f %.1f %.1f" %
mean_metrics[5:10])
torch.save({'rt': rt, 'rti': rti}, 'ranks.pth.tar')
def i2t(images, captions, npts=None, measure='cosine', return_ranks=False):
"""
Images->Text (Image Annotation)
Images: (5N, K) matrix of images
Captions: (5N, K) matrix of captions
"""
if npts is None:
npts = images.shape[0] / 5
index_list = []
ranks = numpy.zeros(npts)
top1 = numpy.zeros(npts)
for index in range(npts):
# Get query image
im = images[5 * index].reshape(1, images.shape[1])
# Compute scores
if measure == 'order':
bs = 100
if index % bs == 0:
mx = min(images.shape[0], 5 * (index + bs))
im2 = images[5 * index:mx:5]
d2 = order_sim(torch.Tensor(im2).cuda(),
torch.Tensor(captions).cuda())
d2 = d2.cpu().numpy()
d = d2[index % bs]
else:
d = numpy.dot(im, captions.T).flatten()
inds = numpy.argsort(d)[::-1]
index_list.append(inds[0])
# Score
rank = 1e20
for i in range(5 * index, 5 * index + 5, 1):
tmp = numpy.where(inds == i)[0][0]
if tmp < rank:
rank = tmp
ranks[index] = rank
top1[index] = inds[0]
# Compute metrics
r1 = 100.0 * len(numpy.where(ranks < 1)[0]) / len(ranks)
r5 = 100.0 * len(numpy.where(ranks < 5)[0]) / len(ranks)
r10 = 100.0 * len(numpy.where(ranks < 10)[0]) / len(ranks)
medr = numpy.floor(numpy.median(ranks)) + 1
meanr = ranks.mean() + 1
if return_ranks:
return (r1, r5, r10, medr, meanr), (ranks, top1)
else:
return (r1, r5, r10, medr, meanr)
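# Note on rank semantics: ranks[index] is the best 0-based position, among the
# similarity-sorted captions, of any of the 5 ground-truth captions for image
# `index`; R@K is then the percentage of images whose best rank is below K.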
def t2i(images, captions, npts=None, measure='cosine', return_ranks=False):
"""
Text->Images (Image Search)
Images: (5N, K) matrix of images
Captions: (5N, K) matrix of captions
"""
if npts is None:
npts = images.shape[0] / 5
ims = numpy.array([images[i] for i in range(0, len(images), 5)])
ranks = numpy.zeros(5 * npts)
top1 = numpy.zeros(5 * npts)
for index in range(npts):
# Get query captions
queries = captions[5 * index:5 * index + 5]
# Compute scores
if measure == 'order':
bs = 100
if 5 * index % bs == 0:
mx = min(captions.shape[0], 5 * index + bs)
q2 = captions[5 * index:mx]
d2 = order_sim(torch.Tensor(ims).cuda(),
torch.Tensor(q2).cuda())
d2 = d2.cpu().numpy()
d = d2[:, (5 * index) % bs:(5 * index) % bs + 5].T
else:
d = numpy.dot(queries, ims.T)
inds = numpy.zeros(d.shape)
for i in range(len(inds)):
inds[i] = numpy.argsort(d[i])[::-1]
ranks[5 * index + i] = numpy.where(inds[i] == index)[0][0]
top1[5 * index + i] = inds[i][0]
# Compute metrics
r1 = 100.0 * len(numpy.where(ranks < 1)[0]) / len(ranks)
r5 = 100.0 * len(numpy.where(ranks < 5)[0]) / len(ranks)
r10 = 100.0 * len(numpy.where(ranks < 10)[0]) / len(ranks)
medr = numpy.floor(numpy.median(ranks)) + 1
meanr = ranks.mean() + 1
if return_ranks:
return (r1, r5, r10, medr, meanr), (ranks, top1)
else:
return (r1, r5, r10, medr, meanr)
================================================
FILE: model.py
================================================
import torch
import torch.nn as nn
import torch.nn.init
import torchvision.models as models
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
import torch.backends.cudnn as cudnn
from torch.nn.utils.clip_grad import clip_grad_norm
import numpy as np
from collections import OrderedDict
def l2norm(X):
"""L2-normalize columns of X
"""
norm = torch.pow(X, 2).sum(dim=1, keepdim=True).sqrt()
X = torch.div(X, norm)
return X
def EncoderImage(data_name, img_dim, embed_size, finetune=False,
cnn_type='vgg19', use_abs=False, no_imgnorm=False):
"""A wrapper to image encoders. Chooses between an encoder that uses
precomputed image features, `EncoderImagePrecomp`, or an encoder that
computes image features on the fly `EncoderImageFull`.
"""
if data_name.endswith('_precomp'):
img_enc = EncoderImagePrecomp(
img_dim, embed_size, use_abs, no_imgnorm)
else:
img_enc = EncoderImageFull(
embed_size, finetune, cnn_type, use_abs, no_imgnorm)
return img_enc
# tutorials/09 - Image Captioning
class EncoderImageFull(nn.Module):
def __init__(self, embed_size, finetune=False, cnn_type='vgg19',
use_abs=False, no_imgnorm=False):
"""Load pretrained VGG19 and replace top fc layer."""
super(EncoderImageFull, self).__init__()
self.embed_size = embed_size
self.no_imgnorm = no_imgnorm
self.use_abs = use_abs
# Load a pre-trained model
self.cnn = self.get_cnn(cnn_type, True)
# For efficient memory usage.
for param in self.cnn.parameters():
param.requires_grad = finetune
# Replace the last fully connected layer of CNN with a new one
if cnn_type.startswith('vgg'):
self.fc = nn.Linear(self.cnn.classifier._modules['6'].in_features,
embed_size)
self.cnn.classifier = nn.Sequential(
*list(self.cnn.classifier.children())[:-1])
elif cnn_type.startswith('resnet'):
self.fc = nn.Linear(self.cnn.module.fc.in_features, embed_size)
self.cnn.module.fc = nn.Sequential()
self.init_weights()
def get_cnn(self, arch, pretrained):
"""Load a pretrained CNN and parallelize over GPUs
"""
if pretrained:
print("=> using pre-trained model '{}'".format(arch))
model = models.__dict__[arch](pretrained=True)
else:
print("=> creating model '{}'".format(arch))
model = models.__dict__[arch]()
if arch.startswith('alexnet') or arch.startswith('vgg'):
model.features = nn.DataParallel(model.features)
model.cuda()
else:
model = nn.DataParallel(model).cuda()
return model
def load_state_dict(self, state_dict):
"""
Handle the models saved before commit pytorch/vision@989d52a
"""
if 'cnn.classifier.1.weight' in state_dict:
state_dict['cnn.classifier.0.weight'] = state_dict[
'cnn.classifier.1.weight']
del state_dict['cnn.classifier.1.weight']
state_dict['cnn.classifier.0.bias'] = state_dict[
'cnn.classifier.1.bias']
del state_dict['cnn.classifier.1.bias']
state_dict['cnn.classifier.3.weight'] = state_dict[
'cnn.classifier.4.weight']
del state_dict['cnn.classifier.4.weight']
state_dict['cnn.classifier.3.bias'] = state_dict[
'cnn.classifier.4.bias']
del state_dict['cnn.classifier.4.bias']
super(EncoderImageFull, self).load_state_dict(state_dict)
def init_weights(self):
"""Xavier initialization for the fully connected layer
"""
r = np.sqrt(6.) / np.sqrt(self.fc.in_features +
self.fc.out_features)
self.fc.weight.data.uniform_(-r, r)
self.fc.bias.data.fill_(0)
def forward(self, images):
"""Extract image feature vectors."""
features = self.cnn(images)
# normalization in the image embedding space
features = l2norm(features)
# linear projection to the joint embedding space
features = self.fc(features)
# normalization in the joint embedding space
if not self.no_imgnorm:
features = l2norm(features)
# take the absolute value of the embedding (used in order embeddings)
if self.use_abs:
features = torch.abs(features)
return features
class EncoderImagePrecomp(nn.Module):
def __init__(self, img_dim, embed_size, use_abs=False, no_imgnorm=False):
super(EncoderImagePrecomp, self).__init__()
self.embed_size = embed_size
self.no_imgnorm = no_imgnorm
self.use_abs = use_abs
self.fc = nn.Linear(img_dim, embed_size)
self.init_weights()
def init_weights(self):
"""Xavier initialization for the fully connected layer
"""
r = np.sqrt(6.) / np.sqrt(self.fc.in_features +
self.fc.out_features)
self.fc.weight.data.uniform_(-r, r)
self.fc.bias.data.fill_(0)
def forward(self, images):
"""Extract image feature vectors."""
# assuming that the precomputed features are already l2-normalized
features = self.fc(images)
# normalize in the joint embedding space
if not self.no_imgnorm:
features = l2norm(features)
# take the absolute value of embedding (used in order embeddings)
if self.use_abs:
features = torch.abs(features)
return features
def load_state_dict(self, state_dict):
"""Copies parameters. overwritting the default one to
accept state_dict from Full model
"""
own_state = self.state_dict()
new_state = OrderedDict()
for name, param in state_dict.items():
if name in own_state:
new_state[name] = param
super(EncoderImagePrecomp, self).load_state_dict(new_state)
# tutorials/08 - Language Model
# RNN Based Language Model
class EncoderText(nn.Module):
def __init__(self, vocab_size, word_dim, embed_size, num_layers,
use_abs=False):
super(EncoderText, self).__init__()
self.use_abs = use_abs
self.embed_size = embed_size
# word embedding
self.embed = nn.Embedding(vocab_size, word_dim)
# caption embedding
self.rnn = nn.GRU(word_dim, embed_size, num_layers, batch_first=True)
self.init_weights()
def init_weights(self):
self.embed.weight.data.uniform_(-0.1, 0.1)
def forward(self, x, lengths):
"""Handles variable size captions
"""
# Embed word ids to vectors
x = self.embed(x)
packed = pack_padded_sequence(x, lengths, batch_first=True)
# Forward propagate RNN
out, _ = self.rnn(packed)
# Reshape *final* output to (batch_size, hidden_size)
padded = pad_packed_sequence(out, batch_first=True)
I = torch.LongTensor(lengths).view(-1, 1, 1)
I = Variable(I.expand(x.size(0), 1, self.embed_size)-1).cuda()
out = torch.gather(padded[0], 1, I).squeeze(1)
# normalization in the joint embedding space
out = l2norm(out)
# take absolute value, used by order embeddings
if self.use_abs:
out = torch.abs(out)
return out
def cosine_sim(im, s):
"""Cosine similarity between all the image and sentence pairs
"""
return im.mm(s.t())
def order_sim(im, s):
"""Order embeddings similarity measure $max(0, s-im)$
"""
YmX = (s.unsqueeze(1).expand(s.size(0), im.size(0), s.size(1))
- im.unsqueeze(0).expand(s.size(0), im.size(0), s.size(1)))
score = -YmX.clamp(min=0).pow(2).sum(2).sqrt().t()
return score
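# Example (hypothetical shapes): for a batch of 4 images and 4 captions in a
# 1024-d joint space,
#   im, s = torch.randn(4, 1024), torch.randn(4, 1024)
#   cosine_sim(im, s)  # (4, 4); entry [i, j] = <im_i, s_j>, the cosine when
#                      # both inputs are l2-normalized as the encoders ensure
#   order_sim(im, s)   # (4, 4); entry [i, j] = -||max(0, s_j - im_i)||_2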
class ContrastiveLoss(nn.Module):
"""
Compute contrastive loss
"""
def __init__(self, margin=0, measure=False, max_violation=False):
super(ContrastiveLoss, self).__init__()
self.margin = margin
if measure == 'order':
self.sim = order_sim
else:
self.sim = cosine_sim
self.max_violation = max_violation
def forward(self, im, s):
# compute image-sentence score matrix
scores = self.sim(im, s)
diagonal = scores.diag().view(im.size(0), 1)
d1 = diagonal.expand_as(scores)
d2 = diagonal.t().expand_as(scores)
# compare every diagonal score to scores in its column
# caption retrieval
cost_s = (self.margin + scores - d1).clamp(min=0)
# compare every diagonal score to scores in its row
# image retrieval
cost_im = (self.margin + scores - d2).clamp(min=0)
# clear diagonals
mask = torch.eye(scores.size(0)) > .5
I = Variable(mask)
if torch.cuda.is_available():
I = I.cuda()
cost_s = cost_s.masked_fill_(I, 0)
cost_im = cost_im.masked_fill_(I, 0)
# keep the maximum violating negative for each query
if self.max_violation:
cost_s = cost_s.max(1)[0]
cost_im = cost_im.max(0)[0]
return cost_s.sum() + cost_im.sum()
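# Formulation sketch: writing s(i, c) for the similarity of image i and
# caption c, and m for the margin, max_violation=False gives the sum-of-hinges
# loss
#   L = sum_{(i, c)} ( sum_{c'} [m + s(i, c') - s(i, c)]_+
#                    + sum_{i'} [m + s(i', c) - s(i, c)]_+ )
# while max_violation=True gives the VSE++ max-of-hinges loss, keeping only
# the hardest negative caption c' and the hardest negative image i' in the
# mini-batch for each positive pair (i, c).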
class VSE(object):
"""
rkiros/uvs model
"""
def __init__(self, opt):
# tutorials/09 - Image Captioning
# Build Models
self.grad_clip = opt.grad_clip
self.img_enc = EncoderImage(opt.data_name, opt.img_dim, opt.embed_size,
opt.finetune, opt.cnn_type,
use_abs=opt.use_abs,
no_imgnorm=opt.no_imgnorm)
self.txt_enc = EncoderText(opt.vocab_size, opt.word_dim,
opt.embed_size, opt.num_layers,
use_abs=opt.use_abs)
if torch.cuda.is_available():
self.img_enc.cuda()
self.txt_enc.cuda()
cudnn.benchmark = True
# Loss and Optimizer
self.criterion = ContrastiveLoss(margin=opt.margin,
measure=opt.measure,
max_violation=opt.max_violation)
params = list(self.txt_enc.parameters())
params += list(self.img_enc.fc.parameters())
if opt.finetune:
params += list(self.img_enc.cnn.parameters())
self.params = params
self.optimizer = torch.optim.Adam(params, lr=opt.learning_rate)
self.Eiters = 0
def state_dict(self):
state_dict = [self.img_enc.state_dict(), self.txt_enc.state_dict()]
return state_dict
def load_state_dict(self, state_dict):
self.img_enc.load_state_dict(state_dict[0])
self.txt_enc.load_state_dict(state_dict[1])
def train_start(self):
"""switch to train mode
"""
self.img_enc.train()
self.txt_enc.train()
def val_start(self):
"""switch to evaluate mode
"""
self.img_enc.eval()
self.txt_enc.eval()
def forward_emb(self, images, captions, lengths, volatile=False):
"""Compute the image and caption embeddings
"""
# Set mini-batch dataset
images = Variable(images, volatile=volatile)
captions = Variable(captions, volatile=volatile)
if torch.cuda.is_available():
images = images.cuda()
captions = captions.cuda()
# Forward
img_emb = self.img_enc(images)
cap_emb = self.txt_enc(captions, lengths)
return img_emb, cap_emb
def forward_loss(self, img_emb, cap_emb, **kwargs):
"""Compute the loss given pairs of image and caption embeddings
"""
loss = self.criterion(img_emb, cap_emb)
self.logger.update('Le', loss.data[0], img_emb.size(0))
return loss
def train_emb(self, images, captions, lengths, ids=None, *args):
"""One training step given images and captions.
"""
self.Eiters += 1
self.logger.update('Eit', self.Eiters)
self.logger.update('lr', self.optimizer.param_groups[0]['lr'])
# compute the embeddings
img_emb, cap_emb = self.forward_emb(images, captions, lengths)
# measure accuracy and record loss
self.optimizer.zero_grad()
loss = self.forward_loss(img_emb, cap_emb)
# compute gradient and do SGD step
loss.backward()
if self.grad_clip > 0:
clip_grad_norm(self.params, self.grad_clip)
self.optimizer.step()
================================================
FILE: train.py
================================================
import pickle
import os
import time
import shutil
import torch
import data
from vocab import Vocabulary # NOQA
from model import VSE
from evaluation import i2t, t2i, AverageMeter, LogCollector, encode_data
import logging
import tensorboard_logger as tb_logger
import argparse
def main():
# Hyper Parameters
parser = argparse.ArgumentParser()
parser.add_argument('--data_path', default='/w/31/faghri/vsepp_data/',
help='path to datasets')
parser.add_argument('--data_name', default='precomp',
help='{coco,f8k,f30k,10crop}_precomp|coco|f8k|f30k')
parser.add_argument('--vocab_path', default='./vocab/',
help='Path to saved vocabulary pickle files.')
parser.add_argument('--margin', default=0.2, type=float,
help='Rank loss margin.')
parser.add_argument('--num_epochs', default=30, type=int,
help='Number of training epochs.')
parser.add_argument('--batch_size', default=128, type=int,
help='Size of a training mini-batch.')
parser.add_argument('--word_dim', default=300, type=int,
help='Dimensionality of the word embedding.')
parser.add_argument('--embed_size', default=1024, type=int,
help='Dimensionality of the joint embedding.')
parser.add_argument('--grad_clip', default=2., type=float,
help='Gradient clipping threshold.')
parser.add_argument('--crop_size', default=224, type=int,
help='Size of an image crop as the CNN input.')
parser.add_argument('--num_layers', default=1, type=int,
help='Number of GRU layers.')
parser.add_argument('--learning_rate', default=.0002, type=float,
help='Initial learning rate.')
parser.add_argument('--lr_update', default=15, type=int,
help='Number of epochs to update the learning rate.')
parser.add_argument('--workers', default=10, type=int,
help='Number of data loader workers.')
parser.add_argument('--log_step', default=10, type=int,
help='Number of steps to print and record the log.')
parser.add_argument('--val_step', default=500, type=int,
help='Number of steps to run validation.')
parser.add_argument('--logger_name', default='runs/runX',
help='Path to save the model and Tensorboard log.')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
parser.add_argument('--max_violation', action='store_true',
help='Use max instead of sum in the rank loss.')
parser.add_argument('--img_dim', default=4096, type=int,
help='Dimensionality of the precomputed image features.')
parser.add_argument('--finetune', action='store_true',
help='Fine-tune the image encoder.')
parser.add_argument('--cnn_type', default='vgg19',
help="""The CNN used for image encoder
(e.g. vgg19, resnet152)""")
parser.add_argument('--use_restval', action='store_true',
help='Use the restval data for training on MSCOCO.')
parser.add_argument('--measure', default='cosine',
help='Similarity measure used (cosine|order)')
parser.add_argument('--use_abs', action='store_true',
help='Take the absolute value of embedding vectors.')
parser.add_argument('--no_imgnorm', action='store_true',
help='Do not normalize the image embeddings.')
parser.add_argument('--reset_train', action='store_true',
help='Ensure the training is always done in '
'train mode (Not recommended).')
opt = parser.parse_args()
print(opt)
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)
tb_logger.configure(opt.logger_name, flush_secs=5)
# Load Vocabulary Wrapper
vocab = pickle.load(open(os.path.join(
opt.vocab_path, '%s_vocab.pkl' % opt.data_name), 'rb'))
opt.vocab_size = len(vocab)
# Load data loaders
train_loader, val_loader = data.get_loaders(
opt.data_name, vocab, opt.crop_size, opt.batch_size, opt.workers, opt)
# Construct the model
model = VSE(opt)
# optionally resume from a checkpoint
if opt.resume:
if os.path.isfile(opt.resume):
print("=> loading checkpoint '{}'".format(opt.resume))
checkpoint = torch.load(opt.resume)
start_epoch = checkpoint['epoch']
best_rsum = checkpoint['best_rsum']
model.load_state_dict(checkpoint['model'])
# Eiters is used to show logs as the continuation of another
# training
model.Eiters = checkpoint['Eiters']
print("=> loaded checkpoint '{}' (epoch {}, best_rsum {})"
.format(opt.resume, start_epoch, best_rsum))
validate(opt, val_loader, model)
else:
print("=> no checkpoint found at '{}'".format(opt.resume))
# Train the Model
best_rsum = 0
for epoch in range(opt.num_epochs):
adjust_learning_rate(opt, model.optimizer, epoch)
# train for one epoch
train(opt, train_loader, model, epoch, val_loader)
# evaluate on validation set
rsum = validate(opt, val_loader, model)
# remember best R@ sum and save checkpoint
is_best = rsum > best_rsum
best_rsum = max(rsum, best_rsum)
save_checkpoint({
'epoch': epoch + 1,
'model': model.state_dict(),
'best_rsum': best_rsum,
'opt': opt,
'Eiters': model.Eiters,
}, is_best, prefix=opt.logger_name + '/')
def train(opt, train_loader, model, epoch, val_loader):
# average meters to record the training statistics
batch_time = AverageMeter()
data_time = AverageMeter()
train_logger = LogCollector()
# switch to train mode
model.train_start()
end = time.time()
for i, train_data in enumerate(train_loader):
if opt.reset_train:
# Always reset to train mode, this is not the default behavior
model.train_start()
# measure data loading time
data_time.update(time.time() - end)
# make sure train logger is used
model.logger = train_logger
# Update the model
model.train_emb(*train_data)
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
# Print log info
if model.Eiters % opt.log_step == 0:
logging.info(
'Epoch: [{0}][{1}/{2}]\t'
'{e_log}\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
.format(
epoch, i, len(train_loader), batch_time=batch_time,
data_time=data_time, e_log=str(model.logger)))
# Record logs in tensorboard
tb_logger.log_value('epoch', epoch, step=model.Eiters)
tb_logger.log_value('step', i, step=model.Eiters)
tb_logger.log_value('batch_time', batch_time.val, step=model.Eiters)
tb_logger.log_value('data_time', data_time.val, step=model.Eiters)
model.logger.tb_log(tb_logger, step=model.Eiters)
# validate at every val_step
if model.Eiters % opt.val_step == 0:
validate(opt, val_loader, model)
def validate(opt, val_loader, model):
# compute the encoding for all the validation images and captions
img_embs, cap_embs = encode_data(
model, val_loader, opt.log_step, logging.info)
# caption retrieval
(r1, r5, r10, medr, meanr) = i2t(img_embs, cap_embs, measure=opt.measure)
logging.info("Image to text: %.1f, %.1f, %.1f, %.1f, %.1f" %
(r1, r5, r10, medr, meanr))
# image retrieval
(r1i, r5i, r10i, medri, meanri) = t2i(
img_embs, cap_embs, measure=opt.measure)
logging.info("Text to image: %.1f, %.1f, %.1f, %.1f, %.1f" %
(r1i, r5i, r10i, medri, meanri))
# sum of recalls to be used for early stopping
currscore = r1 + r5 + r10 + r1i + r5i + r10i
# record metrics in tensorboard
tb_logger.log_value('r1', r1, step=model.Eiters)
tb_logger.log_value('r5', r5, step=model.Eiters)
tb_logger.log_value('r10', r10, step=model.Eiters)
tb_logger.log_value('medr', medr, step=model.Eiters)
tb_logger.log_value('meanr', meanr, step=model.Eiters)
tb_logger.log_value('r1i', r1i, step=model.Eiters)
tb_logger.log_value('r5i', r5i, step=model.Eiters)
tb_logger.log_value('r10i', r10i, step=model.Eiters)
tb_logger.log_value('medri', medri, step=model.Eiters)
tb_logger.log_value('meanri', meanri, step=model.Eiters)
tb_logger.log_value('rsum', currscore, step=model.Eiters)
return currscore
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar', prefix=''):
torch.save(state, prefix + filename)
if is_best:
shutil.copyfile(prefix + filename, prefix + 'model_best.pth.tar')
def adjust_learning_rate(opt, optimizer, epoch):
"""Sets the learning rate to the initial LR
decayed by 10 every 30 epochs"""
lr = opt.learning_rate * (0.1 ** (epoch // opt.lr_update))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
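# Worked example with the defaults (--learning_rate 2e-4 --lr_update 15):
#   epochs 0-14  -> lr = 2e-4 * 0.1**0 = 2e-4
#   epochs 15-29 -> lr = 2e-4 * 0.1**1 = 2e-5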
def accuracy(output, target, topk=(1,)):
"""Computes the precision@k for the specified values of k"""
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0)
res.append(correct_k.mul_(100.0 / batch_size))
return res
if __name__ == '__main__':
main()
================================================
FILE: vocab.py
================================================
# Create a vocabulary wrapper
import nltk
import pickle
from collections import Counter
from pycocotools.coco import COCO
import json
import argparse
import os
annotations = {
'coco_precomp': ['train_caps.txt', 'dev_caps.txt'],
'coco': ['annotations/captions_train2014.json',
'annotations/captions_val2014.json'],
'f8k_precomp': ['train_caps.txt', 'dev_caps.txt'],
'10crop_precomp': ['train_caps.txt', 'dev_caps.txt'],
'f30k_precomp': ['train_caps.txt', 'dev_caps.txt'],
'f8k': ['dataset_flickr8k.json'],
'f30k': ['dataset_flickr30k.json'],
}
class Vocabulary(object):
"""Simple vocabulary wrapper."""
def __init__(self):
self.word2idx = {}
self.idx2word = {}
self.idx = 0
def add_word(self, word):
if word not in self.word2idx:
self.word2idx[word] = self.idx
self.idx2word[self.idx] = word
self.idx += 1
def __call__(self, word):
if word not in self.word2idx:
return self.word2idx['<unk>']
return self.word2idx[word]
def __len__(self):
return len(self.word2idx)
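# Example usage (toy vocabulary):
#   v = Vocabulary()
#   for w in ['<pad>', '<start>', '<end>', '<unk>', 'dog']:
#       v.add_word(w)
#   v('dog') -> 4; v('cat') -> v('<unk>') == 3; len(v) == 5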
def from_coco_json(path):
coco = COCO(path)
ids = coco.anns.keys()
captions = []
for i, idx in enumerate(ids):
captions.append(str(coco.anns[idx]['caption']))
return captions
def from_flickr_json(path):
dataset = json.load(open(path, 'r'))['images']
captions = []
for i, d in enumerate(dataset):
captions += [str(x['raw']) for x in d['sentences']]
return captions
def from_txt(txt):
captions = []
with open(txt, 'rb') as f:
for line in f:
captions.append(line.strip())
return captions
def build_vocab(data_path, data_name, jsons, threshold):
"""Build a simple vocabulary wrapper."""
counter = Counter()
for path in jsons[data_name]:
full_path = os.path.join(os.path.join(data_path, data_name), path)
if data_name == 'coco':
captions = from_coco_json(full_path)
elif data_name == 'f8k' or data_name == 'f30k':
captions = from_flickr_json(full_path)
else:
captions = from_txt(full_path)
for i, caption in enumerate(captions):
tokens = nltk.tokenize.word_tokenize(
caption.lower().decode('utf-8'))
counter.update(tokens)
if i % 1000 == 0:
print("[%d/%d] tokenized the captions." % (i, len(captions)))
# Discard words that occur fewer than `threshold` times.
words = [word for word, cnt in counter.items() if cnt >= threshold]
# Create a vocab wrapper and add some special tokens.
vocab = Vocabulary()
vocab.add_word('<pad>')
vocab.add_word('<start>')
vocab.add_word('<end>')
vocab.add_word('<unk>')
# Add words to the vocabulary.
for i, word in enumerate(words):
vocab.add_word(word)
return vocab
def main(data_path, data_name):
vocab = build_vocab(data_path, data_name, jsons=annotations, threshold=4)
with open('./vocab/%s_vocab.pkl' % data_name, 'wb') as f:
pickle.dump(vocab, f, pickle.HIGHEST_PROTOCOL)
print("Saved vocabulary file to ", './vocab/%s_vocab.pkl' % data_name)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--data_path', default='/w/31/faghri/vsepp_data/')
parser.add_argument('--data_name', default='coco',
help='{coco,f8k,f30k,10crop}_precomp|coco|f8k|f30k')
opt = parser.parse_args()
main(opt.data_path, opt.data_name)
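# Example: build the Flickr30K vocabulary (reads
# $DATA_PATH/f30k/dataset_flickr30k.json, writes ./vocab/f30k_vocab.pkl):
#   python vocab.py --data_path "$DATA_PATH" --data_name f30k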