Repository: uoguelph-mlrg/Cutout
Branch: master
Commit: 287f934ea5fa
Files: 14
Total size: 43.4 KB
Directory structure:
gitextract_gpim7oxl/
├── .gitignore
├── LICENSE.md
├── README.md
├── model/
│ ├── __init__.py
│ ├── resnet.py
│ └── wide_resnet.py
├── shake-shake/
│ ├── README.md
│ ├── cifar10.lua
│ ├── cifar100.lua
│ └── transforms.lua
├── train.py
└── util/
├── __init__.py
├── cutout.py
└── misc.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
*.pyc
checkpoints/
logs/
data/
================================================
FILE: LICENSE.md
================================================
Educational Community License, Version 2.0 (ECL-2.0)
Version 2.0, April 2007
http://www.osedu.org/licenses/
The Educational Community License version 2.0 ("ECL") consists of the Apache 2.0 license, modified to change the scope of the patent grant in section 3 to be specific to the needs of the education communities using this license. The original Apache 2.0 license can be found at: http://www.apache.org/licenses/LICENSE-2.0
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. Any patent license granted hereby with respect to contributions by an individual employed by an institution or organization is limited to patent claims where the individual that is the author of the Work is also the inventor of the patent claims licensed, and where the organization or institution has the right to grant such license under applicable grant and research funding agreements. No other express or implied licenses are granted.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
1. You must give any other recipients of the Work or Derivative Works a copy of this License; and
2. You must cause any modified files to carry prominent notices stating that You changed the files; and
3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
4. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Educational Community License to your work
To apply the Educational Community License to your work, attach
the following boilerplate notice, with the fields enclosed by
brackets "[]" replaced with your own identifying information.
(Don't include the brackets!) The text should be enclosed in the
appropriate comment syntax for the file format. We also recommend
that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the
Educational Community License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may
obtain a copy of the License at
http://www.osedu.org/licenses/ECL-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS"
BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the License for the specific language governing
permissions and limitations under the License.
================================================
FILE: README.md
================================================
# Cutout
This repository contains the code for the paper [Improved Regularization of Convolutional Neural Networks with Cutout](https://arxiv.org/abs/1708.04552).
## Introduction
Cutout is a simple regularization method for convolutional neural networks which consists of masking out random sections of input images during training. This technique simulates occluded examples and encourages the model to take more minor features into consideration when making decisions, rather than relying on the presence of a few major features.
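As a concrete sketch of the masking step, the snippet below zeroes one randomly placed square in a `(C, H, W)` image tensor. It is not code from this repository; it simply mirrors `util/cutout.py` under the assumption of a single hole:
```
import torch
import numpy as np

def cutout_once(img, length=16):
    # Illustrative sketch mirroring util/cutout.py: zero out one
    # length x length square at a random location in a (C, H, W) tensor.
    h, w = img.size(1), img.size(2)
    y, x = np.random.randint(h), np.random.randint(w)  # patch centre
    y1, y2 = np.clip([y - length // 2, y + length // 2], 0, h)
    x1, x2 = np.clip([x - length // 2, x + length // 2], 0, w)
    mask = torch.ones(h, w)
    mask[y1:y2, x1:x2] = 0.  # the square is clipped at image borders
    return img * mask.expand_as(img)  # broadcast the mask over channels

masked = cutout_once(torch.randn(3, 32, 32))
```
Because the patch centre is sampled uniformly over the whole image, squares near the border are clipped and the effective occluded area varies; this matches the behaviour of the `Cutout` transform that `train.py` appends to the training pipeline.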

Bibtex:
```
@article{devries2017cutout,
  title={Improved Regularization of Convolutional Neural Networks with Cutout},
  author={DeVries, Terrance and Taylor, Graham W},
  journal={arXiv preprint arXiv:1708.04552},
  year={2017}
}
```
## Results and Usage
### Dependencies
- [PyTorch v0.4.0](http://pytorch.org/)
- [tqdm](https://pypi.python.org/pypi/tqdm)
### ResNet18
Test error (%, flip/translation augmentation, mean/std normalization, mean of 5 runs)
| **Network** | **CIFAR-10** | **CIFAR-100** |
| ----------- | ------------ | ------------- |
| ResNet18 | 4.72 | 22.46 |
| ResNet18 + cutout | 3.99 | 21.96 |
To train ResNet18 on CIFAR10 with data augmentation and cutout:
`python train.py --dataset cifar10 --model resnet18 --data_augmentation --cutout --length 16`
To train ResNet18 on CIFAR100 with data augmentation and cutout:
`python train.py --dataset cifar100 --model resnet18 --data_augmentation --cutout --length 8`
### WideResNet
WideResNet model implementation from https://github.com/xternalz/WideResNet-pytorch
Test error (%, flip/translation augmentation, mean/std normalization, mean of 5 runs)
| **Network** | **CIFAR-10** | **CIFAR-100** | **SVHN** |
| ----------- | ------------ | ------------- | -------- |
| WideResNet | 3.87 | 18.8 | 1.60 |
| WideResNet + cutout | 3.08 | 18.41 | **1.30** |
To train WideResNet 28-10 on CIFAR10 with data augmentation and cutout:
`python train.py --dataset cifar10 --model wideresnet --data_augmentation --cutout --length 16`
To train WideResNet 28-10 on CIFAR100 with data augmentation and cutout:
`python train.py --dataset cifar100 --model wideresnet --data_augmentation --cutout --length 8`
To train WideResNet 16-8 on SVHN with cutout:
`python train.py --dataset svhn --model wideresnet --learning_rate 0.01 --epochs 160 --cutout --length 20`
### Shake-shake Regularization Network
Shake-shake regularization model implementation from https://github.com/xgastaldi/shake-shake
Test error (%, flip/translation augmentation, mean/std normalization, mean of 3 runs)
| **Network** | **CIFAR-10** | **CIFAR-100** |
| ----------- | ------------ | ------------- |
| Shake-shake | 2.86 | 15.58 |
| Shake-shake + cutout | **2.56** | **15.20** |
See README in [shake-shake](https://github.com/uoguelph-mlrg/Cutout/tree/master/shake-shake) folder for usage instructions.
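After each epoch, train.py saves model weights to `checkpoints/<dataset>_<model>.pt`. As a hedged sketch (not part of the repository; it assumes the CIFAR-10/ResNet18 defaults above), a saved checkpoint can be reloaded for evaluation like so:
```
import torch
from model.resnet import ResNet18

# Hypothetical snippet: assumes a checkpoint produced by
# `python train.py --dataset cifar10 --model resnet18 ...`
cnn = ResNet18(num_classes=10)
state = torch.load('checkpoints/cifar10_resnet18.pt', map_location='cpu')
cnn.load_state_dict(state)
cnn.eval()  # BatchNorm uses running statistics at inference time
```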
================================================
FILE: model/__init__.py
================================================
================================================
FILE: model/resnet.py
================================================
'''ResNet18/34/50/101/152 in PyTorch.'''
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(in_planes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, self.expansion*planes, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 64
self.conv1 = conv3x3(3,64)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(512*block.expansion, num_classes)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def ResNet18(num_classes=10):
return ResNet(BasicBlock, [2,2,2,2], num_classes)
def ResNet34(num_classes=10):
return ResNet(BasicBlock, [3,4,6,3], num_classes)
def ResNet50(num_classes=10):
return ResNet(Bottleneck, [3,4,6,3], num_classes)
def ResNet101(num_classes=10):
return ResNet(Bottleneck, [3,4,23,3], num_classes)
def ResNet152(num_classes=10):
return ResNet(Bottleneck, [3,8,36,3], num_classes)
def test_resnet():
net = ResNet50()
y = net(Variable(torch.randn(1,3,32,32)))
print(y.size())
# test_resnet()
================================================
FILE: model/wide_resnet.py
================================================
# From https://github.com/xternalz/WideResNet-pytorch
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicBlock(nn.Module):
def __init__(self, in_planes, out_planes, stride, dropRate=0.0):
super(BasicBlock, self).__init__()
self.bn1 = nn.BatchNorm2d(in_planes)
self.relu1 = nn.ReLU(inplace=True)
self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_planes)
self.relu2 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_planes, out_planes, kernel_size=3, stride=1,
padding=1, bias=False)
self.droprate = dropRate
self.equalInOut = (in_planes == out_planes)
self.convShortcut = (not self.equalInOut) and nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,
padding=0, bias=False) or None
def forward(self, x):
if not self.equalInOut:
x = self.relu1(self.bn1(x))
else:
out = self.relu1(self.bn1(x))
out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else x)))
if self.droprate > 0:
out = F.dropout(out, p=self.droprate, training=self.training)
out = self.conv2(out)
return torch.add(x if self.equalInOut else self.convShortcut(x), out)
class NetworkBlock(nn.Module):
def __init__(self, nb_layers, in_planes, out_planes, block, stride, dropRate=0.0):
super(NetworkBlock, self).__init__()
self.layer = self._make_layer(block, in_planes, out_planes, nb_layers, stride, dropRate)
def _make_layer(self, block, in_planes, out_planes, nb_layers, stride, dropRate):
layers = []
for i in range(nb_layers):
layers.append(block(i == 0 and in_planes or out_planes, out_planes, i == 0 and stride or 1, dropRate))
return nn.Sequential(*layers)
def forward(self, x):
return self.layer(x)
class WideResNet(nn.Module):
def __init__(self, depth, num_classes, widen_factor=1, dropRate=0.0):
super(WideResNet, self).__init__()
nChannels = [16, 16*widen_factor, 32*widen_factor, 64*widen_factor]
assert((depth - 4) % 6 == 0)
        n = (depth - 4) // 6  # integer division: n must be an int for range() under Python 3
block = BasicBlock
# 1st conv before any network block
self.conv1 = nn.Conv2d(3, nChannels[0], kernel_size=3, stride=1,
padding=1, bias=False)
# 1st block
self.block1 = NetworkBlock(n, nChannels[0], nChannels[1], block, 1, dropRate)
# 2nd block
self.block2 = NetworkBlock(n, nChannels[1], nChannels[2], block, 2, dropRate)
# 3rd block
self.block3 = NetworkBlock(n, nChannels[2], nChannels[3], block, 2, dropRate)
# global average pooling and classifier
self.bn1 = nn.BatchNorm2d(nChannels[3])
self.relu = nn.ReLU(inplace=True)
self.fc = nn.Linear(nChannels[3], num_classes)
self.nChannels = nChannels[3]
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.bias.data.zero_()
def forward(self, x):
out = self.conv1(x)
out = self.block1(out)
out = self.block2(out)
out = self.block3(out)
out = self.relu(self.bn1(out))
out = F.avg_pool2d(out, 8)
out = out.view(-1, self.nChannels)
out = self.fc(out)
return out
================================================
FILE: shake-shake/README.md
================================================
# Cutout in Shake-Shake Regularization Networks
To add cutout to Xavier Gastaldi's shake-shake regularization code, we simply add a cutout function to transforms.lua (lines 16 to 29) and append it to the CIFAR-10 and CIFAR-100 pre-processing pipelines (lines 49 and 60 in cifar10.lua and cifar100.lua, respectively).
## Usage
1. Follow Usage instruction 1 from https://github.com/xgastaldi/shake-shake to install fb.resnet.torch and related libraries.
2. Once installed, navigate to your local fb.resnet.torch/datasets folder.
3. Copy the files from this folder (shake-shake) and paste them into the datasets folder. This should overwrite cifar10.lua, cifar100.lua, and transforms.lua.
4. Continue following the remaining instructions from https://github.com/xgastaldi/shake-shake. CIFAR-10 should now train using cutout with a length of 16, and CIFAR-100 with a length of 8.
================================================
FILE: shake-shake/cifar10.lua
================================================
--
-- Copyright (c) 2016, Facebook, Inc.
-- All rights reserved.
--
-- This source code is licensed under the BSD-style license found in the
-- LICENSE file in the root directory of this source tree. An additional grant
-- of patent rights can be found in the PATENTS file in the same directory.
--
-- CIFAR-10 dataset loader
--
local t = require 'datasets/transforms'
local M = {}
local CifarDataset = torch.class('resnet.CifarDataset', M)
function CifarDataset:__init(imageInfo, opt, split)
assert(imageInfo[split], split)
self.imageInfo = imageInfo[split]
self.split = split
end
function CifarDataset:get(i)
local image = self.imageInfo.data[i]:float()
local label = self.imageInfo.labels[i]
return {
input = image,
target = label,
}
end
function CifarDataset:size()
return self.imageInfo.data:size(1)
end
-- Computed from entire CIFAR-10 training set
local meanstd = {
mean = {125.3, 123.0, 113.9},
std = {63.0, 62.1, 66.7},
}
function CifarDataset:preprocess()
if self.split == 'train' then
return t.Compose{
t.ColorNormalize(meanstd),
t.HorizontalFlip(0.5),
t.RandomCrop(32, 4),
t.CutOut(8),
}
elseif self.split == 'val' then
return t.ColorNormalize(meanstd)
else
error('invalid split: ' .. self.split)
end
end
return M.CifarDataset
================================================
FILE: shake-shake/cifar100.lua
================================================
--
-- Copyright (c) 2016, Facebook, Inc.
-- All rights reserved.
--
-- This source code is licensed under the BSD-style license found in the
-- LICENSE file in the root directory of this source tree. An additional grant
-- of patent rights can be found in the PATENTS file in the same directory.
--
------------
-- This file is downloading and transforming CIFAR-100.
-- It is based on cifar10.lua
-- Ludovic Trottier
------------
local t = require 'datasets/transforms'
local M = {}
local CifarDataset = torch.class('resnet.CifarDataset', M)
function CifarDataset:__init(imageInfo, opt, split)
assert(imageInfo[split], split)
self.imageInfo = imageInfo[split]
self.split = split
end
function CifarDataset:get(i)
local image = self.imageInfo.data[i]:float()
local label = self.imageInfo.labels[i]
return {
input = image,
target = label,
}
end
function CifarDataset:size()
return self.imageInfo.data:size(1)
end
-- Computed from entire CIFAR-100 training set with this code:
-- dataset = torch.load('cifar100.t7')
-- tt = dataset.train.data:double();
-- tt = tt:transpose(2,4);
-- tt = tt:reshape(50000*32*32, 3);
-- tt:mean(1)
-- tt:std(1)
local meanstd = {
mean = {129.3, 124.1, 112.4},
std = {68.2, 65.4, 70.4},
}
function CifarDataset:preprocess()
if self.split == 'train' then
return t.Compose{
t.ColorNormalize(meanstd),
t.HorizontalFlip(0.5),
t.RandomCrop(32, 4),
t.CutOut(4),
}
elseif self.split == 'val' then
return t.ColorNormalize(meanstd)
else
error('invalid split: ' .. self.split)
end
end
return M.CifarDataset
================================================
FILE: shake-shake/transforms.lua
================================================
--
-- Copyright (c) 2016, Facebook, Inc.
-- All rights reserved.
--
-- This source code is licensed under the BSD-style license found in the
-- LICENSE file in the root directory of this source tree. An additional grant
-- of patent rights can be found in the PATENTS file in the same directory.
--
-- Image transforms for data augmentation and input normalization
--
require 'image'
local M = {}
function M.CutOut(half_length)
return function(input)
local w, h = input:size(3), input:size(2)
local x, y = torch.random(1, w), torch.random(1, h)
local y1 = math.min(math.max(y - half_length, 1), h)
local y2 = math.min(math.max(y + half_length, 1), h)
local x1 = math.min(math.max(x - half_length, 1), w)
local x2 = math.min(math.max(x + half_length, 1), w)
input[{ {}, {y1, y2}, {x1, x2} }] = 0.
return input
end
end
function M.Compose(transforms)
return function(input)
for _, transform in ipairs(transforms) do
input = transform(input)
end
return input
end
end
function M.ColorNormalize(meanstd)
return function(img)
img = img:clone()
for i=1,3 do
img[i]:add(-meanstd.mean[i])
img[i]:div(meanstd.std[i])
end
return img
end
end
-- Scales the smaller edge to size
function M.Scale(size, interpolation)
interpolation = interpolation or 'bicubic'
return function(input)
local w, h = input:size(3), input:size(2)
if (w <= h and w == size) or (h <= w and h == size) then
return input
end
if w < h then
return image.scale(input, size, h/w * size, interpolation)
else
return image.scale(input, w/h * size, size, interpolation)
end
end
end
-- Crop to centered rectangle
function M.CenterCrop(size)
return function(input)
local w1 = math.ceil((input:size(3) - size)/2)
local h1 = math.ceil((input:size(2) - size)/2)
return image.crop(input, w1, h1, w1 + size, h1 + size) -- center patch
end
end
-- Random crop from larger image with optional zero padding
function M.RandomCrop(size, padding)
padding = padding or 0
return function(input)
if padding > 0 then
local temp = input.new(3, input:size(2) + 2*padding, input:size(3) + 2*padding)
temp:zero()
:narrow(2, padding+1, input:size(2))
:narrow(3, padding+1, input:size(3))
:copy(input)
input = temp
end
local w, h = input:size(3), input:size(2)
if w == size and h == size then
return input
end
local x1, y1 = torch.random(0, w - size), torch.random(0, h - size)
local out = image.crop(input, x1, y1, x1 + size, y1 + size)
assert(out:size(2) == size and out:size(3) == size, 'wrong crop size')
return out
end
end
-- Four corner patches and center crop from image and its horizontal reflection
function M.TenCrop(size)
local centerCrop = M.CenterCrop(size)
return function(input)
local w, h = input:size(3), input:size(2)
local output = {}
for _, img in ipairs{input, image.hflip(input)} do
table.insert(output, centerCrop(img))
table.insert(output, image.crop(img, 0, 0, size, size))
table.insert(output, image.crop(img, w-size, 0, w, size))
table.insert(output, image.crop(img, 0, h-size, size, h))
table.insert(output, image.crop(img, w-size, h-size, w, h))
end
-- View as mini-batch
for i, img in ipairs(output) do
output[i] = img:view(1, img:size(1), img:size(2), img:size(3))
end
return input.cat(output, 1)
end
end
-- Resized with shorter side randomly sampled from [minSize, maxSize] (ResNet-style)
function M.RandomScale(minSize, maxSize)
return function(input)
local w, h = input:size(3), input:size(2)
local targetSz = torch.random(minSize, maxSize)
local targetW, targetH = targetSz, targetSz
if w < h then
targetH = torch.round(h / w * targetW)
else
targetW = torch.round(w / h * targetH)
end
return image.scale(input, targetW, targetH, 'bicubic')
end
end
-- Random crop with size 8%-100% and aspect ratio 3/4 - 4/3 (Inception-style)
function M.RandomSizedCrop(size)
local scale = M.Scale(size)
local crop = M.CenterCrop(size)
return function(input)
local attempt = 0
repeat
local area = input:size(2) * input:size(3)
local targetArea = torch.uniform(0.08, 1.0) * area
local aspectRatio = torch.uniform(3/4, 4/3)
local w = torch.round(math.sqrt(targetArea * aspectRatio))
local h = torch.round(math.sqrt(targetArea / aspectRatio))
if torch.uniform() < 0.5 then
w, h = h, w
end
if h <= input:size(2) and w <= input:size(3) then
local y1 = torch.random(0, input:size(2) - h)
local x1 = torch.random(0, input:size(3) - w)
local out = image.crop(input, x1, y1, x1 + w, y1 + h)
assert(out:size(2) == h and out:size(3) == w, 'wrong crop size')
return image.scale(out, size, size, 'bicubic')
end
attempt = attempt + 1
until attempt >= 10
-- fallback
return crop(scale(input))
end
end
function M.HorizontalFlip(prob)
return function(input)
if torch.uniform() < prob then
input = image.hflip(input)
end
return input
end
end
function M.Rotation(deg)
return function(input)
if deg ~= 0 then
input = image.rotate(input, (torch.uniform() - 0.5) * deg * math.pi / 180, 'bilinear')
end
return input
end
end
-- Lighting noise (AlexNet-style PCA-based noise)
function M.Lighting(alphastd, eigval, eigvec)
return function(input)
if alphastd == 0 then
return input
end
local alpha = torch.Tensor(3):normal(0, alphastd)
local rgb = eigvec:clone()
:cmul(alpha:view(1, 3):expand(3, 3))
:cmul(eigval:view(1, 3):expand(3, 3))
:sum(2)
:squeeze()
input = input:clone()
for i=1,3 do
input[i]:add(rgb[i])
end
return input
end
end
local function blend(img1, img2, alpha)
return img1:mul(alpha):add(1 - alpha, img2)
end
local function grayscale(dst, img)
dst:resizeAs(img)
dst[1]:zero()
dst[1]:add(0.299, img[1]):add(0.587, img[2]):add(0.114, img[3])
dst[2]:copy(dst[1])
dst[3]:copy(dst[1])
return dst
end
function M.Saturation(var)
local gs
return function(input)
gs = gs or input.new()
grayscale(gs, input)
local alpha = 1.0 + torch.uniform(-var, var)
blend(input, gs, alpha)
return input
end
end
function M.Brightness(var)
local gs
return function(input)
gs = gs or input.new()
gs:resizeAs(input):zero()
local alpha = 1.0 + torch.uniform(-var, var)
blend(input, gs, alpha)
return input
end
end
function M.Contrast(var)
local gs
return function(input)
gs = gs or input.new()
grayscale(gs, input)
gs:fill(gs[1]:mean())
local alpha = 1.0 + torch.uniform(-var, var)
blend(input, gs, alpha)
return input
end
end
function M.RandomOrder(ts)
return function(input)
local img = input.img or input
local order = torch.randperm(#ts)
for i=1,#ts do
img = ts[order[i]](img)
end
return img
end
end
function M.ColorJitter(opt)
local brightness = opt.brightness or 0
local contrast = opt.contrast or 0
local saturation = opt.saturation or 0
local ts = {}
if brightness ~= 0 then
table.insert(ts, M.Brightness(brightness))
end
if contrast ~= 0 then
table.insert(ts, M.Contrast(contrast))
end
if saturation ~= 0 then
table.insert(ts, M.Saturation(saturation))
end
if #ts == 0 then
return function(input) return input end
end
return M.RandomOrder(ts)
end
return M
================================================
FILE: train.py
================================================
# run train.py --dataset cifar10 --model resnet18 --data_augmentation --cutout --length 16
# run train.py --dataset cifar100 --model resnet18 --data_augmentation --cutout --length 8
# run train.py --dataset svhn --model wideresnet --learning_rate 0.01 --epochs 160 --cutout --length 20
import pdb
import argparse
import numpy as np
from tqdm import tqdm
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.backends.cudnn as cudnn
from torch.optim.lr_scheduler import MultiStepLR
from torchvision.utils import make_grid
from torchvision import datasets, transforms
from util.misc import CSVLogger
from util.cutout import Cutout
from model.resnet import ResNet18
from model.wide_resnet import WideResNet
model_options = ['resnet18', 'wideresnet']
dataset_options = ['cifar10', 'cifar100', 'svhn']
parser = argparse.ArgumentParser(description='CNN')
parser.add_argument('--dataset', '-d', default='cifar10',
choices=dataset_options)
parser.add_argument('--model', '-a', default='resnet18',
choices=model_options)
parser.add_argument('--batch_size', type=int, default=128,
help='input batch size for training (default: 128)')
parser.add_argument('--epochs', type=int, default=200,
                    help='number of epochs to train (default: 200)')
parser.add_argument('--learning_rate', type=float, default=0.1,
help='learning rate')
parser.add_argument('--data_augmentation', action='store_true', default=False,
help='augment data by flipping and cropping')
parser.add_argument('--cutout', action='store_true', default=False,
help='apply cutout')
parser.add_argument('--n_holes', type=int, default=1,
help='number of holes to cut out from image')
parser.add_argument('--length', type=int, default=16,
help='length of the holes')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=0,
                    help='random seed (default: 0)')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
cudnn.benchmark = True  # Should make training go faster for large models
torch.manual_seed(args.seed)
if args.cuda:
torch.cuda.manual_seed(args.seed)
test_id = args.dataset + '_' + args.model
print(args)
# Image Preprocessing
if args.dataset == 'svhn':
    normalize = transforms.Normalize(mean=[x / 255.0 for x in [109.9, 109.7, 113.8]],
                                     std=[x / 255.0 for x in [50.1, 50.6, 50.8]])
else:
normalize = transforms.Normalize(mean=[x / 255.0 for x in [125.3, 123.0, 113.9]],
std=[x / 255.0 for x in [63.0, 62.1, 66.7]])
train_transform = transforms.Compose([])
if args.data_augmentation:
train_transform.transforms.append(transforms.RandomCrop(32, padding=4))
train_transform.transforms.append(transforms.RandomHorizontalFlip())
train_transform.transforms.append(transforms.ToTensor())
train_transform.transforms.append(normalize)
if args.cutout:
train_transform.transforms.append(Cutout(n_holes=args.n_holes, length=args.length))
test_transform = transforms.Compose([
transforms.ToTensor(),
normalize])
if args.dataset == 'cifar10':
num_classes = 10
train_dataset = datasets.CIFAR10(root='data/',
train=True,
transform=train_transform,
download=True)
test_dataset = datasets.CIFAR10(root='data/',
train=False,
transform=test_transform,
download=True)
elif args.dataset == 'cifar100':
num_classes = 100
train_dataset = datasets.CIFAR100(root='data/',
train=True,
transform=train_transform,
download=True)
test_dataset = datasets.CIFAR100(root='data/',
train=False,
transform=test_transform,
download=True)
elif args.dataset == 'svhn':
num_classes = 10
train_dataset = datasets.SVHN(root='data/',
split='train',
transform=train_transform,
download=True)
extra_dataset = datasets.SVHN(root='data/',
split='extra',
transform=train_transform,
download=True)
# Combine both training splits (https://arxiv.org/pdf/1605.07146.pdf)
data = np.concatenate([train_dataset.data, extra_dataset.data], axis=0)
labels = np.concatenate([train_dataset.labels, extra_dataset.labels], axis=0)
train_dataset.data = data
train_dataset.labels = labels
test_dataset = datasets.SVHN(root='data/',
split='test',
transform=test_transform,
download=True)
# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=args.batch_size,
shuffle=True,
pin_memory=True,
num_workers=2)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=args.batch_size,
shuffle=False,
pin_memory=True,
num_workers=2)
if args.model == 'resnet18':
cnn = ResNet18(num_classes=num_classes)
elif args.model == 'wideresnet':
if args.dataset == 'svhn':
cnn = WideResNet(depth=16, num_classes=num_classes, widen_factor=8,
dropRate=0.4)
else:
cnn = WideResNet(depth=28, num_classes=num_classes, widen_factor=10,
dropRate=0.3)
cnn = cnn.cuda()
criterion = nn.CrossEntropyLoss().cuda()
cnn_optimizer = torch.optim.SGD(cnn.parameters(), lr=args.learning_rate,
momentum=0.9, nesterov=True, weight_decay=5e-4)
if args.dataset == 'svhn':
scheduler = MultiStepLR(cnn_optimizer, milestones=[80, 120], gamma=0.1)
else:
scheduler = MultiStepLR(cnn_optimizer, milestones=[60, 120, 160], gamma=0.2)
filename = 'logs/' + test_id + '.csv'
csv_logger = CSVLogger(args=args, fieldnames=['epoch', 'train_acc', 'test_acc'], filename=filename)
def test(loader):
cnn.eval() # Change model to 'eval' mode (BN uses moving mean/var).
correct = 0.
total = 0.
for images, labels in loader:
images = images.cuda()
labels = labels.cuda()
with torch.no_grad():
pred = cnn(images)
pred = torch.max(pred.data, 1)[1]
total += labels.size(0)
correct += (pred == labels).sum().item()
val_acc = correct / total
cnn.train()
return val_acc
for epoch in range(args.epochs):
xentropy_loss_avg = 0.
correct = 0.
total = 0.
progress_bar = tqdm(train_loader)
for i, (images, labels) in enumerate(progress_bar):
progress_bar.set_description('Epoch ' + str(epoch))
images = images.cuda()
labels = labels.cuda()
cnn.zero_grad()
pred = cnn(images)
xentropy_loss = criterion(pred, labels)
xentropy_loss.backward()
cnn_optimizer.step()
xentropy_loss_avg += xentropy_loss.item()
# Calculate running average of accuracy
pred = torch.max(pred.data, 1)[1]
total += labels.size(0)
correct += (pred == labels.data).sum().item()
accuracy = correct / total
progress_bar.set_postfix(
xentropy='%.3f' % (xentropy_loss_avg / (i + 1)),
acc='%.3f' % accuracy)
test_acc = test(test_loader)
tqdm.write('test_acc: %.3f' % (test_acc))
scheduler.step(epoch) # Use this line for PyTorch <1.4
# scheduler.step() # Use this line for PyTorch >=1.4
row = {'epoch': str(epoch), 'train_acc': str(accuracy), 'test_acc': str(test_acc)}
csv_logger.writerow(row)
torch.save(cnn.state_dict(), 'checkpoints/' + test_id + '.pt')
csv_logger.close()
================================================
FILE: util/__init__.py
================================================
================================================
FILE: util/cutout.py
================================================
import torch
import numpy as np
class Cutout(object):
"""Randomly mask out one or more patches from an image.
Args:
n_holes (int): Number of patches to cut out of each image.
length (int): The length (in pixels) of each square patch.
"""
def __init__(self, n_holes, length):
self.n_holes = n_holes
self.length = length
def __call__(self, img):
"""
Args:
img (Tensor): Tensor image of size (C, H, W).
Returns:
Tensor: Image with n_holes of dimension length x length cut out of it.
"""
h = img.size(1)
w = img.size(2)
mask = np.ones((h, w), np.float32)
        for n in range(self.n_holes):
            # Sample the patch centre uniformly over the image; squares that
            # extend past the border are clipped, so edge patches are smaller.
            y = np.random.randint(h)
            x = np.random.randint(w)
            y1 = np.clip(y - self.length // 2, 0, h)
            y2 = np.clip(y + self.length // 2, 0, h)
            x1 = np.clip(x - self.length // 2, 0, w)
            x2 = np.clip(x + self.length // 2, 0, w)
mask[y1: y2, x1: x2] = 0.
mask = torch.from_numpy(mask)
mask = mask.expand_as(img)
img = img * mask
return img
================================================
FILE: util/misc.py
================================================
import csv
class CSVLogger():
def __init__(self, args, fieldnames, filename='log.csv'):
self.filename = filename
self.csv_file = open(filename, 'w')
# Write model configuration at top of csv
writer = csv.writer(self.csv_file)
for arg in vars(args):
writer.writerow([arg, getattr(args, arg)])
writer.writerow([''])
self.writer = csv.DictWriter(self.csv_file, fieldnames=fieldnames)
self.writer.writeheader()
self.csv_file.flush()
def writerow(self, row):
self.writer.writerow(row)
self.csv_file.flush()
def close(self):
self.csv_file.close()