Repository: xiaochus/MobileNetV3
Branch: master
Commit: 2294bb7df353
Files: 10
Total size: 24.9 KB
Directory structure:
gitextract_f2xk90eu/
├── .gitignore
├── LICENSE
├── README.md
├── config/
│ └── config.json
├── model/
│ ├── LR_ASPP.py
│ ├── layers/
│ │ └── bilinear_upsampling.py
│ ├── mobilenet_base.py
│ ├── mobilenet_v3_large.py
│ └── mobilenet_v3_small.py
└── train_cls.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2019 Larry
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# MobileNetV3
A Keras implementation of MobileNetV3 and Lite R-ASPP Semantic Segmentation (Under Development).
According to the paper: [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244?context=cs)
## Requirement
- Python 3.6
- Tensorflow-gpu 1.10.0
- Keras 2.2.4
## Train the model
The `config/config.json` file provides the configuration for training.
### Train the classification model
**The dataset folder structure is as follows:**
```
data/
  train/
    class 0/
      image.jpg
      ...
    class 1/
      ...
    class n/
  validation/
    class 0/
    class 1/
    ...
    class n/
```
**Run command below to train the model:**
```
python train_cls.py
```
## Acknowledgement
Thanks to [@fzyzcjy](https://github.com/fzyzcjy) for helping with this project.
## Reference
```
@article{MobileNetv3,
  title={Searching for MobileNetV3},
  author={Andrew Howard and Mark Sandler and Grace Chu and Liang-Chieh Chen and Bo Chen and Mingxing Tan and Weijun Wang and Yukun Zhu and Ruoming Pang and Vijay Vasudevan and Quoc V. Le and Hartwig Adam},
  journal={arXiv preprint arXiv:1905.02244},
  year={2019}
}
```
## Copyright
See [LICENSE](LICENSE) for details.
================================================
FILE: config/config.json
================================================
{
"model": "small",
"height": 224,
"width": 224,
"class_number": 100,
"learning_rate": 0.001,
"batch": 2,
"epochs": 20,
"train_dir": "data/train",
"eval_dir": "data/eval",
"save_dir": "save",
"weights": ""
}
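These values feed directly into `train_cls.py`. A minimal sketch (stdlib only, with the JSON inlined so the snippet is self-contained) of how the training script parses them:

```python
import json

# Inline copy of config/config.json; train_cls.py loads the same structure
# with json.load(open('config/config.json')).
raw = """
{
    "model": "small",
    "height": 224,
    "width": 224,
    "class_number": 100,
    "learning_rate": 0.001,
    "batch": 2,
    "epochs": 20,
    "train_dir": "data/train",
    "eval_dir": "data/eval",
    "save_dir": "save",
    "weights": ""
}
"""

cfg = json.loads(raw)
# train_cls.py derives its working values like this:
shape = (int(cfg['height']), int(cfg['width']), 3)  # model input shape
n_class = int(cfg['class_number'])
batch = int(cfg['batch'])
```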
================================================
FILE: model/LR_ASPP.py
================================================
"""Lite R-ASPP Semantic Segmentation based on MobileNetV3.
"""
from keras.models import Model
from keras.layers import Conv2D, AveragePooling2D, BatchNormalization, Activation, Multiply, Add
from keras.utils.vis_utils import plot_model
from model.layers.bilinear_upsampling import BilinearUpSampling2D
class LiteRASSP:
def __init__(self, input_shape, n_class=19, alpha=1.0, weights=None, backbone='small'):
"""Init.
# Arguments
input_shape: An integer or tuple/list of 3 integers, shape
of input tensor (should be 1024 × 2048 or 512 × 1024 according
to the paper).
n_class: Integer, number of classes.
alpha: Float, width multiplier for MobileNetV3.
weights: String, weights for mobilenetv3.
backbone: String, name of backbone (must be small or large).
"""
self.shape = input_shape
self.n_class = n_class
self.alpha = alpha
self.weights = weights
self.backbone = backbone
def _extract_backbone(self):
"""extract feature map from backbone.
"""
if self.backbone == 'large':
from model.mobilenet_v3_large import MobileNetV3_Large
model = MobileNetV3_Large(self.shape, self.n_class, alpha=self.alpha, include_top=False).build()
layer_name8 = 'batch_normalization_13'
layer_name16 = 'add_5'
elif self.backbone == 'small':
from model.mobilenet_v3_small import MobileNetV3_Small
model = MobileNetV3_Small(self.shape, self.n_class, alpha=self.alpha, include_top=False).build()
layer_name8 = 'batch_normalization_7'
layer_name16 = 'add_2'
else:
raise ValueError('Invalid backbone: {}'.format(self.backbone))
if self.weights is not None:
model.load_weights(self.weights, by_name=True)
inputs = model.input
# 1/8 feature map.
out_feature8 = model.get_layer(layer_name8).output
# 1/16 feature map.
out_feature16 = model.get_layer(layer_name16).output
return inputs, out_feature8, out_feature16
def build(self, plot=False):
"""build Lite R-ASPP.
# Arguments
plot: Boolean, whether to plot the model.
# Returns
model: Model, model.
"""
inputs, out_feature8, out_feature16 = self._extract_backbone()
# branch1
x1 = Conv2D(128, (1, 1))(out_feature16)
x1 = BatchNormalization()(x1)
x1 = Activation('relu')(x1)
# branch2
s = x1.shape
x2 = AveragePooling2D(pool_size=(49, 49), strides=(16, 20))(out_feature16)
x2 = Conv2D(128, (1, 1))(x2)
x2 = Activation('sigmoid')(x2)
x2 = BilinearUpSampling2D(target_size=(int(s[1]), int(s[2])))(x2)
# branch3
x3 = Conv2D(self.n_class, (1, 1))(out_feature8)
# merge1
x = Multiply()([x1, x2])
x = BilinearUpSampling2D(size=(2, 2))(x)
x = Conv2D(self.n_class, (1, 1))(x)
# merge2
x = Add()([x, x3])
# out
x = Activation('softmax')(x)
model = Model(inputs=inputs, outputs=x)
if plot:
plot_model(model, to_file='images/LR_ASPP.png', show_shapes=True)
return model
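The head taps two resolutions: branch1 and branch2 work on the 1/16 feature map, branch3 on the 1/8 map, and the `BilinearUpSampling2D(size=(2, 2))` in merge1 aligns them. A pure-Python sketch of the spatial arithmetic, assuming the 1024x2048 input size the docstring mentions:

```python
# Spatial sizes of the feature maps LiteRASSP taps (sketch, pure Python).
def feature_size(hw, stride):
    h, w = hw
    return (h // stride, w // stride)

inp = (1024, 2048)               # input size assumed from the docstring
f8 = feature_size(inp, 8)        # out_feature8, consumed by branch3
f16 = feature_size(inp, 16)      # out_feature16, consumed by branch1/branch2

# Multiply() fuses branch1 and branch2 at 1/16 resolution; the 2x bilinear
# upsample brings the product back to 1/8 so Add() can merge it with branch3.
upsampled = (f16[0] * 2, f16[1] * 2)
assert upsampled == f8
```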
================================================
FILE: model/layers/bilinear_upsampling.py
================================================
"""Keras BilinearUpSampling2D Layer.
"""
import numpy as np
import tensorflow as tf
import keras.backend as K
from keras.engine.topology import Layer, InputSpec
def resize_images_bilinear(X, height_factor=1, width_factor=1, target_height=None, target_width=None, data_format='default'):
'''Resizes the images contained in a 4D tensor of shape
- [batch, channels, height, width] (for 'channels_first' data_format)
- [batch, height, width, channels] (for 'channels_last' data_format)
by a factor of (height_factor, width_factor). Both factors should be
positive integers.
'''
if data_format == 'default':
data_format = K.image_data_format()
if data_format == 'channels_first':
original_shape = K.int_shape(X)
if target_height and target_width:
new_shape = tf.constant(np.array((target_height, target_width)).astype('int32'))
else:
new_shape = tf.shape(X)[2:]
new_shape *= tf.constant(np.array([height_factor, width_factor]).astype('int32'))
X = K.permute_dimensions(X, [0, 2, 3, 1])
X = tf.image.resize_bilinear(X, new_shape)
X = K.permute_dimensions(X, [0, 3, 1, 2])
if target_height and target_width:
X.set_shape((None, None, target_height, target_width))
else:
X.set_shape((None, None, original_shape[2] * height_factor, original_shape[3] * width_factor))
return X
elif data_format == 'channels_last':
original_shape = K.int_shape(X)
if target_height and target_width:
new_shape = tf.constant(np.array((target_height, target_width)).astype('int32'))
else:
new_shape = tf.shape(X)[1:3]
new_shape *= tf.constant(np.array([height_factor, width_factor]).astype('int32'))
X = tf.image.resize_bilinear(X, new_shape)
if target_height and target_width:
X.set_shape((None, target_height, target_width, None))
else:
X.set_shape((None, original_shape[1] * height_factor, original_shape[2] * width_factor, None))
return X
else:
raise Exception('Invalid data_format: ' + data_format)
class BilinearUpSampling2D(Layer):
def __init__(self, size=(1, 1), target_size=None, data_format='default', **kwargs):
"""Init.
size: factor to original shape (ie. original-> size * original).
target size: target size (ie. original->target).
"""
if data_format == 'default':
data_format = K.image_data_format()
self.size = tuple(size)
if target_size is not None:
self.target_size = tuple(target_size)
else:
self.target_size = None
assert data_format in {'channels_last', 'channels_first'}, 'data_format must be channels_last or channels_first'
self.data_format = data_format
self.input_spec = [InputSpec(ndim=4)]
super(BilinearUpSampling2D, self).__init__(**kwargs)
def compute_output_shape(self, input_shape):
if self.data_format == 'channels_first':
width = self.size[0] * input_shape[2] if input_shape[2] is not None else None
height = self.size[1] * input_shape[3] if input_shape[3] is not None else None
if self.target_size is not None:
width = self.target_size[0]
height = self.target_size[1]
return (input_shape[0],
input_shape[1],
width,
height)
elif self.data_format == 'channels_last':
width = self.size[0] * input_shape[1] if input_shape[1] is not None else None
height = self.size[1] * input_shape[2] if input_shape[2] is not None else None
if self.target_size is not None:
width = self.target_size[0]
height = self.target_size[1]
return (input_shape[0],
width,
height,
input_shape[3])
else:
raise Exception('Invalid data_format: ' + self.data_format)
def call(self, x, mask=None):
if self.target_size is not None:
return resize_images_bilinear(x, target_height=self.target_size[0], target_width=self.target_size[1], data_format=self.data_format)
else:
return resize_images_bilinear(x, height_factor=self.size[0], width_factor=self.size[1], data_format=self.data_format)
def get_config(self):
config = {'size': self.size, 'target_size': self.target_size}
base_config = super(BilinearUpSampling2D, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
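The shape bookkeeping in `compute_output_shape` can be sketched in pure Python (channels_last case; `target_size` wins over `size`, and unknown dimensions propagate as `None` rather than crashing in `int()`):

```python
# Pure-Python mirror of BilinearUpSampling2D.compute_output_shape for the
# channels_last case, covering both the `size` and `target_size` modes.
def output_shape(input_shape, size=(1, 1), target_size=None):
    batch, height, width, channels = input_shape
    if target_size is not None:
        # Explicit target size overrides the multiplicative factors.
        return (batch, target_size[0], target_size[1], channels)
    new_h = size[0] * height if height is not None else None
    new_w = size[1] * width if width is not None else None
    return (batch, new_h, new_w, channels)
```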
================================================
FILE: model/mobilenet_base.py
================================================
"""MobileNet v3 models for Keras.
# Reference
[Searching for MobileNetV3](https://arxiv.org/abs/1905.02244?context=cs)
"""
from keras.layers import Conv2D, DepthwiseConv2D, Dense, GlobalAveragePooling2D
from keras.layers import Activation, BatchNormalization, Add, Multiply, Reshape
from keras import backend as K
class MobileNetBase:
def __init__(self, shape, n_class, alpha=1.0):
"""Init
# Arguments
shape: An integer or tuple/list of 3 integers, shape
of input tensor.
n_class: Integer, number of classes.
alpha: Float, width multiplier.
"""
self.shape = shape
self.n_class = n_class
self.alpha = alpha
def _relu6(self, x):
"""Relu 6
"""
return K.relu(x, max_value=6.0)
def _hard_swish(self, x):
"""Hard swish
"""
return x * K.relu(x + 3.0, max_value=6.0) / 6.0
def _return_activation(self, x, nl):
"""Convolution Block
This function defines a activation choice.
# Arguments
x: Tensor, input tensor of conv layer.
nl: String, nonlinearity activation type.
# Returns
Output tensor.
"""
if nl == 'HS':
x = Activation(self._hard_swish)(x)
if nl == 'RE':
x = Activation(self._relu6)(x)
return x
def _conv_block(self, inputs, filters, kernel, strides, nl):
"""Convolution Block
This function defines a 2D convolution operation with BN and activation.
# Arguments
inputs: Tensor, input tensor of conv layer.
filters: Integer, the dimensionality of the output space.
kernel: An integer or tuple/list of 2 integers, specifying the
width and height of the 2D convolution window.
strides: An integer or tuple/list of 2 integers,
specifying the strides of the convolution along the width and height.
Can be a single integer to specify the same value for
all spatial dimensions.
nl: String, nonlinearity activation type.
# Returns
Output tensor.
"""
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
x = Conv2D(filters, kernel, padding='same', strides=strides)(inputs)
x = BatchNormalization(axis=channel_axis)(x)
return self._return_activation(x, nl)
def _squeeze(self, inputs):
"""Squeeze and Excitation.
This function defines a squeeze structure.
# Arguments
inputs: Tensor, input tensor of conv layer.
"""
input_channels = int(inputs.shape[-1])
x = GlobalAveragePooling2D()(inputs)
x = Dense(input_channels, activation='relu')(x)
x = Dense(input_channels, activation='hard_sigmoid')(x)
x = Reshape((1, 1, input_channels))(x)
x = Multiply()([inputs, x])
return x
def _bottleneck(self, inputs, filters, kernel, e, s, squeeze, nl):
"""Bottleneck
This function defines a basic bottleneck structure.
# Arguments
inputs: Tensor, input tensor of conv layer.
filters: Integer, the dimensionality of the output space.
kernel: An integer or tuple/list of 2 integers, specifying the
width and height of the 2D convolution window.
e: Integer, expansion size, i.e. the number of channels
produced by the initial 1x1 expansion convolution.
s: An integer or tuple/list of 2 integers,specifying the strides
of the convolution along the width and height.Can be a single
integer to specify the same value for all spatial dimensions.
squeeze: Boolean, Whether to use the squeeze.
nl: String, nonlinearity activation type.
# Returns
Output tensor.
"""
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
input_shape = K.int_shape(inputs)
tchannel = int(e)
cchannel = int(self.alpha * filters)
r = s == 1 and input_shape[3] == cchannel  # residual only when channel counts match (cchannel = alpha * filters)
x = self._conv_block(inputs, tchannel, (1, 1), (1, 1), nl)
x = DepthwiseConv2D(kernel, strides=(s, s), depth_multiplier=1, padding='same')(x)
x = BatchNormalization(axis=channel_axis)(x)
x = self._return_activation(x, nl)
if squeeze:
x = self._squeeze(x)
x = Conv2D(cchannel, (1, 1), strides=(1, 1), padding='same')(x)
x = BatchNormalization(axis=channel_axis)(x)
if r:
x = Add()([x, inputs])
return x
def build(self):
pass
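The two activations `_relu6` and `_hard_swish` are simple enough to check numerically. A pure-Python sketch (no Keras) of the same formulas:

```python
# Scalar versions of MobileNetBase._relu6 and _hard_swish:
# relu6(y) = min(max(y, 0), 6) and h_swish(x) = x * relu6(x + 3) / 6.
def relu6(y):
    return min(max(y, 0.0), 6.0)

def hard_swish(x):
    return x * relu6(x + 3.0) / 6.0

# hard_swish is zero for x <= -3 and identity-like for x >= 3,
# a piecewise approximation of x * sigmoid(x).
```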
================================================
FILE: model/mobilenet_v3_large.py
================================================
"""MobileNet v3 Large models for Keras.
# Reference
[Searching for MobileNetV3](https://arxiv.org/abs/1905.02244?context=cs)
"""
from keras.models import Model
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Reshape
from keras.utils.vis_utils import plot_model
from model.mobilenet_base import MobileNetBase
class MobileNetV3_Large(MobileNetBase):
def __init__(self, shape, n_class, alpha=1.0, include_top=True):
"""Init.
# Arguments
shape: An integer or tuple/list of 3 integers, shape
of input tensor.
n_class: Integer, number of classes.
alpha: Float, width multiplier.
include_top: Boolean, whether to include the classification layer.
# Returns
MobileNetv3 model.
"""
super(MobileNetV3_Large, self).__init__(shape, n_class, alpha)
self.include_top = include_top
def build(self, plot=False):
"""build MobileNetV3 Large.
# Arguments
plot: Boolean, whether to plot the model.
# Returns
model: Model, model.
"""
inputs = Input(shape=self.shape)
x = self._conv_block(inputs, 16, (3, 3), strides=(2, 2), nl='HS')
x = self._bottleneck(x, 16, (3, 3), e=16, s=1, squeeze=False, nl='RE')
x = self._bottleneck(x, 24, (3, 3), e=64, s=2, squeeze=False, nl='RE')
x = self._bottleneck(x, 24, (3, 3), e=72, s=1, squeeze=False, nl='RE')
x = self._bottleneck(x, 40, (5, 5), e=72, s=2, squeeze=True, nl='RE')
x = self._bottleneck(x, 40, (5, 5), e=120, s=1, squeeze=True, nl='RE')
x = self._bottleneck(x, 40, (5, 5), e=120, s=1, squeeze=True, nl='RE')
x = self._bottleneck(x, 80, (3, 3), e=240, s=2, squeeze=False, nl='HS')
x = self._bottleneck(x, 80, (3, 3), e=200, s=1, squeeze=False, nl='HS')
x = self._bottleneck(x, 80, (3, 3), e=184, s=1, squeeze=False, nl='HS')
x = self._bottleneck(x, 80, (3, 3), e=184, s=1, squeeze=False, nl='HS')
x = self._bottleneck(x, 112, (3, 3), e=480, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 112, (3, 3), e=672, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 160, (5, 5), e=672, s=2, squeeze=True, nl='HS')
x = self._bottleneck(x, 160, (5, 5), e=960, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 160, (5, 5), e=960, s=1, squeeze=True, nl='HS')
x = self._conv_block(x, 960, (1, 1), strides=(1, 1), nl='HS')
x = GlobalAveragePooling2D()(x)
x = Reshape((1, 1, 960))(x)
x = Conv2D(1280, (1, 1), padding='same')(x)
x = self._return_activation(x, 'HS')
if self.include_top:
x = Conv2D(self.n_class, (1, 1), padding='same', activation='softmax')(x)
x = Reshape((self.n_class,))(x)
model = Model(inputs, x)
if plot:
plot_model(model, to_file='images/MobileNetv3_large.png', show_shapes=True)
return model
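The network's overall stride can be read off the `build()` calls above. A pure-Python sketch (the stride list is transcribed from the code: the initial conv's stride 2 followed by each `_bottleneck`'s `s` argument; with 'same' padding each stride-2 stage halves the spatial size):

```python
# Total downsampling of the MobileNetV3-Large graph built above.
strides = [2, 1, 2, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1]

factor = 1
for s in strides:
    factor *= s

# For the 224x224 input in config/config.json, the last bottleneck output
# (before global average pooling) is 224 // factor pixels on a side.
final_side = 224 // factor
```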
================================================
FILE: model/mobilenet_v3_small.py
================================================
"""MobileNet v3 small models for Keras.
# Reference
[Searching for MobileNetV3](https://arxiv.org/abs/1905.02244?context=cs)
"""
from keras.models import Model
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Reshape
from keras.utils.vis_utils import plot_model
from model.mobilenet_base import MobileNetBase
class MobileNetV3_Small(MobileNetBase):
def __init__(self, shape, n_class, alpha=1.0, include_top=True):
"""Init.
# Arguments
shape: An integer or tuple/list of 3 integers, shape
of input tensor.
n_class: Integer, number of classes.
alpha: Float, width multiplier.
include_top: Boolean, whether to include the classification layer.
# Returns
MobileNetv3 model.
"""
super(MobileNetV3_Small, self).__init__(shape, n_class, alpha)
self.include_top = include_top
def build(self, plot=False):
"""build MobileNetV3 Small.
# Arguments
plot: Boolean, whether to plot the model.
# Returns
model: Model, model.
"""
inputs = Input(shape=self.shape)
x = self._conv_block(inputs, 16, (3, 3), strides=(2, 2), nl='HS')
x = self._bottleneck(x, 16, (3, 3), e=16, s=2, squeeze=True, nl='RE')
x = self._bottleneck(x, 24, (3, 3), e=72, s=2, squeeze=False, nl='RE')
x = self._bottleneck(x, 24, (3, 3), e=88, s=1, squeeze=False, nl='RE')
x = self._bottleneck(x, 40, (5, 5), e=96, s=2, squeeze=True, nl='HS')
x = self._bottleneck(x, 40, (5, 5), e=240, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 40, (5, 5), e=240, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 48, (5, 5), e=120, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 48, (5, 5), e=144, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 96, (5, 5), e=288, s=2, squeeze=True, nl='HS')
x = self._bottleneck(x, 96, (5, 5), e=576, s=1, squeeze=True, nl='HS')
x = self._bottleneck(x, 96, (5, 5), e=576, s=1, squeeze=True, nl='HS')
x = self._conv_block(x, 576, (1, 1), strides=(1, 1), nl='HS')
x = GlobalAveragePooling2D()(x)
x = Reshape((1, 1, 576))(x)
x = Conv2D(1280, (1, 1), padding='same')(x)
x = self._return_activation(x, 'HS')
if self.include_top:
x = Conv2D(self.n_class, (1, 1), padding='same', activation='softmax')(x)
x = Reshape((self.n_class,))(x)
model = Model(inputs, x)
if plot:
plot_model(model, to_file='images/MobileNetv3_small.png', show_shapes=True)
return model
================================================
FILE: train_cls.py
================================================
import os
import json
import pandas as pd
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ModelCheckpoint
def generate(batch, shape, ptrain, pval):
"""Data generation and augmentation
# Arguments
batch: Integer, batch size.
shape: Tuple of 2 integers, target image size.
ptrain: train dir.
pval: eval dir.
# Returns
train_generator: train set generator
validation_generator: validation set generator
count1: Integer, number of training images.
count2: Integer, number of validation images.
"""
# Use data augmentation on the training data
datagen1 = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
rotation_range=90,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
datagen2 = ImageDataGenerator(rescale=1. / 255)
train_generator = datagen1.flow_from_directory(
ptrain,
target_size=shape,
batch_size=batch,
class_mode='categorical')
validation_generator = datagen2.flow_from_directory(
pval,
target_size=shape,
batch_size=batch,
class_mode='categorical')
count1 = 0
for root, dirs, files in os.walk(ptrain):
for each in files:
count1 += 1
count2 = 0
for root, dirs, files in os.walk(pval):
for each in files:
count2 += 1
return train_generator, validation_generator, count1, count2
def train():
with open('config/config.json', 'r') as f:
cfg = json.load(f)
save_dir = cfg['save_dir']
shape = (int(cfg['height']), int(cfg['width']), 3)
n_class = int(cfg['class_number'])
batch = int(cfg['batch'])
if not os.path.exists(save_dir):
os.mkdir(save_dir)
if cfg['model'] == 'large':
from model.mobilenet_v3_large import MobileNetV3_Large
model = MobileNetV3_Large(shape, n_class).build()
elif cfg['model'] == 'small':
from model.mobilenet_v3_small import MobileNetV3_Small
model = MobileNetV3_Small(shape, n_class).build()
else:
raise ValueError('Invalid model: {}'.format(cfg['model']))
pre_weights = cfg['weights']
if pre_weights and os.path.exists(pre_weights):
model.load_weights(pre_weights, by_name=True)
opt = Adam(lr=float(cfg['learning_rate']))
earlystop = EarlyStopping(monitor='val_acc', patience=5, verbose=0, mode='auto')
checkpoint = ModelCheckpoint(filepath=os.path.join(save_dir, '{}_weights.h5'.format(cfg['model'])),
monitor='val_acc', save_best_only=True, save_weights_only=True)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
train_generator, validation_generator, count1, count2 = generate(batch, shape[:2], cfg['train_dir'], cfg['eval_dir'])
hist = model.fit_generator(
train_generator,
validation_data=validation_generator,
steps_per_epoch=count1 // batch,
validation_steps=count2 // batch,
epochs=cfg['epochs'],
callbacks=[earlystop, checkpoint])
df = pd.DataFrame.from_dict(hist.history)
df.to_csv(os.path.join(save_dir, 'hist.csv'), encoding='utf-8', index=False)
#model.save_weights(os.path.join(save_dir, '{}_weights.h5'.format(cfg['model'])))
if __name__ == '__main__':
train()
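The image counts that size `steps_per_epoch` come from a plain `os.walk` over the data directories. A self-contained sketch of that loop against a throwaway directory (the class and file names here are illustrative, not the real dataset):

```python
import os
import tempfile

# Stand-in for data/train: 2 classes with 3 images each, built on the fly.
root = tempfile.mkdtemp()
for cls in ('class0', 'class1'):
    os.makedirs(os.path.join(root, cls))
    for i in range(3):
        open(os.path.join(root, cls, 'img{}.jpg'.format(i)), 'w').close()

# The counting loop generate() runs over the train and eval trees.
count = 0
for _, _, files in os.walk(root):
    count += len(files)

batch = 2               # value from config/config.json
steps = count // batch  # what train() passes as steps_per_epoch
```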