Full Code of holyhao/Baidu-Dogs for AI

Repository: holyhao/Baidu-Dogs
Branch: master
Commit: 6c5e56f8b1c3
Files: 16
Total size: 19.9 KB

Directory structure:
gitextract_0h9pdhwq/

├── .gitignore
├── README.md
├── data/
│   ├── README.md
│   ├── test/
│   │   └── README.md
│   ├── train/
│   │   └── README.md
│   └── val/
│       └── README.md
├── evaluate/
│   ├── README.md
│   ├── predict_bygenerator.py
│   └── predict_onebyone.py
├── models/
│   ├── README.md
│   ├── dogs.py
│   ├── inceptionV3.py
│   └── vgg19.py
└── preprocessing/
    ├── README.md
    ├── altoFolders.py
    └── divforValidation.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
*.h5
*.txt


================================================
FILE: README.md
================================================
# Fine-grained Dog Classification competition
- This is a dog classification competition held by Baidu. Further information at http://js.baidu.com/

## Framework
- [Keras](https://keras.io/)
- [Tensorflow Backend](https://www.tensorflow.org/)

## Hardware
- GeForce GTX TITAN X 12 GB
- Intel® Core™ i7-6700 CPU
- 16 GB memory
- Operating system: Ubuntu 14.04

## Data
- Download the images from Baidu Cloud
  - Training set: http://pan.baidu.com/s/1slLOqBz Key: 5axb
  - Test set: http://pan.baidu.com/s/1gfaf9rt Key: fl5n
- Put the images into different directories by their class labels. Refer to [altoFolders.py](preprocessing/altoFolders.py) for doing this.
- Take 20% of the labeled data for validation. Refer to [divforValidation.py](preprocessing/divforValidation.py).

## Base Model
- [VGG19](models/vgg19.py) for deep feature extraction, as provided in the Keras applications.
- [InceptionV3](models/inceptionV3.py) for deep feature extraction, as provided in the Keras applications.
- Softmax for classification.

## Evaluate
- Predict the classes for the unlabeled data one by one, referring to [predict_onebyone.py](evaluate/predict_onebyone.py), or in batches with a generator, referring to [predict_bygenerator.py](evaluate/predict_bygenerator.py).

## To be continued
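Both prediction scripts share the same final step: take the argmax of each probability vector and map it back to a class label via the `class_indices` dict that Keras's `flow_from_directory` produces. A minimal sketch of that mapping (the function name `submission_lines` is illustrative, not from the repo):

```python
def submission_lines(probs, class_indices, image_ids):
    """Build "label<TAB>image_id" submission lines.

    probs:         one per-class probability list per image
    class_indices: dict mapping class label -> column index,
                   as produced by Keras flow_from_directory
    image_ids:     image identifiers, in the same order as probs
    """
    # Invert the mapping: column index -> class label
    index_to_label = {v: k for k, v in class_indices.items()}
    lines = []
    for p, img_id in zip(probs, image_ids):
        best = max(range(len(p)), key=lambda j: p[j])  # argmax
        lines.append(index_to_label[best] + '\t' + img_id)
    return lines
```

Keeping `shuffle=False` on the test generator is what guarantees `probs` and the generator's `filenames` stay in the same order, so this zip is valid.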
> Feel free to contact me if you have any issues or better ideas about anything.

> by Holy

================================================
FILE: data/README.md
================================================
# Data

Data for train, validation and test.

Note that the validation set here is 20% of the original training data, randomly sampled from each class.

================================================
FILE: data/test/README.md
================================================
# Test
Data for testing.



================================================
FILE: data/train/README.md
================================================
# Train
Data for training.



================================================
FILE: data/val/README.md
================================================
# Validation
Data for validation.



================================================
FILE: evaluate/README.md
================================================
# Evaluate

Some evaluation methods to be added.

================================================
FILE: evaluate/predict_bygenerator.py
================================================
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 21:27:15 2017
@author: Administrator
"""
import operator

import numpy as np
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator

model_path = "../models/model_dogs_Xception.h5"
test_data_dir = "../data/test"
val_data_dir = "../data/val"

model = load_model(model_path)
test_datagen = ImageDataGenerator(rescale=1./255)
batch_size = 64

# Only used to recover class_indices (class label -> column index)
val_generator = test_datagen.flow_from_directory(
    val_data_dir,
    target_size=(299, 299),
    batch_size=batch_size,
    shuffle=False,
    class_mode='categorical')
label_idxs = sorted(val_generator.class_indices.items(), key=operator.itemgetter(1))

test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(299, 299),
    batch_size=batch_size,
    shuffle=False,
    class_mode='categorical')

y = model.predict_generator(test_generator, test_generator.samples // batch_size + 1)
y_max_idx = np.argmax(y, 1)
predict_path = 'submission.txt'

# Each line: predicted class label, a tab, then the image id
# (filename with its directory prefix and ".jpg" suffix stripped)
with open(predict_path, 'a') as fp:
    for i, idx in enumerate(y_max_idx):
        fp.write(str(label_idxs[idx][0]) + '\t' + test_generator.filenames[i][6:-4] + '\n')

================================================
FILE: evaluate/predict_onebyone.py
================================================
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 21:27:15 2017
@author: Administrator
"""
import os

import cv2
import numpy as np
from keras.models import load_model

model_path = "../models/model_dogs_Xception.h5"
test_data_dir = "../data/test/"
model = load_model(model_path)

test_filenames = os.listdir(test_data_dir)
predictions = []
for file_path in test_filenames:
    # Load, convert to RGB, and resize to the network's input size
    img = cv2.imread(os.path.join(test_data_dir, file_path))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (299, 299), interpolation=cv2.INTER_CUBIC)
    test_img = np.array([img]).astype('float32') / 255
    pre = model.predict(test_img, batch_size=1)[0]
    predictions.append(pre)

probs = np.array(predictions)
classes = np.argmax(probs, 1)

# Recover the class -> index mapping saved during training,
# then invert it to index -> class
class_indices = np.load('class_indices.txt.npy').tolist()
value_indices = {v: k for k, v in class_indices.items()}
true_class = [value_indices[c] for c in classes]

with open('submit.txt', 'a') as fp:
    for i in range(len(test_filenames)):
        fp.write(str(true_class[i]) + "\t" + test_filenames[i].split(".")[0] + '\n')

================================================
FILE: models/README.md
================================================
# Models

Some models to be added.

================================================
FILE: models/dogs.py
================================================
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 20:49:09 2017
@author: Administrator
"""
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import cv2
import h5py
from keras.models import Sequential, Model, load_model
from keras import applications
from keras import optimizers
from keras.layers import Dropout, Flatten, Dense, Input,GlobalAveragePooling2D
from keras.utils import vis_utils,plot_model
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint,ReduceLROnPlateau
from vgg19 import VGG19


# Build the model
img_rows, img_cols, img_channel = 400, 400, 3
base_model = VGG19(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, img_channel))
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dense(1024, activation='relu'))
add_model.add(Dense(100, activation='softmax'))
model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='categorical_crossentropy', optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
# Optionally freeze some layers
#for layer in model.layers[:85]:
#    layer.trainable = False
# Print the network structure
#model.summary()
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

# Hyperparameters
batch_size = 32
epochs = 50
train_data_dir = "data/train"
val_data_dir = "data/val"

#plot_model(model, to_file='model.png')
train_datagen = ImageDataGenerator(
        rotation_range=30,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True)

val_datagen = ImageDataGenerator(rescale=1./255)

val_generator = val_datagen.flow_from_directory(
        val_data_dir,
        target_size=(img_rows, img_cols),
        batch_size=batch_size,
        class_mode='categorical')

train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_rows, img_cols),
        batch_size=batch_size,
        class_mode='categorical')
# Save the class -> index mapping
#with h5py.File("class_indices.h5") as h:
#    h.create_dataset("class_indices", data=train_generator.class_indices)
#np.save('class_indices.txt', train_generator.class_indices)

history = model.fit_generator(
     train_generator,
     steps_per_epoch=train_generator.samples // batch_size,
     epochs=epochs,
     validation_data=val_generator,
     validation_steps=val_generator.samples // batch_size
     #callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)]
 )
model.save('model_dogs_VGG19_400*400_full.h5')



================================================
FILE: models/inceptionV3.py
================================================
import os
import keras
import numpy as np
from keras import Input
from keras import backend as K
from keras.applications import Xception,InceptionV3
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.layers import Dense, Dropout, Lambda,AveragePooling2D,Flatten
from keras.models import Model, load_model
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD
from keras.utils import plot_model

img_rows,img_cols,img_channel=299,299,3
train_datagen = ImageDataGenerator(
        rotation_range=30, 
        rescale=1./255,
        shear_range=0.3,
        zoom_range=0.2,
        width_shift_range=0.2,
        height_shift_range=0.2, 
        horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)
early_stopping = EarlyStopping(monitor='val_loss', patience=3)
auto_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=0, mode='auto', epsilon=0.001, cooldown=0, min_lr=0)
save_model = ModelCheckpoint('InceptionV3{epoch:02d}-{val_acc:.2f}.h5', period=2)

# create the base pre-trained model
input_tensor = Input(shape=(img_rows, img_cols, img_channel))
base_model = InceptionV3(weights='imagenet', include_top=False,input_shape=(img_rows, img_cols, img_channel))
x=AveragePooling2D(pool_size=(4,4))(base_model.output)
x=Dropout(0.5)(x)
x=Flatten()(x)
x=Dense(100,activation='softmax')(x)
#base_model
model = Model(inputs=base_model.input, outputs=x)

for layer in base_model.layers:
    layer.trainable = False

batch_size = 48
epoch=5
train_generator = train_datagen.flow_from_directory(
    '../data/train',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    '../data/val',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical')

model.compile(loss="categorical_crossentropy", optimizer=SGD(lr=1e-3, momentum=0.9), metrics=['accuracy'])
# model = make_parallel(model, 3)
# train fc 
model.fit_generator(train_generator,
                steps_per_epoch=train_generator.samples // batch_size + 1,
                epochs=epoch,
                validation_data=validation_generator,
                validation_steps=validation_generator.samples // batch_size + 1
                #callbacks=[early_stopping, auto_lr, save_model]
                )


# train all				
for layer in model.layers:
    layer.trainable = True  
model.summary()          
batch_size=24
epoch=30
train_generator = train_datagen.flow_from_directory(
    '../data/train',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    '../data/val',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical')
model.compile(loss="categorical_crossentropy", optimizer=SGD(lr=1e-4, momentum=0.9), metrics=['accuracy'])
save_model = ModelCheckpoint('InceptionV3-{epoch:02d}-{val_acc:.2f}.h5', period=2)
model.fit_generator(train_generator,
                steps_per_epoch=train_generator.samples // batch_size + 1,
                epochs=epoch,
                validation_data=validation_generator,
                validation_steps=validation_generator.samples // batch_size + 1,
                callbacks=[auto_lr, save_model])  # otherwise the generator would loop indefinitely
model.save('inceptionV3.h5')

================================================
FILE: models/vgg19.py
================================================
# -*- coding: utf-8 -*-
'''VGG19 model for Keras.
# Reference:
- [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
'''
from __future__ import print_function

import numpy as np
import warnings
from keras.models import Model
from keras.layers import Flatten, Dense, Input
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import GlobalMaxPooling2D
from keras.layers import GlobalAveragePooling2D
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras import backend as K
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.imagenet_utils import preprocess_input
from keras.applications.imagenet_utils import _obtain_input_shape
from keras.engine.topology import get_source_inputs

WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels.h5'
WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5'

def VGG19(include_top=True, weights='imagenet',
          input_tensor=None, input_shape=None,
          pooling=None,
          classes=1000):
    """Instantiates the VGG19 architecture.
    Optionally loads weights pre-trained
    on ImageNet. Note that when using TensorFlow,
    for best performance you should set
    `image_data_format="channels_last"` in your Keras config
    at ~/.keras/keras.json.
    The model and the weights are compatible with both
    TensorFlow and Theano. The data format
    convention used by the model is the one
    specified in your Keras config file.
    # Arguments
        include_top: whether to include the 3 fully-connected
            layers at the top of the network.
        weights: one of `None` (random initialization)
            or "imagenet" (pre-training on ImageNet).
        input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
            to use as image input for the model.
        input_shape: optional shape tuple, only to be specified
            if `include_top` is False (otherwise the input shape
            has to be `(224, 224, 3)` (with `channels_last` data format)
            or `(3, 224, 224)` (with `channels_first` data format).
            It should have exactly 3 input channels,
            and width and height should be no smaller than 48.
            E.g. `(200, 200, 3)` would be one valid value.
        pooling: Optional pooling mode for feature extraction
            when `include_top` is `False`.
            - `None` means that the output of the model will be
                the 4D tensor output of the
                last convolutional layer.
            - `avg` means that global average pooling
                will be applied to the output of the
                last convolutional layer, and thus
                the output of the model will be a 2D tensor.
            - `max` means that global max pooling will
                be applied.
        classes: optional number of classes to classify images
            into, only to be specified if `include_top` is True, and
            if no `weights` argument is specified.
    # Returns
        A Keras model instance.
    # Raises
        ValueError: in case of invalid argument for `weights`,
            or invalid input shape.
    """
    if weights not in {'imagenet', None}:
        raise ValueError('The `weights` argument should be either '
                         '`None` (random initialization) or `imagenet` '
                         '(pre-training on ImageNet).')

    if weights == 'imagenet' and include_top and classes != 1000:
        raise ValueError('If using `weights` as imagenet with `include_top`'
                         ' as true, `classes` should be 1000')
    # Determine proper input shape
    input_shape = _obtain_input_shape(input_shape,
                                      default_size=224,
                                      min_size=48,
                                      data_format=K.image_data_format(),
                                      include_top=include_top)

    if input_tensor is None:
        img_input = Input(shape=input_shape)
    else:
        if not K.is_keras_tensor(input_tensor):
            img_input = Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor
    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv4')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv4')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv4')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    if include_top:
        # Classification block
        x = Flatten(name='flatten')(x)
        x = Dense(4096, activation='relu', name='fc1')(x)
        x = Dense(4096, activation='relu', name='fc2')(x)
        x = Dense(classes, activation='softmax', name='predictions')(x)
    else:
        if pooling == 'avg':
            x = GlobalAveragePooling2D()(x)
        elif pooling == 'max':
            x = GlobalMaxPooling2D()(x)

    # Ensure that the model takes into account
    # any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = get_source_inputs(input_tensor)
    else:
        inputs = img_input
    # Create model.
    model = Model(inputs, x, name='vgg19')

    # load weights
    if weights == 'imagenet':
        if include_top:
            weights_path = get_file('vgg19_weights_tf_dim_ordering_tf_kernels.h5',
                                    WEIGHTS_PATH,
                                    cache_subdir='models')
        else:
            weights_path = get_file('vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5',
                                    WEIGHTS_PATH_NO_TOP,
                                    cache_subdir='models')
        model.load_weights(weights_path)
        if K.backend() == 'theano':
            layer_utils.convert_all_kernels_in_model(model)

        if K.image_data_format() == 'channels_first':
            if include_top:
                maxpool = model.get_layer(name='block5_pool')
                shape = maxpool.output_shape[1:]
                dense = model.get_layer(name='fc1')
                layer_utils.convert_dense_weights_data_format(dense, shape, 'channels_first')

            if K.backend() == 'tensorflow':
                warnings.warn('You are using the TensorFlow backend, yet you '
                              'are using the Theano '
                              'image data format convention '
                              '(`image_data_format="channels_first"`). '
                              'For best performance, set '
                              '`image_data_format="channels_last"` in '
                              'your Keras config '
                              'at ~/.keras/keras.json.')
    return model


if __name__ == '__main__':
    model = VGG19(include_top=True, weights='imagenet')

    img_path = 'cat.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)

    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds))

================================================
FILE: preprocessing/README.md
================================================
# Preprocessing

Some preprocessing scripts to be added.

================================================
FILE: preprocessing/altoFolders.py
================================================
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 19:15:14 2017
@author: holy
"""
import os
import shutil
import pandas as pd

# originaldata.txt: one "<image name> <class label>" pair per line
label = pd.read_csv("originaldata.txt", sep=r'\s+', encoding='utf-8', escapechar='\n')
train_filenames = os.listdir('originaldata')

def ex_mkdir(dirname):
    if not os.path.exists(dirname):
        os.mkdir(dirname)

def rmrf_mkdir(dirname):
    if os.path.exists(dirname):
        shutil.rmtree(dirname)
    os.mkdir(dirname)

# Copy each image into ../data/train/<class label>/
ex_mkdir('../data/train')
for row in label.index:
    name = label.iloc[row, 0]
    i = label.iloc[row, 1]
    ex_mkdir('../data/train/' + str(i))
    shutil.copy('originaldata/' + name + '.jpg', '../data/train/' + str(i) + '/' + name + '.jpg')

================================================
FILE: preprocessing/divforValidation.py
================================================
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 19:15:14 2017
@author: holy
"""
import os
import shutil

train_dir = '../data/train/'
val_dir = '../data/val/'
ls = os.listdir(train_dir)

def ex_mkdir(dirname):
    if not os.path.exists(dirname):
        os.mkdir(dirname)

def rmrf_mkdir(dirname):
    if os.path.exists(dirname):
        shutil.rmtree(dirname)
    os.mkdir(dirname)

# Move 20% of each class's images from train/ to val/
ex_mkdir(val_dir)
for cls in ls:
    ex_mkdir(val_dir + str(cls))
    data = os.listdir(train_dir + str(cls))
    for j in range(int(0.2 * len(data))):  # 20% for validation
        name = data[j]
        shutil.move(train_dir + str(cls) + '/' + name, val_dir + str(cls) + '/' + name)
SYMBOL INDEX (5 symbols across 3 files)

FILE: models/vgg19.py
  function VGG19 (line 28) | def VGG19(include_top=True, weights='imagenet',

FILE: preprocessing/altoFolders.py
  function ex_mkdir (line 12) | def ex_mkdir(dirname):
  function rmrf_mkdir (line 15) | def rmrf_mkdir(dirname):

FILE: preprocessing/divforValidation.py
  function ex_mkdir (line 10) | def ex_mkdir(dirname):
  function rmrf_mkdir (line 13) | def rmrf_mkdir(dirname):
