[
  {
    "path": "README.md",
    "content": "### 声明：开源只是为了方便大家交流学习，数据请勿用于商业用途！！！！转载或解读请注明出处，谢谢！\n\n**背景**\n\n很早之前开源过 pytorch 进行图像分类的代码（[从实例掌握 pytorch 进行图像分类](http://spytensor.com/index.php/archives/21/)），历时两个多月的学习和总结，近期也做了升级。在此基础上写了一个 Ai Challenger 农作物竞赛的 baseline 供大家交流。\n\n**2018 年 12 月 13 日更新**\n\n新增数据集下载链接：[百度网盘]( https://pan.baidu.com/s/16f1nQchS-zBtzSWn9Guyyg ) 提取码：iksk \n数据集是 10 月 23 日 更新后的新数据集，包含训练集、验证集、测试集A/B.\n另外最近有同学拿到类似的数据，想做分类的任务，但是这份代码是针对这次比赛开源的，在数据读取方式上会有区别，对于新手来说不太友好，我开源了一份针对图像分类任务的代码，并附上简单教程，相信看完后能比较轻松使用 pytorch 进行图像分类。\n\n教程: [从实例掌握 pytorch 进行图像分类](http://www.spytensor.com/index.php/archives/21/)\n\n代码: [pytorch-image-classification](https://github.com/spytensor/pytorch-image-classification)\n\n**2018年 10 月 30 日更新**\n\n新增 `data_aug.py` 用于线下数据增强，由于时间问题，这个比赛不再做啦，这些增强方式大家有需要可以研究一下，支持的增强方式：\n\n- 高斯噪声\n- 亮度变化\n- 左右翻转\n- 上下翻转\n- 色彩抖动\n- 对比度变化\n- 锐度变化\n\n注：对比度增强在可视化后，主观感觉特征更明显了，目前我还未跑完。提醒一下，如果做了对比度增强，在测试集的时候最好也做一下。\n\n个人博客：[超杰](http://spytensor.com/)\n\n比赛地址：[农作物病害检测](https://challenger.ai/competition/pdr2018)\n\n完整代码地址：[plants_disease_detection](https://github.com/spytensor/plants_disease_detection)\n\n    注：\n    欢迎大佬学习交流啊，这份代码可改进的地方太多了,\n    如果大佬们有啥改进的意见请指导！\n    联系方式：zhuchaojie@buaa.edu.cn\n\n**成绩**：线上 0.8805，线下0.875，由于划分存在随机性，可能复现会出现波动，已经尽可能排除随机种子的干扰了。\n\n## 提醒\n\n`main.py` 中的test函数已经修正，执行后在 `./submit/`中会得到提交格式的 json 文件,现已支持 Focalloss 和交叉验证，需要的自行修改一下就可以了。\n依赖中的 pytorch 版本请保持一致，不然可能会有一些小 BUG。\n\n### 1. 依赖\n\n    python3.6 pytorch0.4.1\n\n### 2. 关于数据的处理\n\n首先说明，使用的数据为官方更新后的数据，并做了一个统计分析（下文会给出），最后决定删除第 44 类和第 45 类。\n并且由于数据分布的原因，我将 train 和 val 数据集合并后，采用随机划分。\n\n数据增强方式：\n\n- RandomRotation(30)\n- RandomHorizontalFlip()\n- RandomVerticalFlip()\n- RandomAffine(45)\n\n图片尺寸选择了 650，暂时没有对这个尺寸进行调优（毕竟太忙了。。）\n\n### 3. 模型选择\n\n模型目前就尝试了 resnet50，后续有卡的话再说吧。。。\n\n### 4. 
超参数设置\n\n详情在 config.py 中\n\n### 5.使用方法\n\n- 第一步：将测试集图片复制到 `data/test/` 下\n- 第二步：将训练集合验证集中的图片都复制到 `data/temp/images/` 下，将两个 `json` 文件放到 `data/temp/labels/` 下\n- 执行 move.py 文件\n- 执行 main.py 进行训练\n\n### 6.数据分布图\n\n训练集\n\n![train](http://www.spytensor.com/images/plants/train.png)\n\n验证集\n\n![val](http://www.spytensor.com/images/plants/val.png)\n\n全部数据集\n\n![all](http://www.spytensor.com/images/plants/all.png)\n"
  },
  {
    "path": "config.py",
    "content": "class DefaultConfigs(object):\n    #1.string parameters\n    train_data = \"./data/train/\"\n    test_data = \"./data/test/\"\n    val_data = \"no\"\n    model_name = \"resnet50\"\n    weights = \"./checkpoints/\"\n    best_models = weights + \"best_model/\"\n    submit = \"./submit/\"\n    logs = \"./logs/\"\n    gpus = \"1\"\n\n    #2.numeric parameters\n    epochs = 40\n    batch_size = 8\n    img_height = 650\n    img_weight = 650\n    num_classes = 59\n    seed = 888\n    lr = 1e-4\n    lr_decay = 1e-4\n    weight_decay = 1e-4\n\nconfig = DefaultConfigs()\n"
  },
  {
    "path": "data/temp/images/.gitkeep",
    "content": ""
  },
  {
    "path": "data/temp/labels/.gitkeep",
    "content": ""
  },
  {
    "path": "data/test/.gitkeep",
    "content": ""
  },
  {
    "path": "data/train/.gitkeep",
    "content": ""
  },
  {
    "path": "data_aug.py",
    "content": "from PIL import Image,ImageEnhance,ImageFilter,ImageOps\nimport os\nimport shutil\nimport numpy as np\nimport cv2\nimport random\nfrom skimage.util import random_noise\nfrom skimage import exposure\n\n\nimage_number = 0\n\nraw_path = \"./data/train/\"\n\nnew_path = \"./aug/train/\"\n\n# 加高斯噪声\ndef addNoise(img):\n    '''\n    注意：输出的像素是[0,1]之间,所以乘以5得到[0,255]之间\n    '''\n    return random_noise(img, mode='gaussian', seed=13, clip=True)*255\n\ndef changeLight(img):\n    rate = random.uniform(0.5, 1.5)\n    # print(rate)\n    img = exposure.adjust_gamma(img, rate) #大于1为调暗，小于1为调亮;1.05\n    return img\n\ntry:\n    for i in range(59):\n        os.makedirs(new_path + os.sep + str(i))\n    except:\n        pass\n\nfor raw_dir_name in range(59):\n\n    raw_dir_name = str(raw_dir_name)\n\n    saved_image_path = new_path + raw_dir_name+\"/\"\n\n    raw_image_path = raw_path + raw_dir_name+\"/\"\n\n    if not os.path.exists(saved_image_path):\n\n        os.mkdir(saved_image_path)\n\n    raw_image_file_name = os.listdir(raw_image_path)\n\n    raw_image_file_path = []\n\n    for i in raw_image_file_name:\n\n        raw_image_file_path.append(raw_image_path+i)\n\n    for x in raw_image_file_path:\n\n        img = Image.open(x)\n        cv_image = cv2.imread(x)\n\n        # 高斯噪声\n        gau_image = addNoise(cv_image)\n        # 随机改变\n        light = changeLight(cv_image)\n        light_and_gau = addNoise(light)\n\n        cv2.imwrite(saved_image_path + \"gau_\" + os.path.basename(x),gau_image)\n        cv2.imwrite(saved_image_path + \"light_\" + os.path.basename(x),light)\n        cv2.imwrite(saved_image_path + \"gau_light\" + os.path.basename(x),light_and_gau)\n        #img = img.resize((800,600))\n\n        #1.翻转 \n\n        img_flip_left_right = img.transpose(Image.FLIP_LEFT_RIGHT)\n\n        img_flip_top_bottom = img.transpose(Image.FLIP_TOP_BOTTOM)\n\n        #2.旋转 \n\n        #img_rotate_90 = img.transpose(Image.ROTATE_90)\n\n        #img_rotate_180 = 
img.transpose(Image.ROTATE_180)\n\n        #img_rotate_270 = img.transpose(Image.ROTATE_270)\n\n        #img_rotate_90_left = img_flip_left_right.transpose(Image.ROTATE_90)\n\n        #img_rotate_270_left = img_flip_left_right.transpose(Image.ROTATE_270)\n\n        #3.亮度\n\n        #enh_bri = ImageEnhance.Brightness(img)\n        #brightness = 1.5\n        #image_brightened = enh_bri.enhance(brightness)\n\n        #4.色彩\n\n        #enh_col = ImageEnhance.Color(img)\n        #color = 1.5\n\n        #image_colored = enh_col.enhance(color)\n\n        #5.对比度\n\n        enh_con = ImageEnhance.Contrast(img)\n\n        contrast = 1.5\n\n        image_contrasted = enh_con.enhance(contrast)\n\n        #6.锐度\n\n        #enh_sha = ImageEnhance.Sharpness(img)\n        #sharpness = 3.0\n\n        #image_sharped = enh_sha.enhance(sharpness)\n\n        #保存 \n\n        img.save(saved_image_path + os.path.basename(x))\n\n        img_flip_left_right.save(saved_image_path + \"left_right_\" + os.path.basename(x))\n\n        img_flip_top_bottom.save(saved_image_path + \"top_bottom_\" + os.path.basename(x))\n\n        #img_rotate_90.save(saved_image_path + \"rotate_90_\" + os.path.basename(x))\n\n        #img_rotate_180.save(saved_image_path + \"rotate_180_\" + os.path.basename(x))\n\n        #img_rotate_270.save(saved_image_path + \"rotate_270_\" + os.path.basename(x))\n\n        #img_rotate_90_left.save(saved_image_path + \"rotate_90_left_\" + os.path.basename(x))\n\n        #img_rotate_270_left.save(saved_image_path + \"rotate_270_left_\" + os.path.basename(x))\n\n        #image_brightened.save(saved_image_path + \"brighted_\" + os.path.basename(x))\n\n        #image_colored.save(saved_image_path + \"colored_\" + os.path.basename(x))\n\n        image_contrasted.save(saved_image_path + \"contrasted_\" + os.path.basename(x))\n\n        #image_sharped.save(saved_image_path + \"sharped_\" + os.path.basename(x))\n\n        image_number += 1\n\n        print(\"convert pictur\" \"es :%s 
size:%s mode:%s\" % (image_number, img.size, img.mode))\n\n \n"
  },
  {
    "path": "dataset/dataloader.py",
    "content": "from torch.utils.data import Dataset\nfrom torchvision import transforms as T \nfrom config import config\nfrom PIL import Image \nfrom itertools import chain \nfrom glob import glob\nfrom tqdm import tqdm\nimport random \nimport numpy as np \nimport pandas as pd \nimport os \nimport cv2\nimport torch \n\n#1.set random seed\nrandom.seed(config.seed)\nnp.random.seed(config.seed)\ntorch.manual_seed(config.seed)\ntorch.cuda.manual_seed_all(config.seed)\n\n#2.define dataset\nclass ChaojieDataset(Dataset):\n    def __init__(self,label_list,transforms=None,train=True,test=False):\n        self.test = test \n        self.train = train \n        imgs = []\n        if self.test:\n            for index,row in label_list.iterrows():\n                imgs.append((row[\"filename\"]))\n            self.imgs = imgs \n        else:\n            for index,row in label_list.iterrows():\n                imgs.append((row[\"filename\"],row[\"label\"]))\n            self.imgs = imgs\n        if transforms is None:\n            if self.test or not train:\n                self.transforms = T.Compose([\n                    T.Resize((config.img_weight,config.img_height)),\n                    T.ToTensor(),\n                    T.Normalize(mean = [0.485,0.456,0.406],\n                                std = [0.229,0.224,0.225])])\n            else:\n                self.transforms  = T.Compose([\n                    T.Resize((config.img_weight,config.img_height)),\n                    T.RandomRotation(30),\n                    T.RandomHorizontalFlip(),\n                    T.RandomVerticalFlip(),\n                    T.RandomAffine(45),\n                    T.ToTensor(),\n                    T.Normalize(mean = [0.485,0.456,0.406],\n                                std = [0.229,0.224,0.225])])\n        else:\n            self.transforms = transforms\n    def __getitem__(self,index):\n        if self.test:\n            filename = self.imgs[index]\n            img = 
Image.open(filename)\n            img = self.transforms(img)\n            return img,filename\n        else:\n            filename,label = self.imgs[index] \n            img = Image.open(filename)\n            img = self.transforms(img)\n            return img,label\n    def __len__(self):\n        return len(self.imgs)\n\ndef collate_fn(batch):\n    imgs = []\n    label = []\n    for sample in batch:\n        imgs.append(sample[0])\n        label.append(sample[1])\n\n    return torch.stack(imgs, 0), \\\n           label\n\ndef get_files(root,mode):\n    #for test\n    if mode == \"test\":\n        files = []\n        for img in os.listdir(root):\n            files.append(root + img)\n        files = pd.DataFrame({\"filename\":files})\n        return files\n    elif mode in [\"train\",\"val\"]:\n        #for train and val\n        all_data_path,labels = [],[]\n        image_folders = list(map(lambda x:root+x,os.listdir(root)))\n        jpg_image_1 = list(map(lambda x:glob(x+\"/*.jpg\"),image_folders))\n        jpg_image_2 = list(map(lambda x:glob(x+\"/*.JPG\"),image_folders))\n        all_images = list(chain.from_iterable(jpg_image_1 + jpg_image_2))\n        print(\"loading train dataset\")\n        for file in tqdm(all_images):\n            all_data_path.append(file)\n            labels.append(int(file.split(\"/\")[-2]))\n        all_files = pd.DataFrame({\"filename\":all_data_path,\"label\":labels})\n        return all_files\n    else:\n        print(\"check the mode please!\")\n"
  },
  {
    "path": "main.py",
    "content": "import os \nimport random \nimport time\nimport json\nimport torch\nimport torchvision\nimport numpy as np \nimport pandas as pd \nimport warnings\nfrom datetime import datetime\nfrom torch import nn,optim\nfrom config import config \nfrom collections import OrderedDict\nfrom torch.autograd import Variable \nfrom torch.utils.data import DataLoader\nfrom dataset.dataloader import *\nfrom sklearn.model_selection import train_test_split,StratifiedKFold\nfrom timeit import default_timer as timer\nfrom models.model import *\nfrom utils import *\n\n#1. set random.seed and cudnn performance\nrandom.seed(config.seed)\nnp.random.seed(config.seed)\ntorch.manual_seed(config.seed)\ntorch.cuda.manual_seed_all(config.seed)\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = config.gpus\ntorch.backends.cudnn.benchmark = True\nwarnings.filterwarnings('ignore')\n\n#2. evaluate func\ndef evaluate(val_loader,model,criterion):\n    #2.1 define meters\n    losses = AverageMeter()\n    top1 = AverageMeter()\n    top2 = AverageMeter()\n    #2.2 switch to evaluate mode and confirm model has been transfered to cuda\n    model.cuda()\n    model.eval()\n    with torch.no_grad():\n        for i,(input,target) in enumerate(val_loader):\n            input = Variable(input).cuda()\n            target = Variable(torch.from_numpy(np.array(target)).long()).cuda()\n            #target = Variable(target).cuda()\n            #2.2.1 compute output\n            output = model(input)\n            loss = criterion(output,target)\n\n            #2.2.2 measure accuracy and record loss\n            precision1,precision2 = accuracy(output,target,topk=(1,2))\n            losses.update(loss.item(),input.size(0))\n            top1.update(precision1[0],input.size(0))\n            top2.update(precision2[0],input.size(0))\n\n    return [losses.avg,top1.avg,top2.avg]\n\n#3. 
test model on public dataset and save the probability matrix\ndef test(test_loader,model,folds):\n    #3.1 confirm the model converted to cuda\n    csv_map = OrderedDict({\"filename\":[],\"probability\":[]})\n    model.cuda()\n    model.eval()\n    with open(\"./submit/baseline.json\",\"w\",encoding=\"utf-8\") as f :\n        submit_results = []\n        for i,(input,filepath) in enumerate(tqdm(test_loader)):\n            #3.2 change everything to cuda and get only basename\n            filepath = [os.path.basename(x) for x in filepath]\n            with torch.no_grad():\n                image_var = Variable(input).cuda()\n                #3.3.output\n                #print(filepath)\n                #print(input,input.shape)\n                y_pred = model(image_var)\n                #print(y_pred.shape)\n                smax = nn.Softmax(1)\n                smax_out = smax(y_pred)\n            #3.4 save probability to csv files\n            csv_map[\"filename\"].extend(filepath)\n            for output in smax_out:\n                prob = \";\".join([str(i) for i in output.data.tolist()])\n                csv_map[\"probability\"].append(prob)\n        result = pd.DataFrame(csv_map)\n        result[\"probability\"] = result[\"probability\"].map(lambda x : [float(i) for i in x.split(\";\")])\n        for index, row in result.iterrows():\n            pred_label = np.argmax(row['probability'])\n            if pred_label > 43:\n                pred_label = pred_label + 2\n            submit_results.append({\"image_id\":row['filename'],\"disease_class\":pred_label})\n        json.dump(submit_results,f,ensure_ascii=False,cls = MyEncoder)\n\n#4. 
more details to build main function\ndef main():\n    fold = 0\n    #4.1 mkdirs\n    if not os.path.exists(config.submit):\n        os.mkdir(config.submit)\n    if not os.path.exists(config.weights):\n        os.mkdir(config.weights)\n    if not os.path.exists(config.best_models):\n        os.mkdir(config.best_models)\n    if not os.path.exists(config.logs):\n        os.mkdir(config.logs)\n    if not os.path.exists(config.weights + config.model_name + os.sep +str(fold) + os.sep):\n        os.makedirs(config.weights + config.model_name + os.sep +str(fold) + os.sep)\n    if not os.path.exists(config.best_models + config.model_name + os.sep +str(fold) + os.sep):\n        os.makedirs(config.best_models + config.model_name + os.sep +str(fold) + os.sep)\n    #4.2 get model and optimizer\n    model = get_net()\n    #model = torch.nn.DataParallel(model)\n    model.cuda()\n    #optimizer = optim.SGD(model.parameters(),lr = config.lr,momentum=0.9,weight_decay=config.weight_decay)\n    optimizer = optim.Adam(model.parameters(),lr = config.lr,amsgrad=True,weight_decay=config.weight_decay)\n    criterion = nn.CrossEntropyLoss().cuda()\n    #criterion = FocalLoss().cuda()\n    log = Logger()\n    log.open(config.logs + \"log_train.txt\",mode=\"a\")\n    log.write(\"\\n----------------------------------------------- [START %s] %s\\n\\n\" % (datetime.now().strftime('%Y-%m-%d %H:%M:%S'), '-' * 51))\n    #4.3 some parameters for K-fold and restart model\n    start_epoch = 0\n    best_precision1 = 0\n    best_precision_save = 0\n    resume = False\n\n    #4.4 restart the training process\n    if resume:\n        #note: load from the same path save_checkpoint writes to (includes model_name)\n        checkpoint = torch.load(config.best_models + config.model_name + os.sep + str(fold) + \"/model_best.pth.tar\")\n        start_epoch = checkpoint[\"epoch\"]\n        fold = checkpoint[\"fold\"]\n        best_precision1 = checkpoint[\"best_precision1\"]\n        model.load_state_dict(checkpoint[\"state_dict\"])\n        optimizer.load_state_dict(checkpoint[\"optimizer\"])\n\n    #4.5 get 
files and split for K-fold dataset\n    #4.5.1 read files\n    train_ = get_files(config.train_data,\"train\")\n    #val_data_list = get_files(config.val_data,\"val\")\n    test_files = get_files(config.test_data,\"test\")\n\n    \"\"\" \n    #4.5.2 split\n    split_fold = StratifiedKFold(n_splits=3)\n    folds_indexes = split_fold.split(X=origin_files[\"filename\"],y=origin_files[\"label\"])\n    folds_indexes = np.array(list(folds_indexes))\n    fold_index = folds_indexes[fold]\n\n    #4.5.3 using fold index to split for train data and val data\n    train_data_list = pd.concat([origin_files[\"filename\"][fold_index[0]],origin_files[\"label\"][fold_index[0]]],axis=1)\n    val_data_list = pd.concat([origin_files[\"filename\"][fold_index[1]],origin_files[\"label\"][fold_index[1]]],axis=1)\n    \"\"\"\n    train_data_list,val_data_list = train_test_split(train_,test_size = 0.15,stratify=train_[\"label\"])\n    #4.5.4 load dataset\n    train_dataloader = DataLoader(ChaojieDataset(train_data_list),batch_size=config.batch_size,shuffle=True,collate_fn=collate_fn,pin_memory=True)\n    val_dataloader = DataLoader(ChaojieDataset(val_data_list,train=False),batch_size=config.batch_size,shuffle=True,collate_fn=collate_fn,pin_memory=False)\n    test_dataloader = DataLoader(ChaojieDataset(test_files,test=True),batch_size=1,shuffle=False,pin_memory=False)\n    #scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer,\"max\",verbose=1,patience=3)\n    scheduler =  optim.lr_scheduler.StepLR(optimizer,step_size = 10,gamma=0.1)\n    #4.5.5.1 define metrics\n    train_losses = AverageMeter()\n    train_top1 = AverageMeter()\n    train_top2 = AverageMeter()\n    valid_loss = [np.inf,0,0]\n    model.train()\n    #logs\n    log.write('** start training here! 
**\\n')\n    log.write('                           |------------ VALID -------------|----------- TRAIN -------------|------Accuracy------|------------|\\n')\n    log.write('lr       iter     epoch    | loss   top-1  top-2            | loss   top-1  top-2           |    Current Best    | time       |\\n')\n    log.write('-------------------------------------------------------------------------------------------------------------------------------\\n')\n    #4.5.5 train\n    start = timer()\n    for epoch in range(start_epoch,config.epochs):\n        scheduler.step(epoch)\n        # train\n        #global iter\n        for iter,(input,target) in enumerate(train_dataloader):\n            #4.5.5 switch to continue train process\n            model.train()\n            input = Variable(input).cuda()\n            target = Variable(torch.from_numpy(np.array(target)).long()).cuda()\n            #target = Variable(target).cuda()\n            output = model(input)\n            loss = criterion(output,target)\n\n            precision1_train,precision2_train = accuracy(output,target,topk=(1,2))\n            train_losses.update(loss.item(),input.size(0))\n            train_top1.update(precision1_train[0],input.size(0))\n            train_top2.update(precision2_train[0],input.size(0))\n            #backward\n            optimizer.zero_grad()\n            loss.backward()\n            optimizer.step()\n            lr = get_learning_rate(optimizer)\n            print('\\r',end='',flush=True)\n            print('%0.4f %5.1f %6.1f        | %0.3f  %0.3f  %0.3f         | %0.3f  %0.3f  %0.3f         |         %s         | %s' % (\\\n                         lr, iter/len(train_dataloader) + epoch, epoch,\n                         valid_loss[0], valid_loss[1], valid_loss[2],\n                         train_losses.avg, train_top1.avg, train_top2.avg,str(best_precision_save),\n                         time_to_str((timer() - start),'min'))\n            , end='',flush=True)\n        
#evaluate\n        lr = get_learning_rate(optimizer)\n        #evaluate once per epoch\n        valid_loss = evaluate(val_dataloader,model,criterion)\n        is_best = valid_loss[1] > best_precision1\n        best_precision1 = max(valid_loss[1],best_precision1)\n        try:\n            best_precision_save = best_precision1.cpu().data.numpy()\n        except:\n            pass\n        save_checkpoint({\n                    \"epoch\":epoch + 1,\n                    \"model_name\":config.model_name,\n                    \"state_dict\":model.state_dict(),\n                    \"best_precision1\":best_precision1,\n                    \"optimizer\":optimizer.state_dict(),\n                    \"fold\":fold,\n                    \"valid_loss\":valid_loss,\n        },is_best,fold)\n        #adjust learning rate\n        #scheduler.step(valid_loss[1])\n        print(\"\\r\",end=\"\",flush=True)\n        log.write('%0.4f %5.1f %6.1f        | %0.3f  %0.3f  %0.3f          | %0.3f  %0.3f  %0.3f         |         %s         | %s' % (\\\n                        lr, 0 + epoch, epoch,\n                        valid_loss[0], valid_loss[1], valid_loss[2],\n                        train_losses.avg,    train_top1.avg,    train_top2.avg, str(best_precision_save),\n                        time_to_str((timer() - start),'min'))\n                )\n        log.write('\\n')\n        time.sleep(0.01)\n    best_model = torch.load(config.best_models + os.sep+config.model_name+os.sep+ str(fold) +os.sep+ 'model_best.pth.tar')\n    model.load_state_dict(best_model[\"state_dict\"])\n    test(test_dataloader,model,fold)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "models/__init__.py",
    "content": ""
  },
  {
    "path": "models/model.py",
    "content": "import torchvision\nimport torch.nn.functional as F \nfrom torch import nn\nfrom config import config\n\ndef generate_model():\n    class DenseModel(nn.Module):\n        def __init__(self, pretrained_model):\n            super(DenseModel, self).__init__()\n            self.classifier = nn.Linear(pretrained_model.classifier.in_features, config.num_classes)\n\n            for m in self.modules():\n                if isinstance(m, nn.Conv2d):\n                    nn.init.kaiming_normal(m.weight)\n                elif isinstance(m, nn.BatchNorm2d):\n                    m.weight.data.fill_(1)\n                    m.bias.data.zero_()\n                elif isinstance(m, nn.Linear):\n                    m.bias.data.zero_()\n\n            self.features = pretrained_model.features\n            self.layer1 = pretrained_model.features._modules['denseblock1']\n            self.layer2 = pretrained_model.features._modules['denseblock2']\n            self.layer3 = pretrained_model.features._modules['denseblock3']\n            self.layer4 = pretrained_model.features._modules['denseblock4']\n\n        def forward(self, x):\n            features = self.features(x)\n            out = F.relu(features, inplace=True)\n            out = F.avg_pool2d(out, kernel_size=8).view(features.size(0), -1)\n            out = F.sigmoid(self.classifier(out))\n            return out\n\n    return DenseModel(torchvision.models.densenet169(pretrained=True))\n\ndef get_net():\n    #return MyModel(torchvision.models.resnet101(pretrained = True))\n    model = torchvision.models.resnet50(pretrained = True)    \n    #for param in model.parameters():\n    #    param.requires_grad = False\n    model.avgpool = nn.AdaptiveAvgPool2d(1)\n    model.fc = nn.Linear(2048,config.num_classes)\n    return model\n\n"
  },
  {
    "path": "move.py",
    "content": "import json\nimport shutil\nimport os\nfrom glob import glob\nfrom tqdm import tqdm\n\ntry:\n    for i in range(0,59):\n        os.mkdir(\"./data/train/\" + str(i))\nexcept:\n    pass\n    \nfile_train = json.load(open(\"./data/temp/labels/AgriculturalDisease_train_annotations.json\",\"r\",encoding=\"utf-8\"))\nfile_val = json.load(open(\"./data/temp/labels/AgriculturalDisease_validation_annotations.json\",\"r\",encoding=\"utf-8\"))\n\nfile_list = file_train + file_val\n\nfor file in tqdm(file_list):\n    filename = file[\"image_id\"]\n    origin_path = \"./data/temp/images/\" + filename\n    ids = file[\"disease_class\"]\n    if ids ==  44:\n        continue\n    if ids == 45:\n        continue\n    if ids > 45:\n        ids = ids -2\n    save_path = \"./data/train/\" + str(ids) + \"/\"\n    shutil.copy(origin_path,save_path)\n\n"
  },
  {
    "path": "utils.py",
    "content": "import shutil\nimport torch\nimport sys\nimport os\nimport json\nimport numpy as np\nfrom config import config\nfrom torch import nn\nimport torch.nn.functional as F\ndef save_checkpoint(state, is_best,fold):\n    filename = config.weights + config.model_name + os.sep +str(fold) + os.sep + \"_checkpoint.pth.tar\"\n    torch.save(state, filename)\n    if is_best:\n        shutil.copyfile(filename, config.best_models + config.model_name+ os.sep +str(fold)  + os.sep + 'model_best.pth.tar')\n\nclass AverageMeter(object):\n    \"\"\"Computes and stores the average and current value\"\"\"\n    def __init__(self):\n        self.reset()\n\n    def reset(self):\n        self.val = 0\n        self.avg = 0\n        self.sum = 0\n        self.count = 0\n\n    def update(self, val, n=1):\n        self.val = val\n        self.sum += val * n\n        self.count += n\n        self.avg = self.sum / self.count\n\ndef adjust_learning_rate(optimizer, epoch):\n    \"\"\"Sets the learning rate to the initial LR decayed by 10 every 3 epochs\"\"\"\n    lr = config.lr * (0.1 ** (epoch // 3))\n    for param_group in optimizer.param_groups:\n        param_group['lr'] = lr\n\n\ndef schedule(current_epoch, current_lrs, **logs):\n        lrs = [1e-3, 1e-4, 0.5e-4, 1e-5, 0.5e-5]\n        epochs = [0, 1, 6, 8, 12]\n        for lr, epoch in zip(lrs, epochs):\n            if current_epoch >= epoch:\n                current_lrs[5] = lr\n                if current_epoch >= 2:\n                    current_lrs[4] = lr * 1\n                    current_lrs[3] = lr * 1\n                    current_lrs[2] = lr * 1\n                    current_lrs[1] = lr * 1\n                    current_lrs[0] = lr * 0.1\n        return current_lrs\n\ndef accuracy(output, target, topk=(1,)):\n    \"\"\"Computes the accuracy over the k top predictions for the specified values of k\"\"\"\n    with torch.no_grad():\n        maxk = max(topk)\n        batch_size = target.size(0)\n\n        _, pred = 
output.topk(maxk, 1, True, True)\n        pred = pred.t()\n        correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n        res = []\n        for k in topk:\n            correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)\n            res.append(correct_k.mul_(100.0 / batch_size))\n        return res\n\nclass Logger(object):\n    def __init__(self):\n        self.terminal = sys.stdout  #stdout\n        self.file = None\n\n    def open(self, file, mode=None):\n        if mode is None: mode ='w'\n        self.file = open(file, mode)\n\n    def write(self, message, is_terminal=1, is_file=1 ):\n        if '\\r' in message: is_file=0\n\n        if is_terminal == 1:\n            self.terminal.write(message)\n            self.terminal.flush()\n            #time.sleep(1)\n\n        if is_file == 1:\n            self.file.write(message)\n            self.file.flush()\n\n    def flush(self):\n        # this flush method is needed for python 3 compatibility.\n        # this handles the flush command by doing nothing.\n        # you might want to specify some extra behavior here.\n        pass\n\ndef get_learning_rate(optimizer):\n    lr=[]\n    for param_group in optimizer.param_groups:\n       lr +=[ param_group['lr'] ]\n\n    #assert(len(lr)==1) #we support only one param_group\n    lr = lr[0]\n\n    return lr\n\n\ndef time_to_str(t, mode='min'):\n    if mode=='min':\n        t  = int(t)//60\n        hr = t//60\n        min = t%60\n        return '%2d hr %02d min'%(hr,min)\n\n    elif mode=='sec':\n        t   = int(t)\n        min = t//60\n        sec = t%60\n        return '%2d min %02d sec'%(min,sec)\n\n\n    else:\n        raise NotImplementedError\n\n\nclass FocalLoss(nn.Module):\n\n    def __init__(self, focusing_param=2, balance_param=0.25):\n        super(FocalLoss, self).__init__()\n\n        self.focusing_param = focusing_param\n        self.balance_param = balance_param\n\n    def forward(self, output, target):\n        # -logpt is the cross entropy; pt is the probability assigned to the target class\n        logpt = - F.cross_entropy(output, target)\n        pt    = torch.exp(logpt)\n\n        focal_loss = -((1 - pt) ** self.focusing_param) * logpt\n\n        balanced_focal_loss = self.balance_param * focal_loss\n\n        return balanced_focal_loss\n\nclass MyEncoder(json.JSONEncoder):\n    def default(self, obj):\n        if isinstance(obj, np.integer):\n            return int(obj)\n        elif isinstance(obj, np.floating):\n            return float(obj)\n        elif isinstance(obj, np.ndarray):\n            return obj.tolist()\n        else:\n            return super(MyEncoder, self).default(obj)\n"
  }
]