Full Code of Weifeng-Chen/DL_tools
Repository: Weifeng-Chen/DL_tools
Branch: main
Commit: df9634b6ccce
Files: 22
Total size: 229.0 KB

Directory structure:
gitextract_mbi_vrdw/

├── .gitignore
├── LICENSE
├── README.md
├── detection/
│   ├── coco2yolo.py
│   ├── coco_eval.py
│   ├── vis_yolo_gt_dt.py
│   └── yolo2coco.py
└── text-image/
    ├── convert_diffusers_to_original_stable_diffusion.py
    ├── data_filter/
    │   ├── data_filter_demo.ipynb
    │   ├── wukong_filter.py
    │   └── wukong_reader.py
    ├── fid_clip_score/
    │   ├── .gitignore
    │   ├── coco_sample_generator.py
    │   ├── compute_fid.ipynb
    │   ├── fid_clip_coco.ipynb
    │   ├── fid_clip_coco_cn.ipynb
    │   ├── run_generator.sh
    │   └── run_generator_cn.sh
    ├── imagenet_CN_zeroshot_data.py
    ├── iterable_tar_unzip.sh
    ├── save_hg_ckpt.ipynb
    └── zeroshot_retrieval_evaluation.ipynb

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2021 Weifeng Chen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
<h2 align="center">Some Scripts For DEEP LEARNING</h2>

# 1. detection
## yolo2coco.py
Converts a YOLO-format dataset to COCO format. `$ROOT_PATH` is the root directory, and the data must be organized as follows:

```bash
└── $ROOT_PATH

  ├── classes.txt

  ├── images

  └── labels
```

- `classes.txt` declares the classes, one class per line.

- `images` contains all the images (currently `png` and `jpg` are supported).

- `labels` contains all the labels (`txt` files with the **same name** as the corresponding image).

Once the directories are set up, run `python yolo2coco.py --root_dir $ROOT_PATH`; the generated `annotations` folder will appear under the root.

**Arguments**
- `--root_dir` path to the root directory `$ROOT_PATH`.
- `--save_path` if the dataset is not split, use this to name the output file; defaults to `train.json`.
- `--random_split` random split; if given, `annotations` will contain `train.json`, `val.json`, and `test.json` (default split ratio 8:1:1).
- `--split_by_file` custom dataset split; if given, `annotations` will contain `train.json`, `val.json`, and `test.json`, defined by `./train.txt ./val.txt ./test.txt` under `$ROOT_PATH`. **Note**: these files should list image file names, not absolute paths. (You can also change how they are read around line 43 of the script; for simplicity, keeping all images in one place is recommended.)
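
The core of the conversion is denormalizing each YOLO box back into a COCO pixel-space `[x1, y1, w, h]` box. A minimal sketch (the function name and the sample numbers are illustrative, not taken from the script):

```python
def yolo_to_coco_bbox(cx, cy, w, h, img_w, img_h):
    """YOLO (normalized center-x, center-y, width, height) -> COCO [x1, y1, w, h] in pixels."""
    x1 = (cx - w / 2) * img_w  # left edge in pixels
    y1 = (cy - h / 2) * img_h  # top edge in pixels
    return [x1, y1, w * img_w, h * img_h]

print(yolo_to_coco_bbox(0.5, 0.5, 0.2, 0.4, 640, 480))  # [256.0, 144.0, 128.0, 192.0]
```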


## coco2yolo.py

Reads COCO-format json annotations and writes labels that YOLO can train on.

**Note that in the official COCO2017 dataset the category ids are not contiguous**, which breaks YOLO's label loading, so the script remaps them: ids are mapped, in ascending order, onto 0~79. (Custom datasets are remapped the same way.)

Run: `python coco2yolo.py --json_path $JSON_FILE_PATH --save_path $LABEL_SAVE_PATH`

- `$JSON_FILE_PATH` is the path to the json file.
- `$LABEL_SAVE_PATH` is the output directory (defaults to `./labels` under the working directory).
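
The conversion mirrors the script's `convert()` helper: normalize the COCO `[x1, y1, w, h]` box into YOLO's center-based format. A minimal sketch (sample numbers are illustrative):

```python
def coco_to_yolo_bbox(bbox, img_w, img_h):
    """COCO [x1, y1, w, h] in pixels -> YOLO normalized (cx, cy, w, h)."""
    x1, y1, w, h = bbox
    # shift to the box center, then normalize by the image size
    return ((x1 + w / 2) / img_w, (y1 + h / 2) / img_h, w / img_w, h / img_h)

print(coco_to_yolo_bbox([256, 144, 128, 192], 640, 480))  # (0.5, 0.5, 0.2, 0.4)
```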


## vis_yolo_gt_dt.py
Visualizes the GT and the predictions together on the same image. `$DT_DIR` holds the prediction labels, which must have the same names as the GT labels. `$ROOT_PATH` layout:

```bash
└── $ROOT_PATH

  ├── classes.txt

  ├── images

  └── labels
```

Run `python vis_yolo_gt_dt.py --root $ROOT_PATH --dt $DT_DIR`; the results are written to the `outputs` folder.

- `classes.txt` and `images` are required.
- `labels` is optional; without it, only the `$DT_DIR` predictions are shown.
- If `$DT_DIR` is not given, only the ground-truth labels are shown.
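
Drawing boils down to converting each normalized YOLO box into the two pixel corners that `cv2.rectangle` expects. A minimal pure-Python sketch (function name is illustrative):

```python
def yolo_to_corners(cx, cy, w, h, img_w, img_h):
    """Normalized YOLO box -> integer pixel corners (top-left, bottom-right)."""
    c1 = (int((cx - w / 2) * img_w), int((cy - h / 2) * img_h))  # top-left
    c2 = (int((cx + w / 2) * img_w), int((cy + h / 2) * img_h))  # bottom-right
    return c1, c2

print(yolo_to_corners(0.5, 0.5, 0.2, 0.4, 640, 480))  # ((256, 144), (384, 336))
```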

## coco_eval.py

Evaluates detection results, targeting the output of **yolov5** (the `--save-json` flag during test generates `best_predictions.json`), which is not directly compatible with cocoapi; this script adapts it. Run:

`python coco_eval.py --gt $GT_PATH --dt $DT_PATH --yolov5`

- `--gt` ground-truth annotations for the test set, in json; if you don't have one, convert with `yolo2coco.py` above.
- `--dt` the detections produced by the network, loaded via cocoapi's `loadRes`, so they must be in the matching result format.
- `--yolov5` converts the official yolov5 output into a cocoapi-compatible result.
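
The `--yolov5` fix is essentially an `image_id` remap: yolov5's `--save-json` writes the filename stem as `image_id`, while cocoapi expects the numeric id from the GT json. A minimal sketch (the sample records and mapping are illustrative):

```python
def fix_yolov5_image_ids(dts, filename2id):
    """Replace yolov5's filename-stem image_id with the numeric COCO image id."""
    for dt in dts:
        dt['image_id'] = filename2id[str(dt['image_id']) + '.jpg']
    return dts

dts = [{'image_id': '000001', 'category_id': 0, 'bbox': [10, 20, 30, 40], 'score': 0.9}]
print(fix_yolov5_image_ids(dts, {'000001.jpg': 42})[0]['image_id'])  # 42
```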

# 2. text-image
## zeroshot_retrieval_evaluation.ipynb
Evaluation metric for retrieval models (top-K recall); supports many-to-many cases (e.g. one caption matching multiple images).
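
Top-K recall here counts a query as a hit if any of its relevant items appears in the top K retrieved results. A minimal one-to-many sketch (the ids are illustrative):

```python
def recall_at_k(ranked, relevant, k):
    """ranked: per-query retrieved ids, best first; relevant: per-query correct ids."""
    hits = sum(1 for r, rel in zip(ranked, relevant) if set(r[:k]) & set(rel))
    return hits / len(ranked)

# query 1 retrieves a relevant item in its top 2; query 2 does not
print(recall_at_k([[3, 1, 2], [0, 4, 5]], [[1], [9]], k=2))  # 0.5
```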
## fid_clip_score
Plots the FID-CLIP Score curve for text2image models.

================================================
FILE: detection/coco2yolo.py
================================================
"""
2021/1/24
COCO 格式的数据集转化为 YOLO 格式的数据集,源代码采取遍历方式,太慢,
这里改进了一下时间复杂度,从O(nm)改为O(n+m),但是牺牲了一些内存占用
--json_path 输入的json文件路径
--save_path 保存的文件夹名字,默认为当前目录下的labels。
"""

import os 
import json
from tqdm import tqdm
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--json_path', default='./instances_val2017.json',type=str, help="input: coco format(json)")
parser.add_argument('--save_path', default='./labels', type=str, help="specify where to save the output dir of labels")
arg = parser.parse_args()

def convert(size, box):
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = box[0] + box[2] / 2.0
    y = box[1] + box[3] / 2.0
    w = box[2]
    h = box[3]

    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return (x, y, w, h)

if __name__ == '__main__':
    json_file = arg.json_path  # COCO Object Instance style annotations
    ana_txt_save_path = arg.save_path  # output directory

    data = json.load(open(json_file, 'r'))
    if not os.path.exists(ana_txt_save_path):
        os.makedirs(ana_txt_save_path)
    
    id_map = {} # coco category ids are not contiguous! remap them before writing.
    for i, category in enumerate(data['categories']): 
        id_map[category['id']] = i

    # build a lookup table up front to reduce the time complexity
    max_id = 0
    for img in data['images']:
        max_id = max(max_id, img['id'])
    # note: don't write [[]]*(max_id+1); the inner empty lists would all share one object
    img_ann_dict = [[] for i in range(max_id+1)] 
    for i, ann in enumerate(data['annotations']):
        img_ann_dict[ann['image_id']].append(i)

    for img in tqdm(data['images']):
        filename = img["file_name"]
        img_width = img["width"]
        img_height = img["height"]
        img_id = img["id"]
        head, tail = os.path.splitext(filename)
        ana_txt_name = head + ".txt"  # matching txt name, same stem as the image
        f_txt = open(os.path.join(ana_txt_save_path, ana_txt_name), 'w')
        # look up the prebuilt index instead of re-scanning all annotations
        for ann_id in img_ann_dict[img_id]:
            ann = data['annotations'][ann_id]
            box = convert((img_width, img_height), ann["bbox"])
            f_txt.write("%s %s %s %s %s\n" % (id_map[ann["category_id"]], box[0], box[1], box[2], box[3]))
        f_txt.close()
        
# Old version, very slow:
# """
# Converts a COCO-format dataset to a YOLO-format dataset
# --json_path path to the input json file
# --save_path output directory name; defaults to ./labels under the current directory.
# """

# import os 
# import json
# from tqdm import tqdm
# import argparse

# parser = argparse.ArgumentParser()
# parser.add_argument('--json_path', default='./instances_val2017.json',type=str, help="input: coco format(json)")
# parser.add_argument('--save_path', default='./labels', type=str, help="specify where to save the output dir of labels")
# arg = parser.parse_args()

# def convert(size, box):
#     dw = 1. / (size[0])
#     dh = 1. / (size[1])
#     x = box[0] + box[2] / 2.0
#     y = box[1] + box[3] / 2.0
#     w = box[2]
#     h = box[3]

#     x = x * dw
#     w = w * dw
#     y = y * dh
#     h = h * dh
#     return (x, y, w, h)

# if __name__ == '__main__':
#     json_file = arg.json_path # COCO Object Instance style annotations
#     ana_txt_save_path = arg.save_path  # output directory

#     data = json.load(open(json_file, 'r'))
#     if not os.path.exists(ana_txt_save_path):
#         os.makedirs(ana_txt_save_path)
    
#     id_map = {} # coco category ids are not contiguous! remap before writing.
#     with open(os.path.join(ana_txt_save_path, 'classes.txt'), 'w') as f:
#         # write classes.txt
#         for i, category in enumerate(data['categories']): 
#             f.write(f"{category['name']}\n") 
#             id_map[category['id']] = i
#     # print(id_map)

#     for img in tqdm(data['images']):
#         filename = img["file_name"]
#         img_width = img["width"]
#         img_height = img["height"]
#         img_id = img["id"]
#         head, tail = os.path.splitext(filename)
#         ana_txt_name = head + ".txt"  # matching txt name, same stem as the image
#         f_txt = open(os.path.join(ana_txt_save_path, ana_txt_name), 'w')
#         for ann in data['annotations']:
#             if ann['image_id'] == img_id:
#                 box = convert((img_width, img_height), ann["bbox"])
#                 f_txt.write("%s %s %s %s %s\n" % (id_map[ann["category_id"]], box[0], box[1], box[2], box[3]))
#         f_txt.close()


================================================
FILE: detection/coco_eval.py
================================================
import json
import argparse
from pycocotools.coco import COCO 
from pycocotools.cocoeval import COCOeval 
import os
import time

def transform_yolov5_result(result, filename2id):
    f = open(result ,'r',encoding='utf-8')
    dts = json.load(f)
    output_dts = []
    for dt in dts:
        dt['image_id'] = filename2id[dt['image_id']+'.jpg']
        # NOTE: category_id is left unchanged; remap it here if the COCO and YOLO category ids differ.
        output_dts.append(dt)
    with open('temp.json', 'w') as f:
        json.dump(output_dts, f)

def coco_evaluate(gt_path, dt_path, yolov5_flag):
    cocoGt = COCO(gt_path)
    imgIds = cocoGt.getImgIds()
    gts = cocoGt.loadImgs(imgIds)
    filename2id = {}

    for gt in gts:
        filename2id[gt['file_name']] = gt['id']
    print("NUM OF TEST IMAGES: ",len(filename2id))

    if yolov5_flag:
        transform_yolov5_result(dt_path, filename2id)
        cocoDt = cocoGt.loadRes('temp.json')
    else:
        cocoDt = cocoGt.loadRes(dt_path)
    cocoEval = COCOeval(cocoGt, cocoDt, "bbox")
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()
    if yolov5_flag:
        os.remove('temp.json')

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--gt", type=str, help="Assign the groud true path.", default=None)
    parser.add_argument("--dt", type=str, help="Assign the detection result path.", default=None)
    parser.add_argument("--yolov5",action='store_true',help="fix yolov5 output bug", default=None)

    args = parser.parse_args()
    gt_path = args.gt
    dt_path = args.dt
    coco_evaluate(gt_path, dt_path, bool(args.yolov5))
    

================================================
FILE: detection/vis_yolo_gt_dt.py
================================================
import cv2
import os
from glob import glob
import random
import matplotlib.pyplot as plt 
import argparse
from tqdm import tqdm
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument('--root',type=str ,default='', help="which should include ./images and ./labels and classes.txt")
parser.add_argument('--dt',type=str ,default='' , help="yolo format results of detection, include ./labels")
parser.add_argument('--conf', type=float, default=0.5, help="visualization confidence threshold")
arg = parser.parse_args()

colorlist = []
# 5^3 colors.
for i in range(25,256,50):
    for j in range(25,256,50):
        for k in range(25,256,50):
            colorlist.append((i,j,k))
random.shuffle(colorlist)

def plot_bbox(img_path, img_dir, out_dir, gt=None ,dt=None, cls2label=None, line_thickness=None):
    img = cv2.imread(os.path.join(img_dir, img_path))
    height, width,_ = img.shape
    tl = line_thickness or round(0.002 * (width + height) / 2) + 1  # line/font thickness
    font = cv2.FONT_HERSHEY_SIMPLEX
    if gt:
        tf = max(tl - 1, 1)  # font thickness
        with open(gt,'r') as f:
            annotations = f.readlines()
            # print(annotations)    
            for ann in annotations:
                ann = list(map(float,ann.split()))
                ann[0] = int(ann[0])
                # print(ann)
                cls,x,y,w,h = ann
                color = colorlist[cls]
                c1, c2 = (int((x-w/2)*width),int((y-h/2)*height)), (int((x+w/2)*width), int((y+h/2)*height))
                cv2.rectangle(img, c1, c2, color, thickness=tl*2, lineType=cv2.LINE_AA)
                # draw the class name
                cv2.putText(img, str(cls2label[cls]), (c1[0], c1[1] - 2), 0, tl / 4, color, thickness=tf, lineType=cv2.LINE_AA)
    if dt:
        with open(dt,'r') as f:
            annotations = f.readlines()
            # print(annotations)    
            for ann in annotations:
                ann = list(map(float,ann.split()))
                ann[0] = int(ann[0])
                # print(ann)
                if len(ann) == 6:
                    cls,x,y,w,h,conf = ann
                    if conf < arg.conf:
                        # thres = 0.5
                        continue
                elif len(ann) == 5:
                    cls,x,y,w,h = ann
                color = colorlist[len(colorlist) - cls - 1]

                c1, c2 = (int((x-w/2)*width), int((y-h/2)*height)), (int((x+w/2)*width), int((y+h/2)*height))
                cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)

                # # cls label
                tf = max(tl - 1, 1)  # font thickness
                t_size = cv2.getTextSize(cls2label[cls], 0, fontScale=tl / 3, thickness=tf)[0]
                c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
                # cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
                if len(ann) == 6:
                    cv2.putText(img, str(round(conf,2)), (c1[0], c1[1] - 2), 0, tl / 4, color, thickness=tf, lineType=cv2.LINE_AA)
    cv2.imwrite(os.path.join(out_dir,img_path),img)
    
if __name__ == "__main__":
    root_path = arg.root
    pred_path = arg.dt
    img_dir = os.path.join(root_path,'images')
    GT_dir = os.path.join(root_path,'labels')
    DT_dir = os.path.join(pred_path)
    out_dir = os.path.join(root_path,'outputs')
    cls_dir = os.path.join(root_path,'classes.txt')
    cls_dict = {}

    if not os.path.exists(img_dir):
        raise Exception("image dir {} does not exist!".format(img_dir))
    if not os.path.exists(cls_dir):
        raise Exception("classes file {} does not exist!".format(cls_dir))
    else:
        with open(cls_dir,'r') as f:
            classes = f.readlines()
            for i in range(len(classes)):
                cls_dict[i] = classes[i].strip()
            print("class map:", cls_dict)
    if not os.path.exists(out_dir):
        os.mkdir(out_dir)
    if not os.path.exists(GT_dir):
        print(f"WARNING: {GT_dir} GT not available!")
    if not os.path.exists(DT_dir):
        print(f"WARNING: {DT_dir} DT not available!")
    for each_img in tqdm(os.listdir(img_dir)):
        gt = None
        dt = None
        label_name = os.path.splitext(each_img)[0] + '.txt'
        if os.path.exists(os.path.join(GT_dir, label_name)):
            gt = os.path.join(GT_dir, label_name)
        if os.path.exists(os.path.join(DT_dir, label_name)):
            dt = os.path.join(DT_dir, label_name)
        
        plot_bbox(each_img, img_dir, out_dir, gt, dt, cls2label=cls_dict)
        

================================================
FILE: detection/yolo2coco.py
================================================
"""
YOLO 格式的数据集转化为 COCO 格式的数据集
--root_dir 输入根路径
--save_path 保存文件的名字(没有random_split时使用)
--random_split 有则会随机划分数据集,然后再分别保存为3个文件。
--split_by_file 按照 ./train.txt ./val.txt ./test.txt 来对数据集进行划分。
"""

import os
import cv2
import json
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--root_dir', default='./data',type=str, help="root path of images and labels, include ./images and ./labels and classes.txt")
parser.add_argument('--save_path', type=str,default='./train.json', help="if not split the dataset, give a path to a json file")
parser.add_argument('--random_split', action='store_true', help="random split the dataset, default ratio is 8:1:1")
parser.add_argument('--split_by_file', action='store_true', help="define how to split the dataset, include ./train.txt ./val.txt ./test.txt ")

arg = parser.parse_args()

def train_test_val_split_random(img_paths, ratio_train=0.8, ratio_test=0.1, ratio_val=0.1):
    # adjust the split ratio here if needed.
    assert abs(ratio_train + ratio_test + ratio_val - 1.0) < 1e-6
    train_img, middle_img = train_test_split(img_paths, test_size=1-ratio_train, random_state=233)
    ratio = ratio_val / (1 - ratio_train)
    val_img, test_img = train_test_split(middle_img, test_size=ratio, random_state=233)
    print("NUMS of train:val:test = {}:{}:{}".format(len(train_img), len(val_img), len(test_img)))
    return train_img, val_img, test_img

def train_test_val_split_by_files(img_paths, root_dir):
    # define the train/val/test sets from train.txt, val.txt, test.txt (each lists the image names of that split)
    phases = ['train', 'val', 'test']
    img_split = []
    for p in phases:
        define_path = os.path.join(root_dir, f'{p}.txt')
        print(f'Read {p} dataset definition from {define_path}')
        assert os.path.exists(define_path)
        with open(define_path, 'r') as f:
            img_paths = [line.strip() for line in f.readlines()]  # strip newlines so membership checks match
            # img_paths = [os.path.split(img_path)[1] for img_path in img_paths]  # NOTE: uncomment to read absolute paths.
            img_split.append(img_paths)
    return img_split[0], img_split[1], img_split[2]


def yolo2coco(arg):
    root_path = arg.root_dir
    print("Loading data from ",root_path)

    assert os.path.exists(root_path)
    originLabelsDir = os.path.join(root_path, 'labels')                                        
    originImagesDir = os.path.join(root_path, 'images')
    with open(os.path.join(root_path, 'classes.txt')) as f:
        classes = f.read().strip().split()
    # images dir name
    indexes = os.listdir(originImagesDir)

    if arg.random_split or arg.split_by_file:
        # holds the image info and annotation info for every split
        train_dataset = {'categories': [], 'annotations': [], 'images': []}
        val_dataset = {'categories': [], 'annotations': [], 'images': []}
        test_dataset = {'categories': [], 'annotations': [], 'images': []}

        # map class names to numeric ids; class ids start from 0.
        for i, cls in enumerate(classes, 0):
            train_dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})
            val_dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})
            test_dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})
            
        if arg.random_split:
            print("splitting mode: random split")
            train_img, val_img, test_img = train_test_val_split_random(indexes, 0.8, 0.1, 0.1)
        elif arg.split_by_file:
            print("splitting mode: split by files")
            train_img, val_img, test_img = train_test_val_split_by_files(indexes, root_path)
    else:
        dataset = {'categories': [], 'annotations': [], 'images': []}
        for i, cls in enumerate(classes, 0):
            dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})
    
    # running annotation id
    ann_id_cnt = 0
    for k, index in enumerate(tqdm(indexes)):
        # png and jpg images are supported.
        txtFile = os.path.splitext(index)[0] + '.txt'
        # read the image width and height
        im = cv2.imread(os.path.join(root_path, 'images/') + index)
        height, width, _ = im.shape
        if arg.random_split or arg.split_by_file:
            # point `dataset` at the split this image belongs to
            if index in train_img:
                dataset = train_dataset
            elif index in val_img:
                dataset = val_dataset
            elif index in test_img:
                dataset = test_dataset
        # record the image info
        dataset['images'].append({'file_name': index,
                                  'id': k,
                                  'width': width,
                                  'height': height})
        if not os.path.exists(os.path.join(originLabelsDir, txtFile)):
            # no label file: keep the image info but skip annotations.
            continue
        with open(os.path.join(originLabelsDir, txtFile), 'r') as fr:
            labelList = fr.readlines()
            for label in labelList:
                label = label.strip().split()
                x = float(label[1])
                y = float(label[2])
                w = float(label[3])
                h = float(label[4])

                # convert x,y,w,h to x1,y1,x2,y2
                H, W, _ = im.shape
                x1 = (x - w / 2) * W
                y1 = (y - h / 2) * H
                x2 = (x + w / 2) * W
                y2 = (y + h / 2) * H
                # class ids start from 0; coco2017's own numbering is messy, so it is ignored here.
                cls_id = int(label[0])
                width = max(0, x2 - x1)
                height = max(0, y2 - y1)
                dataset['annotations'].append({
                    'area': width * height,
                    'bbox': [x1, y1, width, height],
                    'category_id': cls_id,
                    'id': ann_id_cnt,
                    'image_id': k,
                    'iscrowd': 0,
                    # segmentation mask: the rectangle's four vertices, clockwise from the top-left
                    'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]]
                })
                ann_id_cnt += 1

    # save the results
    folder = os.path.join(root_path, 'annotations')
    if not os.path.exists(folder):
        os.makedirs(folder)
    if arg.random_split or arg.split_by_file:
        for phase in ['train','val','test']:
            json_name = os.path.join(root_path, 'annotations/{}.json'.format(phase))
            with open(json_name, 'w') as f:
                if phase == 'train':
                    json.dump(train_dataset, f)
                elif phase == 'val':
                    json.dump(val_dataset, f)
                elif phase == 'test':
                    json.dump(test_dataset, f)
            print('Save annotation to {}'.format(json_name))
    else:
        json_name = os.path.join(root_path, 'annotations/{}'.format(arg.save_path))
        with open(json_name, 'w') as f:
            json.dump(dataset, f)
            print('Save annotation to {}'.format(json_name))

if __name__ == "__main__":

    yolo2coco(arg)

================================================
FILE: text-image/convert_diffusers_to_original_stable_diffusion.py
================================================
# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint.
# *Only* converts the UNet, VAE, and Text Encoder.
# Does not convert optimizer state or any other thing.

import argparse
import os.path as osp

import torch


# =================#
# UNet Conversion #
# =================#

unet_conversion_map = [
    # (stable-diffusion, HF Diffusers)
    ("time_embed.0.weight", "time_embedding.linear_1.weight"),
    ("time_embed.0.bias", "time_embedding.linear_1.bias"),
    ("time_embed.2.weight", "time_embedding.linear_2.weight"),
    ("time_embed.2.bias", "time_embedding.linear_2.bias"),
    ("input_blocks.0.0.weight", "conv_in.weight"),
    ("input_blocks.0.0.bias", "conv_in.bias"),
    ("out.0.weight", "conv_norm_out.weight"),
    ("out.0.bias", "conv_norm_out.bias"),
    ("out.2.weight", "conv_out.weight"),
    ("out.2.bias", "conv_out.bias"),
]

unet_conversion_map_resnet = [
    # (stable-diffusion, HF Diffusers)
    ("in_layers.0", "norm1"),
    ("in_layers.2", "conv1"),
    ("out_layers.0", "norm2"),
    ("out_layers.3", "conv2"),
    ("emb_layers.1", "time_emb_proj"),
    ("skip_connection", "conv_shortcut"),
]

unet_conversion_map_layer = []
# hardcoded number of downblocks and resnets/attentions...
# would need smarter logic for other networks.
for i in range(4):
    # loop over downblocks/upblocks

    for j in range(2):
        # loop over resnets/attentions for downblocks
        hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
        sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
        unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))

        if i < 3:
            # no attention layers in down_blocks.3
            hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
            sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
            unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))

    for j in range(3):
        # loop over resnets/attentions for upblocks
        hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
        sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
        unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))

        if i > 0:
            # no attention layers in up_blocks.0
            hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
            sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
            unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))

    if i < 3:
        # no downsample in down_blocks.3
        hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
        sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
        unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))

        # no upsample in up_blocks.3
        hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
        sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}."
        unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))

hf_mid_atn_prefix = "mid_block.attentions.0."
sd_mid_atn_prefix = "middle_block.1."
unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))

for j in range(2):
    hf_mid_res_prefix = f"mid_block.resnets.{j}."
    sd_mid_res_prefix = f"middle_block.{2*j}."
    unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))


def convert_unet_state_dict(unet_state_dict):
    # buyer beware: this is a *brittle* function,
    # and correct output requires that all of these pieces interact in
    # the exact order in which I have arranged them.
    mapping = {k: k for k in unet_state_dict.keys()}
    for sd_name, hf_name in unet_conversion_map:
        mapping[hf_name] = sd_name
    for k, v in mapping.items():
        if "resnets" in k:
            for sd_part, hf_part in unet_conversion_map_resnet:
                v = v.replace(hf_part, sd_part)
            mapping[k] = v
    for k, v in mapping.items():
        for sd_part, hf_part in unet_conversion_map_layer:
            v = v.replace(hf_part, sd_part)
        mapping[k] = v
    new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()}
    return new_state_dict


# ================#
# VAE Conversion #
# ================#

vae_conversion_map = [
    # (stable-diffusion, HF Diffusers)
    ("nin_shortcut", "conv_shortcut"),
    ("norm_out", "conv_norm_out"),
    ("mid.attn_1.", "mid_block.attentions.0."),
]

for i in range(4):
    # down_blocks have two resnets
    for j in range(2):
        hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}."
        sd_down_prefix = f"encoder.down.{i}.block.{j}."
        vae_conversion_map.append((sd_down_prefix, hf_down_prefix))

    if i < 3:
        hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0."
        sd_downsample_prefix = f"down.{i}.downsample."
        vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix))

        hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
        sd_upsample_prefix = f"up.{3-i}.upsample."
        vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix))

    # up_blocks have three resnets
    # also, up blocks in hf are numbered in reverse from sd
    for j in range(3):
        hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}."
        sd_up_prefix = f"decoder.up.{3-i}.block.{j}."
        vae_conversion_map.append((sd_up_prefix, hf_up_prefix))

# this part accounts for mid blocks in both the encoder and the decoder
for i in range(2):
    hf_mid_res_prefix = f"mid_block.resnets.{i}."
    sd_mid_res_prefix = f"mid.block_{i+1}."
    vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix))


vae_conversion_map_attn = [
    # (stable-diffusion, HF Diffusers)
    ("norm.", "group_norm."),
    ("q.", "query."),
    ("k.", "key."),
    ("v.", "value."),
    ("proj_out.", "proj_attn."),
]


def reshape_weight_for_sd(w):
    # convert HF linear weights to SD conv2d weights
    return w.reshape(*w.shape, 1, 1)


def convert_vae_state_dict(vae_state_dict):
    mapping = {k: k for k in vae_state_dict.keys()}
    for k, v in mapping.items():
        for sd_part, hf_part in vae_conversion_map:
            v = v.replace(hf_part, sd_part)
        mapping[k] = v
    for k, v in mapping.items():
        if "attentions" in k:
            for sd_part, hf_part in vae_conversion_map_attn:
                v = v.replace(hf_part, sd_part)
            mapping[k] = v
    new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()}
    weights_to_convert = ["q", "k", "v", "proj_out"]
    for k, v in new_state_dict.items():
        for weight_name in weights_to_convert:
            if f"mid.attn_1.{weight_name}.weight" in k:
                print(f"Reshaping {k} for SD format")
                new_state_dict[k] = reshape_weight_for_sd(v)
    return new_state_dict


# =========================#
# Text Encoder Conversion #
# =========================#
# pretty much a no-op


def convert_text_enc_state_dict(text_enc_dict):
    return text_enc_dict


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument("--model_path", default=None, type=str, required=True, help="Path to the model to convert.")
    parser.add_argument("--checkpoint_path", default=None, type=str, required=True, help="Path to the output model.")
    parser.add_argument("--half", action="store_true", help="Save weights in half precision.")

    args = parser.parse_args()

    assert args.model_path is not None, "Must provide a model path!"

    assert args.checkpoint_path is not None, "Must provide a checkpoint path!"

    unet_path = osp.join(args.model_path, "unet", "diffusion_pytorch_model.bin")
    vae_path = osp.join(args.model_path, "vae", "diffusion_pytorch_model.bin")
    text_enc_path = osp.join(args.model_path, "text_encoder", "pytorch_model.bin")

    # Convert the UNet model
    unet_state_dict = torch.load(unet_path, map_location="cpu")
    unet_state_dict = convert_unet_state_dict(unet_state_dict)
    unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()}

    # Convert the VAE model
    vae_state_dict = torch.load(vae_path, map_location="cpu")
    vae_state_dict = convert_vae_state_dict(vae_state_dict)
    vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()}

    # Convert the text encoder model
    text_enc_dict = torch.load(text_enc_path, map_location="cpu")
    text_enc_dict = convert_text_enc_state_dict(text_enc_dict)
    text_enc_dict = {"cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()}

    # Put together new checkpoint
    state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict}
    if args.half:
        state_dict = {k: v.half() for k, v in state_dict.items()}
    state_dict = {"state_dict": state_dict}
    torch.save(state_dict, args.checkpoint_path)
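The conversion above is driven purely by substring replacement over state-dict keys. A minimal, self-contained sketch of that renaming strategy (a toy one-entry map and a hypothetical key, not the full tables above):

```python
# Toy (sd_prefix, hf_prefix) map, oriented the same way as vae_conversion_map
toy_map = [("mid.block_1.", "mid_block.resnets.0.")]

hf_key = "decoder.mid_block.resnets.0.conv1.weight"  # hypothetical HF Diffusers key
sd_key = hf_key
for sd_part, hf_part in toy_map:
    # replace the HF fragment with its SD counterpart wherever it occurs
    sd_key = sd_key.replace(hf_part, sd_part)

print(sd_key)  # decoder.mid.block_1.conv1.weight
```

Because the replacement is positional within the key string, the maps only need prefixes that are unambiguous across the whole state dict.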

================================================
FILE: text-image/data_filter/data_filter_demo.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Integrates watermark, aesthetics, and CLIP models to score image-text quality"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pytorch_lightning as pl\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch\n",
    "import timm\n",
    "from torchvision import transforms as T\n",
    "import open_clip\n",
    "import torch\n",
    "from transformers import BertModel, BertTokenizer\n",
    "from PIL import Image\n",
    "\n",
    "class AestheticsMLP(pl.LightningModule):\n",
    "    # The aesthetics predictor is an MLP head on top of CLIP image features\n",
    "    def __init__(self, input_size, xcol='emb', ycol='avg_rating'):\n",
    "        super().__init__()\n",
    "        self.input_size = input_size\n",
    "        self.xcol = xcol\n",
    "        self.ycol = ycol\n",
    "        self.layers = nn.Sequential(\n",
    "            nn.Linear(self.input_size, 1024),\n",
    "            #nn.ReLU(),\n",
    "            nn.Dropout(0.2),\n",
    "            nn.Linear(1024, 128),\n",
    "            #nn.ReLU(),\n",
    "            nn.Dropout(0.2),\n",
    "            nn.Linear(128, 64),\n",
    "            #nn.ReLU(),\n",
    "            nn.Dropout(0.1),\n",
    "\n",
    "            nn.Linear(64, 16),\n",
    "            #nn.ReLU(),\n",
    "\n",
    "            nn.Linear(16, 1)\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.layers(x)\n",
    "\n",
    "    def training_step(self, batch, batch_idx):\n",
    "            x = batch[self.xcol]\n",
    "            y = batch[self.ycol].reshape(-1, 1)\n",
    "            x_hat = self.layers(x)\n",
    "            loss = F.mse_loss(x_hat, y)\n",
    "            return loss\n",
    "    \n",
    "    def validation_step(self, batch, batch_idx):\n",
    "        x = batch[self.xcol]\n",
    "        y = batch[self.ycol].reshape(-1, 1)\n",
    "        x_hat = self.layers(x)\n",
    "        loss = F.mse_loss(x_hat, y)\n",
    "        return loss\n",
    "\n",
    "    def configure_optimizers(self):\n",
    "        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)\n",
    "        return optimizer\n",
    "\n",
    "\n",
    "class WaterMarkModel(nn.Module):\n",
    "    def __init__(self, model_path='./watermark_model_v1.pt'):\n",
    "        super(WaterMarkModel, self).__init__()\n",
    "        # model definition\n",
    "        self.model = timm.create_model(\n",
    "                'efficientnet_b3a', pretrained=True, num_classes=2)\n",
    "\n",
    "        self.model.classifier = nn.Sequential(\n",
    "            # 1536 is the original in_features\n",
    "            nn.Linear(in_features=1536, out_features=625),\n",
    "            nn.ReLU(),  # ReLu to be the activation function\n",
    "            nn.Dropout(p=0.3),\n",
    "            nn.Linear(in_features=625, out_features=256),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(in_features=256, out_features=2),\n",
    "        )\n",
    "        self.model.load_state_dict(torch.load(model_path))\n",
    "    def forward(self, x):\n",
    "        return self.model(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FilterSystem:\n",
    "    def __init__(\n",
    "                    self, \n",
    "                    clip_model_path=\"IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese\",\n",
    "                    aesthetics_model_path=\"./ava+logos-l14-linearMSE.pth\",\n",
    "                    watermark_model_path=\"./watermark_model_v1.pt\"\n",
    "                ):\n",
    "        self.clip_model_path = clip_model_path\n",
    "        self.aesthetics_model_path = aesthetics_model_path\n",
    "        self.watermark_model_path = watermark_model_path\n",
    "\n",
    "    def init_clip_model(self, ):\n",
    "        # Initialize the CLIP components: text encoder, tokenizer, image model, and processor\n",
    "        text_encoder = BertModel.from_pretrained(self.clip_model_path).eval().cuda()\n",
    "        text_tokenizer = BertTokenizer.from_pretrained(self.clip_model_path)\n",
    "        clip_model, _, processor = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')\n",
    "        clip_model = clip_model.eval().cuda()\n",
    "        self.text_encoder, self.text_tokenizer, self.clip_model, self.processor = text_encoder, text_tokenizer, clip_model, processor\n",
    "        print(\"clip model loaded\")\n",
    "        return None\n",
    "\n",
    "    def init_aesthetics_model(self, ):\n",
    "        # Initialize the aesthetics model\n",
    "        self.aesthetics_model = AestheticsMLP(768)\n",
    "        self.aesthetics_model.load_state_dict(torch.load(self.aesthetics_model_path))\n",
    "        self.aesthetics_model.eval().cuda()\n",
    "        print(\"aesthetics model loaded\")\n",
    "        return None\n",
    "\n",
    "    def init_watermark_model(self, ):\n",
    "        self.watermark_model = WaterMarkModel(self.watermark_model_path)\n",
    "        self.watermark_model.eval().cuda()\n",
    "        self.watermark_processor =  T.Compose([\n",
    "                                                T.Resize((256, 256)),\n",
    "                                                T.ToTensor(),\n",
    "                                                T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n",
    "                                            ])\n",
    "        print(\"watermark model loaded\")\n",
    "        return None\n",
    "\n",
    "    def get_image_feature(self, images):\n",
    "        # Return the image feature vectors\n",
    "        if isinstance(images, list):\n",
    "            images = torch.stack([self.processor(image) for image in images]).cuda()\n",
    "        elif isinstance(images, torch.Tensor):\n",
    "            images = images.cuda()\n",
    "\n",
    "        with torch.no_grad():\n",
    "            image_features = self.clip_model.encode_image(images)\n",
    "            image_features /= image_features.norm(dim=1, keepdim=True)\n",
    "        return image_features\n",
    "    \n",
    "    def get_text_feature(self, text):\n",
    "        # Return the text feature vectors\n",
    "        if isinstance(text, list) or isinstance(text, str):\n",
    "            text = self.text_tokenizer(text, return_tensors='pt', padding=True)['input_ids'].cuda()\n",
    "        elif isinstance(text, torch.Tensor):\n",
    "            text = text.cuda()\n",
    "\n",
    "        with torch.no_grad():\n",
    "            text_features = self.text_encoder(text)[1]\n",
    "            text_features /= text_features.norm(dim=1, keepdim=True)\n",
    "        return text_features\n",
    "\n",
    "    def calculate_clip_score(self, features1, features2):\n",
    "        # Similarity between two feature sets; inputs can be image+text, text+text, or image+image.\n",
    "        # Returns a similarity matrix of shape f1.shape[0] x f2.shape[0]\n",
    "        score_matrix =  features1 @ features2.t()\n",
    "        return score_matrix\n",
    "\n",
    "    def get_aesthetics_score(self, features):\n",
    "        # Return aesthetics scores; the input is a CLIP (ViT-L-14) feature: call get_image_feature first, then pass its output here\n",
    "        with torch.no_grad():\n",
    "            scores = self.aesthetics_model(features)\n",
    "            scores = scores[:, 0].detach().cpu().numpy()\n",
    "        return scores\n",
    "    \n",
    "    def get_watermark_score(self, images):\n",
    "        if isinstance(images, list):\n",
    "            images = torch.stack([self.watermark_processor(image) for image in images]).cuda()\n",
    "        elif isinstance(images, torch.Tensor):\n",
    "            images = images.cuda()\n",
    "        with torch.no_grad():\n",
    "            pred = self.watermark_model(images)\n",
    "            watermark_scores = F.softmax(pred, dim=1)[:,0].detach().cpu().numpy()\n",
    "\n",
    "        return watermark_scores"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Small-scale data test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "demo = FilterSystem()\n",
    "demo.init_clip_model()\n",
    "demo.init_aesthetics_model()\n",
    "demo.init_watermark_model()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "image_path = './demo_images/watermark_example.png'\n",
    "image_path2 = './demo_images/mengna.jpg'\n",
    "image_path3 = './demo_images/shuiyin.jpg'\n",
    "image_path4 = './demo_images/1.jpg'\n",
    "image_demo =  [Image.open(image_path).convert('RGB'), Image.open(image_path2).convert('RGB'), Image.open(image_path3).convert('RGB'), Image.open(image_path4).convert('RGB')]\n",
    "image_feature = demo.get_image_feature(image_demo,)  # Compute image features from a list of images; these can be cached in a database to serve text queries\n",
    "aes_score = demo.get_aesthetics_score(image_feature)  # Compute aesthetics scores from the image features\n",
    "print(aes_score)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "text_demo = ['一副很美的画','港口小船', '蒙娜丽莎'] # A single text (i.e. the query) also works here\n",
    "text_feature = demo.get_text_feature(text_demo) # Compute text features from a list of texts\n",
    "similarity = demo.calculate_clip_score(image_feature, text_feature)  # Compute the similarity matrix\n",
    "print(similarity)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "watermark_score = demo.get_watermark_score(image_demo)\n",
    "print(watermark_score)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Read, process, and save (single process)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# data setting\n",
    "root_path = \"./project/dataset/laion_chinese_cwf/image_part00\"\n",
    "all_folders = sorted(os.listdir(root_path))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# model setting\n",
    "filter_model = FilterSystem()\n",
    "filter_model.init_clip_model()\n",
    "filter_model.init_aesthetics_model()\n",
    "filter_model.init_watermark_model()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from model import FilterSystem\n",
    "from dataset import TxtDataset\n",
    "import os\n",
    "from torch.utils.data import DataLoader\n",
    "from tqdm import tqdm\n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "def sub_process(filter_model, each_folder_path):\n",
    "    each_dataset = TxtDataset(each_folder_path)\n",
    "    each_dataloader = DataLoader(each_dataset, batch_size=8, shuffle=False, num_workers=8)\n",
    "\n",
    "    image_paths = []\n",
    "    aes_scores = []\n",
    "    clip_scores = []\n",
    "    watermark_scores = []\n",
    "    for iii, (batch_image_paths, texts,) in enumerate(tqdm(each_dataloader)):\n",
    "        images =  [Image.open(each_image_path).convert(\"RGB\") for each_image_path in batch_image_paths]\n",
    "        image_paths.extend(batch_image_paths)\n",
    "\n",
    "        image_features = filter_model.get_image_feature(images,)  # Compute image features from the list of images\n",
    "        aes_score = filter_model.get_aesthetics_score(image_features)  # Compute aesthetics scores from the image features\n",
    "        aes_scores.extend(aes_score)\n",
    "\n",
    "        text_features = filter_model.get_text_feature(list(texts)) # Compute text features from the list of captions\n",
    "        clip_score = filter_model.calculate_clip_score(image_features, text_features)  # Compute the similarity matrix\n",
    "        clip_scores.extend(torch.diagonal(clip_score).detach().cpu().numpy())  # Take the diagonal: we only need each image's similarity with its own caption\n",
    "\n",
    "        watermark_score = filter_model.get_watermark_score(images)  # Compute watermark scores from the list of images\n",
    "        watermark_scores.extend(watermark_score)\n",
    "        \n",
    "        # print('aes_score:', aes_score, '\\n',\n",
    "        #     'clip_score:', clip_score, '\\n',\n",
    "        #     'watermark_score:', watermark_score, '\\n',\n",
    "        #     'image_paths:', image_paths, '\\n',\n",
    "        #     'texts:', texts)\n",
    "        \n",
    "    score_pd = pd.DataFrame({'image_path': image_paths, 'aes_score': aes_scores, 'clip_score': clip_scores, 'watermark_score': watermark_scores})\n",
    "    score_pd.to_csv(os.path.join(each_folder_path, 'score.csv'), index=False)\n",
    "    print('save score.csv in {}'.format(each_folder_path), '\\n', '-'*20)\n",
    "\n",
    "for each_folder in all_folders[:10]:\n",
    "    each_folder_path = os.path.join(root_path, each_folder)\n",
    "    sub_process(filter_model, each_folder_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from model import FilterSystem\n",
    "from dataset import TxtDataset\n",
    "import os\n",
    "from torch.utils.data import DataLoader\n",
    "from tqdm import tqdm\n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from concurrent.futures import ProcessPoolExecutor, wait, ALL_COMPLETED\n",
    "\n",
    "p = ProcessPoolExecutor(max_workers=4)\n",
    "\n",
    "def sub_process(filter_model, each_folder_path):\n",
    "    each_dataset = TxtDataset(each_folder_path)\n",
    "    each_dataloader = DataLoader(each_dataset, batch_size=8, shuffle=False, num_workers=8)\n",
    "\n",
    "    image_paths = []\n",
    "    aes_scores = []\n",
    "    clip_scores = []\n",
    "    watermark_scores = []\n",
    "    for iii, (batch_image_paths, texts,) in enumerate(each_dataloader):\n",
    "        images =  [Image.open(each_image_path).convert(\"RGB\") for each_image_path in batch_image_paths]\n",
    "        image_paths.extend(batch_image_paths)\n",
    "\n",
    "        image_features = filter_model.get_image_feature(images,)  # Compute image features from the list of images\n",
    "        aes_score = filter_model.get_aesthetics_score(image_features)  # Compute aesthetics scores from the image features\n",
    "        aes_scores.extend(aes_score)\n",
    "\n",
    "        text_features = filter_model.get_text_feature(list(texts)) # Compute text features from the list of captions\n",
    "        clip_score = filter_model.calculate_clip_score(image_features, text_features)  # Compute the similarity matrix\n",
    "        clip_scores.extend(torch.diagonal(clip_score).detach().cpu().numpy())  # Take the diagonal: we only need each image's similarity with its own caption\n",
    "\n",
    "        watermark_score = filter_model.get_watermark_score(images)  # Compute watermark scores from the list of images\n",
    "        watermark_scores.extend(watermark_score)\n",
    "        \n",
    "        # print('aes_score:', aes_score, '\\n',\n",
    "        #     'clip_score:', clip_score, '\\n',\n",
    "        #     'watermark_score:', watermark_score, '\\n',\n",
    "        #     'image_paths:', image_paths, '\\n',\n",
    "        #     'texts:', texts)\n",
    "        \n",
    "    score_pd = pd.DataFrame({'image_path': image_paths, 'aes_score': aes_scores, 'clip_score': clip_scores, 'watermark_score': watermark_scores})\n",
    "    score_pd.to_csv(os.path.join(each_folder_path, 'score.csv'), index=False)\n",
    "    print('save score.csv in {}'.format(each_folder_path), '\\n', '-'*20)\n",
    "\n",
    "# NOTE: model_pool must be initialized first (see the cell below)\n",
    "futures = []\n",
    "for i, each_folder in enumerate(all_folders[:10]):\n",
    "    each_folder_path = os.path.join(root_path, each_folder)\n",
    "    # distribute folders round-robin across the 4 models so each folder is scored exactly once\n",
    "    futures.append(p.submit(sub_process, model_pool[i % 4], each_folder_path))\n",
    "wait(futures, return_when=ALL_COMPLETED)\n",
    "p.shutdown()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use a model pool to run 4 processes in parallel\n",
    "model_pool = [FilterSystem() for i in range(4)]\n",
    "for model in model_pool:\n",
    "    model.init_clip_model()\n",
    "    model.init_aesthetics_model()\n",
    "    model.init_watermark_model()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(aes_scores, clip_scores, watermark_scores)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('image_paths:', image_paths, '\\n',  'texts:', texts)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# PyTorch Lightning + multi-process"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pytorch_lightning as pl\n",
    "\n",
    "class ScoreSystem(pl.LightningModule):\n",
    "    def __init__(self, clip_model_path=\"IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese\", aesthetics_model_path=\"./ava+logos-l14-linearMSE.pth\", watermark_model_path=\"./watermark_model_v1.pt\"):\n",
    "        super().__init__()\n",
    "        # store the paths used by the init_* helpers below\n",
    "        self.clip_model_path = clip_model_path\n",
    "        self.aesthetics_model_path = aesthetics_model_path\n",
    "        self.watermark_model_path = watermark_model_path\n",
    "        self.text_encoder, self.text_tokenizer, self.clip_model, self.processor = self.init_clip_model()\n",
    "        self.aesthetics_model = self.init_aesthetics_model()\n",
    "        self.watermark_model, self.watermark_processor = self.init_watermark_model()\n",
    "\n",
    "    def init_clip_model(self):\n",
    "        text_encoder = BertModel.from_pretrained(self.clip_model_path).eval().cuda()\n",
    "        text_tokenizer = BertTokenizer.from_pretrained(self.clip_model_path)\n",
    "        clip_model, _, processor = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')\n",
    "        clip_model = clip_model.eval().cuda()\n",
    "        print(\"clip model loaded\")\n",
    "        return text_encoder, text_tokenizer, clip_model, processor\n",
    "\n",
    "    def init_aesthetics_model(self, ):\n",
    "        # Initialize the aesthetics model\n",
    "        aesthetics_model = AestheticsMLP(768)\n",
    "        # load_state_dict returns an IncompatibleKeys tuple, not the model, so call eval/cuda separately\n",
    "        aesthetics_model.load_state_dict(torch.load(self.aesthetics_model_path))\n",
    "        aesthetics_model.eval().cuda()\n",
    "        print(\"aesthetics model loaded\")\n",
    "        return aesthetics_model\n",
    "\n",
    "    def init_watermark_model(self, ):\n",
    "        watermark_model = WaterMarkModel(self.watermark_model_path)\n",
    "        watermark_model.eval().cuda()\n",
    "        watermark_processor =  T.Compose([\n",
    "                                                T.Resize((256, 256)),\n",
    "                                                T.ToTensor(),\n",
    "                                                T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n",
    "                                            ])\n",
    "        print(\"watermark model loaded\")\n",
    "        return watermark_model, watermark_processor\n",
    "\n",
    "    def get_image_feature(self, images):\n",
    "        # Return the image feature vectors\n",
    "        if isinstance(images, list):\n",
    "            images = torch.stack([self.processor(image) for image in images]).cuda()\n",
    "        elif isinstance(images, torch.Tensor):\n",
    "            images = images.cuda()\n",
    "\n",
    "        with torch.no_grad():\n",
    "            image_features = self.clip_model.encode_image(images)\n",
    "            image_features /= image_features.norm(dim=1, keepdim=True)\n",
    "        return image_features\n",
    "    \n",
    "    def get_text_feature(self, text):\n",
    "        # Return the text feature vectors\n",
    "        if isinstance(text, list) or isinstance(text, str):\n",
    "            text = self.text_tokenizer(text, return_tensors='pt', padding=True)['input_ids'].cuda()\n",
    "        elif isinstance(text, torch.Tensor):\n",
    "            text = text.cuda()\n",
    "\n",
    "        with torch.no_grad():\n",
    "            text_features = self.text_encoder(text)[1]\n",
    "            text_features /= text_features.norm(dim=1, keepdim=True)\n",
    "        return text_features\n",
    "\n",
    "    def calculate_clip_score(self, features1, features2):\n",
    "        # Similarity between two feature sets; inputs can be image+text, text+text, or image+image.\n",
    "        # Returns a similarity matrix of shape f1.shape[0] x f2.shape[0]\n",
    "        score_matrix =  features1 @ features2.t()\n",
    "        return score_matrix\n",
    "\n",
    "    def get_aesthetics_score(self, features):\n",
    "        # Return aesthetics scores; the input is a CLIP (ViT-L-14) feature: call get_image_feature first, then pass its output here\n",
    "        with torch.no_grad():\n",
    "            scores = self.aesthetics_model(features)\n",
    "            scores = scores[:, 0].detach().cpu().numpy()\n",
    "        return scores\n",
    "    \n",
    "    def get_watermark_score(self, images):\n",
    "        if isinstance(images, list):\n",
    "            images = torch.stack([self.watermark_processor(image) for image in images]).cuda()\n",
    "        elif isinstance(images, torch.Tensor):\n",
    "            images = images.cuda()\n",
    "        with torch.no_grad():\n",
    "            pred = self.watermark_model(images)\n",
    "            watermark_scores = F.softmax(pred, dim=1)[:,0].detach().cpu().numpy()\n",
    "\n",
    "        return watermark_scores\n",
    "\n",
    "    def predict_step(self, batch, batch_idx):\n",
    "        images, texts = batch   \n",
    "        # TODO: either pass in the two preprocessed image variants, or pass raw images and preprocess inside the functions below (currently raw images)\n",
    "        image_features = self.get_image_feature(images)\n",
    "        text_features = self.get_text_feature(texts)\n",
    "        clip_scores = self.calculate_clip_score(image_features, text_features)\n",
    "        aes_scores = self.get_aesthetics_score(image_features)\n",
    "        watermark_scores = self.get_watermark_score(images)\n",
    "        return clip_scores, aes_scores, watermark_scores\n",
    "\n",
    "    def on_predict_epoch_end(self, outputs):\n",
    "        # Gather all prediction results\n",
    "        clip_scores = torch.cat([output[0] for output in outputs], dim=0)\n",
    "        aes_scores = torch.cat([output[1] for output in outputs], dim=0)\n",
    "        watermark_scores = torch.cat([output[2] for output in outputs], dim=0)\n",
    "        return clip_scores, aes_scores, watermark_scores"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.13 ('base')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "4cc247672a8bfe61dc951074f9ca89ab002dc0f7e14586a8bb0828228bebeefa"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}


================================================
FILE: text-image/data_filter/wukong_filter.py
================================================
# %%
from torch.utils.data import Dataset, ConcatDataset
from torchvision import transforms
import os
from PIL import Image
from concurrent.futures import ProcessPoolExecutor
import json
import torch
from transformers import BertModel
import open_clip
import numpy as np
from transformers import BertTokenizer
import pandas as pd
from tqdm import tqdm
import argparse


parser = argparse.ArgumentParser(description="Score wukong image-text pairs with Chinese CLIP.")
parser.add_argument(
    "--part",
    type=int,
    default=0,
    required=True,
)
args = parser.parse_args()


class CsvDataset(Dataset):
    def __init__(self, input_filename, transforms, input_root, tokenizer, img_key, caption_key, sep="\t"):
        # logging.debug(f'Loading csv data from {input_filename}.')
        print(f'Loading csv data from {input_filename}.')
        self.images = []
        self.captions = []
        if input_filename.endswith('.csv'):
            df = pd.read_csv(input_filename, index_col=0)
            df = df[df['used'] == 1]
            self.images.extend(df[img_key].tolist())
            self.captions.extend(df[caption_key].tolist())
        # NOTE Chinese tokenizer
        self.tokenizer = tokenizer
        self.context_length = 77
        self.root = input_root
        self.transforms = transforms

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_path = str(self.images[idx])
        image = self.transforms(Image.open(os.path.join(self.root, img_path)).convert('RGB'))
        text = self.tokenizer(str(self.captions[idx]), max_length=self.context_length, padding='max_length', truncation=True, return_tensors='pt')['input_ids'][0]
        return image, text, img_path


text_encoder = BertModel.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese").eval().cuda()
clip_model, _, processor = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')
clip_model = clip_model.eval().cuda()
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese")


input_filename = './project/dataset/wukong/release'
preprocess_fn = processor
input_root = './project/dataset/wukong/images'
tokenizer = text_tokenizer
all_csvs = sorted(os.listdir(input_filename))

for i in range(len(all_csvs)*args.part//5, len(all_csvs)*(args.part+1)//5):
    # split the CSV list into 5 parts and process part args.part
    each_csv_path = os.path.join(input_filename, all_csvs[i])
    dataset = CsvDataset(each_csv_path, preprocess_fn, input_root, tokenizer, img_key="name", caption_key="caption")
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=False, num_workers=8, pin_memory=True)
    
    df = pd.read_csv(each_csv_path, index_col=0)
    df = df[df['used'] == 1]
    scores = []
    for iii, (image, text, image_path) in enumerate(tqdm(dataloader)):
        # print(image.shape, text.shape)
        with torch.no_grad():
            image = image.cuda()
            text = text.cuda()
            # print(image.shape, text.shape)
            image_features = clip_model.encode_image(image)
            text_features = text_encoder(text)[1]

            # print(image_features.shape, text_features.shape)
            # normalize the features
            image_features = image_features / image_features.norm(dim=1, keepdim=True)
            text_features = text_features / text_features.norm(dim=1, keepdim=True)
            score_each_pair =  image_features @ text_features.t()

            scores.extend(torch.diagonal(score_each_pair).detach().cpu().numpy())
            # break
    df['score'] = scores
    df.to_csv( each_csv_path.replace(all_csvs[i], 'score'+all_csvs[i]) , index=False)
    print('saving score to', each_csv_path.replace(all_csvs[i], 'score'+all_csvs[i]) )
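
# The scoring loop above builds a pairwise image-text similarity matrix and then
# keeps only its diagonal (each image vs. its own caption). That trick can be
# checked in isolation with numpy; a toy sketch with made-up unit features:
#
# import numpy as np
#
# image_feats = np.eye(3, 4)   # three fake L2-normalized image feature rows
# text_feats = np.eye(3, 4)    # their three matched caption features
#
# score_matrix = image_feats @ text_feats.T   # (3, 3): every image vs. every caption
# paired_scores = np.diagonal(score_matrix)   # image_i vs. its own caption_i only
# print(paired_scores.tolist())               # [1.0, 1.0, 1.0]
```python
import numpy as np

# Three fake L2-normalized feature rows for images and their matched captions
image_feats = np.eye(3, 4)
text_feats = np.eye(3, 4)

score_matrix = image_feats @ text_feats.T    # (3, 3): every image vs. every caption
paired_scores = np.diagonal(score_matrix)    # image_i vs. its own caption_i only

print(paired_scores.tolist())  # [1.0, 1.0, 1.0]
```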
   

================================================
FILE: text-image/data_filter/wukong_reader.py
================================================
from torch.utils.data import Dataset, ConcatDataset
from torchvision import transforms
import os
from PIL import Image
from concurrent.futures import ProcessPoolExecutor
import json
import torch
from transformers import BertModel
import open_clip
import numpy as np
from transformers import BertTokenizer
import pandas as pd
from tqdm import tqdm
import argparse
# NOTE Speed up data loading: reuse the original Dataset and parallelize the CSV reads externally (30 min -> 3 min)
class CsvDataset(Dataset):
    def __init__(self, input_filename, input_root, img_key, caption_key, transforms=None, thres=0.2, sep="\t"):
        # logging.debug(f'Loading csv data from {input_filename}.')
        print(f'Loading csv data from {input_filename}.')
        self.images = []
        self.captions = []

        if input_filename.endswith('.csv'):
            # print(f"Load Data from{input_filename}")
            df = pd.read_csv(input_filename, index_col=0)
            df = df[df['used'] == 1]
            df = df[df['score']>thres]
            self.images.extend(df[img_key].tolist())
            self.captions.extend(df[caption_key].tolist())
        
        # NOTE Chinese tokenizer
        self.tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")

        self.context_length = 77
        self.root = input_root
        self.transforms = transforms

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_path = str(self.images[idx])
        image = self.transforms(Image.open(os.path.join(self.root, img_path)).convert('RGB'))
        text = self.tokenizer(str(self.captions[idx]), max_length=self.context_length, padding='max_length', truncation=True, return_tensors='pt')['input_ids'][0]
        return image, text


def process_pool_read_csv_dataset(input_root, input_filename, thres=0.20):
    # here input_filename is a directory containing the annotation CSV files
    all_csvs = os.listdir(input_filename)

    csv_with_score = [each for each in all_csvs if 'score' in each ]
    all_datasets = []
    res = []        
    p = ProcessPoolExecutor(max_workers=24)
    for i in range(len(csv_with_score)):
        each_csv_path = os.path.join(input_filename, csv_with_score[i])
        print(i, each_csv_path)
        res.append(p.submit(CsvDataset, each_csv_path, input_root, img_key="name", caption_key="caption", thres=thres))
    p.shutdown()
    for future in res:
        all_datasets.append(future.result())
    dataset = ConcatDataset(all_datasets)
    return dataset


tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese", model_max_length=512)
input_filename = './project/dataset/wukong/release'   # directory holding the annotation CSVs
input_root = './project/dataset/wukong/images'
dataset = process_pool_read_csv_dataset(input_root, input_filename, thres=0.22)

print(len(dataset))
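
The per-CSV filtering in CsvDataset above (keep rows with `used == 1`, then threshold on `score`) can be sketched on a toy frame; the column names follow the reader above, the rows are made up:

```python
import pandas as pd

# Toy stand-in for one score*.csv produced by wukong_filter.py
df = pd.DataFrame({
    "name": ["a.jpg", "b.jpg", "c.jpg"],
    "caption": ["猫", "狗", "鸟"],
    "used": [1, 1, 0],
    "score": [0.25, 0.10, 0.30],
})

kept = df[df["used"] == 1]        # drop rows marked unused
kept = kept[kept["score"] > 0.2]  # drop pairs with low image-text similarity

print(kept["name"].tolist())  # ['a.jpg']
```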

================================================
FILE: text-image/fid_clip_score/.gitignore
================================================
/output*


================================================
FILE: text-image/fid_clip_score/coco_sample_generator.py
================================================
from torch.utils.data import Dataset, DataLoader
import pandas as pd 
import os
from diffusers import StableDiffusionPipeline
from argparse import ArgumentParser
from tqdm import tqdm
from multiprocessing import Process

parser = ArgumentParser()
parser.add_argument('--coco_path', type=str, default='../dataset/coco')
parser.add_argument('--coco_cache_file', type=str, default='../dataset/coco/subset.parquet')
parser.add_argument('--output_path', type=str, default='./output')
parser.add_argument('--model_path', type=str, default='../pretrained_models/stable-diffusion-v1-4')
parser.add_argument('--sample_step', type=int, default=20)
parser.add_argument('--guidance_scale', type=float, default=1.5)
parser.add_argument('--batch_size', type=int, default=2)
args = parser.parse_args()


class COCOCaptionSubset(Dataset):
    def __init__(self, path, transform=None):
        self.df = pd.read_parquet(path)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        return row['file_name'], row['caption']

def save_images(images, image_paths, output_path):
    for i, image_path in enumerate(image_paths):
        image_path = image_path.replace('/', '_')
        image_path = os.path.join(output_path, image_path)
        images[i].save(image_path)

if __name__ == '__main__':
    coco_path = args.coco_path
    # coco_cache_file = f'{coco_path}/subset.parquet'     # sampled subset created by fid_clip_coco.ipynb
    cocosubset = COCOCaptionSubset(args.coco_cache_file)
    cocosubsetloader = DataLoader(cocosubset, batch_size=args.batch_size, shuffle=False, num_workers=8)

    # load the t2i model; safety_checker=None actually skips the checker
    # (requires_safety_checker=False alone only suppresses the config warning)
    stable_diffusion = StableDiffusionPipeline.from_pretrained(args.model_path, safety_checker=None, requires_safety_checker=False).to('cuda')

    sample_step = args.sample_step
    guidance_scale = args.guidance_scale


    output_path = os.path.join(
        args.output_path,
        f'gs{guidance_scale}_ss{sample_step}'
    )
    os.makedirs(output_path, exist_ok=True)

    for i, (image_paths, captions) in enumerate(tqdm(cocosubsetloader)):
        outputs = stable_diffusion(list(captions), num_inference_steps=sample_step, guidance_scale=guidance_scale).images
        # write to disk in a background process so generation is not blocked by I/O
        p = Process(target=save_images, args=(outputs, image_paths, output_path))
        p.start()
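
The `save_images` helper above flattens any sub-directory components of a COCO file name into the output file name. A minimal sketch of that mapping, using a made-up file name:

```python
import os

def output_name(image_path, output_dir):
    # mirrors save_images: replace '/' so nested COCO paths become flat file names
    return os.path.join(output_dir, image_path.replace('/', '_'))

print(output_name('val2014/COCO_val2014_000000000042.jpg', 'output/gs1.5_ss20'))
```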

================================================
FILE: text-image/fid_clip_score/compute_fid.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# FID指标计算"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "FID (same): -0.001\n",
      "FID (different): 486.117\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "from scipy.linalg import sqrtm\n",
    "def calculate_fid(act1, act2):\n",
    "    # calculate mean and covariance statistics\n",
    "    mu1, sigma1 = act1.mean(axis=0), np.cov(act1, rowvar=False)\n",
    "    mu2, sigma2 = act2.mean(axis=0), np.cov(act2, rowvar=False)\n",
    "    # print(mu1.shape, mu2.shape, sigma1.shape, sigma2.shape)\n",
    "    # calculate sum squared difference between means\n",
    "    ssdiff = np.sum((mu1 - mu2)**2.0)\n",
    "    # print(ssdiff)\n",
    "    # calculate sqrt of product between cov\n",
    "    covmean = sqrtm(sigma1.dot(sigma2)) # 负数平方根也能算\n",
    "    # print(covmean)\n",
    "    # check and correct imaginary numbers from sqrt\n",
    "    if np.iscomplexobj(covmean):\n",
    "        covmean = covmean.real\n",
    "    # calculate score\n",
    "    fid = ssdiff + np.trace(sigma1 + sigma2 - 2.0 * covmean)\n",
    "    return fid\n",
    "\n",
    "# define two collections of activations\n",
    "act1 = np.random.rand(2, 2048)\n",
    "act2 = np.random.rand(3, 2048)\n",
    "# fid between act1 and act1\n",
    "fid = calculate_fid(act1, act1)\n",
    "print('FID (same): %.3f' % fid) # should be 0.0\n",
    "# fid between act1 and act2\n",
    "fid = calculate_fid(act1, act2)\n",
    "print('FID (different): %.3f' % fid)    # should be > 0.0"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.13 ('base')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "4cc247672a8bfe61dc951074f9ca89ab002dc0f7e14586a8bb0828228bebeefa"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
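
As a sanity check on `calculate_fid` above: in one dimension the trace term reduces to `var1 + var2 - 2*sqrt(var1*var2)`, i.e. `(s1 - s2)**2`, so the whole expression becomes `(mu1 - mu2)**2 + (s1 - s2)**2`. A stdlib sketch of that special case:

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    # 1-D special case of the FID formula: the trace term collapses to (sqrt(var1) - sqrt(var2))**2
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

print(fid_1d(0.0, 1.0, 0.0, 1.0))  # identical Gaussians -> 0.0
print(fid_1d(0.0, 1.0, 2.0, 1.0))  # means two apart -> 4.0
```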


================================================
FILE: text-image/fid_clip_score/fid_clip_coco.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "reference: https://wandb.ai/dalle-mini/dalle-mini/reports/CLIP-score-vs-FID-pareto-curves--VmlldzoyMDYyNTAy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Sampling data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load data\n",
    "import json\n",
    "coco_path = '/home/tiger/project/dataset/coco'\n",
    "data_file = f'{coco_path}/annotations/captions_val2014.json'\n",
    "data = json.load(open(data_file))\n",
    "\n",
    "\n",
    "# merge images and annotations\n",
    "import pandas as pd\n",
    "images = data['images']\n",
    "annotations = data['annotations']\n",
    "df = pd.DataFrame(images)\n",
    "df_annotations = pd.DataFrame(annotations)\n",
    "df = df.merge(pd.DataFrame(annotations), how='left', left_on='id', right_on='image_id')\n",
    "\n",
    "\n",
    "# keep only the relevant columns\n",
    "df = df[['file_name', 'caption']]\n",
    "\n",
    "\n",
    "# shuffle the dataset\n",
    "df = df.sample(frac=1)\n",
    "\n",
    "\n",
    "# remove duplicate images\n",
    "df = df.drop_duplicates(subset='file_name')\n",
    "\n",
    "\n",
    "# create a random subset\n",
    "n_samples = 10000\n",
    "df_sample = df.sample(n_samples)\n",
    "\n",
    "\n",
    "# save the sample to a parquet file\n",
    "df_sample.to_parquet(f'{coco_path}/subset.parquet')\n",
    "\n",
    "\n",
    "# copy the images to reference folder\n",
    "from pathlib import Path\n",
    "import shutil\n",
    "subset_path = Path(f'{coco_path}/subset')\n",
    "subset_path.mkdir(exist_ok=True)\n",
    "for i, row in df_sample.iterrows():\n",
    "    path = f'{coco_path}/val2014/' + row['file_name']\n",
    "    shutil.copy(path, f'{coco_path}/subset/')\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# center crop the images\n",
    "def center_crop_images(folder, output_folder, size):\n",
    "    # coco images are not square, so we need to center crop them\n",
    "    from PIL import Image\n",
    "    import os\n",
    "    os.makedirs(output_folder, exist_ok=True)\n",
    "    for file in os.listdir(folder):\n",
    "        image_path = os.path.join(folder, file)\n",
    "        image = Image.open(image_path)\n",
    "        width, height = image.size\n",
    "        left = (width - size) / 2 if width > size else 0\n",
    "        top = (height - size) / 2 if height > size else 0\n",
    "        right = (width + size) / 2 if width > size else width\n",
    "        bottom = (height + size) / 2 if height > size else height\n",
    "        image = image.crop((left, top, right, bottom))\n",
    "        image = image.resize((size, size))  # resize non-square images\n",
    "        image.save(os.path.join(output_folder, file))\n",
    "\n",
    "folder_name = '/home/tiger/project/dataset/coco/subset'\n",
    "output_folder = '/home/tiger/project/dataset/coco/subset_cropped'\n",
    "center_crop_images(folder_name, output_folder, 320)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load subset as dataloader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load the subset\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "import pandas as pd \n",
    "\n",
    "class COCOCaptionSubset(Dataset):\n",
    "    def __init__(self, path, transform=None):\n",
    "        self.df = pd.read_parquet(path)\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.df)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        row = self.df.iloc[idx]\n",
    "        return row['file_name'], row['caption']\n",
    "\n",
    "# testing \n",
    "coco_path = '/home/tiger/project/dataset/coco'\n",
    "coco_cache_file = f'{coco_path}/subset.parquet'     # sampled subsets\n",
    "cocosubset = COCOCaptionSubset(coco_cache_file)\n",
    "cocosubsetloader = DataLoader(cocosubset, batch_size=64, shuffle=False, num_workers=8)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generating Images Via T2I Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# demo inference, use coco_sample_generator.py to generate more\n",
    "# load the t2i model\n",
    "# from diffusers import StableDiffusionPipeline\n",
    "# stable_diffusion = StableDiffusionPipeline.from_pretrained(\"/home/tiger/project/pretrained_models/stable-diffusion-v1-4\").to('cuda')   \n",
    "\n",
    "# sample_step = 20\n",
    "# guidance_scale = 1.5\n",
    "\n",
    "# import os\n",
    "\n",
    "# output_path = f'./output_gs{guidance_scale}_ss{sample_step}'\n",
    "# os.makedirs(output_path, exist_ok=True)\n",
    "\n",
    "# for i, (image_paths, captions) in enumerate(cocosubsetloader):\n",
    "#     outputs = stable_diffusion(list(captions), num_inference_steps=sample_step, guidance_scale=guidance_scale).images\n",
    "#     for j, image_path in enumerate(image_paths):\n",
    "#         image_path = image_path.replace('/', '_')\n",
    "#         image_path = os.path.join(output_path, image_path)\n",
    "#         outputs[j].save(image_path)\n",
    "#     break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "device = torch.device('cuda')\n",
    "\n",
    "coco_subset_crop_path = '/home/tiger/project/dataset/coco/subset_cropped'\n",
    "output_root = '/home/tiger/project/position-guided-t2i/output'\n",
    "output_paths = [os.path.join(output_root, out) for out in sorted(os.listdir(output_root))]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/tiger/anaconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.\n",
      "  warnings.warn(\n",
      "/home/tiger/anaconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=None`.\n",
      "  warnings.warn(msg)\n",
      "100%|██████████| 50/50 [00:13<00:00,  3.58it/s]\n",
      "100%|██████████| 50/50 [00:16<00:00,  3.06it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs1.5_ss20 22.765903388613765\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:09<00:00,  5.50it/s]\n",
      "100%|██████████| 50/50 [00:16<00:00,  3.03it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs2.0_ss20 18.159921113816665\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:10<00:00,  4.95it/s]\n",
      "100%|██████████| 50/50 [00:15<00:00,  3.14it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs3.0_ss20 15.94397287378655\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:09<00:00,  5.18it/s]\n",
      "100%|██████████| 50/50 [00:15<00:00,  3.14it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs4.0_ss20 16.315106185605657\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:09<00:00,  5.17it/s]\n",
      "100%|██████████| 50/50 [00:16<00:00,  3.00it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs5.0_ss20 17.35088805364785\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:09<00:00,  5.15it/s]\n",
      "100%|██████████| 50/50 [00:16<00:00,  3.01it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs6.0_ss20 17.933771904354728\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:09<00:00,  5.21it/s]\n",
      "100%|██████████| 50/50 [00:16<00:00,  3.06it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs7.0_ss20 19.059673548019532\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [00:09<00:00,  5.18it/s]\n",
      "100%|██████████| 50/50 [00:17<00:00,  2.90it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs8.0_ss20 20.12984543749127\n"
     ]
    }
   ],
   "source": [
    "# fid score\n",
    "\n",
    "# !pip install pytorch_fid\n",
    "# !python -m pytorch_fid /home/tiger/project/dataset/coco/subset_cropped /home/tiger/project/position-guided-t2i/output/gs2.0_ss20\n",
    "\n",
    "from pytorch_fid.fid_score import calculate_fid_given_paths\n",
    "\n",
    "fids = []\n",
    "for output_path in output_paths:\n",
    "    fid_value = calculate_fid_given_paths([coco_subset_crop_path, output_path], batch_size=200, device=device, dims=2048, num_workers=8)\n",
    "    fids.append(fid_value)\n",
    "    print(output_path, fid_value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import CLIPProcessor, CLIPModel, CLIPTokenizer\n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "from tqdm import tqdm\n",
    "\n",
    "\n",
    "def load_clip_model(model_path='openai/clip-vit-large-patch14'):\n",
    "    # text_encoder = BertModel.from_pretrained(model_path).eval().cuda()\n",
    "    # text_tokenizer = BertTokenizer.from_pretrained(model_path)\n",
    "    clip_model = CLIPModel.from_pretrained(model_path)\n",
    "    processor = CLIPProcessor.from_pretrained(model_path)\n",
    "    tokenizer = CLIPTokenizer.from_pretrained(model_path)\n",
    "\n",
    "    clip_model = clip_model.eval().cuda()\n",
    "    return clip_model, processor, tokenizer\n",
    "\n",
    "\n",
    "def clip_score(clip_model, processor, tokenizer, dataloader, output_image_path):\n",
    "    all_image_features = []\n",
    "    all_text_features = []\n",
    "    for (i, (image_paths, captions)) in enumerate(tqdm(dataloader)):\n",
    "        # print(image_paths, captions)\n",
    "        text_inputs = tokenizer(list(captions), padding=True, return_tensors=\"pt\").to('cuda')\n",
    "        text_features = clip_model.get_text_features(**text_inputs)\n",
    "        text_features = text_features / text_features.norm(dim=-1, keepdim=True)\n",
    "        text_features = text_features.detach().cpu().numpy()\n",
    "        all_text_features.append(text_features)\n",
    "\n",
    "        # vit 速度比较龟\n",
    "        images = [Image.open(os.path.join( output_image_path , image_path)) for image_path in image_paths]\n",
    "        image_inputs = processor(images = images, return_tensors=\"pt\").to('cuda')\n",
    "        image_features = clip_model.get_image_features(**image_inputs)\n",
    "        image_features = image_features / image_features.norm(dim=-1, keepdim=True)\n",
    "        image_features = image_features.detach().cpu().numpy()\n",
    "        all_image_features.append(image_features)\n",
    "\n",
    "        # NOTE testing 等太久了,抽样吧... 需要全部的话,把这个 if 去掉\n",
    "        if i == 10:\n",
    "            break\n",
    "\n",
    "    all_text_features = np.concatenate(all_text_features, axis=0)\n",
    "    all_image_features = np.concatenate(all_image_features, axis=0)\n",
    "    mean_similarity = (all_image_features @ all_text_features.T).diagonal().mean()\n",
    "    return mean_similarity\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:15<03:45,  1.54s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs1.5_ss20 0.23490335\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:14<03:39,  1.50s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs2.0_ss20 0.24406949\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:15<03:41,  1.51s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs3.0_ss20 0.25112092\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:14<03:39,  1.49s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs4.0_ss20 0.25709876\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:14<03:37,  1.48s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs5.0_ss20 0.25781947\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:14<03:40,  1.50s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs6.0_ss20 0.2593051\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:14<03:37,  1.48s/it]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs7.0_ss20 0.26007786\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  6%|▋         | 10/157 [00:14<03:37,  1.48s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/tiger/project/position-guided-t2i/output/gs8.0_ss20 0.2596085\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "clip_model_path=\"/home/tiger/project/pretrained_models/clip-vit-large-patch14\"\n",
    "clip_model, processor, tokenizer = load_clip_model(clip_model_path)\n",
    "clip_scores = []\n",
    "for output_path in output_paths:\n",
    "    clip_score_each = clip_score(clip_model, processor, tokenizer, cocosubsetloader, output_path)   # 3min ....\n",
    "    print(output_path, clip_score_each)\n",
    "    clip_scores.append(clip_score_each)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjMAAAGxCAYAAACXwjeMAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8qNh9FAAAACXBIWXMAAA9hAAAPYQGoP6dpAABU5ElEQVR4nO3deVwUdeMH8M/sArscyyIol4CCiIoY4IFnHmWmj2mm5VGWdmcemT1lPuVVmVq/NK208vGozDPx6PJJU/HWVFARREBUVBAB2eW+dn5/GFskKMcuM7t83q/Xvnrt7LDzYV4b+3Hm+50RRFEUQURERGShFFIHICIiIqoPlhkiIiKyaCwzREREZNFYZoiIiMiiscwQERGRRWOZISIiIovGMkNEREQWjWWGiIiILJqN1AHMzWAw4Pr169BoNBAEQeo4REREVAOiKCI3Nxfe3t5QKO5+7MXqy8z169fh6+srdQwiIiKqg9TUVPj4+Nx1HUnLzPz58xEZGYnz58/D3t4ePXr0wMKFC9GmTRvjOnPmzMGGDRuQmpoKOzs7dOrUCfPmzUPXrl1rtA2NRgPg9s5wdnY2y+9BREREpqXX6+Hr62v8Hr8bSctMVFQUJk6ciC5duqCsrAzvvPMOBgwYgLi4ODg6OgIAgoKC8PnnnyMgIACFhYVYvHgxBgwYgKSkJDRr1uye26g4teTs7MwyQ0REZGFqMkREkNONJm/evAl3d3dERUWhd+/eVa6j1+uh1Wqxe/duPPjgg/d8z4r1dTodywwREZGFqM33t6zGzOh0OgCAq6trla+XlJTg66+/hlarRWhoaJXrFBcXo7i42Phcr9ebPigRERHJhmymZouiiGnTpqFXr14ICQmp9NpPP/0EJycnqNVqLF68GLt27ULTpk2rfJ/58+dDq9UaHxz8S0REZN1kc5pp4sSJ+Pnnn3Hw4ME7Ri3n5+cjLS0NmZmZWLFiBfbs2YNjx47B3d39jvep6siMr68vTzMRERFZkNqcZpLFkZnJkydjx44d2Lt3b5XTrxwdHREYGIhu3bph5cqVsLGxwcqVK6t8L5VKZRzsy0G/RERE1k/SMTOiKGLy5MnYunUr9u3bB39//xr/3N+PvhAREVHjJWmZmThxItatW4ft27dDo9EgPT0dAKDVamFvb4/8/HzMmzcPQ4cOhZeXF7KysrBs2TJcvXoVTzzxhJTRiYiISCYkLTPLly8HAPTt27fS8tWrV2P8+PFQKpU4f/48vvnmG2RmZsLNzQ1dunTBgQMH0L59ewkSExERkdxIfprpbtRqNSIjIxsoDREREVkiWV1nxpKUG0QcT8lGRm4R3DVqRPi7QqngjSyJiIgaGstMHeyMTcPcH+OQpisyLvPSqjF7SDAGhnhJmIyIiKjxkcXUbEuyMzYNE9aeqlRkACBdV4QJa09hZ2yaRMmIiIgaJ5aZWig3iJj7YxyqGulTsWzuj3EoN8jiOoRERESNAstMLRxPyb7jiMzfiQDSdEU4npLdcKGIiIgaOZaZWsjIrb7I1GU9IiIiqj+WmVpw16hNuh4RERHVH8tMLUT4u8JLq0Z1E7AF3J7VFOHv2pCxiIiIGjWWmVpQKgTMHhIMAFUWGhHA7CHBvN4MERFRA2KZqaWBIV5YPrYjPLV3nkpSKgCfJg4SpCIiImq8BPFe9xSwcHq9HlqtFjqdDs7OziZ738pXAFZhzaFL+F/cDfg3dcRPk3vBUcXrERIREdVVbb6/+Y1bR0qFgO6t3IzP23k548ySA0jJzMecHefw8ROhEqYjIiJqPHiayURcHOyweFQYFAKw+eRV/Hj6utSRiIiIGgWWGRPqFuCGSf0CAQD/iTyL1OwCiRMRERFZP5YZE5vyYGt0atEEucVleG1DNMrKDVJHIiIismosMyZmo1Tg01Fh0KhtcOpKDpb8nih1JCIiIqvGMmMGvq4O+PCxDgCAz/cm4ejFLIkTERERWS+WGTMZEuqN
kZ19IIrA6xtjcCu/ROpIREREVollxozmDG2PgKaOSNMVYfqWM7DyS/oQERFJgmXGjBzsbLB0TDhslQJ+i7uB749dkToSERGR1WGZMbOQ5lpMH9gWAPD+T3G4cCNX4kRERETWhWWmATzX0x99gpqhuMyAyeuiUVRaLnUkIiIiq8Ey0wAUCgH/90QomjqpkHAjFx/+Ei91JCIiIqvBMtNAmmlU+GTk7fs1fXvkMnbF3ZA4ERERkXVgmWlAfYKa4cX7/QEAb/1wGum6IokTERERWT6WmQb25sNtEdLcGbcKSvH6xhiUGzhdm4iIqD5YZhqYnY0CS0eHw8FOiSMXs/BlVLLUkYiIiCway4wEApo5Ye7Q9gCARbsuIPrKLYkTERERWS6WGYk83skHQ0K9UW4QMWVDNPRFpVJHIiIiskgsMxIRBAHzHguBTxN7pGYXYua2WN7ugIiIqA5YZiTkrLbFktHhUCoEbI+5jshT16SOREREZHFYZiTWqUUTvN6/NQBg1vZYpGTmS5yIiIjIsrDMyMCEvoHoFuCK/JJyTFkfjZIyg9SRiIiILAbLjAwoFQIWjwqDi4Mtzl7T4ZPfEqSOREREZDFYZmTCS2uPhSPuAwB8tf8iDiTelDgRERGRZWCZkZGH23tibDc/AMC0TaeRmVcscSIiIiL5Y5mRmXcHByPIwwk3c4vx5ubTnK5NRER0DywzMqO2VWLpmHDY2SiwN+EmVh+6JHUkIiIiWZO0zMyfPx9dunSBRqOBu7s7hg0bhoSEvwa/lpaWYvr06ejQoQMcHR3h7e2NZ555BtevX5cwtfm19XTGu4PbAQAW/Hoe567rJE5EREQkX5KWmaioKEycOBFHjx7Frl27UFZWhgEDBiA///a1VgoKCnDq1CnMnDkTp06dQmRkJC5cuIChQ4dKGbtBPN2tBfq380BJuQFT1kejoKRM6khERESyJIgyGpRx8+ZNuLu7IyoqCr17965ynT/++AMRERG4fPky/Pz87vmeer0eWq0WOp0Ozs7Opo5sVtn5JRi0ZD9u6IsxuosvFvw524mIiMja1eb7W1ZjZnS626dTXF1d77qOIAhwcXFpoFTScXW0w+JRYRAEYMMfqfjlbJrUkYiIiGRHNmVGFEVMmzYNvXr1QkhISJXrFBUV4e2338aTTz5ZbUsrLi6GXq+v9LBkPVo1xat9WwEA3t5yBtdyCiVOREREJC+yKTOTJk3CmTNnsH79+ipfLy0txejRo2EwGLBs2bJq32f+/PnQarXGh6+vr7kiN5ip/YMQ5usCfVEZpm6IRlk5b3dARERUQRZlZvLkydixYwf27t0LHx+fO14vLS3FyJEjkZKSgl27dt313NmMGTOg0+mMj9TUVHNGbxC2SgWWjg6Hk8oGf1y6hc/2JEkdiYiISDYkLTOiKGLSpEmIjIzEnj174O/vf8c6FUUmMTERu3fvhpub213fU6VSwdnZudLDGvi5OWDeY7dPv322JxHHU7IlTkRERCQPkpaZiRMnYu3atVi3bh00Gg3S09ORnp6OwsLb40LKysrw+OOP48SJE/j+++9RXl5uXKekpETK6JJ4NKw5RnT0gUEEpm6Ihq6gVOpIREREkpN0arYgCFUuX716NcaPH49Lly5VebQGAPbu3Yu+ffvecxuWPDW7KnnFZXhk6QFcyirAoBBPLHuqY7X7kYiIyFLV5vvbpoEyVelePaply5a8N9E/OKlssHRMOEYsP4xfY9Ox4Y9UjIm49/V2iIiIrJUsBgBT7dzn44J/D2gDAJj74zkkZeRKnIiIiEg6LDMW6sX7A3B/66YoKjVg8voYFJWWSx2JiIhIEiwzFkqhEPDJyFC4OdohPk2PBb+elzoSERGRJFhmLJi7Ro3/eyIUALDm8CXsOX9D4kREREQNj2XGwvVr647net6e8fXvzWeQoS+SOBEREVHDYpmxAtMHtUGwlzOy80swbdNpGAycAUZERI0Hy4wVUNkosXRMOOxtlTiYlImv
D1yUOhIREVGDYZmxEoHuTpg9JBgA8H//S8Dp1BxpAxERETUQlhkrMqqLLwZ38EKZQcSUDdHIKy6TOhIREZHZscxYEUEQ8OHwDmjuYo/LWQWYtS1W6khERERmxzJjZbT2tlgyOgwKAYiMvoZt0dekjkRERGRWLDNWqHNLV0x5sDUA4N1tsbiSVSBxIiIiIvNhmbFSk/oFIqKlK/KKyzB5QzRKyw1SRyIiIjILlhkrZaNUYPHoMDirbXA6NQeLdl2QOhIREZFZsMxYseYu9lg44j4AwJdRyTiclClxIiIiItNjmbFygzp4YUyEL0QRmLoxBtn5JVJHIiIiMimWmUZg1iPtEejuhIzcYrz1w2mIIm93QERE1oNlphGwt1Ni6ehw2CkV2B2fgW+PXJY6EhERkcmwzDQSwd7OmPGvtgCAeb/EIz5NL3EiIiIi02CZaUTG92iJB9q6o6TMgCnro1FYUi51JCIionpjmWlEBEHAx4/fB3eNCokZeXj/5zipIxEREdUby0wj4+akwqKRYRAEYN2xK9gZmyZ1JCIionphmWmEerVuipd6BwAApm85i+s5hRInIiIiqjuWmUbqjYfaINRHC11hKaZujEG5gdO1iYjIMrHMNFJ2NgosHRMORzsljqdk44u9SVJHIiIiqhOWmUashZsj3h8WAgBY8nsiTl7OljgRERFR7bHMNHLDO/pgWJg3yg0ipqyPga6wVOpIREREtcIyQ3h/WAj8XB1wLacQ72w9y9sdEBGRRWGZIWjUtlg6Jhw2CgE/nUnD5hNXpY5ERERUYywzBAAI83XBtAFBAIDZO84h+WaexImIiIhqhmWGjF7p3Qo9WrmhsLQcU9ZHo7iMtzsgIiL5Y5khI4VCwOJRYWjiYItz1/X4aGeC1JGIiIjuiWWGKvFwVuPjx0MBACsPpmBfQobEiYiIiO6OZYbu0D/YA+O6twAA/HvzadzMLZY4ERERUfVYZqhKM/7VDm09NcjMK8Ebm0/DwNsdEBGRTLHMUJXUtkp8NiYcalsF9l+4iZUHU6SOREREVCWWGapWaw8NZj4SDAD46H/ncfaqTuJEREREd2KZobt6MsIPA9t7orRcxJQN0cgvLpM6EhERUSUsM3RXgiBgwYgO8NKqkZKZjzk7zkkdiYiIqBJJy8z8+fPRpUsXaDQauLu7Y9iwYUhIqHxtk8jISDz88MNo2rQpBEFATEyMNGEbMRcHO3w6KgwKAdh88ip2nL4udSQiIiIjSctMVFQUJk6ciKNHj2LXrl0oKyvDgAEDkJ+fb1wnPz8fPXv2xIIFCyRMSl0D3DCpXyAA4J3Is0jNLpA4ERER0W2CKKNbJN+8eRPu7u6IiopC7969K7126dIl+Pv7Izo6GmFhYTV+T71eD61WC51OB2dnZxMnblzKyg0Y9fVRnLx8C+F+Ltj0cnfYKnmmkoiITK8239+y+ibS6W7PlnF1da3zexQXF0Ov11d6kGnYKBX4dFQYNGobRF/JwZLdiVJHIiIikk+ZEUUR06ZNQ69evRASElLn95k/fz60Wq3x4evra8KU5OvqgPnDOwAAvtiXhCPJWRInIiKixk42ZWbSpEk4c+YM1q9fX6/3mTFjBnQ6nfGRmppqooRU4ZH7vDGysw9EEXh9Ywxu5ZdIHYmIiBoxWZSZyZMnY8eOHdi7dy98fHzq9V4qlQrOzs6VHmR6c4a2R0BTR6TrizB9yxnIaOgVERE1MpKWGVEUMWnSJERGRmLPnj3w9/eXMg7VgoOdDZaOCYedUoHf4m5g7bErUkciIqJGStIyM3HiRKxduxbr1q2DRqNBeno60tPTUVhYaFwnOzsbMTExiIuLAwAkJCQgJiYG6enpUsWmP4U01+KtgW0AAB/8FIeE9FyJExERUWMkaZlZvnw5dDod+vbtCy8vL+Nj48aNxnV27NiB8PBwDB48GAAwevRohIeH48svv5QqNv3Ncz390SeoGYrLDJiyPhpFpeVSRyIiokZGVteZMQde
Z8b8buYWY9CSA8jMK8Yz3VvgvUfrPhuNiIgIsODrzJBlaqZRYdHIUADAt0cu47dzPAVIREQNh2WGTKJ3UDO8eP/tAdxvbTmDdF2RxImIiKixYJkhk3nz4bYIae6MnIJSvL4xBuUGqz6DSUREMsEyQyZjZ6PA0tHhcLBT4sjFLHwZlSx1JCIiagRYZsikApo5Ye7Q9gCARbsu4NSVWxInIiIia8cyQyb3eCcfDAn1RrlBxGsboqEvKpU6EhERWTGWGTI5QRAw77EQ+DSxR2p2Id7dGsvbHRARkdmwzJBZOKttsWR0OJQKATtOX8eWU9ekjkRERFaKZYbMplOLJni9f2sAwKztsUjJzJc4ERERWSOWGTKrCX0D0S3AFQUl5ZiyPholZQapIxERkZVhmSGzUioELB4VBhcHW5y9psMnvyVIHYmIiKwMywyZnZfWHgtH3AcA+Gr/Rey/cFPiREREZE1YZqhBPNzeE2O7+QEApm06jcy8YokTERGRtWCZoQbz7uBgBHk4ITOvGG9uPs3p2kREZBIsM9Rg1LZKfDamI1Q2CuxNuInVhy5JHYmIiKwAyww1qDaeGrw7uB0AYMGv5xF7TSdxIiIisnQsM9TgxnZrgYeCPVBSbsCUDdEoKCmTOhIREVkwlhlqcIIgYOGI++DhrMLFm/l478c4qSMREZEFY5khSbg62mHxqDAIArDhj1T8fCZN6khERGShWGZIMj1aNcWrfVsBAN6OPIOrtwokTkRERJaIZYYkNbV/EMJ8XZBbVIapG2JQVs7bHRARUe2wzJCkbJUKLB0dDieVDU5cvoXP9iRJHYmIiCwMywxJzs/NAfMeCwEAfLYnEcdTsiVOREREloRlhmTh0bDmGNHRBwYRmLohGrqCUqkjERGRhWCZIdmY+2h7tHRzwHVdEd6OPMPbHRARUY2wzJBsOKls8NmYjrBVCvg1Nh0b/kiVOhIREVkAlhmSlQ4+Wrz5cBsAwNwfzyHxRq7EiYiISO5YZkh2XugVgPtbN0VRqQGT10ejqLRc6khERCRjLDMkOwqFgE9GhsLN0Q7n03Ox4NfzUkciIiIZY5khWXLXqPF/T4QCANYcvoTf429InIiIiOSKZYZkq19bdzzX0x8A8OYPZ5ChL5I4ERERyRHLDMna9EFtEOzljOz8EkzbdBoGA6drExE1hHKDiCPJWdgecw1HkrNQLuO/vzZSByC6G5WNEkvHhGPIZwdxMCkTXx+4iFf6tJI6FhGRVdsZm4a5P8YhTffXEXEvrRqzhwRjYIiXhMmqxiMzJHuB7k6YMzQYAPB//0tATGqOtIGIiKzYztg0TFh7qlKRAYB0XREmrD2FnbFpEiWrHssMWYSRnX0xuIMXygwiXtsQjbziMqkjERFZnXKDiLk/xqGqE0oVy+b+GCe7U04sM2QRBEHAh8M7oLmLPS5nFWDWtlipIxERWZ3jKdl3HJH5OxFAmq5IdjcEZpkhi6G1t8WS0WFQCEBk9DVsjb4qdSQiIquSkVuzWaM1Xa+hsMyQRenc0hWvPRgEAHh3aywuZ+VLnIiIyHq4a9QmXa+hSFpm5s+fjy5dukCj0cDd3R3Dhg1DQkJCpXVEUcScOXPg7e0Ne3t79O3bF+fOnZMoMcnBpAcCEdHSFfkl5ZiyIQal5QapIxERWYUIf1d4adUQqnldwO1ZTRH+rg0Z654kLTNRUVGYOHEijh49il27dqGsrAwDBgxAfv5f/9r+6KOPsGjRInz++ef4448/4OnpiYceegi5ubwBYWOlVAhYPDoMzmobnE7NwaJdF6SORERkFZQKAbOH3J49+s9CU/F89pBgKBXV1R1pCKIoymZI8s2bN+Hu7o6oqCj07t0boijC29sbU6dOxfTp0wEAxcXF8PDwwMKFC/Hyyy/f8z31ej20Wi10Oh2cnZ3N/StQA/r1bBomfH8KggCsfb4regY2lToSEZFVkMN1Zmrz/S2rMTM6nQ4A4Op6+/BVSkoK0tPTMWDAAOM6
KpUKffr0weHDh6t8j+LiYuj1+koPsk6DOnhhTIQfRBF4fWMMsvKKpY5ERGQVBoZ44eD0B/DfZzobl+2c2luWF8wDZFRmRFHEtGnT0KtXL4SEhAAA0tPTAQAeHh6V1vXw8DC+9k/z58+HVqs1Pnx9fc0bnCQ165FgBLo7ISO3GNO3nIGMDjQSEVk0pUJA/2APNHVSAQAuZcp3woVsysykSZNw5swZrF+//o7XBKHyuTlRFO9YVmHGjBnQ6XTGR2pqqlnykjzY2ymxdHQ47JQK7I7PwLdHLksdiYjIqgS6OwIAkjLyJE5SPVmUmcmTJ2PHjh3Yu3cvfHx8jMs9PT0B4I6jMBkZGXccramgUqng7Oxc6UHWLdjbGf/5V1sAwLxf4hGfxlOLRESmEujuBABIuskyUyVRFDFp0iRERkZiz5498Pf3r/S6v78/PD09sWvXLuOykpISREVFoUePHg0dl2RsXI+WeLCtO0rKDJi8PhqFJeVSRyIisgqtmt0uM8nWdmQmOTkZ7777LsaMGYOMjAwAwM6dO2t9/ZeJEydi7dq1WLduHTQaDdLT05Geno7CwkIAt08vTZ06FR9++CG2bt2K2NhYjB8/Hg4ODnjyySfrEp2slCAI+Ojx++CuUSEpIw/v/xwndSQiIqtglUdmoqKi0KFDBxw7dgyRkZHIy7v9y505cwazZ8+u1XstX74cOp0Offv2hZeXl/GxceNG4zpvvfUWpk6dildffRWdO3fGtWvX8Ntvv0Gj0dQ2Olk5NycVFo0MgyAA645dkeWdXYmILE1FmbmcVYCSMnlepLTWZebtt9/GBx98gF27dsHOzs64vF+/fjhy5Eit3ksUxSof48ePN64jCALmzJmDtLQ0FBUVISoqyjjbieiferVuipd7twIATN9yFtdzCiVORERk2Tyd1XBS2aDcIMr2FjK1LjNnz57FY489dsfyZs2aISsryyShiOrjjQFBCPXRQldYiqkbY2R3q3oiIksiCAJaNZP3jKZalxkXFxekpd15+D46OhrNmzc3SSii+rBVKrB0TDgc7ZQ4npKNL/YmSR2JiMiitaoYN2MtZebJJ5/E9OnTkZ6eDkEQYDAYcOjQIfz73//GM888Y46MRLXWws0R7w+7fTry090XcOJStsSJiIgsl3FGk0wHAde6zMybNw9+fn5o3rw58vLyEBwcjN69e6NHjx549913zZGRqE6Gd/TBY+HNYRCB1zbEQFdYKnUkIiKLJPcZTbUqM6Io4vr161ixYgUSExOxadMmrF27FufPn8d3330HpVJprpxEdfLeo+3h5+qAazmF+M/Ws7zdARFRHVSUmeSMfBhkOA7RpjYri6KI1q1b49y5c2jdujUCAgLMlYvIJDRqWywdE47Hlx/Gz2fS0Kd1M4zswvt1ERHVRgtXB9gqBRSWluO6rhA+TRykjlRJrY7MKBQKtG7dmrOWyKKE+brgjQFtAACzd5yT7QA2IiK5slEq0NJNvjOaaj1m5qOPPsKbb76J2NhYc+QhMouXewegZ6AbCkvLMWV9NIrLeLsDIqLaCJTxjKZal5mxY8fi+PHjCA0Nhb29PVxdXSs9iORIoRCwaGQYmjjYIi5Nj492JkgdiYjIovw1o0l+F86r1ZgZAPj000/NEIPI/Dyc1fj48VC88O0JrDyYgl6tm6JfG3epYxERWYS/BgHL78hMrcvMuHHjzJGDqEH0D/bA+B4tsebwJfx702n8OvV+uGvUUsciIpI9OU/PrnWZAYDy8nJs27YN8fHxEAQBwcHBGDp0KKdmk0V4e1BbHL2YhfPpuXhj02l882wEFApB6lhERLIW8OctDbLzS5CdXwJXR7t7/ETDqfWYmaSkJLRr1w7PPPMMIiMj8cMPP2Ds2LFo3749kpOTzZGRyKTUtkp8NiYcalsFDiRmYuXBFKkjERHJnoOdDZq72AOQ3yDgWpeZKVOmoFWrVkhNTcWpU6cQHR2NK1euwN/fH1OmTDFHRiKTa+2h
wcxHggEAH/3vPM5e1UmciIhI/uQ6o6nWZSYqKgofffRRpZlLbm5uWLBgAaKiokwajsicnozww8D2nigtFzFlQzTyi8ukjkREJGtyvUdTrcuMSqVCbm7uHcvz8vJgZyef82dE9yIIAhaM6AAvrRopmfmYveOc1JGIiGTNao7MPPLII3jppZdw7NgxiKIIURRx9OhRvPLKKxg6dKg5MhKZjYuDHT4dFQaFAPxw8iq2x1yTOhIRkWxZTZlZunQpWrVqhe7du0OtVkOtVqNnz54IDAzEkiVLzJGRyKy6BrhhUr9AAMC7W2ORml0gcSIiInmqKDPXcgpRUCKfU/O1nprt4uKC7du3IykpCfHx8RBFEcHBwQgMDDRHPqIGMeXB1jiUnIWTl29hyoZobHq5O2yVte76RERWzdXRDq6OdsjOL8HFm/kIaa6VOhKAOhyZqRAYGIghQ4Zg6NChLDJk8WyUCiwZHQaN2gbRV3KwZHei1JGIiGQpUIaDgGtdZh5//HEsWLDgjuUff/wxnnjiCZOEIpKCTxMHzB/eAQDwxb4kHEnm3eGJiP6plbv87p5dp6nZgwcPvmP5wIEDsX//fpOEIpLKI/d5Y1RnX4gi8PrGGNzKL5E6EhGRrFRMz7boMlPdFGxbW1vo9XqThCKS0uyhwQho5oh0fRHe2nIGoihKHYmISDbkOKOp1mUmJCQEGzduvGP5hg0bEBwcbJJQRFJysLPB0tHhsFMqsCvuBtYeuyJ1JCIi2agoM5ey8lFWbpA4zW21ns00c+ZMjBgxAsnJyXjggQcAAL///jvWr1+PzZs3mzwgkRRCmmvx1sA2+ODneHzwUxwiWrqijadG6lhERJLz1trD3laJwtJyXM4uMJ52klKtj8wMHToU27ZtQ1JSEl599VW88cYbuHr1Knbv3o1hw4aZISKRNJ7r6Y++bZqhuMyAyetPoai0XOpIRESSUygE4yDgZJmcaqrT1OzBgwfj0KFDyM/PR2ZmJvbs2YM+ffqYOhuRpBQKAf/3RCiaOqlw4UYe5v0cL3UkIiJZqJienSST6dm1LjOpqam4evWq8fnx48cxdepUfP311yYNRiQHTZ1UWDQyFADw3dHL+O1cusSJiIikJ7cZTbUuM08++ST27t0LAEhPT0f//v1x/Phx/Oc//8F7771n8oBEUusd1Awv3u8PAHhryxmk6QolTkREJK2KQcAWe5opNjYWERERAIBNmzahQ4cOOHz4MNatW4c1a9aYOh+RLLz5cFt0aK5FTkEpXt8Yg3IDp2sTUeNlLDM382Vx+Ypal5nS0lKoVCoAwO7du413ym7bti3S0tJMm45IJuxsFFg6JhwOdkocvZiNL6OSpY5ERCSZFm6OUCoE5BWXIV1fJHWc2peZ9u3b48svv8SBAwewa9cuDBw4EABw/fp1uLm5mTwgkVz4N3XE3KHtAQCLdl3AqSu3JE5ERCQNOxsFWrg5AACSM/IlTlOHMrNw4UJ89dVX6Nu3L8aMGYPQ0NuDI3fs2GE8/URkrR7v5IMhod4oN4iYsj4a+qJSqSMREUnCOKMpI1fiJHW4aF7fvn2RmZkJvV6PJk2aGJe/9NJLcHBwMGk4IrkRBAHzHgtB9JVbuHqrEO9ujcWS0WEQBEHqaEREDaqVuxMQd0MW07PrdJ0ZpVJZqcgAQMuWLeHu7m6SUERy5qy2xdIx4VAqBOw4fR1bTl2TOhIRUYMLaHr7wnlHL2bhSHKWpBMj6lRmiBq7jn5N8Hr/1gCAWdtjcVEG/zIhImooO2PTsODX8wCApIx8jFlxFL0W7sHOWGkmArHMENXRhL6B6BbgioKScry2IQYlZfK44RoRkTntjE3DhLWnkJVfUml5uq4IE9aekqTQsMwQ1ZFSIeDTUeFwcbDF2Ws6/N9vCVJHIiIyq3KDiLk/xqGqE0oVy+b+GNfgp5wkLTP79+/HkCFD4O3tDUEQsG3btkqv37hxA+PHj4e3tzccHBwwcOBAJCYmShOWqAqe
WjU+GnEfAODr/Rex/8JNiRMREZnP8ZRspOmqv66MCCBNV4TjKdkNFwo1nM20dOnSGr/hlClTarxufn4+QkND8eyzz2LEiBGVXhNFEcOGDYOtrS22b98OZ2dnLFq0CP3790dcXBwcHR1rvB0icxrQ3hNju/lh7dErmLbpNHZOvR9NnVRSxyIiMrmM3JpdIK+m65lKjcrM4sWLKz2/efMmCgoK4OLiAgDIycmBg4MD3N3da1VmBg0ahEGDBlX5WmJiIo4ePYrY2Fi0b3/7QmXLli2Du7s71q9fjxdeeKHG2yEyt3cHB+N4SjYu3MjDvzefxqpxXaBQcLo2EVkXd43apOuZSo1OM6WkpBgf8+bNQ1hYGOLj45GdnY3s7GzEx8ejY8eOeP/9900WrLi4GACgVv+1Q5RKJezs7HDw4MG7/pxer6/0IDI3ta0Sn43pCJWNAvsSbmL14UtSRyIiMrkIf1d4adWo7p9qAgAvrRoR/q4NGav2Y2ZmzpyJzz77DG3atDEua9OmDRYvXox3333XZMHatm2LFi1aYMaMGbh16xZKSkqwYMECpKen3/UeUPPnz4dWqzU+fH19TZaJ6G7aeGrw7uB2AICFv55H7DWdxImIiExLqRAwe0hwla9VFJzZQ4KhbOAj07UuM2lpaSgtvfMS7uXl5bhx44ZJQgGAra0ttmzZggsXLsDV1RUODg7Yt28fBg0aBKVSWe3PzZgxAzqdzvhITU01WSaiexnbrQUeCvZASbkBUzZEo6CkTOpIREQmNTDEC8vHdoS9XeXvYk+tGsvHdsTAEK8Gz1TrMvPggw/ixRdfxIkTJ4y3/T5x4gRefvll9O/f36ThOnXqhJiYGOTk5CAtLQ07d+5EVlYW/P39q/0ZlUoFZ2fnSg+ihiIIAj4acR88ndW4eDMfc3fESR2JiMjkBoZ4wVNze6LDS/f7Y/2L3XBw+gOSFBmgDmVm1apVaN68OSIiIqBWq6FSqdC1a1d4eXnhv//9rzkyQqvVolmzZkhMTMSJEyfw6KOPmmU7RKbQxNEOi0aFQhCAjSdS8fMZaa6ISURkLjf0RUjJKoBCACY+0BrdW7k1+Kmlv6v1jSabNWuGX375BRcuXMD58+chiiLatWuHoKCgWm88Ly8PSUlJxucpKSmIiYmBq6sr/Pz8sHnzZjRr1gx+fn44e/YsXnvtNQwbNgwDBgyo9baIGlKPVk3xat9W+GJvMt6OPINQXy18mvBGrERkHY5ezAIAtPfWQmtvK3GaOpSZCkFBQXUqMH934sQJ9OvXz/h82rRpAIBx48ZhzZo1SEtLw7Rp03Djxg14eXnhmWeewcyZM+u1TaKGMrV/EA4lZSEmNQevbYjBxpe6wUbJi24TkeU7evH2RfG6BTTsrKXqCGLFwJe7mDZtGt5//304OjoaC0d1Fi1aZLJwpqDX66HVaqHT6Th+hhpcanYB/rXkAHKLyzDlwdaY9lD9/gFARCQHD/zfPlzMzMd/n+mM/sEeZtlGbb6/a3RkJjo62jiD6dSpUxCEqs+LVbecqLHydXXAB4+F4LUNMfh8TyJ6tnJD1wA3qWMREdXZDX0RLmbmQxCALg18PZnq1KjMLFmyxNiK9u3bZ848RFbn0bDm2H8hE1tOXcXUjTH49bX74eJgJ3UsIqI6+Wu8jLMsxssANZzNFB4ejszMTABAQEAAsrKyzBqKyNrMfbQ9/Js6Ik1XhLe3nEUNzu4SEcmScbyMv3yOMteozLi4uCAlJQUAcOnSJRgMBrOGIrI2TiobLB0dDlulgJ3n0rH+OC/mSESW6difR2a6yeiUeY1OM40YMQJ9+vSBl5cXBEFA586dq70K78WLF00akMhadPDR4s2H2+DDX87jvZ/OoUvLJmjtoZE6FhFRjWXIcLwMUMMy8/XXX2P48OFISkrClClT8OKLL0Kj4R9hotp6oVcADiRm4kBiJiavj8a2iT2htq3+9hxERHJyNOX2KSY5jZcBanGdmYEDBwIATp48
iddee41lhqgOFAoBn4wMxaBPD+B8ei4W/Hoec4a2lzoWEVGNVAz+ldN4GaAOtzNYvXo1iwxRPbhr1Pi/kaEAgDWHL+H3eNPdoJWIyJyOynC8DFCHMkNE9devjTue63n7hqlv/nAGN/RFEiciIrq7DH0RLt6U33gZgGWGSDLTB7VBsJczsvNLMG1TDAwGTtcmIvmqGC8T7CWv8TIAywyRZFQ2SiwdEw57WyUOJWXhq/2cCUhE8iXXU0wAywyRpALdnTBnaDAA4JPfEhCTmiNtICKiarDMEFG1Rnb2xeAOXigziJiyPhq5RaVSRyIiquTv42UiWsprvAzAMkMkOUEQ8OHwDmjuYo8r2QWYtf2c1JGIiCqpNF7GQV7jZQCWGSJZ0NrbYsnoMCgEYGv0NWyNvip1JCIiIznewuDvWGaIZKJzS1e89mAQAODdrbG4nJUvcSIiotvkPF4GYJkhkpVJDwQioqUr8kvKMWV9NErKeFNXIpJWRm4RkmU8XgZgmSGSFaVCwOLRYdDa2+L0VR0W7bogdSQiauSOXZT3eBmAZYZIdpq72GPhiA4AgK/2J+NQUqbEiYioMas4xdRVZvdj+juWGSIZGhjihTERfhBF4PWNMcjKK5Y6EhE1Un+Nl5HnKSaAZYZItmY9EoxAdydk5BbjzR/OQBR5uwMialiVxsvI7H5Mf8cyQyRT9nZKLB0dDjsbBfacz8A3hy9JHYmIGpmK8TLtPJ3h4mAncZrqscwQyViwtzP+M6gtAODDX88jPk0vcSIiakzkPiW7AssMkcyN69ESD7Z1R0mZAZPXR6OwpFzqSETUSFjCeBmAZYZI9gRBwEeP3wd3jQpJGXl476c4qSMRUSNgKeNlAJYZIovg5qTC4lFhEARg/fEr2BmbJnUkIrJyx1MsY7wMwDJDZDF6BjbFy71bAQDe+uEMruUUSpyIiKyZpYyXAVhmiCzKGwOCEOqjhb6oDK9viEG5gdO1icg8jv45k6mrzMfLACwzRBbFVqnA0jHhcFLZ4PilbHy+J0nqSERkhW7mFiMpIw+CAHSV+XgZgGWGyOK0cHPE+8PaAwCW/H4BJy5lS5yIiKzNsZTbp5jaWsB4GYBlhsgiPRbug8fCm8MgAq9tiEF2fgmOJGdhe8w1HEnO4uknIqoXS5mSXcFG6gBEVDfvPdoeJy/fwpXsAvRY8DuKSg3G17y0asweEoyBIV4SJiQiS1UxXsYSBv8CPDJDZLE0aluMifADgEpFBgDSdUWYsPYUp3ATUa1Z2ngZgGWGyGKVG0R8e+RSla9VnGSa+2McTzkRUa1Y2ngZgGWGyGIdT8lGmq6o2tdFAGm6IuOFr4iIasLSxssALDNEFisjt/oiU5f1iIiAv+6UbSnjZQCWGSKL5a5Rm3Q9IqLMvGIkZuQBACJa8sgMEZlZhL8rvLRqCHdZx0llgy4tmzRYJiKybBVHZdp6atDE0TLGywASl5n9+/djyJAh8Pb2hiAI2LZtW6XX8/LyMGnSJPj4+MDe3h7t2rXD8uXLpQlLJDNKhYDZQ4IBoNpCk1dchre2nEFJmaGaNYiI/mJJ92P6O0nLTH5+PkJDQ/H5559X+frrr7+OnTt3Yu3atYiPj8frr7+OyZMnY/v27Q2clEieBoZ4YfnYjvDUVj6V5KVV48kIPygVAiJPXcO4VcehKyiVKCURWQpLLTOSXjRv0KBBGDRoULWvHzlyBOPGjUPfvn0BAC+99BK++uornDhxAo8++mgDpSSSt4EhXngo2BPHU7KRkVsEd40aEf6uUCoEDGjvgYnfn8KRi1kYvvwQ1jwbAV9XB6kjE5EM/X28jKVcX6aCrMfM9OrVCzt27MC1a9cgiiL27t2LCxcu4OGHH5Y6GpGsKBUCurdyw6NhzdG9lRuUitsnnvq2ccfmV3rA01mN5Jv5eGzZIURfuSVxWiKSm3KDiO+OXAYA+Daxh7O9rcSJakfWZWbp0qUIDg6Gj48P7OzsMHDgQCxbtgy9evWq
9meKi4uh1+srPYgas2BvZ2yb2BPBXs7IzCvB6K+P8srARGS0MzYNvRbuwZLfEwEAqbcK0WvhHov6OyH7MnP06FHs2LEDJ0+exCeffIJXX30Vu3fvrvZn5s+fD61Wa3z4+vo2YGIiefLUqrHple7o16YZissMmPD9Kfz3wEWIIq8OTNSY7YxNw4S1p+64AKel3RJFEGXy10wQBGzduhXDhg0DABQWFkKr1WLr1q0YPHiwcb0XXngBV69exc6dO6t8n+LiYhQXFxuf6/V6+Pr6QqfTwdnZ2ay/A5HclZUbMOfHc1h79AoA4OluLTB7SDBslLL+dw0RmUG5QUSvhXuqvZK4gNv/EDo4/QHjqeuGpNfrodVqa/T9Ldu/YKWlpSgtLYVCUTmiUqmEwVD9NFOVSgVnZ+dKDyK6zUapwPuPhuCdf7WDIADfHb2Ml747ifziMqmjEVEDs6Zbokg6mykvLw9JSUnG5ykpKYiJiYGrqyv8/PzQp08fvPnmm7C3t0eLFi0QFRWFb7/9FosWLZIwNZFlEwQBL/YOgE8Te0zdGIM95zPwxJdHsGp8lzumeBOR9bKmW6JIemTmxIkTCA8PR3h4OABg2rRpCA8Px6xZswAAGzZsQJcuXfDUU08hODgYCxYswLx58/DKK69IGZvIKgzq4IUNL3WDm6Md4tL0eGzZIcSnccA8UWNhTbdEkc2YGXOpzTk3osboSlYBnl1zHMk38+GkssEXT3VEn6BmUsciIjM7npKNkV8dqfZ1jpkhIovh5+aAyAk90S3AFXnFZXhuzR9Yd+yK1LGIyIzOp+vx4rcnjM//WVUqns8eEixJkaktlhkigtbBFt8+1xXDw5uj3CDiP1vPYv6v8TAYrPrALVGjdCWrAE+vPA5dYSk6+rlgyeiwO8bLeWrVWD62IwaGeEmUsnYkHQBMRPJhZ6PAJyND4efmgE93J+KrqIu4ml2IT0aGQm2rlDoeEZlAhr4IY1cew83cYrT11GD1+AhoHWzxyH3eVd4SxVKwzBCRkSAImNo/CL5NHPB25Bn8fDYNabpCrHimM9ycVFLHI6J60BWU4plVx3EluwB+rg749rnbRQb465YoloqnmYjoDiM6+eDb57rCWW2DU1dyMHz5YSTfzJM6FhHVUUFJGZ5dcxzn03PRTKPC2ue7wt1Z/rOUaoplhoiq1L2VGyJf7QFfV3tczirA8GWHcexiltSxiKiWSsoMeGXtKZy6kgNntQ2+ez4Cfm4OUscyKZYZIqpWoLsGW1/tiTBfF+gKS/H0yuPYHnNN6lhEVEPlBhHTNsVg/4WbsLdVYvWzEWjraX2XKWGZIaK7auqkwvoXu2Fge0+UlBvw2oYYfPZ7Im9SSSRzoihi1vZY/HQmDbZKAV8+3QmdWjSROpZZsMwQ0T3Z2ymx7KmOePF+fwDAJ7su4K0fzqC0vPr7pBGRtP7vtwR8f+wKBAFYNDLMqi+GyTJDRDWiUAh4Z3Aw3h8WAoUAbD55FeNX375WBRHJy38PXMQXe5MBAB8MC8GQUG+JE5kXywwR1crT3Vpg5bgucLBT4lBSFh5ffhhXbxVIHYuI/rTpRCo++DkeAPDmw23wVNcWEicyP5YZIqq1fm3dsenl7vBwViExIw/DvjiMM1dzpI5F1OjtjE3H21vOAABevN8fr/ZtJXGihsEyQ0R1EtJci20Te6KtpwaZecUY9dVR/HYuXepYRI3W4aRMTFkfDYMIjOzsg//8qx0EwXKu4lsfLDNEVGdeWntsfqU7+gQ1Q2FpOV5eexKrDqZIHYuo0TmdmoMXvz2BknIDHm7vgQ8f69BoigzAMkNE9aRR22LluM54sqsfRBF476c4zNlxDuW8SSVRg0jKyMX41ceRX1KOHq3csGR0OGyUjevrvXH9tkRkFjZKBeYNC8GMQW0BAGsOX8LL351AQUmZxMmIrNvVWwUY+9/juFVQilAfLb5+pnOjvDEsywwRmYQgCHi5Tyt88WRH2NkosDs+A6O+
OooMfZHU0YisUmZeMZ5eeRzp+iIEujth9bMRcFI1zvtHs8wQkUkNvs8L61/sBldHO5y9psNjyw4jIT1X6lhEVkVfVIpnVh5HSmY+mrvY47vnI+DqaCd1LMmwzBCRyXVq0QRbX+2BgKaOuJZTiMeXH8aBxJtSxyKyCkWl5XhhzQnEpenh5miH756PgJfWXupYkmKZISKzaOHmiMhXeyCipStyi8vw7Oo/sPGPK1LHIrJopeUGTPz+FI5fyoZGZYNvnotAQDMnqWNJjmWGiMzGxcEO370QgUfDvFFmEDF9y1l8/L/zMHCmE1GtGQwi3vrhDH4/nwGVjQL/HdcZIc21UseShcY5UoiIGozKRolPR4WhhasDlu5Jwhd7k3EluxAfP35fo5x1QVRT5QYRx1OykZFbBHeNCr/GpmNr9DUoFQKWPdURXQPcpI4oGywzRGR2giBg2oA28HV1wIzIs/jx9HWk5RTi62c6N+pBi0TV2Rmbhrk/xiFNd+dswE+eCMWD7TwkSCVfPM1ERA3mic6++Oa5CGjUNjhx+RaGLzuElMx8qWMRycrO2DRMWHuqyiIDAGpbfnX/E/cIETWonoFNETmhB5q72ONSVgGGLzuEE5eypY5FJAvlBhFzf4xDdaPKBABzf4zjFbb/gWWGiBpcaw8Ntk7sgVAfLW4VlOLJ/x7Dj6evSx2LSHLHU7KrPSIDACKANF0RjqfwHwB/xzJDRJJw16ix4aXuGBDsgZIyAyavj8YXe5MgivwXJzVeGbk1u2J2TddrLFhmiEgy9nZKLB/bCc/19AcAfPy/BLy95SxKyw0SJyOShrtGbdL1GguWGSKSlFIhYNaQYMwd2h4KAdh4IhXPrfkD+qJSqaMRNbgIf1d4adUQqnldAOClVSPC37UhY8keywwRycK4Hi3x9dOdYW+rxIHETDyx/Aiu5RRKHYuoQSkVAmYPCa7ytYqCM3tIMJSK6upO48QyQ0Sy0T/YA5tf6Q53jQoJN3Ix7ItDOHtVJ3UsogY1MMQLkx4IvGO5p1aN5WM7YmCIlwSp5I0XzSMiWQlprsXWiT3x3Oo/kHAjFyO/OoLPxoSjfzAvEkaNR2n57YHwvYOaYkRHH7hrbp9a4hGZqvHIDBHJTnMXe2ye0B33t26KwtJyvPTdCXxz+JLUsYgazJGLWQCAR0Ob49Gw5ujeyo1F5i5YZohIlpzVtlg1vgtGd/GFQQRm7ziH93ixMGoEcotKEXvt9unV7q14/6WaYJkhItmyVSowf3gHvDWwDQBg1aEUvLL2JApKyiRORmQ+Jy7dQrlBRAs3B3i72EsdxyKwzBCRrAmCgFf7BuKzMeGws1FgV9wNjP76KC8aRlar4hRTN38elakplhkisghDQr2x7oWuaOJgizNXdXjsi8NIvJErdSwikzuSfLvM8BRTzbHMEJHF6NzSFZGv9oR/U0dcyynE8OWHcTgpU+pYRCajKyzFuescL1NbLDNEZFH8mzoickIPdGnZBLlFZXhm1XFsPpEqdSwikziekg2DCAQ0dYSHM29ZUFOSlpn9+/djyJAh8Pb2hiAI2LZtW6XXBUGo8vHxxx9LE5iIZKGJox2+e74rhoR6o8wg4s0fzuCT3xJ4k0qyeEcrxsvwqEytSFpm8vPzERoais8//7zK19PS0io9Vq1aBUEQMGLEiAZOSkRyo7ZVYsmoMEzs1woA8NmeJLy+MQbFZeUSJyOqu4rxMt0CWGZqQ9IrAA8aNAiDBg2q9nVPT89Kz7dv345+/fohICDA3NGIyAIoFALefLgt/Fwd8M7WWGyLuY7ruiJ8/XQnuDjYSR2PqFZyCkoQn64HAHQL4I0ka8NixszcuHEDP//8M55//vm7rldcXAy9Xl/pQUTWbVQXP6x5NgIalQ2Op2Rj+LLDuJyVL3Usolo5ejEboggEujvBXcPxMrVhMWXmm2++gUajwfDhw++63vz586HVao0PX1/fBkpIRFLq1bopfpjQA95aNS5m5uOxZYdx
8vItqWMR1VjFeJnuPMVUaxZTZlatWoWnnnoKavXd2+qMGTOg0+mMj9RUznIgaizaeGqwbWJPhDR3RnZ+CcasOIqfz6RJHYuoRoyDf1lmas0iysyBAweQkJCAF1544Z7rqlQqODs7V3oQUePh7qzGppe7o387d5SUGTBx3Sl8GZXMmU4ka1l5xTiffvsikBwvU3sWUWZWrlyJTp06ITQ0VOooRGQBHOxs8NXTnTG+R0sAwIJfz+OdbbEoKzdIG4yoGsdSsgEAbTw0cHNSSZzG8khaZvLy8hATE4OYmBgAQEpKCmJiYnDlyhXjOnq9Hps3b67RURkiogpKhYA5Q9tj1iPBEARg3bEreO6bE8gtKpU6GtEdeAuD+pG0zJw4cQLh4eEIDw8HAEybNg3h4eGYNWuWcZ0NGzZAFEWMGTNGqphEZMGe6+WPr8Z2gtpWgf0XbuKJL48gTVcodSyiSo5wvEy9CKKVn0jW6/XQarXQ6XQcP0PUiJ25moPn1pxAZl4xPJxVWDW+C9p7a6WORYSbucXoMm83BAE49e5DaOLIayQBtfv+togxM0RE9XWfjwu2TeyBIA8n3NAX44kvj2Dv+QypYxEZZzG19XRmkakjlhkiajR8mjhg8ys90DPQDQUl5Xj+mz/w3ZFLUseiRu4Iry9TbywzRNSoaO1tsXp8BJ7o5AODCMzcfg7zfo6DwWDVZ9xJxo5y8G+9scwQUaNjZ6PAR4/fh38PCAIArDiQgle/P4XCEt6kkhrWDX0RLmbmQxCAiJa8vkxdscwQUaMkCAImPdAaS0aHwU6pwM5z6Riz4ihu5hZLHY0akYrxMu29naF1sJU4jeVimSGiRu3RsOZY+0JXuDjYIiY1B48tO4SkjFypY1EjYby+DMfL1AvLDBE1ehH+roic0AMt3Bxw9VYhhi87bPySITIn4+BfjpepF5YZIiIAAc2cEDmhBzq1aAJ9URmeWXUMkaeuSh2LrNj1nEJcziqAQgC6cLxMvbDMEBH9yc1Jhe9f6IrB93mhtFzEtE2nsXjXBd6kksyiYrxMh+ZaaNQcL1MfLDNERH+jtlXis9HheKVPKwDAkt8T8cam0ygp400qybQqTmV24ymmemOZISL6B4VCwNuD2mL+8A5QKgRERl/DM6uOQVfAm1SS6fBieabDMkNEVI0xEX5YNb4LnFQ2OHoxG8OXH0JqdoHUscgKpGYX4OqtQigVAsfLmADLDBHRXfQJaobNr3SHl1aN5Jv5GPbFIURfuSV1LLJwFUdl7vPRwlFlI3Eay8cyQ0R0D+28nLFtYk+093ZGVn4JRn99FL+eTZM6FlmwozzFZFIsM0RENeDhrMaml7vjgbbuKC4z4NV1p7Bi/0XOdKJaE0WR92MyMZYZIqIaclTZ4OunO+GZ7i0gisC8X+Ixc3ssyso504lq7kp2Aa7rimCrFNC5BcfLmALLDBFRLdgoFZg7tD3eHdwOggCsPXoFL357AnnFZVJHIwtRMSU7zNcF9nZKidNYB5YZIqJaEgQBL9wfgOVPdYLaVoG9CTcx8ssjSNcVSR2NLEDF4N9uHC9jMiwzRER1NDDEExte6o6mTnaIS9Nj2BeHEHddL3UskjFRFDn41wxYZoiI6iHM1wVbX+2JQHcnpOuL8MSXh7EvIUPqWCRTKZn5uKEvhp1SgY4tmkgdx2qwzBAR1ZOvqwO2vNID3QPckF9Sjue/OYF1x65IHYtkqOIUU7ifC9S2HC9jKiwzREQmoHWwxTfPRWB4x+YoN4j4z9azmP9rPAwGTt2mvxjvx8RTTCbFMkNEZCJ2Ngp88kQoXu8fBAD4KuoiJq+PRlFpucTJSGrlBhFHkjMRlXATANDVn1OyTYllhojIhARBwGv9W2PxqFDYKgX8fDYNT644iqy8YqmjkUR2xqah18I9GLPiGHL/nMI/bVMMdsbyKtKmwjJDRGQGj4X74Lvnu0Jrb4tTV3Lw2LLDSL6ZJ3UsamA7Y9MwYe0ppP1j2v4NfTEm
rD3FQmMiLDNERGbSLcANWyb0gK+rPa5kF2D4ssM49ucAULJ+5QYRc3+MQ1WjpiqWzf0xDuUcV1VvLDNERGYU6O6Era/2RJivC3SFpXh65XFsi74mdSwyA1EUcS2nEHsTMvBVVDLGrz5+xxGZSusDSNMV4XhKdsOFtFK87zgRkZk1dVJhw0vd8PrGGPwam46pG2OQml2ASQ8EQhAEqeNRLYmiiMy8Ely4kYuE9FwkZvz53xt5xjExtZGRyytH1xfLDBFRA1DbKvHFkx2xcOd5fLX/Ij7ZdQGXswvw4WMdYGfDg+RylVNQggs38nDhRq6xvFy4kYtbBaVVrm+jEBDQzBFBHho42Cqx6eTVe27DXaM2dexGh2WGiKiBKBQCZvyrHXxdHTBreyx+OHkV13MKsXxsJ2jtbaWO16jlFZch8cbtoysJfxaXCzdycUNf9Sw0QQBaujkiyMMJbTw0aO2hQRtPDVq6ORrLablBxIGkTKTriqocNyMA8NSqEcFp2vXGMkNE1MDGdmuB5k3sMen7UzicnIXHlx/GqvFd4OvqIHU0q1dUWo7km3l/HmXJQ+KNXCTcyMXVW4XV/kxzF3u08dSg9Z/FJchDg0B3p3tewVepEDB7SDAmrD0FAahUaCpOLs4eEgylgqca60sQRdGqh1Hr9XpotVrodDo4OztLHYeIyOjcdR2eX3MC6foiNHVSYeW4zgj1dZE6llUoLTfgUmb+7aMs6bnGU0WXsvJR3eQhd40KQX+WlTaeTmjtoUFrdydo1PU7arYzNg1zf4yrNBjYS6vG7CHBGBjiVa/3tma1+f5mmSEiklCarhDPrTmB+DQ91LYKLB0djgHtPaWOZTHKDSJSswuQcCP3z6MsebiQnouLmXkoLa/6683FwfZ2YfHQIMjDyVhgmjjamTXn8ZRsZOQWwV1z+9QSj8jcHcvM37DMEJHc5RWXYeL3pxB14SYEAXh3cDCe69mSM53+RhRFXNcV3R7Lkp5rHNeSlJGHolJDlT/jaKdEkKcGQe4aBHn+WV48ndDMScV9awFYZv6GZYaILEFZuQGzd5zD93/ebXtc9xaYNaR9o/vXuyiKuJlXfHsgbvpfA3HvNu1ZZaNAaw+nSqWltYcTmrvYs7RYsNp8f3MAMBGRDNgoFfhgWAhauDngw1/O45sjl3H1ViGWjgmHo8o6/1RXTHs2niKqxbTnv88g8nN1aHSljyrjkRkiIpn55WwaXt8Yg+IyA0KaO2PVuC5wd7bca5FUNe05IT0XGbn3nvb814DcytOeyfrxyAwRkQX7VwcveGrVePGbE4i9psewLw5h1bNd0NZT3v8g++e054pTRPea9hzk4fTXmJYaTnsm+jtJj8zs378fH3/8MU6ePIm0tDRs3boVw4YNq7ROfHw8pk+fjqioKBgMBrRv3x6bNm2Cn59fjbbBIzNEZKmuZBVg/JrjuHgzH04qGyx7qiN6BzWTOladpj0306iMZcWU057JelnMkZn8/HyEhobi2WefxYgRI+54PTk5Gb169cLzzz+PuXPnQqvVIj4+Hmq15R5uJSKqKT83B0RO6IGXvzuJYynZeHbNH5g3LASjI2r2j7n6spRpz0SyGTMjCMIdR2ZGjx4NW1tbfPfdd3V+Xx6ZISJLV1xWjre3nMXWP++2/WrfVvj3gDYQAZNcu8Q47fnPAbic9kxyYDFHZu7GYDDg559/xltvvYWHH34Y0dHR8Pf3x4wZM+44FfV3xcXFKC7+a1CZXq9vgLREROajslFi0chQ+Lo6YOnviVi2LxnHUrJw7VYh0v9276B7XVW2umnPF27kIe8u054D3f+8jD+nPZNMyfbITHp6Ory8vODg4IAPPvgA/fr1w86dO/Gf//wHe/fuRZ8+fap8nzlz5mDu3Ll3LOeRGSKyBj+cvIq3fjhd5diUimqxfGxHdAtwM057vvC34sJpz2QpLPKief8sM9evX0fz5s0xZswYrFu3zrje0KFD4ejoiPXr11f5PlUd
mfH19WWZISKrUG4Q0WXebmTnl1S7jkJAtQNxOe2ZLIVVnGZq2rQpbGxsEBwcXGl5u3btcPDgwWp/TqVSQaVSmTseEZEkjqdk37XIAH8VGU57psZCtmXGzs4OXbp0QUJCQqXlFy5cQIsWLSRKRUQkrYzconuvBGDhiA4Y1aVhZj0RSU3SMpOXl4ekpCTj85SUFMTExMDV1RV+fn548803MWrUKPTu3ds4ZubHH3/Evn37pAtNRCQhd03NLk3h5+po5iRE8iHpmJl9+/ahX79+dywfN24c1qxZAwBYtWoV5s+fj6tXr6JNmzaYO3cuHn300Rpvg1OzicialBtE9Fq4B+m6IlT1x1sA4KlV4+D0BzhwlyyaRQ4ANheWGSKyNjtj0zBh7SkAqFRo/j6bqbrp2USWojbf3xy6TkRkYQaGeGH52I7w1FY+5eSpVbPIUKMk2wHARERUvYEhXngo2NMkVwAmsnQsM0REFkqpENC9lZvUMYgkx9NMREREZNFYZoiIiMiiscwQERGRRWOZISIiIovGMkNEREQWjWWGiIiILBrLDBEREVk0lhkiIiKyaCwzREREZNGs/grAFffR1Ov1EichIiKimqr43q7J/bCtvszk5uYCAHx9fSVOQkRERLWVm5sLrVZ713UEsSaVx4IZDAZcv34dGo0GgmD6G7Dp9Xr4+voiNTX1nrcop7rjfjY/7uOGwf3cMLifzc/c+1gUReTm5sLb2xsKxd1HxVj9kRmFQgEfHx+zb8fZ2Zn/wzQA7mfz4z5uGNzPDYP72fzMuY/vdUSmAgcAExERkUVjmSEiIiKLxjJTTyqVCrNnz4ZKpZI6ilXjfjY/7uOGwf3cMLifzU9O+9jqBwATERGRdeORGSIiIrJoLDNERERk0VhmiIiIyKKxzBAREZFFa/RlZtmyZfD394darUanTp1w4MCBateNjIzEQw89hGbNmsHZ2Rndu3fH//73vzvW6dy5M1xcXODo6IiwsDB89913ldaZM2cOBEGo9PD09DTL7ycXpt7Pf7dhwwYIgoBhw4bVa7uWTop9zM9y/ffzmjVr7tiHgiCgqKioztu1dFLsY36WTfM3IycnBxMnToSXlxfUajXatWuHX375pc7brTGxEduwYYNoa2srrlixQoyLixNfe+010dHRUbx8+XKV67/22mviwoULxePHj4sXLlwQZ8yYIdra2oqnTp0yrrN3714xMjJSjIuLE5OSksRPP/1UVCqV4s6dO43rzJ49W2zfvr2YlpZmfGRkZJj995WKOfZzhUuXLonNmzcX77//fvHRRx+t13YtmVT7mJ/l+u/n1atXi87OzpX2YVpaWr22a8mk2sf8LNd/PxcXF4udO3cW//Wvf4kHDx4UL126JB44cECMiYmp83ZrqlGXmYiICPGVV16ptKxt27bi22+/XeP3CA4OFufOnXvXdcLDw8V3333X+Hz27NliaGhorbJaMnPt57KyMrFnz57if//7X3HcuHF3fNGaYruWQqp9zM9y/ffz6tWrRa1Wa/btWgqp9jE/y/Xfz8uXLxcDAgLEkpISs263Ko32NFNJSQlOnjyJAQMGVFo+YMAAHD58uEbvYTAYkJubC1dX1ypfF0URv//+OxISEtC7d+9KryUmJsLb2xv+/v4YPXo0Ll68WLdfRObMuZ/fe+89NGvWDM8//7xZtmsppNrHFfhZrv9+zsvLQ4sWLeDj44NHHnkE0dHRJt2upZBqH1fgZ7l++3nHjh3o3r07Jk6cCA8PD4SEhODDDz9EeXm5ybZbHau/0WR1MjMzUV5eDg8Pj0rLPTw8kJ6eXqP3+OSTT5Cfn4+RI0dWWq7T6dC8eXMUFxdDqVRi2bJleOihh4yvd+3aFd9++y2CgoJw48YNfPDBB+jRowfOnTsHNze3+v9yMmKu/Xzo0CGsXLkSMTExZtuupZBqHwP8LAP1389t27bFmjVr0KFDB+j1eixZsgQ9e/bE6dOn0bp1a36WYf59DPCzDNR/
P1+8eBF79uzBU089hV9++QWJiYmYOHEiysrKMGvWLLN+lhttmakgCEKl56Io3rGsKuvXr8ecOXOwfft2uLu7V3pNo9EgJiYGeXl5+P333zFt2jQEBASgb9++AIBBgwYZ1+3QoQO6d++OVq1a4ZtvvsG0adPq/0vJkCn3c25uLsaOHYsVK1agadOmZtmuJZJiH/OzXP+/Gd26dUO3bt2Mz3v27ImOHTvis88+w9KlS+u9XUskxT7mZ7n++9lgMMDd3R1ff/01lEolOnXqhOvXr+Pjjz/GrFmz6r3du2m0ZaZp06ZQKpV3tMGMjIw7WuM/bdy4Ec8//zw2b96M/v373/G6QqFAYGAgACAsLAzx8fGYP3++scz8k6OjIzp06IDExMS6/TIyZo79nJycjEuXLmHIkCHGZQaDAQBgY2ODhIQE+Pr61nm7lkaqfdyqVas73o+f5ard62/G3ykUCnTp0sW4D+uzXUsj1T6uCj/LVbvbfvby8oKtrS2USqVxWbt27ZCeno6SkhKzfpYb7ZgZOzs7dOrUCbt27aq0fNeuXejRo0e1P7d+/XqMHz8e69atw+DBg2u0LVEUUVxcXO3rxcXFiI+Ph5eXV83CWxBz7Oe2bdvi7NmziImJMT6GDh2Kfv36ISYmBr6+vnXeriWSah9XhZ/lO9X2b4YoioiJiTHuQ36Wzb+Pq8LP8p3utZ979uyJpKQk4z98AODChQvw8vKCnZ2deT/L9Ro+bOEqpoitXLlSjIuLE6dOnSo6OjqKly5dEkVRFN9++23x6aefNq6/bt060cbGRvziiy8qTd/LyckxrvPhhx+Kv/32m5icnCzGx8eLn3zyiWhjYyOuWLHCuM4bb7wh7tu3T7x48aJ49OhR8ZFHHhE1Go1xu9bGHPv5n6qaaXOv7VoTqfYxP8v1389z5swRd+7cKSYnJ4vR0dHis88+K9rY2IjHjh2r8XatiVT7mJ/l+u/nK1euiE5OTuKkSZPEhIQE8aeffhLd3d3FDz74oMbbratGXWZEURS/+OILsUWLFqKdnZ3YsWNHMSoqyvjauHHjxD59+hif9+nTRwRwx2PcuHHGdd555x0xMDBQVKvVYpMmTcTu3buLGzZsqLTNUaNGiV5eXqKtra3o7e0tDh8+XDx37py5f1VJmXo//1NVX7T32q61kWIf87Nc//08depU0c/PT7SzsxObNWsmDhgwQDx8+HCttmttpNjH/Cyb5m/G4cOHxa5du4oqlUoMCAgQ582bJ5aVldV4u3UliKIo1u/YDhEREZF0Gu2YGSIiIrIOLDNERERk0VhmiIiIyKKxzBAREZFFY5khIiIii8YyQ0RERBaNZYaIiIgsGssMEZnNpUuXIAiC8c7b+/btgyAIyMnJkTQXEVkXlhkiajA9evRAWloatFqt1FGIyIqwzBBRg7Gzs4OnpycEQZA6So2VlJRIHYGI7oFlhojqxWAwYOHChQgMDIRKpYKfnx/mzZtX5br/PM20Zs0auLi4YNu2bQgKCoJarcZDDz2E1NTUardXUlKCSZMmwcvLC2q1Gi1btsT8+fONr+fk5OCll16Ch4cH1Go1QkJC8NNPPxlf37JlC9q3bw+VSoWWLVvik08+qfT+LVu2xAcffIDx48dDq9XixRdfBAAcPnwYvXv3hr29PXx9fTFlyhTk5+fXdbcRkQmxzBBRvcyYMQMLFy7EzJkzERcXh3Xr1sHDw6PGP19QUIB58+bhm2++waFDh6DX6zF69Ohq11+6dCl27NiBTZs2ISEhAWvXrkXLli0B3C5WgwYNwuHDh7F27VrExcVhwYIFUCqVAICTJ09i5MiRGD16NM6ePYs5c+Zg5syZWLNmTaVtfPzxxwgJCcHJkycxc+ZMnD17Fg8//DCGDx+OM2fOYOPGjTh48CAmTZpU6/1FRGZQ71tVElGjpdfrRZVKJa5YsaLK11NSUkQAYnR0tCiKorh3714RgHjr1i1RFEVx9erVIgDx6NGjxp+Jj48X
AYjHjh2r8j0nT54sPvDAA6LBYLjjtf/973+iQqEQExISqvzZJ598UnzooYcqLXvzzTfF4OBg4/MWLVqIw4YNq7TO008/Lb700kuVlh04cEBUKBRiYWFhldsioobDIzNEVGfx8fEoLi7Ggw8+WOf3sLGxQefOnY3P27ZtCxcXF8THx1e5/vjx4xETE4M2bdpgypQp+O2334yvxcTEwMfHB0FBQdXm7dmzZ6VlPXv2RGJiIsrLy43L/p4HuH1EZ82aNXBycjI+Hn74YRgMBqSkpNT6dyYi07KROgARWS57e3uTvE9VA4KrGyTcsWNHpKSk4Ndff8Xu3bsxcuRI9O/fHz/88MM984iieMf7iqJ4x3qOjo6VnhsMBrz88suYMmXKHev6+fnddZtEZH48MkNEdda6dWvY29vj999/r/N7lJWV4cSJE8bnCQkJyMnJQdu2bav9GWdnZ4waNQorVqzAxo0bsWXLFmRnZ+O+++7D1atXceHChSp/Ljg4GAcPHqy07PDhwwgKCjKOq6lKx44dce7cOQQGBt7xsLOzq+VvTESmxiMzRFRnarUa06dPx1tvvQU7Ozv07NkTN2/exLlz5/D888/X6D1sbW0xefJkLF26FLa2tpg0aRK6deuGiIiIKtdfvHgxvLy8EBYWBoVCgc2bN8PT0xMuLi7o06cPevfujREjRmDRokUIDAzE+fPnIQgCBg4ciDfeeANdunTB+++/j1GjRuHIkSP4/PPPsWzZsrtmnD59Orp164aJEyfixRdfhKOjI+Lj47Fr1y589tlntd5vRGRaLDNEVC8zZ86EjY0NZs2ahevXr8PLywuvvPJKjX/ewcEB06dPx5NPPomrV6+iV69eWLVqVbXrOzk5YeHChUhMTIRSqUSXLl3wyy+/QKG4faB5y5Yt+Pe//40xY8YgPz8fgYGBWLBgAYDbR1g2bdqEWbNm4f3334eXlxfee+89jB8//q4Z77vvPkRFReGdd97B/fffD1EU0apVK4waNarGvycRmY8gVnXCmIioAaxZswZTp07l7Q2IqF44ZoaIiIgsGssMERERWTSeZiIiIiKLxiMzREREZNFYZoiIiMiiscwQERGRRWOZISIiIovGMkNEREQWjWWGiIiILBrLDBEREVk0lhkiIiKyaCwzREREZNH+H8K2bvrmBMC0AAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 640x480 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# plot CLIP score (x-axis) against FID (y-axis) as a line chart\n",
    "import matplotlib.pyplot as plt\n",
    "plt.plot(clip_scores, fids, 'o-')\n",
    "plt.xlabel('CLIP score')\n",
    "plt.ylabel('FID')\n",
    "plt.show()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.13 ('base')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "4cc247672a8bfe61dc951074f9ca89ab002dc0f7e14586a8bb0828228bebeefa"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
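
The notebook above plots CLIP score against FID for each guidance-scale setting. FID itself is the Fréchet distance between two Gaussians fitted to Inception activations; below is a minimal pure-NumPy sketch of that distance, using the identity `trace(sqrtm(S1 @ S2)) == sum(sqrt(eigvals(S1 @ S2)))` for PSD covariances instead of an explicit matrix square root. The toy 8-d random features stand in for real 2048-d Inception activations, and `frechet_distance`, `act_real`, `act_fake` are illustrative names, not code from this repo:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    Uses trace(sqrtm(sigma1 @ sigma2)) == sum(sqrt(eigvals(sigma1 @ sigma2))),
    which holds for positive semi-definite covariance matrices.
    """
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    # eigenvalues are real and non-negative up to round-off; clip before sqrt
    tr_covmean = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_covmean)

# toy "activations": two clouds of 8-d features with shifted means
rng = np.random.default_rng(0)
act_real = rng.normal(0.0, 1.0, size=(500, 8))
act_fake = rng.normal(0.5, 1.0, size=(500, 8))

mu_r, sig_r = act_real.mean(axis=0), np.cov(act_real, rowvar=False)
mu_f, sig_f = act_fake.mean(axis=0), np.cov(act_fake, rowvar=False)

fid = frechet_distance(mu_r, sig_r, mu_f, sig_f)
print(f"toy FID: {fid:.3f}")
```

`pytorch_fid`'s `calculate_fid_given_paths`, used in these notebooks, computes the same quantity from Inception pool3 features (`dims=2048`) extracted from the two image folders.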


================================================
FILE: text-image/fid_clip_score/fid_clip_coco_cn.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load data\n",
    "import json\n",
    "import pandas as pd\n",
    "import os\n",
    "\n",
    "coco_path = '../dataset/coco'\n",
    "data_file = f'{coco_path}/coco-cn-version1805v1.1/imageid.human-written-caption.txt'\n",
    "\n",
    "df = pd.read_table(data_file, sep='\\t', header=None, names=['file_name', 'caption'])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df['file_name'] = df['file_name'].apply(lambda x: os.path.join(x.split('_')[1], x.split('#')[0]+'.jpg'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# shuffle the dataset\n",
    "df = df.sample(frac=1)\n",
    "\n",
    "\n",
    "# remove duplicate images\n",
    "df = df.drop_duplicates(subset='file_name')\n",
    "\n",
    "\n",
    "# create a random subset\n",
    "n_samples = 1000\n",
    "df_sample = df.sample(n_samples)\n",
    "# save the sample to a parquet file\n",
    "df_sample.to_parquet(f'{coco_path}/subset_cn.parquet')\n",
    "\n",
    "\n",
    "# copy the images to the reference folder\n",
    "from pathlib import Path\n",
    "import shutil\n",
    "subset_path = Path(f'{coco_path}/subset_cn')\n",
    "subset_path.mkdir(exist_ok=True)\n",
    "for i, row in df_sample.iterrows():\n",
    "    path = f'{coco_path}/' + row['file_name']\n",
    "    shutil.copy(path, f'{coco_path}/subset_cn/')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# center crop the images\n",
    "def center_crop_images(folder, output_folder, size):\n",
    "    # coco images are not square, so we need to center crop them\n",
    "    from PIL import Image\n",
    "    import os\n",
    "    os.makedirs(output_folder, exist_ok=True)\n",
    "    for file in os.listdir(folder):\n",
    "        image_path = os.path.join(folder, file)\n",
    "        image = Image.open(image_path)\n",
    "        width, height = image.size\n",
    "        left = (width - size) / 2 if width > size else 0\n",
    "        top = (height - size) / 2 if height > size else 0\n",
    "        right = (width + size) / 2 if width > size else width\n",
    "        bottom = (height + size) / 2 if height > size else height\n",
    "        image = image.crop((left, top, right, bottom))\n",
    "        image = image.resize((size, size))  # resize non-square images\n",
    "        image.save(os.path.join(output_folder, file))\n",
    "\n",
    "folder_name = '../dataset/coco/subset_cn'\n",
    "output_folder = '../dataset/coco/subset_cn_cropped'\n",
    "center_crop_images(folder_name, output_folder, 320)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ⬆️ preprocess the data · ⬇️ load the preprocessed subset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load the subset\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "import pandas as pd \n",
    "\n",
    "class COCOCaptionSubset(Dataset):\n",
    "    def __init__(self, path, transform=None):\n",
    "        self.df = pd.read_parquet(path)\n",
    "        self.df['file_name'] = self.df['file_name'].apply(lambda x: x.replace('/', '_'))\n",
    "        self.transform = transform  # unused here: items are (file_name, caption) pairs\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.df)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        row = self.df.iloc[idx]\n",
    "        \n",
    "        return row['file_name'], row['caption']\n",
    "\n",
    "# testing \n",
    "coco_path = '../dataset/coco'\n",
    "coco_cache_file = f'{coco_path}/subset_cn.parquet'     # sampled subsets\n",
    "cocosubset = COCOCaptionSubset(coco_cache_file)\n",
    "cocosubsetloader = DataLoader(cocosubset, batch_size=16, shuffle=False, num_workers=8)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cocosubset[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from diffusers import StableDiffusionPipeline, DDIMScheduler\n",
    "stable_diffusion = StableDiffusionPipeline.from_pretrained(\"../pretrained_models/stable_cn\").to('cuda')\n",
    "# prompt: \"ancient road, west wind, a lean horse; a Chinese painting\"\n",
    "out = stable_diffusion('古道西风瘦马,中国画')\n",
    "out[0][0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# After generation by `run_generator_cn.sh`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/tiger/anaconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.\n",
      "  warnings.warn(\n",
      "/home/tiger/anaconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=None`.\n",
      "  warnings.warn(msg)\n",
      "100%|██████████| 5/5 [00:10<00:00,  2.07s/it]\n",
      "100%|██████████| 5/5 [00:04<00:00,  1.08it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.37982094 0.32691598 0.32423221 ... 0.46967193 0.41200255 0.40518777] (2048,)\n",
      "./output_cn/gs1.5_ss20 82.83122165497986\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 5/5 [00:02<00:00,  1.67it/s]\n",
      "  0%|          | 0/5 [00:01<?, ?it/s]\n"
     ]
    },
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "\u001b[0;32m/tmp/ipykernel_253571/2121088339.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m     12\u001b[0m \u001b[0mfids\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     13\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0moutput_path\u001b[0m \u001b[0;32min\u001b[0m \u001b[0moutput_paths\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 14\u001b[0;31m     \u001b[0mfid_value\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcalculate_fid_given_paths\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mcoco_subset_crop_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutput_path\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m200\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdevice\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mdevice\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdims\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m2048\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnum_workers\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m8\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     15\u001b[0m     \u001b[0mfids\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mappend\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfid_value\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     16\u001b[0m     \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0moutput_path\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfid_value\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/pytorch_fid/fid_score.py\u001b[0m in \u001b[0;36mcalculate_fid_given_paths\u001b[0;34m(paths, batch_size, device, dims, num_workers)\u001b[0m\n\u001b[1;32m    257\u001b[0m     m1, s1 = compute_statistics_of_path(paths[0], model, batch_size,\n\u001b[1;32m    258\u001b[0m                                         dims, device, num_workers)\n\u001b[0;32m--> 259\u001b[0;31m     m2, s2 = compute_statistics_of_path(paths[1], model, batch_size,\n\u001b[0m\u001b[1;32m    260\u001b[0m                                         dims, device, num_workers)\n\u001b[1;32m    261\u001b[0m     \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mm1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mm1\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mshape\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/pytorch_fid/fid_score.py\u001b[0m in \u001b[0;36mcompute_statistics_of_path\u001b[0;34m(path, model, batch_size, dims, device, num_workers)\u001b[0m\n\u001b[1;32m    239\u001b[0m         files = sorted([file for ext in IMAGE_EXTENSIONS\n\u001b[1;32m    240\u001b[0m                        for file in path.glob('*.{}'.format(ext))])\n\u001b[0;32m--> 241\u001b[0;31m         m, s = calculate_activation_statistics(files, model, batch_size,\n\u001b[0m\u001b[1;32m    242\u001b[0m                                                dims, device, num_workers)\n\u001b[1;32m    243\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/pytorch_fid/fid_score.py\u001b[0m in \u001b[0;36mcalculate_activation_statistics\u001b[0;34m(files, model, batch_size, dims, device, num_workers)\u001b[0m\n\u001b[1;32m    224\u001b[0m                \u001b[0mthe\u001b[0m \u001b[0minception\u001b[0m \u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    225\u001b[0m     \"\"\"\n\u001b[0;32m--> 226\u001b[0;31m     \u001b[0mact\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mget_activations\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfiles\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmodel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdims\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdevice\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnum_workers\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    227\u001b[0m     \u001b[0mmu\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmean\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mact\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0maxis\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    228\u001b[0m     \u001b[0msigma\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcov\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mact\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrowvar\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/pytorch_fid/fid_score.py\u001b[0m in \u001b[0;36mget_activations\u001b[0;34m(files, model, batch_size, dims, device, num_workers)\u001b[0m\n\u001b[1;32m    128\u001b[0m     \u001b[0mstart_idx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    129\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 130\u001b[0;31m     \u001b[0;32mfor\u001b[0m \u001b[0mbatch\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mtqdm\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdataloader\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    131\u001b[0m         \u001b[0mbatch\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbatch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdevice\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    132\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/tqdm/std.py\u001b[0m in \u001b[0;36m__iter__\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m   1193\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1194\u001b[0m         \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1195\u001b[0;31m             \u001b[0;32mfor\u001b[0m \u001b[0mobj\u001b[0m \u001b[0;32min\u001b[0m \u001b[0miterable\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1196\u001b[0m                 \u001b[0;32myield\u001b[0m \u001b[0mobj\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1197\u001b[0m                 \u001b[0;31m# Update and possibly print the progressbar.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py\u001b[0m in \u001b[0;36m__next__\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m    679\u001b[0m                 \u001b[0;31m# TODO(https://github.com/pytorch/pytorch/issues/76750)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    680\u001b[0m                 \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reset\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m  \u001b[0;31m# type: ignore[call-arg]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 681\u001b[0;31m             \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_next_data\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    682\u001b[0m             \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_num_yielded\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    683\u001b[0m             \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_dataset_kind\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0m_DatasetKind\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mIterable\u001b[0m \u001b[0;32mand\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py\u001b[0m in \u001b[0;36m_next_data\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m   1357\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1358\u001b[0m             \u001b[0;32massert\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_shutdown\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_tasks_outstanding\u001b[0m \u001b[0;34m>\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1359\u001b[0;31m             \u001b[0midx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_get_data\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1360\u001b[0m             \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_tasks_outstanding\u001b[0m \u001b[0;34m-=\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1361\u001b[0m             \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_dataset_kind\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0m_DatasetKind\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mIterable\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py\u001b[0m in \u001b[0;36m_get_data\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m   1323\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1324\u001b[0m             \u001b[0;32mwhile\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1325\u001b[0;31m                 \u001b[0msuccess\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_try_get_data\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1326\u001b[0m                 \u001b[0;32mif\u001b[0m \u001b[0msuccess\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1327\u001b[0m                     \u001b[0;32mreturn\u001b[0m \u001b[0mdata\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py\u001b[0m in \u001b[0;36m_try_get_data\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m   1161\u001b[0m         \u001b[0;31m#   (bool: whether successfully get data, any: data if successful else None)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1162\u001b[0m         \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1163\u001b[0;31m             \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_data_queue\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1164\u001b[0m             \u001b[0;32mreturn\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdata\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1165\u001b[0m         \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/multiprocessing/queues.py\u001b[0m in \u001b[0;36mget\u001b[0;34m(self, block, timeout)\u001b[0m\n\u001b[1;32m    111\u001b[0m                 \u001b[0;32mif\u001b[0m \u001b[0mblock\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    112\u001b[0m                     \u001b[0mtimeout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdeadline\u001b[0m \u001b[0;34m-\u001b[0m \u001b[0mtime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmonotonic\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 113\u001b[0;31m                     \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_poll\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    114\u001b[0m                         \u001b[0;32mraise\u001b[0m \u001b[0mEmpty\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    115\u001b[0m                 \u001b[0;32melif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_poll\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/multiprocessing/connection.py\u001b[0m in \u001b[0;36mpoll\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m    260\u001b[0m         \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_check_closed\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    261\u001b[0m         \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_check_readable\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 262\u001b[0;31m         \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_poll\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    263\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    264\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m__enter__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/multiprocessing/connection.py\u001b[0m in \u001b[0;36m_poll\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m    427\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    428\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m_poll\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 429\u001b[0;31m         \u001b[0mr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mwait\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    430\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mbool\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mr\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    431\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/multiprocessing/connection.py\u001b[0m in \u001b[0;36mwait\u001b[0;34m(object_list, timeout)\u001b[0m\n\u001b[1;32m    934\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    935\u001b[0m             \u001b[0;32mwhile\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 936\u001b[0;31m                 \u001b[0mready\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mselector\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mselect\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    937\u001b[0m                 \u001b[0;32mif\u001b[0m \u001b[0mready\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    938\u001b[0m                     \u001b[0;32mreturn\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfileobj\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mevents\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mready\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m~/anaconda3/lib/python3.9/selectors.py\u001b[0m in \u001b[0;36mselect\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m    414\u001b[0m         \u001b[0mready\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    415\u001b[0m         \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 416\u001b[0;31m             \u001b[0mfd_event_list\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_selector\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpoll\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    417\u001b[0m         \u001b[0;32mexcept\u001b[0m \u001b[0mInterruptedError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    418\u001b[0m             \u001b[0;32mreturn\u001b[0m \u001b[0mready\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "# fid\n",
    "import torch\n",
    "device = torch.device('cuda')\n",
    "\n",
    "coco_subset_crop_path = '../dataset/coco/subset_cn_cropped'\n",
    "output_root = './output_cn'\n",
    "output_paths = [os.path.join(output_root, out) for out in sorted(os.listdir(output_root))]\n",
    "\n",
    "\n",
    "from pytorch_fid.fid_score import calculate_fid_given_paths\n",
    "\n",
    "fids = []\n",
    "for output_path in output_paths:\n",
    "    fid_value = calculate_fid_given_paths([coco_subset_crop_path, output_path], batch_size=200, device=device, dims=2048, num_workers=8)\n",
    "    fids.append(fid_value)\n",
    "    print(output_path, fid_value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  1.98it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs1.5_ss20 0.10783037\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  2.00it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs2.0_ss20 0.12101555\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  2.00it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs3.0_ss20 0.13520376\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  2.01it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs4.0_ss20 0.14092724\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  1.99it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs5.0_ss20 0.14418064\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  1.99it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs6.0_ss20 0.1452085\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  2.00it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs7.0_ss20 0.1480369\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 63/63 [00:31<00:00,  2.00it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./output_cn/gs8.0_ss20 0.14733191\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "# clip score\n",
    "from transformers import CLIPProcessor, CLIPModel, CLIPTokenizer, BertTokenizer, BertModel\n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "from tqdm import tqdm\n",
    "\n",
    "def load_clip_model(model_path='openai/clip-vit-large-patch14'):\n",
    "    text_encoder = BertModel.from_pretrained('../pretrained_models/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese').eval().cuda()\n",
    "    text_tokenizer = BertTokenizer.from_pretrained('../pretrained_models/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese')\n",
    "    clip_model = CLIPModel.from_pretrained(model_path)\n",
    "    processor = CLIPProcessor.from_pretrained(model_path)\n",
    "    # tokenizer = CLIPTokenizer.from_pretrained(model_path)\n",
    "\n",
    "    clip_model = clip_model.eval().cuda()\n",
    "    return clip_model, processor, text_tokenizer, text_encoder\n",
    "\n",
    "\n",
    "def clip_score(clip_model, processor, tokenizer, text_encoder, dataloader, output_image_path):\n",
    "    all_image_features = []\n",
    "    all_text_features = []\n",
    "    for (i, (image_paths, captions)) in enumerate(tqdm(dataloader)):\n",
    "        # print(image_paths, captions)\n",
    "        text_inputs = tokenizer(list(captions), padding=True, return_tensors=\"pt\").to('cuda')\n",
    "        # print(text_inputs)\n",
    "        text_features = text_encoder(text_inputs.input_ids)[1]\n",
    "        text_features = text_features / text_features.norm(dim=-1, keepdim=True)\n",
    "        text_features = text_features.detach().cpu().numpy()\n",
    "        all_text_features.append(text_features)\n",
    "\n",
    "        images = [Image.open(os.path.join( output_image_path , image_path)) for image_path in image_paths]\n",
    "        image_inputs = processor(images = images, return_tensors=\"pt\").to('cuda')\n",
    "        image_features = clip_model.get_image_features(**image_inputs)\n",
    "        # image_inputs = [processor(image) for image in images]\n",
    "        # image_inputs = torch.stack(image_inputs).to('cuda')\n",
    "        image_features = clip_model.get_image_features(**image_inputs)\n",
    "        image_features = image_features / image_features.norm(dim=-1, keepdim=True)\n",
    "        image_features = image_features.detach().cpu().numpy()\n",
    "        all_image_features.append(image_features)\n",
    "\n",
    "    all_text_features = np.concatenate(all_text_features, axis=0)\n",
    "    all_image_features = np.concatenate(all_image_features, axis=0)\n",
    "    mean_similarity = (all_image_features @ all_text_features.T).diagonal().mean()\n",
    "    return mean_similarity\n",
    "\n",
    "\n",
    "clip_model_path=\"../pretrained_models/clip-vit-large-patch14\"\n",
    "clip_model, processor, tokenizer, text_encoder = load_clip_model(clip_model_path)\n",
    "clip_scores = []\n",
    "for output_path in output_paths:\n",
    "    clip_score_each = clip_score(clip_model, processor, tokenizer, text_encoder, cocosubsetloader, output_path)   \n",
    "    print(output_path, clip_score_each)\n",
    "    clip_scores.append(clip_score_each)"
   ]
  },
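   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The printed CLIP scores climb with guidance scale (~0.108 at gs1.5 up to ~0.148 at gs7.0/gs8.0), while FID generally worsens at high guidance, so the two metrics trade off. A minimal sketch (placeholder values; `output_paths`, `fids`, and `clip_scores` are the lists built in the cells above) that tabulates both per run:\n",
     "\n",
     "```python\n",
     "# Placeholder values standing in for the lists built above.\n",
     "output_paths = ['./output_cn/gs1.5_ss20', './output_cn/gs8.0_ss20']\n",
     "fids = [21.50, 26.00]\n",
     "clip_scores = [0.1078, 0.1473]\n",
     "\n",
     "for path, fid, cs in zip(output_paths, fids, clip_scores):\n",
     "    print(f'{path}\\tFID={fid:.2f}\\tCLIP={cs:.4f}')\n",
     "```"
    ]
   },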
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAkYAAAGwCAYAAABM/qr1AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8o6BhiAAAACXBIWXMAAA9hAAAPYQGoP6dpAABXKklEQVR4nO3deVhU9eIG8PfMDDMswiDIqiAIKuKCK4jmklpqZprmmomJuaSZWd20X2ZdM5fMW1qpuaFhbrnk1qKm5grIouKCGwoqiIowrAPMnN8f6hSJCTpwZob38zznuZczZ855T+e5d97O9hVEURRBRERERJBJHYCIiIjIVLAYEREREd3HYkRERER0H4sRERER0X0sRkRERET3sRgRERER3cdiRERERHSfQuoAlU2v1+PGjRuwt7eHIAhSxyEiIqJyEEUROTk58PT0hExWdedxLL4Y3bhxA15eXlLHICIioieQmpqKOnXqVNn2LL4Y2dvbA7j3D9bBwUHiNERERFQeGo0GXl5eht/xqmLxxejB5TMHBwcWIyIiIjNT1bfB8OZrIiIiovtYjIiIiIjuYzEiIiIiuo/FiIiIiOg+FiMiIiKi+1iMiIiIiO5jMSIiIiK6j8WIiIiI6D4WIyIiIqL7LP7N15VFpxcRnZyJjJxCuNpbI9jXCXIZB6klIiIyZyxGT+DXxDR8uv0M0rILDfM81NaY3jsQPZp4SJiMiIiIngYvpVXQr4lpGBcZV6oUAUB6diHGRcbh18Q0iZIRERHR02IxqgCdXsSn289ALOOzB/M+3X4GOn1ZSxAREZGpYzGqgOjkzIfOFP2dCCAtuxDRyZlVF4qIiIiMhsWoAjJyHl2KnmQ5IiIiMi0sRhXgam9t1OWIiIjItLAYVUCwrxM81Nb4t4fyPdT3Ht0nIiIi88NiVAFymYDpvQMB4JHl6PlAN77PiIiIyEyxGFVQjyYeWDSsJdzVpS+X2SnlAIAfjl3lI/tERERmShBF0aKfLddoNFCr1cjOzoaDg4PR1vvPN1+38amJ/9uSiPXHU2ElF7A8rA06NnAx2vaIiIiqk8r6/X4cFiMj0ulFTFwbj52n0mBjJccP4cFo7cP7jYiIiCpKqmLES2lGJJcJ+N+g5ujUwAUFxTq8HhGDxOvZUsciIiKicmIxMjKlQobFw1oh2McJOYUlCFsRjUu3cqWORUREROXAYlQJbJRyLBvRGk1qO+BOXhGGLYvCtbv5UsciIiKix2AxqiQO1lZYPTIE/q41kJZdiGHLonArRyt1LCIiIvoXLEaVyMlOicjwENSpaYMrd/Lx2vIoZOcXSx2LiIiIHoHFqJK5q62xZlQIXO1VOJeegxER0cjTlkgdi4iIiMrAYlQF6jrb4YfwEDjaWiE+JQujfziOwmKd1LGIiIjoH1iMqkhDd3usej0Ydko5Dl+8g7fWxqNYp5c6FhEREf0Ni1EVCvJyxLKwNlAqZNh95ib+89NJ6PUW/X5NIiIisyJpMdLpdJg2bRp8fX1hY2MDPz8/zJgxAw9exl1cXIwPPvgATZs2hZ2dHTw9PTF8+HDcuHFDythPJdTPGYtebQmFTMCW+OuYvu00LPzl40RERGZD0mI0Z84cLFq0CN988w3Onj2LOXPmYO7cuVi4cCEAID8/H3FxcZg2bRri4uKwefNmJCUl4aWXXpIy9lPr2sgN8wc1hyDcG3T2i9+SpI5EREREkHistBdffBFubm5Yvny5YV7//v1hY2ODyMjIMr8TExOD4OBgXL16Fd7e3g99rtVqodX+9b4gjUYDLy+vKh9rpTx+jErBh1tOAQA+6BGAcZ39JE5ERERkGqrlWGnt2rXD3r17cf78eQDAiRMncOjQIfTs2fOR38nOzoYgCHB0dCzz81mzZkGtVhsmLy+vyohuFENDvDG1ZwAAYM6v5xB57KrEiYiIiKo3Sc8Y6fV6fPjhh5g7dy7kcjl0Oh1mzpyJqVOnlrl8YWEh
2rdvj4CAAKxZs6bMZczpjNED835Lwjf7LkIQgK8GNUef5rWljkRERCQpqc4YKapsS2XYsGED1qxZgx9//BGNGzdGQkICJk2aBE9PT4SFhZVatri4GAMHDoQoili0aNEj16lSqaBSqSo7ulG9+3wD5BQWY9XRq5i84QTslAp0C3STOhYREVG1I+kZIy8vL0yZMgXjx483zPvss88QGRmJc+fOGeY9KEWXL1/GH3/8AWdn53JvQ6rGWVF6vYj3fjqBzXHXoVTIEDGiDdr515I6FhERkSSq5T1G+fn5kMlKR5DL5dDr/3rx4YNSdOHCBezZs6dCpcicyGQC5vZvhucD3VBUoseo1ccRn3JX6lhERETViqTFqHfv3pg5cyZ27tyJK1euYMuWLZg/fz5efvllAPdK0SuvvILjx49jzZo10Ol0SE9PR3p6OoqKiqSMXikUchkWDm2BZ/xrIb9IhxErY3AuXSN1LCIiompD0ktpOTk5mDZtGrZs2YKMjAx4enpiyJAh+Pjjj6FUKnHlyhX4+vqW+d19+/ahc+fOj92GuVxK+7v8ohIMWxaFuJQs1Kqhwk9jQ+FTy07qWERERFVGqt9vSYtRVTDHYgQA2fnFGLz0GM6maVDb0QY/jQuFh9pG6lhERERVolreY0SPpra1wuqRwahXyw7XswowbFkU7uRqH/9FIiIiemIsRibMxV6FH0aFwFNtjUu38jB8RTQ0hcVSxyIiIrJYLEYmrrajDSJHhaBWDSVO39AgPCIGBUU6qWMRERFZJBYjM1DPpQZWjwyBvbUCMVfuYkxkLLQlLEdERETGxmJkJgI9HRDxehvYWMnx5/lbmLQuASU6/eO/SEREROXGYmRGWtV1wtLhraGUy/BLYjqmbj4Fvd6iHyokIiKqUixGZuaZ+rWwYEgLyGUCNsZew4ydZ2Dhb1wgIiKqMixGZqhHE3fM7d8MALDy8BV8teeCxImIiIgsA4uRmerfqg7+26cxAODrvRew7OBliRMRERGZPxYjMzY81Afvd28IAPhs51msj0mROBEREZF5YzEyc2929sOYjvUAAFM3n8LOk2kSJyIiIjJfLEZmThAETOkZgCHB3tCLwKT18diXlCF1LCIiIrPEYmQBBEHAZ32boHeQJ4p1Isb+EIuoy3ekjkVERGR2WIwshFwmYP7AIHQNcIW2RI/wVcdx6lq21LGIiIjMCouRBbGSy/Dtqy3Rtp4TcrUlGL4iChdu5kgdi4iIyGywGFkYays5loW1QVAdNe7mF2PY8iikZuZLHYuIiMgssBhZoBoqBSJeD0YDtxq4qdHi1WVRuKkplDoWERGRyWMxslA17ZSIDA+Bt5MtUjLz8dryKNzNK5I6FhERkUljMbJgrg7WWDMqBO4O1jh/MxcjVkYjV1sidSwiIiKTxWJk4bycbBE5Khg1ba1w4lo2wiNiUFiskzoWERGRSWIxqgb8Xe2xemQI7FUKRCVn4s01cSjW6aWORUREZHJYjKqJpnXUWD6iDVQKGf44l4HJG05ApxeljkVERGRSWIyqkWBfJyx+rRWs5AK2n7iBj7YmQhRZjoiIiB5gMapmnm3oiq8GtYBMANZGp2D2L+dYjoiIiO5jMaqGejXzwOx+zQAAS/68jG/3XZQ4ERERkWlgMaqmBrbxwke9GgEA5v1+HquOXJE2EBERkQlgMarGRnWoh7e71gcATN92Gptir0mciIiISFosRtXcpG718Xp7HwDA+z+dwK+J6dIGIiIikhCLUTUnCAKm9QrEgFZ1oBeBiWvjcfDCLaljERERSYLFiCCTCZjdvxleaOqOIp0eo1fHIvZqptSxiIiIqhyLEQEA5DIB/xvUHB0buKCgWIcRK2Nw5oZG6lhERERVisWIDFQKOZYMa4U2PjWRU1iC4SuicPlWrtSxiIiIqgyLEZVio5Rj+Yg2aOzpgNu5RRi2LArXswqkjkVERFQlWIzoIQ7WVlg9Mhh+Lna4kV2IYcuicCtHK3UsIiKi
SsdiRGVyrqFC5KgQ1Ha0QfLtPLy2PArZ+cVSxyIiIqpULEb0SB5qG6wZFQIXexXOpefg9Yho5GlLpI5FRERUaSQtRjqdDtOmTYOvry9sbGzg5+eHGTNmlBrUVBRFfPzxx/Dw8ICNjQ26deuGCxcuSJi6evGpZYcfwoOhtrFCXEoWxvwQi8JindSxiIiIKoWkxWjOnDlYtGgRvvnmG5w9exZz5szB3LlzsXDhQsMyc+fOxYIFC7B48WJERUXBzs4O3bt3R2FhoYTJq5cAdwesGhkMO6Uchy7exsS18SjR6aWORUREZHSC+PfTM1XsxRdfhJubG5YvX26Y179/f9jY2CAyMhKiKMLT0xPvvvsu3nvvPQBAdnY23NzcEBERgcGDBz92GxqNBmq1GtnZ2XBwcKi0fakOjly6jRErY1BUoke/FrUxb0AQZDJB6lhERGSBpPr9lvSMUbt27bB3716cP38eAHDixAkcOnQIPXv2BAAkJycjPT0d3bp1M3xHrVYjJCQER48eLXOdWq0WGo2m1ETG0c6vFr4b2hJymYDN8dfxyfbTkLBXExERGZ2kxWjKlCkYPHgwAgICYGVlhRYtWmDSpEl49dVXAQDp6fcGNHVzcyv1PTc3N8Nn/zRr1iyo1WrD5OXlVbk7Uc10C3TD/IFBEARg9dGr+PL381JHIiIiMhpJi9GGDRuwZs0a/Pjjj4iLi8OqVaswb948rFq16onXOXXqVGRnZxum1NRUIyYmAOjTvDY+69sEAPDNvotYcuCSxImIiIiMQyHlxt9//33DWSMAaNq0Ka5evYpZs2YhLCwM7u7uAICbN2/Cw8PD8L2bN2+iefPmZa5TpVJBpVJVevbq7tWQusgpLMHsX85h1i/nUMNagVdD6kodi4iI6KlIesYoPz8fMlnpCHK5HHr9vSeefH194e7ujr179xo+12g0iIqKQmhoaJVmpYeN7eSHNzv7AQA+2pqInxOuS5yIiIjo6Uh6xqh3796YOXMmvL290bhxY8THx2P+/PkYOXIkAEAQBEyaNAmfffYZ6tevD19fX0ybNg2enp7o27evlNHpvve7N0ROYQl+OHYV7244gRoqBbo2cnv8F4mIiEyQpI/r5+TkYNq0adiyZQsyMjLg6emJIUOG4OOPP4ZSqQRw7wWP06dPx/fff4+srCw888wz+O6779CgQYNybYOP61c+vV7EuxtPYEv8dSgVMkS83gbt/GpJHYuIiMyYVL/fkhajqsBiVDVKdHqMWxOH3Wduwk4px5o32qK5l6PUsYiIyExVy/cYkeVQyGVYOKQF2vs7I69Ih7AV0UhKz5E6FhERUYWwGJHRWFvJ8f1rrdHC2xHZBcUYtjwKV+/kSR2LiIio3FiMyKjsVApEjAhGgLs9buVo8eqyKKRlF0gdi4iIqFxYjMjo1LZW+CE8BD7Otrh2twDDlkXhTq5W6lhERESPxWJElcLFXoXIUSHwVFvj0q08hK2MhqawWOpYRERE/4rFiCpNnZq2+GFUCJztlEi8rkF4RAwKinRSxyIiInokFiOqVH4uNbA6PBj21grEXLmLsZGxKCrRSx2LiIioTCxGVOkae6oR8Xob2FjJceD8LbyzPgE6vUW/PouIiMwUixFViVZ1nfD98FZQymXYeSoNUzefhJ7liIiITAyLEVWZDvVdsGBIc8gEYMPxa/hs51lY+IvXiYjIzLAYUZXq0cQDc18JAgCsOJyMr/dekDgRERHRX1iMqMq90qoOPukdCAD4as8FLD+ULHEiIiKie1iMSBIj2vvi3ecaAABm7DiDDTGpEiciIiJiMSIJTejij9Ed6wEApmw+iZ0n0yRORERE1R2LEUlGEARM7RmAIcFe0IvApPXx2J+UIXUsIiKqxliMSFKCIOCzvk3xYjMPFOtEjI2MRXRyptSxiIiommIxIsnJZQLmD2yOZxu6oLBYj/CIGCRez5Y6FhERVUMsRmQSlAoZFg1rhRBfJ+RoSzB8RTQuZuRIHYuI
iKoZFiMyGdZWciwLa41mddTIzCvCsGXRSM3MlzoWERFVIyxGZFLsra2w6vVg1HetgXRNIYYtj0KGplDqWEREVE2wGJHJqWmnROSoEHg52eDqnXy8tjwaWflFUsciIqJqgMWITJKbgzXWhLeFm4MKSTdzELYyBrnaEqljERGRhWMxIpPl7WyLyPAQ1LS1wonULIxaFYPCYp3UsYiIyIKxGJFJq+9mj1Ujg1FDpcCxy5mY8GMcinV6qWMREZGFYjEik9esjiOWh7WGSiHDnrMZeHfDCej0otSxiIjIArEYkVkIqeeMxcNaQSETsO3EDUz7ORGiyHJERETGxWJEZuPZAFd8Nbg5BAH4MSoFs389x3JERERGxWJEZuXFZp6Y9XJTAMCSA5fx3f5LEiciIiJLwmJEZmdwsDc+6tUIAPDFb0lYffSKtIGIiMhisBiRWRrVoR4mdvEHAHz882lsjrsmcSIiIrIELEZktt55rgFGtPMBALz/00n8djpd2kBERGT2WIzIbAmCgI9fDMQrrepApxfx1o/xOHThttSxiIjIjLEYkVmTyQTM7tcUPZu4o0inx+gfjiP26l2pYxERkZliMSKzp5DL8NXg5uhQvxbyi3R4fWU0ztzQSB2LiIjMEIsRWQSVQo4lr7VC67o1oSkswfAVUbh8K1fqWEREZGZYjMhi2CoVWD6iDQI9HHA7twjDlkXhelaB1LGIiMiMSFqMfHx8IAjCQ9P48eMBAOnp6Xjttdfg7u4OOzs7tGzZEps2bZIyMpk4tY0VVocHo56LHW5kF+K1ZVG4laOVOhYREZkJSYtRTEwM0tLSDNPu3bsBAAMGDAAADB8+HElJSdi2bRtOnTqFfv36YeDAgYiPj5cyNpm4WjVUiAwPQW1HG1y+nYfhK6KRnV8sdSwiIjIDkhYjFxcXuLu7G6YdO3bAz88PnTp1AgAcOXIEb731FoKDg1GvXj189NFHcHR0RGxsrJSxyQx4OtogclQIatVQ4WyaBq9HRCO/qETqWEREZOJM5h6joqIiREZGYuTIkRAEAQDQrl07rF+/HpmZmdDr9Vi3bh0KCwvRuXPnR65Hq9VCo9GUmqh68q1lhx/Cg+FgrUBcShbG/BALbYlO6lhERGTCTKYYbd26FVlZWRgxYoRh3oYNG1BcXAxnZ2eoVCqMGTMGW7Zsgb+//yPXM2vWLKjVasPk5eVVBenJVDXycEDEyGDYKuU4eOE2Jq6NR4lOL3UsIiIyUSZTjJYvX46ePXvC09PTMG/atGnIysrCnj17cPz4cUyePBkDBw7EqVOnHrmeqVOnIjs72zClpqZWRXwyYS29a2LZ8NZQKmT47fRN/GfTSej1otSxiIjIBAmiKEr+C3H16lXUq1cPmzdvRp8+fQAAly5dgr+/PxITE9G4cWPDst26dYO/vz8WL15crnVrNBqo1WpkZ2fDwcGhUvKTefj9dDrGrYmDTi8iLLQuPnmpseGyLRERmRapfr9N4ozRypUr4erqil69ehnm5efnAwBkstIR5XI59HpeCqGKe76xO74cEARBAFYdvYr5u89LHYmIiEyM5MVIr9dj5cqVCAsLg0KhMMwPCAiAv78/xowZg+joaFy6dAlffvkldu/ejb59+0oXmMxa3xa18d8+TQAAC/+4iO//vCRxIiIiMiWSF6M9e/YgJSUFI0eOLDXfysoKu3btgouLC3r37o1mzZph9erVWLVqFV544QWJ0pIleK1tXfynR0MAwOe7zmFtdIrEiYiIyFSYxD1GlYn3GNGjzPn1HBbtvwRBAL4e3AIvBXk+/ktERFQlqvU9RkRS+E/3hhjW1huiCExen4A/zt2UOhIREUmMxYiqLUEQ8N+XmqBvc0+U6EWMi4zD0Ut3pI5FREQSYjGiak0mE/DFgCB0a+QGbYkeo1bF4ERqltSxiIhIIixGVO1ZyWX4ZmgLtPNzRl6RDmEro5GUniN1LCIikgCLEREAays5vh/eGs29HJGVX4zXlkfh
6p08qWMREVEVYzEiuq+GSoGI19sgwN0eGTlavLosCunZhVLHIiKiKsRiRPQ3jrZKrA4Pho+zLa7dLcCw5VHIzCuSOhYREVURFiOif3C1t0bkqBB4qK1xMSMXYSuioSksljoWERFVARYjojLUqWmLH8JD4GynxKnr2RgVcRwFRTqpYxERUSVjMSJ6BH/XGlg1Mhj2KgWir2Ri3JpYFJVwAGMiIkvGYkT0L5rUVmPF621gbSXD/qRbeGd9AnR6ix5Fh4ioWmMxInqMNj5OWPJaa1jJBew8lYYPN5+ChQ8xSERUbbEYEZVDpwYuWDC4BWQCsP54KmbuPMtyRERkgViMiMqpZ1MPzOnfDACw7FAyFuy9KHEiIiIyNhYjogoY0NoLH78YCAD4357zWHEoWeJERERkTCxGRBU08hlfvNOtAQDgvzvOYMPxVIkTERGRsbAYET2BiV39MeoZXwDAlE0nsetUmsSJiIjIGFiMiJ6AIAj4v16NMKi1F/Qi8Pa6eBw4f0vqWERE9JRYjIiekCAI+LxfU/Rq5oFinYgxPxxHzJVMqWMREdFTeKJidOnSJXz00UcYMmQIMjIyAAC//PILTp8+bdRwRKZOLhPwv4HN0bmhCwqL9Ri5MgaJ17OljkVERE+owsXowIEDaNq0KaKiorB582bk5uYCAE6cOIHp06cbPSCRqVMqZFj0aisE+zohR1uC4SuicTEjV+pYRET0BCpcjKZMmYLPPvsMu3fvhlKpNMzv0qULjh07ZtRwRObCRinH8rDWaFpbjcy8IgxbFoXUzHypYxERUQVVuBidOnUKL7/88kPzXV1dcfv2baOEIjJH9tZWWDUyGP6uNZCuKcSw5VHI0BRKHYuIiCqgwsXI0dERaWkPP5ocHx+P2rVrGyUUkblyslMiMjwEXk42uHonH68tj0ZWfpHUsYiIqJwqXIwGDx6MDz74AOnp6RAEAXq9HocPH8Z7772H4cOHV0ZGIrPirrbGmvC2cLVXIelmDsJWxiBXWyJ1LCIiKocKF6PPP/8cAQEB8PLyQm5uLgIDA9GxY0e0a9cOH330UWVkJDI73s62iBwVAkdbK5xIzcLo1cdRWKyTOhYRET2GIFZgiHBRFJGamgoXFxfcvn0bp06dQm5uLlq0aIH69etXZs4nptFooFarkZ2dDQcHB6njUDVzIjULry6LQq62BN0auWLRsFawkvP1YUREjyPV73eFipFer4e1tTVOnz5tskXon1iMSGrHLt9B2IpoaEv06NPcE/8b2BwymSB1LCIikybV73eF/tVVJpOhfv36uHPnTmXlIbI4bes5Y9GwllDIBPyccAPTfk5EBf59hIiIqlCFz+nPnj0b77//PhITEysjD5FF6hLghv8Nag5BANZEpWDOr0lSRyIiojJU6FIaANSsWRP5+fkoKSmBUqmEjY1Nqc8zM01rrCheSiNTsjY6BVM3nwIA/KdHQ7zZ2V/iREREpkmq329FRb/w1VdfVUIMouphSLA3cgqL8fmuc5j7axLsVQoMDamL6ORMZOQUwtXeGsG+TpDzHiQiIklU+IyRueEZIzJFX/6ehIV/XAQAqG2skF1QbPjMQ22N6b0D0aOJh1TxiIgkZxZPpT2g0+mwdetWnD17FgDQuHFjvPTSS5DL5UYP+LRYjMgUiaKIkREx2Jd066HPHpwrWjSsJcsREVVbZnMp7eLFi3jhhRdw/fp1NGzYEAAwa9YseHl5YefOnfDz8zN6SCJLoxeBs2k5ZX4m4l45+nT7GTwX6M7LakREVajCT6VNnDgRfn5+SE1NRVxcHOLi4pCSkgJfX19MnDixMjISWZzo5Eyk/8sAsyKAtOxCRCeb1sMMRESWrsLF6MCBA5g7dy6cnJwM85ydnTF79mwcOHCgQuvy8fGBIAgPTePHjzcsc/ToUXTp0gV2dnZwcHBAx44dUVBQUNHYRCYlI+fRpehJliMiIuOo8KU0lUqFnJyHLwHk5uZCqVRWaF0xMTHQ6f4a
PyoxMRHPPfccBgwYAOBeKerRowemTp2KhQsXQqFQ4MSJE5DJOKQCmTdXe2ujLkdERMZR4WL04osvYvTo0Vi+fDmCg4MBAFFRURg7dixeeumlCq3LxcWl1N+zZ8+Gn58fOnXqBAB45513MHHiREyZMsWwzIP7mojMWbCvEzzU1kjPLsSjnn4QAGTnF1VlLCKiaq/Cp14WLFgAPz8/hIaGwtraGtbW1mjfvj38/f3x9ddfP3GQoqIiREZGYuTIkRAEARkZGYiKioKrqyvatWsHNzc3dOrUCYcOHfrX9Wi1Wmg0mlITkamRywRM7x0I4K+n0P5JBDB2TRw+2XYa2hLdI5YiIiJjeuL3GF28eNHwuH6jRo3g7/90b/DdsGEDhg4dipSUFHh6euLYsWMIDQ2Fk5MT5s2bh+bNm2P16tX47rvvkJiY+MhBbD/55BN8+umnD83n4/pkin5NTMOn288gLfuve4k81Nb48IVGOHktC0sPJgMAmtR2wDdDWsKnlp1UUYmIqpRZvceoMnTv3h1KpRLbt28HABw5cgTt27fH1KlT8fnnnxuWa9asGXr16oVZs2aVuR6tVgutVmv4W6PRwMvLi8WITJZOLz7yzdd/nLuJdzecwN38YtRQKTDz5Sbo07y2xImJiCqfVMWowpfS+vfvjzlz5jw0f+7cuYabpivq6tWr2LNnD0aNGmWY5+Fx78V2gYGBpZZt1KgRUlJSHrkulUoFBweHUhORKZPLBIT6OaNP89oI9XMu9d6iLgFu2PV2BwT7OCFXW4K31yVgyqaTKCjipTUiospQ4WL0559/4oUXXnhofs+ePfHnn38+UYiVK1fC1dUVvXr1Mszz8fGBp6cnkpJKj0J+/vx51K1b94m2Q2SOPNQ2+PGNEEzs4g9BANbFpKLPt4dw4WbZL4gkIqInV+Fi9KjH8q2srJ7oRme9Xo+VK1ciLCwMCsVfD8kJgoD3338fCxYswE8//YSLFy9i2rRpOHfuHMLDwyu8HSJzppDLMPn5hogMD4GLvQrnb+ai9zeHsCEmFSZyNZyIyCJUuBg1bdoU69evf2j+unXrHrrsVR579uxBSkoKRo4c+dBnkyZNwtSpU/HOO+8gKCgIe/fuxe7duznsCFVb7f1rYdfEDuhQvxYKi/X4z6aTmLQ+AbnaEqmjERFZhArffL19+3b069cPQ4cORZcuXQAAe/fuxdq1a7Fx40b07du3MnI+MQ4iS5ZIrxex+M9L+PL389DpRfg42+KboS3RpLZa6mhEREZhVk+l7dy5E59//jkSEhJgY2ODZs2aYfr06YYXM5oSFiOyZMevZGLi2njcyC6EUi7Dhy8EIKzdvaF2iIjMmVkVI3PCYkSWLiu/CO9tPIk9Z28CAJ4PdMMXrwRBbWslcTIioidnNo/rp6am4tq1a4a/o6OjMWnSJHz//fdGDUZE5eNoq8TS4a0wvXcglHIZfj9zEy8sOIjYq3eljkZEZHYqXIyGDh2Kffv2AQDS09PRrVs3REdH4//+7//w3//+1+gBiejxBEHA6+19sWlcO9R1tsX1rAIMXHIUiw9cgl5v0SeFiYiMqsLFKDEx0TB47IYNG9C0aVMcOXIEa9asQUREhLHzEVEFNK2jxo63nkHvIE/o9CJm/3IOIyJicDtX+/gvExFRxYtRcXExVCoVgHuP2r/00ksAgICAAKSlpRk3HRFVmL21FRYMbo7Z/ZpCpZDhz/O38MLXB3Hk0m2poxERmbwKF6PGjRtj8eLFOHjwIHbv3o0ePXoAAG7cuAFnZ2ejBySiihMEAYODvbFtwjOo71oDGTlavLosCvN333u8n4iIylbhYjRnzhwsWbIEnTt3xpAhQxAUFAQA2LZtm+ESGxGZhobu9vh5QnsMbF0Hoggs2HsBQ5ceQ3p2odTRiIhM0hM9rq/T6aDRaFCzZk3DvCtXrsDW1haurq5GDfi0+Lg+0T0/J1zHh5tPIa9IByc7Jb4cGIRnG5rW/16JiB7ge4wq
CYsR0V+Sb+dhwo9xOH3j3riGYzrWw3vdG8JKXuGTx0RElcps3mNERObLt5YdNr/ZDiPa+QAAlvx5GQMWH0VqZr60wYiITASLEVE1o1LI8clLjbF4WEs4WCuQkJqFFxYcxC+n+FQpERGLEVE11aOJB3ZO7IAW3o7IKSzBuDVxmLY1EYXFOqmjERFJhsWIqBrzcrLFhjGhGNOpHgDgh2NX8fJ3R3D5Vq7EyYiIpFGum68XLFhQ7hVOnDjxqQIZG2++Jiqf/UkZeHfDCdzJK4KtUo6ZLzfByy3qSB2LiKopk34qzdfXt9Tft27dQn5+PhwdHQEAWVlZhkf1L1++XClBnxSLEVH53dQU4u118Th2ORMAMKBVHXzapzFslQqJkxFRdWPST6UlJycbppkzZ6J58+Y4e/YsMjMzkZmZibNnz6Jly5aYMWNGZeclokrk5mCNNaPa4p1uDSATgI2x1/DSN4dxLl0jdTQioipR4fcY+fn54aeffkKLFi1KzY+NjcUrr7yC5ORkowZ8WjxjRPRkjl66g7fXxSMjRwuVQobpvRtjSLAXBEGQOhoRVQMmfcbo79LS0lBSUvLQfJ1Oh5s3bxolFBFJL9TPGb+83QGdG7pAW6LHh1tOYcLaeGgKi6WORkRUaSpcjLp27YoxY8YgLi7OMC82Nhbjxo1Dt27djBqOiKTlXEOFFWFtMLVnABQyATtPpuHFBYdw8lqW1NGIiCpFhYvRihUr4O7ujtatW0OlUkGlUiE4OBhubm5YtmxZZWQkIgnJZALGdPLDhrGhqO1og5TMfPRfdATLDyXDwkcUIqJq6InHSjt//jzOnTsHAAgICECDBg2MGsxYeI8RkfFk5xfjg00n8evpdABAt0Zu+OKVZqhpp5Q4GRFZGpN+XN+csRgRGZcoiog8dhUzdpxFkU4PD7U1FgxpgTY+TlJHIyILYtLFaPLkyZgxYwbs7OwwefLkf112/vz5RgtnDCxGRJUj8Xo23lobj+TbeZDLBEx+rgHGdfKDTMan1ojo6Un1+12ut7bFx8ejuPjekyhxcXGPfFyXj/ESVR9Naqux/a1n8NGWU9iacANf/JaEo5fuYP6gILjaW0sdj4joiZTrjNHJkyfRpEkTyGTmN7QazxgRVS5RFLEx9hqm/3waBcU61KqhwleDmuOZ+rWkjkZEZsyk32PUokUL3L59GwBQr1493Llzp1JDEZH5EAQBA1t7YduE9mjoZo/buVq8tiIK835LQolOL3U8IqIKKVcxcnR0NLzR+sqVK9Dr+X92RFRafTd7/DyhPYYEe0MUgW/2XcSQpceQll0gdTQionIr16W00aNHY/Xq1fDw8EBKSgrq1KkDuVxe5rIcRJaItp+4gambTyFXWwJHWyvMeyUI3QLdpI5FRGbEpG++/v7779GvXz9cvHgREydOxBtvvAF7e/vKzkZEZqp3kCea1lbjrbXxOHU9G6NWH0f4M774oEcAlArzu1eRiKqPCr/H6PXXX8eCBQvMphjxjBGRdLQlOsz5JQkrDt+7FN+sjhoLh7RAXWc7iZMRkakz6fcYmTMWIyLp7T5zE+9tPIHsgmLYqxSY1b8pXmzmKXUsIjJhJv1UGhHR03gu0A273u6A1nVrIkdbggk/xuPDLadQWKyTOhoRUSksRkRUJWo72mDd6LYY/6wfBAH4MSoFfb89jIsZuVJHIyIyYDEioiqjkMvwfvcArB4ZjFo1lDiXnoPeCw/hp9hrUkcjIgLAYkREEuhQ3wW73u6A9v7OKCjW4b2NJzB5fQLytCVSRyOiak7SYuTj4wNBEB6axo8fX2o5URTRs2dPCIKArVu3ShOWiIzK1d4aq0eG4L3nG0AmAJvjr6P3wkM4fSNb6mhEVI1JWoxiYmKQlpZmmHbv3g0AGDBgQKnlvvrqKw5QS2SB5DIBE7rUx7rRoXB3sMbl23l4+bsj+OHYVVj4A7NEZKIkLUYuLi5wd3c3TDt27ICfnx86depkWCYhIQFffvkl
VqxYIWFSIqpMwb5O2PV2B3QNcEVRiR7TtibizTVxyC4oljoaEVUzJnOPUVFRESIjIzFy5EjD2aH8/HwMHToU3377Ldzd3cu1Hq1WC41GU2oiItPnZKfEsrDW+KhXI1jJBfySmI5eCw4iITVL6mhEVI2YTDHaunUrsrKyMGLECMO8d955B+3atUOfPn3KvZ5Zs2ZBrVYbJi8vr0pIS0SVQRAEjOpQDz+NbQcvJxtcu1uAVxYdwdI/L0Ov56U1Iqp8JlOMli9fjp49e8LT897bcLdt24Y//vgDX331VYXWM3XqVGRnZxum1NTUSkhLRJUpyMsROyd2QK+mHijRi5i56yzCV8UgM69I6mhEZOFMohhdvXoVe/bswahRowzz/vjjD1y6dAmOjo5QKBRQKO6Nd9u/f3907tz5ketSqVRwcHAoNRGR+XGwtsI3Q1tg5stNoFTIsC/pFnp+/SeOXb4jdTQismAmMVbaJ598giVLliA1NdVQgNLT03H79u1SyzVt2hRff/01evfuDV9f33Ktm2OlEZm/s2kaTPgxDpdu5UEmAJO6NcD4Z/0hl/FpVSJLJdXvt6LKtvQIer0eK1euRFhYmKEUATA8qfZP3t7e5S5FRGQZGnk4YNuEZ/Dxz6exKe4a5u8+j2OX7+CrQc3h6mAtdTwisiCSX0rbs2cPUlJSMHLkSKmjEJEJs1Mp8OXAIMwfGARbpRxHLt1Bz68P4sD5W1JHIyILYhKX0ioTL6URWZ5Lt3Ixfk0czqXnAADGdfbD5OcawEou+b/rEZGRSPX7zf8XISKz4+dSA1vHt8drbesCABbtv4RBS47i2t18iZMRkbljMSIis2RtJceMvk3w3astYa9SIC4lCy98fRC/nU6XOhoRmTEWIyIyay809cCutzsgyMsRmsISjPkhFp9sOw1tiU7qaERkhliMiMjseTnZYuOYULzR4d4TqxFHrqD/oiO4cjtP4mREZG5YjIjIIigVMvxfr0CsGNEaNW2tkHhdgxcXHsLPCdeljkZEZoTFiIgsSpcAN+x6uwOCfZ2Qqy3B2+sSMGXTSRQU8dIaET0eixERWRwPtQ1+HBWCiV3rQxCAdTGp6PPtIZy/mSN1NCIycSxGRGSRFHIZJj/XAJHhIXCxV+H8zVy89M0hrI9JgYW/vo2IngKLERFZtPb+tbBrYgd0qF8LhcV6fLDpFCatT0CutkTqaERkgliMiMjiudirsOr1YPynR0PIZQJ+TriBFxccROL1bKmjEZGJYTEiompBJhPwZmd/bBjTFrUdbXDlTj76fXcEEYeTeWmNiAxYjIioWmlV1wk7Jz6D5wLdUKTT45PtZzDmh1hk5xdLHY2ITACLERFVO462Snz/Wit80jsQSrkMv5+5iRcWHETs1btSRyMiibEYEVG1JAgCRrT3xaZx7VDX2RbXswowcMlRLNp/CXo9L60RVVcsRkRUrTWto8aOt55B7yBP6PQi5vx6DiMiYnA7Vyt1NCKSAIsREVV79tZWWDC4OWb3awprKxn+PH8LL3x9EEcu3ZY6GhFVMRYjIiLcu7Q2ONgbP49/BvVdayAjR4tXl0Vh/u7z0PHSGlG1wWJERPQ3Dd3tsW3CMxjU2guiCCzYewFDlx5Denah1NGIqAqwGBER/YONUo45rzTD14Obw04pR1RyJl5YcBD7zmVIHY2IKhmLERHRI/RpXhs7JnZAY08HZOYV4fWIGHy+6yyKdXqpoxFRJWExIiL6F7617LD5zXYY0c4HAPD9n5cxYPFRpGbmSxuMiCoFixER0WOoFHJ88lJjLB7WCg7WCiSkZuGFBQfxy6k0qaMRkZGxGBERlVOPJu7Y9XYHtPB2RE5hCcaticO0rYkoLNZJHY2IjITFiIioAurUtMWGMaEY28kPAPDDsat4+bsjuHwrV+JkRGQMLEZERBVkJZdhSs8ARLzeBs52SpxN0+DFhYewOe6a1NGI6CmxGBERPaHODV2x6+0OaFvPCflFOkzecALv
bTyB/KISqaMR0RNiMSIiegpuDtZYM6ot3unWADIB+Cn2Gl765jDOpWukjkZET4DFiIjoKcllAt7uVh8/vtEWbg4qXMzIRZ9vDuPHqBSIIocTITInLEZEREbStp4zdk3sgM4NXaAt0ePDLacwYW08NIXFUkcjonJiMSIiMiLnGiqsCGuDD18IgEImYOfJNLy44BBOXsuSOhoRlQOLERGRkclkAkZ39MPGsaGo7WiDlMx89F90BMsOXualNSITx2JERFRJWnjXxK6JHdCjsTuKdSI+23kWb6w+jrt5RVJHI6JHYDEiIqpEalsrLBrWEjP6NIZSLsOesxl4YcFBxFzJlDoaEZWBxYiIqJIJgoDXQn2wZXw71Ktlh7TsQgz+/hi+3XcRej0vrRGZEhYjIqIq0thTjW1vPYOXW9SGTi/ii9+SMHxFNDJyCqWORkT3sRgREVWhGioF5g8MwhevNIONlRyHLt7GC18fwqELt6WORkSQuBj5+PhAEISHpvHjxyMzMxNvvfUWGjZsCBsbG3h7e2PixInIzs6WMjIR0VMTBAEDWnth+1vt0dDNHrdztXhtRRTm/ZaEEp1e6nhE1ZqkxSgmJgZpaWmGaffu3QCAAQMG4MaNG7hx4wbmzZuHxMRERERE4Ndff0V4eLiUkYmIjMbf1R4/T2iPIcHeEEXgm30XMWTpMdzIKpA6GlG1JYgm9FKNSZMmYceOHbhw4QIEQXjo840bN2LYsGHIy8uDQqEo1zo1Gg3UajWys7Ph4OBg7MhEREax/cQNTN18CrnaEjjaWmHeK0HoFugmdSwiyUj1+20y9xgVFRUhMjISI0eOLLMUATD8w/m3UqTVaqHRaEpNRESmrneQJ3ZOfAZNa6uRlV+MUauPY8aOMygq4aU1oqpkMsVo69atyMrKwogRI8r8/Pbt25gxYwZGjx79r+uZNWsW1Gq1YfLy8qqEtERExlfX2Q4/jQvFyPa+AIDlh5LxyuIjuHonT+JkRNWHyVxK6969O5RKJbZv3/7QZxqNBs899xycnJywbds2WFlZPXI9Wq0WWq221He9vLx4KY2IzMqeMzfx3k8nkJVfDHuVArP6N8WLzTyljkVUZar1pbSrV69iz549GDVq1EOf5eTkoEePHrC3t8eWLVv+tRQBgEqlgoODQ6mJiMjcdAt0w66JHdC6bk3kaEsw4cd4fLjlFAqLdVJHI7JoJlGMVq5cCVdXV/Tq1avUfI1Gg+effx5KpRLbtm2DtbW1RAmJiKqep6MN1o1uiwnP+kMQgB+jUtD328O4mJEjdTQiiyV5MdLr9Vi5ciXCwsJK3VT9oBTl5eVh+fLl0Gg0SE9PR3p6OnQ6/hsTEVUPCrkM73VviNUjg1GrhhLn0nPQe+Fh/BR7TepoRBZJ8nuMfv/9d3Tv3h1JSUlo0KCBYf7+/fvx7LPPlvmd5ORk+Pj4lGv9fFyfiCxFRk4h3lmfgMMX7wAA+rWojRl9m8BOVb7XlxCZE6l+vyUvRpWNxYiILIlOL2LR/ouYv/s89CJQr5YdFg5tgcaeaqmjERlVtb75moiIykcuEzChS32sHxMKD7U1Lt/Ow8vfHcEPR6/Awv89l6hKsBgREZmhNj5O2DWxA7oGuKKoRI9pP5/Gm2vikF1QLHU0IrPGYkREZKZq2imxLKw1pr0YCCu5gF8S09FrwUHEp9yVOhqR2WIxIiIyY4IgIPwZX/w0th28nGxw7W4BBiw+iqV/XoZez0trRBXFYkREZAGCvByxc2IH9GrmgRK9iJm7ziJ8VQwy84qkjkZkVliMiIgshIO1Fb4Z0gIzX24ClUKGfUm30PPrP3Hs8h2poxGZDRYjIiILIggCXg2pi58ntIefix1uarQYuvQYvt5zATpeWiN6LBYjIiILFODugO1vPYNXWtWBXgT+t+c8hi2LQoamUOpoRCaNxYiIyELZKhWYNyAI8wcGwVYpx9HLd9Dz64M4cP6WYRmdXsTRS3fwc8J1HL10h2eVqNrj
m6+JiKqBS7dyMX5NHM6l3xuAdlxnPzT2dMDMnWeRlv3XWSQPtTWm9w5EjyYeUkUlAsAhQSoNixER0T2FxTrM3HkWPxy7+shlhPv/uWhYS5YjM6DTi4hOzkRGTiFc7a0R7OsEuUx4/BfNgFS/3xx5kIiomrC2kmNG3yZo6+uECWvjUda/FYu4V44+3X4GzwW6W8yPrCX6NTENn24/wzN+RsZ7jIiIqhmnGqoyS9EDIoC07EJEJ2dWVSSqoF8T0zAuMq5UKQKA9OxCjIuMw6+JaRIlM38sRkRE1UxGTvmeTCvvclS1dHoRn24/88gzfsC9M368kf7JsBgREVUzrvbWRl2OqlZ0cuZDZ4r+jmf8ng7vMSIiqmaCfZ3gobZGenbhIy+peajv3chLpqe8Z/LeXBOLUD9nBNVxRHMvRzSprYadij/7j8N/QkRE1YxcJmB670CMi4yDAJRZjmrVUKJYp4dcJq/qePQY5T2Tdze/GLtOpWPXqXQAgEwAGrjZ3ytK3o4IquOIBm41oJDz4tHf8XF9IqJqqqynmpzslNAUFKNEL6KdnzOWDm/NswwmRqcXETprLzJytGV+LgBwc7DGlwOCcOpGNk6kZuFEahZulHH5zdpKhqa11aXKUp2aNhCExz+NWNmvCuB7jCoJixER0aOV9eMWcyUT4RExyCvSoYW3IyJGBENtayV1VPqbwUuO4Vjyw4MD/9t7qDI0hUhIzcKJa1k4kZqNE9eykFNY8tA6nO2UCPJy/FtZUsPRVllqmap4VQCLUSVhMSIiqriE1CyErYhGdkExAj0csDo8GLVqqKSORQCiLt/BoO+PAbhXYu7kFRk+q0g50etFJN/JQ0LKg7KUhTNpGhTrHq4FPs62CPK6d69SQZEOX/yW9NAlWGO/HJTFqJKwGBERPZlz6RoMWxaN27la+LnYIXJUCDzUNlLHqtaKdXr0WnAQ52/mYkiwNz7r28Sol7O0JTqcuaG5d/nt2r3LcJdv55X7+wIAd7U1Dn3Q5akvq7EYVRIWIyKiJ3f5Vi6GLYvCjexC1Klpgx9HtYW3s63UsaqtxQcuYfYv5+Bsp8Tedzs9dImrMmTlF+Hk/ZK0LykDcSlZj/3O2jfaItTP+am2K9XvN29FJyKiR6rnUgMbxobCx9kW1+4WYMCSI7iYkSN1rGrp2t18fL3nAgBg6guNqqQUAYCjrRIdG7jgra71EdbOp1zfMeeXg7IYERHRv6pT0xYbxoSigVsN3NRoMXDJMSRez5Y6VrXz6fYzKCjWIdjHCf1b1pYkQ3V4OSiLERERPZargzXWjw5FszpqZOYVYcjSY4i9yjcrV5U9Z25i95mbUMgEfPZyk3I9Tl8ZHrwc9FFbF2D+LwdlMSIionKpaafEmlEhCPZxQk5hCYYti8ahC7eljmXxCop0+GT7aQBAeAdfNHCzlyzLg5eDluVBWZreO9Co7zOqaixGRERUbvbWVlg1Mhgd6tdCQbEOIyNisPvMTaljWbSFf1zAtbsFqO1og7e71pc6Dno08cDCoS0eOmvkrrY22qP6UmIxIiKiCrFRyrEsrDW6N3ZDkU6PsZGx+DnhutSxLNLFjBwsPXgZwL0zMbZK03gLuZuDNUQANVRy/G9Qc6x9oy0OfdDF7EsRwGJERERPQKWQ49uhLdGvRW3o9CImrU/AuugUqWNZFFEU8dHWRBTrRHQNcMVzgW5SRzI4kHQLANAlwA0vt6iNUD9ns7589nemUT2JiMjsKOQyzBsQBBulHGuiUjBl8ynkFekQ/oyv1NHM1t+HaDl/MwfHLmfC2kqGT15qLNkN12U5cP5eMerUwEXiJMbHYkRERE9MJhPwWd8msFMp8P2flzFjxxnkaUvwVhd/k/ohNwdljT8GAN0bu8PLyXReqnk7V4tT91/X0NECixEvpRER0VMRBAFTewZg8nMNAADzd5/H7F/OwcIHVjCqXxPTMC4y7qFSBADbEm7g18Q0CVKV7eCFe2eLmtR2gIu95Y2f
x2JERERPTRAETOxaH9NevPco95I/L+OjrYnQ61mOHkenF/Hp9jMPDcr6d59uPwOdifyzfHB/kSVeRgNYjIiIyIjCn/HF7H5NIQjAmqgUvLvxBEp0eqljmbTo5MwyzxQ9IAJIyy5EdLL0L9TU60X8ef/dVZ0auEqcpnKwGBERkVENDvbG14NbQCETsCX+Osb/GAdtiU7qWCarvOOKmcL4Y4k3spGZVwR7lQItvB2ljlMpWIyIiMjoXgryxOJhraBUyPDb6ZsYteo4CopYjspiTuOPPbiM1t6/FqzkllkhJN0rHx8fCILw0DR+/HgAQGFhIcaPHw9nZ2fUqFED/fv3x82bfMMqEZE56BbohpUj2sDGSo6DF25j+IooaAqLpY5lclrXrQlrq0f/HJvS+GOGx/QbWub9RYDExSgmJgZpaWmGaffu3QCAAQMGAADeeecdbN++HRs3bsSBAwdw48YN9OvXT8rIRERUAe39ayFyVDDsrRWIuXIXry6Nwt28IqljmZS5v51DYXHZ92GZ0vhj2fnFiEu5C8AyH9N/QNJi5OLiAnd3d8O0Y8cO+Pn5oVOnTsjOzsby5csxf/58dOnSBa1atcLKlStx5MgRHDt2TMrYRERUAa3qOmHtG23hZKfEqevZGPT9UWRopL9fxhQsOXAJSw8mAwDCQuvCQ136cpkpjT92+NJt6EWgvmsN1Ha0kTpOpTGZFzwWFRUhMjISkydPhiAIiI2NRXFxMbp162ZYJiAgAN7e3jh69Cjatm1b5nq0Wi20Wq3hb41GU+nZiYjo3zWprcb60W0xbHkUzt/MxcAlRxE5KgR1aprOiwur2k+x1zDrl3MAgA9fCMDojn74uHdjw5uvXe3vXT6T+kzRA5b+mP4DJnPn1NatW5GVlYURI0YAANLT06FUKuHo6FhqOTc3N6Snpz9yPbNmzYJarTZMXl5elZiaiIjKq76bPTaOaYc6NW1w5U4+Bi4+isu3cqWOJYk/zt3EB5tOAgBGd6yH0R39AABymYBQP2f0aW5a44+Jolgt7i8CTKgYLV++HD179oSnp+dTrWfq1KnIzs42TKmpqUZKSERET8vb2RY/jW0HPxc73MguxMAlx3A2rXqd2Y+9mok318RBpxfRr0VtTOkRIHWkxzp/MxfpmkJYW8nQxkf6m8Ark0kUo6tXr2LPnj0YNWqUYZ67uzuKioqQlZVVatmbN2/C3d39ketSqVRwcHAoNRERkelwV1tj/ZhQBHo44HauFoO/P4aE1CypY1WJ8zdzMDLiOAqL9Xi2oQvmvNIMMhM5K/RvDpzPAACE1nOGtZVc4jSVyySK0cqVK+Hq6opevXoZ5rVq1QpWVlbYu3evYV5SUhJSUlIQGhoqRUwiIjKSWjVUWDu6LVp4OyK7oBivLj2GY5fvSB2rUl3PKsDw5dHILihGC29HfPtqS7N5F5DhMpqF318EmEAx0uv1WLlyJcLCwqBQ/HUvuFqtRnh4OCZPnox9+/YhNjYWr7/+OkJDQx954zUREZkPtY0VIsND0M7PGXlFOoStiMa+pAypY1WKu3lFGL48CumaQvi71sCKsDawVZrM80//Kk9bgpjke4/pd2pomcOA/J3kxWjPnj1ISUnByJEjH/rsf//7H1588UX0798fHTt2hLu7OzZv3ixBSiIiqgx2KgVWjGiDrgGu0JboMXr1cew6ZTojyRtDflEJXo+IwaVbefBQW2P1yGDUtFNKHatcdHoREUeSUaTTw9VeBa+alvuY/gOCKIqmMVxvJdFoNFCr1cjOzub9RkREJqpYp8c76xOw42QaZAIw95UgvNKqjtSxnlqxTo83Vh/H/qRbcLS1wsYxoajvZi91rHL5NTENn24/U2qAWw+1Nab3DqyS9ypJ9fst+RkjIiIiK7kMXw9ugYGt60AvAu9tPIEfjl6ROtZT0etF/Oenk9ifdAvWVjIsD2tjVqVoXGRcqVIEAOnZhRgXGYdfEy3rrN7fsRgREZFJkMsEzO7X
DK+39wEATPv5NBbtvyRtqKcw65ez2BJ/HXKZgEWvtkKrujWljlQuOr2IT7efQVmXkx7M+3T7Gej0lnnBicWIiIhMhkwm4OMXA/FWF38AwJxfz+GL387B3O76+PtQH3P7N8OzAeZz03J0cuZDZ4r+TgSQll2I6OTMqgtVhViMiIjIpAiCgHefb4gpPe+9+PDbfZfw6fYz0JvJGYp/DvXR38zulcrIKd84duVdztywGBERkUka28kPM/o0BgBEHLmCDzadNPnLN3vPlj3Uhzlxtbd+/EIVWM7csBgREZHJei3UB18OCIJMADbGXsPEtfEoKtFLHatMsVczMf7H+0N9tDSPoT7KEuzrBA+1NR71Pm4B955OC/a1zKFBWIyIiMik9W9VB98ObQkruYCdp9IwNjIWhcU6qWOV8tBQH/3NY6iPsshlAqb3DgSAh8rRg7+n9w40mQFujY3FiIiITF7Pph5YOrw1VAoZ/jiXgddXxiBXWyJ1LADmPdTHo/Ro4oFFw1rCXV36cpm72hqLhrWskvcYSYUveCQiIrMRdfkOwlcdR662BM29HLHq9WCoba0ky5OZV4QBi4/g0q08+LvWwMYxoWbzVuvy0OlFRCdnIiOnEK729y6fVdWZIql+v1mMiIjIrJxIzcLwFffO0DTycMAP4cGoVUNV5Tnyi0owdGkUElKz4KG2xqZx7eDpaPlDZlQVvvmaiIioHIK8HLF+TFvUqqHC2TQNBi45irTsgirNUKzT4801cUhIzYKjrRV+CA9mKbIQLEZERGR2AtwdsGFMW3iqrXH5Vh4GLD6Kq3fyqmTb/xzqY8WINvB3NY+hPujxWIyIiMgs1XOpgQ1jQ+HjbItrdwswYPFRXLiZU6nbFEURn+8qPdRHS2/zGOqDyofFiIiIzFadmrbYMCYUDd3skZGjxcAlR5F4PbvStvf9n5ex7NC9oT6+eMW8hvqg8mExIiIis+bqYI11o9uiWR017uYXY8j3x3D8ivHH8fr7UB//90Ij9GtpXkN9UPmwGBERkdmraafEmlEhCPZxQo62BK8tj8bBC7eMtv6/D/UxpmM9vNGxntHWTaaFxYiIiCyCvbUVVo0MRscGLigo1iE84jh+P53+1Ov951AfH5jpUB9UPixGRERkMWyUciwd3grdG7uhSKfHuDVx+Dnh+hOv7+9DfXQJcDXroT6ofFiMiIjIoqgUcnw7tCX6tagNnV7EpPUJWBudUuH1/H2oj5bejvfHa+PPpqXjESYiIoujkMswb0AQXg3xhigCUzefwrKDl8v9/cy8IgxfHoV0TSHqu9bAihFtYKOUV2JiMhUsRkREZJFkMgGf9W2CMfdvlP5s51l8tec8HjcSVn5RCUZGxODSrTx4qq2xOjwYjraWM/4Z/TsWIyIisliCIGBKzwC8+1wDAMBXey7g811nH1mOinV6jIv8a6iP1eHB8FBzqI/qRCF1ACIiosokCALe6loftioFZuw4g6UHk5Gr1eGzvk0AwDB6vEsNFTYcT8WB87dgYyXnUB/VFIsRERFVC+HP+KKGSo4pm09hbXQKLmbkIDWzAOmawlLLyQTgu2EtOdRHNcViRERE1cagNt6wUSowaV08Yq7cLXMZvQhoi3VVnIxMBe8xIiKiaqVXUw+obawe+bkA4NPtZ6DT//tN2mSZWIyIiKhaiU7OxN384kd+LgJIyy5EdLLxx1sj08diRERE1UpGTuHjF6rAcmRZWIyIiKhacbW3NupyZFlYjIiIqFoJ9nWCh9oajxrxTADgobZGsK9TVcYiE8FiRERE1YpcJmB670AAeKgcPfh7eu9AyDlYbLXEYkRERNVOjyYeWDSsJdzVpS+XuautsWhYS/Ro4iFRMpIa32NERETVUo8mHngu0N3w5mtX+3uXz3imqHpjMSIiompLLhMQ6ucsdQwyIbyURkRERHQfixERERHRfZIXo+vXr2PYsGFwdnaGjY0NmjZtiuPHjxs+z83NxYQJE1CnTh3Y2Ngg
MDAQixcvljAxERERWSpJ7zG6e/cu2rdvj2effRa//PILXFxccOHCBdSs+deIxpMnT8Yff/yByMhI+Pj44Pfff8ebb74JT09PvPTSSxKmJyIiIksjaTGaM2cOvLy8sHLlSsM8X1/fUsscOXIEYWFh6Ny5MwBg9OjRWLJkCaKjo1mMiIiIyKgkvZS2bds2tG7dGgMGDICrqytatGiBpUuXllqmXbt22LZtG65fvw5RFLFv3z6cP38ezz//fJnr1Gq10Gg0pSYiIiKi8pC0GF2+fBmLFi1C/fr18dtvv2HcuHGYOHEiVq1aZVhm4cKFCAwMRJ06daBUKtGjRw98++236NixY5nrnDVrFtRqtWHy8vKqqt0hIiIiMyeIoihKtXGlUonWrVvjyJEjhnkTJ05ETEwMjh49CgCYN28eli5dinnz5qFu3br4888/MXXqVGzZsgXdunV7aJ1arRZardbwt0ajgZeXF7Kzs+Hg4FD5O0VERERPTaPRQK1WV/nvt6T3GHl4eCAwMLDUvEaNGmHTpk0AgIKCAnz44YfYsmULevXqBQBo1qwZEhISMG/evDKLkUqlgkqlqvzwREREZHEkLUbt27dHUlJSqXnnz59H3bp1AQDFxcUoLi6GTFb6ip9cLodery/XNh6cEOO9RkRERObjwe92lV/YEiUUHR0tKhQKcebMmeKFCxfENWvWiLa2tmJkZKRhmU6dOomNGzcW9+3bJ16+fFlcuXKlaG1tLX733Xfl2kZqaqoIgBMnTpw4ceJkhlNqampl1ZAySXqPEQDs2LEDU6dOxYULF+Dr64vJkyfjjTfeMHyenp6OqVOn4vfff0dmZibq1q2L0aNH45133oEgPH6gP71ejxs3bsDe3r5cy5uSB/dHpaamWuT9Udw/88b9M2/cP/NWXfbvzJkzaNiw4UNXjiqT5MWIHk2qG8+qCvfPvHH/zBv3z7xx/yqP5EOCEBEREZkKFiMiIiKi+1iMTJhKpcL06dMt9vUD3D/zxv0zb9w/88b9qzy8x4iIiIjoPp4xIiIiIrqPxYiIiIjoPhYjIiIiovtYjIiIiIjuYzGqRN9++y18fHxgbW2NkJAQREdHP3LZ06dPo3///vDx8YEgCPjqq68eWubPP/9E79694enpCUEQsHXr1oeWEUURH3/8MTw8PGBjY4Nu3brhwoULRtyrv0ixfyNGjIAgCKWmHj16GHGv/mLs/Zs1axbatGkDe3t7uLq6om/fvg+NFVhYWIjx48fD2dkZNWrUQP/+/XHz5k1j7xoAafavc+fODx2/sWPHGnvXABh//xYtWoRmzZrBwcEBDg4OCA0NxS+//FJqGXM+fuXZP3M+fn83e/ZsCIKASZMmlZpflccPkGYfzfkYfvLJJw9lDwgIKLWMMY4hi1ElWb9+PSZPnozp06cjLi4OQUFB6N69OzIyMspcPj8/H/Xq1cPs2bPh7u5e5jJ5eXkICgrCt99++8jtzp07FwsWLMDixYsRFRUFOzs7dO/eHYWFhUbZrwek2j8A6NGjB9LS0gzT2rVrn3p//qky9u/AgQMYP348jh07ht27d6O4uBjPP/888vLyDMu888472L59OzZu3IgDBw7gxo0b6Nevn8XsHwC88cYbpY7f3LlzzWL/6tSpg9mzZyM2NhbHjx9Hly5d0KdPH5w+fdqwjDkfv/LsH2C+x++BmJgYLFmyBM2aNXvos6o6foB0+wiY9zFs3LhxqeyHDh0q9blRjmGVjsxWjQQHB4vjx483/K3T6URPT09x1qxZj/1u3bp1xf/973//ugwAccuWLaXm6fV60d3dXfziiy8M87KyskSVSiWuXbu2QvkfR4r9E0VRDAsLE/v06VPBtBVX2fsniqKYkZEhAhAPHDggiuK9Y2VlZSVu3LjRsMzZs2dFAOLRo0crvhP/Qor9E8V7g0K//fbbTxK5Qqpi/0RRFGvWrCkuW7ZMFEXLO36iWHr/RNH8j19OTo5Yv359cffu3Q/tS1Ue
P1GUZh9F0byP4fTp08WgoKBHfs9Yx5BnjCpBUVERYmNj0a1bN8M8mUyGbt264ejRo5W23eTkZKSnp5farlqtRkhIiFG3K9X+PbB//364urqiYcOGGDduHO7cuWPU9VfV/mVnZwMAnJycAACxsbEoLi4utd2AgAB4e3ub5fH75/49sGbNGtSqVQtNmjTB1KlTkZ+fb7RtAlWzfzqdDuvWrUNeXh5CQ0MBWNbxK2v/HjDn4zd+/Hj06tWr1LofqKrjB0i3jw+Y8zG8cOECPD09Ua9ePbz66qtISUkxfGasY6h4qoRUptu3b0On08HNza3UfDc3N5w7d67Stpuenm7Yzj+3++AzY5Bq/4B7l9H69esHX19fXLp0CR9++CF69uyJo0ePQi6XG2UbVbF/er0ekyZNQvv27dGkSRMA946fUqmEo6PjQ9s1t+NX1v4BwNChQ1G3bl14enri5MmT+OCDD5CUlITNmzcbZbtA5e7fqVOnEBoaisLCQtSoUQNbtmxBYGAgAMs4fv+2f4B5H79169YhLi4OMTExZX5eVccPkG4fAfM+hiEhIYiIiEDDhg2RlpaGTz/9FB06dEBiYiLs7e2NdgxZjMisDB482PDfmzZtimbNmsHPzw/79+9H165dJUxWMePHj0diYuJD18ctxaP2b/To0Yb/3rRpU3h4eKBr1664dOkS/Pz8qjpmhTVs2BAJCQnIzs7GTz/9hLCwMBw4cKBUeTBnj9s/cz1+qampePvtt7F7925YW1tLHadSlHcfzfUYAkDPnj0N/71Zs2YICQlB3bp1sWHDBoSHhxttO7yUVglq1aoFuVz+0J3wN2/efOxNc0/jwbore7tS7V9Z6tWrh1q1auHixYtGW2dl79+ECROwY8cO7Nu3D3Xq1DHMd3d3R1FREbKysipluw9ItX9lCQkJAQCzOX5KpRL+/v5o1aoVZs2ahaCgIHz99dcALOP4/dv+lcVcjl9sbCwyMjLQsmVLKBQKKBQKHDhwAAsWLIBCoYBOp6uy4wdIt49lMZdjWBZHR0c0aNDAkN1Yx5DFqBIolUq0atUKe/fuNczT6/XYu3fvQ9frjcnX1xfu7u6ltqvRaBAVFWXU7Uq1f2W5du0a7ty5Aw8PD6Ots7L2TxRFTJgwAVu2bMEff/wBX1/fUp+3atUKVlZWpbablJSElJQUszh+j9u/siQkJACAWRy/suj1emi1WgDmf/zK8vf9K4u5HL+uXbvi1KlTSEhIMEytW7fGq6++ioSEBMjl8io7foB0+1gWczmGZcnNzcWlS5cM2Y12DMt9mzZVyLp160SVSiVGRESIZ86cEUePHi06OjqK6enpoiiK4muvvSZOmTLFsLxWqxXj4+PF+Ph40cPDQ3zvvffE+Ph48cKFC4ZlcnJyDMsAEOfPny/Gx8eLV69eNSwze/Zs0dHRUfz555/FkydPin369BF9fX3FgoICs9+/nJwc8b333hOPHj0qJicni3v27BFbtmwp1q9fXywsLDT5/Rs3bpyoVqvF/fv3i2lpaYYpPz/fsMzYsWNFb29v8Y8//hCPHz8uhoaGiqGhoUbdN6n27+LFi+J///tf8fjx42JycrL4888/i/Xq1RM7duxoFvs3ZcoU8cCBA2JycrJ48uRJccqUKaIgCOLvv/9uWMacj9/j9s/cj98/lfV0VlUdP6n20dyP4bvvvivu379fTE5OFg8fPix269ZNrFWrlpiRkWFYxhjHkMWoEi1cuFD09vYWlUqlGBwcLB47dszwWadOncSwsDDD38nJySKAh6ZOnToZltm3b1+Zy/x9PXq9Xpw2bZro5uYmqlQqsWvXrmJSUpJF7F9+fr74/PPPiy4uLqKVlZVYt25d8Y033jD8D83U96+szwGIK1euNCxTUFAgvvnmm2LNmjVFW1tb8eWXXxbT0tIsYv9SUlLEjh07ik5OTqJKpRL9/f3F999/X8zOzjaL/Rs5cqRYt25dUalUii4uLmLXrl1LlSJRNO/j
97j9M/fj909lFaOqPH6iWPX7aO7HcNCgQaKHh4eoVCrF2rVri4MGDRIvXrxYapvGOIaCKIpi+c8vEREREVku3mNEREREdB+LEREREdF9LEZERERE97EYEREREd3HYkRERER0H4sRERER0X0sRkRERET3sRgRERER3cdiRESV5sqVKxAEwTAe0/79+yEIwkODPBIRmQoWIyKqMu3atUNaWhrUarXUUYiIysRiRERVRqlUwt3dHYIgSB2l3IqKiqSOQERViMWIiJ6KXq/H3Llz4e/vD5VKBW9vb8ycObPMZf95KS0iIgKOjo7YunUr6tevD2tra3Tv3h2pqamP3F5RUREmTJgADw8PWFtbo27dupg1a5bh86ysLIwZMwZubm6wtrZGkyZNsGPHDsPnmzZtQuPGjaFSqeDj44Mvv/yy1Pp9fHwwY8YMDB8+HA4ODhg9ejQA4NChQ+jQoQNsbGzg5eWFiRMnIi8v70n/sRGRiWIxIqKnMnXqVMyePRvTpk3DmTNn8OOPP8LNza3c38/Pz8fMmTOxevVqHD58GFlZWRg8ePAjl1+wYAG2bduGDRs2ICkpCWvWrIGPjw+AeyWtZ8+eOHz4MCIjI3HmzBnMnj0bcrkcABAbG4uBAwdi8ODBOHXqFD755BNMmzYNERERpbYxb948BAUFIT4+HtOmTcOlS5fQo0cP9O/fHydPnsT69etx6NAhTJgwocL/vIjIxIlERE9Io9GIKpVKXLp0aZmfJycniwDE+Ph4URRFcd++fSIA8e7du6IoiuLKlStFAOKxY8cM3zl79qwIQIyKiipznW+99ZbYpUsXUa/XP/TZb7/9JspkMjEpKanM7w4dOlR87rnnSs17//33xcDAQMPfdevWFfv27VtqmfDwcHH06NGl5h08eFCUyWRiQUFBmdsiIvPEM0ZE9MTOnj0LrVaLrl27PvE6FAoF2rRpY/g7ICAAjo6OOHv2bJnLjxgxAgkJCWjYsCEmTpyI33//3fBZQkIC6tSpgwYNGjwyb/v27UvNa9++PS5cuACdTmeY17p161LLnDhxAhEREahRo4Zh6t69O/R6PZKTkyu8z0RkuhRSByAi82VjY1Pl22zZsiWSk5Pxyy+/YM+ePRg4cCC6deuGn376yWh57OzsSv2dm5uLMWPGYOLEiQ8t6+3tbZRtEpFp4BkjInpi9evXh42NDfbu3fvE6ygpKcHx48cNfyclJSErKwuNGjV65HccHBwwaNAgLF26FOvXr8emTZuQmZmJZs2a4dq1azh//nyZ32vUqBEOHz5cat7hw4fRoEEDw31IZWnZsiXOnDkDf3//hyalUlnBPSYiU8YzRkT0xKytrfHBBx/gP//5D5RKJdq3b49bt27h9OnTCA8PL9c6rKys8NZbb2HBggVQKBSYMGEC2rZti+Dg4DKXnz9/Pjw8PNCiRQvIZDJs3LgR7u7ucHR0RKdOndCxY0f0798f8+fPh7+/P86dOwdBENCjRw+8++67aNOmDWbMmIFBgwbh6NGj+Oabb/Ddd9/9a8YPPvgAbdu2xYQJEzBq1CjY2dnhzJkz2L17N7755psK/3MjItPFYkRET2XatGlQKBT4+OOPcePGDXh4eGDs2LHl/r6trS0++OADDB06FNevX0eHDh2wfPnyRy5vb2+PuXPn4sKFC5DL5WjTpg127doFmezeCfBNmzbhvffew5AhQ5CXlwd/f3/Mnj0bwL0zPxs2bMDHH3+MGTNmwMPDA//9738xYsSIf83YrFkzHDhwAP/3f/+HDh06QBRF+Pn5YdCgQeXeTyIyD4IoiqLUIYioeoqIiMCkSZM4RAgRmQzeY0RERER0H4sRERER0X28lEZERER0H88YEREREd3HYkRERER0H4sRERER0X0sRkRERET3sRgRERER3cdiRERERHQfixERERHRfSxGRERERPf9P6BNN2cOMtCxAAAAAElFTkSuQmCC",
      "text/plain": [
       "<Figure size 640x480 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# plot clip score as x-axis, fid score as y-axis, line chart\n",
    "import matplotlib.pyplot as plt\n",
    "plt.plot(clip_scores, fids, 'o-')\n",
    "plt.xlabel('clip score')\n",
    "plt.ylabel('fid score')\n",
    "plt.show()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.13 ('base')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "4cc247672a8bfe61dc951074f9ca89ab002dc0f7e14586a8bb0828228bebeefa"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}


================================================
FILE: text-image/fid_clip_score/run_generator.sh
================================================
#!/bin/bash
ps aux | grep -E 'run_watch.sh|watch.py' | grep -v grep | awk '{print $2}' | xargs -r kill -9 # kill any previous watchdog; skip the grep itself, -r avoids running kill with no PIDs
guidance_scales=(1.5 2.0 3.0 4.0 5.0 6.0 7.0 8.0)
for i in {0..7}
do
    echo ${i}
    # one GPU per guidance scale; per-process log files so the 8 jobs don't truncate each other's output
    CUDA_VISIBLE_DEVICES=${i} nohup python coco_sample_generator.py --guidance_scale ${guidance_scales[${i}]} --batch_size 16 --sample_step 20 > stable_generator_${i}.log 2>&1 &
done
wait
bash ~/release_watchdog.sh # start watchdog

================================================
FILE: text-image/fid_clip_score/run_generator_cn.sh
================================================
#!/bin/bash
ps aux | grep -E 'run_watch.sh|watch.py' | grep -v grep | awk '{print $2}' | xargs -r kill -9 # kill any previous watchdog; skip the grep itself, -r avoids running kill with no PIDs
guidance_scales=(1.5 2.0 3.0 4.0 5.0 6.0 7.0 8.0)
for i in {0..7}
do
    echo ${i}
    # one GPU per guidance scale; per-process log files so the 8 jobs don't truncate each other's output
    CUDA_VISIBLE_DEVICES=${i} nohup python coco_sample_generator.py --model_path ../pretrained_models/stable_cn --coco_cache_file ../dataset/coco/subset_cn.parquet --output_path ./output_cn --guidance_scale ${guidance_scales[${i}]} --batch_size 16 --sample_step 20 > stable_generator_cn_${i}.log 2>&1 &
done
wait
bash ~/release_watchdog.sh # start watchdog

================================================
FILE: text-image/imagenet_CN_zeroshot_data.py
================================================


imagenet_classnames = [
                        "丁鲷",
                        "金鱼",
                        "大白鲨",
                        "虎鲨",
                        "锤头鲨",
                        "电鳐",
                        "黄貂鱼",
                        "公鸡",
                        "母鸡",
                        "鸵鸟",
                        "燕雀",
                        "金翅雀",
                        "家朱雀",
                        "灯芯草雀",
                        "靛蓝雀",
                        "蓝鹀",
                        "夜莺",
                        "松鸦",
                        "喜鹊",
                        "山雀",
                        "河鸟",
                        "鸢(猛禽)",
                        "秃头鹰",
                        "秃鹫",
                        "大灰猫头鹰",
                        "欧洲火蝾螈",
                        "普通蝾螈",
                        "水蜥",
                        "斑点蝾螈",
                        "蝾螈",
                        "牛蛙",
                        "树蛙",
                        "尾蛙",
                        "红海龟",
                        "皮革龟",
                        "泥龟",
                        "淡水龟",
                        "箱龟",
                        "带状壁虎",
                        "普通鬣蜥",
                        "美国变色龙",
                        "鞭尾蜥蜴",
                        "飞龙科蜥蜴",
                        "褶边蜥蜴",
                        "鳄鱼蜥蜴",
                        "毒蜥",
                        "绿蜥蜴",
                        "非洲变色龙",
                        "科莫多蜥蜴",
                        "非洲鳄",
                        "美国鳄鱼",
                        "三角龙",
                        "雷蛇",
                        "环蛇",
                        "希腊蛇",
                        "绿蛇",
                        "国王蛇",
                        "袜带蛇",
                        "水蛇",
                        "藤蛇",
                        "夜蛇",
                        "大蟒蛇",
                        "岩石蟒蛇",
                        "印度眼镜蛇",
                        "绿曼巴",
                        "海蛇",
                        "角腹蛇",
                        "菱纹响尾蛇",
                        "角响尾蛇",
                        "三叶虫",
                        "盲蜘蛛",
                        "蝎子",
                        "黑金花园蜘蛛",
                        "谷仓蜘蛛",
                        "花园蜘蛛",
                        "黑寡妇蜘蛛",
                        "狼蛛",
                        "狼蜘蛛",
                        "壁虱",
                        "蜈蚣",
                        "黑松鸡",
                        "松鸡",
                        "披肩鸡",
                        "草原鸡",
                        "孔雀",
                        "鹌鹑",
                        "鹧鸪",
                        "非洲灰鹦鹉",
                        "金刚鹦鹉",
                        "硫冠鹦鹉",
                        "短尾鹦鹉",
                        "褐翅鸦鹃",
                        "食蜂鸟;蜂虎",
                        "犀鸟",
                        "蜂鸟",
                        "鹟䴕",
                        "巨嘴鸟;大嘴鸟",
                        "野鸭",
                        "红胸秋沙鸭",
                        "鹅",
                        "黑天鹅",
                        "大象",
                        "针鼹鼠",
                        "鸭嘴兽",
                        "沙袋鼠",
                        "考拉",
                        "袋熊",
                        "水母",
                        "海葵",
                        "脑珊瑚",
                        "扁形虫扁虫",
                        "线虫",
                        "海螺",
                        "蜗牛",
                        "鼻涕虫",
                        "海蛞蝓;海参",
                        "石鳖",
                        "鹦鹉螺",
                        "珍宝蟹",
                        "石蟹",
                        "招潮蟹",
                        "帝王蟹",
                        "美国龙虾",
                        "大螯虾",
                        "小龙虾",
                        "寄居蟹",
                        "等足目动物(明虾和螃蟹近亲)",
                        "白鹳",
                        "黑鹳",
                        "鹭",
                        "火烈鸟",
                        "小蓝鹭",
                        "美国鹭",
                        "麻鸦",
                        "鹤",
                        "秧鹤",
                        "欧洲水鸡",
                        "沼泽泥母鸡",
                        "鸨",
                        "红翻石鹬",
                        "红背鹬",
                        "红脚鹬",
                        "半蹼鹬",
                        "蛎鹬",
                        "鹈鹕",
                        "国王企鹅",
                        "信天翁",
                        "灰鲸",
                        "杀人鲸",
                        "海牛",
                        "海狮",
                        "吉娃娃",
                        "日本狆犬",
                        "马尔济斯犬",
                        "狮子狗",
                        "西施犬",
                        "布莱尼姆猎犬",
                        "巴比狗",
                        "玩具犬",
                        "罗得西亚长背猎狗",
                        "阿富汗猎犬",
                        "巴吉度猎犬",
                        "比格犬",
                        "侦探犬",
                        "蓝色快狗",
                        "黑褐猎浣熊犬",
                        "沃克猎犬",
                        "英国猎狐犬",
                        "美洲赤狗",
                        "俄罗斯猎狼犬",
                        "爱尔兰猎狼犬",
                        "意大利灰狗",
                        "惠比特犬",
                        "依比沙猎犬",
                        "挪威猎犬",
                        "奥达猎犬",
                        "沙克犬",
                        "苏格兰猎鹿犬",
                        "威玛猎犬",
                        "斯塔福德郡斗牛犬",
                        "美国斯塔福德郡梗",
                        "贝德灵顿梗",
                        "边境梗",
                        "凯丽蓝梗",
                        "爱尔兰梗",
                        "诺福克梗",
                        "诺维奇梗",
                        "约克犬;约克夏梗犬",
                        "刚毛猎狐梗",
                        "莱克兰梗",
                        "锡利哈姆梗",
                        "艾尔谷犬",
                        "凯恩梗",
                        "澳大利亚梗",
                        "丹迪丁蒙梗",
                        "波士顿梗",
                        "迷你雪纳瑞犬",
                        "巨型雪纳瑞犬",
                        "标准雪纳瑞犬",
                        "苏格兰梗犬",
                        "西藏梗",
                        "丝毛梗",
                        "爱尔兰软毛梗犬",
                        "西高地白梗",
                        "拉萨阿普索犬",
                        "平毛寻回犬",
                        "卷毛寻回犬",
                        "金毛猎犬",
                        "拉布拉多猎犬",
                        "乞沙比克猎犬",
                        "德国短毛指示犬",
                        "维兹拉犬",
                        "英国塞特犬",
                        "爱尔兰雪达犬",
                        "戈登雪达犬",
                        "布列塔尼犬猎犬",
                        "黄毛",
                        "英国史宾格犬",
                        "威尔士史宾格犬",
                        "可卡犬",
                        "萨塞克斯猎犬",
                        "爱尔兰水猎犬",
                        "哥威斯犬",
                        "舒柏奇犬",
                        "比利时牧羊犬",
                        "马里努阿犬",
                        "伯瑞犬",
                        "凯尔皮犬",
                        "匈牙利牧羊犬",
                        "老英国牧羊犬",
                        "喜乐蒂牧羊犬",
                        "牧羊犬",
                        "边境牧羊犬",
                        "法兰德斯牧牛狗",
                        "罗特韦尔犬",
                        "德国牧羊犬",
                        "多伯曼犬",
                        "鹿犬;迷你杜宾犬",
                        "大瑞士山地犬",
                        "伯恩山犬",
                        "阿策尔山犬",
                        "恩特尔布赫山犬",
                        "拳师狗",
                        "斗牛獒",
                        "藏獒",
                        "法国斗牛犬",
                        "大丹犬",
                        "圣伯纳德狗",
                        "爱斯基摩犬",
                        "阿拉斯加雪橇犬",
                        "哈士奇",
                        "达尔马提亚",
                        "狮毛狗",
                        "巴辛吉狗",
                        "八哥犬",
                        "莱昂贝格狗",
                        "纽芬兰犬",
                        "大白熊犬",
                        "萨摩耶犬",
                        "博美犬",
                        "松狮",
                        "凯斯犬",
                        "布鲁塞尔格林芬犬",
                        "彭布洛克威尔士科基犬",
                        "威尔士柯基犬",
                        "玩具贵宾犬",
                        "迷你贵宾犬",
                        "标准贵宾犬",
                        "墨西哥无毛犬",
                        "灰狼",
                        "白狼",
                        "红太狼",
                        "狼",
                        "澳洲野狗",
                        "豺",
                        "非洲猎犬",
                        "鬣狗",
                        "红狐狸",
                        "沙狐",
                        "北极狐狸",
                        "灰狐狸",
                        "虎斑猫",
                        "山猫",
                        "波斯猫",
                        "暹罗猫",
                        "埃及猫",
                        "美洲狮",
                        "猞猁",
                        "豹子",
                        "雪豹",
                        "美洲虎",
                        "狮子",
                        "老虎",
                        "猎豹",
                        "棕熊",
                        "美洲黑熊",
                        "冰熊",
                        "懒熊",
                        "獴",
                        "猫鼬",
                        "虎甲虫",
                        "瓢虫",
                        "土鳖虫",
                        "天牛",
                        "龟甲虫",
                        "粪甲虫",
                        "犀牛甲虫",
                        "象甲",
                        "苍蝇",
                        "蜜蜂",
                        "蚂蚁",
                        "蚱蜢",
                        "蟋蟀",
                        "竹节虫",
                        "蟑螂",
                        "螳螂",
                        "蝉",
                        "叶蝉",
                        "草蜻蛉",
                        "蜻蜓",
                        "豆娘",
                        "优红蛱蝶",
                        "小环蝴蝶",
                        "君主蝴蝶",
                        "菜粉蝶",
                        "白蝴蝶",
                        "灰蝶",
                        "海星",
                        "海胆",
                        "海黄瓜;海参",
                        "野兔",
                        "兔",
                        "安哥拉兔",
                        "仓鼠",
                        "刺猬",
                        "黑松鼠",
                        "土拨鼠",
                        "海狸",
                        "豚鼠",
                        "栗色马",
                        "斑马",
                        "猪",
                        "野猪",
                        "疣猪",
                        "河马",
                        "牛",
                        "水牛",
                        "野牛",
                        "公羊",
                        "大角羊",
                        "山羊",
                        "狷羚",
                        "黑斑羚",
                        "瞪羚",
                        "阿拉伯单峰骆驼",
                        "骆驼",
                        "黄鼠狼",
                        "水貂",
                        "臭猫",
                        "黑足鼬",
                        "水獭",
                        "臭鼬",
                        "獾",
                        "犰狳",
                        "树懒",
                        "猩猩",
                        "大猩猩",
                        "黑猩猩",
                        "长臂猿",
                        "合趾猿长臂猿",
                        "长尾猴",
                        "赤猴",
                        "狒狒",
                        "恒河猴",
                        "白头叶猴",
                        "疣猴",
                        "长鼻猴",
                        "狨(美洲产小型长尾猴)",
                        "卷尾猴",
                        "吼猴",
                        "伶猴",
                        "蜘蛛猴",
                        "松鼠猴",
                        "马达加斯加环尾狐猴",
                        "大狐猴",
                        "印度大象",
                        "非洲象",
                        "小熊猫",
                        "大熊猫",
                        "杖鱼",
                        "鳗鱼",
                        "银鲑",
                        "三色刺蝶鱼",
                        "海葵鱼",
                        "鲟鱼",
                        "雀鳝",
                        "狮子鱼",
                        "河豚",
                        "算盘",
                        "长袍",
                        "学位袍",
                        "手风琴",
                        "原声吉他",
                        "航空母舰",
                        "客机",
                        "飞艇",
                        "祭坛",
                        "救护车",
                        "水陆两用车",
                        "模拟时钟",
                        "蜂房",
                        "围裙",
                        "垃圾桶",
                        "攻击步枪",
                        "背包",
                        "面包店",
                        "平衡木",
                        "热气球",
                        "圆珠笔",
                        "创可贴",
                        "班卓琴",
                        "栏杆",
                        "杠铃",
                        "理发师的椅子",
                        "理发店",
                        "牲口棚",
                        "晴雨表",
                        "圆筒",
                        "园地小车",
                        "棒球",
                        "篮球",
                        "婴儿床",
                        "巴松管",
                        "游泳帽",
                        "沐浴毛巾",
                        "浴缸",
                        "沙滩车",
                        "灯塔",
                        "烧杯",
                        "熊皮高帽",
                        "啤酒瓶",
                        "啤酒杯",
                        "钟塔",
                        "(小儿用的)围嘴",
                        "串联自行车",
                        "比基尼",
                        "装订册",
                        "双筒望远镜",
                        "鸟舍",
                        "船库",
                        "双人雪橇",
                        "饰扣式领带",
                        "阔边女帽",
                        "书橱",
                        "书店",
                        "瓶盖",
                        "弓箭",
                        "蝴蝶结领结",
                        "铜制牌位",
                        "奶罩",
                        "防波堤",
                        "铠甲",
                        "扫帚",
                        "桶",
                        "扣环",
                        "防弹背心",
                        "动车",
                        "肉铺",
                        "出租车",
                        "大锅",
                        "蜡烛",
                        "大炮",
                        "独木舟",
                        "开瓶器",
                        "开衫",
                        "车镜",
                        "旋转木马",
                        "木匠的工具包",
                        "纸箱",
                        "车轮",
                        "取款机",
                        "盒式录音带",
                        "卡带播放器",
                        "城堡",
                        "双体船",
                        "CD播放器",
                        "大提琴",
                        "移动电话",
                        "铁链",
                        "围栏",
                        "链甲",
                        "电锯",
                        "箱子",
                        "梳妆台",
                        "编钟",
                        "中国橱柜",
                        "圣诞袜",
                        "教堂",
                        "电影院",
                        "切肉刀",
                        "悬崖屋",
                        "斗篷",
                        "木屐",
                        "鸡尾酒调酒器",
                        "咖啡杯",
                        "咖啡壶",
                        "螺旋结构(楼梯)",
                        "组合锁",
                        "电脑键盘",
                        "糖果",
                        "集装箱船",
                        "敞篷车",
                        "瓶塞钻",
                        "短号",
                        "牛仔靴",
                        "牛仔帽",
                        "摇篮",
                        "起重机",
                        "头盔",
                        "板条箱",
                        "小儿床",
                        "砂锅",
                        "槌球",
                        "拐杖",
                        "胸甲",
                        "大坝",
                        "书桌",
                        "台式电脑",
                        "有线电话",
                        "尿布湿",
                        "数字时钟",
                        "数字手表",
                        "餐桌板",
                        "抹布",
                        "洗碗机",
                        "盘式制动器",
                        "码头",
                        "狗拉雪橇",
                        "圆顶",
                        "门垫",
                        "钻井平台",
                        "鼓",
                        "鼓槌",
                        "哑铃",
                        "荷兰烤箱",
                        "电风扇",
                        "电吉他",
                        "电力机车",
                        "组合电视柜",
                        "信封",
                        "浓缩咖啡机",
                        "扑面粉",
                        "女用长围巾",
                        "文件",
                        "消防船",
                        "消防车",
                        "火炉栏",
                        "旗杆",
                        "长笛",
                        "折叠椅",
                        "橄榄球头盔",
                        "叉车",
                        "喷泉",
                        "钢笔",
                        "有四根帷柱的床",
                        "运货车厢",
                        "圆号",
                        "煎锅",
                        "裘皮大衣",
                        "垃圾车",
                        "防毒面具",
                        "汽油泵",
                        "高脚杯",
                        "卡丁车",
                        "高尔夫球",
                        "高尔夫球车",
                        "狭长小船",
                        "锣",
                        "礼服",
                        "钢琴",
                        "温室",
                        "散热器格栅",
                        "杂货店",
                        "断头台",
                        "小发夹",
                        "头发喷雾",
                        "半履带装甲车",
                        "锤子",
                        "大篮子",
                        "手摇鼓风机",
                        "手提电脑",
                        "手帕",
                        "硬盘",
                        "口琴",
                        "竖琴",
                        "收割机",
                        "斧头",
                        "手枪皮套",
                        "家庭影院",
                        "蜂窝",
                        "钩爪",
                        "衬裙",
                        "单杠",
                        "马车",
                        "沙漏",
                        "iPod",
                        "熨斗",
                        "南瓜灯笼",
                        "牛仔裤",
                        "吉普车",
                        "T恤衫",
                        "拼图",
                        "人力车",
                        "操纵杆",
                        "和服",
                        "护膝",
                        "蝴蝶结",
                        "大褂",
                        "长柄勺",
                        "灯罩",
                        "笔记本电脑",
                        "割草机",
                        "镜头盖",
                        "开信刀",
                        "图书馆",
                        "救生艇",
                        "点火器",
                        "豪华轿车",
                        "远洋班轮",
                        "唇膏",
                        "平底便鞋",
                        "洗剂",
                        "扬声器",
                        "放大镜",
                        "锯木厂",
                        "磁罗盘",
                        "邮袋",
                        "信箱",
                        "女游泳衣",
                        "有肩带浴衣",
                        "窨井盖",
                        "沙球(一种打击乐器)",
                        "马林巴木琴",
                        "面膜",
                        "火柴",
                        "花柱",
                        "迷宫",
                        "量杯",
                        "药箱",
                        "巨石",
                        "麦克风",
                        "微波炉",
                        "军装",
                        "奶桶",
                        "迷你巴士",
                        "迷你裙",
                        "面包车",
                        "导弹",
                        "连指手套",
                        "搅拌钵",
                        "活动房屋(由汽车拖拉的)",
                        "T型发动机小汽车",
                        "调制解调器",
                        "修道院",
                        "显示器",
                        "电瓶车",
                        "砂浆",
                        "学士",
                        "清真寺",
                        "蚊帐",
                        "摩托车",
                        "山地自行车",
                        "登山帐",
                        "鼠标",
                        "捕鼠器",
                        "搬家货车",
                        "动物的口套",
                        "金属钉子",
                        "颈托",
                        "项链",
                        "乳头(瓶)",
                        "笔记本",
                        "方尖碑",
                        "双簧管",
                        "陶笛",
                        "里程表",
                        "滤油器",
                        "风琴",
                        "示波器",
                        "罩裙",
                        "牛车",
                        "氧气面罩",
                        "包装",
                        "船桨",
                        "明轮",
                        "挂锁",
                        "画笔",
                        "睡衣",
                        "宫殿",
                        "排箫",
                        "纸巾",
                        "降落伞",
                        "双杠",
                        "公园长椅",
                        "停车收费表",
                        "客车",
                        "露台",
                        "付费电话",
                        "基座",
                        "铅笔盒",
                        "卷笔刀",
                        "香水(瓶)",
                        "培养皿",
                        "复印机",
                        "拨弦片",
                        "尖顶头盔",
                        "用尖板条连成的尖桩篱栅",
                        "皮卡",
                        "桥墩",
                        "存钱罐",
                        "药瓶",
                        "枕头",
                        "乒乓球",
                        "风车",
                        "海盗船",
                        [… remaining contents of imagenet_CN_zeroshot_data.py truncated in this preview …]
SYMBOL INDEX (25 symbols across 8 files)

FILE: detection/coco2yolo.py
  function convert (line 19) | def convert(size, box):
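The `convert(size, box)` helper above is the usual COCO-to-YOLO box transform: an absolute `[x_min, y_min, width, height]` box becomes a tuple normalized by image size, `[x_center, y_center, width, height]`. A minimal sketch of that convention (assumed from the standard conversion, not verified against the file):

```python
def convert(size, box):
    """COCO box [x_min, y_min, w, h] -> YOLO (cx, cy, w, h), normalized.

    size: (image_width, image_height); all outputs fall in [0, 1].
    """
    dw, dh = 1.0 / size[0], 1.0 / size[1]
    cx = box[0] + box[2] / 2.0  # absolute box-center x
    cy = box[1] + box[3] / 2.0  # absolute box-center y
    return (cx * dw, cy * dh, box[2] * dw, box[3] * dh)
```

For a 100x200 image, `convert((100, 200), (10, 20, 30, 40))` gives approximately `(0.25, 0.2, 0.3, 0.2)`.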

FILE: detection/coco_eval.py
  function transform_yolov5_result (line 8) | def transform_yolov5_result(result, filename2id):
  function coco_evaluate (line 19) | def coco_evaluate(gt_path, dt_path, yolov5_flag):
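`transform_yolov5_result` above presumably reshapes YOLOv5 detections into the flat detection list that pycocotools' `loadRes` expects. A hedged sketch of that reshaping, assuming (not verified from the file) each result row carries `(filename, x_min, y_min, x_max, y_max, confidence, class)`:

```python
def transform_yolov5_result(result, filename2id):
    """Convert YOLOv5 rows into COCO-style detection dicts.

    COCO detections use [x, y, width, height] boxes and numeric image ids,
    so each corner-format box is converted and filenames are mapped via
    the filename2id lookup.
    """
    detections = []
    for filename, x1, y1, x2, y2, conf, cls in result:
        detections.append({
            "image_id": filename2id[filename],
            "category_id": int(cls),
            "bbox": [x1, y1, x2 - x1, y2 - y1],  # top-left corner + size
            "score": float(conf),
        })
    return detections
```

The returned list can be dumped to JSON and fed to `COCO.loadRes` / `COCOeval` for the standard AP/AR summary.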

FILE: detection/vis_yolo_gt_dt.py
  function plot_bbox (line 24) | def plot_bbox(img_path, img_dir, out_dir, gt=None ,dt=None, cls2label=No...

FILE: detection/yolo2coco.py
  function train_test_val_split_random (line 24) | def train_test_val_split_random(img_paths,ratio_train=0.8,ratio_test=0.1...
  function train_test_val_split_by_files (line 33) | def train_test_val_split_by_files(img_paths, root_dir):
  function yolo2coco (line 48) | def yolo2coco(arg):
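`train_test_val_split_random` above matches a common shuffle-and-slice split. A minimal sketch under that assumption (the exact tie-breaking for non-divisible sizes in the repository may differ):

```python
import random

def train_test_val_split_random(img_paths, ratio_train=0.8,
                                ratio_test=0.1, ratio_val=0.1):
    """Shuffle the image paths once, then slice into three disjoint sets."""
    assert abs(ratio_train + ratio_test + ratio_val - 1.0) < 1e-9
    paths = list(img_paths)
    random.shuffle(paths)
    n_train = int(len(paths) * ratio_train)
    n_test = int(len(paths) * ratio_test)
    train = paths[:n_train]
    test = paths[n_train:n_train + n_test]
    val = paths[n_train + n_test:]  # remainder goes to the val split
    return train, test, val
```

With 100 paths and the default ratios this yields 80/10/10 items, and the three splits together cover every input exactly once.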

FILE: text-image/convert_diffusers_to_original_stable_diffusion.py
  function convert_unet_state_dict (line 90) | def convert_unet_state_dict(unet_state_dict):
  function reshape_weight_for_sd (line 161) | def reshape_weight_for_sd(w):
  function convert_vae_state_dict (line 166) | def convert_vae_state_dict(vae_state_dict):
  function convert_text_enc_state_dict (line 193) | def convert_text_enc_state_dict(text_enc_dict):

FILE: text-image/data_filter/wukong_filter.py
  class CsvDataset (line 28) | class CsvDataset(Dataset):
    method __init__ (line 29) | def __init__(self, input_filename, transforms, input_root, tokenizer, ...
    method __len__ (line 45) | def __len__(self):
    method __getitem__ (line 48) | def __getitem__(self, idx):

FILE: text-image/data_filter/wukong_reader.py
  class CsvDataset (line 17) | class CsvDataset(Dataset):
    method __init__ (line 18) | def __init__(self, input_filename, input_root, img_key, caption_key, t...
    method __len__ (line 39) | def __len__(self):
    method __getitem__ (line 42) | def __getitem__(self, idx):
  function process_pool_read_csv_dataset (line 49) | def process_pool_read_csv_dataset(input_root, input_filename, thres=0.20):

FILE: text-image/fid_clip_score/coco_sample_generator.py
  class COCOCaptionSubset (line 20) | class COCOCaptionSubset(Dataset):
    method __init__ (line 21) | def __init__(self, path, transform=None):
    method __len__ (line 24) | def __len__(self):
    method __getitem__ (line 27) | def __getitem__(self, idx):
  function save_images (line 31) | def save_images(images, image_paths, output_path):
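`save_images` in `coco_sample_generator.py` presumably writes each generated image to the output directory under a name derived from its source path; a hedged sketch of that pattern (the naming scheme is an assumption, not taken from the file):

```python
import os

def save_images(images, image_paths, output_path):
    """Save each PIL-style image under output_path, reusing the source
    file's base name so generated samples stay aligned with their prompts."""
    os.makedirs(output_path, exist_ok=True)
    for img, src in zip(images, image_paths):
        img.save(os.path.join(output_path, os.path.basename(src)))
```

Any object with a `save(path)` method (e.g. `PIL.Image.Image`, or the images returned by a diffusers pipeline) works here.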
Condensed preview — 22 files, each showing path, character count, and a content snippet (263K chars in the full structured content).
[
  {
    "path": ".gitignore",
    "chars": 1799,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "LICENSE",
    "chars": 1069,
    "preview": "MIT License\n\nCopyright (c) 2021 Weifeng Chen\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
  },
  {
    "path": "README.md",
    "chars": 2158,
    "preview": "<h2 align=\"center\">\nSome Scripts For DEEP LEARNING\n\n# 1. detection \n## yolo2coco.py\n将yolo格式数据集修改成coco格式。`$ROOT_PATH`是根目录"
  },
  {
    "path": "detection/coco2yolo.py",
    "chars": 4419,
    "preview": "\"\"\"\n2021/1/24\nCOCO 格式的数据集转化为 YOLO 格式的数据集,源代码采取遍历方式,太慢,\n这里改进了一下时间复杂度,从O(nm)改为O(n+m),但是牺牲了一些内存占用\n--json_path 输入的json文件路径\n-"
  },
  {
    "path": "detection/coco_eval.py",
    "chars": 1698,
    "preview": "import json\nimport argparse\nfrom pycocotools.coco import COCO \nfrom pycocotools.cocoeval import COCOeval \nimport os\nimpo"
  },
  {
    "path": "detection/vis_yolo_gt_dt.py",
    "chars": 4559,
    "preview": "import cv2\nimport os\nfrom glob import glob\nimport random\nimport matplotlib.pyplot as plt \nimport argparse\nfrom tqdm impo"
  },
  {
    "path": "detection/yolo2coco.py",
    "chars": 6945,
    "preview": "\"\"\"\nYOLO 格式的数据集转化为 COCO 格式的数据集\n--root_dir 输入根路径\n--save_path 保存文件的名字(没有random_split时使用)\n--random_split 有则会随机划分数据集,然后再分别保存"
  },
  {
    "path": "text-image/convert_diffusers_to_original_stable_diffusion.py",
    "chars": 8870,
    "preview": "# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint.\n# *Only* converts the UNet, VAE,"
  },
  {
    "path": "text-image/data_filter/data_filter_demo.ipynb",
    "chars": 22870,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# 集成了水印、美学、CLIP模型,用于给图文质量打分\"\n   ]\n "
  },
  {
    "path": "text-image/data_filter/wukong_filter.py",
    "chars": 3719,
    "preview": "# %%\nfrom torch.utils.data import Dataset, ConcatDataset\nfrom torchvision import transforms\nimport os\nfrom PIL import Im"
  },
  {
    "path": "text-image/data_filter/wukong_reader.py",
    "chars": 2794,
    "preview": "from torch.utils.data import Dataset, ConcatDataset\nfrom torchvision import transforms\nimport os\nfrom PIL import Image\nf"
  },
  {
    "path": "text-image/fid_clip_score/.gitignore",
    "chars": 9,
    "preview": "/output*\n"
  },
  {
    "path": "text-image/fid_clip_score/coco_sample_generator.py",
    "chars": 2312,
    "preview": "from torch.utils.data import Dataset, DataLoader\nimport pandas as pd \nimport os\nfrom diffusers import StableDiffusionPip"
  },
  {
    "path": "text-image/fid_clip_score/compute_fid.ipynb",
    "chars": 2310,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# FID指标计算\"\n   ]\n  },\n  {\n   \"cell_t"
  },
  {
    "path": "text-image/fid_clip_score/fid_clip_coco.ipynb",
    "chars": 46400,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"reference: https://wandb.ai/dalle-m"
  },
  {
    "path": "text-image/fid_clip_score/fid_clip_coco_cn.ipynb",
    "chars": 61710,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": "
  },
  {
    "path": "text-image/fid_clip_score/run_generator.sh",
    "chars": 421,
    "preview": "#!/bin/bash\nps aux | grep -E 'run_watch.sh|watch.py' |awk '{print $2}' | xargs kill -9 # kill previous watchdog\nguidance"
  },
  {
    "path": "text-image/fid_clip_score/run_generator_cn.sh",
    "chars": 543,
    "preview": "#!/bin/bash\nps aux | grep -E 'run_watch.sh|watch.py' |awk '{print $2}' | xargs kill -9 # kill previous watchdog\nguidance"
  },
  {
    "path": "text-image/imagenet_CN_zeroshot_data.py",
    "chars": 33562,
    "preview": "\n\nimagenet_classnames = [\n                        \"丁鲷\",\n                        \"金鱼\",\n                        \"大白鲨\",\n   "
  },
  {
    "path": "text-image/iterable_tar_unzip.sh",
    "chars": 212,
    "preview": "# for name in `ls -d */`;\n# do;\nname=\"image_part12/\"\nfor i in `ls $name*.tar`;\ndo \nmkdir ./project/dataset/laion_chinese"
  },
  {
    "path": "text-image/save_hg_ckpt.ipynb",
    "chars": 8692,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Roberta-base 转换为hugging face版\"\n  "
  },
  {
    "path": "text-image/zeroshot_retrieval_evaluation.ipynb",
    "chars": 17400,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": "
  }
]

About this extraction

This page contains the full source code of the Weifeng-Chen/DL_tools GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 22 files (229.0 KB), approximately 94.8k tokens, and a symbol index with 25 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.