[
  {
    "path": "README.md",
    "content": "# pytorch-lightning from Beginner to Expert (0)\n\n[Overview](./pytorch-lightning入门到精通（0）.md)\n\n# pytorch-lightning from Beginner to Expert (1)\n\n[MNIST Handwritten Digit Classification](./pytorch-lightning入门到精通（1）.md)\n\n# pytorch-lightning from Beginner to Expert (2)\n\n[LightningModule](./pytorch-lightning入门到精通（2）.md)\n\n# pytorch-lightning from Beginner to Expert (3)\n\n[Trainer](./pytorch-lightning入门到精通（3）.md)\n\n# pytorch-lightning from Beginner to Expert (4)\n\n[Callback](./pytorch-lightning入门到精通（4）.md)\n\n# pytorch-lightning from Beginner to Expert (5)\n\n[Writing Custom Callbacks](./pytorch-lightning入门到精通（5）.md)\n\n# pytorch-lightning from Beginner to Expert (6)\n\n[CCF2020 Competition: General Audio Classification / Scoring 0.97585955 in 5 Days](./pytorch-lightning入门到精通（6）.md)"
  },
  {
    "path": "codes/MNIST.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"import torch\\n\",\n    \"import torch.nn.functional as F\\n\",\n    \"from torch import nn\\n\",\n    \"from torch.nn import *\\n\",\n    \"from torch.optim import *\\n\",\n    \"from torch.optim.lr_scheduler import *\\n\",\n    \"from torch.utils.data import Dataset, DataLoader\\n\",\n    \"from torchvision import datasets, transforms\\n\",\n    \"\\n\",\n    \"import pytorch_lightning as pl\\n\",\n    \"from pytorch_lightning.callbacks import Callback\\n\",\n    \"\\n\",\n    \"import os, gc, time, argparse, json\\n\",\n    \"import math, random, cv2\\n\",\n    \"import numpy as np\\n\",\n    \"import pandas as pd\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import seaborn as sns\\n\",\n    \"from tqdm import tqdm\\n\",\n    \"from sklearn.metrics import *\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": true\n    }\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"class CrossEntropyLoss(Module):\\n\",\n    \"    def __init__(self):\\n\",\n    \"        super().__init__()\\n\",\n    \"\\n\",\n    \"    def forward(self, input, target, reduction=\\\"mean\\\"):\\n\",\n    \"        N, C = input.size()\\n\",\n    \"\\n\",\n    \"        if target.dim() > 1:\\n\",\n    \"            one_hot = target\\n\",\n    \"        else:\\n\",\n    \"            one_hot = torch.zeros((N, C), dtype=input.dtype, device=input.device)\\n\",\n    \"            one_hot.scatter_(1, target.reshape(N, 1), 1)\\n\",\n    \"\\n\",\n    \"        loss = -(one_hot * F.log_softmax(input, 1)).sum(1)\\n\",\n    \"        if reduction == \\\"mean\\\":\\n\",\n    \"            return loss.mean(0)\\n\",\n    \"        elif reduction == \\\"sum\\\":\\n\",\n    \"            return loss.sum(0)\\n\",\n    \"        else:\\n\",\n    \" 
           return loss\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"class ClassificationMetric(object):\\n\",\n    \"    def __init__(self, accuracy=True, recall=True, precision=True, f1=True, average=\\\"macro\\\"):\\n\",\n    \"        self.accuracy = accuracy\\n\",\n    \"        self.recall = recall\\n\",\n    \"        self.precision = precision\\n\",\n    \"        self.f1 = f1\\n\",\n    \"        self.average = average\\n\",\n    \"\\n\",\n    \"        self.preds = []\\n\",\n    \"        self.target = []\\n\",\n    \"\\n\",\n    \"    def reset(self):\\n\",\n    \"        self.preds.clear()\\n\",\n    \"        self.target.clear()\\n\",\n    \"        gc.collect()\\n\",\n    \"\\n\",\n    \"    def update(self, preds, target):\\n\",\n    \"        preds = list(preds.cpu().detach().argmax(1).numpy())\\n\",\n    \"        target = list(target.cpu().detach().argmax(1).numpy()) if target.dim() > 1 else list(target.cpu().detach().numpy())\\n\",\n    \"        self.preds += preds\\n\",\n    \"        self.target += target\\n\",\n    \"\\n\",\n    \"    def compute(self):\\n\",\n    \"        metrics = []\\n\",\n    \"        if self.accuracy:\\n\",\n    \"            metrics.append(accuracy_score(self.target, self.preds))\\n\",\n    \"        if self.recall:\\n\",\n    \"            metrics.append(recall_score(self.target, self.preds, labels=list(set(self.preds)), average=self.average))\\n\",\n    \"        if self.precision:\\n\",\n    \"            metrics.append(precision_score(self.target, self.preds, labels=list(set(self.preds)), average=self.average))\\n\",\n    \"        if self.f1:\\n\",\n    \"            metrics.append(f1_score(self.target, self.preds, labels=list(set(self.preds)), average=self.average))\\n\",\n    \"        self.reset()\\n\",\n    \"        return metrics\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {\n    \"jupyter\": {\n     \"source_hidden\": true\n    }\n   },\n   \"outputs\": [],\n   
\"source\": [\n    \"class FlexibleTqdm(Callback):\\n\",\n    \"    def __init__(self, steps, column_width=10):\\n\",\n    \"        super(FlexibleTqdm, self).__init__()\\n\",\n    \"        self.steps = steps\\n\",\n    \"        self.column_width = column_width\\n\",\n    \"        self.info = \\\"\\\\rEpoch_%d %s%% [%s]\\\"\\n\",\n    \"\\n\",\n    \"    def on_train_start(self, trainer, module):\\n\",\n    \"        history = module.history\\n\",\n    \"        self.row = \\\"-\\\" * (self.column_width + 1) * (len(history) + 2) + \\\"-\\\"\\n\",\n    \"        title = \\\"|\\\"\\n\",\n    \"        title += \\\"epoch\\\".center(self.column_width) + \\\"|\\\"\\n\",\n    \"        title += \\\"time\\\".center(self.column_width) + \\\"|\\\"\\n\",\n    \"        for i in history.keys():\\n\",\n    \"            title += i.center(self.column_width) + \\\"|\\\"\\n\",\n    \"        print(self.row)\\n\",\n    \"        print(title)\\n\",\n    \"        print(self.row)\\n\",\n    \"\\n\",\n    \"    def on_train_batch_end(self, trainer, module, outputs, batch, batch_idx, dataloader_idx):\\n\",\n    \"        current_index = int((batch_idx + 1) * 100 / self.steps)\\n\",\n    \"        tqdm = [\\\".\\\"] * 100\\n\",\n    \"        for i in range(current_index - 1):\\n\",\n    \"            tqdm[i] = \\\"=\\\"\\n\",\n    \"        if current_index:\\n\",\n    \"            tqdm[current_index - 1] = \\\">\\\"\\n\",\n    \"        print(self.info % (module.current_epoch, str(current_index).rjust(3), \\\"\\\".join(tqdm)), end=\\\"\\\")\\n\",\n    \"\\n\",\n    \"    def on_epoch_start(self, trainer, module):\\n\",\n    \"        print(self.info % (module.current_epoch, \\\"  0\\\", \\\".\\\" * 100), end=\\\"\\\")\\n\",\n    \"        self.begin = time.perf_counter()\\n\",\n    \"\\n\",\n    \"    def on_epoch_end(self, trainer, module):\\n\",\n    \"        self.end = time.perf_counter()\\n\",\n    \"        history = module.history\\n\",\n    \"        detail = 
\\\"\\\\r|\\\"\\n\",\n    \"        detail += str(module.current_epoch).center(self.column_width) + \\\"|\\\"\\n\",\n    \"        detail += (\\\"%d\\\" % (self.end - self.begin)).center(self.column_width) + \\\"|\\\"\\n\",\n    \"        for j in history.values():\\n\",\n    \"            detail += (\\\"%.06f\\\" % j[-1]).center(self.column_width) + \\\"|\\\"\\n\",\n    \"        print(\\\"\\\\r\\\" + \\\" \\\" * 120, end=\\\"\\\")\\n\",\n    \"        print(detail)\\n\",\n    \"        print(self.row)\\n\",\n    \"        \\n\",\n    \"        \\n\",\n    \"class LearningCurve(Callback):\\n\",\n    \"    def __init__(self, figsize=(12, 4), names=(\\\"loss\\\", \\\"acc\\\", \\\"f1\\\")):\\n\",\n    \"        super(LearningCurve, self).__init__()\\n\",\n    \"        self.figsize = figsize\\n\",\n    \"        self.names = names\\n\",\n    \"\\n\",\n    \"    def on_fit_end(self, trainer, module):\\n\",\n    \"        history = module.history\\n\",\n    \"        plt.figure(figsize=self.figsize)\\n\",\n    \"        for i, j in enumerate(self.names):\\n\",\n    \"            plt.subplot(1, len(self.names), i + 1)\\n\",\n    \"            plt.title(j + \\\"/val_\\\" + j)\\n\",\n    \"            plt.plot(history[j], \\\"--o\\\", color='r', label=j)\\n\",\n    \"            plt.plot(history[\\\"val_\\\" + j], \\\"-*\\\", color='g', label=\\\"val_\\\" + j)\\n\",\n    \"            plt.legend()\\n\",\n    \"        plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"epochs = 10\\n\",\n    \"batch_size = 128\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"train_transform = transforms.Compose([\\n\",\n    \"   transforms.ToTensor(),\\n\",\n    \"   transforms.Normalize((0.1307,), (0.3081,))\\n\",\n    \"])\\n\",\n    \"val_transform = transforms.Compose([\\n\",\n    \"   
transforms.ToTensor(),\\n\",\n    \"   transforms.Normalize((0.1307,), (0.3081,))\\n\",\n    \"])\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"train_dataset = datasets.MNIST(\\\"data\\\", train=True, download=True, transform=train_transform)\\n\",\n    \"train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True)\\n\",\n    \"val_dataset = datasets.MNIST(\\\"data\\\", train=False, transform=val_transform)\\n\",\n    \"val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=False, num_workers=4, pin_memory=True)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"class CustomModel(pl.LightningModule):\\n\",\n    \"    def __init__(self):\\n\",\n    \"        super().__init__()\\n\",\n    \"        self.conv1 = Conv2d(1, 10, 5)\\n\",\n    \"        self.conv2 = Conv2d(10, 20, 3)\\n\",\n    \"        self.fc1 = Linear(20*10*10, 500)\\n\",\n    \"        self.fc2 = Linear(500, 10)\\n\",\n    \"        \\n\",\n    \"        self.train_criterion = CrossEntropyLoss()\\n\",\n    \"        self.val_criterion = CrossEntropyLoss()\\n\",\n    \"\\n\",\n    \"        self.train_metric = ClassificationMetric(recall=False, precision=False)\\n\",\n    \"        self.val_metric = ClassificationMetric(recall=False, precision=False)\\n\",\n    \"\\n\",\n    \"        self.history = {\\n\",\n    \"            \\\"loss\\\": [], \\\"acc\\\": [], \\\"f1\\\": [],\\n\",\n    \"            \\\"val_loss\\\": [], \\\"val_acc\\\": [], \\\"val_f1\\\": [],\\n\",\n    \"        }\\n\",\n    \"        \\n\",\n    \"    def forward(self,x):\\n\",\n    \"        in_size = x.size(0)\\n\",\n    \"        out = self.conv1(x)\\n\",\n    \"        out = F.relu(out)\\n\",\n    \"        out = F.max_pool2d(out, 2, 
2)\\n\",\n    \"        out = self.conv2(out)\\n\",\n    \"        out = F.relu(out)\\n\",\n    \"        out = out.view(in_size, -1)\\n\",\n    \"        out = self.fc1(out)\\n\",\n    \"        out = F.relu(out)\\n\",\n    \"        out = self.fc2(out)\\n\",\n    \"        out = F.log_softmax(out, dim=1)\\n\",\n    \"        return out\\n\",\n    \"    \\n\",\n    \"    def training_step(self, batch, idx):\\n\",\n    \"        x, y = batch\\n\",\n    \"        _y = self(x)\\n\",\n    \"        loss = self.train_criterion(_y, y)\\n\",\n    \"        self.train_metric.update(_y, y)\\n\",\n    \"        return loss\\n\",\n    \"\\n\",\n    \"    def training_epoch_end(self, outs):\\n\",\n    \"        loss = 0.\\n\",\n    \"        for out in outs:\\n\",\n    \"            loss += out[\\\"loss\\\"].cpu().detach().item()\\n\",\n    \"        loss /= len(outs)\\n\",\n    \"        acc, f1 = self.train_metric.compute()\\n\",\n    \"\\n\",\n    \"        self.history[\\\"loss\\\"].append(loss)\\n\",\n    \"        self.history[\\\"acc\\\"].append(acc)\\n\",\n    \"        self.history[\\\"f1\\\"].append(f1)\\n\",\n    \"\\n\",\n    \"    def validation_step(self, batch, idx):\\n\",\n    \"        x, y = batch\\n\",\n    \"        _y = self(x)\\n\",\n    \"        val_loss = self.val_criterion(_y, y)\\n\",\n    \"        self.val_metric.update(_y, y)\\n\",\n    \"        return val_loss\\n\",\n    \"\\n\",\n    \"    def validation_epoch_end(self, outs):\\n\",\n    \"        val_loss = sum(outs).item() / len(outs)\\n\",\n    \"        val_acc, val_f1 = self.val_metric.compute()\\n\",\n    \"\\n\",\n    \"        self.history[\\\"val_loss\\\"].append(val_loss)\\n\",\n    \"        self.history[\\\"val_acc\\\"].append(val_acc)\\n\",\n    \"        self.history[\\\"val_f1\\\"].append(val_f1)\\n\",\n    \"\\n\",\n    \"    def configure_optimizers(self):\\n\",\n    \"        return Adam(self.parameters())\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   
\"execution_count\": 8,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"model = CustomModel()\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"metadata\": {},\n   \"outputs\": [\n    {\n     \"name\": \"stderr\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"GPU available: True, used: True\\n\",\n      \"TPU available: False, using: 0 TPU cores\\n\",\n      \"LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"trainer_params = {\\n\",\n    \"    \\\"gpus\\\": 1,\\n\",\n    \"    \\\"max_epochs\\\": epochs,  # 1000\\n\",\n    \"    \\\"checkpoint_callback\\\": False,  # True\\n\",\n    \"    \\\"logger\\\": False,  # TensorBoardLogger\\n\",\n    \"    \\\"progress_bar_refresh_rate\\\": 0,  # 1\\n\",\n    \"    \\\"num_sanity_val_steps\\\": 0,  # 2\\n\",\n    \"    \\\"callbacks\\\": [\\n\",\n    \"        FlexibleTqdm(len(train_dataset) // batch_size, column_width=12),\\n\",\n    \"        LearningCurve(figsize=(12, 4), names=(\\\"loss\\\", \\\"acc\\\", \\\"f1\\\")),\\n\",\n    \"    ],  # None\\n\",\n    \"}\\n\",\n    \"trainer = pl.Trainer(**trainer_params)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"metadata\": {},\n   \"outputs\": [\n    {\n     \"name\": \"stderr\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"\\n\",\n      \"  | Name            | Type             | Params\\n\",\n      \"-----------------------------------------------------\\n\",\n      \"0 | conv1           | Conv2d           | 260   \\n\",\n      \"1 | conv2           | Conv2d           | 1.8 K \\n\",\n      \"2 | fc1             | Linear           | 1.0 M \\n\",\n      \"3 | fc2             | Linear           | 5.0 K \\n\",\n      \"4 | train_criterion | CrossEntropyLoss | 0     \\n\",\n      \"5 | val_criterion   | CrossEntropyLoss | 0     \\n\"\n     ]\n    },\n    {\n     \"name\": \"stdout\",\n     \"output_type\": 
\"stream\",\n     \"text\": [\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|   epoch    |    time    |    loss    |    acc     |     f1     |  val_loss  |  val_acc   |   val_f1   |\\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     0      |     7      |  0.185612  |  0.944383  |  0.943983  |  0.060953  |  0.980000  |  0.980004  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     1      |     7      |  0.052078  |  0.984217  |  0.984119  |  0.038682  |  0.986900  |  0.986838  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     2      |     7      |  0.033784  |  0.989483  |  0.989413  |  0.033859  |  0.987900  |  0.987861  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     3      |     8      |  0.023178  |  0.992483  |  0.992433  |  0.035293  |  0.988800  |  0.988779  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     4      |     7      |  0.017861  |  0.994350  |  0.994348  |  0.034236  |  0.989100  |  0.989022  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     5      |     7      |  0.012569  |  0.996100  |  0.996074  |  0.040690  |  0.987100  |  0.987041  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     6      |     7      |  0.010513  |  0.996683  |  
0.996684  |  0.042072  |  0.988500  |  0.988386  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     7      |     7      |  0.010734  |  0.996367  |  0.996361  |  0.050810  |  0.987400  |  0.987184  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     8      |     7      |  0.008306  |  0.997350  |  0.997347  |  0.041368  |  0.989500  |  0.989442  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\",\n      \"|     9      |     8      |  0.005323  |  0.998183  |  0.998188  |  0.039716  |  0.990400  |  0.990296  |               \\n\",\n      \"---------------------------------------------------------------------------------------------------------\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": 
\"iVBORw0KGgoAAAANSUhEUgAAAs8AAAEICAYAAACgdxkmAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAABr7UlEQVR4nO3dd3hUZfbA8e9JIaF3aQECiNKLIKKuoGBFV1CRYuwFsbe14lpQ1P2tu3ZRLKtoWEBFxYoiILKiEpTeJKEFAoReQ0hyfn+8k2QICZkkk0zJ+TzPPJm5bc4ETu6Ze98iqooxxhhjjDGmeBGBDsAYY4wxxphQYcWzMcYYY4wxPrLi2RhjjDHGGB9Z8WyMMcYYY4yPrHg2xhhjjDHGR1Y8G2OMMcYY4yMrno0xxhhjjPGRFc8hQETWisjZgY7jWHyJUUSeEJEPKyomY0zRRGSWiNwY6DiMCSUicqKILBCRvSJyZwW957UiMseH7U4XkT9FZJ+IDKqA0CotK54NIjJNRM4NdBzGGN+IyJsiMiLQcRhTCT0AzFTVmsBiEZkpIrtFZG1hG4vIqSLycwXFNhp4VVVrqOpnIjJERH4WkQMiMquCYqgUrHiu5ESkOtAT+DHQsRhjfHYB8HWggzCmEmoJLPU83w+8C9x/jO0vpOJy1Ts2gB3Ai8BzFfT+lYYVzyFERGJE5EUR2eR5vCgiMZ51DUTkSxHZJSI7ROQnEYnwrHtQRDZ6bjOtFJH+XoftD/wPqC8iB0Wkntf7dReRbSISLSJtRGSGiGz3LEsUkTpl/DwXi8hST8yzRKS917pCYxaRXiKSJCJ7RGSLiPy7LDEYU95E5CERSfb8X14mIpd4rbtJRJZ7rTvJs7y5iEwRkXRPzr3qtU8XYBeQ7smdTl7rGnry+DgRqev5m5AuIjs9z+NKGPsx876YOAv9bMaEKhGZAZwFvCoi+4BdqvoBkHKM3QYAX4vIWBF5vsDxPheRez3Pi/w74WNsyUBr4AtPs40YVZ2uqpOBTSU5limeFc+hZRTQG+gGdAV6AY961t0HpAINgUbAI4CKyInA7cDJnttM5wFrvY45APhKVTcBc4HLvNZdAXysqocBAZ4FmgLtgebAE6X9ICJyAvBf4G5PzF/jkr5KMTG/BLykqrWANsDk0sZgTAVJBs4AagNPAh+KSBMRuRyXQ1cDtYCLge0iEgl8CawD4oFmwESv4+Xm7CFgCjDca90Q4EdV3Yr7+/4f3NWoFsBB4FVKpsi8P1acRX22Er63MUFFVfsBPwG3e5pGrDrW9iLSBHc+/gN3vhsqIuJZVxc4l/zcLvTvRAliawOsB/7qie1QiT6cKRErnkNLAjBaVbeqajouwa7yrDsMNAFaquphVf1JVRXIBmKADiISraprVTXZ65gDyL+lNAHPidiT4MM8y1DV1ar6vaoe8rz3v4G+ZfgsQ3EFwPee4vx5oCpwWjExHwaOF5EGqrpPVX8pQwzGlDtV/UhVN6lqjqpOAv7EffG9Efg/VZ2nzmpVXedZ1xS4X1X3q2qGqnp3FvK+DTwBl6e5riA/Z7er6ieqekBV9wJjKGHOFpP3x4qzqM9mTGUyAPjWcy7+CVBcgQwwGJjruXB1rL8TJghZ8RxamuKu8uRa51kG8E9gNfCdiKSIyEPgTn64q7tPAFtFZKKINAUQkc7AblXd4DnGJ8Cpnm+7fYAcXMIjIo08+24UkT3Ah0ADf30WVc0BNgDNjhUzcANwArBCROaJyEVliMGYciciV4vrnb9LRHYBnXC50xx3tamg5sA6Vc0q5Fh1gHZAbgekmUA1ETlFROJxd6U+9WxbTVzHwnWenJ0N1PFcMfY19mPlfZFxHuOzGVOZ5F2c8hTQE8m/U3QFkJi74TH+TpggZMVzaNmEuwWbq4VnGaq6V1XvU9XWuFuk9+a2E1bVCar6F8++CvzDs7/3VWdUdSfwHe6q8BXARE/CAzzj2bezp8nElbhbun75LJ4r3c2BjceKWV
X/VNXhwHGeZR+L6/RoTNARkZbAW7hmSPVVtQ6wBJc7G3BNjwraALQQkahC1p0HzFDVbADPz8m4E/Jw4EvPVWZwTblOBE7x5Gyf3LBK8BGOlffHirOoz2ZMpSAi0bi7NN97Lf4vMNjzd+EU3AWr4v5OmCBkxXNo+S/wqKdTUAPgMdyVIETkIhE53lOE7sY1fcgRNyZlP3EdCzNw7R5zPMcbAHxV4D0m4NopDvY8z1UT2AfsFpFmHLt3sS8mAxeKSH/PH5n7gEPAz8eKWUSuFJGGnivVuzzHyjn68MYEheq44jMdQESuw11RAngb+JuI9BDneM9J9DcgDXhORKqLSKyInO7Zp6icHYpr1lUwZw8Cu8R1BH68FPEfK++PFWdRn82YsCEiESISC0S7lxIrIlU8q/8CLFLVPbnbq+ofwDZcfkxT1V2eVcf6O1GW+CI98UUBEZ74ost6XGPFc6h5GkgCFgGLgd89ywDaAtNxJ7q5wOuqOhPXdvg5XMJuxl2xfdhz+7cD+bd/c031HGuzqi70Wv4kcBKuMP8K11Gp1FR1Je4q1iue2P6K6+iQWVTMnl3PB5aK6+n8EjBMVQ+WJRZjyouqLgP+hcvJLUBn3Og2qOpHuHbIE4C9wGdAPc/V5L8Cx+M6AKWS39HoPODbAu/xK27IrKbAN16rXsT1I9gG/FJwPx8VmfdFxXmsz1aK9zcmmPXBfUH9mvxOud951hU1RN0E4Gy8vuge6+9EGV3liWksrq31QdwVblNGkn9X3lQmIjIEGKyqQwIdizGmeCLSCzcBgnUiMibIicgy3Dl2WaBjMf5nV54rr13AC4EOwhhTIqVpemGMqUCephvjrXAOX3bl2fiViHxD/lA83p5R1WcqOh5jzLGJyBu4JlQFfaiqIys6HmNM4UTkDI5smpVHVWtUcDiVmhXPxhhjjDHG+KiwIYaCVoMGDTQ+Pj7QYRgTNObPn79NVRv6sq2IvAtcBGxV1aN6cns6pL2EG9HhAHCtqv7uWXcN+bNZPq2q7xf3fpavxhzJ8tWY0HGsfA2p4jk+Pp6kpKRAh2FM0BCRksza9h5ueubxRay/ADfSSlvcGKRjgVO8hjnriRtOab6ITPWMC14ky1djjmT5akzoOFa+WodBYyoJVZ0N7DjGJgNxnVzUM+15Hc9sk+cB36vqDs8J+HvckIHGmHJi+WpM8LLi2RiTqxluZrhcqZ5lRS03xgSO5asxAWLFszHGb0RkhIgkiUhSenp6oMMxxhyD5asxpRNSbZ5N+Dl8+DCpqalkZGQEOpSgFhsbS1xcHNHR5Tqz6kagudfrOM+yjcCZBZbPKuwAqjoOGAfQs2dPG8rHmPJj+WpMgFjxbAIqNTWVmjVrEh8fj+s8bgpSVbZv305qaiqtWrUqz7eaCtwuIhNxHZB2q2qaiEwDnhGRup7tziV/unRjTGBYvhoTIOHTbCMxEeLjISLC/UxMDHRExgcZGRnUr1/fCudjEBHq169f5qvzIvJfYC5wooikisgNIjJSRHInwvgaSAFWA28BtwKo6g7gKWCe5zHas8wYU07nHstXY8qBn/I1PK48JybCiBFw4IB7vW6dew2QkBC4uIxPrHAunj9+R6o6vJj1CtxWxLp3gXfLHIQx4aQczz2Wr8b4mR/zNTyuPI8alf/LyHXggFtujDHGlIeHH7ZzjzGh4pFH/Jav4VE8r19fsuXGeKlRo0agQzDGVJSS3rY9cAAWLIBJk+DJJ+GKK+Cnn9y61NTC97FzjzH+U5Kczc6GlBT4+mt44QW4+WZ4+mm3bsOGwvcpRb6GR7ONFi3c5ffClpvwkpjoviWuX+/+fceMsaY5xhjfHOu2bd++sGIFrFwJXbvCX/4Cq1bBiSfm7y8CLVvCJZe413FxhZ+Q7dxjjH8UlbMHD7o8XbECMjLgppvc+t69wXumzHr1YNAg99yPtWJ4XHkeMwaqVTtyWbVqbrkJH7lJtG4dqOYnkZ866Kgq999/P506daJz58
5MmjQJgLS0NPr06UO3bt3o1KkTP/30E9nZ2Vx77bV5277wwgt+icEYU46Kum171VXQvDmccw7cfjt89plbFx8Po0fD5MmwcCHs3w9r1sDll7v1zz5r5x5jytNDDxWeszfdBL16wdVXw+OP56+76y54+22YMwfS02H7dnjnHbfOj7VieFx5zr3yeOutsGeP+xbxzDN2RTIUnXnm0cuGDHH/tkW1L7zrLvdvvW0bDB585PpZs3x+6ylTprBgwQIWLlzItm3bOPnkk+nTpw8TJkzgvPPOY9SoUWRnZ3PgwAEWLFjAxo0bWbJkCQC7du0q0cc0JmwF292hd9+F335zxW9Rt2dV4fXX3VXmdu2gSRO3vEoV+Pvfiz527ucKps9rTEkFU87Oneu+vC5a5HI2La3obT/91OVr69b5y668sujt/Ziv4XHlGdyH/8c/3PP//c/+eIWjotoXbt/ul8PPmTOH4cOHExkZSaNGjejbty/z5s3j5JNP5j//+Q9PPPEEixcvpmbNmrRu3ZqUlBTuuOMOvv32W2rVquWXGIwJaeVxd6i49o45OfDnn/Dxx/DYYzBwYP6VYYA333RXjmNjoWbNwt+jZUu45Rbo1w+aNnXNM3yVkABr17o41q61c48JLYHI2S1b4Pvv4V//cleOu3VzuQPw66/w4ouwebO7E1SnTuHv0bKla47Rrp37kusrP+VreFx5ztWpk7tyWfDqpAkdx7pSXFR7pZYt3c8GDUp0pdlXffr0Yfbs2Xz11Vdce+213HvvvVx99dUsXLiQadOm8cYbbzB58mTefddGhjIhqqRXng4ccCfG2FjXjGHSJNi40d0ePXjw6G2vuw7GjoXatfMfDz7oTqyrVsG8eUeuq10bmjVzxy3Y3vGGG+DHH2HcOLds+HBXHIOL6cQT4dRT89//u++gVi1XEBdsPwnWzMKEnpLma3a2+z+f++XxlVdcLm3Y4K7yZmYeuX1uzr71lsvFOnXgwgvdXeCsLHjvvSNztU4daNzYPS+sjfL118PWrXDPPe5q8aWX5r9X06bQpUv+9iNGwG23Qe5susGas6oaMo8ePXqoCS/Lli3zfeMPP1StVk3VfT92j2rV3PIyqF69uqqqfvLJJ3ruuedqVlaWbt26VVu0aKFpaWm6du1azcrKUlXVV155Re+66y5NT0/X3bt3q6rq4sWLtWvXrmWKwReF/a6AJA2C3CzsYfkaIgrLq9hY1VdfdevXrVO98UbV889X7dxZtW5dt81//+vWz5rlXteufeQxCj7OOkv1pJNU27RRbdBAdfFit/+rrxa+/apVqi1bFn28HTvc/l9/rfrOO6pJSaoHDvj2eVu2VBVxP8v496MkLF9NmRWWr1Wrqj7/fP42Y8aoDh2qevrpqi1aqEZFqf71r/nrmzZ1Od627bFz9owzXM63aKE6erTbd/v2wrd98km3vlmzwtfXq+fWp6aqvvCC6g8/qKan+/6ZA5Czx8pXcetDQ8+ePTXJuxelCXnLly+nffv2vu9QDm2zatSowb59+1BVHnjgAb755htEhEcffZShQ4fy/vvv889//pPo6Ghq1KjB+PHj2bNnD9dddx05OTkAPPvss1xwwQVliqM4hf2uRGS+qvYs1zcuJcvXENG8eeFNourUgZ073ZXlU091I0s0a5b/c+BA6NjRXbXKzIQaNdyV5KLuDuXeli1o717YtAl27z7yMWyYu1JW2DlKxF1NC7EJlixfTZkVlWPR0flXkM880+V08+YuX5s3h5NOyu8TtHt3/t2YkuZsdvaR+bprl/vZqZO7ghwRUXTOes6XoeJY+Rp+xfMFF7jbB//5T8UEZcqkxMVzJWbFs/GLrCw3TvH+/XDRRf492RV1i3XcuNJ9yS1NMR7ELF9NqaxZ45ofjRgBkZH+LU4tZ4t0rHz1qcOgiJwvIitFZLWIPFTI+j4i8ruIZInIYK/lZ4nIAq9HhogM8qx7T0TWeK3rVrqPV0B2Nixd6pdDGWNMWM
jMhG++ccM7NWniOsY99phbV9QYp6UZqzghwZ10W7bMHxO5tCdhsGFITeWk6kabePJJ15mudWsYOdL1D/BnvoLlbCkVWzyLSCTwGnAB0AEYLiIdCmy2HrgWmOC9UFVnqmo3Ve0G9AMOAN95bXJ/7npVXVDaD3GENm0gOdkvhzLGmJB16FD+8+uvhwEDXAe8c85xI1PkzpLn75OdP0ef8PeJ3ZhglZOT39l26lQ3AciTT7rmUM8/7+qaE08sn+LUcrbEfLny3AtYraopqpoJTAQGem+gqmtVdRFwrHsGg4FvVLV8h8Jo0wZ27HDtcIwxJhwVNRTUnj0wcaIbqq1+fTdNLbiJP7780k0aMGECXHYZVK/u1gX7yc6GgjOhrqh8zcyEb791U0g3bQovveSW9+vnhlhMS3OTfdx3X/5YxsGer1ApctaXoeqaAd7zj6YCp5TivYYB/y6wbIyIPAb8ADykqocK7iQiI4ARAC18uS3Rpo37mZLiGsgbY0w4KWwoqBtvdFenli1zJ+TGjd34qbkd6nr3PvYxExLC8gRnTMAVNb30m2+6SUD27HFXlwcMgJ6e5rU1a+ZPG18Yy9eAq5BxnkWkCdAZmOa1+GFgM1AFGAc8CIwuuK+qjvOsp2fPnsX3buzUyU21Ghtb9sCNMSbYjBp19Fj2GRmwZAnccYcbQ/XUU13HImNMYBWWrwcOwPz5bkSZSy6Bs8+2miXE+FI8bwSae72O8ywriSHAp6p6OHeBqubOuXhIRP4D/K2Exyxc27YwfrxfDmWMMUFj9243wUBhPdnBdZb+d8Gbe8aYgMjIgK+/LjpfDx50kwqZkORLm+d5QFsRaSUiVXDNL6aW8H2GA//1XuC5Go2ICDAIWFLCYxZN1WYZNMaEvsOH859ffLGb9auoK8ql7W1vjPGP7Oz8sZbHjXN9CyKKKLMsX0NascWzqmYBt+OaXCwHJqvqUhEZLSIXA4jIySKSClwOvCkieWPFiUg87sr1jwUOnSgii4HFQAPgaT98Huecc9z4pSYspe1No+97fdm8b3OFv3eNGjWKXLd27Vo6depUgdGYsHToEHzxBVxxhWu7nNv5+amn4Jdf4P33K8VQUMaEBFX47Te4+243IckEz6Bjw4fDtGluKmvL17DjU5tnVf0a+LrAsse8ns/DNecobN+1uE6HBZf3K0mgJdKkSf4wTCbsPDX7Keasn8PoH0fz+oWvBzocY/xj9Wp49lmYMsUVzPXquRnBDhxws/316eO2O8XTX9vPM20aY0ogKwtGj3bFcnIyxMTAhRfmD1rQsCGce657HhFh+RpmKqTDYIVr3dr9h87MhCpVAh2N8dHd397Ngs0Lilz/0/qfyNH80RDHJo1lbNJYIiSCM1qcUeg+3Rp348XzXyzymA899BDNmzfntttuA+CJJ54gKiqKmTNnsnPnTg4fPszTTz/NwIEDizxGYTIyMrjllltISkoiKiqKf//735x11lksXbqU6667jszMTHJycvjkk09o2rQpQ4YMITU1lezsbP7+978zdOjQEr2fCXKFTSs/fDj8/LMbMq57d3fL96OPXAeiYcNcJ6Lo6MKPZ73tjSk/heVrQgJs2AB//OGaUEVFuTbNrVvDo4+6vK1du/DjWb6GnfAsntu0yR9f8IQTAh2N8ZNeTXuRsjOFbQe3kaM5REgEDao1oE3dNqU+5tChQ7n77rvziufJkyczbdo07rzzTmrVqsW2bdvo3bs3F198MZI77JcPXnvtNUSExYsXs2LFCs4991xWrVrFG2+8wV133UVCQgKZmZlkZ2fz9ddf07RpU7766isAdu/eXerPY4JQYUNVXXutGxlj505XRE+Y4CZASE93V7CMMYFRWL5ef71rNrVypWtysW0bVK3qvvzaBbpKKXyLZ3C3Uqx4DhnHukKc65Yvb2Hc7+OIjYolMzuTy9pfVqamG927d2fr1q1s2rSJ9PR06tatS+PGjbnnnnuYPXs2ERERbNy4kS1btt
C4cWOfjztnzhzuuOMOANq1a0fLli1ZtWoVp556KmPGjCE1NZVLL72Utm3b0rlzZ+677z4efPBBLrroIs44o/Cr6CZEFTZUVVaW642fmAh//Wv+ciucjQmswvI1M9PVE0895e4KVa3qllvhXGn5MtpG6GnXDh55xM28Y8LKlv1bGNljJL/c8Asje4z0S6fByy+/nI8//phJkyYxdOhQEhMTSU9PZ/78+SxYsIBGjRqRkZHhh+jhiiuuYOrUqVStWpUBAwYwY8YMTjjhBH7//Xc6d+7Mo48+yujRRw13bkJVVlbRQ1VlZLhOgTVrVmxMxpiirV9f+PLsbNc84/jjKzYeE5TC88pz/frWkzVMTRk6Je/5axe+5pdjDh06lJtuuolt27bx448/MnnyZI477jiio6OZOXMm64oqfo7hjDPOIDExkX79+rFq1SrWr1/PiSeeSEpKCq1bt+bOO+9k/fr1LFq0iHbt2lGvXj2uvPJK6tSpw9tvv+2Xz2UCSBW++gruv7/obWyoKmOCx4YNrjjWIuZis3w1XsLzyjO4CQVSUgIdhQkBHTt2ZO/evTRr1owmTZqQkJBAUlISnTt3Zvz48bRr167Ex7z11lvJycmhc+fODB06lPfee4+YmBgmT55Mp06d6NatG0uWLOHqq69m8eLF9OrVi27duvHkk0/y6KOPlsOnNBXm99+hf3/XHCMnB+65x4aqMiZY7dnjmmqccAJMmuSGuc1tlpHL8tUUIFrUt6wg1LNnT01KSvJt40GD3NBPS/w394rxv+XLl9O+fftAhxESCvtdich8Ve0ZoJCOqUT5Gi5UoVs32LQJnnjCdTyKji66976pVCxfg9Azz7jcTEhwedmypeWrAY6dr+HZbANcp8HvvnMnsxKMkmCMMSWye7ebFvvee91QVRMnQtOmRw5bZUNVGRMcVN0kRNWruztEd97pxmPu6VUjWb6aYoRvs402bdzc8ZsrfhY6E94WL15Mt27djnickjtxhak8Dh+GV191HYhGj3aziQG0b1/0eK8mLAVy1lNTAvPmwZlnwsCB8NJLblmNGkcWzibs+SNfw7d4bt3a/UxODmwcplih1HQIoHPnzixYsOCIx6+//lqu7+mv35GInC8iK0VktYg8VMj6liLyg4gsEpFZIhLnte4fIrLE86i8s7iowmefQceObqzmzp1h/nwYMiTQkZkA8Z711J8sX/1k3Tp3JblXL1i+HF5/HT75JNBRmQDxR76Gd7MNcMXzX/4S2FhMkWJjY9m+fTv169cv0SQklYmqsn37dmJjY8t0HBGJBF4DzgFSgXkiMlVVl3lt9jwwXlXfF5F+wLPAVSJyIXAS0A2IAWaJyDequqdMQYUiERg3zs0w9uWXMGCANQ2rRLJyskjekcyy9GUM+XgIWTlZeetyZz2NjYrl4KiDZXofy1c/mjnTTXs/ahQ88ADUqhXoiEwFytEcUvek0vaVtmRmZ+YtL0u+hm/xHB8PY8fC6acHOhJzDHFxcaSmppKenh7oUIJabGwscXFxxW94bL2A1aqaAiAiE4GBgPfJuANwr+f5TOAzr+WzVTULyBKRRcD5wOSyBhWUCnYYuvtud3V59Gho1QrGj4c6dVwBbUJO2t40hn0yjEmDJ9G4RuGTH+UWyUvTl7IsfRlL05eydOtSVm5fecQJuFp0NTKyMsjRHKpFVeOS9pfw/LnP+yNMy1dfFczXJ590fRGqVYMbb4SrroJzzoFmzQIdqSklX3I2IyuDP7f/yYptK1ixbQXLty1nxbYVrNy+kgOHDxy1fWxkLJd1uKxU+Rq+f/mjo2HkyEBHYYoRHR1Nq1atAh1GZdEM2OD1OhUo2Fh7IXAp8BJwCVBTROp7lj8uIv8CqgFnceRJHAARGQGMAGgRquOiFjY97z33uL8pF1/siucGDQIboykT79u2L1/wMqt3rHYF8talLNu2rNAiOb5OPB0bduT848+nY8OOdGjYgfYN23P/d/fnzXqakZ1BrZ
haRZ7cS8jy1ReF5et117nmVZdf7ornyEgrnENcbs4+OetJRp81Oq9AXrFtBSu2u59rdq5BcU0cBaFlnZa0a9COvi370q5BO9o1aMe7f7zLh4s/pEpkFTKzM0udr+FbPIMb53nDBujbN9CRGBMq/ga8KiLXArOBjUC2qn4nIicDPwPpwFwgu+DOqjoOGAdu6KuKCtqvCpueF+C449zJOEj4ciXGHKnqmKpkZOXPFpp729Zbqzqt6NCwAxccfwEdGnag43Edad+gPdWrVC/0mLmzno7oMYJx88eRti+tXD9DAZavheWrKjRs6MZtDhKWr6VTMGffmP8Gb8x/I399VFVObHAivZr14uouV+cVyW3rt6VadLWjjvfSry/5JV/Du3j+xz/g009h69ZAR2JMMNgINPd6HedZlkdVN+GuZCEiNYDLVHWXZ90YYIxn3QRgVfmHHABFTc+7aVOZDuuvk+fh7MPszdzLA98/kHf19PULXy9TbOFOVZmfNp/BHQYzccnEvHbKERJBuwbtuLnHzZze/HTaNWhXZJFclPKY9dTD8tUXReXrtm1l6ovgz2I3KyeLR2c8avlaAvsy95G4KJFWdVqxfNvyvOWREkm3xt2499R7Oa35abSo3YII8X3sC3/la3gXz61bQ3o67N0LNWsGOhpjAm0e0FZEWuFOwsOAK7w3EJEGwA5VzQEeBt71LI8E6qjqdhHpAnQBvqvI4CuEqru9m5p69Loy3tZ+8scnmbNuDn/77m/cf9r97Dm0hz2H9rD70O6850Ut252R//pg1pEdW/zZSc1fguUq246DO/hw0Ye888c7LNqyiKpRVWldtzV/bv+TmKgYMrMz6duyL3eecmfAYjwGy1dflEO+qioPTn+Qn9b9xIgvRnDTSTexL3Mf+w/vdz8z9x/xfN9hz0/PNt7PdxzcccSxgzFfIXhydln6MsbOG8v7C99nb+ZeujXuRp8WfZizYU5eU4tezXpxRecrij9YOQrv4tl7xI1u3QIaijGBpqpZInI7MA2IBN5V1aUiMhpIUtWpwJnAsyKiuNvAt3l2jwZ+8oyIsge40tMZKXyoup74Bw646XkPep3YSjA97+Hsw/y540/XfjZ9GaNnjyZHc/LWJy5OJHFxYqH7RkoktWJqHfE4rvpxHF/veGpVca8F4Yc1P7B462IO5xzO27djw468mfQmQzoOoW7VuqX7HfiJd5viir7KlqM5zFgzg3f+eIcpy6eQmZ1Jz6Y9GXvhWIZ3Gs51n1/H2a3ODlQzC59Zvvrgp59g+3aIiYFDh/KX+5ivqsqW/VtYunUpS9OXsmTrEt7+/e28drMAX6z6gi9WfXHUvlERUdSoUoPq0dWpXqV63vOG1RoSXyee6tFumary0/qfWLFtxRH52rhGY56c9SRXdrmSNvXalO334AeBzNnD2Yf5bMVnvJ70OrPWzqJKZBWGdBzCrT1vpXdcby6bfFkgm0YVKnyn5wb44w846SQ3nuOll5ZfYMYEiE336yeqbkSNl1+GW2+FU08lbcxDDOu9kUm/xNH40eeOmnEsMzuTP7f/mTcSQ+7PVdtX5TULEIQWtVuQlZPFlv1byMrJokpkFXo3683tvW4nvk58XpFcO7Y2VaOq+jRk4y1f3sK438dRJbIKh7IOcUqzU9ibuZel6UuJiYzh4hMv5pqu13Bum3OJjowuj9/YUXI0h6pjqh7RyS5XdEQ0q+5YRcvaLcttSMoNuzfw3oL3eHfBu6zdtZa6sXW5ssuV3ND9Bro27lou71lSlq9+NGMG/PWv7grzbbeR9tpzx8zX9P3pLNm6JG/UlKXp7uF9Zbhe1Xq0rdeW7Qe2s273Og7nHCYmMoaz4s/ikTMeIb5OvCuSq1SnSmQVn0P1ztfMrEz6xrt+WLPWzkJRTmt+Gld1uYohHYdQr2o9//x+fHA4+zA1nq1RaM5WxJXx1D2pvDX/Ld76/S3S9qURXyeeW3rewnXdrqNh9Ybl+t6+KPP03CJyPq
43byTwtqo+V2B9H+BF3K2hYar6sde6bGCx5+V6Vb3Ys7wVMBGoD8wHrlLVo/8Fy8ImSjHGFCcnxxXMb77pRtX4179AhKfq/syc+W/y+GUDuLNXN5YunXzEkGV/7vjziCK5Tb02dGzYkYEnDswbjaFdg3ZUja6ad/KMjYolMzuTjsd15PKOpe98WFgntZ+H/Mwfm//g/QXvM2HJBD5a9hHHVT+OhM4JXN31aro17uanX5ijqqzavooZa2bww5ofmLV2Vt5JWBAURXCF8uGcw7R6qRX1q9anZ9OeRzya1WxW6oI6MzuTL1Z+wdt/vM201dNQlP6t+vNs/2cZ1G4QsVFlGxvdBKlvv4VLLnGze06fDo0a8VSrZcyZ/yaPDDqHa7s1Z+m8sXkF8tKtS0k/kD8cau2Y2nQ8riOD2w+m43Ed6diwIx2P60ij6o0QkaPytVXdVpzR8oxSh1tYvk4ZOoUNuzcwYfEEPlj0Abd8dQt3fnMnF55wIVd1uYoL215ITFSMP35beQ5lHWLepnn8uPZHflz3I//b8L+jcjaXIPR7vx+943rnPY6rflyZY1BVZqyZwetJr/P5is/J0RwuaHsBb/V8i/OPP5/IiMgyv0dFKPbKs6ft1Cq8BmoHhnsP1C4i8UAtXM/fqQWK532qWqOQ404GpqjqRBF5A1ioqmMLbuetVN+Mv/kGunSxYWpMWLIrWX7wj3/AQw+hDz1I6oO30uaV44+4veotQiJoXbe1O9l6CuSOx3XkxPonUjW6apFvcemkS2lSo8lRJ8/ykpmdyberv+X9he/zxcovOJxzmC6NunB1l6tJ6JJQ6jaN63atY8aaGcxYO4MZa2awaa/rRNmidgv6t+pPv1b9mLZ6GhOWTMhrn3hj9xu5qcdNJG1Kynss2bqEbHWDPzSq3uiogrpgfAXbYy5LX8Y7v7/DB4s+IP1AOnG14riu23Vc1+06WtUN3qEvLV/9YOVK6NKF7A7tWTXpdbpOOrPIfK1ZpWZ+cewpkDs27EjTmk2P+YWtovNVVVmweQEfLPqACYsnsGX/FurG1mVIxyFc1eUqTmt+Wqm+YGZkZfBL6i95xfLc1Ll5I1d0Pq4zfVv25cz4M/li1Rd8sOiDvDtZ/Vr1o0PDDsxNncuCzQvyLhK0rtuaU+NOpXdcb06NO5UujboUeWerYM7uytjF+wveZ2zSWFZuX0n9qvW58aQbubnHzUGbs8fKV1+K51OBJ1T1PM/rhwFU9dlCtn0P+LK44lnc/4J0oLGnXdcR71GUkEluYyqInYxL51DWIZZvW86CzQtYuO43Fi6bwcLI9CNu4eZeiYmKiKJXs148eeaTnN789GMWycFo+4HtTFo6ifELx/Prxl+JkAjOa3Me13S9hotPvDjv8xTWYWjLvi3MXDuTH1J+YMbaGaTsTAHguOrH0a9VP/rF96Nfq360rts67+TuS+Fx8PBBFm5ZeERBvXzb8ry24c1qNjuimM6Nv0+LPhzKPsTc1LlER0Rz8YkXc0P3Gzi3zbkhccXK8rV09mXuY/GWxSzYvIAFm/9gwaLvWMzWvM6zuXc4FCU6Ipq/tPgL/zznn5zU5KSQm7k2KyeL6SnT+WDRB3y6/FMOZh2kdd3WXNn5Sq7qehXH1zseKDxfDxw+wNwNc/lxnSuWf039lUPZhxCEro27cmbLM+kb35czWpxB/Wr1897zWDl78PBB5qfN55fUX5ibOpe5G+bmtTmOjYqlZ9OeRxTUTWo2AeDWr27lzflvcmm7S6kTW4fExYkczDrIqXGncuvJtzK4w+CgvzNU1uJ5MHC+qt7oeX0VcIqq3l7Itu9xdPGcBSwAsoDnVPUzTw/hX1T1eM82zYFvVLVTIcf0HsS9x7p164r/xN4WLHCPa68t2X7GhAA7Gecrqrd4+v50Fm5ZyMLNC1m4ZSELNi9g+bbleVdTqkZVpXOjznRt1JVujbvRtVFX3v3jXd5b+F7e1dObe9wcFsNLrdi2gvELx/PBog
[base64-encoded PNG of the training-curve figure omitted]\\n\",\n      \"text/plain\": [\n       \"<Figure size 864x288 with 3 Axes>\"\n      ]\n     },\n     \"metadata\": {\n      \"needs_background\": \"light\"\n     },\n     \"output_type\": \"display_data\"\n    },\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"1\"\n      ]\n     },\n     \"execution_count\": 10,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"trainer.fit(model, train_dataloader, val_dataloader)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.7.4\"\n  }\n },\n 
\"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "pytorch-lightning入门到精通（0）.md",
"content": "# Introduction\n\nA little chatter about my deep-learning journey:\n\n- In the summer of 2019, I applied for a project and casually bought *Deep Learning with Python* by the creator of Keras, and so my deep-learning journey began. My first impression of Keras was that it is beautiful: model building and training are simple, and it is very quick to pick up. I started using Keras in competitions, and as my code needs grew (custom models, custom metrics and loss functions, custom progress bars), all that customization pushed me ever deeper into how Keras works internally; I still believe that everything can be done with a Callback (the Keras Callback abstract class). In the winter of 2019, I entered Kaggle's Bengali grapheme recognition competition, competed for two months, and did not even win a bronze medal, but I did get to feel how popular and flexible Pytorch had become. I started learning Pytorch, but having entered deep learning through Keras I just felt it was too much hassle and dropped it. In truth, my Pytorch flower-classification loss simply would not converge, I could not find the bug, and I gave up in frustration.\n- In the first half of 2020, Tensorflow2 and TPUs caught fire again, and Tensorflow2 integrated Keras, so I hurried to learn it. Honestly, it was not bad: compared with Keras it fixed the slow startup and some strange bugs, and it lets you write your own logic. Best of all, tf2's Dataset is genuinely fast, so training flies, and in June I took first place in a CV competition. But more problems appeared: tf2 could not reproduce models deterministically (tf2.3+ with a single thread apparently can); I checked the official repository, found no solution, and my patience collapsed.\n- In the second half of 2020, I entered the Hualu Cup. Midway through the preliminaries I found a high-scoring Pytorch baseline and tried to reproduce the approach in tf2. The baseline used the ImageNet file structure, which was painful to read and had terrible readability, which made me dislike Pytorch even more. My reproduction also failed, because of differential training and because tf2's Dataset does not support numpy (and do not tell me tf.py_function supports numpy; have you no idea of tf.py_function's efficiency cost?). I lost faith in Tensorflow2.\n- During the 2020 National Day holiday, I saw a WeChat post titled \"PyTorch Lightning 1.0 released, you can finally abandon Keras\". Well, just what I was hoping for. After learning it, my first impression was that it looks like the child of Pytorch and Keras, and it can even be dropped into plain Pytorch code as a plug-in. My heart told me: this is it.\n\nThe essence of PyTorch Lightning lies mainly in two modules, LightningModule and Trainer; the other APIs include Accelerators, Callback, LightningDataModule, Logging, Metrics, and Plugins. Accelerators and Plugins are almost never needed; Callback should look familiar, since it is much like the Keras Callback, but with richer functionality and more flexible logic; LightningDataModule is the data module, Logging is a logging module, and Metrics is a metrics module. **Here comes the important part: LightningDataModule, Logging, and Metrics are garbage. Don't learn them! Don't learn them! Don't learn them! We will study only LightningModule, Trainer, and Callback.** The [official tutorial](https://pytorch-lightning.readthedocs.io/en/latest/) is attached.\n\n**This tutorial is aimed at readers who already know Pytorch. Before reading it, you may want to look at [Pytorch](https://github.com/zergtant/pytorch-handbook) and the official [autoencoder mini-example](https://github.com/PyTorchLightning/pytorch-lightning).**\n\n# Environment Setup\n\nTwo steps:\n\n- [Install Pytorch](https://pytorch.org/get-started/locally/) >= 1.3\n- Install PyTorch Lightning (hardware-agnostic: if Pytorch is the CPU build, it runs on CPU; otherwise it uses the GPU)\n  - pip install pytorch-lightning\n  - conda install pytorch-lightning\n\n# Overall Structure\n\nIn short, three steps:\n\n- Define the dataset with Pytorch's Dataset and DataLoader.\n- Define the model and implement the training logic with LightningModule.\n- Configure the parameters with Trainer and train automatically.\n\nFrom Pytorch to Pytorch Lightning, see the figure below (a bit blurry):\n\n![](./images/pytorch2pl.jpg)\n\nIf it is still unclear, the animation below may help:\n\n![](./images/pl_quick_start_full_compressed.gif)\n\n# Compatibility\n\n- PyTorch Lightning fully inherits from Pytorch: everything in Pytorch can be used in PyTorch Lightning, and everything in PyTorch Lightning can be used in Pytorch.\n- As a high-level wrapper around Pytorch, PyTorch Lightning contains a complete and modifiable training loop.\n- PyTorch Lightning's hardware detection is based on Pytorch and can also be modified through the Trainer.\n- Data is moved between devices automatically in PyTorch Lightning; no .cpu() or .cuda() calls are needed."
  },
  {
    "path": "pytorch-lightning入门到精通（1）.md",
"content": "# MNIST Handwritten Digit Classification\n\n- First, import a pile of libraries\n\n  ```python\n  import torch\n  import torch.nn.functional as F\n  from torch import nn\n  from torch.nn import *\n  from torch.optim import *\n  from torch.optim.lr_scheduler import *\n  from torch.utils.data import Dataset, DataLoader\n  from torchvision import datasets, transforms\n  \n  import pytorch_lightning as pl\n  from pytorch_lightning.callbacks import Callback\n  \n  import os, gc, time\n  import math, random\n  import numpy as np\n  import pandas as pd\n  import matplotlib.pyplot as plt\n  import seaborn as sns\n  from tqdm import tqdm\n  from sklearn.metrics import *\n  ```\n\n- Define the hyperparameters\n\n  ```python\n  epochs = 10\n  batch_size = 128\n  ```\n\n- Create the datasets and data loaders\n\n  ```python\n  train_transform = transforms.Compose([\n     transforms.ToTensor(),\n     transforms.Normalize((0.1307,), (0.3081,))\n  ])\n  val_transform = transforms.Compose([\n     transforms.ToTensor(),\n     transforms.Normalize((0.1307,), (0.3081,))\n  ])\n  ```\n\n  ```python\n  train_dataset = datasets.MNIST(\"data\", train=True, download=True, transform=train_transform)\n  train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True)\n  \n  val_dataset = datasets.MNIST(\"data\", train=False, transform=val_transform)\n  val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=False, num_workers=4, pin_memory=True)\n  ```\n\n- Define the loss function and metrics\n\n  ```python\n  class CrossEntropyLoss(Module): # same as the official Pytorch implementation\n      def __init__(self):\n          super().__init__()\n  \n      def forward(self, input, target, reduction=\"mean\"):\n          N, C = input.size()\n  \n          if target.dim() > 1:\n              one_hot = target\n          else:\n              one_hot = torch.zeros((N, C), dtype=input.dtype, device=input.device)\n              one_hot.scatter_(1, target.reshape(N, 1), 1)\n  \n          loss = -(one_hot * F.log_softmax(input, 1)).sum(1)\n          if reduction == \"mean\":\n              return loss.mean(0)\n          elif reduction == \"sum\":\n              return loss.sum(0)\n          else:\n              return loss\n  \n  \n  class ClassificationMetric(object): # accumulates results and computes the metrics\n      def __init__(self, accuracy=True, recall=True, precision=True, f1=True, average=\"macro\"):\n          self.accuracy = accuracy\n          self.recall = recall\n          self.precision = precision\n          self.f1 = f1\n          self.average = average\n  \n          self.preds = []\n          self.target = []\n  \n      def reset(self): # clear the stored results\n          self.preds.clear()\n          self.target.clear()\n          gc.collect()\n  \n      def update(self, preds, target): # accumulate a batch of results\n          preds = list(preds.cpu().detach().argmax(1).numpy())\n          target = list(target.cpu().detach().argmax(1).numpy()) if target.dim() > 1 else list(target.cpu().detach().numpy())\n          self.preds += preds\n          self.target += target\n  \n      def compute(self): # compute the metrics\n          metrics = []\n          if self.accuracy:\n              metrics.append(accuracy_score(self.target, self.preds))\n          if self.recall:\n              metrics.append(recall_score(self.target, self.preds, labels=list(set(self.preds)), average=self.average))\n          if self.precision:\n              metrics.append(precision_score(self.target, self.preds, labels=list(set(self.preds)), average=self.average))\n          if self.f1:\n              metrics.append(f1_score(self.target, self.preds, labels=list(set(self.preds)), average=self.average))\n          self.reset()\n          return metrics\n  ```\n\n- Define the model and the training logic\n\n  ```python\n  class CustomModel(pl.LightningModule):\n      def __init__(self):\n          super().__init__()\n          # define the network layers\n          self.conv1 = Conv2d(1, 10, 5)\n          self.conv2 = Conv2d(10, 20, 3)\n          self.fc1 = Linear(20*10*10, 500)\n          self.fc2 = Linear(500, 10)\n          # define the loss functions\n          self.train_criterion = CrossEntropyLoss()\n          self.val_criterion = CrossEntropyLoss()\n          # define the metrics\n          self.train_metric = ClassificationMetric(recall=False, precision=False)\n          self.val_metric = ClassificationMetric(recall=False, precision=False)\n          # define the log history\n          self.history = {\n              \"loss\": [], \"acc\": [], \"f1\": [],\n              \"val_loss\": [], \"val_acc\": [], \"val_f1\": [],\n          }\n          \n      def forward(self, x):\n          in_size = x.size(0)\n          out = self.conv1(x)\n          out = F.relu(out)\n          out = F.max_pool2d(out, 2, 2)\n          out = self.conv2(out)\n          out = F.relu(out)\n          out = out.view(in_size, -1)\n          out = self.fc1(out)\n          out = F.relu(out)\n          out = self.fc2(out)\n          out = F.log_softmax(out, dim=1)\n          return out\n      \n      def training_step(self, batch, idx):\n          x, y = batch\n          _y = self(x)\n          # compute the loss\n          loss = self.train_criterion(_y, y)\n          # accumulate the results\n          self.train_metric.update(_y, y)\n          return loss\n  \n      def training_epoch_end(self, outs):\n          # average the loss\n          loss = 0.\n          for out in outs:\n              loss += out[\"loss\"].cpu().detach().item()\n          loss /= len(outs)\n          # compute the metrics\n          acc, f1 = self.train_metric.compute()\n          # record the log\n          self.history[\"loss\"].append(loss)\n          self.history[\"acc\"].append(acc)\n          self.history[\"f1\"].append(f1)\n  \n      def validation_step(self, batch, idx):\n          x, y = batch\n          _y = self(x)\n          val_loss = self.val_criterion(_y, y)\n          self.val_metric.update(_y, y)\n          return val_loss\n  \n      def validation_epoch_end(self, outs):\n          val_loss = sum(outs).item() / len(outs)\n          val_acc, val_f1 = self.val_metric.compute()\n  \n          self.history[\"val_loss\"].append(val_loss)\n          self.history[\"val_acc\"].append(val_acc)\n          self.history[\"val_f1\"].append(val_f1)\n  \n      def configure_optimizers(self):\n          # set the optimizer\n          return Adam(self.parameters())\n  ```\n\n  ```python\n  model = CustomModel()\n  ```\n\n- Define the progress-bar and learning-curve callbacks (I dislike the official progress bar, so I wrote my own)\n\n  ```python\n  class FlexibleTqdm(Callback):\n      def __init__(self, steps, column_width=10):\n          super(FlexibleTqdm, self).__init__()\n          self.steps = steps\n          self.column_width = column_width\n          self.info = \"\\rEpoch_%d %s%% [%s]\"\n  \n      def on_train_start(self, trainer, module):\n          history = module.history\n          self.row = \"-\" * (self.column_width + 1) * (len(history) + 2) + \"-\"\n          title = \"|\"\n          title += \"epoch\".center(self.column_width) + \"|\"\n          title += \"time\".center(self.column_width) + \"|\"\n          for i in history.keys():\n              title += i.center(self.column_width) + \"|\"\n          print(self.row)\n          print(title)\n          print(self.row)\n  \n      def on_train_batch_end(self, trainer, module, outputs, batch, batch_idx, dataloader_idx):\n          current_index = int((batch_idx + 1) * 100 / self.steps)\n          tqdm = [\".\"] * 100\n          for i in range(current_index - 1):\n              tqdm[i] = \"=\"\n          if current_index:\n              tqdm[current_index - 1] = \">\"\n          print(self.info % (module.current_epoch, str(current_index).rjust(3), \"\".join(tqdm)), end=\"\")\n  \n      def on_epoch_start(self, trainer, module):\n          print(self.info % (module.current_epoch, \"  0\", \".\" * 100), end=\"\")\n          self.begin = time.perf_counter()\n  \n      def on_epoch_end(self, trainer, module):\n          self.end = time.perf_counter()\n          history = module.history\n          detail = \"\\r|\"\n          detail += str(module.current_epoch).center(self.column_width) + \"|\"\n          detail += (\"%d\" % (self.end - self.begin)).center(self.column_width) + \"|\"\n          for j in history.values():\n              detail += (\"%.06f\" % j[-1]).center(self.column_width) + \"|\"\n          print(\"\\r\" + \" \" * 120, end=\"\")\n          print(detail)\n          print(self.row)\n          \n          \n  class LearningCurve(Callback):\n      def __init__(self, figsize=(12, 4), names=(\"loss\", \"acc\", \"f1\")):\n          super(LearningCurve, self).__init__()\n          self.figsize = figsize\n          self.names = names\n  \n      def on_fit_end(self, trainer, module):\n          history = module.history\n          plt.figure(figsize=self.figsize)\n          for i, j in enumerate(self.names):\n              plt.subplot(1, len(self.names), i + 1)\n              plt.title(j + \"/val_\" + j)\n              plt.plot(history[j], \"--o\", color='r', label=j)\n              plt.plot(history[\"val_\" + j], \"-*\", color='g', label=\"val_\" + j)\n              plt.legend()\n          plt.show()\n  ```\n\n- Configure the training parameters (I dislike loggers, so they are all switched off)\n\n  ```python\n  trainer_params = {\n      \"gpus\": 1,\n      \"max_epochs\": epochs,  # 1000\n      \"checkpoint_callback\": False,  # True\n      \"logger\": False,  # TensorBoardLogger\n      \"progress_bar_refresh_rate\": 0,  # 1\n      \"num_sanity_val_steps\": 0,  # 2\n      \"callbacks\": [\n          FlexibleTqdm(len(train_dataset) // batch_size, column_width=12), # note: set progress_bar_refresh_rate=0 to disable the built-in progress bar\n          LearningCurve(figsize=(12, 4), names=(\"loss\", \"acc\", \"f1\")),\n      ],  # None\n  }\n  trainer = pl.Trainer(**trainer_params)\n  ```\n\n- Start training\n\n  ```python\n  trainer.fit(model, train_dataloader, val_dataloader)\n  ```\n\n# Training Results\n\n![](./images/train_log.png)\n\n![](./images/history.png)"
  },
  {
    "path": "pytorch-lightning入门到精通（2）.md",
    "content": "[TOC]\n\n# LightningModule\n\n## 简介\n\nPytorch Lightning的两大API之一，是torch.nn.Module的高级封装。\n\n## 方法\n\n### 定义模型\n\n#### \\_\\_init\\_\\_()\n\n同torch.nn.Module中的\\_\\_init\\_\\_，用于构建模型。\n\n#### forward(\\*args, \\*\\*kwargs)\n\n同torch.nn.Module中的forward，通过\\_\\_init\\_\\_中的各个模块实现前向传播。\n\n### 训练模型\n\n#### training_step(\\*args, \\*\\*kwargs)\n\n训练一批数据并反向传播。参数如下：\n\n- batch (Tensor | (Tensor, …) | [Tensor, …]) – 数据输入，一般为x, y = batch。\n- batch_idx (int) – 批次索引。\n- optimizer_idx (int) – 当使用多个优化器时，会使用本参数。\n- hiddens (Tensor) – 当truncated_bptt_steps > 0时使用。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx): # 数据类型自动转换，模型自动调用.train()\n    x, y = batch\n    _y = self(x)\n    loss = criterion(_y, y) # 计算loss\n    return loss # 返回loss，更新网络\n\ndef training_step(self, batch, batch_idx, hiddens):\n    # hiddens是上一次截断反向传播的隐藏状态\n    out, hiddens = self.lstm(data, hiddens)\n    return {\"loss\": loss, \"hiddens\": hiddens}\n```\n\n#### training_step_end(\\*args, \\*\\*kwargs)\n\n一批数据训练结束时的操作。一般用不着，分布式训练的时候会用上。参数如下：\n\n- batch_parts_outputs – 当前批次的training_step()的返回值\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    x, y = batch\n    _y = self(x)\n    return {\"output\": _y， \"target\": y}\n\ndef training_step_end(self, training_step_outputs): # 多GPU分布式训练,计算loss\n    gpu_0_output = training_step_outputs[0][\"output\"]\n    gpu_1_output = training_step_outputs[1][\"output\"]\n    \n    gpu_0_target = training_step_outputs[0][\"target\"]\n    gpu_1_target = training_step_outputs[1][\"target\"]\n\n    # 对所有GPU的数据进行处理\n    loss = criterion([gpu_0_output, gpu_1_output]， [gpu_0_target, gpu_1_target])\n    return loss\n```\n\n#### training_epoch_end(outputs)\n\n一轮数据训练结束时的操作。主要针对于本轮所有training_step的输出。参数如下：\n\n- outputs (List[Any]) – training_step()的输出。\n\n举个例子：\n\n```python\ndef training_epoch_end(self, outs): # 计算本轮的loss和acc\n    loss = 0.\n    for out in outs: # outs按照训练顺序排序\n        loss += out[\"loss\"].cpu().detach().item()\n    loss /= 
len(outs)\n    acc = self.train_metric.compute()\n\n    self.history[\"loss\"].append(loss)\n    self.history[\"acc\"].append(acc)\n```\n\n### 验证模型\n\n#### validation_step(\\*args, \\*\\*kwargs)\n\n见training_step。\n\n#### validation_step_end(\\*args, \\*\\*kwargs)\n\n见training_step_end。\n\n#### validation_epoch_end(outputs)\n\n见training_epoch_end。\n\n### 测试模型\n\n#### test_step(\\*args, \\*\\*kwargs)\n\n见training_step。\n\n#### test_step_end(\\*args, \\*\\*kwargs)\n\n见training_step_end。\n\n#### test_epoch_end(outputs)\n\n见training_epoch_end。\n\n### 其他有用的功能\n\n#### configure_optimizers()\n\n在优化过程中选择优化器和学习率调度器，通常只需要一个，但对于GAN之类的可能需要多个。\n\n举一堆例子：\n\n- 单个优化器\n\n  ```python\n  def configure_optimizers(self):\n      return Adam(self.parameters(), lr=1e-3)\n  ```\n\n- 多个优化器（比如GAN）\n\n  ```python\n  def configure_optimizers(self):\n      generator_opt = Adam(self.model_gen.parameters(), lr=0.01)\n      disriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)\n      return generator_opt, disriminator_opt\n  ```\n\n  可以修改frequency键来控制优化频率：\n\n  ```python\n  def configure_optimizers(self):\n      gen_opt = Adam(self.model_gen.parameters(), lr=0.01)\n      dis_opt = Adam(self.model_disc.parameters(), lr=0.02)\n      n_critic = 5\n      return (\n          {\"optimizer\": dis_opt, \"frequency\": n_critic},\n          {\"optimizer\": gen_opt, \"frequency\": 1}\n      )\n  ```\n\n- 多个优化器和多个调度器或学习率字典（比如GAN）\n\n  ```python\n  def configure_optimizers(self):\n      generator_opt = Adam(self.model_gen.parameters(), lr=0.01)\n      disriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)\n      discriminator_sched = CosineAnnealing(discriminator_opt, T_max=10)\n      return [generator_opt, disriminator_opt], [discriminator_sched]\n  ```\n\n  ```python\n  def configure_optimizers(self):\n      generator_opt = Adam(self.model_gen.parameters(), lr=0.01)\n      disriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)\n      discriminator_sched = 
CosineAnnealing(discriminator_opt, T_max=10)\n      return {\"optimizer\": [generator_opt, disriminator_opt], \"lr_scheduler\": [discriminator_sched]}\n  ```\n\n  对于调度器，可以修改其属性：\n\n  ```python\n  {\n      \"scheduler\": lr_scheduler, # 调度器\n      \"interval\": \"epoch\", # 调度的单位，epoch或step\n      \"frequency\": 1, # 调度的频率，多少轮一次\n      \"reduce_on_plateau\": False, # ReduceLROnPlateau\n      \"monitor\": \"val_loss\", # ReduceLROnPlateau的监控指标\n      \"strict\": True # 如果没有monitor，是否中断训练\n  }\n  ```\n\n  ```python\n  def configure_optimizers(self):\n      gen_opt = Adam(self.model_gen.parameters(), lr=0.01)\n      dis_opt = Adam(self.model_disc.parameters(), lr=0.02)\n      gen_sched = {\"scheduler\": ExponentialLR(gen_opt, 0.99), \"interval\": \"step\"}\n      dis_sched = CosineAnnealing(discriminator_opt, T_max=10)\n      return [gen_opt, dis_opt], [gen_sched, dis_sched]\n  ```\n\n一些注意事项：\n\n- Lightning在需要的时候会调用backward和step。\n\n- 如果使用半精度（precision=16），Lightning会自动处理。\n\n- 如果使用多个优化器，training_step会附加一个参数optimizer_idx。\n\n- 如果使用LBFGS，Lightning将自动为您处理关闭功能。\n\n- 如果使用多个优化器，则在每个训练步骤中仅针对当前优化器的参数计算梯度。\n\n- 如果您需要控制这些优化程序执行或改写默认step的频率，请改写optimizer_step。\n\n- 如果在每n步都调用调度器，或者只想监视自定义指标，则可以在lr_dict中指定它们。\n\n  ```python\n  {\n      \"scheduler\": lr_scheduler,\n      \"interval\": \"step\",  # or \"epoch\"\n      \"monitor\": \"val_f1\",\n      \"frequency\": n,\n  }\n  ```\n\n#### freeze()/unfreeze()\n\n冻结所有参数/解冻所有参数。\n\n#### save_hyperparameters(\\*args, frame=None)\n\n保存\\_\\_init\\_\\_中传入的超参数。\n\n举个例子：\n\n```python\ndef __init__(self, arg1, arg2, arg3): # 1, \"abc\", 3.14\n\tsuper().__init__()\n    # self.save_hyperparameters() # 保存所有超参数\n    # self.save_hyperparameters(\"arg1\", \"arg2\", \"arg3\") # 同上，保存所有超参数\n    self.save_hyperparameters(\"arg1\", \"arg3\") # 保存部分超参数\n\ndef __init__(self, params): # params=Namespace(p1=1, p2=\"abc\", p3=3.14)\n\tsuper().__init__()\n    self.save_hyperparameters(params) # 保存所有超参数\n```\n\n#### to_onnx(file_path, input_sample=None, 
\\*\\*kwargs)\n\n保存模型为ONNX格式。参数如下：\n\n- file_path (str) – 保存路径。\n- input_sample (Optional[Tensor]) – 用于跟踪的输入张量的样本。\n- **kwargs – 将传递给torch.onnx.export函数。\n\n举个例子：\n\n```python\nwith tempfile.NamedTemporaryFile(suffix=\".onnx\", delete=False) as tmpfile:\n    model = SimpleModel()\n    input_sample = torch.randn((1, 64))\n    model.to_onnx(tmpfile.name, input_sample, export_params=True)\n    os.path.isfile(tmpfile.name)\n```\n\n### 其他没啥用的功能\n\n#### log(name, value, prog_bar=False, logger=True, on_step=None, on_epoch=None, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op=\"mean\", sync_dist_group=None)\n\n#### log_dict(dictionary, prog_bar=False, logger=True, on_step=None, on_epoch=None, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op=\"mean\", sync_dist_group=None)\n\n#### print(\\*args, \\*\\*kwargs)\n\n#### to_torchscript(file_path=None, method=\\'script\\', example_inputs=None, \\*\\*kwargs)\n\n有人问，log不重要吗？当然重要，但不是用Pytorch Lightning的log，因为实在是太反人类了，各种bug，后续我们会用LightningModule和Callback实现我们自己的logger。\n\n## 属性\n\n#### current_epoch\n\n当前轮数。\n\n#### device\n\n当前设备。\n\n#### global_rank\n\n全局排名是指该GPU在所有GPU中的索引。如果使用10台计算机，每台计算机具有4个GPU，则第10台计算机上的第4个GPU的global_rank = 39。\n\nLightning仅从开始global_rank = 0保存日志，权重等。通常不需要使用此属性。\n\n#### global_step\n\n当前步数，每轮不重置。\n\n#### hparams\n\nsave_hyperparameters所保存的超参数。\n\n#### logger（不推荐使用）\n\n当前日志。\n\n#### local_rank\n\n本地排名是指该计算机上的索引。如果使用10台计算机，则每台计算机上索引为0的GPU的local_rank = 0。\n\nLightning仅从开始global_rank = 0保存日志，权重等。通常不需要使用此属性。\n\n#### precision\n\n所使用的进度类型。\n\n#### trainer\n\n指向trainer。\n\n#### use_amp/use_ddp/use_ddp2/use_dp/use_tpu\n\n是否使用了自动混合精度/ddp/ddp2/dp/tpu。\n\n## 更多自定义方法\n\n下面的伪代码描述了训练过程：\n\n```python\ndef fit(...):\n    on_fit_start()\n\n    if global_rank == 0:\n        # prepare data is called on GLOBAL_ZERO only\n        prepare_data()\n\n    for gpu/tpu in gpu/tpus:\n        
train_on_device(model.copy())\n\n    on_fit_end()\n\ndef train_on_device(model):\n    # setup is called PER DEVICE\n    setup()\n    configure_optimizers()\n    on_pretrain_routine_start()\n\n    for epoch in epochs:\n        train_loop()\n\n    teardown()\n\ndef train_loop():\n    on_train_epoch_start()\n    train_outs = []\n    for train_batch in train_dataloader():\n        on_train_batch_start()\n\n        # ----- train_step methods -------\n        out = training_step(batch)\n        train_outs.append(out)\n\n        loss = out.loss\n\n        backward()\n        on_after_backward()\n        optimizer_step()\n        on_before_zero_grad()\n        optimizer_zero_grad()\n\n        on_train_batch_end(out)\n\n        if should_check_val:\n            val_loop()\n\n    # end training epoch\n    logs = training_epoch_end(outs)\n\ndef val_loop():\n    model.eval()\n    torch.set_grad_enabled(False)\n\n    on_validation_epoch_start()\n    val_outs = []\n    for val_batch in val_dataloader():\n        on_validation_batch_start()\n\n        # -------- val step methods -------\n        out = validation_step(val_batch)\n        val_outs.append(out)\n\n        on_validation_batch_end(out)\n\n    validation_epoch_end(val_outs)\n    on_validation_epoch_end()\n\n    # set up for train\n    model.train()\n    torch.set_grad_enabled(True)\n```\n\n#### backward(loss, optimizer, optimizer_idx, \\*args, \\*\\*kwargs)\n\n反向传播。参数如下：\n\n- loss (Tensor) – 已经被积累的梯度所放缩的损失。\n- optimizer (Optimizer) – 当前被使用的优化器。\n- optimizer_idx (int) – 当前被使用的优化器的索引。\n\n举个例子：\n\n```python\ndef backward(self, loss, optimizer, optimizer_idx):\n    loss.backward()\n```\n\n#### get_progress_bar_dict()\n\n修改进度条的内容。\n\n举个例子：\n\n```python\n# Epoch 1:   4%|▎         | 40/1095 [00:03<01:37, 10.84it/s, loss=4.501, v_num=10]\n\ndef get_progress_bar_dict(self):\n    # 不显示v_num\n    items = super().get_progress_bar_dict()\n    items.pop(\"v_num\", None)\n    return items\n```\n\n#### manual_backward(loss, optimizer, 
\\*args, \\*\\*kwargs)\n\n手动反向传播。无法覆盖trainer的设置。\n\n举个例子：\n\n```python\ndef training_step(...):\n    (opt_a, opt_b) = self.optimizers()\n    loss = ...\n    self.manual_backward(loss, opt_a)\n    self.manual_optimizer_step(opt_a)\n```\n\n#### manual_optimizer_step(optimizer, force_optimizer_step=False)\n\n手动优化。无法覆盖trainer的设置。参数如下：\n\n- optimizer (Optimizer) – 用于step()的优化器。\n- force_optimizer_step (bool) – 是否强制执行优化程序步骤。当有2个优化器且其中一个应使用累积的渐变而不是另一个使用渐变时，这可能会很有用。可以采用自己的逻辑来强制执行优化程序步骤。\n\n举个例子：\n\n```python\ndef training_step(...):\n    (opt_a, opt_b) = self.optimizers()\n    loss = ...\n    self.manual_backward(loss, opt_a)\n    self.manual_optimizer_step(opt_a, force_optimizer_step=True)\n```\n\n#### on_after_backward()\n\n在loss.backward()之后且优化程序执行任何操作之前在训练循环中调用。这是检查或记录梯度信息的理想位置。\n\n举个例子：\n\n```python\ndef on_after_backward(self):\n    if self.trainer.global_step % 25 == 0:\n        params = self.state_dict()\n        for k, v in params.items():\n            grads = v\n            name = k\n            self.logger.experiment.add_histogram(tag=name, values=grads, global_step=self.trainer.global_step)\n```\n\n#### on_before_zero_grad(optimizer)\n\n在optimizer.step()之后和optimizer.zero_grad()之前调用。检查权重信息并更新权重的理想位置。\n\n举个例子：\n\n```python\nfor optimizer in optimizers:\n    optimizer.step()\n    model.on_before_zero_grad(optimizer) # < ---- 调用\n    optimizer.zero_grad()\n```\n\n#### on_fit_start()/on_fit_end()\n\n在训练开始/结束时调用。如果在DDP上，则在每个进程上调用。\n\n#### on_load_checkpoint()/on_save_checkpoint()\n\n使模型有机会在state_dict存在之前/后加载某些内容。\n\n#### on_pretrain_routine_start()/on_pretrain_routine_end()\n\n- fit\n- pretrain_routine start\n- pretrain_routine end\n- training_start\n\n#### on_train_batch_start(batch, batch_idx, dataloader_idx)/on_train_batch_end(outputs, batch, batch_idx, dataloader_idx)\n\n对于每个训练批次，在开始/结束时调用。\n\n#### on_train_epoch_start()/on_train_epoch_end()\n\n对于每轮训练，在开始/结束时调用。\n\n#### on_validation_batch_start(batch, batch_idx, dataloader_idx)/on_validation_batch_end(outputs, 
batch, batch_idx, dataloader_idx)\n\n同上。\n\n#### on_validation_epoch_start()/on_validation_epoch_end()\n\n同上。\n\n#### on_test_batch_start(batch, batch_idx, dataloader_idx)/on_test_batch_end(outputs, batch, batch_idx, dataloader_idx)\n\n同上。\n\n#### on_test_epoch_start()/on_test_epoch_end()\n\n同上。\n\n#### optimizer_step(epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, on_tpu, using_native_amp, using_lbfgs)\n\n重写此方法可以调整Trainer调用每个优化器的默认方式 。默认情况下，每个优化程序都会一次调用Lightning step()，zero_grad()。参数如下：\n\n- epoch (int) – 当前轮数。\n- batch_idx (int) – 当前批次的索引。\n- optimizer (Optimizer) –优化器。\n- optimizer_idx (int) – 如果有多个优化器，则使用。\n- optimizer_closure (Optional[Callable]) – 所有优化器的关闭。\n- on_tpu (bool) – 是否为TPU。\n- using_native_amp (bool) – 是否为自动混合精度。\n- using_lbfgs (bool) – 匹配的优化器是否为lbfgs。\n\n举个例子：\n\n```python\n# 学习率预热\ndef optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, on_tpu, using_native_amp, using_lbfgs):\n    # 预热\n    if self.trainer.global_step < 500:\n        lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)\n        for pg in optimizer.param_groups:\n            pg['lr'] = lr_scale * self.learning_rate\n\n    # 更新参数\n    optimizer.step(closure=optimizer_closure)\n    optimizer.zero_grad()\n```\n\n#### optimizer_zero_grad(epoch, batch_idx, optimizer, optimizer_idx)\n\n#### prepare_data()\n\n用于下载和准备数据。\n\n#### setup(stage)\n\n在训练和测试开始时调用。当您需要动态构建模型或对模型进行调整时，这是一个很好的选择。使用DDP时，每个进程都会调用此钩子。参数如下：\n\n- stage (str) – ‘fit’或‘test’。\n\n举个例子：\n\n```python\nclass LitModel(...):\n    def __init__(self):\n        self.l1 = None\n\n    def prepare_data(self):\n        download_data()\n        tokenize()\n\n        self.something = else\n\n    def setup(stage):\n        data = Load_data(...)\n        self.l1 = nn.Linear(28, data.num_classes)\n```\n\n#### tbptt_split_batch(batch, split_size)\n\n使用经过时间的截断的反向传播时，必须沿时间维度拆分每个批次。默认情况下，Lightning会处理此问题，但对于自定义行为，请覆盖此功能。参数如下：\n\n- batch (Tensor) – 当前批次。\n- split_size (int) – 
分割的大小。\n\n举个例子：\n\n```python\ndef tbptt_split_batch(self, batch, split_size):\n  splits = []\n  for t in range(0, time_dims[0], split_size):\n      batch_split = []\n      for i, x in enumerate(batch):\n          if isinstance(x, torch.Tensor):\n              split_x = x[:, t:t + split_size]\n          elif isinstance(x, collections.Sequence):\n              split_x = [None] * len(x)\n              for batch_idx in range(len(x)):\n                  split_x[batch_idx] = x[batch_idx][t:t + split_size]\n\n          batch_split.append(split_x)\n\n      splits.append(batch_split)\n\n  return splits\n```\n\n#### teardown(stage)\n\n在训练和测试结束时调用。参数如下：\n\n- stage (str) – ‘fit’或‘test’。\n\n#### train_dataloader()/val_dataloader()/test_dataloader()\n\n- fit()\n- …\n- prepare_data()\n- setup()\n- train_dataloader()\n- val_dataloader()\n- test_dataloader()\n\n#### transfer_batch_to_device(batch, device)\n\n# 哪些需要掌握，哪些不需要\n\n- 需要掌握的方法：\n  - \\_\\_init\\_\\_/forword\n  - training_step/training_step_end/training_epoch_end\n  - validation_step/validation_step_end/validation_epoch_end\n  - configure_optimizer\n  - freeze/unfreeze\n  - save_hyperparameters\n- 需要掌握的属性：\n  - current_epoch\n  - device\n  - hparams\n  - precision\n  - trainer\n\n其他的需要时再看即可。"
  },
  {
    "path": "pytorch-lightning入门到精通（3）.md",
    "content": "[TOC]\n\n# Trainer\n\n## 简介\n\nPytorch Lightning的两大API之一，类似于“胶水”，将LightningModule各个部分连接形成完整的逻辑。\n\n## 方法\n\n#### \\_\\_init\\_\\_(logger=True, checkpoint_callback=True, callbacks=None, default_root_dir=None, gradient_clip_val=0, process_position=0, num_nodes=1, num_processes=1, gpus=None, auto_select_gpus=False, tpu_cores=None, log_gpu_memory=None, progress_bar_refresh_rate=1, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=1, max_epochs=1000, min_epochs=1, max_steps=None, min_steps=None, limit_train_batches=1.0, limit_val_batches=1.0, limit_test_batches=1.0, val_check_interval=1.0, flush_logs_every_n_steps=100, log_every_n_steps=50, accelerator=None, sync_batchnorm=False, precision=32, weights_summary='top', weights_save_path=None, num_sanity_val_steps=2, truncated_bptt_steps=None, resume_from_checkpoint=None, profiler=None, benchmark=False, deterministic=False, reload_dataloaders_every_epoch=False, auto_lr_find=False, replace_sampler_ddp=True, terminate_on_nan=False, auto_scale_batch_size=False, prepare_data_per_node=True, plugins=None, amp_backend='native', amp_level='O2', distributed_backend=None, automatic_optimization=True, move_metrics_to_cpu=False)\n\n初始化训练器，参数很多，下面将分别介绍：\n\n- 硬件参数：\n  - gpus[None]:\n    - 设置为0或None，表示使用cpu。\n    - 设置为大于0的整数n，表示使用n块gpu。\n    - 设置为大于0的整数字符串'n'，表示使用id为n的gpu。\n    - 设置为-1或'-1'，表示使用所有gpu。\n    - 设置为整数数组[a, b]或整数数组字符串'a, b'，表示使用id为a和b的gpu。\n  - auto_select_gpus[False]:\n    - 设置为True，自动选择所需gpu。\n    - 设置为False，按顺序选择所需gpu。\n  - num_nodes[1]:\n    - 设置为1，选择当前gpu节点。\n    - 设置为大于0的整数n，表示使用n个节点。\n  - tpu_cores[None]:\n    - 设置为None，表示不使用tpu。\n    - 设置为1，表示使用1个tpu内核。\n    - 设置为大于0的整数数组[n]，表示使用id为n的tpu内核。\n    - 设置为8，表示使用所有tpu内核。\n- 精度参数：\n  - precision[32]:\n    - 设置为2、4、8、16或32，分别表示不同的精度。\n  - amp_backend[\"native\"]:\n    - 设置为\"native\"，表示使用本地混合精度。\n    - 设置为\"apex\"，表示使用apex混合精度。\n  - amp_level[\"O2\"]:\n    - 设置为O0、O1、O2或O3，分别表示:\n      - 
O0：纯FP32训练，可以作为accuracy的baseline。\n      - O1：混合精度训练（推荐使用），根据黑白名单自动决定使用FP16（GEMM, 卷积）还是FP32（Softmax）进行计算。\n      - O2：“几乎FP16”混合精度训练，不存在黑白名单，除了Batch norm，几乎都是用FP16计算。\n      - O3：纯FP16训练，很不稳定，但是可以作为speed的baseline。\n- 训练超参：\n  - max_epochs[1000]:\n    - 最大训练轮数。\n  - min_epochs[1]:\n    - 最小训练轮数。\n  - max_steps[None]:\n    - 每轮最大训练步数。\n  - min_steps[None]:\n    - 每轮最小训练步数。\n- 日志参数和检查点参数：\n  - checkpoint_callback[True]:\n    - 设置为True，自动进行检查点保存。\n    - 设置为False，不进行检查点保存。\n  - logger[TensorBoardLogger]:\n    - 设置log工具。False表示不使用logger。\n  - default_root_dir[os.getcwd()]:\n    - 默认的根目录，用于日志和检查点的保存。\n  - flush_logs_every_n_steps[100]:\n    - 多少步更新一次日志到磁盘。\n  - log_every_n_steps[50]:\n    - 多少步更新一次日志到内存。\n  - log_gpu_memory[None]:\n    - 设置为None，不记录gpu显存信息。\n    - 设置为\"all\"，记录所有gpu显存信息。\n    - 设置为\"min_max\"，记录gpu显存信息最值。\n  - check_val_every_n_epoch[1]:\n    - 多少轮验证一次。\n  - val_check_interval[1.0]:\n    - 设置为小数，表示取一定比例的验证集。\n    - 设置为整数，表示取一定数量的验证集。\n  - resume_from_checkpoint[None]:\n    - 检查点恢复，输入路径。\n  - progress_bar_refresh_rate[1]:\n    - 进度条的刷新率。\n  - weights_summary[\"top\"]:\n    - 设置为None，不输出模型信息。\n    - 设置为\"top\"，输出模型简要信息。\n    - 设置为\"full\"，输出模型所有信息。\n  - weights_save_path[os.getcwd()]:\n    - 权重的保存路径。\n- 测试参数：\n  - num_sanity_val_steps[2]:\n    - 训练前检查多少批验证数据。\n  - fast_dev_run[False]:\n    - 一系列单元测试。\n  - reload_dataloaders_every_epoch[False]:\n    - 每一轮是否重新载入数据。\n- 分布式参数：\n  - accelerator[None]:\n    - dp（DataParallel）是在同一计算机的GPU之间拆分批处理。\n    - ddp（DistributedDataParallel）是每个节点上的每个GPU训练并同步梯度。TPU默认选项。\n\n    - ddp_cpu（CPU上的DistributedDataParallel）与ddp相同，但不使用GPU。对于多节点CPU训练或单节点调试很有用。\n\n    - ddp2是节点上的dp，节点间的ddp。\n  - accumulate_grad_batches[1]:\n    - 多少批进行一次梯度累积。\n  - sync_batchnorm[False]:\n    - 同步批处理，一般是在分布式多GPU时使用。\n- 自动参数：\n  - automatic_optimization[True]:\n    - 是否开启自动优化。\n  - auto_scale_batch_size[None]:\n    - 是否自动寻找最大批大小。\n  - auto_lr_find[False]:\n    - 是否自动寻找最佳学习率。\n- 确定性参数：\n  - benchmark[False]:\n    - 是否使用cudnn.benchmark。\n  - 
deterministic[False]:\n    - 是否开启确定性。\n- 限制性参数和采样参数：\n  - gradient_clip_val[0.0]:\n    - 梯度裁剪。\n  - limit_train_batches[1.0]:\n    - 限制每轮的训练批次数量。\n  - limit_val_batches[1.0]:\n    - 限制每轮的验证批次数量。\n  - limit_test_batches[1.0]:\n    - 限制每轮的测试批次数量。\n  - overfit_batches[0.0]:\n    - 限制批次的重复数量。\n  - prepare_data_per_node[True]:\n    - 是否对每个结点准备数据。\n  - replace_sampler_ddp[True]:\n    - 是否启用自动添加分布式采样器的功能。\n- 其他参数：\n  - callbacks[]:\n    - 好家伙，callback。\n  - process_position[0]:\n    - 对进度条进行有序处理。\n  - profiler[None]\n  - track_grad_norm[-1]\n  - truncated_bptt_steps[None]\n\n#### fit(model, train_dataloader=None, val_dataloaders=None, datamodule=None)\n\n开启训练。参数如下：\n\n- datamodule (Optional[LightningDataModule]) – 一个LightningDataModule实例。\n- model (LightningModule) – 训练的模型。\n- train_dataloader (Optional[DataLoader]) – 训练数据。\n- val_dataloaders (Union[DataLoader, List[DataLoader], None]) – 验证数据。\n\n#### test(model=None, test_dataloaders=None, ckpt_path='best', verbose=True, datamodule=None)\n\n开启测试。参数如下：\n\n- ckpt_path (Optional[str]) – best或者你最希望测试的检查点权重的路径，None使用最后的权重。\n- datamodule (Optional[LightningDataModule]) – 一个LightningDataModule实例。\n- model (Optional[LightningModule]) – 测试的模型。\n- test_dataloaders (Union[DataLoader, List[DataLoader], None]) –  测试数据。\n- verbose (bool) – 是否打印结果。\n\n#### tune(model, train_dataloader=None, val_dataloaders=None, datamodule=None)\n\n训练之前调整超参数。参数如下：\n\n- datamodule (Optional[LightningDataModule]) – 一个LightningDataModule实例。\n- model (LightningModule) – 调整的模型。\n- train_dataloader (Optional[DataLoader]) – 训练数据。\n- val_dataloaders (Union[DataLoader, List[DataLoader], None]) – 验证数据。\n\n## 属性\n\n#### callback_metrics\n\n回调指标。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    self.log('a_val', 2)\n\ncallback_metrics = trainer.callback_metricpythons\nassert callback_metrics['a_val'] == 2\n```\n\n#### current_epoch\n\n当前轮数。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    current_epoch = 
self.trainer.current_epoch\n    if current_epoch > 100:\n        # do something\n        pass\n```\n\n#### logger\n\n当前日志。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    logger = self.trainer.logger\n    tensorboard = logger.experiment\n```\n\n#### logged_metrics\n\n发送到日志的指标。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    self.log('a_val', 2, log=True)\n\nlogged_metrics = trainer.logged_metrics\nassert logged_metrics['a_val'] == 2\n```\n\n#### log_dir\n\n当前目录，用于保存图像等。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    img = ...\n    save_img(img, self.trainer.log_dir)\n```\n\n#### is_global_zero\n\n是否为全局第一个。\n\n#### progress_bar_metrics\n\n发送到进度条的指标。\n\n举个例子：\n\n```python\ndef training_step(self, batch, batch_idx):\n    self.log('a_val', 2, prog_bar=True)\n\nprogress_bar_metrics = trainer.progress_bar_metrics\nassert progress_bar_metrics['a_val'] == 2\n```\n\n# 哪些需要掌握，哪些不需要\n\n- 需要掌握的方法：\n  - \\_\\_init\\_\\_（参数比较多，可以花时间记录一下自己最常用的参数配置）\n  - fit/tune\n- 需要掌握的属性：\n  - current_epoch\n\n其他的需要时再看即可。"
  },
  {
    "path": "pytorch-lightning入门到精通（4）.md",
    "content": "[TOC]\n\n# Callback\n\n## 简介\n\nPytorch Lightning最nb的插件，万能，无敌，随处可插，即插即用。\n\n## 方法\n\n### 训练方法\n\n#### on_train_start(trainer, pl_module)\n\n当第一次训练开始时的操作。\n\n#### on_train_end(trainer, pl_module)\n\n当最后一次训练结束时的操作。\n\n#### on_train_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)\n\n当一批数据训练开始时的操作。\n\n#### on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)\n\n当一批数据训练结束时的操作。\n\n#### on_train_epoch_start(trainer, pl_module)\n\n当一轮数据训练开始时的操作。\n\n#### on_train_epoch_end(trainer, pl_module, outputs)\n\n当一轮数据训练结束时的操作。\n\n### 验证方法\n\n#### on_validation_start(trainer, pl_module)\n\n当第一次验证开始时的操作。\n\n#### on_validation_end(self, trainer, pl_module)\n\n当最后一次验证结束时的操作。\n\n#### on_validation_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)\n\n当一批数据验证开始时的操作。\n\n#### on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)\n\n当一批数据验证结束时的操作。\n\n#### on_validation_epoch_start(trainer, pl_module)\n\n当一轮数据验证开始时的操作。\n\n#### on_validation_epoch_end(trainer, pl_module)\n\n当一轮数据验证结束时的操作。\n\n### 测试方法\n\n#### on_test_start(trainer, pl_module)\n\n当第一次测试开始时的操作。\n\n#### on_test_end(self, trainer, pl_module)\n\n当最后一次测试结束时的操作。\n\n#### on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)\n\n当一批数据测试开始时的操作。\n\n#### on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)\n\n当一批数据测试结束时的操作。\n\n#### on_test_epoch_start(trainer, pl_module)\n\n当一轮数据测试开始时的操作。\n\n#### on_test_epoch_end(trainer, pl_module)\n\n当一轮数据测试结束时的操作。\n\n### 其他方法\n\n#### on_fit_start(trainer, pl_module)\n\n当调用.fit时的操作。\n\n#### on_fit_end(trainer, pl_module)\n\n.fit结束时的操作。\n\n#### setup(trainer, pl_module, stage)\n\n#### teardown(trainer, pl_module, stage)\n\n#### on_init_start(trainer)\n\n#### on_init_end(trainer)\n\n#### on_sanity_check_start(trainer, pl_module)\n\n#### on_sanity_check_end(trainer, pl_module)\n\n#### on_batch_start(trainer, pl_module)\n\n#### on_batch_end(trainer, 
pl_module)\n\n#### on_epoch_start(trainer, pl_module)\n\n#### on_epoch_end(trainer, pl_module)\n\n#### on_keyboard_interrupt(trainer, pl_module)\n\n#### on_save_checkpoint(trainer, pl_module)\n\n#### on_load_checkpoint(checkpointed_state)\n\n# 哪些需要掌握，哪些不需要\n\n类似于LightningModule的各种方法，需要进行操作时在对应位置进行修改。后面将会举几个实用的例子。"
  },
  {
    "path": "pytorch-lightning入门到精通（5）.md",
    "content": "[TOC]\n\n# CSVLogger\n\n抛弃Pytorch Lightning自带的logger，自定义logger。\n\n## 修改LightningModule\n\n```python\nclass CustomModel(pl.LightningModule):\n    def __init__(self, ...):\n        super().__init__()\n        self.model = ...\n        # 用于计算loss\n        self.train_criterion = CrossEntropyLoss()\n        self.val_criterion = CrossEntropyLoss()\n        # 用于计算metric\n        self.train_metric = ClassificationMetric()\n        self.val_metric = ClassificationMetric()\n        # 用于保存log\n        self.history = {\n            \"loss\": [], \"acc\": [],\n            \"val_loss\": [], \"val_acc\": [],\n        }\n\n    def forward(self, x):\n        return self.model(x)\n\n    def training_step(self, batch, idx):\n        x, y = batch\n        _y = self(x)\n        # 计算loss\n        loss = self.train_criterion(_y, y)\n        # 统计结果\n        self.train_metric.update(_y, y)\n        return loss\n\n    def training_epoch_end(self, outs):\n        # 计算平均loss\n        loss = 0.\n        for out in outs:\n            loss += out[\"loss\"].cpu().detach().item()\n        loss /= len(outs)\n        # 计算指标\n        acc = self.train_metric.compute()\n        # 保存log\n        self.history[\"loss\"].append(loss)\n        self.history[\"acc\"].append(acc)\n\n    def validation_step(self, batch, idx):\n        x, y = batch\n        _y = self(x)\n        # 计算loss\n        val_loss = self.val_criterion(_y, y)\n        # 统计结果\n        self.val_metric.update(_y, y)\n        return val_loss\n\n    def validation_epoch_end(self, outs):\n        # 计算平均loss\n        val_loss = sum(outs).item() / len(outs)\n        # 计算指标\n        val_acc1 = self.val_metric.compute()\n        # 保存log\n        self.history[\"val_loss\"].append(val_loss)\n        self.history[\"val_acc\"].append(val_acc)\n\n    def configure_optimizers(self):\n        optimizer = Adam(self.parameters())\n        scheduler = ...\n        return [optimizer], [scheduler]\n```\n\n## 自定义Callback\n\n```python\nclass 
CSVLogger(Callback):\n    def __init__(self, dirpath=\"history/\", filename=\"history\"):\n        super(CSVLogger, self).__init__()\n        if not os.path.exists(dirpath):\n            os.makedirs(dirpath)\n        self.name = dirpath + filename\n        if len(filename) > 4 and filename[-4:] != \".csv\":\n            self.name += \".csv\"\n\n    def on_epoch_end(self, trainer, module): # 在每轮结束时保存log到磁盘\n        history = pd.DataFrame(module.history)\n        history.to_csv(self.name, index=False)\n```\n\n# ModelCheckpoint\n\n模型检查点，尽管Pytorch Lightning官方有实现，我们依旧可以自定义一个。\n\n## 修改LightningModule\n\n和CSVLogger的一样，主要是history记录log。\n\n## 自定义Callback\n\n```python\nclass ModelCheckpoint(Callback):\n    def __init__(self, dirpath=\"checkpoint/\", filename=\"checkpoint\", monitor=\"val_acc\", mode=\"max\"):\n        super(ModelCheckpoint, self).__init__()\n        if not os.path.exists(dirpath):\n            os.makedirs(dirpath)\n        self.name = dirpath + filename\n        if len(filename) > 4 and filename[-4:] != \".pth\":\n            self.name += \".pth\"\n        self.monitor = monitor\n        self.mode = mode\n        self.value = 0. 
if mode == \"max\" else 1e6\n\n    def on_epoch_end(self, trainer, module): # 在每轮结束时检查\n        if self.mode == \"max\" and module.history[self.monitor][-1] > self.value:\n            self.value = module.history[self.monitor][-1]\n            torch.save(module.state_dict(), self.name)\n        if self.mode == \"min\" and module.history[self.monitor][-1] < self.value:\n            self.value = module.history[self.monitor][-1]\n            torch.save(module.state_dict(), self.name)\n```\n\n# LearningCurve\n\n我们来画个学习曲线，看看训练的各个指标的趋势。\n\n## 修改LightningModule\n\n和CSVLogger的一样，主要是history记录log。\n\n## 自定义Callback\n\n```python\nclass LearningCurve(Callback):\n    def __init__(self, dirpath=\"checkpoint/\", filename=\"log\", figsize=(12, 4), names=(\"loss\", \"acc\", \"f1\")):\n        super(LearningCurve, self).__init__()\n        if not os.path.exists(dirpath):\n            os.makedirs(dirpath)\n        self.name = dirpath + filename\n        if len(filename) > 4 and filename[-4:] != \".png\":\n            self.name += \".png\"\n        self.figsize = figsize\n        self.names = names\n\n    def on_fit_end(self, trainer, module): # 在.fit结束时画图\n        history = module.history\n        plt.figure(figsize=self.figsize)\n        for i, j in enumerate(self.names):\n            plt.subplot(1, len(self.names), i + 1)\n            plt.title(j + \"/val_\" + j)\n            plt.plot(history[j], \"--o\", color='r', label=j)\n            plt.plot(history[\"val_\" + j], \"-*\", color='g', label=\"val_\" + j)\n            plt.legend()\n        plt.savefig(self.name)\n        plt.show()\n```\n\n# 注意事项\n\n- 当你定义多个Callback时，一定要使他们不相关。\n- 定义Callback时注意每个操作的调用时间顺序。\n- 建议在LightningModule中定义一个同上的history用来保存log，而不是用官方的logger，这样可以避免很多bug，而且随时都能用上。"
  },
  {
    "path": "pytorch-lightning入门到精通（6）.md",
    "content": "[TOC]\n\n# 赛题背景\n\n[CCF2020训练赛：通用音频分类](https://www.datafountain.cn/competitions/486)\n\n- **赛题名**：通用音频分类\n\n- **赛道**：训练赛道\n\n- **背景**：随着移动终端的广泛应用以及数据量的不断积累，海量多媒体信息的处理需求日益凸显。作为多媒体信息的重要载体，音频信息处理应用广泛且多样，如自动语音识别、音乐风格识别等。有些声音是独特的，可以立即识别，例如婴儿的笑声或吉他的弹拨声。有些音频背景噪声复杂，很难区分。如果闭上眼睛，您能说出电锯和搅拌机是下面哪种声音？音频分类是音频信息处理领域的一个基本问题，从本质上说，音频分类的性能依赖于音频中的特征提取。传统特征提取算法使用音频特征的统计信息作为分类的依据,使用到的音频特征包括线性预测编码、短时平均能量等。近年来，基于深度学习的音频分类取得了较大进展。基于端到端的特征提取方式，深度学习可以避免繁琐的人工特征设计。音频的多样化给“机器听觉”带来了巨大挑战。如何对音频信息进行有效的分类,从繁芜丛杂的数据集中将具有某种特定形态的音频归属到同一个集合，对于学术研究及工业应用具有重要意义。\n\n- **任务**：基于上述实际需求以及深度学习的进展，本次训练赛旨在构建通用的基于深度学习的自动音频分类系统。通过本赛题建立准确的音频分类模型，希望大家探索更为鲁棒的音频表述方法，以及转移学习、自监督学习等方法在音频分类中的应用。\n- 训练集大约6万条音频数据，测试集大约6千条。一共30类，采样率为16000，每条数据大约1秒。打榜指标为accuracy。\n- [代码地址](https://github.com/3017218062/Universal-Audio-Classification)\n\n# 文件概括\n\n- \\_\\_init\\_\\_\\.py：导入所需的库。\n- arg\\.py：命令行参数。\n- callback\\.py：进度条、日志等辅助工具。\n- dataset\\.py：数据集文件。\n- model\\.py：定义模型和训练逻辑。\n- preprocess\\.py：预处理和数据划分。\n- transform\\.py：数据增强文件。\n- util\\.py：指标和损失函数。\n- train\\.py：训练文件。\n- inference\\.py：推理文件。\n\n# 环境要求\n\n- 硬件：2080Ti*5\n- 框架：Pytorch1.6，Pytorch Lightning\n- 库：见requirements.txt\n- 数据：修改train\\.py和inference中的input_path为训练集路径\n\n# 文件运行\n\n- 训练：\n  - python train.py -t 224 -m \"dla60_res2next\" -f 0 -g 0\n  - python train.py -t 224 -m \"dla60_res2next\" -f 1 -g 1\n  - python train.py -t 224 -m \"dla60_res2next\" -f 2 -g 2\n  - python train.py -t 224 -m \"dla60_res2next\" -f 3 -g 3\n  - python train.py -t 224 -m \"dla60_res2next\" -f 4 -g 4\n- 推理：\n  - python inference.py -t 224 -m \"dla60_res2next\" -f 5 -a \"y\"\n\n# 总体思路\n\n- 将数据进行五折划分，使用第一折进行试验。\n- 使用librosa.feature.melspectrogram提取频谱图，从小分辨率开始实验（高32维持不变），注意归一化。\n- 数据增强主要是高斯噪声、音频偏移和音量调节。\n- 从resnet18开始，依次替换为更大更复杂的模型。\n- 找到最终模型后进行五折集成。\n- 进行不同种类模型的集成。\n- 进行测试时增强集成。\n\n# 实验过程\n\n- 0.95259692758\n  - 模型：resnet50\n  - n_mels：64\n- 0.95610826628\n  - 模型：resnet50d\n  - n_mels：64\n- 0.95918068764\n  - 模型：res2next50\n  - n_mels：64\n- 0.96576444770\n  - 模型：res2next50\n  - 
n_mels：64\n  - width：64\n- 0.96971470373\n  - 模型：res2next50\n  - n_mels：128\n  - width：128\n  - more augment\n- 0.96898317484\n  - 模型：resnest50d\n  - n_mels：128\n  - width：128\n  - more augment\n- 0.97307973665\n  - 模型：res2next50\n  - n_mels：224\n  - width：224\n  - more augment\n- 0.97527432334\n  - 模型：res2next50\n  - n_mels：224\n  - width：224\n  - more augment\n  - 5-fold hard ensemble\n- 0.97542062911\n  - 模型：res2next50\n  - n_mels：224\n  - width：224\n  - more augment\n  - 5-fold soft ensemble\n- 0.97585954645\n  - 模型：res2next50\n  - n_mels：224\n  - width：224\n  - more augment\n  - 5-fold soft ensemble\n  - 4TTA\n- 0.97527432334\n  - 模型：res2next50\n  - n_mels：224\n  - width：224\n  - more augment\n  - 5-fold soft ensemble\n  - 4TTA\n  - smooth0.1\n  - ohem0.9\n\n# 反思总结\n\n- 更大的分辨率可以达到更好的效果，但对机器要求也会随之提高。\n- efficientnet系列训练快，效果好，但容易过拟合。\n- 五折和TTA永远的神。\n- 数据增强时不要使用音调调整，太慢了。\n- 标签平滑为什么没用呢，俺也没有明白。\n- OHEM可以更好地分类tree/three这种难例，但对整体的精度有所损失，可能需要训练更多epoch。"
  }
]