Repository: stockmarkteam/bert-book
Branch: master
Commit: 4b6e61cb44fa
Files: 11
Total size: 147.1 KB

Directory structure:
gitextract_v8utlsjf/
├── CORRECTION.md
├── Chapter10.ipynb
├── Chapter4.ipynb
├── Chapter5.ipynb
├── Chapter6.ipynb
├── Chapter7.ipynb
├── Chapter8.ipynb
├── Chapter9.ipynb
├── LICENSE
├── README.md
└── README_studio-lab.md

================================================
FILE CONTENTS
================================================

================================================
FILE: CORRECTION.md
================================================

# Errata

We have received reports of typos from our readers. Thank you very much. If you find a typo, please open an issue and let us know.

### Code

The code corrections below have already been applied to the published files.

| Code block | Location | Correction | Fixed in |
| ---- | ---- | ---- | ---- |
| 4-2 || BertJapanese-Tokenizer -> BertJapaneseTokenizer | 3rd printing |
| 5-5 | In the comment 4 lines from the bottom | iput_ids -> input_ids | 3rd printing |
| 6-5 | 6 lines from the bottom | scores -> scoresのサイズ | 3rd printing |
| 8-5 | p113, lines 15, 21, 22 from the top; p114, lines 14, 15 from the bottom (5 places in total) | tokenizer -> self | 3rd printing |
| 8-11 | 9 lines from the bottom (not wrong as-is, but the output is the same with or without `return_dict=True`, so it was removed) | (\*\*encoding, return_dict=True) -> (\*\*encoding) | 3rd printing |
| 8-16 | p126, 3 lines from the bottom; p127, line 4 from the top (2 places in total) | (\*\*batch, return_dict=True) -> (\*\*batch) | 3rd printing |
| 8-17 | Line 14 from the top | (\*\*encoding, return_dict=True) -> (\*\*encoding) | 3rd printing |
| 8-21 | p136, lines 7, 22, 23 from the top; p137, lines 13, 14 from the bottom (5 places in total) | tokenizer -> self | 3rd printing |
| 8-23 | p141, 13 lines from the bottom | (\*\*encoding, return_dict=True) -> (\*\*encoding) | 3rd printing |
| 9-4 | p148, lines 5, 12 from the top; p149, lines 15, 16 from the top; p150, line 10 from the top (5 places in total) | tokenizer -> self | 3rd printing |

### Main text

| Page | Location | Correction | Fixed in |
| ---- | ---- | ---- | ---- |
| iv | Last line of the preface | 読者の皆様の -> 読者の皆様に | 3rd printing |
| v | Table of contents, 2.4.2 | Long-Short Term Memoty -> Long Short-Term Memory | 5th printing |
| p2 | The 「形態素解析」 (morphological analysis) bullet item | 活用系 -> 活用形 | 3rd printing |
| p3 | The 「文章校正」 (text proofreading) bullet item | 第10章 -> 第9章 | 5th printing |
| p6 | Reference [1] | NACACL -> NAACL | 3rd printing |
| p12 | Figure 2.1(b) | For x<0, y is drawn slightly below 0; correctly, y=0 for x<0 and y=x for x>=0. | 3rd printing |
| p21 | Last line | 再起的 -> 再帰的 | 3rd printing |
| p22 | 7 lines from the bottom | ベクトル$h'\_i$をSoftmax変換 -> ベクトル$h'\_(i-1)$をSoftmax変換 | 5th printing |
| p24 | Heading of Section 2.4.2 | Long-Short Term Memoty -> Long Short-Term Memory | 5th printing |
| p25 | 2 lines from the bottom | 出力値$h_i$計算する際 -> 出力値$h_i$を計算する際 | 3rd printing |
| p27 | Line 2 from the top | Add a minus sign (-) at the beginning of the equation. | 3rd printing |
| p28 | Line 1 from the top | 図2.6(a) -> 図2.6 | 3rd printing |
| p34 | Line 8 from the top | と表現されます。 -> と表現されます($K^T$は行列$K$の転置行列を表す)。 | 5th printing |
| p34 | Section 3.1.2, line 3 of the first paragraph | Scale -> Scaled | 3rd printing |
| p36 | Section 3.1.5, line 2 from the top | Layer Normalizatin -> Layer Normalization | 5th printing |
| p36 | Section 3.2.1, line 5 from the top | 受け入れるられる -> 受け入れられる | 5th printing |
| p36 | Section 3.2.1, line 8 from the top | 末尾にに -> 末尾に | 5th printing |
| p37 | Last line | e^S_i -> e^P_i | 3rd printing |
| p41 | Reference [2] | NACACL -> NAACL | 3rd printing |
| p48 | Sentence immediately after the output of code block #4-4 | マシーンラーニング -> マシンラーニング | 3rd printing |
| p53 | Below the example output | num_hidden_layer -> num_hidden_layers | 3rd printing |
| p53 | Below the example output | max_position_embedding -> max_position_embeddings | 3rd printing |
| p55 | Line 3 from the top | 隠れ状態の次元は728 -> 隠れ状態の次元は768 | 3rd printing |
| p58 | Line 2 from the top | なにかを予測する」というというタスク -> なにかを予測する」というタスク | |
| p63 | Line 2 from the top | predict_topk_mask -> predict_mask_topk | 3rd printing |
| p73 | Line 3 from the top of the example output | size -> scoresのサイズ | 3rd printing |
| p78 | Last line of paragraph 1 | 本項をを -> 本項を | 3rd printing |
| p79 | Last line of paragraph 2 | m_1 -> m | 3rd printing |
| p83 | 5 lines from the bottom | train_step -> training_step | 3rd printing |
| p84 | Line 1 from the top | train_step -> training_step | 3rd printing |
| p84 | Line 2 from the top | train_step -> training_step | 3rd printing |
| p85 | Line 4 from the top | ModelCehckpoint -> ModelCheckpoint | 3rd printing |
| p85 | 2 lines from the bottom of the body text | checkpoint.bert_model_path -> checkpoint.best_model_score | 3rd printing |
| p92 | 3 lines from the bottom | 選択肢ない -> 選択しない | 5th printing |
| p107 | List of IREX named-entity categories at the bottom of the page | The category 「固有物名」 was missing. | 3rd printing |
| p117 | Example sentence | Ltdだ。 -> Ltdである。 | 5th printing |
| p118 | Line 3 from the top | 「Tencent Holdings Limited」 -> 「Tencent Holdings Ltd」 | 5th printing |
| p121 | Two places in the last paragraph of the body text and the caption of Figure 8.2 | BertForToeknClassification -> BertForTokenClassification | 3rd printing |
| p128 | Line 15 from the top | 〜再現率は下がってしまします。 -> 〜再現率は下がってしまいます。 | 5th printing |
| p132 | Line 2 from the top | この線は -> この例は | 5th printing |
| p135 | Last line | ここでで -> ここで | 5th printing |
| p156 | First bullet item | categry -> category | 5th printing |
| p179 | Section A-1, 2 lines from the bottom of the last paragraph | アルゴリズムをを -> アルゴリズムを | 3rd printing |

================================================
FILE: Chapter10.ipynb
================================================

{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter10.ipynb", "provenance": [], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "Cao0hx_ts_xb" }, "source": [ "# 10章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。" ] }, { "cell_type": "code", "metadata": { "id": "BDX6Gi6xiCOY" }, "source": [ "# 10-1\n", "!mkdir chap10\n", "%cd ./chap10" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "0hJ-pXOwXBzH" }, "source": [ "# 10-2\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "V_BGiKTflI39" }, "source": [ "# 10-3\n", "import random\n", "import glob\n", "from tqdm import tqdm\n", "import numpy as np\n", "from sklearn.manifold import TSNE\n", "from sklearn.decomposition import PCA\n", "import matplotlib.pyplot as plt\n", "\n", "import torch\n", "from torch.utils.data import DataLoader\n", "from transformers import BertJapaneseTokenizer, BertModel\n", "\n", "# BERTの日本語モデル\n", "MODEL_NAME = 'tohoku-nlp/bert-base-japanese-whole-word-masking'" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "r97ZbgVeZ-Hi" }, "source": [ "# 10-4\n", "#データのダウンロード\n", "!wget https://www.rondhuit.com/download/ldcc-20140209.tar.gz \n", "#ファイルの解凍\n", "!tar -zxf ldcc-20140209.tar.gz " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "G9YGEfZUAxea" }, "source": [ "# 10-5\n", "# カテゴリーのリスト\n", "category_list = [\n", " 'dokujo-tsushin',\n", " 'it-life-hack',\n", " 'kaden-channel',\n", " 'livedoor-homme',\n", " 'movie-enter',\n", " 'peachy',\n", " 'smax',\n", " 'sports-watch',\n", " 'topic-news'\n", "]\n", "\n", "# トークナイザとモデルのロード\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)\n", "model = BertModel.from_pretrained(MODEL_NAME)\n", "model = model.cuda()\n", "\n", "# 各データの形式を整える\n", "max_length = 256\n", "sentence_vectors = [] # 文章ベクトルを追加していく。\n", "labels = [] # ラベルを追加していく。\n", "for label, category in enumerate(tqdm(category_list)):\n", " for file in glob.glob(f'./text/{category}/{category}*'):\n", " # 記事から文章を抜き出し、符号化を行う。\n", " lines = open(file).read().splitlines()\n", " text = '\\n'.join(lines[3:])\n", " encoding = tokenizer(\n", " text, \n", " max_length=max_length, \n", " padding='max_length', \n", " truncation=True, \n", " return_tensors='pt'\n", " )\n", " encoding = { k: v.cuda() for k, v in encoding.items() } \n", " attention_mask = encoding['attention_mask']\n", "\n", " # 文章ベクトルを計算\n", " # 
BERTの最終層の出力を平均を計算する。(ただし、[PAD]は除く。)\n", " with torch.no_grad():\n", " output = model(**encoding)\n", " last_hidden_state = output.last_hidden_state \n", " averaged_hidden_state = \\\n", " (last_hidden_state*attention_mask.unsqueeze(-1)).sum(1) \\\n", " / attention_mask.sum(1, keepdim=True) \n", "\n", " # 文章ベクトルとラベルを追加\n", " sentence_vectors.append(averaged_hidden_state[0].cpu().numpy())\n", " labels.append(label)\n", "\n", "# それぞれをnumpy.ndarrayにする。\n", "sentence_vectors = np.vstack(sentence_vectors)\n", "labels = np.array(labels)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "4h6wmubg5joK" }, "source": [ "# 10-6\n", "sentence_vectors_pca = PCA(n_components=2).fit_transform(sentence_vectors) \n", "print(sentence_vectors_pca.shape)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Tupdek04-_0j" }, "source": [ "# 10-7\n", "plt.figure(figsize=(10,10))\n", "for label in range(9):\n", " plt.subplot(3,3,label+1)\n", " index = labels == label\n", " plt.plot(\n", " sentence_vectors_pca[:,0], \n", " sentence_vectors_pca[:,1], \n", " 'o', \n", " markersize=1, \n", " color=[0.7, 0.7, 0.7]\n", " )\n", " plt.plot(\n", " sentence_vectors_pca[index,0], \n", " sentence_vectors_pca[index,1], \n", " 'o', \n", " markersize=2, \n", " color='k'\n", " )\n", " plt.title(category_list[label])" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "y6PPni1WHqLK" }, "source": [ "# 10-8\n", "sentence_vectors_tsne = TSNE(n_components=2).fit_transform(sentence_vectors) " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "YIi2D2rhBknu" }, "source": [ "# 10-9\n", "plt.figure(figsize=(10,10))\n", "for label in range(9):\n", " plt.subplot(3,3,label+1)\n", " index = labels == label\n", " plt.plot(\n", " sentence_vectors_tsne[:,0],\n", " sentence_vectors_tsne[:,1], \n", " 'o', \n", " markersize=1, \n", " color=[0.7, 0.7, 0.7]\n", " )\n", " plt.plot(\n", " sentence_vectors_tsne[index,0],\n", " sentence_vectors_tsne[index,1], \n", " 'o',\n", " markersize=2,\n", " color='k'\n", " )\n", " plt.title(category_list[label])" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "kp3U8tB-I46h" }, "source": [ "# 10-10\n", "# 先にノルムを1にしておく。\n", "norm = np.linalg.norm(sentence_vectors, axis=1, keepdims=True) \n", "sentence_vectors_normalized = sentence_vectors / norm\n", "\n", "# 類似度行列を計算する。\n", "# 類似度行列の(i,j)要素はi番目の記事とj番目の記事の類似度を表している。\n", "sim_matrix = sentence_vectors_normalized.dot(sentence_vectors_normalized.T)\n", "\n", "# 入力と同じ記事が出力されることを避けるため、\n", "# 類似度行列の対角要素の値を小さくしておく。\n", "np.fill_diagonal(sim_matrix, -1)\n", "\n", "# 類似度が高い記事のインデックスを得る\n", "similar_news = sim_matrix.argmax(axis=1) \n", "\n", "# 類似文章検索により選ばれた記事とカテゴリーが同一であった記事の割合を計算\n", "input_news_categories = labels\n", "output_news_categories = labels[similar_news]\n", "num_correct = ( input_news_categories == output_news_categories ).sum()\n", "accuracy = num_correct / labels.shape[0]\n", "\n", "print(f\"Accuracy: {accuracy:.2f}\")" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: Chapter4.ipynb ================================================ { "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter04.ipynb", "provenance": [], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": 
"ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "8R27gfyksk4d" }, "source": [ "# 4章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。" ] }, { "cell_type": "code", "metadata": { "id": "kvqSUAEtU_VJ" }, "source": [ "# 4-1\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "DWT32lOgHLrU" }, "source": [ "# 4-2\n", "import torch\n", "from transformers import BertJapaneseTokenizer, BertModel" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "QcFsCeMBZVDR" }, "source": [ "# 4-3\n", "model_name = 'tohoku-nlp/bert-base-japanese-whole-word-masking'\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(model_name)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "gPy-VLclxU_u" }, "source": [ "# 4-4\n", "tokenizer.tokenize('明日は自然言語処理の勉強をしよう。')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "fekTFJmD0-TQ" }, "source": [ "# 4-5\n", "tokenizer.tokenize('明日はマシンラーニングの勉強をしよう。')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "g35K_yPf4YZL" }, "source": [ "# 4-6\n", "tokenizer.tokenize('機械学習を中国語にすると机器学习だ。')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "pWVOdX-Ci7zx" }, "source": [ "# 4-7\n", "input_ids = tokenizer.encode('明日は自然言語処理の勉強をしよう。')\n", "print(input_ids)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "QNAPAMjCjmH-" }, "source": [ "# 4-8\n", "tokenizer.convert_ids_to_tokens(input_ids)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "FCOXUJCsxj_F" }, "source": [ "# 4-9\n", "text = '明日の天気は晴れだ。'\n", "encoding = tokenizer(\n", " text, max_length=12, padding='max_length', truncation=True\n", ")\n", "print('# encoding:')\n", "print(encoding)\n", "\n", "tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])\n", "print('# tokens:')\n", "print(tokens)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Y_KCd86ozaYH" }, "source": [ "# 4-10\n", "encoding = tokenizer(\n", " text, max_length=6, padding='max_length', truncation=True\n", ")\n", "tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])\n", "print(tokens)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Lty7haD0kG-U" }, "source": [ "# 4-11\n", "text_list = ['明日の天気は晴れだ。','パソコンが急に動かなくなった。']\n", "tokenizer(\n", " text_list, max_length=10, padding='max_length', truncation=True\n", ")" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Sv2IQ2uD2B1i" }, "source": [ "# 4-12\n", "tokenizer(text_list, padding='longest')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "e9f1dK4IDzt_" }, "source": [ "# 4-13\n", "tokenizer(\n", " text_list,\n", " max_length=10,\n", " padding='max_length',\n", " truncation=True,\n", " return_tensors='pt'\n", ")" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "6ddHZWk6wLjh" }, "source": [ "# 4-14\n", "# モデルのロード\n", "model_name = 'tohoku-nlp/bert-base-japanese-whole-word-masking'\n", "bert = 
BertModel.from_pretrained(model_name)\n", "\n", "# BERTをGPUに載せる\n", "bert = bert.cuda() " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "jUG0FwjdERPP" }, "source": [ "# 4-15\n", "print(bert.config)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "C4Pcz2VCCChM" }, "source": [ "# 4-16\n", "text_list = [\n", " '明日は自然言語処理の勉強をしよう。',\n", " '明日はマシーンラーニングの勉強をしよう。'\n", "]\n", "\n", "# 文章の符号化\n", "encoding = tokenizer(\n", " text_list,\n", " max_length=32,\n", " padding='max_length',\n", " truncation=True,\n", " return_tensors='pt'\n", ")\n", "\n", "# データをGPUに載せる\n", "encoding = { k: v.cuda() for k, v in encoding.items() } \n", "\n", "# BERTでの処理\n", "output = bert(**encoding) # それぞれの入力は2次元のtorch.Tensor\n", "last_hidden_state = output.last_hidden_state # 最終層の出力" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "vwmt3sF3gU1L" }, "source": [ "# 4-17\n", "output = bert(\n", " input_ids=encoding['input_ids'], \n", " attention_mask=encoding['attention_mask'],\n", " token_type_ids=encoding['token_type_ids']\n", ")" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "6uDcHAPSFVlF" }, "source": [ "# 4-18\n", "print(last_hidden_state.size()) #テンソルのサイズ" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "-FPInM0KaPqh" }, "source": [ "# 4-19\n", "with torch.no_grad():\n", " output = bert(**encoding)\n", " last_hidden_state = output.last_hidden_state" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "rxAZqhrmTZOM" }, "source": [ "# 4-20\n", "last_hidden_state = last_hidden_state.cpu() # CPUにうつす。\n", "last_hidden_state = last_hidden_state.numpy() # numpy.ndarrayに変換\n", "last_hidden_state = last_hidden_state.tolist() # リストに変換" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: Chapter5.ipynb ================================================ { "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter05.ipynb", "provenance": [], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "QB3WEyIVstl0" }, "source": [ "# 5章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。" ] }, { "cell_type": "code", "metadata": { "id": "kvqSUAEtU_VJ" }, "source": [ "# 5-1\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "DWT32lOgHLrU" }, "source": [ "# 5-2\n", "import numpy as np\n", "import torch\n", "from transformers import BertJapaneseTokenizer, BertForMaskedLM" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "I7X-Iy52AC1v" }, "source": [ "# 5-3\n", "model_name = 'tohoku-nlp/bert-base-japanese-whole-word-masking'\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(model_name)\n", "bert_mlm = BertForMaskedLM.from_pretrained(model_name)\n", "bert_mlm = bert_mlm.cuda()" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "EfKt-j0WLOfx" }, "source": [ "# 5-4\n", 
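"# Note: '[MASK]' is registered as a special token in this tokenizer's vocabulary (ID 4,\n", "# as used in 5-6 below), so it is kept as one token rather than split into subwords.\n",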
"text = '今日は[MASK]へ行く。'\n", "tokens = tokenizer.tokenize(text)\n", "print(tokens)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "YaW5Y9fM5zeM" }, "source": [ "# 5-5\n", "# 文章を符号化し、GPUに配置する。\n", "input_ids = tokenizer.encode(text, return_tensors='pt')\n", "input_ids = input_ids.cuda()\n", "\n", "# BERTに入力し、分類スコアを得る。\n", "# 系列長を揃える必要がないので、単にiput_idsのみを入力します。\n", "with torch.no_grad():\n", " output = bert_mlm(input_ids=input_ids)\n", " scores = output.logits" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Z-5lnX9r8XKl" }, "source": [ "# 5-6\n", "# ID列で'[MASK]'(IDは4)の位置を調べる\n", "mask_position = input_ids[0].tolist().index(4) \n", "\n", "# スコアが最も良いトークンのIDを取り出し、トークンに変換する。\n", "id_best = scores[0, mask_position].argmax(-1).item()\n", "token_best = tokenizer.convert_ids_to_tokens(id_best)\n", "token_best = token_best.replace('##', '')\n", "\n", "# [MASK]を上で求めたトークンで置き換える。\n", "text = text.replace('[MASK]',token_best)\n", "\n", "print(text)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "TgbIA-1-EVaJ" }, "source": [ "# 5-7\n", "def predict_mask_topk(text, tokenizer, bert_mlm, num_topk):\n", " \"\"\"\n", " 文章中の最初の[MASK]をスコアの上位のトークンに置き換える。\n", " 上位何位まで使うかは、num_topkで指定。\n", " 出力は穴埋めされた文章のリストと、置き換えられたトークンのスコアのリスト。\n", " \"\"\"\n", " # 文章を符号化し、BERTで分類スコアを得る。\n", " input_ids = tokenizer.encode(text, return_tensors='pt')\n", " input_ids = input_ids.cuda()\n", " with torch.no_grad():\n", " output = bert_mlm(input_ids=input_ids)\n", " scores = output.logits\n", "\n", " # スコアが上位のトークンとスコアを求める。\n", " mask_position = input_ids[0].tolist().index(4) \n", " topk = scores[0, mask_position].topk(num_topk)\n", " ids_topk = topk.indices # トークンのID\n", " tokens_topk = tokenizer.convert_ids_to_tokens(ids_topk) # トークン\n", " scores_topk = topk.values.cpu().numpy() # スコア\n", "\n", " # 文章中の[MASK]を上で求めたトークンで置き換える。\n", " text_topk = [] # 穴埋めされたテキストを追加する。\n", " for token in tokens_topk:\n", " token = token.replace('##', '')\n", " text_topk.append(text.replace('[MASK]', token, 1))\n", "\n", " return text_topk, scores_topk\n", "\n", "text = '今日は[MASK]へ行く。'\n", "text_topk, _ = predict_mask_topk(text, tokenizer, bert_mlm, 10)\n", "print(*text_topk, sep='\\n')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "yCaGV_rT3A5N" }, "source": [ "# 5-8\n", "def greedy_prediction(text, tokenizer, bert_mlm):\n", " \"\"\"\n", " [MASK]を含む文章を入力として、貪欲法で穴埋めを行った文章を出力する。\n", " \"\"\"\n", " # 前から順に[MASK]を一つづつ、スコアの最も高いトークンに置き換える。\n", " for _ in range(text.count('[MASK]')):\n", " text = predict_mask_topk(text, tokenizer, bert_mlm, 1)[0][0]\n", " return text\n", "\n", "text = '今日は[MASK][MASK]へ行く。'\n", "greedy_prediction(text, tokenizer, bert_mlm)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "prdEvsxBrrGq" }, "source": [ "# 5-9\n", "text = '今日は[MASK][MASK][MASK][MASK][MASK]'\n", "greedy_prediction(text, tokenizer, bert_mlm)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "yHRemOdN0QE9" }, "source": [ "# 5-10\n", "def beam_search(text, tokenizer, bert_mlm, num_topk):\n", " \"\"\"\n", " ビームサーチで文章の穴埋めを行う。\n", " \"\"\"\n", " num_mask = text.count('[MASK]')\n", " text_topk = [text]\n", " scores_topk = np.array([0])\n", " for _ in range(num_mask):\n", " # 現在得られている、それぞれの文章に対して、\n", " # 最初の[MASK]をスコアが上位のトークンで穴埋めする。\n", " text_candidates = [] # それぞれの文章を穴埋めした結果を追加する。\n", " score_candidates = [] # 
穴埋めに使ったトークンのスコアを追加する。\n", " for text_mask, score in zip(text_topk, scores_topk):\n", " text_topk_inner, scores_topk_inner = predict_mask_topk(\n", " text_mask, tokenizer, bert_mlm, num_topk\n", " )\n", " text_candidates.extend(text_topk_inner)\n", " score_candidates.append( score + scores_topk_inner )\n", "\n", " # 穴埋めにより生成された文章の中から合計スコアの高いものを選ぶ。\n", " score_candidates = np.hstack(score_candidates)\n", " idx_list = score_candidates.argsort()[::-1][:num_topk]\n", " text_topk = [ text_candidates[idx] for idx in idx_list ]\n", " scores_topk = score_candidates[idx_list]\n", "\n", " return text_topk\n", "\n", "text = \"今日は[MASK][MASK]へ行く。\"\n", "text_topk = beam_search(text, tokenizer, bert_mlm, 10)\n", "print(*text_topk, sep='\\n')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "5mhL-VSTvUo7" }, "source": [ "# 5-11\n", "text = '今日は[MASK][MASK][MASK][MASK][MASK]'\n", "text_topk = beam_search(text, tokenizer, bert_mlm, 10)\n", "print(*text_topk, sep='\\n')" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: Chapter6.ipynb ================================================ { "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter6.ipynb", "provenance": [ { "file_id": "https://github.com/stockmarkteam/bert-book/blob/master/Chapter6.ipynb", "timestamp": 1630571793610 } ], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "DKcIdYD2sySs" }, "source": [ "# 6章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。" ] }, { "cell_type": "code", "metadata": { "id": "BDX6Gi6xiCOY" }, "source": [ "# 6-1\n", "!mkdir chap6\n", "%cd ./chap6" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "0hJ-pXOwXBzH" }, "source": [ "# 6-2\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0 pytorch-lightning==1.6.1" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "V_BGiKTflI39" }, "source": [ "# 6-3\n", "import random\n", "import glob\n", "from tqdm import tqdm\n", "\n", "import torch\n", "from torch.utils.data import DataLoader\n", "from transformers import BertJapaneseTokenizer, BertForSequenceClassification\n", "import pytorch_lightning as pl\n", "\n", "# 日本語の事前学習モデル\n", "MODEL_NAME = 'tohoku-nlp/bert-base-japanese-whole-word-masking'" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "CzgAG-1VpLd7" }, "source": [ "# 6-4\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)\n", "bert_sc = BertForSequenceClassification.from_pretrained(\n", " MODEL_NAME, num_labels=2\n", ")\n", "bert_sc = bert_sc.cuda()" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "G6EbYOsCGzaC" }, "source": [ "# 6-5\n", "text_list = [\n", " \"この映画は面白かった。\",\n", " \"この映画の最後にはがっかりさせられた。\",\n", " \"この映画を見て幸せな気持ちになった。\"\n", "]\n", "label_list = [1,0,1]\n", "\n", "# データの符号化\n", "encoding = tokenizer(\n", " text_list, \n", " padding = 'longest',\n", " return_tensors='pt'\n", ")\n", "encoding = { k: v.cuda() for k, v in encoding.items() }\n", "labels = 
torch.tensor(label_list).cuda()\n", "\n", "# 推論\n", "with torch.no_grad():\n", " output = bert_sc.forward(**encoding)\n", "scores = output.logits # 分類スコア\n", "labels_predicted = scores.argmax(-1) # スコアが最も高いラベル\n", "num_correct = (labels_predicted==labels).sum().item() # 正解数\n", "accuracy = num_correct/labels.size(0) # 精度\n", "\n", "print(\"# scores:\")\n", "print(scores.size())\n", "print(\"# predicted labels:\")\n", "print(labels_predicted)\n", "print(\"# accuracy:\")\n", "print(accuracy)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "JtKgd11pGyiE" }, "source": [ "# 6-6\n", "# 符号化\n", "encoding = tokenizer(\n", " text_list, \n", " padding='longest',\n", " return_tensors='pt'\n", ") \n", "encoding['labels'] = torch.tensor(label_list) # 入力にラベルを加える。\n", "encoding = { k: v.cuda() for k, v in encoding.items() }\n", "\n", "# ロスの計算\n", "output = bert_sc(**encoding)\n", "loss = output.loss # 損失の取得\n", "print(loss)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "r97ZbgVeZ-Hi" }, "source": [ "# 6-7\n", "#データのダウンロード\n", "!wget https://www.rondhuit.com/download/ldcc-20140209.tar.gz \n", "#ファイルの解凍\n", "!tar -zxf ldcc-20140209.tar.gz " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "TMUJ3rscgG2z" }, "source": [ "# 6-8\n", "!cat ./text/it-life-hack/it-life-hack-6342280.txt # ファイルを表示" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "49pchD2z6JhM" }, "source": [ "# 6-9\n", "# データローダーの作成\n", "dataset_for_loader = [\n", " {'data':torch.tensor([0,1]), 'labels':torch.tensor(0)},\n", " {'data':torch.tensor([2,3]), 'labels':torch.tensor(1)},\n", " {'data':torch.tensor([4,5]), 'labels':torch.tensor(2)},\n", " {'data':torch.tensor([6,7]), 'labels':torch.tensor(3)},\n", "]\n", "loader = DataLoader(dataset_for_loader, batch_size=2)\n", "\n", "# データセットからミニバッチを取り出す\n", "for idx, batch in enumerate(loader):\n", " print(f'# batch {idx}')\n", " print(batch)\n", " ## ファインチューニングではここでミニバッチ毎の処理を行う" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "2_1f6IbMVbaH" }, "source": [ "# 6-10\n", "loader = DataLoader(dataset_for_loader, batch_size=2, shuffle=True)\n", "\n", "for idx, batch in enumerate(loader):\n", " print(f'# batch {idx}')\n", " print(batch)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "G9YGEfZUAxea" }, "source": [ "# 6-11\n", "# カテゴリーのリスト\n", "category_list = [\n", " 'dokujo-tsushin',\n", " 'it-life-hack',\n", " 'kaden-channel',\n", " 'livedoor-homme',\n", " 'movie-enter',\n", " 'peachy',\n", " 'smax',\n", " 'sports-watch',\n", " 'topic-news'\n", "]\n", "\n", "# トークナイザのロード\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)\n", "\n", "# 各データの形式を整える\n", "max_length = 128\n", "dataset_for_loader = []\n", "for label, category in enumerate(tqdm(category_list)):\n", " for file in glob.glob(f'./text/{category}/{category}*'):\n", " lines = open(file).read().splitlines()\n", " text = '\\n'.join(lines[3:]) # ファイルの4行目からを抜き出す。\n", " encoding = tokenizer(\n", " text,\n", " max_length=max_length, \n", " padding='max_length',\n", " truncation=True\n", " )\n", " encoding['labels'] = label # ラベルを追加\n", " encoding = { k: torch.tensor(v) for k, v in encoding.items() }\n", " dataset_for_loader.append(encoding)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "drP8IYLVBFh_" }, "source": [ "# 6-12\n", "print(dataset_for_loader[0])" ], 
"execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "XHY9Os6NJlip" }, "source": [ "# 6-13\n", "# データセットの分割\n", "random.shuffle(dataset_for_loader) # ランダムにシャッフル\n", "n = len(dataset_for_loader)\n", "n_train = int(0.6*n)\n", "n_val = int(0.2*n)\n", "dataset_train = dataset_for_loader[:n_train] # 学習データ\n", "dataset_val = dataset_for_loader[n_train:n_train+n_val] # 検証データ\n", "dataset_test = dataset_for_loader[n_train+n_val:] # テストデータ\n", "\n", "# データセットからデータローダを作成\n", "# 学習データはshuffle=Trueにする。\n", "dataloader_train = DataLoader(\n", " dataset_train, batch_size=32, shuffle=True\n", ") \n", "dataloader_val = DataLoader(dataset_val, batch_size=256)\n", "dataloader_test = DataLoader(dataset_test, batch_size=256)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ffaUyGcoVj8l" }, "source": [ "# 6-14\n", "class BertForSequenceClassification_pl(pl.LightningModule):\n", " \n", " def __init__(self, model_name, num_labels, lr):\n", " # model_name: Transformersのモデルの名前\n", " # num_labels: ラベルの数\n", " # lr: 学習率\n", "\n", " super().__init__()\n", " \n", " # 引数のnum_labelsとlrを保存。\n", " # 例えば、self.hparams.lrでlrにアクセスできる。\n", " # チェックポイント作成時にも自動で保存される。\n", " self.save_hyperparameters() \n", "\n", " # BERTのロード\n", " self.bert_sc = BertForSequenceClassification.from_pretrained(\n", " model_name,\n", " num_labels=num_labels\n", " )\n", " \n", " # 学習データのミニバッチ(`batch`)が与えられた時に損失を出力する関数を書く。\n", " # batch_idxはミニバッチの番号であるが今回は使わない。\n", " def training_step(self, batch, batch_idx):\n", " output = self.bert_sc(**batch)\n", " loss = output.loss\n", " self.log('train_loss', loss) # 損失を'train_loss'の名前でログをとる。\n", " return loss\n", " \n", " # 検証データのミニバッチが与えられた時に、\n", " # 検証データを評価する指標を計算する関数を書く。\n", " def validation_step(self, batch, batch_idx):\n", " output = self.bert_sc(**batch)\n", " val_loss = output.loss\n", " self.log('val_loss', val_loss) # 損失を'val_loss'の名前でログをとる。\n", "\n", " # テストデータのミニバッチが与えられた時に、\n", " # テストデータを評価する指標を計算する関数を書く。\n", " def test_step(self, batch, batch_idx):\n", " labels = batch.pop('labels') # バッチからラベルを取得\n", " output = self.bert_sc(**batch)\n", " labels_predicted = output.logits.argmax(-1)\n", " num_correct = ( labels_predicted == labels ).sum().item()\n", " accuracy = num_correct/labels.size(0) #精度\n", " self.log('accuracy', accuracy) # 精度を'accuracy'の名前でログをとる。\n", "\n", " # 学習に用いるオプティマイザを返す関数を書く。\n", " def configure_optimizers(self):\n", " return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "lyR6de1TqfW9" }, "source": [ "# 6-15\n", "# 学習時にモデルの重みを保存する条件を指定\n", "checkpoint = pl.callbacks.ModelCheckpoint(\n", " monitor='val_loss',\n", " mode='min',\n", " save_top_k=1,\n", " save_weights_only=True,\n", " dirpath='model/',\n", ")\n", "\n", "# 学習の方法を指定\n", "trainer = pl.Trainer(\n", " gpus=1, \n", " max_epochs=10,\n", " callbacks = [checkpoint]\n", ")" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "fgk48zEqIJKh" }, "source": [ "# 6-16\n", "# PyTorch Lightningモデルのロード\n", "model = BertForSequenceClassification_pl(\n", " MODEL_NAME, num_labels=9, lr=1e-5\n", ")\n", "\n", "# ファインチューニングを行う。\n", "trainer.fit(model, dataloader_train, dataloader_val) " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "h68P7MG-JSh9" }, "source": [ "# 6-17\n", "best_model_path = checkpoint.best_model_path # ベストモデルのファイル\n", "print('ベストモデルのファイル: ', checkpoint.best_model_path)\n", 
"print('ベストモデルの検証データに対する損失: ', checkpoint.best_model_score)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "A-r9stqZqBdW" }, "source": [ "# 6-18\n", "%load_ext tensorboard\n", "%tensorboard --logdir ./" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "6bx0L0Ehr1tM" }, "source": [ "# 6-19\n", "test = trainer.test(dataloaders=dataloader_test)\n", "print(f'Accuracy: {test[0][\"accuracy\"]:.2f}')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "SbJAUdrStSgI" }, "source": [ "# 6-20\n", "# PyTorch Lightningモデルのロード\n", "model = BertForSequenceClassification_pl.load_from_checkpoint(\n", " best_model_path\n", ") \n", "\n", "# Transformers対応のモデルを./model_transformesに保存\n", "model.bert_sc.save_pretrained('./model_transformers') " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "xcho1B0BtfV0" }, "source": [ "# 6-21\n", "bert_sc = BertForSequenceClassification.from_pretrained(\n", " './model_transformers'\n", ")" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: Chapter7.ipynb ================================================ { "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter7.ipynb", "provenance": [ { "file_id": "https://github.com/stockmarkteam/bert-book/blob/master/Chapter7.ipynb", "timestamp": 1630574288605 } ], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "CWPivw5Ss1Hk" }, "source": [ "# 7章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。" ] }, { "cell_type": "code", "metadata": { "id": "LvCX0ZnVJ1WD" }, "source": [ "# 7-1\n", "!mkdir chap7\n", "%cd ./chap7" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "0iMot3XGIhtD" }, "source": [ "# 7-2\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0 pytorch-lightning==1.6.1" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "87bW8wO5IhtF" }, "source": [ "# 7-3\n", "import random\n", "import glob\n", "import json\n", "from tqdm import tqdm\n", "\n", "import torch\n", "from torch.utils.data import DataLoader\n", "from transformers import BertJapaneseTokenizer, BertModel\n", "import pytorch_lightning as pl\n", "\n", "# 日本語の事前学習モデル\n", "MODEL_NAME = 'tohoku-nlp/bert-base-japanese-whole-word-masking'" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "5HFcRL7nnhbX" }, "source": [ "# 7-4\n", "class BertForSequenceClassificationMultiLabel(torch.nn.Module):\n", " \n", " def __init__(self, model_name, num_labels):\n", " super().__init__()\n", " # BertModelのロード\n", " self.bert = BertModel.from_pretrained(model_name) \n", " # 線形変換を初期化しておく\n", " self.linear = torch.nn.Linear(\n", " self.bert.config.hidden_size, num_labels\n", " ) \n", "\n", " def forward(\n", " self, \n", " input_ids=None, \n", " attention_mask=None, \n", " token_type_ids=None, \n", " labels=None\n", " ):\n", " # データを入力しBERTの最終層の出力を得る。\n", " bert_output = self.bert(\n", " input_ids=input_ids,\n", " attention_mask=attention_mask,\n", 
" token_type_ids=token_type_ids)\n", " last_hidden_state = bert_output.last_hidden_state\n", " \n", " # [PAD]以外のトークンで隠れ状態の平均をとる\n", " averaged_hidden_state = \\\n", " (last_hidden_state*attention_mask.unsqueeze(-1)).sum(1) \\\n", " / attention_mask.sum(1, keepdim=True)\n", " \n", " # 線形変換\n", " scores = self.linear(averaged_hidden_state) \n", " \n", " # 出力の形式を整える。\n", " output = {'logits': scores}\n", "\n", " # labelsが入力に含まれていたら、損失を計算し出力する。\n", " if labels is not None: \n", " loss = torch.nn.BCEWithLogitsLoss()(scores, labels.float())\n", " output['loss'] = loss\n", " \n", " # 属性でアクセスできるようにする。\n", " output = type('bert_output', (object,), output) \n", "\n", " return output" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "RbWDC5z4x_kP" }, "source": [ "# 7-5\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)\n", "bert_scml = BertForSequenceClassificationMultiLabel(\n", " MODEL_NAME, num_labels=2\n", ") \n", "bert_scml = bert_scml.cuda()" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "V_ep4ddFjz-O" }, "source": [ "# 7-6\n", "text_list = [\n", " '今日の仕事はうまくいったが、体調があまり良くない。',\n", " '昨日は楽しかった。'\n", "]\n", "\n", "labels_list = [\n", " [1, 1],\n", " [0, 1]\n", "]\n", "\n", "# データの符号化\n", "encoding = tokenizer(\n", " text_list, \n", " padding='longest', \n", " return_tensors='pt'\n", ")\n", "encoding = { k: v.cuda() for k, v in encoding.items() }\n", "labels = torch.tensor(labels_list).cuda()\n", "\n", "# BERTへデータを入力し分類スコアを得る。\n", "with torch.no_grad():\n", " output = bert_scml(**encoding)\n", "scores = output.logits\n", "\n", "# スコアが正ならば、そのカテゴリーを選択する。\n", "labels_predicted = ( scores > 0 ).int()\n", "\n", "# 精度の計算\n", "num_correct = ( labels_predicted == labels ).all(-1).sum().item()\n", "accuracy = num_correct/labels.size(0)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "QrXA5KgXmX-m" }, "source": [ "# 7-7\n", "# データの符号化\n", "encoding = tokenizer(\n", " text_list, \n", " padding='longest', \n", " return_tensors='pt'\n", ")\n", "encoding['labels'] = torch.tensor(labels_list) # 入力にlabelsを含める。\n", "encoding = { k: v.cuda() for k, v in encoding.items() }\n", "\n", "output = bert_scml(**encoding)\n", "loss = output.loss # 損失" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "HJ9Tbr6PIhtF" }, "source": [ "# 7-8\n", "# データのダウンロード\n", "!wget https://s3-ap-northeast-1.amazonaws.com/dev.tech-sketch.jp/chakki/public/chABSA-dataset.zip\n", "# データの解凍\n", "!unzip chABSA-dataset.zip " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "zgXcOtz6fLge" }, "source": [ "# 7-9\n", "data = json.load(open('chABSA-dataset/e00030_ann.json'))\n", "print( data['sentences'][0] )" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "l33ix4WDIhtG" }, "source": [ "# 7-10\n", "category_id = {'negative':0, 'neutral':1 , 'positive':2}\n", "\n", "dataset = []\n", "for file in glob.glob('chABSA-dataset/*.json'):\n", " data = json.load(open(file))\n", " # 各データから文章(text)を抜き出し、ラベル('labels')を作成\n", " for sentence in data['sentences']:\n", " text = sentence['sentence'] \n", " labels = [0,0,0]\n", " for opinion in sentence['opinions']:\n", " labels[category_id[opinion['polarity']]] = 1\n", " sample = {'text': text, 'labels': labels}\n", " dataset.append(sample)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "k4Na8gOPHhya" }, "source": [ "# 7-11\n", 
"print(dataset[0])" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "igPtmux1IhtI" }, "source": [ "# 7-12\n", "# トークナイザのロード\n", "tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)\n", "\n", "# 各データの形式を整える\n", "max_length = 128\n", "dataset_for_loader = []\n", "for sample in dataset:\n", " text = sample['text']\n", " labels = sample['labels']\n", " encoding = tokenizer(\n", " text,\n", " max_length=max_length,\n", " padding='max_length',\n", " truncation=True\n", " )\n", " encoding['labels'] = labels\n", " encoding = { k: torch.tensor(v) for k, v in encoding.items() }\n", " dataset_for_loader.append(encoding)\n", "\n", "# データセットの分割\n", "random.shuffle(dataset_for_loader) \n", "n = len(dataset_for_loader)\n", "n_train = int(0.6*n)\n", "n_val = int(0.2*n)\n", "dataset_train = dataset_for_loader[:n_train] # 学習データ\n", "dataset_val = dataset_for_loader[n_train:n_train+n_val] # 検証データ\n", "dataset_test = dataset_for_loader[n_train+n_val:] # テストデータ\n", "\n", "# データセットからデータローダを作成\n", "dataloader_train = DataLoader(\n", " dataset_train, batch_size=32, shuffle=True\n", ") \n", "dataloader_val = DataLoader(dataset_val, batch_size=256)\n", "dataloader_test = DataLoader(dataset_test, batch_size=256)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "9y3dO-kBIhtI" }, "source": [ "# 7-13\n", "class BertForSequenceClassificationMultiLabel_pl(pl.LightningModule):\n", "\n", " def __init__(self, model_name, num_labels, lr):\n", " super().__init__()\n", " self.save_hyperparameters() \n", " self.bert_scml = BertForSequenceClassificationMultiLabel(\n", " model_name, num_labels=num_labels\n", " ) \n", "\n", " def training_step(self, batch, batch_idx):\n", " output = self.bert_scml(**batch)\n", " loss = output.loss\n", " self.log('train_loss', loss)\n", " return loss\n", " \n", " def validation_step(self, batch, batch_idx):\n", " output = self.bert_scml(**batch)\n", " val_loss = output.loss\n", " self.log('val_loss', val_loss)\n", "\n", " def test_step(self, batch, batch_idx):\n", " labels = batch.pop('labels')\n", " output = self.bert_scml(**batch)\n", " scores = output.logits\n", " labels_predicted = ( scores > 0 ).int()\n", " num_correct = ( labels_predicted == labels ).all(-1).sum().item()\n", " accuracy = num_correct/scores.size(0)\n", " self.log('accuracy', accuracy)\n", "\n", " def configure_optimizers(self):\n", " return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)\n", "\n", "checkpoint = pl.callbacks.ModelCheckpoint(\n", " monitor='val_loss',\n", " mode='min',\n", " save_top_k=1,\n", " save_weights_only=True,\n", " dirpath='model/',\n", ")\n", "\n", "trainer = pl.Trainer(\n", " gpus=1, \n", " max_epochs=5,\n", " callbacks = [checkpoint]\n", ")\n", "\n", "model = BertForSequenceClassificationMultiLabel_pl(\n", " MODEL_NAME, \n", " num_labels=3, \n", " lr=1e-5\n", ")\n", "trainer.fit(model, dataloader_train, dataloader_val)\n", "test = trainer.test(dataloaders=dataloader_test)\n", "print(f'Accuracy: {test[0][\"accuracy\"]:.2f}')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "My3WI8Qd7yVJ" }, "source": [ "# 7-14\n", "# 入力する文章\n", "text_list = [\n", " \"今期は売り上げが順調に推移したが、株価は低迷の一途を辿っている。\",\n", " \"昨年から黒字が減少した。\",\n", " \"今日の飲み会は楽しかった。\"\n", "]\n", "\n", "# モデルのロード\n", "best_model_path = checkpoint.best_model_path\n", "model = BertForSequenceClassificationMultiLabel_pl.load_from_checkpoint(best_model_path)\n", "bert_scml = model.bert_scml.cuda()\n", "\n", "# データの符号化\n", 
"encoding = tokenizer(\n", " text_list, \n", " padding = 'longest',\n", " return_tensors='pt'\n", ")\n", "encoding = { k: v.cuda() for k, v in encoding.items() }\n", "\n", "# BERTへデータを入力し分類スコアを得る。\n", "with torch.no_grad():\n", " output = bert_scml(**encoding)\n", "scores = output.logits\n", "labels_predicted = ( scores > 0 ).int().cpu().numpy().tolist()\n", "\n", "# 結果を表示\n", "for text, label in zip(text_list, labels_predicted):\n", " print('--')\n", " print(f'入力:{text}')\n", " print(f'出力:{label}')" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: Chapter8.ipynb ================================================ { "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter8.ipynb", "provenance": [ { "file_id": "https://github.com/stockmarkteam/bert-book/blob/master/Chapter8.ipynb", "timestamp": 1630575336666 } ], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "ON6jU-cos5E1" }, "source": [ "# 8章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。" ] }, { "cell_type": "code", "metadata": { "id": "r6r9ATFJImOU" }, "source": [ "# 8-1\n", "!mkdir chap8\n", "%cd ./chap8" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "0hJ-pXOwXBzH" }, "source": [ "# 8-2\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0 pytorch-lightning==1.6.1" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "WWsBlMRNhnnx" }, "source": [ "# 8-3\n", "import itertools\n", "import random\n", "import json\n", "from tqdm import tqdm\n", "import numpy as np\n", "import unicodedata\n", "\n", "import torch\n", "from torch.utils.data import DataLoader\n", "from transformers import BertJapaneseTokenizer, BertForTokenClassification\n", "import pytorch_lightning as pl\n", "\n", "# 日本語学習済みモデル\n", "MODEL_NAME = 'tohoku-nlp/bert-base-japanese-whole-word-masking'" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ilBB1q6zPvww" }, "source": [ "# 8-4\n", "normalize = lambda s: unicodedata.normalize(\"NFKC\",s)\n", "print(f'ABC -> {normalize(\"ABC\")}' ) # 全角アルファベット\n", "print(f'ABC -> {normalize(\"ABC\")}' ) # 半角アルファベット\n", "print(f'123 -> {normalize(\"123\")}' ) # 全角数字\n", "print(f'123 -> {normalize(\"123\")}' ) # 半角数字\n", "print(f'アイウ -> {normalize(\"アイウ\")}' ) # 全角カタカナ\n", "print(f'アイウ -> {normalize(\"アイウ\")}' ) # 半角カタカナ" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "OnK2OiMKhmt0" }, "source": [ "# 8-5\n", "class NER_tokenizer(BertJapaneseTokenizer):\n", " \n", " def encode_plus_tagged(self, text, entities, max_length):\n", " \"\"\"\n", " 文章とそれに含まれる固有表現が与えられた時に、\n", " 符号化とラベル列の作成を行う。\n", " \"\"\"\n", " # 固有表現の前後でtextを分割し、それぞれのラベルをつけておく。\n", " entities = sorted(entities, key=lambda x: x['span'][0])\n", " splitted = [] # 分割後の文字列を追加していく\n", " position = 0\n", " for entity in entities:\n", " start = entity['span'][0]\n", " end = entity['span'][1]\n", " label = entity['type_id']\n", " # 固有表現ではないものには0のラベルを付与\n", " splitted.append({'text':text[position:start], 'label':0}) \n", " # 
固有表現には、固有表現のタイプに対応するIDをラベルとして付与\n", " splitted.append({'text':text[start:end], 'label':label}) \n", " position = end\n", " splitted.append({'text': text[position:], 'label':0})\n", " splitted = [ s for s in splitted if s['text'] ] # 長さ0の文字列は除く\n", "\n", " # 分割されたそれぞれの文字列をトークン化し、ラベルをつける。\n", " tokens = [] # トークンを追加していく\n", " labels = [] # トークンのラベルを追加していく\n", " for text_splitted in splitted:\n", " text = text_splitted['text']\n", " label = text_splitted['label']\n", " tokens_splitted = self.tokenize(text)\n", " labels_splitted = [label] * len(tokens_splitted)\n", " tokens.extend(tokens_splitted)\n", " labels.extend(labels_splitted)\n", "\n", " # 符号化を行いBERTに入力できる形式にする。\n", " input_ids = self.convert_tokens_to_ids(tokens)\n", " encoding = self.prepare_for_model(\n", " input_ids, \n", " max_length=max_length, \n", " padding='max_length', \n", " truncation=True\n", " ) # input_idsをencodingに変換\n", " # 特殊トークン[CLS]、[SEP]のラベルを0にする。\n", " labels = [0] + labels[:max_length-2] + [0] \n", " # 特殊トークン[PAD]のラベルを0にする。\n", " labels = labels + [0]*( max_length - len(labels) ) \n", " encoding['labels'] = labels\n", "\n", " return encoding\n", "\n", " def encode_plus_untagged(\n", " self, text, max_length=None, return_tensors=None\n", " ):\n", " \"\"\"\n", " 文章をトークン化し、それぞれのトークンの文章中の位置も特定しておく。\n", " \"\"\"\n", " # 文章のトークン化を行い、\n", " # それぞれのトークンと文章中の文字列を対応づける。\n", " tokens = [] # トークンを追加していく。\n", " tokens_original = [] # トークンに対応する文章中の文字列を追加していく。\n", " words = self.word_tokenizer.tokenize(text) # MeCabで単語に分割\n", " for word in words:\n", " # 単語をサブワードに分割\n", " tokens_word = self.subword_tokenizer.tokenize(word) \n", " tokens.extend(tokens_word)\n", " if tokens_word[0] == '[UNK]': # 未知語への対応\n", " tokens_original.append(word)\n", " else:\n", " tokens_original.extend([\n", " token.replace('##','') for token in tokens_word\n", " ])\n", "\n", " # 各トークンの文章中での位置を調べる。(空白の位置を考慮する)\n", " position = 0\n", " spans = [] # トークンの位置を追加していく。\n", " for token in tokens_original:\n", " l = len(token)\n", " while 1:\n", " if token != text[position:position+l]:\n", " position += 1\n", " else:\n", " spans.append([position, position+l])\n", " position += l\n", " break\n", "\n", " # 符号化を行いBERTに入力できる形式にする。\n", " input_ids = self.convert_tokens_to_ids(tokens) \n", " encoding = self.prepare_for_model(\n", " input_ids, \n", " max_length=max_length, \n", " padding='max_length' if max_length else False, \n", " truncation=True if max_length else False\n", " )\n", " sequence_length = len(encoding['input_ids'])\n", " # 特殊トークン[CLS]に対するダミーのspanを追加。\n", " spans = [[-1, -1]] + spans[:sequence_length-2] \n", " # 特殊トークン[SEP]、[PAD]に対するダミーのspanを追加。\n", " spans = spans + [[-1, -1]] * ( sequence_length - len(spans) ) \n", "\n", " # 必要に応じてtorch.Tensorにする。\n", " if return_tensors == 'pt':\n", " encoding = { k: torch.tensor([v]) for k, v in encoding.items() }\n", "\n", " return encoding, spans\n", "\n", " def convert_bert_output_to_entities(self, text, labels, spans):\n", " \"\"\"\n", " 文章、ラベル列の予測値、各トークンの位置から固有表現を得る。\n", " \"\"\"\n", " # labels, spansから特殊トークンに対応する部分を取り除く\n", " labels = [label for label, span in zip(labels, spans) if span[0] != -1]\n", " spans = [span for span in spans if span[0] != -1]\n", "\n", " # 同じラベルが連続するトークンをまとめて、固有表現を抽出する。\n", " entities = []\n", " for label, group \\\n", " in itertools.groupby(enumerate(labels), key=lambda x: x[1]):\n", " \n", " group = list(group)\n", " start = spans[group[0][0]][0]\n", " end = spans[group[-1][0]][1]\n", "\n", " if label != 0: # ラベルが0以外ならば、新たな固有表現として追加。\n", " entity = {\n", " \"name\": 
text[start:end],\n", " \"span\": [start, end],\n", " \"type_id\": label\n", " }\n", " entities.append(entity)\n", "\n", " return entities" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "qh07iizk9Onf" }, "source": [ "# 8-6\n", "tokenizer = NER_tokenizer.from_pretrained(MODEL_NAME)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "5704VNOVt8Yg" }, "source": [ "# 8-7\n", "text = '昨日のみらい事務所との打ち合わせは順調だった。'\n", "entities = [\n", " {'name': 'みらい事務所', 'span': [3,9], 'type_id': 1}\n", "]\n", "\n", "encoding = tokenizer.encode_plus_tagged(\n", " text, entities, max_length=20\n", ")\n", "print(encoding)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "PDik0FRIL38J" }, "source": [ "# 8-8\n", "text = '騰訊の英語名はTencent Holdings Ltdである。'\n", "encoding, spans = tokenizer.encode_plus_untagged(\n", " text, return_tensors='pt'\n", ")\n", "print('# encoding')\n", "print(encoding)\n", "print('# spans')\n", "print(spans)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "YHS30W_woM-E" }, "source": [ "# 8-9\n", "labels_predicted = [0,1,1,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0]\n", "entities = tokenizer.convert_bert_output_to_entities(\n", " text, labels_predicted, spans\n", ")\n", "print(entities)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Jo45M23hSL_-" }, "source": [ "# 8-10\n", "tokenizer = NER_tokenizer.from_pretrained(MODEL_NAME)\n", "bert_tc = BertForTokenClassification.from_pretrained(\n", " MODEL_NAME, num_labels=4\n", ")\n", "bert_tc = bert_tc.cuda()" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "t344Ak2_0C2c" }, "source": [ "# 8-11\n", "text = 'AさんはB大学に入学した。'\n", "\n", "# 符号化を行い、各トークンの文章中での位置も特定しておく。\n", "encoding, spans = tokenizer.encode_plus_untagged(\n", " text, return_tensors='pt'\n", ") \n", "encoding = { k: v.cuda() for k, v in encoding.items() } \n", "\n", "# BERTでトークン毎の分類スコアを出力し、スコアの最も高いラベルを予測値とする。\n", "with torch.no_grad():\n", " output = bert_tc(**encoding)\n", " scores = output.logits\n", " labels_predicted = scores[0].argmax(-1).cpu().numpy().tolist()\n", "\n", "# ラベル列を固有表現に変換\n", "entities = tokenizer.convert_bert_output_to_entities(\n", " text, labels_predicted, spans\n", ")\n", "print(entities)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "QmFZVpb208CK" }, "source": [ "# 8-12\n", "data = [\n", " {\n", " 'text': 'AさんはB大学に入学した。',\n", " 'entities': [\n", " {'name': 'A', 'span': [0, 1], 'type_id': 2},\n", " {'name': 'B大学', 'span': [4, 7], 'type_id': 1}\n", " ]\n", " },\n", " {\n", " 'text': 'CDE株式会社は新製品「E」を販売する。',\n", " 'entities': [\n", " {'name': 'CDE株式会社', 'span': [0, 7], 'type_id': 1},\n", " {'name': 'E', 'span': [12, 13], 'type_id': 3}\n", " ]\n", " }\n", "]\n", "\n", "# 各データを符号化し、データローダを作成する。\n", "max_length=32\n", "dataset_for_loader = []\n", "for sample in data:\n", " text = sample['text']\n", " entities = sample['entities']\n", " encoding = tokenizer.encode_plus_tagged(\n", " text, entities, max_length=max_length\n", " )\n", " encoding = { k: torch.tensor(v) for k, v in encoding.items() }\n", " dataset_for_loader.append(encoding)\n", "dataloader = DataLoader(dataset_for_loader, batch_size=len(data))\n", "\n", "# ミニバッチを取り出し損失を得る。\n", "for batch in dataloader:\n", " batch = { k: v.cuda() for k, v in batch.items() } # GPU\n", " output = bert_tc(**batch) # BERTへ入力\n", " loss = output.loss # 損失" ], 
"execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "1-4aEmfGf9qT" }, "source": [ "# 8-13\n", "!git clone --branch v2.0 https://github.com/stockmarkteam/ner-wikipedia-dataset " ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "HpwR_RXZlX80" }, "source": [ "# 8-14\n", "# データのロード\n", "dataset = json.load(open('ner-wikipedia-dataset/ner.json','r'))\n", "\n", "# 固有表現のタイプとIDを対応付る辞書 \n", "type_id_dict = {\n", " \"人名\": 1,\n", " \"法人名\": 2,\n", " \"政治的組織名\": 3,\n", " \"その他の組織名\": 4,\n", " \"地名\": 5,\n", " \"施設名\": 6,\n", " \"製品名\": 7,\n", " \"イベント名\": 8\n", "}\n", "\n", "# カテゴリーをラベルに変更、文字列の正規化する。\n", "for sample in dataset:\n", " sample['text'] = unicodedata.normalize('NFKC', sample['text'])\n", " for e in sample[\"entities\"]:\n", " e['type_id'] = type_id_dict[e['type']]\n", " del e['type']\n", "\n", "# データセットの分割\n", "random.shuffle(dataset)\n", "n = len(dataset)\n", "n_train = int(n*0.6)\n", "n_val = int(n*0.2)\n", "dataset_train = dataset[:n_train]\n", "dataset_val = dataset[n_train:n_train+n_val]\n", "dataset_test = dataset[n_train+n_val:]" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "MDGDvIPIlX-s" }, "source": [ "# 8-15\n", "def create_dataset(tokenizer, dataset, max_length):\n", " \"\"\"\n", " データセットをデータローダに入力できる形に整形。\n", " \"\"\"\n", " dataset_for_loader = []\n", " for sample in dataset:\n", " text = sample['text']\n", " entities = sample['entities']\n", " encoding = tokenizer.encode_plus_tagged(\n", " text, entities, max_length=max_length\n", " )\n", " encoding = { k: torch.tensor(v) for k, v in encoding.items() }\n", " dataset_for_loader.append(encoding)\n", " return dataset_for_loader\n", "\n", "# トークナイザのロード\n", "tokenizer = NER_tokenizer.from_pretrained(MODEL_NAME)\n", "\n", "# データセットの作成\n", "max_length = 128\n", "dataset_train_for_loader = create_dataset(\n", " tokenizer, dataset_train, max_length\n", ")\n", "dataset_val_for_loader = create_dataset(\n", " tokenizer, dataset_val, max_length\n", ")\n", "\n", "# データローダの作成\n", "dataloader_train = DataLoader(\n", " dataset_train_for_loader, batch_size=32, shuffle=True\n", ")\n", "dataloader_val = DataLoader(dataset_val_for_loader, batch_size=256)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "446u47Os_Arh" }, "source": [ "# 8-16\n", "# PyTorch Lightningのモデル\n", "class BertForTokenClassification_pl(pl.LightningModule):\n", " \n", " def __init__(self, model_name, num_labels, lr):\n", " super().__init__()\n", " self.save_hyperparameters()\n", " self.bert_tc = BertForTokenClassification.from_pretrained(\n", " model_name,\n", " num_labels=num_labels\n", " )\n", " \n", " def training_step(self, batch, batch_idx):\n", " output = self.bert_tc(**batch)\n", " loss = output.loss\n", " self.log('train_loss', loss)\n", " return loss\n", " \n", " def validation_step(self, batch, batch_idx):\n", " output = self.bert_tc(**batch)\n", " val_loss = output.loss\n", " self.log('val_loss', val_loss)\n", " \n", " def configure_optimizers(self):\n", " return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)\n", "\n", "checkpoint = pl.callbacks.ModelCheckpoint(\n", " monitor='val_loss',\n", " mode='min',\n", " save_top_k=1,\n", " save_weights_only=True,\n", " dirpath='model/'\n", ")\n", "\n", "trainer = pl.Trainer(\n", " gpus=1,\n", " max_epochs=5,\n", " callbacks=[checkpoint]\n", ")\n", "\n", "# ファインチューニング\n", "model = BertForTokenClassification_pl(\n", " MODEL_NAME, num_labels=9, lr=1e-5\n", ")\n", 
"trainer.fit(model, dataloader_train, dataloader_val)\n", "best_model_path = checkpoint.best_model_path" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "9OU3oPuxCEWz" }, "source": [ "# 8-17\n", "def predict(text, tokenizer, bert_tc):\n", " \"\"\"\n", " BERTで固有表現抽出を行うための関数。\n", " \"\"\"\n", " # 符号化\n", " encoding, spans = tokenizer.encode_plus_untagged(\n", " text, return_tensors='pt'\n", " )\n", " encoding = { k: v.cuda() for k, v in encoding.items() }\n", "\n", " # ラベルの予測値の計算\n", " with torch.no_grad():\n", " output = bert_tc(**encoding)\n", " scores = output.logits\n", " labels_predicted = scores[0].argmax(-1).cpu().numpy().tolist() \n", "\n", " # ラベル列を固有表現に変換\n", " entities = tokenizer.convert_bert_output_to_entities(\n", " text, labels_predicted, spans\n", " )\n", "\n", " return entities\n", "\n", "# トークナイザのロード\n", "tokenizer = NER_tokenizer.from_pretrained(MODEL_NAME)\n", "\n", "# ファインチューニングしたモデルをロードし、GPUにのせる。\n", "model = BertForTokenClassification_pl.load_from_checkpoint(\n", " best_model_path\n", ")\n", "bert_tc = model.bert_tc.cuda()\n", "\n", "# 固有表現抽出\n", "# 注:以下ではコードのわかりやすさのために、1データづつ処理しているが、\n", "# バッチ化して処理を行った方が処理時間は短い\n", "entities_list = [] # 正解の固有表現を追加していく。\n", "entities_predicted_list = [] # 抽出された固有表現を追加していく。\n", "for sample in tqdm(dataset_test):\n", " text = sample['text']\n", " entities_predicted = predict(text, tokenizer, bert_tc) # BERTで予測\n", " entities_list.append(sample['entities'])\n", " entities_predicted_list.append( entities_predicted )" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "R-Fu6xJZ29HJ" }, "source": [ "# 8-18\n", "print(\"# 正解\")\n", "print(entities_list[0])\n", "print(\"# 抽出\")\n", "print(entities_predicted_list[0])" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "JeZzmNigiDsD" }, "source": [ "# 8-19\n", "def evaluate_model(entities_list, entities_predicted_list, type_id=None):\n", " \"\"\"\n", " 正解と予測を比較し、モデルの固有表現抽出の性能を評価する。\n", " type_idがNoneのときは、全ての固有表現のタイプに対して評価する。\n", " type_idが整数を指定すると、その固有表現のタイプのIDに対して評価を行う。\n", " \"\"\"\n", " num_entities = 0 # 固有表現(正解)の個数\n", " num_predictions = 0 # BERTにより予測された固有表現の個数\n", " num_correct = 0 # BERTにより予測のうち正解であった固有表現の数\n", "\n", " # それぞれの文章で予測と正解を比較。\n", " # 予測は文章中の位置とタイプIDが一致すれば正解とみなす。\n", " for entities, entities_predicted \\\n", " in zip(entities_list, entities_predicted_list):\n", "\n", " if type_id:\n", " entities = [ e for e in entities if e['type_id'] == type_id ]\n", " entities_predicted = [ \n", " e for e in entities_predicted if e['type_id'] == type_id\n", " ]\n", " \n", " get_span_type = lambda e: (e['span'][0], e['span'][1], e['type_id'])\n", " set_entities = set( get_span_type(e) for e in entities )\n", " set_entities_predicted = \\\n", " set( get_span_type(e) for e in entities_predicted )\n", "\n", " num_entities += len(entities)\n", " num_predictions += len(entities_predicted)\n", " num_correct += len( set_entities & set_entities_predicted )\n", "\n", " # 指標を計算\n", " precision = num_correct/num_predictions # 適合率\n", " recall = num_correct/num_entities # 再現率\n", " f_value = 2*precision*recall/(precision+recall) # F値\n", "\n", " result = {\n", " 'num_entities': num_entities,\n", " 'num_predictions': num_predictions,\n", " 'num_correct': num_correct,\n", " 'precision': precision,\n", " 'recall': recall,\n", " 'f_value': f_value\n", " }\n", "\n", " return result" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "GVbxf1FRlYBU" }, 
"source": [ "# 8-20\n", "print( evaluate_model(entities_list, entities_predicted_list) )" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "tD9sFRUu4Z4c" }, "source": [ "# 8-21\n", "class NER_tokenizer_BIO(BertJapaneseTokenizer):\n", "\n", " # 初期化時に固有表現のカテゴリーの数`num_entity_type`を\n", " # 受け入れるようにする。\n", " def __init__(self, *args, **kwargs):\n", " self.num_entity_type = kwargs.pop('num_entity_type')\n", " super().__init__(*args, **kwargs)\n", "\n", " def encode_plus_tagged(self, text, entities, max_length):\n", " \"\"\"\n", " 文章とそれに含まれる固有表現が与えられた時に、\n", " 符号化とラベル列の作成を行う。\n", " \"\"\"\n", " # 固有表現の前後でtextを分割し、それぞれのラベルをつけておく。\n", " splitted = [] # 分割後の文字列を追加していく\n", " position = 0\n", " for entity in entities:\n", " start = entity['span'][0]\n", " end = entity['span'][1]\n", " label = entity['type_id']\n", " splitted.append({'text':text[position:start], 'label':0})\n", " splitted.append({'text':text[start:end], 'label':label})\n", " position = end\n", " splitted.append({'text': text[position:], 'label':0})\n", " splitted = [ s for s in splitted if s['text'] ]\n", "\n", " # 分割されたそれぞれの文章をトークン化し、ラベルをつける。\n", " tokens = [] # トークンを追加していく\n", " labels = [] # ラベルを追加していく\n", " for s in splitted:\n", " tokens_splitted = self.tokenize(s['text'])\n", " label = s['label']\n", " if label > 0: # 固有表現\n", " # まずトークン全てにI-タグを付与\n", " labels_splitted = \\\n", " [ label + self.num_entity_type ] * len(tokens_splitted)\n", " # 先頭のトークンをB-タグにする\n", " labels_splitted[0] = label\n", " else: # それ以外\n", " labels_splitted = [0] * len(tokens_splitted)\n", " \n", " tokens.extend(tokens_splitted)\n", " labels.extend(labels_splitted)\n", "\n", " # 符号化を行いBERTに入力できる形式にする。\n", " input_ids = self.convert_tokens_to_ids(tokens)\n", " encoding = self.prepare_for_model(\n", " input_ids, \n", " max_length=max_length, \n", " padding='max_length',\n", " truncation=True\n", " ) \n", "\n", " # ラベルに特殊トークンを追加\n", " labels = [0] + labels[:max_length-2] + [0]\n", " labels = labels + [0]*( max_length - len(labels) )\n", " encoding['labels'] = labels\n", "\n", " return encoding\n", "\n", " def encode_plus_untagged(\n", " self, text, max_length=None, return_tensors=None\n", " ):\n", " \"\"\"\n", " 文章をトークン化し、それぞれのトークンの文章中の位置も特定しておく。\n", " IO法のトークナイザのencode_plus_untaggedと同じ\n", " \"\"\"\n", " # 文章のトークン化を行い、\n", " # それぞれのトークンと文章中の文字列を対応づける。\n", " tokens = [] # トークンを追加していく。\n", " tokens_original = [] # トークンに対応する文章中の文字列を追加していく。\n", " words = self.word_tokenizer.tokenize(text) # MeCabで単語に分割\n", " for word in words:\n", " # 単語をサブワードに分割\n", " tokens_word = self.subword_tokenizer.tokenize(word) \n", " tokens.extend(tokens_word)\n", " if tokens_word[0] == '[UNK]': # 未知語への対応\n", " tokens_original.append(word)\n", " else:\n", " tokens_original.extend([\n", " token.replace('##','') for token in tokens_word\n", " ])\n", "\n", " # 各トークンの文章中での位置を調べる。(空白の位置を考慮する)\n", " position = 0\n", " spans = [] # トークンの位置を追加していく。\n", " for token in tokens_original:\n", " l = len(token)\n", " while 1:\n", " if token != text[position:position+l]:\n", " position += 1\n", " else:\n", " spans.append([position, position+l])\n", " position += l\n", " break\n", "\n", " # 符号化を行いBERTに入力できる形式にする。\n", " input_ids = self.convert_tokens_to_ids(tokens) \n", " encoding = self.prepare_for_model(\n", " input_ids, \n", " max_length=max_length, \n", " padding='max_length' if max_length else False, \n", " truncation=True if max_length else False\n", " )\n", " sequence_length = len(encoding['input_ids'])\n", " # 特殊トークン[CLS]に対するダミーのspanを追加。\n", " spans = 
[[-1, -1]] + spans[:sequence_length-2] \n", " # 特殊トークン[SEP]、[PAD]に対するダミーのspanを追加。\n", " spans = spans + [[-1, -1]] * ( sequence_length - len(spans) ) \n", "\n", " # 必要に応じてtorch.Tensorにする。\n", " if return_tensors == 'pt':\n", " encoding = { k: torch.tensor([v]) for k, v in encoding.items() }\n", "\n", " return encoding, spans\n", "\n", " @staticmethod\n", " def Viterbi(scores_bert, num_entity_type, penalty=10000):\n", " \"\"\"\n", " Viterbiアルゴリズムで最適解を求める。\n", " \"\"\"\n", " m = 2*num_entity_type + 1\n", " penalty_matrix = np.zeros([m, m])\n", " for i in range(m):\n", " for j in range(1+num_entity_type, m):\n", " if not ( (i == j) or (i+num_entity_type == j) ): \n", " penalty_matrix[i,j] = penalty\n", " \n", " path = [ [i] for i in range(m) ]\n", " scores_path = scores_bert[0] - penalty_matrix[0,:]\n", " scores_bert = scores_bert[1:]\n", "\n", " for scores in scores_bert:\n", " assert len(scores) == 2*num_entity_type + 1\n", " score_matrix = np.array(scores_path).reshape(-1,1) \\\n", " + np.array(scores).reshape(1,-1) \\\n", " - penalty_matrix\n", " scores_path = score_matrix.max(axis=0)\n", " argmax = score_matrix.argmax(axis=0)\n", " path_new = []\n", " for i, idx in enumerate(argmax):\n", " path_new.append( path[idx] + [i] )\n", " path = path_new\n", "\n", " labels_optimal = path[np.argmax(scores_path)]\n", " return labels_optimal\n", "\n", " def convert_bert_output_to_entities(self, text, scores, spans):\n", " \"\"\"\n", " 文章、分類スコア、各トークンの位置から固有表現を得る。\n", " 分類スコアはサイズが(系列長、ラベル数)の2次元配列\n", " \"\"\"\n", " assert len(spans) == len(scores)\n", " num_entity_type = self.num_entity_type\n", " \n", " # 特殊トークンに対応する部分を取り除く\n", " scores = [score for score, span in zip(scores, spans) if span[0]!=-1]\n", " spans = [span for span in spans if span[0]!=-1]\n", "\n", " # Viterbiアルゴリズムでラベルの予測値を決める。\n", " labels = self.Viterbi(scores, num_entity_type)\n", "\n", " # 同じラベルが連続するトークンをまとめて、固有表現を抽出する。\n", " entities = []\n", " for label, group \\\n", " in itertools.groupby(enumerate(labels), key=lambda x: x[1]):\n", " \n", " group = list(group)\n", " start = spans[group[0][0]][0]\n", " end = spans[group[-1][0]][1]\n", "\n", " if label != 0: # 固有表現であれば\n", " if 1 <= label <= num_entity_type:\n", " # ラベルが`B-`ならば、新しいentityを追加\n", " entity = {\n", " \"name\": text[start:end],\n", " \"span\": [start, end],\n", " \"type_id\": label\n", " }\n", " entities.append(entity)\n", " else:\n", " # ラベルが`I-`ならば、直近のentityを更新\n", " entity['span'][1] = end \n", " entity['name'] = text[entity['span'][0]:entity['span'][1]]\n", " \n", " return entities" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "GvyVJCgH4Z6M" }, "source": [ "# 8-22\n", "# トークナイザのロード\n", "# 固有表現のカテゴリーの数`num_entity_type`を入力に入れる必要がある。\n", "tokenizer = NER_tokenizer_BIO.from_pretrained(\n", " MODEL_NAME,\n", " num_entity_type=8 \n", ")\n", "\n", "# データセットの作成\n", "max_length = 128\n", "dataset_train_for_loader = create_dataset(\n", " tokenizer, dataset_train, max_length\n", ")\n", "dataset_val_for_loader = create_dataset(\n", " tokenizer, dataset_val, max_length\n", ")\n", "\n", "# データローダの作成\n", "dataloader_train = DataLoader(\n", " dataset_train_for_loader, batch_size=32, shuffle=True\n", ")\n", "dataloader_val = DataLoader(dataset_val_for_loader, batch_size=256)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Ykfw0rCA4Z9N" }, "source": [ "# 8-23\n", "\n", "# ファインチューニング\n", "checkpoint = pl.callbacks.ModelCheckpoint(\n", " monitor='val_loss',\n", " mode='min',\n", " save_top_k=1,\n", " 
save_weights_only=True,\n", " dirpath='model_BIO/'\n", ")\n", "\n", "trainer = pl.Trainer(\n", " gpus=1,\n", " max_epochs=5,\n", " callbacks=[checkpoint]\n", ")\n", "\n", "# PyTorch Lightningのモデルのロード\n", "num_entity_type = 8\n", "num_labels = 2*num_entity_type+1\n", "model = BertForTokenClassification_pl(\n", " MODEL_NAME, num_labels=num_labels, lr=1e-5\n", ")\n", "\n", "# ファインチューニング\n", "trainer.fit(model, dataloader_train, dataloader_val)\n", "best_model_path = checkpoint.best_model_path\n", "\n", "# 性能評価\n", "model = BertForTokenClassification_pl.load_from_checkpoint(\n", " best_model_path\n", ") \n", "bert_tc = model.bert_tc.cuda()\n", "\n", "entities_list = [] # 正解の固有表現を追加していく\n", "entities_predicted_list = [] # 抽出された固有表現を追加していく\n", "for sample in tqdm(dataset_test):\n", " text = sample['text']\n", " encoding, spans = tokenizer.encode_plus_untagged(\n", " text, return_tensors='pt'\n", " )\n", " encoding = { k: v.cuda() for k, v in encoding.items() } \n", " \n", " with torch.no_grad():\n", " output = bert_tc(**encoding)\n", " scores = output.logits\n", " scores = scores[0].cpu().numpy().tolist()\n", " \n", " # 分類スコアを固有表現に変換する\n", " entities_predicted = tokenizer.convert_bert_output_to_entities(\n", " text, scores, spans\n", " )\n", "\n", " entities_list.append(sample['entities'])\n", " entities_predicted_list.append( entities_predicted )\n", "\n", "print(evaluate_model(entities_list, entities_predicted_list))" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: Chapter9.ipynb ================================================ { "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Chapter9.ipynb", "provenance": [ { "file_id": "https://github.com/stockmarkteam/bert-book/blob/master/Chapter9.ipynb", "timestamp": 1630577154754 } ], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "OQ4zTAe6s78f" }, "source": [ "# 9章\n", "- 以下で実行するコードには確率的な処理が含まれていることがあり、コードの出力結果と本書に記載されている出力例が異なることがあります。\n", "- 本章で用いる「[日本語Wikipedia入力誤りデータセット](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)」は現在バージョン2が公開されていますが、本章では[バージョン1](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88v1)を用いています。\n", "\n" ] }, { "cell_type": "code", "metadata": { "id": "YYAc8uDgGXNZ" }, "source": [ "# 9-1\n", "!mkdir chap9\n", "%cd ./chap9" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "13L45ApiGXNa" }, "source": [ "# 9-2\n", "!pip install transformers==4.18.0 fugashi==1.1.0 ipadic==1.0.0 pytorch-lightning==1.6.1" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "0ULhAjFPGXNa" }, "source": [ "# 9-3\n", "import random\n", "from tqdm import tqdm\n", "import unicodedata\n", "\n", "import pandas as pd\n", "import torch\n", "from torch.utils.data import DataLoader\n", "from transformers import BertJapaneseTokenizer, BertForMaskedLM\n", "import pytorch_lightning as pl\n", "\n", "# 日本語の事前学習済みモデル\n", "MODEL_NAME 
= 'tohoku-nlp/bert-base-japanese-whole-word-masking'" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "MC1yL-SSGXNb" }, "source": [ "# 9-4\n", "class SC_tokenizer(BertJapaneseTokenizer):\n", " \n", " def encode_plus_tagged(\n", " self, wrong_text, correct_text, max_length=128\n", " ):\n", " \"\"\"\n", " ファインチューニング時に使用。\n", " 誤変換を含む文章と正しい文章を入力とし、\n", " 符号化を行いBERTに入力できる形式にする。\n", " \"\"\"\n", " # 誤変換した文章をトークン化し、符号化\n", " encoding = self(\n", " wrong_text, \n", " max_length=max_length, \n", " padding='max_length', \n", " truncation=True\n", " )\n", " # 正しい文章をトークン化し、符号化\n", " encoding_correct = self(\n", " correct_text,\n", " max_length=max_length,\n", " padding='max_length',\n", " truncation=True\n", " ) \n", " # 正しい文章の符号をラベルとする\n", " encoding['labels'] = encoding_correct['input_ids'] \n", "\n", " return encoding\n", "\n", " def encode_plus_untagged(\n", " self, text, max_length=None, return_tensors=None\n", " ):\n", " \"\"\"\n", " 文章を符号化し、それぞれのトークンの文章中の位置も特定しておく。\n", " \"\"\"\n", " # 文章のトークン化を行い、\n", " # それぞれのトークンと文章中の文字列を対応づける。\n", " tokens = [] # トークンを追加していく。\n", " tokens_original = [] # トークンに対応する文章中の文字列を追加していく。\n", " words = self.word_tokenizer.tokenize(text) # MeCabで単語に分割\n", " for word in words:\n", " # 単語をサブワードに分割\n", " tokens_word = self.subword_tokenizer.tokenize(word) \n", " tokens.extend(tokens_word)\n", " if tokens_word[0] == '[UNK]': # 未知語への対応\n", " tokens_original.append(word)\n", " else:\n", " tokens_original.extend([\n", " token.replace('##','') for token in tokens_word\n", " ])\n", "\n", " # 各トークンの文章中での位置を調べる。(空白の位置を考慮する)\n", " position = 0\n", " spans = [] # トークンの位置を追加していく。\n", " for token in tokens_original:\n", " l = len(token)\n", " while 1:\n", " if token != text[position:position+l]:\n", " position += 1\n", " else:\n", " spans.append([position, position+l])\n", " position += l\n", " break\n", "\n", " # 符号化を行いBERTに入力できる形式にする。\n", " input_ids = self.convert_tokens_to_ids(tokens) \n", " encoding = self.prepare_for_model(\n", " input_ids, \n", " max_length=max_length, \n", " padding='max_length' if max_length else False, \n", " truncation=True if max_length else False\n", " )\n", " sequence_length = len(encoding['input_ids'])\n", " # 特殊トークン[CLS]に対するダミーのspanを追加。\n", " spans = [[-1, -1]] + spans[:sequence_length-2] \n", " # 特殊トークン[SEP]、[PAD]に対するダミーのspanを追加。\n", " spans = spans + [[-1, -1]] * ( sequence_length - len(spans) ) \n", "\n", " # 必要に応じてtorch.Tensorにする。\n", " if return_tensors == 'pt':\n", " encoding = { k: torch.tensor([v]) for k, v in encoding.items() }\n", "\n", " return encoding, spans\n", "\n", " def convert_bert_output_to_text(self, text, labels, spans):\n", " \"\"\"\n", " 推論時に使用。\n", " 文章と、各トークンのラベルの予測値、文章中での位置を入力とする。\n", " そこから、BERTによって予測された文章に変換。\n", " \"\"\"\n", " assert len(spans) == len(labels)\n", "\n", " # labels, spansから特殊トークンに対応する部分を取り除く\n", " labels = [label for label, span in zip(labels, spans) if span[0]!=-1]\n", " spans = [span for span in spans if span[0]!=-1]\n", "\n", " # BERTが予測した文章を作成\n", " predicted_text = ''\n", " position = 0\n", " for label, span in zip(labels, spans):\n", " start, end = span\n", " if position != start: # 空白の処理\n", " predicted_text += text[position:start]\n", " predicted_token = self.convert_ids_to_tokens(label)\n", " predicted_token = predicted_token.replace('##', '')\n", " predicted_token = unicodedata.normalize(\n", " 'NFKC', predicted_token\n", " ) \n", " predicted_text += predicted_token\n", " position = end\n", " \n", " return predicted_text" ], "execution_count": null, 
"outputs": [] }, { "cell_type": "code", "metadata": { "id": "HBTlINuoGXNc" }, "source": [ "# 9-5\n", "tokenizer = SC_tokenizer.from_pretrained(MODEL_NAME)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "e27NIdKYGXNc" }, "source": [ "# 9-6\n", "wrong_text = '優勝トロフィーを変換した'\n", "correct_text = '優勝トロフィーを返還した'\n", "encoding = tokenizer.encode_plus_tagged(\n", " wrong_text, correct_text, max_length=12\n", ")\n", "print(encoding)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "rLBWoRIcGXNd" }, "source": [ "# 9-7\n", "wrong_text = '優勝トロフィーを変換した'\n", "encoding, spans = tokenizer.encode_plus_untagged(\n", " wrong_text, return_tensors='pt'\n", ")\n", "print('# encoding')\n", "print(encoding)\n", "print('# spans')\n", "print(spans)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "O7gH4oB9GXNd" }, "source": [ "# 9-8\n", "predicted_labels = [2, 759, 18204, 11, 8274, 15, 10, 3]\n", "predicted_text = tokenizer.convert_bert_output_to_text(\n", " wrong_text, predicted_labels, spans\n", ")\n", "print(predicted_text)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "zCFGE4xbGXNe", "jupyter": { "outputs_hidden": false } }, "source": [ "# 9-9\n", "bert_mlm = BertForMaskedLM.from_pretrained(MODEL_NAME)\n", "bert_mlm = bert_mlm.cuda()" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "J2DR-ls8GXNf" }, "source": [ "# 9-10\n", "text = '優勝トロフィーを変換した。'\n", "\n", "# 符号化とともに各トークンの文章中の位置を計算しておく。\n", "encoding, spans = tokenizer.encode_plus_untagged(\n", " text, return_tensors='pt'\n", ")\n", "encoding = { k: v.cuda() for k, v in encoding.items() }\n", "\n", "# BERTに入力し、トークン毎にスコアの最も高いトークンのIDを予測値とする。\n", "with torch.no_grad():\n", " output = bert_mlm(**encoding)\n", " scores = output.logits\n", " labels_predicted = scores[0].argmax(-1).cpu().numpy().tolist()\n", " \n", "# ラベル列を文章に変換\n", "predict_text = tokenizer.convert_bert_output_to_text(\n", " text, labels_predicted, spans\n", ")" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Wc0mIVlnGXNf" }, "source": [ "# 9-11\n", "data = [\n", " {\n", " 'wrong_text': '優勝トロフィーを変換した。',\n", " 'correct_text': '優勝トロフィーを返還した。',\n", " },\n", " {\n", " 'wrong_text': '人と森は強制している。',\n", " 'correct_text': '人と森は共生している。',\n", " }\n", "]\n", "\n", "# 各データを符号化し、データローダへ入力できるようにする。\n", "max_length=32\n", "dataset_for_loader = []\n", "for sample in data:\n", " wrong_text = sample['wrong_text']\n", " correct_text = sample['correct_text']\n", " encoding = tokenizer.encode_plus_tagged(\n", " wrong_text, correct_text, max_length=max_length\n", " )\n", " encoding = { k: torch.tensor(v) for k, v in encoding.items() }\n", " dataset_for_loader.append(encoding)\n", "\n", "# データローダを作成\n", "dataloader = DataLoader(dataset_for_loader, batch_size=2)\n", "\n", "# ミニバッチをBERTへ入力し、損失を計算。\n", "for batch in dataloader:\n", " encoding = { k: v.cuda() for k, v in batch.items() }\n", " output = bert_mlm(**encoding)\n", " loss = output.loss # 損失" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "V0KBJLE4GXNg" }, "source": [ "# 9-12\n", "!curl -L \"https://nlp.ist.i.kyoto-u.ac.jp/DLcounter/lime.cgi?down=https://nlp.ist.i.kyoto-u.ac.jp/nl-resource/JWTD/jwtd.tar.gz&name=JWTD.tar.gz\" -o JWTD.tar.gz\n", "!tar zxvf JWTD.tar.gz" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Z-lflu5sGXNg", "jupyter": { 
"outputs_hidden": false } }, "source": [ "# 9-13\n", "def create_dataset(data_df):\n", "\n", " tokenizer = SC_tokenizer.from_pretrained(MODEL_NAME)\n", "\n", " def check_token_count(row):\n", " \"\"\"\n", " 誤変換の文章と正しい文章でトークンに対応がつくかどうかを判定。\n", " (条件は上の文章を参照)\n", " \"\"\"\n", " wrong_text_tokens = tokenizer.tokenize(row['wrong_text'])\n", " correct_text_tokens = tokenizer.tokenize(row['correct_text'])\n", " if len(wrong_text_tokens) != len(correct_text_tokens):\n", " return False\n", " \n", " diff_count = 0\n", " threthold_count = 2\n", " for wrong_text_token, correct_text_token \\\n", " in zip(wrong_text_tokens, correct_text_tokens):\n", "\n", " if wrong_text_token != correct_text_token:\n", " diff_count += 1\n", " if diff_count > threthold_count:\n", " return False\n", " return True\n", "\n", " def normalize(text):\n", " \"\"\"\n", " 文字列の正規化\n", " \"\"\"\n", " text = text.strip()\n", " text = unicodedata.normalize('NFKC', text)\n", " return text\n", "\n", " # 漢字の誤変換のデータのみを抜き出す。\n", " category_type = 'kanji-conversion'\n", " data_df.query('category == @category_type', inplace=True) \n", " data_df.rename(\n", " columns={'pre_text': 'wrong_text', 'post_text': 'correct_text'}, \n", " inplace=True\n", " )\n", " \n", " # 誤変換と正しい文章をそれぞれ正規化し、\n", " # それらの間でトークン列に対応がつくもののみを抜き出す。\n", " data_df['wrong_text'] = data_df['wrong_text'].map(normalize) \n", " data_df['correct_text'] = data_df['correct_text'].map(normalize)\n", " kanji_conversion_num = len(data_df)\n", " data_df = data_df[data_df.apply(check_token_count, axis=1)]\n", " same_tokens_count_num = len(data_df)\n", " print(\n", " f'- 漢字誤変換の総数:{kanji_conversion_num}',\n", " f'- トークンの対応関係のつく文章の総数: {same_tokens_count_num}',\n", " f' (全体の{same_tokens_count_num/kanji_conversion_num*100:.0f}%)',\n", " sep = '\\n'\n", " )\n", " return data_df[['wrong_text', 'correct_text']].to_dict(orient='records')\n", "\n", "# データのロード\n", "train_df = pd.read_json(\n", " './jwtd/train.jsonl', orient='records', lines=True\n", ")\n", "test_df = pd.read_json(\n", " './jwtd/test.jsonl', orient='records', lines=True\n", ")\n", "\n", "# 学習用と検証用データ\n", "print('学習と検証用のデータセット:')\n", "dataset = create_dataset(train_df)\n", "random.shuffle(dataset)\n", "n = len(dataset)\n", "n_train = int(n*0.8)\n", "dataset_train = dataset[:n_train]\n", "dataset_val = dataset[n_train:]\n", "\n", "# テストデータ\n", "print('テスト用のデータセット:')\n", "dataset_test = create_dataset(test_df)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "hlr9SXDNGXNh" }, "source": [ "# 9-14\n", "def create_dataset_for_loader(tokenizer, dataset, max_length):\n", " \"\"\"\n", " データセットをデータローダに入力可能な形式にする。\n", " \"\"\"\n", " dataset_for_loader = []\n", " for sample in tqdm(dataset):\n", " wrong_text = sample['wrong_text']\n", " correct_text = sample['correct_text']\n", " encoding = tokenizer.encode_plus_tagged(\n", " wrong_text, correct_text, max_length=max_length\n", " )\n", " encoding = { k: torch.tensor(v) for k, v in encoding.items() }\n", " dataset_for_loader.append(encoding)\n", " return dataset_for_loader\n", "\n", "tokenizer = SC_tokenizer.from_pretrained(MODEL_NAME)\n", "\n", "# データセットの作成\n", "max_length = 32\n", "dataset_train_for_loader = create_dataset_for_loader(\n", " tokenizer, dataset_train, max_length\n", ")\n", "dataset_val_for_loader = create_dataset_for_loader(\n", " tokenizer, dataset_val, max_length\n", ")\n", "\n", "# データローダの作成\n", "dataloader_train = DataLoader(\n", " dataset_train_for_loader, batch_size=32, shuffle=True\n", ")\n", "dataloader_val = 
DataLoader(dataset_val_for_loader, batch_size=256)" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "c-j7R6z0GXNh" }, "source": [ "# 9-15\n", "class BertForMaskedLM_pl(pl.LightningModule):\n", " \n", " def __init__(self, model_name, lr):\n", " super().__init__()\n", " self.save_hyperparameters()\n", " self.bert_mlm = BertForMaskedLM.from_pretrained(model_name)\n", " \n", " def training_step(self, batch, batch_idx):\n", " output = self.bert_mlm(**batch)\n", " loss = output.loss\n", " self.log('train_loss', loss)\n", " return loss\n", " \n", " def validation_step(self, batch, batch_idx):\n", " output = self.bert_mlm(**batch)\n", " val_loss = output.loss\n", " self.log('val_loss', val_loss)\n", " \n", " def configure_optimizers(self):\n", " return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)\n", "\n", "checkpoint = pl.callbacks.ModelCheckpoint(\n", " monitor='val_loss',\n", " mode='min',\n", " save_top_k=1,\n", " save_weights_only=True,\n", " dirpath='model/'\n", ")\n", "\n", "trainer = pl.Trainer(\n", " gpus=1,\n", " max_epochs=5,\n", " callbacks=[checkpoint]\n", ")\n", "\n", "# ファインチューニング\n", "model = BertForMaskedLM_pl(MODEL_NAME, lr=1e-5)\n", "trainer.fit(model, dataloader_train, dataloader_val)\n", "best_model_path = checkpoint.best_model_path" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "w2eNTz6OsvVP" }, "source": [ "# 9-16\n", "def predict(text, tokenizer, bert_mlm):\n", " \"\"\"\n", " 文章を入力として受け、BERTが予測した文章を出力\n", " \"\"\"\n", " # 符号化\n", " encoding, spans = tokenizer.encode_plus_untagged(\n", " text, return_tensors='pt'\n", " ) \n", " encoding = { k: v.cuda() for k, v in encoding.items() }\n", "\n", " # ラベルの予測値の計算\n", " with torch.no_grad():\n", " output = bert_mlm(**encoding)\n", " scores = output.logits\n", " labels_predicted = scores[0].argmax(-1).cpu().numpy().tolist()\n", "\n", " # ラベル列を文章に変換\n", " predict_text = tokenizer.convert_bert_output_to_text(\n", " text, labels_predicted, spans\n", " )\n", "\n", " return predict_text\n", "\n", "# いくつかの例に対してBERTによる文章校正を行ってみる。\n", "text_list = [\n", " 'ユーザーの試行に合わせた楽曲を配信する。',\n", " 'メールに明日の会議の史料を添付した。',\n", " '乳酸菌で牛乳を発行するとヨーグルトができる。',\n", " '突然、子供が帰省を発した。'\n", "]\n", "\n", "# トークナイザ、ファインチューニング済みのモデルのロード\n", "tokenizer = SC_tokenizer.from_pretrained(MODEL_NAME)\n", "model = BertForMaskedLM_pl.load_from_checkpoint(best_model_path)\n", "bert_mlm = model.bert_mlm.cuda()\n", "\n", "for text in text_list:\n", " predict_text = predict(text, tokenizer, bert_mlm) # BERTによる予測\n", " print('---')\n", " print(f'入力:{text}')\n", " print(f'出力:{predict_text}')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "0zwqdG4SGXNi", "jupyter": { "outputs_hidden": false } }, "source": [ "# 9-17\n", "# BERTで予測を行い、正解数を数える。\n", "correct_num = 0 \n", "for sample in tqdm(dataset_test):\n", " wrong_text = sample['wrong_text']\n", " correct_text = sample['correct_text']\n", " predict_text = predict(wrong_text, tokenizer, bert_mlm) # BERT予測\n", " \n", " if correct_text == predict_text: # 正解の場合\n", " correct_num += 1\n", "\n", "print(f'Accuracy: {correct_num/len(dataset_test):.2f}')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ZUj8v6dPGXNj" }, "source": [ "# 9-18\n", "correct_position_num = 0 # 正しく誤変換の漢字を特定できたデータの数\n", "for sample in tqdm(dataset_test):\n", " wrong_text = sample['wrong_text']\n", " correct_text = sample['correct_text']\n", " \n", " # 符号化\n", " encoding = tokenizer(wrong_text)\n", " 
wrong_input_ids = encoding['input_ids'] # 誤変換の文の符号列\n", " encoding = {k: torch.tensor([v]).cuda() for k,v in encoding.items()}\n", " correct_encoding = tokenizer(correct_text)\n", " correct_input_ids = correct_encoding['input_ids'] # 正しい文の符号列\n", " \n", " # 文章を予測\n", " with torch.no_grad():\n", " output = bert_mlm(**encoding)\n", " scores = output.logits\n", " # 予測された文章のトークンのID\n", " predict_input_ids = scores[0].argmax(-1).cpu().numpy().tolist() \n", "\n", " # 特殊トークンを取り除く\n", " wrong_input_ids = wrong_input_ids[1:-1]\n", " correct_input_ids = correct_input_ids[1:-1]\n", " predict_input_ids = predict_input_ids[1:-1]\n", " \n", " # 誤変換した漢字を特定できているかを判定\n", " # 符号列を比較する。\n", " detect_flag = True\n", " for wrong_token, correct_token, predict_token \\\n", " in zip(wrong_input_ids, correct_input_ids, predict_input_ids):\n", "\n", " if wrong_token == correct_token: # 正しいトークン\n", " # 正しいトークンなのに誤って別のトークンに変換している場合\n", " if wrong_token != predict_token: \n", " detect_flag = False\n", " break\n", " else: # 誤変換のトークン\n", " # 誤変換のトークンなのに、そのままにしている場合\n", " if wrong_token == predict_token: \n", " detect_flag = False\n", " break\n", "\n", " if detect_flag: # 誤変換の漢字の位置を正しく特定できた場合\n", " correct_position_num += 1\n", " \n", "print(f'Accuracy: {correct_position_num/len(dataset_test):.2f}')" ], "execution_count": null, "outputs": [] } ] } ================================================ FILE: LICENSE ================================================ MIT License Copyright (c) 2021 Stockmark Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
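（8章の補足）コードブロック8-21のViterbi法は、「I-タグは、直前が対応するB-タグか同じI-タグの場合にしか現れない」というBIO法の制約をペナルティとして課しつつ、スコアの合計が最大となるラベル列を選びます。以下は、この動作を手元で確かめるための小さなスケッチです。関数は8-21のViterbiをスタンドアロンに書き直したもので、スコアの値（4トークン×3ラベル、固有表現タイプ数1という設定）はここで仮に置いたものです。

```Python
import numpy as np

def viterbi(scores_bert, num_entity_type, penalty=10000):
    # ラベル数 m（O + B-タグ + I-タグ）
    m = 2*num_entity_type + 1
    # I-タグ(j >= 1+num_entity_type)への遷移は、直前が対応するB-タグ
    # (i+num_entity_type == j)か同じI-タグ(i == j)でなければペナルティを課す
    penalty_matrix = np.zeros([m, m])
    for i in range(m):
        for j in range(1+num_entity_type, m):
            if not ((i == j) or (i+num_entity_type == j)):
                penalty_matrix[i, j] = penalty
    path = [[i] for i in range(m)]
    # 先頭トークンは「直前がO」とみなし、いきなりI-タグで始まる列を抑制する
    scores_path = np.array(scores_bert[0]) - penalty_matrix[0, :]
    for scores in scores_bert[1:]:
        score_matrix = scores_path.reshape(-1, 1) \
            + np.array(scores).reshape(1, -1) \
            - penalty_matrix
        scores_path = score_matrix.max(axis=0)
        path = [path[idx] + [i]
                for i, idx in enumerate(score_matrix.argmax(axis=0))]
    return path[int(np.argmax(scores_path))]

# 固有表現タイプ数1のとき、ラベルは 0=O, 1=B-, 2=I- の3種類（仮のスコア）。
scores = [
    [5.0, 0.0, 4.0],  # トークン1: Oが最大
    [0.0, 1.0, 6.0],  # トークン2: I-が最大だが、直前がOなのでBIO法では不正
    [0.0, 0.0, 5.0],  # トークン3: I-が最大
    [5.0, 0.0, 0.0],  # トークン4: Oが最大
]
print([int(np.argmax(s)) for s in scores])  # => [0, 2, 2, 0]（不正なラベル列）
print(viterbi(scores, num_entity_type=1))   # => [0, 1, 2, 0]（整合的なラベル列）
```

単純にトークン毎のargmaxを取ると直前がOのトークンにI-タグが付いてしまいますが、Viterbi法ではB-から始まる整合的なラベル列が選ばれることが確認できます。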
================================================ FILE: README.md ================================================ # 「BERTによる自然言語処理入門: Transformersを使った実践プログラミング」 こちらは、[「BERTによる自然言語処理入門: Transformersを使った実践プログラミング」、(編) ストックマーク株式会社、(著) 近江 崇宏、金田 健太郎、森長 誠 、江間見 亜利、(オーム社)](https://www.amazon.co.jp/dp/427422726X)のサポートページです。 ## 誤植 誤植は[こちら](https://github.com/stockmarkteam/bert-book/blob/master/CORRECTION.md)のページにまとめられています。誤植を見つけられた方は、issueを立ててご連絡いただければ幸いです。 ## エラー - 本書では東北大学が開発した事前学習済みのBERTを利用していますが、Hugging Face Hubでのモデルの保存場所が変更されたことに伴い、一部のコードを変更する必要があります。現在、GitHubで公開されているノートブックファイルではこの対応が既に反映されています。詳しくは[こちら](https://github.com/stockmarkteam/bert-book/wiki/%E6%9D%B1%E5%8C%97%E5%A4%A7BERT%E3%81%AE%E5%90%8D%E5%89%8D%E3%81%AE%E5%A4%89%E6%9B%B4)をご覧ください。 - コードブロック#6-3, #7-3, #8-3, #9-3で生じるエラーに関しては[こちら](https://github.com/stockmarkteam/bert-book/wiki/pytorch_lightning%E3%81%AEimport%E6%99%82%E3%81%AE%E3%82%A8%E3%83%A9%E3%83%BC%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6)をご確認ください。現在、GitHubで公開されているノートブックファイルではこの対応が既に反映されています。 ## コード 本書は、Googleの無料の計算環境である[Colaboratory](https://colab.research.google.com/)を利用して、コードを実行することを想定していますが、AWSの無料の計算環境である[Amazon SageMaker Studio Lab](https://studiolab.sagemaker.aws/)を利用することもできます(事前の[メールアドレスによる登録](https://studiolab.sagemaker.aws/requestAccount)が必要です)。Colaboratoryの基本的な使い方は本書の付録を、SageMaker Studio Labの使い方は[こちら](./README_studio-lab.md)をご覧ください。 以下のボタンから、それぞれの計算環境で各章のNotebookを開くことができます。 |Chapter| Google Colaboratory | Amazon SageMaker Studio Lab | |:---:|:---:|:---:| |4| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter4.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter4.ipynb) | |5| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter5.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter5.ipynb) | |6| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter6.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter6.ipynb) | |7| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter7.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter7.ipynb) | |8| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter8.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter8.ipynb) | |9| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter9.ipynb) | [![Open In SageMaker Studio 
Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter9.ipynb) | |10| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stockmarkteam/bert-book/blob/master/Chapter10.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter10.ipynb) | なおSageMaker Studio Labを用いる場合には、いずれかのNotebookの冒頭で ```Python !pip install torch==1.9 matplotlib pandas ``` によりライブラリの追加インストールを行なってください。 ================================================ FILE: README_studio-lab.md ================================================ # 「BERTによる自然言語処理入門: Transformersを使った実践プログラミング」 [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter4.ipynb) ## Amazon SageMaker Studio Lab の使い方 [Amazon SageMaker Studio Lab](https://studiolab.sagemaker.aws/)は無料の機械学習環境です。事前の[メールアドレスによる登録](https://studiolab.sagemaker.aws/requestAccount)を行うと、JupyterLabの実行環境が利用可能です。 ![SageMaker Studio のランディングページ](https://docs.aws.amazon.com/sagemaker/latest/dg/images/studio-lab-landing.png) ### Amazon SageMaker Studio Labを開始する Studio Lab を利用開始するためには、アカウントのリクエストと作成が必要です。アカウントのリクエストはこのように行います。 1. [Studio Lab のランディングページ](https://studiolab.sagemaker.aws/) を開きます。 1. "Request free account" を選択します。 1. メールアドレスなど必要な情報を記入します。 1. "Submit request" ボタンを押します。 1. メールアドレス確認のためのEメールを受け取ったら、案内に従って設定を完了してください。 以下で Studio Lab のアカウント作成を行う前に、アカウントリクエストが承認される必要があります。リクエストは 5 営業日以内に審査されます。詳細は[ドキュメント (英語)](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-lab-onboard.html) をご覧ください。 Studio Lab アカウントの作成は、以下の手順で行います。 1. リクエスト承認メール内の "Create account" を押しページを開きます。 1. Eメールアドレス、パスワード、ユーザー名を入力します。 1. "Create account" を選択します。 Studio Lab へのサインインは、 1. [Studio Lab のランディングページ](https://studiolab.sagemaker.aws/) を開き、 1. 右上の "Sign in" ボタンを押します。 1. Eメールアドレス、パスワード、ユーザー名を入力します。 1. "Sign in" を選択しプロジェクトのページを開きます。 ### GPUを使用する Studio Lab では4時間の compute time のあいだ GPU インスタンスを連続して利用することができます。なお、15 GB のストレージが割り当てられるので、ダウンロードしたデータや実行したコード、保存したファイルなどは、後のサインイン時に引き続き利用することができます。コンピュートインスタンスの詳細は[ドキュメント (英語)](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-lab-overview.html#studio-lab-overview-project-compute) をご覧下さい。 1. Studio Lab にサインインしたら、このようなプロジェクトページが表示されます。 1. "My Project" 以下の "Select compute type" から `GPU` を選択します。 1. "Start runtime" を押します。 1. 
ランタイムが開始したら "Open project" をクリックし JupyterLab 環境を開きます。 ![Studio Lab Project](https://docs.aws.amazon.com/sagemaker/latest/dg/images/studio-lab-overview.png) ### コードを実行する Studio Lab では JupyterLab のインターフェイスを拡張した UI が提供されています。JupyterLab の UI になじみのない方は [The JupyterLab Interface](https://jupyterlab.readthedocs.io/en/latest/user/interface.html) のページをご覧ください。 ![SageMaker Studio UI](https://docs.aws.amazon.com/sagemaker/latest/dg/images/studio-lab-ui.png) [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/stockmarkteam/bert-book/blob/master/Chapter4.ipynb) こちらのボタンから Studio Lab で本書の Notebook を開くことができます。"Copy to project" を押し、JupyterLab に遷移した後に "Clone Entire Repo" を選択すると、この GitHub リポジトリ全体をクローンすることができます。 なお、いずれかのNotebookの冒頭で ```Python !pip install torch==1.9 matplotlib pandas ``` によりライブラリの追加インストールを行なってください。ここで、PyTorch は Studio Lab でサポートされている 1.9 を利用しました。Studio Lab の環境とカスタマイズについてはこちらの[ドキュメント (英語)](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-lab-use-manage.html) をご覧下さい。 ### 外部ストレージ (Amazon S3) や Amazon SageMaker Studio の利用 Studio Lab の project に割り当てられた 15 GB のストレージを超えて利用したい場合は、[Amazon S3 に接続](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-lab-use-external.html#studio-lab-use-external-s3)するか、[Amazon SageMaker Studio への移行](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-lab-use-migrate.html) を検討してください。 ## 既知の問題 コードブロック#6-18でTensorBoardが表示されません。こちらは修正予定です。
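## 参考: 実行環境の確認

ノートブックを実行する前に、GPUランタイムが有効になっているかを確認しておくと、エラーの切り分けが容易になります。以下は確認用スニペットの一例です（出力はお使いの環境によって異なります）。

```Python
# PyTorchのバージョンとGPUの利用可否を確認する（一例）
import torch

print('PyTorch:', torch.__version__)          # Studio Labでは1.9を想定
print('CUDA:', torch.cuda.is_available())     # FalseならGPUランタイムを選び直す
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
```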