[
  {
    "path": ".gitignore",
    "content": "*__pycache__\n*.pyc\n*.log\n*.csv\n*.swp\ntags\npunkt.zip\nwordnet.zip\n.idea/\naste/data/\nmodels/\nmodel_outputs/\noutputs/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Chia Yew Ken\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "## Span-ASTE: Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction\n\n**\\*\\*\\*\\*\\* New August 30th, 2022: Featured on YouTube video by Xiaoqing Wan [![YT](https://img.shields.io/youtube/views/rRTvsuGRnJ0?style=social)](https://www.youtube.com/watch?v=rRTvsuGRnJ0) \\*\\*\\*\\*\\***\n\n**\\*\\*\\*\\*\\* New March 31th, 2022: Scikit-Style API for Easy Usage \\*\\*\\*\\*\\***\n\n[![PWC](https://img.shields.io/badge/PapersWithCode-Benchmark-%232cafb1)](https://paperswithcode.com/sota/aspect-sentiment-triplet-extraction-on-aste)\n[![Colab](https://img.shields.io/badge/Colab-Code%20Demo-%23fe9f00)](https://colab.research.google.com/drive/1F9zW_nVkwfwIVXTOA_juFDrlPz5TLjpK?usp=sharing)\n[![Jupyter](https://img.shields.io/badge/Jupyter-Notebook%20Demo-important)](https://github.com/chiayewken/Span-ASTE/blob/main/demo.ipynb)\n\nThis repository implements our ACL 2021 research paper [Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction](https://aclanthology.org/2021.acl-long.367/). \nOur goal is to extract sentiment triplets of the format `(aspect target, opinion expression and sentiment polarity)`, as shown in the diagram below. \n\n<img src=\"https://github.com/chiayewken/Span-ASTE/blob/13a851b166998210a7cd2def5fa4aff20819b54d/assets/task_image.png\" width=\"450\" height=\"150\" alt=\"\">\n\n### Installation\n\n- Tested on Python 3.7 (recommended to use a virtual environment such as [Conda](https://docs.conda.io/en/latest/miniconda.html))\n- Install data and requirements: `bash setup.sh`\n- Training config: [training_config/config.jsonnet](training_config/config.jsonnet)\n- Modeling code: [span_model/models/span_model.py](span_model/models/span_model.py)\n\n### Data Format\n\nOur span-based model uses data files where the format for each line contains one input sentence and a list of output triplets.\nThe following data format is demonstrated in the [sample data file](sample_data.txt):\n\n> sentence#### #### ####[triplet_0, ..., triplet_n]\n\nEach triplet is a tuple that consists of `(span_a, span_b, label)`. Each span is a list. If the span covers a single word, the list will contain only the word index. If the span covers multiple words, the list will contain the index of the first word and last word. 
For example, the line below encodes two triplets:\n\n> It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]\n\nFor prediction, the data can contain the input sentence only, with an empty list for triplets:\n\n> sentence#### #### ####[]\n\n### Predict Using Model Weights\n\n- First, download and extract the [pre-trained weights](https://github.com/chiayewken/Span-ASTE/releases) to `pretrained_dir`.\n- The input data file `path_in` and output data file `path_out` have the same [data format](#data-format).\n\n```\nfrom wrapper import SpanModel\n\nmodel = SpanModel(save_dir=pretrained_dir, random_seed=0)\nmodel.predict(path_in, path_out)\n```\n\n### Model Training\n\n- Configure the model with a save directory and random seed.\n- Start training with the training and validation data, which use the same [data format](#data-format).\n\n```\nmodel = SpanModel(save_dir=save_dir, random_seed=random_seed)\nmodel.fit(path_train, path_dev)\n```\n\n- To train with multiple random seeds from the command-line, you can use the following command:\n- Replace `14lap` with another dataset name (e.g. `14res`, `15res`, `16res`)\n\n```\npython aste/wrapper.py run_train_many \\\n--save_dir_template \"outputs/14lap/seed_{}\" \\\n--random_seeds [0,1,2,3,4] \\\n--path_train data/triplet_data/14lap/train.txt \\\n--path_dev data/triplet_data/14lap/dev.txt\n```\n\n### Model Evaluation\n\n- Use the trained model to predict triplets for the test sentences, writing the output to `path_pred`.\n- The model includes a scoring function which reports the precision, recall, and F1 metrics for triplet extraction.\n\n```\nmodel.predict(path_in=path_test, path_out=path_pred)\nresults = model.score(path_pred, path_test)\n```\n\n- To evaluate with multiple random seeds from the command-line, you can use the following command:\n- Replace `14lap` with another dataset name (e.g. `14res`, `15res`, `16res`)\n\n```\npython aste/wrapper.py run_eval_many \\\n--save_dir_template \"outputs/14lap/seed_{}\" \\\n--random_seeds [0,1,2,3,4] \\\n--path_test data/triplet_data/14lap/test.txt\n```\n\n- To score your own prediction file from the command-line, you can use the following command:\n- Replace `14lap` with another dataset name (e.g. `14res`, `15res`, `16res`)\n\n```\npython aste/wrapper.py run_score --path_pred your_file --path_gold data/triplet_data/14lap/test.txt\n```\n\n### Research Citation\nIf the code is useful for your research project, we would appreciate it if you cite the following paper:\n```\n@inproceedings{xu-etal-2021-learning,\n    title = \"Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction\",\n    author = \"Xu, Lu  and\n      Chia, Yew Ken  and\n      Bing, Lidong\",\n    booktitle = \"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)\",\n    month = aug,\n    year = \"2021\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2021.acl-long.367\",\n    doi = \"10.18653/v1/2021.acl-long.367\",\n    pages = \"4755--4766\",\n    abstract = \"Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word. 
Thereby, they cannot perform well on targets and opinions which contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation. Thus, it can make predictions with the semantics of whole spans, ensuring better sentiment consistency. To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy by incorporating supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks. This strategy not only improves computational efficiency but also distinguishes the opinion and target spans more properly. Our framework simultaneously achieves strong performance for the ASTE as well as ATE and OTE tasks. In particular, our analysis shows that our span-level approach achieves more significant improvements over the baselines on triplets with multi-word targets or opinions.\",\n}\n```\n"
  },
  {
    "path": "aste/__init__.py",
    "content": ""
  },
  {
    "path": "aste/analysis.py",
    "content": "import json\nimport random\nimport sys\nfrom pathlib import Path\nfrom typing import List\n\nimport _jsonnet\nimport numpy as np\nimport torch\nfrom allennlp.commands.train import train_model\nfrom allennlp.common import Params\nfrom allennlp.data import DatasetReader, Vocabulary, DataLoader\nfrom allennlp.models import Model\nfrom allennlp.training import Trainer\nfrom fire import Fire\nfrom tqdm import tqdm\n\nfrom data_utils import Data, Sentence\nfrom wrapper import SpanModel, safe_divide\n\n\ndef set_seed(seed: int):\n    random.seed(seed)\n    np.random.seed(seed)\n    torch.manual_seed(seed)\n\n\ndef test_load(\n    path: str = \"training_config/config.jsonnet\",\n    path_train: str = \"outputs/14lap/seed_0/temp_data/train.json\",\n    path_dev: str = \"outputs/14lap/seed_0/temp_data/validation.json\",\n    save_dir=\"outputs/temp\",\n):\n    # Register custom modules\n    sys.path.append(\".\")\n    from span_model.data.dataset_readers.span_model import SpanModelReader\n\n    assert SpanModelReader is not None\n    params = Params.from_file(\n        path,\n        params_overrides=dict(\n            train_data_path=path_train,\n            validation_data_path=path_dev,\n            test_data_path=path_dev,\n        ),\n    )\n\n    train_model(params, serialization_dir=save_dir, force=True)\n    breakpoint()\n\n    config = json.loads(_jsonnet.evaluate_file(path))\n    set_seed(config[\"random_seed\"])\n    reader = DatasetReader.from_params(Params(config[\"dataset_reader\"]))\n    data_train = reader.read(path_train)\n    data_dev = reader.read(path_dev)\n    vocab = Vocabulary.from_instances(data_train + data_dev)\n    model = Model.from_params(Params(config[\"model\"]), vocab=vocab)\n\n    data_train.index_with(vocab)\n    data_dev.index_with(vocab)\n    trainer = Trainer.from_params(\n        Params(config[\"trainer\"]),\n        model=model,\n        data_loader=DataLoader.from_params(\n            Params(config[\"data_loader\"]), dataset=data_train\n        ),\n        validation_data_loader=DataLoader.from_params(\n            Params(config[\"data_loader\"]), dataset=data_dev\n        ),\n        serialization_dir=save_dir,\n    )\n    breakpoint()\n    trainer.train()\n    breakpoint()\n\n\nclass Scorer:\n    name: str = \"\"\n\n    def run(self, path_pred: str, path_gold: str) -> dict:\n        pred = Data.load_from_full_path(path_pred)\n        gold = Data.load_from_full_path(path_gold)\n        assert pred.sentences is not None\n        assert gold.sentences is not None\n        assert len(pred.sentences) == len(gold.sentences)\n        num_pred = 0\n        num_gold = 0\n        num_correct = 0\n\n        for i in range(len(gold.sentences)):\n            tuples_pred = self.make_tuples(pred.sentences[i])\n            tuples_gold = self.make_tuples(gold.sentences[i])\n            num_pred += len(tuples_pred)\n            num_gold += len(tuples_gold)\n            for p in tuples_pred:\n                for g in tuples_gold:\n                    if p == g:\n                        num_correct += 1\n\n        precision = safe_divide(num_correct, num_pred)\n        recall = safe_divide(num_correct, num_gold)\n\n        info = dict(\n            precision=precision,\n            recall=recall,\n            score=safe_divide(2 * precision * recall, precision + recall),\n        )\n        return info\n\n    def make_tuples(self, sent: Sentence) -> List[tuple]:\n        raise NotImplementedError\n\n\nclass SentimentTripletScorer(Scorer):\n    name: str = 
\"sentiment triplet\"\n\n    def make_tuples(self, sent: Sentence) -> List[tuple]:\n        return [(t.o_start, t.o_end, t.t_start, t.t_end, t.label) for t in sent.triples]\n\n\nclass TripletScorer(Scorer):\n    name: str = \"triplet\"\n\n    def make_tuples(self, sent: Sentence) -> List[tuple]:\n        return [(t.o_start, t.o_end, t.t_start, t.t_end) for t in sent.triples]\n\n\nclass OpinionScorer(Scorer):\n    name: str = \"opinion\"\n\n    def make_tuples(self, sent: Sentence) -> List[tuple]:\n        return sorted(set((t.o_start, t.o_end) for t in sent.triples))\n\n\nclass TargetScorer(Scorer):\n    name: str = \"target\"\n\n    def make_tuples(self, sent: Sentence) -> List[tuple]:\n        return sorted(set((t.t_start, t.t_end) for t in sent.triples))\n\n\nclass OrigScorer(Scorer):\n    name: str = \"orig\"\n\n    def make_tuples(self, sent: Sentence) -> List[tuple]:\n        raise NotImplementedError\n\n    def run(self, path_pred: str, path_gold: str) -> dict:\n        model = SpanModel(save_dir=\"\", random_seed=0)\n        return model.score(path_pred, path_gold)\n\n\ndef run_eval_domains(\n    save_dir_template: str,\n    path_test_template: str,\n    random_seeds: List[int] = (0, 1, 2, 3, 4),\n    domain_names: List[str] = (\"hotel\", \"restaurant\", \"laptop\"),\n):\n    print(locals())\n    all_results = {}\n\n    for domain in domain_names:\n        results = []\n        for seed in tqdm(random_seeds):\n            model = SpanModel(save_dir=save_dir_template.format(seed), random_seed=0)\n            path_pred = str(Path(model.save_dir, f\"pred_{domain}.txt\"))\n            path_test = path_test_template.format(domain)\n            if not Path(path_pred).exists():\n                model.predict(path_test, path_pred)\n            results.append(model.score(path_pred, path_test))\n\n        precision = sum(r[\"precision\"] for r in results) / len(random_seeds)\n        recall = sum(r[\"recall\"] for r in results) / len(random_seeds)\n        score = safe_divide(2 * precision * recall, precision + recall)\n        all_results[domain] = dict(p=precision, r=recall, f=score)\n        for k, v in all_results.items():\n            print(k, v)\n\n\ndef test_scorer(path_pred: str, path_gold: str):\n    for scorer in [\n        OpinionScorer(),\n        TargetScorer(),\n        TripletScorer(),\n        SentimentTripletScorer(),\n        OrigScorer(),\n    ]:\n        print(scorer.name)\n        print(scorer.run(path_pred, path_gold))\n\n\nif __name__ == \"__main__\":\n    Fire()\n"
  },
  {
    "path": "aste/data_utils.py",
    "content": "import ast\nimport copy\nimport json\nimport os\nfrom collections import Counter\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Set, Tuple\n\nimport numpy as np\nimport pandas as pd\nfrom fire import Fire\nfrom pydantic import BaseModel\nfrom sklearn.metrics import classification_report\n\nfrom utils import count_joins, get_simple_stats\n\nRawTriple = Tuple[List[int], int, int, int, int]\nSpan = Tuple[int, int]\n\n\nclass SplitEnum(str, Enum):\n    train = \"train\"\n    dev = \"dev\"\n    test = \"test\"\n\n\nclass LabelEnum(str, Enum):\n    positive = \"POS\"\n    negative = \"NEG\"\n    neutral = \"NEU\"\n    opinion = \"OPINION\"\n    target = \"TARGET\"\n\n    @classmethod\n    def as_list(cls):\n        return [cls.neutral, cls.positive, cls.negative]\n\n    @classmethod\n    def i_to_label(cls, i: int):\n        return cls.as_list()[i]\n\n    @classmethod\n    def label_to_i(cls, label) -> int:\n        return cls.as_list().index(label)\n\n\nclass SentimentTriple(BaseModel):\n    o_start: int\n    o_end: int\n    t_start: int\n    t_end: int\n    label: LabelEnum\n\n    @classmethod\n    def make_dummy(cls):\n        return cls(o_start=0, o_end=0, t_start=0, t_end=0, label=LabelEnum.neutral)\n\n    @property\n    def opinion(self) -> Tuple[int, int]:\n        return self.o_start, self.o_end\n\n    @property\n    def target(self) -> Tuple[int, int]:\n        return self.t_start, self.t_end\n\n    @classmethod\n    def from_raw_triple(cls, x: RawTriple):\n        (o_start, o_end), polarity, direction, gap_a, gap_b = x\n        # Refer: TagReader\n        if direction == 0:\n            t_end = o_start - gap_a\n            t_start = o_start - gap_b\n        elif direction == 1:\n            t_start = gap_a + o_start\n            t_end = gap_b + o_start\n        else:\n            raise ValueError\n\n        return cls(\n            o_start=o_start,\n            o_end=o_end,\n            t_start=t_start,\n            t_end=t_end,\n            label=LabelEnum.i_to_label(polarity),\n        )\n\n    def to_raw_triple(self) -> RawTriple:\n        polarity = LabelEnum.label_to_i(self.label)\n        if self.t_start < self.o_start:\n            direction = 0\n            gap_a, gap_b = self.o_start - self.t_end, self.o_start - self.t_start\n        else:\n            direction = 1\n            gap_a, gap_b = self.t_start - self.o_start, self.t_end - self.o_start\n        return [self.o_start, self.o_end], polarity, direction, gap_a, gap_b\n\n    def as_text(self, tokens: List[str]) -> str:\n        opinion = \" \".join(tokens[self.o_start : self.o_end + 1])\n        target = \" \".join(tokens[self.t_start : self.t_end + 1])\n        return f\"{opinion}-{target} ({self.label})\"\n\n\nclass TripleHeuristic(BaseModel):\n    @staticmethod\n    def run(\n        opinion_to_label: Dict[Span, LabelEnum],\n        target_to_label: Dict[Span, LabelEnum],\n    ) -> List[SentimentTriple]:\n        # For each target, pair with the closest opinion (and vice versa)\n        spans_o = list(opinion_to_label.keys())\n        spans_t = list(target_to_label.keys())\n        pos_o = np.expand_dims(np.array(spans_o).mean(axis=-1), axis=1)\n        pos_t = np.expand_dims(np.array(spans_t).mean(axis=-1), axis=0)\n        dists = np.absolute(pos_o - pos_t)\n        raw_triples: Set[Tuple[int, int, LabelEnum]] = set()\n\n        closest = np.argmin(dists, axis=1)\n        for i, span in enumerate(spans_o):\n            raw_triples.add((i, int(closest[i]), 
opinion_to_label[span]))\n        closest = np.argmin(dists, axis=0)\n        for i, span in enumerate(spans_t):\n            raw_triples.add((int(closest[i]), i, target_to_label[span]))\n\n        triples = []\n        for i, j, label in raw_triples:\n            os, oe = spans_o[i]\n            ts, te = spans_t[j]\n            triples.append(\n                SentimentTriple(o_start=os, o_end=oe, t_start=ts, t_end=te, label=label)\n            )\n        return triples\n\n\nclass TagMaker(BaseModel):\n    @staticmethod\n    def run(spans: List[Span], labels: List[LabelEnum], num_tokens: int) -> List[str]:\n        raise NotImplementedError\n\n\nclass BioesTagMaker(TagMaker):\n    @staticmethod\n    def run(spans: List[Span], labels: List[LabelEnum], num_tokens: int) -> List[str]:\n        tags = [\"O\"] * num_tokens\n        for (start, end), lab in zip(spans, labels):\n            assert end >= start\n            length = end - start + 1\n            if length == 1:\n                tags[start] = f\"S-{lab}\"\n            else:\n                tags[start] = f\"B-{lab}\"\n                tags[end] = f\"E-{lab}\"\n                for i in range(start + 1, end):\n                    tags[i] = f\"I-{lab}\"\n        return tags\n\n\nclass Sentence(BaseModel):\n    tokens: List[str]\n    pos: List[str]\n    weight: int\n    id: int\n    is_labeled: bool\n    triples: List[SentimentTriple]\n    spans: List[Tuple[int, int, LabelEnum]] = []\n\n    def extract_spans(self) -> List[Tuple[int, int, LabelEnum]]:\n        spans = []\n        for t in self.triples:\n            spans.append((t.o_start, t.o_end, LabelEnum.opinion))\n            spans.append((t.t_start, t.t_end, LabelEnum.target))\n        spans = sorted(set(spans))\n        return spans\n\n    def as_text(self) -> str:\n        tokens = list(self.tokens)\n        for t in self.triples:\n            tokens[t.o_start] = \"(\" + tokens[t.o_start]\n            tokens[t.o_end] = tokens[t.o_end] + \")\"\n            tokens[t.t_start] = \"[\" + tokens[t.t_start]\n            tokens[t.t_end] = tokens[t.t_end] + \"]\"\n        return \" \".join(tokens)\n\n    @classmethod\n    def from_line_format(cls, text: str):\n        front, back = text.split(\"#### #### ####\")\n        tokens = front.split(\" \")\n        triples = []\n\n        for a, b, label in ast.literal_eval(back):\n            t = SentimentTriple(\n                t_start=a[0],\n                t_end=a[0] if len(a) == 1 else a[-1],\n                o_start=b[0],\n                o_end=b[0] if len(b) == 1 else b[-1],\n                label=label,\n            )\n            triples.append(t)\n\n        return cls(\n            tokens=tokens, triples=triples, id=0, pos=[], weight=1, is_labeled=True\n        )\n\n    def to_line_format(self) -> str:\n        # ([1], [4], 'POS')\n        # ([1,2], [4], 'POS')\n        triplets = []\n        for t in self.triples:\n            parts = []\n            for start, end in [(t.t_start, t.t_end), (t.o_start, t.o_end)]:\n                if start == end:\n                    parts.append([start])\n                else:\n                    parts.append([start, end])\n            parts.append(f\"{t.label}\")\n            triplets.append(tuple(parts))\n\n        line = \" \".join(self.tokens) + \"#### #### ####\" + str(triplets) + \"\\n\"\n        assert self.from_line_format(line).tokens == self.tokens\n        assert self.from_line_format(line).triples == self.triples\n        return line\n\n\nclass Data(BaseModel):\n    root: Path\n    
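# Sentences load from \"{root}/{data_split}.txt\" unless full_path is set\n    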
data_split: SplitEnum\n    sentences: Optional[List[Sentence]]\n    full_path: str = \"\"\n    num_instances: int = -1\n    opinion_offset: int = 3  # Refer: jet_o.py\n    is_labeled: bool = False\n\n    def load(self):\n        if self.sentences is None:\n            path = self.root / f\"{self.data_split}.txt\"\n            if self.full_path:\n                path = self.full_path\n\n            with open(path) as f:\n                self.sentences = [Sentence.from_line_format(line) for line in f]\n\n    @classmethod\n    def load_from_full_path(cls, path: str):\n        data = cls(full_path=path, root=Path(path).parent, data_split=SplitEnum.train)\n        data.load()\n        return data\n\n    def save_to_path(self, path: str):\n        assert self.sentences is not None\n        Path(path).parent.mkdir(exist_ok=True, parents=True)\n        with open(path, \"w\") as f:\n            for s in self.sentences:\n                f.write(s.to_line_format())\n\n        data = Data.load_from_full_path(path)\n        assert data.sentences is not None\n        for i, s in enumerate(data.sentences):\n            assert s.tokens == self.sentences[i].tokens\n            assert s.triples == self.sentences[i].triples\n\n    def analyze_spans(self):\n        print(\"\\nHow often is target closer to opinion than any invalid target?\")\n        records = []\n        for s in self.sentences:\n            valid_pairs = set([(a.opinion, a.target) for a in s.triples])\n            for a in s.triples:\n                closest = None\n                for b in s.triples:\n                    dist_a = abs(np.mean(a.opinion) - np.mean(a.target))\n                    dist_b = abs(np.mean(a.opinion) - np.mean(b.target))\n                    if dist_b <= dist_a and (a.opinion, b.target) not in valid_pairs:\n                        closest = b.target\n\n                spans = [a.opinion, a.target]\n                if closest is not None:\n                    spans.append(closest)\n\n                tokens = list(s.tokens)\n                for start, end in spans:\n                    tokens[start] = \"[\" + tokens[start]\n                    tokens[end] = tokens[end] + \"]\"\n\n                start = min([s[0] for s in spans])\n                end = max([s[1] for s in spans])\n                tokens = tokens[start : end + 1]\n\n                records.append(dict(is_closest=closest is None, text=\" \".join(tokens)))\n        df = pd.DataFrame(records)\n        print(df[\"is_closest\"].mean())\n        print(df[~df[\"is_closest\"]].head())\n\n    def analyze_joined_spans(self):\n        print(\"\\nHow often are target/opinion spans joined?\")\n        join_targets = 0\n        join_opinions = 0\n        total_targets = 0\n        total_opinions = 0\n\n        for s in self.sentences:\n            targets = set([t.target for t in s.triples])\n            opinions = set([t.opinion for t in s.triples])\n            total_targets += len(targets)\n            total_opinions += len(opinions)\n            join_targets += count_joins(targets)\n            join_opinions += count_joins(opinions)\n\n        print(\n            dict(\n                targets=join_targets / total_targets,\n                opinions=join_opinions / total_opinions,\n            )\n        )\n\n    def analyze_tag_counts(self):\n        print(\"\\nHow many tokens are target/opinion/none?\")\n        record = []\n        for s in self.sentences:\n            tags = [str(None) for _ in s.tokens]\n            for t in s.triples:\n                for i 
in range(t.o_start, t.o_end + 1):\n                    tags[i] = \"Opinion\"\n                for i in range(t.t_start, t.t_end + 1):\n                    tags[i] = \"Target\"\n            record.extend(tags)\n        print({k: v / len(record) for k, v in Counter(record).items()})\n\n    def analyze_span_distance(self):\n        print(\"\\nHow far is the target/opinion from each other on average?\")\n        distances = []\n        for s in self.sentences:\n            for t in s.triples:\n                x_opinion = (t.o_start + t.o_end) / 2\n                x_target = (t.t_start + t.t_end) / 2\n                distances.append(abs(x_opinion - x_target))\n        print(get_simple_stats(distances))\n\n    def analyze_opinion_labels(self):\n        print(\"\\nFor opinion/target how often is it associated with only 1 polarity?\")\n        for key in [\"opinion\", \"target\"]:\n            records = []\n            for s in self.sentences:\n                term_to_labels: Dict[Tuple[int, int], List[LabelEnum]] = {}\n                for t in s.triples:\n                    term_to_labels.setdefault(getattr(t, key), []).append(t.label)\n                records.extend([len(set(labels)) for labels in term_to_labels.values()])\n            is_single_label = [n == 1 for n in records]\n            print(\n                dict(\n                    key=key,\n                    is_single_label=sum(is_single_label) / len(is_single_label),\n                    stats=get_simple_stats(records),\n                )\n            )\n\n    def analyze_tag_score(self):\n        print(\"\\nIf have all target and opinion terms (unpaired), what is max f_score?\")\n        pred = copy.deepcopy(self.sentences)\n        for s in pred:\n            target_to_label = {t.target: t.label for t in s.triples}\n            opinion_to_label = {t.opinion: t.label for t in s.triples}\n            s.triples = TripleHeuristic().run(opinion_to_label, target_to_label)\n\n        analyzer = ResultAnalyzer()\n        analyzer.run(pred, gold=self.sentences, print_limit=0)\n\n    def analyze_ner(self):\n        print(\"\\n How many opinion/target per sentence?\")\n        num_o, num_t = [], []\n        for s in self.sentences:\n            opinions, targets = set(), set()\n            for t in s.triples:\n                opinions.add((t.o_start, t.o_end))\n                targets.add((t.t_start, t.t_end))\n            num_o.append(len(opinions))\n            num_t.append(len(targets))\n        print(\n            dict(\n                num_o=get_simple_stats(num_o),\n                num_t=get_simple_stats(num_t),\n                sentences=len(self.sentences),\n            )\n        )\n\n    def analyze_direction(self):\n        print(\"\\n For targets, is opinion offset always positive/negative/both?\")\n        records = []\n        for s in self.sentences:\n            span_to_offsets = {}\n            for t in s.triples:\n                off = np.mean(t.target) - np.mean(t.opinion)\n                span_to_offsets.setdefault(t.opinion, []).append(off)\n            for span, offsets in span_to_offsets.items():\n                labels = [\n                    LabelEnum.positive if off > 0 else LabelEnum.negative\n                    for off in offsets\n                ]\n                lab = labels[0] if len(set(labels)) == 1 else LabelEnum.neutral\n                records.append(\n                    dict(\n                        span=\" \".join(s.tokens[span[0] : span[1] + 1]),\n                        text=s.as_text(),\n      
                  offsets=lab,\n                    )\n                )\n        df = pd.DataFrame(records)\n        print(df[\"offsets\"].value_counts(normalize=True))\n        df = df[df[\"offsets\"] == LabelEnum.neutral].drop(columns=[\"offsets\"])\n        with pd.option_context(\"display.max_colwidth\", 999):\n            print(df.head())\n\n    def analyze(self):\n        triples = [t for s in self.sentences for t in s.triples]\n        info = dict(\n            root=self.root,\n            sentences=len(self.sentences),\n            sentiments=Counter([t.label for t in triples]),\n            target_lengths=get_simple_stats(\n                [abs(t.t_start - t.t_end) + 1 for t in triples]\n            ),\n            opinion_lengths=get_simple_stats(\n                [abs(t.o_start - t.o_end) + 1 for t in triples]\n            ),\n            sentence_lengths=get_simple_stats([len(s.tokens) for s in self.sentences]),\n        )\n        for k, v in info.items():\n            print(k, v)\n\n        self.analyze_direction()\n        self.analyze_ner()\n        self.analyze_spans()\n        self.analyze_joined_spans()\n        self.analyze_tag_counts()\n        self.analyze_span_distance()\n        self.analyze_opinion_labels()\n        self.analyze_tag_score()\n        print(\"#\" * 80)\n\n\ndef test_save_to_path(path: str = \"aste/data/triplet_data/14lap/train.txt\"):\n    print(\"\\nEnsure that Data.save_to_path works properly\")\n    path_temp = \"temp.txt\"\n    data = Data.load_from_full_path(path)\n    data.save_to_path(path_temp)\n    print(\"\\nSamples\")\n    with open(path_temp) as f:\n        for line in f.readlines()[:5]:\n            print(line)\n    os.remove(path_temp)\n\n\ndef merge_data(items: List[Data]) -> Data:\n    merged = Data(root=Path(), data_split=items[0].data_split, sentences=[])\n    for data in items:\n        data.load()\n        merged.sentences.extend(data.sentences)\n    return merged\n\n\nclass Result(BaseModel):\n    num_sentences: int\n    num_pred: int = 0\n    num_gold: int = 0\n    num_correct: int = 0\n    num_start_correct: int = 0\n    num_start_end_correct: int = 0\n    num_opinion_correct: int = 0\n    num_target_correct: int = 0\n    num_span_overlap: int = 0\n    precision: float = 0.0\n    recall: float = 0.0\n    f_score: float = 0.0\n\n\nclass ResultAnalyzer(BaseModel):\n    @staticmethod\n    def check_overlap(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:\n        return (b_start <= a_start <= b_end) or (b_start <= a_end <= b_end)\n\n    @staticmethod\n    def run_sentence(pred: Sentence, gold: Sentence):\n        assert pred.tokens == gold.tokens\n        triples_gold = set([t.as_text(gold.tokens) for t in gold.triples])\n        triples_pred = set([t.as_text(pred.tokens) for t in pred.triples])\n        tp = triples_pred.intersection(triples_gold)\n        fp = triples_pred.difference(triples_gold)\n        fn = triples_gold.difference(triples_pred)\n        if fp or fn:\n            print(dict(gold=gold.as_text()))\n            print(dict(pred=pred.as_text()))\n            print(dict(tp=tp))\n            print(dict(fp=fp))\n            print(dict(fn=fn))\n            print(\"#\" * 80)\n\n    @staticmethod\n    def analyze_labels(pred: List[Sentence], gold: List[Sentence]):\n        y_pred = []\n        y_gold = []\n        for i in range(len(pred)):\n            for p in pred[i].triples:\n                for g in gold[i].triples:\n                    if (p.opinion, p.target) == (g.opinion, g.target):\n               
         y_pred.append(str(p.label))\n                        y_gold.append(str(g.label))\n\n        print(dict(num_span_correct=len(y_pred)))\n        if y_pred:\n            print(classification_report(y_gold, y_pred))\n\n    @staticmethod\n    def analyze_spans(pred: List[Sentence], gold: List[Sentence]):\n        num_triples_gold, triples_found_o, triples_found_t = 0, set(), set()\n        for label in [LabelEnum.opinion, LabelEnum.target]:\n            num_correct, num_pred, num_gold = 0, 0, 0\n            is_target = {LabelEnum.opinion: False, LabelEnum.target: True}[label]\n            for i, (p, g) in enumerate(zip(pred, gold)):\n                spans_gold = set(g.spans if g.spans else g.extract_spans())\n                spans_pred = set(p.spans if p.spans else p.extract_spans())\n                spans_gold = set([s for s in spans_gold if s[-1] == label])\n                spans_pred = set([s for s in spans_pred if s[-1] == label])\n\n                num_gold += len(spans_gold)\n                num_pred += len(spans_pred)\n                num_correct += len(spans_gold.intersection(spans_pred))\n\n                for t in g.triples:\n                    num_triples_gold += 1\n                    span = (t.target if is_target else t.opinion) + (label,)\n                    if span in spans_pred:\n                        t_unique = (i,) + tuple(t.dict().items())\n                        if is_target:\n                            triples_found_t.add(t_unique)\n                        else:\n                            triples_found_o.add(t_unique)\n\n            if num_correct and num_pred and num_gold:\n                p = round(num_correct / num_pred, ndigits=4)\n                r = round(num_correct / num_gold, ndigits=4)\n                f = round(2 * p * r / (p + r), ndigits=4)\n                info = dict(label=label, p=p, r=r, f=f)\n                print(json.dumps(info, indent=2))\n\n        assert num_triples_gold % 2 == 0  # Was double-counted above\n        num_triples_gold = num_triples_gold // 2\n        num_triples_pred_ceiling = len(triples_found_o.intersection(triples_found_t))\n        triples_pred_recall_ceiling = num_triples_pred_ceiling / num_triples_gold\n        print(\"\\n What is the upper bound for RE from predicted O & T?\")\n        print(dict(recall=round(triples_pred_recall_ceiling, ndigits=4)))\n\n    @classmethod\n    def run(cls, pred: List[Sentence], gold: List[Sentence], print_limit=16):\n        assert len(pred) == len(gold)\n        cls.analyze_labels(pred, gold)\n\n        r = Result(num_sentences=len(pred))\n        for i in range(len(pred)):\n            if i < print_limit:\n                cls.run_sentence(pred[i], gold[i])\n            r.num_pred += len(pred[i].triples)\n            r.num_gold += len(gold[i].triples)\n            for p in pred[i].triples:\n                for g in gold[i].triples:\n                    if p.dict() == g.dict():\n                        r.num_correct += 1\n                    if (p.o_start, p.t_start) == (g.o_start, g.t_start):\n                        r.num_start_correct += 1\n                    if (p.opinion, p.target) == (g.opinion, g.target):\n                        r.num_start_end_correct += 1\n                    if p.opinion == g.opinion:\n                        r.num_opinion_correct += 1\n                    if p.target == g.target:\n                        r.num_target_correct += 1\n                    if cls.check_overlap(*p.opinion, *g.opinion) and cls.check_overlap(\n                        *p.target, 
*g.target\n                    ):\n                        r.num_span_overlap += 1\n\n        e = 1e-9\n        r.precision = round(r.num_correct / (r.num_pred + e), 4)\n        r.recall = round(r.num_correct / (r.num_gold + e), 4)\n        r.f_score = round(2 * r.precision * r.recall / (r.precision + r.recall + e), 3)\n        print(r.json(indent=2))\n        cls.analyze_spans(pred, gold)\n\n\ndef test_merge(root=\"aste/data/triplet_data\"):\n    unmerged = [Data(root=p, data_split=SplitEnum.train) for p in Path(root).iterdir()]\n    data = merge_data(unmerged)\n    data.analyze()\n\n\nif __name__ == \"__main__\":\n    Fire()\n"
  },
  {
    "path": "aste/utils.py",
    "content": "import copy\nimport hashlib\nimport pickle\nimport subprocess\nimport time\nfrom pathlib import Path\nfrom typing import List, Set, Tuple, Union\n\nfrom fire import Fire\nfrom pydantic import BaseModel\n\n\nclass Shell(BaseModel):\n    verbose: bool = True\n\n    @classmethod\n    def format_kwargs(cls, **kwargs) -> str:\n        outputs = []\n        for k, v in kwargs.items():\n            k = k.replace(\"_\", \"-\")\n            k = f\"--{k}\"\n            outputs.extend([k, str(v)])\n        return \" \".join(outputs)\n\n    def run_command(self, command: str) -> str:\n        # Continuously print outputs for long-running commands\n        # Refer: https://fabianlee.org/2019/09/15/python-getting-live-output-from-subprocess-using-poll/\n        print(dict(command=command))\n        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)\n        outputs = []\n\n        while True:\n            if process.poll() is not None:\n                break\n            o = process.stdout.readline().decode()\n            if o:\n                outputs.append(o)\n                if self.verbose:\n                    print(o.strip())\n\n        return \"\".join(outputs)\n\n    def run(self, command: str, *args, **kwargs) -> str:\n        args = [str(a) for a in args]\n        command = \" \".join([command] + args + [self.format_kwargs(**kwargs)])\n        return self.run_command(command)\n\n\ndef hash_text(x: str) -> str:\n    return hashlib.md5(x.encode()).hexdigest()\n\n\nclass Timer(BaseModel):\n    name: str = \"\"\n    start: float = 0.0\n\n    def __enter__(self):\n        self.start = time.time()\n        return self\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        duration = round(time.time() - self.start, 3)\n        print(f\"Timer {self.name}: {duration}s\")\n\n\nclass PickleSaver(BaseModel):\n    path: Path\n\n    def dump(self, obj):\n        if not self.path.parent.exists():\n            self.path.parent.mkdir(exist_ok=True)\n        with open(self.path, \"wb\") as f:\n            pickle.dump(obj, f)\n\n    def load(self):\n        with Timer(name=str(self.path)):\n            with open(self.path, \"rb\") as f:\n                return pickle.load(f)\n\n\nclass FlexiModel(BaseModel):\n    class Config:\n        arbitrary_types_allowed = True\n\n\ndef get_simple_stats(numbers: List[Union[int, float]]):\n    return dict(\n        min=min(numbers),\n        max=max(numbers),\n        avg=sum(numbers) / len(numbers),\n    )\n\n\ndef count_joins(spans: Set[Tuple[int, int]]) -> int:\n    count = 0\n    for a_start, a_end in spans:\n        for b_start, b_end in spans:\n            if (a_start, a_end) == (b_start, b_end):\n                continue\n\n            if b_start <= a_start <= b_end + 1 or b_start - 1 <= a_end <= b_end:\n                count += 1\n    return count // 2\n\n\ndef update_nested_dict(d: dict, k: str, v, i=0, sep=\"__\"):\n    d = copy.deepcopy(d)\n    keys = k.split(sep)\n    assert keys[i] in d.keys(), str(dict(keys=keys, d=d, i=i))\n    if i == len(keys) - 1:\n        orig = d[keys[i]]\n        if v != orig:\n            print(dict(updated_key=k, new_value=v, orig=orig))\n            d[keys[i]] = v\n    else:\n        d[keys[i]] = update_nested_dict(d=d[keys[i]], k=k, v=v, i=i + 1)\n    return d\n\n\ndef test_update_nested_dict():\n    d = dict(top=dict(middle_a=dict(last=1), middle_b=0))\n    print(update_nested_dict(d, k=\"top__middle_b\", v=-1))\n    print(update_nested_dict(d, k=\"top__middle_a__last\", v=-1))\n    
print(update_nested_dict(d, k=\"top__middle_a__last\", v=1))\n\n\ndef clean_up_triplet_data(path: str):\n    outputs = []\n    with open(path) as f:\n        for line in f:\n            sep = \"####\"\n            text, tags_t, tags_o, triplets = line.split(sep)\n            outputs.append(sep.join([text, \" \", \" \", triplets]))\n\n    with open(path, \"w\") as f:\n        f.write(\"\".join(outputs))\n\n\ndef clean_up_many(pattern: str = \"data/triplet_data/*/*.txt\"):\n    for path in sorted(Path().glob(pattern)):\n        print(path)\n        clean_up_triplet_data(str(path))\n\n\ndef merge_data(\n    folders_in: List[str] = [\n        \"aste/data/triplet_data/14res/\",\n        \"aste/data/triplet_data/15res/\",\n        \"aste/data/triplet_data/16res/\",\n    ],\n    folder_out: str = \"aste/data/triplet_data/res_all/\",\n):\n    for name in [\"train.txt\", \"dev.txt\", \"test.txt\"]:\n        outputs = []\n        for folder in folders_in:\n            path = Path(folder) / name\n            with open(path) as f:\n                for line in f:\n                    assert line.endswith(\"\\n\")\n                    outputs.append(line)\n\n        path_out = Path(folder_out) / name\n        path_out.parent.mkdir(exist_ok=True, parents=True)\n        with open(path_out, \"w\") as f:\n            f.write(\"\".join(outputs))\n\n\ndef safe_divide(a: float, b: float) -> float:\n    if a == 0 or b == 0:\n        return 0\n    return a / b\n\n\nif __name__ == \"__main__\":\n    Fire()\n"
  },
  {
    "path": "aste/wrapper.py",
    "content": "import json\nimport os\nimport shutil\nimport sys\nfrom argparse import Namespace\nfrom pathlib import Path\nfrom typing import List, Tuple, Optional\n\nfrom allennlp.commands.predict import _predict\nfrom allennlp.commands.train import train_model\nfrom allennlp.common import Params\nfrom fire import Fire\nfrom pydantic import BaseModel\nfrom tqdm import tqdm\n\nfrom data_utils import Data, SentimentTriple, SplitEnum, Sentence, LabelEnum\nfrom utils import safe_divide\n\n\nclass SpanModelDocument(BaseModel):\n    sentences: List[List[str]]\n    ner: List[List[Tuple[int, int, str]]]\n    relations: List[List[Tuple[int, int, int, int, str]]]\n    doc_key: str\n\n    @property\n    def is_valid(self) -> bool:\n        return len(set(map(len, [self.sentences, self.ner, self.relations]))) == 1\n\n    @classmethod\n    def from_sentence(cls, x: Sentence):\n        ner: List[Tuple[int, int, str]] = []\n        for t in x.triples:\n            ner.append((t.o_start, t.o_end, LabelEnum.opinion))\n            ner.append((t.t_start, t.t_end, LabelEnum.target))\n        ner = sorted(set(ner), key=lambda n: n[0])\n        relations = [\n            (t.o_start, t.o_end, t.t_start, t.t_end, t.label) for t in x.triples\n        ]\n        return cls(\n            sentences=[x.tokens],\n            ner=[ner],\n            relations=[relations],\n            doc_key=str(x.id),\n        )\n\n\nclass SpanModelPrediction(SpanModelDocument):\n    predicted_ner: List[List[Tuple[int, int, LabelEnum, float, float]]] = [\n        []\n    ]  # If loss_weights[\"ner\"] == 0.0\n    predicted_relations: List[List[Tuple[int, int, int, int, LabelEnum, float, float]]]\n\n    def to_sentence(self) -> Sentence:\n        for lst in [self.sentences, self.predicted_ner, self.predicted_relations]:\n            assert len(lst) == 1\n\n        triples = [\n            SentimentTriple(o_start=os, o_end=oe, t_start=ts, t_end=te, label=label)\n            for os, oe, ts, te, label, value, prob in self.predicted_relations[0]\n        ]\n        return Sentence(\n            id=int(self.doc_key),\n            tokens=self.sentences[0],\n            pos=[],\n            weight=1,\n            is_labeled=False,\n            triples=triples,\n            spans=[lst[:3] for lst in self.predicted_ner[0]],\n        )\n\n\nclass SpanModelData(BaseModel):\n    root: Path\n    data_split: SplitEnum\n    documents: Optional[List[SpanModelDocument]]\n\n    @classmethod\n    def read(cls, path: Path) -> List[SpanModelDocument]:\n        docs = []\n        with open(path) as f:\n            for line in f:\n                line = line.strip()\n                raw: dict = json.loads(line)\n                docs.append(SpanModelDocument(**raw))\n        return docs\n\n    def load(self):\n        if self.documents is None:\n            path = self.root / f\"{self.data_split}.json\"\n            self.documents = self.read(path)\n\n    def dump(self, path: Path, sep=\"\\n\"):\n        for d in self.documents:\n            assert d.is_valid\n        with open(path, \"w\") as f:\n            f.write(sep.join([d.json() for d in self.documents]))\n        assert all(\n            [a.dict() == b.dict() for a, b in zip(self.documents, self.read(path))]\n        )\n\n    @classmethod\n    def from_data(cls, x: Data):\n        data = cls(root=x.root, data_split=x.data_split)\n        data.documents = [SpanModelDocument.from_sentence(s) for s in x.sentences]\n        return data\n\n\nclass SpanModel(BaseModel):\n    save_dir: str\n    
random_seed: int\n    path_config_base: str = \"training_config/config.jsonnet\"\n\n    def save_temp_data(self, path_in: str, name: str, is_test: bool = False) -> Path:\n        path_temp = Path(self.save_dir) / \"temp_data\" / f\"{name}.json\"\n        path_temp = path_temp.resolve()\n        path_temp.parent.mkdir(exist_ok=True, parents=True)\n        data = Data.load_from_full_path(path_in)\n\n        if is_test:\n            # SpanModel error if s.triples is empty list\n            assert data.sentences is not None\n            for s in data.sentences:\n                s.triples = [SentimentTriple.make_dummy()]\n\n        span_data = SpanModelData.from_data(data)\n        span_data.dump(path_temp)\n        return path_temp\n\n    def fit(self, path_train: str, path_dev: str):\n        weights_dir = Path(self.save_dir) / \"weights\"\n        weights_dir.mkdir(exist_ok=True, parents=True)\n        print(dict(weights_dir=weights_dir))\n\n        params = Params.from_file(\n            self.path_config_base,\n            params_overrides=dict(\n                random_seed=self.random_seed,\n                numpy_seed=self.random_seed,\n                pytorch_seed=self.random_seed,\n                train_data_path=str(self.save_temp_data(path_train, \"train\")),\n                validation_data_path=str(self.save_temp_data(path_dev, \"dev\")),\n                test_data_path=str(self.save_temp_data(path_dev, \"dev\")),\n            ),\n        )\n\n        # Register custom modules\n        sys.path.append(\".\")\n        from span_model.data.dataset_readers.span_model import SpanModelReader\n\n        assert SpanModelReader is not None\n        train_model(params, serialization_dir=str(weights_dir))\n\n    def predict(self, path_in: str, path_out: str):\n        path_model = Path(self.save_dir) / \"weights\" / \"model.tar.gz\"\n        path_temp_in = self.save_temp_data(path_in, \"pred_in\", is_test=True)\n        path_temp_out = Path(self.save_dir) / \"temp_data\" / \"pred_out.json\"\n        if path_temp_out.exists():\n            os.remove(path_temp_out)\n\n        args = Namespace(\n            archive_file=str(path_model),\n            input_file=str(path_temp_in),\n            output_file=str(path_temp_out),\n            weights_file=\"\",\n            batch_size=1,\n            silent=True,\n            cuda_device=0,\n            use_dataset_reader=True,\n            dataset_reader_choice=\"validation\",\n            overrides=\"\",\n            predictor=\"span_model\",\n            file_friendly_logging=False,\n        )\n\n        # Register custom modules\n        sys.path.append(\".\")\n        from span_model.data.dataset_readers.span_model import SpanModelReader\n        from span_model.predictors.span_model import SpanModelPredictor\n\n        assert SpanModelReader is not None\n        assert SpanModelPredictor is not None\n        _predict(args)\n\n        with open(path_temp_out) as f:\n            preds = [SpanModelPrediction(**json.loads(line.strip())) for line in f]\n        data = Data(\n            root=Path(),\n            data_split=SplitEnum.test,\n            sentences=[p.to_sentence() for p in preds],\n        )\n        data.save_to_path(path_out)\n\n    @classmethod\n    def score(cls, path_pred: str, path_gold: str) -> dict:\n        pred = Data.load_from_full_path(path_pred)\n        gold = Data.load_from_full_path(path_gold)\n        assert pred.sentences is not None\n        assert gold.sentences is not None\n        assert len(pred.sentences) == 
len(gold.sentences)\n        num_pred = 0\n        num_gold = 0\n        num_correct = 0\n\n        for i in range(len(gold.sentences)):\n            num_pred += len(pred.sentences[i].triples)\n            num_gold += len(gold.sentences[i].triples)\n            for p in pred.sentences[i].triples:\n                for g in gold.sentences[i].triples:\n                    if p.dict() == g.dict():\n                        num_correct += 1\n\n        precision = safe_divide(num_correct, num_pred)\n        recall = safe_divide(num_correct, num_gold)\n\n        info = dict(\n            path_pred=path_pred,\n            path_gold=path_gold,\n            precision=precision,\n            recall=recall,\n            score=safe_divide(2 * precision * recall, precision + recall),\n        )\n        return info\n\n\ndef run_score(path_pred: str, path_gold: str) -> dict:\n    return SpanModel.score(path_pred, path_gold)\n\n\ndef run_train(path_train: str, path_dev: str, save_dir: str, random_seed: int):\n    print(dict(run_train=locals()))\n    if Path(save_dir).exists():\n        return\n\n    model = SpanModel(save_dir=save_dir, random_seed=random_seed)\n    model.fit(path_train, path_dev)\n\n\ndef run_train_many(save_dir_template: str, random_seeds: List[int], **kwargs):\n    for seed in tqdm(random_seeds):\n        save_dir = save_dir_template.format(seed)\n        run_train(save_dir=save_dir, random_seed=seed, **kwargs)\n\n\ndef run_eval(path_test: str, save_dir: str):\n    print(dict(run_eval=locals()))\n    model = SpanModel(save_dir=save_dir, random_seed=0)\n    path_pred = str(Path(save_dir) / \"pred.txt\")\n    model.predict(path_test, path_pred)\n    results = model.score(path_pred, path_test)\n    print(results)\n    return results\n\n\ndef run_eval_many(save_dir_template: str, random_seeds: List[int], **kwargs):\n    results = []\n    for seed in tqdm(random_seeds):\n        save_dir = save_dir_template.format(seed)\n        results.append(run_eval(save_dir=save_dir, **kwargs))\n\n    precision = sum(r[\"precision\"] for r in results) / len(random_seeds)\n    recall = sum(r[\"recall\"] for r in results) / len(random_seeds)\n    score = safe_divide(2 * precision * recall, precision + recall)\n    print(dict(precision=precision, recall=recall, score=score))\n\n\nif __name__ == \"__main__\":\n    Fire()\n"
  },
  {
    "path": "demo.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {\n    \"colab\": {\n     \"base_uri\": \"https://localhost:8080/\"\n    },\n    \"id\": \"izKXA4b6-oIv\",\n    \"outputId\": \"1b436740-e1e0-4e01-e3f5-6325ce29907a\"\n   },\n   \"outputs\": [\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"Cloning into 'Span-ASTE'...\\n\",\n      \"remote: Enumerating objects: 191, done.\\u001B[K\\n\",\n      \"remote: Counting objects: 100% (97/97), done.\\u001B[K\\n\",\n      \"remote: Compressing objects: 100% (59/59), done.\\u001B[K\\n\",\n      \"remote: Total 191 (delta 58), reused 61 (delta 36), pack-reused 94\\u001B[K\\n\",\n      \"Receiving objects: 100% (191/191), 626.87 KiB | 23.22 MiB/s, done.\\n\",\n      \"Resolving deltas: 100% (80/80), done.\\n\",\n      \"Note: checking out 'f53ec3c'.\\n\",\n      \"\\n\",\n      \"You are in 'detached HEAD' state. You can look around, make experimental\\n\",\n      \"changes and commit them, and you can discard any commits you make in this\\n\",\n      \"state without impacting any branches by performing another checkout.\\n\",\n      \"\\n\",\n      \"If you want to create a new branch to retain commits you create, you may\\n\",\n      \"do so (now or later) by using -b with the checkout command again. Example:\\n\",\n      \"\\n\",\n      \"  git checkout -b <new-branch-name>\\n\",\n      \"\\n\",\n      \"HEAD is now at f53ec3c Add command-line scoring instructions in README.md\\n\",\n      \"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\\n\",\n      \"Collecting Cython==0.29.21\\n\",\n      \"  Downloading Cython-0.29.21-cp37-cp37m-manylinux1_x86_64.whl (2.0 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 2.0 MB 18.3 MB/s \\n\",\n      \"\\u001B[?25hCollecting PYEVALB==0.1.3\\n\",\n      \"  Downloading PYEVALB-0.1.3-py3-none-any.whl (13 kB)\\n\",\n      \"Collecting allennlp-models==1.2.2\\n\",\n      \"  Downloading allennlp_models-1.2.2-py3-none-any.whl (353 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 353 kB 69.5 MB/s \\n\",\n      \"\\u001B[?25hCollecting allennlp==1.2.2\\n\",\n      \"  Downloading allennlp-1.2.2-py3-none-any.whl (505 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 505 kB 39.7 MB/s \\n\",\n      \"\\u001B[?25hCollecting botocore==1.19.46\\n\",\n      \"  Downloading botocore-1.19.46-py2.py3-none-any.whl (7.2 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 7.2 MB 34.0 MB/s \\n\",\n      \"\\u001B[?25hCollecting fire==0.3.1\\n\",\n      \"  Downloading fire-0.3.1.tar.gz (81 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 81 kB 10.6 MB/s \\n\",\n      \"\\u001B[?25hCollecting nltk==3.6.6\\n\",\n      \"  Downloading nltk-3.6.6-py3-none-any.whl (1.5 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 1.5 MB 57.0 MB/s \\n\",\n      \"\\u001B[?25hCollecting numpy==1.21.5\\n\",\n      \"  Downloading numpy-1.21.5-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 15.7 MB 49.4 MB/s \\n\",\n      \"\\u001B[?25hCollecting pandas==1.1.5\\n\",\n      \"  Downloading pandas-1.1.5-cp37-cp37m-manylinux1_x86_64.whl (9.5 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 9.5 MB 53.1 MB/s \\n\",\n      \"\\u001B[?25hCollecting pydantic==1.6.2\\n\",\n      \"  Downloading 
pydantic-1.6.2-cp37-cp37m-manylinux2014_x86_64.whl (8.6 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 8.6 MB 28.2 MB/s \\n\",\n      \"\\u001B[?25hCollecting scikit-learn==0.22.2.post1\\n\",\n      \"  Downloading scikit_learn-0.22.2.post1-cp37-cp37m-manylinux1_x86_64.whl (7.1 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 7.1 MB 48.9 MB/s \\n\",\n      \"\\u001B[?25hCollecting torch==1.7.0\\n\",\n      \"  Downloading torch-1.7.0-cp37-cp37m-manylinux1_x86_64.whl (776.7 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 776.7 MB 4.4 kB/s \\n\",\n      \"\\u001B[?25hCollecting torchvision==0.8.1\\n\",\n      \"  Downloading torchvision-0.8.1-cp37-cp37m-manylinux1_x86_64.whl (12.7 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 12.7 MB 41.9 MB/s \\n\",\n      \"\\u001B[?25hCollecting transformers==3.4.0\\n\",\n      \"  Downloading transformers-3.4.0-py3-none-any.whl (1.3 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 1.3 MB 58.1 MB/s \\n\",\n      \"\\u001B[?25hCollecting boto3==1.16.46\\n\",\n      \"  Downloading boto3-1.16.46-py2.py3-none-any.whl (130 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 130 kB 75.8 MB/s \\n\",\n      \"\\u001B[?25hCollecting pytablewriter>=0.10.2\\n\",\n      \"  Downloading pytablewriter-0.64.2-py3-none-any.whl (106 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 106 kB 71.7 MB/s \\n\",\n      \"\\u001B[?25hCollecting word2number>=1.1\\n\",\n      \"  Downloading word2number-1.1.zip (9.7 kB)\\n\",\n      \"Collecting ftfy\\n\",\n      \"  Downloading ftfy-6.1.1-py3-none-any.whl (53 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 53 kB 2.1 MB/s \\n\",\n      \"\\u001B[?25hCollecting conllu==4.2.1\\n\",\n      \"  Downloading conllu-4.2.1-py2.py3-none-any.whl (14 kB)\\n\",\n      \"Collecting py-rouge==1.1\\n\",\n      \"  Downloading py_rouge-1.1-py3-none-any.whl (56 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 56 kB 3.5 MB/s \\n\",\n      \"\\u001B[?25hCollecting overrides==3.1.0\\n\",\n      \"  Downloading overrides-3.1.0.tar.gz (11 kB)\\n\",\n      \"Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (3.1.0)\\n\",\n      \"Collecting jsonpickle\\n\",\n      \"  Downloading jsonpickle-2.2.0-py2.py3-none-any.whl (39 kB)\\n\",\n      \"Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (3.6.4)\\n\",\n      \"Collecting spacy<2.4,>=2.1.0\\n\",\n      \"  Downloading spacy-2.3.8-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 4.8 MB 50.8 MB/s \\n\",\n      \"\\u001B[?25hRequirement already satisfied: tqdm>=4.19 in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (4.64.1)\\n\",\n      \"Collecting filelock<3.1,>=3.0\\n\",\n      \"  Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)\\n\",\n      \"Requirement already satisfied: requests>=2.18 in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (2.23.0)\\n\",\n      \"Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (1.7.3)\\n\",\n      \"Collecting tensorboardX>=1.2\\n\",\n      \"  Downloading 
tensorboardX-2.5.1-py2.py3-none-any.whl (125 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 125 kB 58.4 MB/s \\n\",\n      \"\\u001B[?25hCollecting jsonnet>=0.10.0\\n\",\n      \"  Downloading jsonnet-0.19.1.tar.gz (593 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 593 kB 72.4 MB/s \\n\",\n      \"\\u001B[?25hCollecting jmespath<1.0.0,>=0.7.1\\n\",\n      \"  Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)\\n\",\n      \"Collecting urllib3<1.27,>=1.25.4\\n\",\n      \"  Downloading urllib3-1.26.13-py2.py3-none-any.whl (140 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 140 kB 67.8 MB/s \\n\",\n      \"\\u001B[?25hRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.7/dist-packages (from botocore==1.19.46->-r requirements.txt (line 5)) (2.8.2)\\n\",\n      \"Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from fire==0.3.1->-r requirements.txt (line 6)) (1.15.0)\\n\",\n      \"Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from fire==0.3.1->-r requirements.txt (line 6)) (2.1.0)\\n\",\n      \"Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from nltk==3.6.6->-r requirements.txt (line 7)) (1.2.0)\\n\",\n      \"Requirement already satisfied: regex>=2021.8.3 in /usr/local/lib/python3.7/dist-packages (from nltk==3.6.6->-r requirements.txt (line 7)) (2022.6.2)\\n\",\n      \"Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from nltk==3.6.6->-r requirements.txt (line 7)) (7.1.2)\\n\",\n      \"Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas==1.1.5->-r requirements.txt (line 9)) (2022.6)\\n\",\n      \"Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 12)) (4.1.1)\\n\",\n      \"Collecting dataclasses\\n\",\n      \"  Downloading dataclasses-0.6-py3-none-any.whl (14 kB)\\n\",\n      \"Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 12)) (0.16.0)\\n\",\n      \"Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.8.1->-r requirements.txt (line 13)) (7.1.2)\\n\",\n      \"Requirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 14)) (3.19.6)\\n\",\n      \"Collecting sacremoses\\n\",\n      \"  Downloading sacremoses-0.0.53.tar.gz (880 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 880 kB 67.9 MB/s \\n\",\n      \"\\u001B[?25hCollecting tokenizers==0.9.2\\n\",\n      \"  Downloading tokenizers-0.9.2-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 2.9 MB 45.8 MB/s \\n\",\n      \"\\u001B[?25hCollecting sentencepiece!=0.1.92\\n\",\n      \"  Downloading sentencepiece-0.1.97-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 1.3 MB 62.0 MB/s \\n\",\n      \"\\u001B[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 14)) (21.3)\\n\",\n      \"Collecting s3transfer<0.4.0,>=0.3.0\\n\",\n      \"  Downloading s3transfer-0.3.7-py2.py3-none-any.whl (73 kB)\\n\",\n      \"\\u001B[K     
|████████████████████████████████| 73 kB 2.4 MB/s \\n\",\n      \"\\u001B[?25hCollecting typepy[datetime]<2,>=1.2.0\\n\",\n      \"  Downloading typepy-1.3.0-py3-none-any.whl (31 kB)\\n\",\n      \"Collecting pathvalidate<3,>=2.3.0\\n\",\n      \"  Downloading pathvalidate-2.5.2-py3-none-any.whl (20 kB)\\n\",\n      \"Collecting tabledata<2,>=1.3.0\\n\",\n      \"  Downloading tabledata-1.3.0-py3-none-any.whl (11 kB)\\n\",\n      \"Collecting mbstrdecoder<2,>=1.0.0\\n\",\n      \"  Downloading mbstrdecoder-1.1.1-py3-none-any.whl (7.7 kB)\\n\",\n      \"Collecting DataProperty<2,>=0.55.0\\n\",\n      \"  Downloading DataProperty-0.55.0-py3-none-any.whl (26 kB)\\n\",\n      \"Collecting tcolorpy<1,>=0.0.5\\n\",\n      \"  Downloading tcolorpy-0.1.2-py3-none-any.whl (7.9 kB)\\n\",\n      \"Requirement already satisfied: setuptools>=38.3.0 in /usr/local/lib/python3.7/dist-packages (from pytablewriter>=0.10.2->PYEVALB==0.1.3->-r requirements.txt (line 2)) (57.4.0)\\n\",\n      \"Requirement already satisfied: chardet<6,>=3.0.4 in /usr/local/lib/python3.7/dist-packages (from mbstrdecoder<2,>=1.0.0->pytablewriter>=0.10.2->PYEVALB==0.1.3->-r requirements.txt (line 2)) (3.0.4)\\n\",\n      \"Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.18->allennlp==1.2.2->-r requirements.txt (line 4)) (2.10)\\n\",\n      \"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.18->allennlp==1.2.2->-r requirements.txt (line 4)) (2022.9.24)\\n\",\n      \"Collecting urllib3<1.27,>=1.25.4\\n\",\n      \"  Downloading urllib3-1.25.11-py2.py3-none-any.whl (127 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 127 kB 73.6 MB/s \\n\",\n      \"\\u001B[?25hRequirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (0.10.1)\\n\",\n      \"Collecting catalogue<1.1.0,>=0.0.7\\n\",\n      \"  Downloading catalogue-1.0.2-py2.py3-none-any.whl (16 kB)\\n\",\n      \"Collecting srsly<1.1.0,>=1.0.2\\n\",\n      \"  Downloading srsly-1.0.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (208 kB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 208 kB 71.2 MB/s \\n\",\n      \"\\u001B[?25hRequirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (2.0.7)\\n\",\n      \"Collecting thinc<7.5.0,>=7.4.1\\n\",\n      \"  Downloading thinc-7.4.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.0 MB)\\n\",\n      \"\\u001B[K     |████████████████████████████████| 1.0 MB 58.3 MB/s \\n\",\n      \"\\u001B[?25hRequirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (3.0.8)\\n\",\n      \"Collecting plac<1.2.0,>=0.9.6\\n\",\n      \"  Downloading plac-1.1.3-py2.py3-none-any.whl (20 kB)\\n\",\n      \"Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (1.0.9)\\n\",\n      \"Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (0.7.9)\\n\",\n      \"Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages 
(from catalogue<1.1.0,>=0.0.7->spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (3.10.0)\\n\",\n      \"Requirement already satisfied: wcwidth>=0.2.5 in /usr/local/lib/python3.7/dist-packages (from ftfy->allennlp-models==1.2.2->-r requirements.txt (line 3)) (0.2.5)\\n\",\n      \"Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py->allennlp==1.2.2->-r requirements.txt (line 4)) (1.5.2)\\n\",\n      \"Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from jsonpickle->allennlp==1.2.2->-r requirements.txt (line 4)) (4.13.0)\\n\",\n      \"Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers==3.4.0->-r requirements.txt (line 14)) (3.0.9)\\n\",\n      \"Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (1.4.1)\\n\",\n      \"Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (1.11.0)\\n\",\n      \"Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (0.7.1)\\n\",\n      \"Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (9.0.0)\\n\",\n      \"Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (22.1.0)\\n\",\n      \"Building wheels for collected packages: fire, overrides, jsonnet, word2number, sacremoses\\n\",\n      \"  Building wheel for fire (setup.py) ... \\u001B[?25l\\u001B[?25hdone\\n\",\n      \"  Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111023 sha256=867877500d51fc1466978cad4ed0d27cb550966ead11c09f6ebb1c29d326c4d7\\n\",\n      \"  Stored in directory: /root/.cache/pip/wheels/95/38/e1/8b62337a8ecf5728bdc1017e828f253f7a9cf25db999861bec\\n\",\n      \"  Building wheel for overrides (setup.py) ... \\u001B[?25l\\u001B[?25hdone\\n\",\n      \"  Created wheel for overrides: filename=overrides-3.1.0-py3-none-any.whl size=10187 sha256=f29e5574b1a87b6159f1a10bd1d4884e547a8f0c981fdf36b32f876ba81e67fc\\n\",\n      \"  Stored in directory: /root/.cache/pip/wheels/3a/0d/38/01a9bc6e20dcfaf0a6a7b552d03137558ba1c38aea47644682\\n\",\n      \"  Building wheel for jsonnet (setup.py) ... \\u001B[?25l\\u001B[?25hdone\\n\",\n      \"  Created wheel for jsonnet: filename=jsonnet-0.19.1-cp37-cp37m-linux_x86_64.whl size=3997237 sha256=65b35b399530104bd0f8d4c9e9854bf5b1ea5fa5ef562a7a2991994f048d31c4\\n\",\n      \"  Stored in directory: /root/.cache/pip/wheels/03/6b/48/a168ed5f8d01c50268605eff341c29126286763607bf707e3b\\n\",\n      \"  Building wheel for word2number (setup.py) ... \\u001B[?25l\\u001B[?25hdone\\n\",\n      \"  Created wheel for word2number: filename=word2number-1.1-py3-none-any.whl size=5582 sha256=db776042f55344c643ceb582cc4583b9f7af2c165fd065777f58e72b1fbb68e4\\n\",\n      \"  Stored in directory: /root/.cache/pip/wheels/4b/c3/77/a5f48aeb0d3efb7cd5ad61cbd3da30bbf9ffc9662b07c9f879\\n\",\n      \"  Building wheel for sacremoses (setup.py) ... 
\\u001B[?25l\\u001B[?25hdone\\n\",\n      \"  Created wheel for sacremoses: filename=sacremoses-0.0.53-py3-none-any.whl size=895260 sha256=bf922fe785a540d21b8a0d5061ff77f1b7f7bdf172b948ea4af6e6b3817594b8\\n\",\n      \"  Stored in directory: /root/.cache/pip/wheels/87/39/dd/a83eeef36d0bf98e7a4d1933a4ad2d660295a40613079bafc9\\n\",\n      \"Successfully built fire overrides jsonnet word2number sacremoses\\n\",\n      \"Installing collected packages: mbstrdecoder, urllib3, typepy, numpy, jmespath, srsly, plac, catalogue, botocore, tokenizers, thinc, sentencepiece, sacremoses, s3transfer, filelock, DataProperty, dataclasses, transformers, torch, tensorboardX, tcolorpy, tabledata, spacy, scikit-learn, pathvalidate, overrides, nltk, jsonpickle, jsonnet, boto3, word2number, pytablewriter, py-rouge, ftfy, conllu, allennlp, torchvision, PYEVALB, pydantic, pandas, fire, Cython, allennlp-models\\n\",\n      \"  Attempting uninstall: urllib3\\n\",\n      \"    Found existing installation: urllib3 1.24.3\\n\",\n      \"    Uninstalling urllib3-1.24.3:\\n\",\n      \"      Successfully uninstalled urllib3-1.24.3\\n\",\n      \"  Attempting uninstall: numpy\\n\",\n      \"    Found existing installation: numpy 1.21.6\\n\",\n      \"    Uninstalling numpy-1.21.6:\\n\",\n      \"      Successfully uninstalled numpy-1.21.6\\n\",\n      \"  Attempting uninstall: srsly\\n\",\n      \"    Found existing installation: srsly 2.4.5\\n\",\n      \"    Uninstalling srsly-2.4.5:\\n\",\n      \"      Successfully uninstalled srsly-2.4.5\\n\",\n      \"  Attempting uninstall: catalogue\\n\",\n      \"    Found existing installation: catalogue 2.0.8\\n\",\n      \"    Uninstalling catalogue-2.0.8:\\n\",\n      \"      Successfully uninstalled catalogue-2.0.8\\n\",\n      \"  Attempting uninstall: thinc\\n\",\n      \"    Found existing installation: thinc 8.1.5\\n\",\n      \"    Uninstalling thinc-8.1.5:\\n\",\n      \"      Successfully uninstalled thinc-8.1.5\\n\",\n      \"  Attempting uninstall: filelock\\n\",\n      \"    Found existing installation: filelock 3.8.0\\n\",\n      \"    Uninstalling filelock-3.8.0:\\n\",\n      \"      Successfully uninstalled filelock-3.8.0\\n\",\n      \"  Attempting uninstall: torch\\n\",\n      \"    Found existing installation: torch 1.12.1+cu113\\n\",\n      \"    Uninstalling torch-1.12.1+cu113:\\n\",\n      \"      Successfully uninstalled torch-1.12.1+cu113\\n\",\n      \"  Attempting uninstall: spacy\\n\",\n      \"    Found existing installation: spacy 3.4.3\\n\",\n      \"    Uninstalling spacy-3.4.3:\\n\",\n      \"      Successfully uninstalled spacy-3.4.3\\n\",\n      \"  Attempting uninstall: scikit-learn\\n\",\n      \"    Found existing installation: scikit-learn 1.0.2\\n\",\n      \"    Uninstalling scikit-learn-1.0.2:\\n\",\n      \"      Successfully uninstalled scikit-learn-1.0.2\\n\",\n      \"  Attempting uninstall: nltk\\n\",\n      \"    Found existing installation: nltk 3.7\\n\",\n      \"    Uninstalling nltk-3.7:\\n\",\n      \"      Successfully uninstalled nltk-3.7\\n\",\n      \"  Attempting uninstall: torchvision\\n\",\n      \"    Found existing installation: torchvision 0.13.1+cu113\\n\",\n      \"    Uninstalling torchvision-0.13.1+cu113:\\n\",\n      \"      Successfully uninstalled torchvision-0.13.1+cu113\\n\",\n      \"  Attempting uninstall: pydantic\\n\",\n      \"    Found existing installation: pydantic 1.10.2\\n\",\n      \"    Uninstalling pydantic-1.10.2:\\n\",\n      \"      Successfully uninstalled pydantic-1.10.2\\n\",\n      \"  
Attempting uninstall: pandas\\n\",\n      \"    Found existing installation: pandas 1.3.5\\n\",\n      \"    Uninstalling pandas-1.3.5:\\n\",\n      \"      Successfully uninstalled pandas-1.3.5\\n\",\n      \"  Attempting uninstall: Cython\\n\",\n      \"    Found existing installation: Cython 0.29.32\\n\",\n      \"    Uninstalling Cython-0.29.32:\\n\",\n      \"      Successfully uninstalled Cython-0.29.32\\n\",\n      \"\\u001B[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\\n\",\n      \"yellowbrick 1.5 requires scikit-learn>=1.0.0, but you have scikit-learn 0.22.2.post1 which is incompatible.\\n\",\n      \"torchtext 0.13.1 requires torch==1.12.1, but you have torch 1.7.0 which is incompatible.\\n\",\n      \"torchaudio 0.12.1+cu113 requires torch==1.12.1, but you have torch 1.7.0 which is incompatible.\\n\",\n      \"imbalanced-learn 0.8.1 requires scikit-learn>=0.24, but you have scikit-learn 0.22.2.post1 which is incompatible.\\n\",\n      \"fastai 2.7.10 requires torchvision>=0.8.2, but you have torchvision 0.8.1 which is incompatible.\\n\",\n      \"en-core-web-sm 3.4.1 requires spacy<3.5.0,>=3.4.0, but you have spacy 2.3.8 which is incompatible.\\n\",\n      \"confection 0.0.3 requires pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4, but you have pydantic 1.6.2 which is incompatible.\\n\",\n      \"confection 0.0.3 requires srsly<3.0.0,>=2.4.0, but you have srsly 1.0.6 which is incompatible.\\u001B[0m\\n\",\n      \"Successfully installed Cython-0.29.21 DataProperty-0.55.0 PYEVALB-0.1.3 allennlp-1.2.2 allennlp-models-1.2.2 boto3-1.16.46 botocore-1.19.46 catalogue-1.0.2 conllu-4.2.1 dataclasses-0.6 filelock-3.0.12 fire-0.3.1 ftfy-6.1.1 jmespath-0.10.0 jsonnet-0.19.1 jsonpickle-2.2.0 mbstrdecoder-1.1.1 nltk-3.6.6 numpy-1.21.5 overrides-3.1.0 pandas-1.1.5 pathvalidate-2.5.2 plac-1.1.3 py-rouge-1.1 pydantic-1.6.2 pytablewriter-0.64.2 s3transfer-0.3.7 sacremoses-0.0.53 scikit-learn-0.22.2.post1 sentencepiece-0.1.97 spacy-2.3.8 srsly-1.0.6 tabledata-1.3.0 tcolorpy-0.1.2 tensorboardX-2.5.1 thinc-7.4.6 tokenizers-0.9.2 torch-1.7.0 torchvision-0.8.1 transformers-3.4.0 typepy-1.3.0 urllib3-1.25.11 word2number-1.1\\n\",\n      \"Found existing installation: dataclasses 0.6\\n\",\n      \"Uninstalling dataclasses-0.6:\\n\",\n      \"  Successfully uninstalled dataclasses-0.6\\n\",\n      \"Archive:  data.zip\\n\",\n      \"   creating: aste/data/\\n\",\n      \"   creating: aste/data/triplet_data/\\n\",\n      \"   creating: aste/data/triplet_data/14lap/\\n\",\n      \"  inflating: aste/data/triplet_data/14lap/dev.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/14lap/test.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/14lap/train.txt  \\n\",\n      \"   creating: aste/data/triplet_data/14res/\\n\",\n      \"  inflating: aste/data/triplet_data/14res/dev.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/14res/test.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/14res/train.txt  \\n\",\n      \"   creating: aste/data/triplet_data/15res/\\n\",\n      \"  inflating: aste/data/triplet_data/15res/dev.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/15res/test.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/15res/train.txt  \\n\",\n      \"   creating: aste/data/triplet_data/16res/\\n\",\n      \"  inflating: aste/data/triplet_data/16res/dev.txt  \\n\",\n      \"  inflating: aste/data/triplet_data/16res/test.txt  \\n\",\n      
\"  inflating: aste/data/triplet_data/16res/train.txt  \\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"!git clone https://github.com/chiayewken/Span-ASTE.git\\n\",\n    \"!cd Span-ASTE && git checkout f53ec3c\\n\",\n    \"!cp -a Span-ASTE/* .\\n\",\n    \"!echo boto3==1.16.46 >> requirements.txt\\n\",\n    \"!bash setup.sh\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {\n    \"colab\": {\n     \"base_uri\": \"https://localhost:8080/\"\n    },\n    \"id\": \"-pTnCgDxcSQ5\",\n    \"outputId\": \"a461cd5d-5ed6-4c38-9144-b4149ee62952\"\n   },\n   \"outputs\": [\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"tokens: ['I', 'charge', 'it', 'at', 'night', 'and', 'skip', 'taking', 'the', 'cord', 'with', 'me', 'because', 'of', 'the', 'good', 'battery', 'life', '.']\\n\",\n      \"target: (16, 17)\\n\",\n      \"opinion: (15, 15)\\n\",\n      \"label: LabelEnum.positive\\n\",\n      \"\\n\",\n      \"tokens: ['it', 'is', 'of', 'high', 'quality', ',', 'has', 'a', 'killer', 'GUI', ',', 'is', 'extremely', 'stable', ',', 'is', 'highly', 'expandable', ',', 'is', 'bundled', 'with', 'lots', 'of', 'very', 'good', 'applications', ',', 'is', 'easy', 'to', 'use', ',', 'and', 'is', 'absolutely', 'gorgeous', '.']\\n\",\n      \"target: (4, 4)\\n\",\n      \"opinion: (3, 3)\\n\",\n      \"label: LabelEnum.positive\\n\",\n      \"target: (9, 9)\\n\",\n      \"opinion: (8, 8)\\n\",\n      \"label: LabelEnum.positive\\n\",\n      \"target: (26, 26)\\n\",\n      \"opinion: (25, 25)\\n\",\n      \"label: LabelEnum.positive\\n\",\n      \"target: (31, 31)\\n\",\n      \"opinion: (29, 29)\\n\",\n      \"label: LabelEnum.positive\\n\",\n      \"\\n\",\n      \"tokens: ['Easy', 'to', 'start', 'up', 'and', 'does', 'not', 'overheat', 'as', 'much', 'as', 'other', 'laptops', '.']\\n\",\n      \"target: (2, 3)\\n\",\n      \"opinion: (0, 0)\\n\",\n      \"label: LabelEnum.positive\\n\",\n      \"\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"#@title Data Exploration\\n\",\n    \"data_name = \\\"14lap\\\" #@param [\\\"14lap\\\", \\\"14res\\\", \\\"15res\\\", \\\"16res\\\"]\\n\",\n    \"\\n\",\n    \"import sys\\n\",\n    \"sys.path.append(\\\"aste\\\")\\n\",\n    \"from data_utils import Data\\n\",\n    \"\\n\",\n    \"path = f\\\"aste/data/triplet_data/{data_name}/train.txt\\\"\\n\",\n    \"data = Data.load_from_full_path(path)\\n\",\n    \"\\n\",\n    \"for s in data.sentences[:3]:\\n\",\n    \"    print(\\\"tokens:\\\", s.tokens)\\n\",\n    \"    for t in s.triples:\\n\",\n    \"        print(\\\"target:\\\", (t.t_start, t.t_end))\\n\",\n    \"        print(\\\"opinion:\\\", (t.o_start, t.o_end))\\n\",\n    \"        print(\\\"label:\\\", t.label)\\n\",\n    \"    print()\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"source\": [\n    \"# Download pretrained SpanModel weights\\n\",\n    \"from pathlib import Path\\n\",\n    \"template = \\\"https://github.com/chiayewken/Span-ASTE/releases/download/v1.0.0/{}.tar\\\"\\n\",\n    \"url = template.format(data_name)\\n\",\n    \"model_tar = Path(url).name\\n\",\n    \"model_dir = Path(url).stem\\n\",\n    \"\\n\",\n    \"!wget -nc $url\\n\",\n    \"!tar -xf $model_tar\"\n   ],\n   \"metadata\": {\n    \"colab\": {\n     \"base_uri\": \"https://localhost:8080/\"\n    },\n    \"id\": \"3LmrJekiPHpQ\",\n    \"outputId\": \"0f62b4a9-8dd1-4363-b66f-1ae1661e6cab\"\n   },\n   \"execution_count\": 3,\n   \"outputs\": [\n    {\n     \"output_type\": \"stream\",\n     \"name\": 
\"stdout\",\n     \"text\": [\n      \"--2022-11-30 07:39:51--  https://github.com/chiayewken/Span-ASTE/releases/download/v1.0.0/14lap.tar\\n\",\n      \"Resolving github.com (github.com)... 20.205.243.166\\n\",\n      \"Connecting to github.com (github.com)|20.205.243.166|:443... connected.\\n\",\n      \"HTTP request sent, awaiting response... 302 Found\\n\",\n      \"Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/371216048/70bb2013-2773-44c0-b0d9-8a2ec8e38515?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221130%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221130T073951Z&X-Amz-Expires=300&X-Amz-Signature=10727051f65ed91031b2e1e8b05cf44384aae0bdafd0171b1655ca6c72249494&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=371216048&response-content-disposition=attachment%3B%20filename%3D14lap.tar&response-content-type=application%2Foctet-stream [following]\\n\",\n      \"--2022-11-30 07:39:51--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/371216048/70bb2013-2773-44c0-b0d9-8a2ec8e38515?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221130%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221130T073951Z&X-Amz-Expires=300&X-Amz-Signature=10727051f65ed91031b2e1e8b05cf44384aae0bdafd0171b1655ca6c72249494&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=371216048&response-content-disposition=attachment%3B%20filename%3D14lap.tar&response-content-type=application%2Foctet-stream\\n\",\n      \"Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\\n\",\n      \"Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.\\n\",\n      \"HTTP request sent, awaiting response... 
200 OK\\n\",\n      \"Length: 409068544 (390M) [application/octet-stream]\\n\",\n      \"Saving to: ‘14lap.tar’\\n\",\n      \"\\n\",\n      \"14lap.tar           100%[===================>] 390.12M  2.13MB/s    in 2m 21s  \\n\",\n      \"\\n\",\n      \"2022-11-30 07:42:13 (2.76 MB/s) - ‘14lap.tar’ saved [409068544/409068544]\\n\",\n      \"\\n\"\n     ]\n    }\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {\n    \"colab\": {\n     \"base_uri\": \"https://localhost:8080/\"\n    },\n    \"id\": \"r3i4rnIhapWe\",\n    \"outputId\": \"804a7c34-2089-4dab-b736-e2f92ba30f94\"\n   },\n   \"outputs\": [\n    {\n     \"output_type\": \"display_data\",\n     \"data\": {\n      \"text/plain\": [\n       \"Downloading:   0%|          | 0.00/433 [00:00<?, ?B/s]\"\n      ]\n     },\n     \"metadata\": {}\n    },\n    {\n     \"output_type\": \"display_data\",\n     \"data\": {\n      \"text/plain\": [\n       \"Downloading:   0%|          | 0.00/232k [00:00<?, ?B/s]\"\n      ]\n     },\n     \"metadata\": {}\n    },\n    {\n     \"output_type\": \"display_data\",\n     \"data\": {\n      \"text/plain\": [\n       \"Downloading:   0%|          | 0.00/466k [00:00<?, ?B/s]\"\n      ]\n     },\n     \"metadata\": {}\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"################################################################################\\n\",\n      \"################################################################################\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"display_data\",\n     \"data\": {\n      \"text/plain\": [\n       \"Downloading:   0%|          | 0.00/440M [00:00<?, ?B/s]\"\n      ]\n     },\n     \"metadata\": {}\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"WARNING:allennlp.nn.initializers:Did not use initialization regex that was passed: .*weight_matrix\\n\",\n      \"WARNING:allennlp.nn.initializers:Did not use initialization regex that was passed: .*weight_matrix\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"{'span_model_unused_keys': dict_keys(['serialization_dir'])}\\n\",\n      \"{'locals': ('span_extractor_type', 'endpoint')}\\n\",\n      \"{'locals': ('use_span_width_embeds', True)}\\n\",\n      \"{'ner_loss_fn': CrossEntropyLoss()}\\n\",\n      \"{'unused_keys': dict_keys([])}\\n\",\n      \"{'locals': {'self': ProperRelationExtractor(), 'make_feedforward': <function SpanModel.__init__.<locals>.make_feedforward at 0x7f26debbe680>, 'span_emb_dim': 1556, 'feature_size': 20, 'spans_per_word': 0.5, 'positive_label_weight': 1.0, 'regularizer': None, 'use_distance_embeds': True, 'use_pruning': True, 'kwargs': {}, 'vocab': Vocabulary with namespaces:  None__relation_labels, Size: 3 || None__ner_labels, Size: 3 || Non Padded Namespaces: {'*tags', '*labels'}, '__class__': <class 'span_model.models.relation_proper.ProperRelationExtractor'>}}\\n\",\n      \"{'token_emb_dim': 768, 'span_emb_dim': 1556, 'relation_scorer_dim': 3240}\\n\",\n      \"{'relation_loss_fn': CrossEntropyLoss()}\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"reading instances: 1it [00:00, 350.58it/s]\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"\\n\",\n      \"{'target': 'Windows 8', 'opinion': 'Did not enjoy', 'sentiment': <LabelEnum.negative: 'NEG'>}\\n\",\n      \"\\n\",\n      \"{'target': 'touchscreen functions', 'opinion': 'Did not enjoy', 'sentiment': <LabelEnum.negative: 'NEG'>}\\n\",\n      \"\\n\",\n      \"{'target': 'Windows 8', 'opinion': 'new', 'sentiment': <LabelEnum.neutral: 'NEU'>}\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# Use pretrained SpanModel weights for prediction\\n\",\n    \"import sys\\n\",\n    \"sys.path.append(\\\"aste\\\")\\n\",\n    \"from pathlib import Path\\n\",\n    \"from data_utils import Data, Sentence, SplitEnum\\n\",\n    \"from wrapper import SpanModel\\n\",\n    \"\\n\",\n    \"def predict_sentence(text: str, model: SpanModel) -> Sentence:\\n\",\n    \"    path_in = \\\"temp_in.txt\\\"\\n\",\n    \"    path_out = 
\\\"temp_out.txt\\\"\\n\",\n    \"    sent = Sentence(tokens=text.split(), triples=[], pos=[], is_labeled=False, weight=1, id=0)\\n\",\n    \"    data = Data(root=Path(), data_split=SplitEnum.test, sentences=[sent])\\n\",\n    \"    data.save_to_path(path_in)\\n\",\n    \"    model.predict(path_in, path_out)\\n\",\n    \"    data = Data.load_from_full_path(path_out)\\n\",\n    \"    return data.sentences[0]\\n\",\n    \"\\n\",\n    \"text = \\\"Did not enjoy the new Windows 8 and touchscreen functions .\\\"\\n\",\n    \"model = SpanModel(save_dir=model_dir, random_seed=0)\\n\",\n    \"sent = predict_sentence(text, model)\\n\",\n    \"\\n\",\n    \"for t in sent.triples:\\n\",\n    \"    target = \\\" \\\".join(sent.tokens[t.t_start:t.t_end+1])\\n\",\n    \"    opinion = \\\" \\\".join(sent.tokens[t.o_start:t.o_end+1])\\n\",\n    \"    print()\\n\",\n    \"    print(dict(target=target, opinion=opinion, sentiment=t.label))\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"colab\": {\n     \"base_uri\": \"https://localhost:8080/\"\n    },\n    \"collapsed\": true,\n    \"id\": \"srSNwqUz-39x\",\n    \"outputId\": \"9a34cc00-477f-4002-c8ec-e357284c2bc5\"\n   },\n   \"outputs\": [\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"{'weights_dir': PosixPath('outputs/14lap/seed_4/weights')}\\n\",\n      \"2022-11-30 03:01:13,590 - INFO - allennlp.common.params - random_seed = 4\\n\",\n      \"2022-11-30 03:01:13,592 - INFO - allennlp.common.params - numpy_seed = 4\\n\",\n      \"2022-11-30 03:01:13,596 - INFO - allennlp.common.params - pytorch_seed = 4\\n\",\n      \"2022-11-30 03:01:13,599 - INFO - allennlp.common.checks - Pytorch version: 1.7.0\\n\",\n      \"2022-11-30 03:01:13,600 - INFO - allennlp.common.params - type = default\\n\",\n      \"2022-11-30 03:01:13,604 - INFO - allennlp.common.params - dataset_reader.type = span_model\\n\",\n      \"2022-11-30 03:01:13,606 - INFO - allennlp.common.params - dataset_reader.lazy = False\\n\",\n      \"2022-11-30 03:01:13,608 - INFO - allennlp.common.params - dataset_reader.cache_directory = None\\n\",\n      \"2022-11-30 03:01:13,610 - INFO - allennlp.common.params - dataset_reader.max_instances = None\\n\",\n      \"2022-11-30 03:01:13,612 - INFO - allennlp.common.params - dataset_reader.manual_distributed_sharding = False\\n\",\n      \"2022-11-30 03:01:13,613 - INFO - allennlp.common.params - dataset_reader.manual_multi_process_sharding = False\\n\",\n      \"2022-11-30 03:01:13,615 - INFO - allennlp.common.params - dataset_reader.max_span_width = 8\\n\",\n      \"2022-11-30 03:01:13,617 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.type = pretrained_transformer_mismatched\\n\",\n      \"2022-11-30 03:01:13,618 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.token_min_padding_length = 0\\n\",\n      \"2022-11-30 03:01:13,620 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.model_name = bert-base-uncased\\n\",\n      \"2022-11-30 03:01:13,621 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.namespace = tags\\n\",\n      \"2022-11-30 03:01:13,623 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.max_length = 512\\n\",\n      \"2022-11-30 03:01:13,625 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.tokenizer_kwargs = None\\n\",\n      \"################################################################################\\n\",\n   
   \"2022-11-30 03:01:13,627 - INFO - allennlp.common.params - train_data_path = /content/outputs/14lap/seed_4/temp_data/train.json\\n\",\n      \"2022-11-30 03:01:13,630 - INFO - allennlp.common.params - vocabulary = <allennlp.common.lazy.Lazy object at 0x7f9692de6c90>\\n\",\n      \"2022-11-30 03:01:13,631 - INFO - allennlp.common.params - datasets_for_vocab_creation = None\\n\",\n      \"2022-11-30 03:01:13,633 - INFO - allennlp.common.params - validation_dataset_reader = None\\n\",\n      \"2022-11-30 03:01:13,634 - INFO - allennlp.common.params - validation_data_path = /content/outputs/14lap/seed_4/temp_data/dev.json\\n\",\n      \"2022-11-30 03:01:13,636 - INFO - allennlp.common.params - validation_data_loader = None\\n\",\n      \"2022-11-30 03:01:13,637 - INFO - allennlp.common.params - test_data_path = /content/outputs/14lap/seed_4/temp_data/dev.json\\n\",\n      \"2022-11-30 03:01:13,639 - INFO - allennlp.common.params - evaluate_on_test = False\\n\",\n      \"2022-11-30 03:01:13,640 - INFO - allennlp.common.params - batch_weight_key = \\n\",\n      \"2022-11-30 03:01:13,642 - INFO - allennlp.training.util - Reading training data from /content/outputs/14lap/seed_4/temp_data/train.json\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"reading instances: 906it [00:01, 803.93it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:01:14,774 - INFO - allennlp.training.util - Reading validation data from /content/outputs/14lap/seed_4/temp_data/dev.json\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\n\",\n      \"reading instances: 219it [00:00, 1307.35it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:01:14,953 - INFO - allennlp.training.util - Reading test data from /content/outputs/14lap/seed_4/temp_data/dev.json\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\n\",\n      \"reading instances: 219it [00:00, 520.69it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:01:15,382 - INFO - allennlp.common.params - type = from_instances\\n\",\n      \"2022-11-30 03:01:15,386 - INFO - allennlp.common.params - min_count = None\\n\",\n      \"2022-11-30 03:01:15,389 - INFO - allennlp.common.params - max_vocab_size = None\\n\",\n      \"2022-11-30 03:01:15,391 - INFO - allennlp.common.params - non_padded_namespaces = ('*tags', '*labels')\\n\",\n      \"2022-11-30 03:01:15,394 - INFO - allennlp.common.params - pretrained_files = None\\n\",\n      \"2022-11-30 03:01:15,397 - INFO - allennlp.common.params - only_include_pretrained_words = False\\n\",\n      \"2022-11-30 03:01:15,401 - INFO - allennlp.common.params - tokens_to_add = None\\n\",\n      \"2022-11-30 03:01:15,402 - INFO - allennlp.common.params - min_pretrained_embeddings = None\\n\",\n      \"2022-11-30 03:01:15,403 - INFO - allennlp.common.params - padding_token = @@PADDING@@\\n\",\n      \"2022-11-30 03:01:15,405 - INFO - allennlp.common.params - oov_token = @@UNKNOWN@@\\n\",\n      \"2022-11-30 03:01:15,406 - INFO - allennlp.data.vocabulary - Fitting token dictionary from dataset.\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     
\"text\": [\n      \"\\n\",\n      \"building vocab: 1344it [00:00, 14370.57it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:01:15,504 - INFO - allennlp.common.params - model.type = span_model\\n\",\n      \"2022-11-30 03:01:15,507 - INFO - allennlp.common.params - model.regularizer = None\\n\",\n      \"2022-11-30 03:01:15,513 - INFO - allennlp.common.params - model.embedder.type = basic\\n\",\n      \"2022-11-30 03:01:15,515 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.type = pretrained_transformer_mismatched\\n\",\n      \"2022-11-30 03:01:15,517 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.model_name = bert-base-uncased\\n\",\n      \"2022-11-30 03:01:15,519 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.max_length = 512\\n\",\n      \"2022-11-30 03:01:15,520 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.train_parameters = True\\n\",\n      \"2022-11-30 03:01:15,521 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.last_layer_only = True\\n\",\n      \"2022-11-30 03:01:15,523 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.gradient_checkpointing = None\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:01:15,525 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.tokenizer_kwargs = None\\n\",\n      \"2022-11-30 03:01:15,526 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.transformer_kwargs = None\\n\",\n      \"2022-11-30 03:01:15,653 - INFO - allennlp.common.params - model.modules.relation.spans_per_word = 0.5\\n\",\n      \"2022-11-30 03:01:15,654 - INFO - allennlp.common.params - model.modules.relation.use_distance_embeds = True\\n\",\n      \"2022-11-30 03:01:15,657 - INFO - allennlp.common.params - model.modules.relation.use_pruning = True\\n\",\n      \"2022-11-30 03:01:15,658 - INFO - allennlp.common.params - model.feature_size = 20\\n\",\n      \"2022-11-30 03:01:15,659 - INFO - allennlp.common.params - model.max_span_width = 8\\n\",\n      \"2022-11-30 03:01:15,660 - INFO - allennlp.common.params - model.target_task = relation\\n\",\n      \"2022-11-30 03:01:15,663 - INFO - allennlp.common.params - model.initializer.regexes.0.1.type = xavier_normal\\n\",\n      \"2022-11-30 03:01:15,665 - INFO - allennlp.common.params - model.initializer.regexes.0.1.gain = 1.0\\n\",\n      \"2022-11-30 03:01:15,667 - INFO - allennlp.common.params - model.initializer.prevent_regexes = None\\n\",\n      \"2022-11-30 03:01:15,669 - INFO - allennlp.common.params - model.module_initializer.regexes.0.1.type = xavier_normal\\n\",\n      \"2022-11-30 03:01:15,670 - INFO - allennlp.common.params - model.module_initializer.regexes.0.1.gain = 1.0\\n\",\n      \"2022-11-30 03:01:15,672 - INFO - allennlp.common.params - model.module_initializer.regexes.1.1.type = xavier_normal\\n\",\n      \"2022-11-30 03:01:15,673 - INFO - allennlp.common.params - model.module_initializer.regexes.1.1.gain = 1.0\\n\",\n      \"2022-11-30 03:01:15,674 - INFO - allennlp.common.params - model.module_initializer.prevent_regexes = None\\n\",\n      \"2022-11-30 03:01:15,676 - INFO - allennlp.common.params - model.display_metrics = None\\n\",\n      \"2022-11-30 
03:01:15,677 - INFO - allennlp.common.params - model.span_extractor_type = endpoint\\n\",\n      \"2022-11-30 03:01:15,678 - INFO - allennlp.common.params - model.use_span_width_embeds = True\\n\",\n      \"{'span_model_unused_keys': dict_keys(['serialization_dir'])}\\n\",\n      \"{'locals': ('span_extractor_type', 'endpoint')}\\n\",\n      \"{'locals': ('use_span_width_embeds', True)}\\n\",\n      \"2022-11-30 03:01:15,680 - INFO - allennlp.common.params - ner.regularizer = None\\n\",\n      \"2022-11-30 03:01:15,681 - INFO - allennlp.common.params - ner.name = ner_labels\\n\",\n      \"{'ner_loss_fn': CrossEntropyLoss()}\\n\",\n      \"2022-11-30 03:01:15,687 - INFO - allennlp.common.params - relation.regularizer = None\\n\",\n      \"2022-11-30 03:01:15,688 - INFO - allennlp.common.params - relation.serialization_dir = None\\n\",\n      \"2022-11-30 03:01:15,689 - INFO - allennlp.common.params - relation.spans_per_word = 0.5\\n\",\n      \"2022-11-30 03:01:15,690 - INFO - allennlp.common.params - relation.positive_label_weight = 1.0\\n\",\n      \"2022-11-30 03:01:15,692 - INFO - allennlp.common.params - relation.use_distance_embeds = True\\n\",\n      \"2022-11-30 03:01:15,693 - INFO - allennlp.common.params - relation.use_pruning = True\\n\",\n      \"{'unused_keys': dict_keys([])}\\n\",\n      \"{'locals': {'self': ProperRelationExtractor(), 'make_feedforward': <function SpanModel.__init__.<locals>.make_feedforward at 0x7f95ec5935f0>, 'span_emb_dim': 1556, 'feature_size': 20, 'spans_per_word': 0.5, 'positive_label_weight': 1.0, 'regularizer': None, 'use_distance_embeds': True, 'use_pruning': True, 'kwargs': {}, 'vocab': Vocabulary with namespaces:  None__ner_labels, Size: 3 || None__relation_labels, Size: 3 || Non Padded Namespaces: {'*labels', '*tags'}, '__class__': <class 'span_model.models.relation_proper.ProperRelationExtractor'>}}\\n\",\n      \"{'token_emb_dim': 768, 'span_emb_dim': 1556, 'relation_scorer_dim': 3240}\\n\",\n      \"{'relation_loss_fn': CrossEntropyLoss()}\\n\",\n      \"2022-11-30 03:01:15,722 - INFO - allennlp.nn.initializers - Initializing parameters\\n\",\n      \"2022-11-30 03:01:15,727 - INFO - allennlp.nn.initializers - Initializing _ner_scorers.None__ner_labels.0._module._linear_layers.0.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,733 - INFO - allennlp.nn.initializers - Initializing _ner_scorers.None__ner_labels.0._module._linear_layers.1.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,743 - INFO - allennlp.nn.initializers - Initializing _ner_scorers.None__ner_labels.1._module.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,745 - WARNING - allennlp.nn.initializers - Did not use initialization regex that was passed: .*weight_matrix\\n\",\n      \"2022-11-30 03:01:15,746 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code\\n\",\n      \"2022-11-30 03:01:15,748 - INFO - allennlp.nn.initializers -    _ner_scorers.None__ner_labels.0._module._linear_layers.0.bias\\n\",\n      \"2022-11-30 03:01:15,749 - INFO - allennlp.nn.initializers -    _ner_scorers.None__ner_labels.0._module._linear_layers.1.bias\\n\",\n      \"2022-11-30 03:01:15,750 - INFO - allennlp.nn.initializers -    _ner_scorers.None__ner_labels.1._module.bias\\n\",\n      \"2022-11-30 03:01:15,752 - INFO - allennlp.nn.initializers - Initializing parameters\\n\",\n      \"2022-11-30 03:01:15,753 - INFO - allennlp.nn.initializers - 
Initializing d_embedder.embedder.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,754 - INFO - allennlp.nn.initializers - Initializing _relation_feedforwards.None__relation_labels._linear_layers.0.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,781 - INFO - allennlp.nn.initializers - Initializing _relation_feedforwards.None__relation_labels._linear_layers.1.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,783 - INFO - allennlp.nn.initializers - Initializing _relation_scorers.None__relation_labels.weight using .*weight initializer\\n\",\n      \"2022-11-30 03:01:15,787 - WARNING - allennlp.nn.initializers - Did not use initialization regex that was passed: .*weight_matrix\\n\",\n      \"2022-11-30 03:01:15,789 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code\\n\",\n      \"2022-11-30 03:01:15,791 - INFO - allennlp.nn.initializers -    _relation_feedforwards.None__relation_labels._linear_layers.0.bias\\n\",\n      \"2022-11-30 03:01:15,794 - INFO - allennlp.nn.initializers -    _relation_feedforwards.None__relation_labels._linear_layers.1.bias\\n\",\n      \"2022-11-30 03:01:15,797 - INFO - allennlp.nn.initializers -    _relation_scorers.None__relation_labels.bias\\n\",\n      \"2022-11-30 03:01:15,799 - INFO - allennlp.nn.initializers - Initializing parameters\\n\",\n      \"2022-11-30 03:01:15,802 - INFO - allennlp.nn.initializers - Initializing _endpoint_span_extractor._span_width_embedding.weight using _span_width_embedding.weight initializer\\n\",\n      \"2022-11-30 03:01:15,806 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code\\n\",\n      \"2022-11-30 03:01:15,809 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.bias\\n\",\n      \"2022-11-30 03:01:15,811 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.weight\\n\",\n      \"2022-11-30 03:01:15,813 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.position_embeddings.weight\\n\",\n      \"2022-11-30 03:01:15,815 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.token_type_embeddings.weight\\n\",\n      \"2022-11-30 03:01:15,818 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.word_embeddings.weight\\n\",\n      \"2022-11-30 03:01:15,820 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.bias\\n\",\n      \"2022-11-30 03:01:15,822 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.weight\\n\",\n      \"2022-11-30 03:01:15,823 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.bias\\n\",\n      \"2022-11-30 03:01:15,824 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.weight\\n\",\n      \"2022-11-30 03:01:15,829 - INFO - allennlp.nn.initializers -    
_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.bias\\n\",\n      \"2022-11-30 03:01:15,831 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.weight\\n\",\n      \"2022-11-30 03:01:15,832 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.bias\\n\",\n      \"2022-11-30 03:01:15,833 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.weight\\n\",\n      \"2022-11-30 03:01:15,834 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.bias\\n\",\n      \"2022-11-30 03:01:15,837 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.weight\\n\",\n      \"2022-11-30 03:01:15,838 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.bias\\n\",\n      \"2022-11-30 03:01:15,839 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.weight\\n\",\n      \"2022-11-30 03:01:15,841 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.bias\\n\",\n      \"2022-11-30 03:01:15,844 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.weight\\n\",\n      \"2022-11-30 03:01:15,845 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.bias\\n\",\n      \"2022-11-30 03:01:15,847 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.weight\\n\",\n      \"2022-11-30 03:01:15,849 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.LayerNorm.bias\\n\",\n      \"2022-11-30 03:01:15,850 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.LayerNorm.weight\\n\",\n      \"2022-11-30 03:01:15,851 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.dense.bias\\n\",\n      \"2022-11-30 03:01:15,853 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.dense.weight\\n\",\n      \"2022-11-30 03:01:15,855 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.key.bias\\n\",\n      \"2022-11-30 03:01:15,857 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.key.weight\\n\",\n      \"2022-11-30 03:01:15,859 - INFO - allennlp.nn.initializers -    _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.query.bias\\n\",\n      \"2022-11-30 03:01:15,862 - INFO - allennlp.nn.initializers -    
_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.query.weight\\n\",\n      \"[... identical allennlp.nn.initializers INFO lines enumerating the remaining bias/weight parameters of encoder layers 1-11 and the pooler omitted for brevity ...]\\n\",\n      \"2022-11-30 03:01:16,097 - INFO - allennlp.nn.initializers -    
_ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.bias\\n\",\n      \"2022-11-30 03:01:16,098 - INFO - allennlp.nn.initializers -    _ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.weight\\n\",\n      \"2022-11-30 03:01:16,099 - INFO - allennlp.nn.initializers -    _ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.bias\\n\",\n      \"2022-11-30 03:01:16,100 - INFO - allennlp.nn.initializers -    _ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.weight\\n\",\n      \"2022-11-30 03:01:16,102 - INFO - allennlp.nn.initializers -    _ner._ner_scorers.None__ner_labels.1._module.bias\\n\",\n      \"2022-11-30 03:01:16,103 - INFO - allennlp.nn.initializers -    _ner._ner_scorers.None__ner_labels.1._module.weight\\n\",\n      \"2022-11-30 03:01:16,104 - INFO - allennlp.nn.initializers -    _relation._relation_feedforwards.None__relation_labels._linear_layers.0.bias\\n\",\n      \"2022-11-30 03:01:16,105 - INFO - allennlp.nn.initializers -    _relation._relation_feedforwards.None__relation_labels._linear_layers.0.weight\\n\",\n      \"2022-11-30 03:01:16,107 - INFO - allennlp.nn.initializers -    _relation._relation_feedforwards.None__relation_labels._linear_layers.1.bias\\n\",\n      \"2022-11-30 03:01:16,108 - INFO - allennlp.nn.initializers -    _relation._relation_feedforwards.None__relation_labels._linear_layers.1.weight\\n\",\n      \"2022-11-30 03:01:16,109 - INFO - allennlp.nn.initializers -    _relation._relation_scorers.None__relation_labels.bias\\n\",\n      \"2022-11-30 03:01:16,111 - INFO - allennlp.nn.initializers -    _relation._relation_scorers.None__relation_labels.weight\\n\",\n      \"2022-11-30 03:01:16,112 - INFO - allennlp.nn.initializers -    _relation.d_embedder.embedder.weight\\n\",\n      \"2022-11-30 03:01:16,113 - INFO - filelock - Lock 140284684846864 acquired on outputs/14lap/seed_4/weights/vocabulary/.lock\\n\",\n      \"2022-11-30 03:01:16,115 - INFO - filelock - Lock 140284684846864 released on outputs/14lap/seed_4/weights/vocabulary/.lock\\n\",\n      \"2022-11-30 03:01:16,116 - INFO - allennlp.common.params - data_loader.type = pytorch_dataloader\\n\",\n      \"2022-11-30 03:01:16,118 - INFO - allennlp.common.params - data_loader.batch_size = 1\\n\",\n      \"2022-11-30 03:01:16,119 - INFO - allennlp.common.params - data_loader.shuffle = False\\n\",\n      \"2022-11-30 03:01:16,120 - INFO - allennlp.common.params - data_loader.batch_sampler = None\\n\",\n      \"2022-11-30 03:01:16,122 - INFO - allennlp.common.params - data_loader.num_workers = 0\\n\",\n      \"2022-11-30 03:01:16,123 - INFO - allennlp.common.params - data_loader.pin_memory = False\\n\",\n      \"2022-11-30 03:01:16,124 - INFO - allennlp.common.params - data_loader.drop_last = False\\n\",\n      \"2022-11-30 03:01:16,125 - INFO - allennlp.common.params - data_loader.timeout = 0\\n\",\n      \"2022-11-30 03:01:16,127 - INFO - allennlp.common.params - data_loader.worker_init_fn = None\\n\",\n      \"2022-11-30 03:01:16,128 - INFO - allennlp.common.params - data_loader.multiprocessing_context = None\\n\",\n      \"2022-11-30 03:01:16,129 - INFO - allennlp.common.params - data_loader.batches_per_epoch = None\\n\",\n      \"2022-11-30 03:01:16,131 - INFO - allennlp.common.params - data_loader.sampler.type = random\\n\",\n      \"2022-11-30 03:01:16,132 - INFO - allennlp.common.params - data_loader.sampler.replacement = False\\n\",\n      \"2022-11-30 03:01:16,134 - INFO - allennlp.common.params - data_loader.sampler.num_samples = None\\n\",\n      
\"2022-11-30 03:01:16,136 - INFO - allennlp.common.params - data_loader.type = pytorch_dataloader\\n\",\n      \"2022-11-30 03:01:16,137 - INFO - allennlp.common.params - data_loader.batch_size = 1\\n\",\n      \"2022-11-30 03:01:16,139 - INFO - allennlp.common.params - data_loader.shuffle = False\\n\",\n      \"2022-11-30 03:01:16,140 - INFO - allennlp.common.params - data_loader.batch_sampler = None\\n\",\n      \"2022-11-30 03:01:16,142 - INFO - allennlp.common.params - data_loader.num_workers = 0\\n\",\n      \"2022-11-30 03:01:16,143 - INFO - allennlp.common.params - data_loader.pin_memory = False\\n\",\n      \"2022-11-30 03:01:16,144 - INFO - allennlp.common.params - data_loader.drop_last = False\\n\",\n      \"2022-11-30 03:01:16,146 - INFO - allennlp.common.params - data_loader.timeout = 0\\n\",\n      \"2022-11-30 03:01:16,147 - INFO - allennlp.common.params - data_loader.worker_init_fn = None\\n\",\n      \"2022-11-30 03:01:16,148 - INFO - allennlp.common.params - data_loader.multiprocessing_context = None\\n\",\n      \"2022-11-30 03:01:16,149 - INFO - allennlp.common.params - data_loader.batches_per_epoch = None\\n\",\n      \"2022-11-30 03:01:16,151 - INFO - allennlp.common.params - data_loader.sampler.type = random\\n\",\n      \"2022-11-30 03:01:16,152 - INFO - allennlp.common.params - data_loader.sampler.replacement = False\\n\",\n      \"2022-11-30 03:01:16,154 - INFO - allennlp.common.params - data_loader.sampler.num_samples = None\\n\",\n      \"2022-11-30 03:01:16,155 - INFO - allennlp.common.params - data_loader.type = pytorch_dataloader\\n\",\n      \"2022-11-30 03:01:16,157 - INFO - allennlp.common.params - data_loader.batch_size = 1\\n\",\n      \"2022-11-30 03:01:16,158 - INFO - allennlp.common.params - data_loader.shuffle = False\\n\",\n      \"2022-11-30 03:01:16,160 - INFO - allennlp.common.params - data_loader.batch_sampler = None\\n\",\n      \"2022-11-30 03:01:16,161 - INFO - allennlp.common.params - data_loader.num_workers = 0\\n\",\n      \"2022-11-30 03:01:16,162 - INFO - allennlp.common.params - data_loader.pin_memory = False\\n\",\n      \"2022-11-30 03:01:16,164 - INFO - allennlp.common.params - data_loader.drop_last = False\\n\",\n      \"2022-11-30 03:01:16,165 - INFO - allennlp.common.params - data_loader.timeout = 0\\n\",\n      \"2022-11-30 03:01:16,167 - INFO - allennlp.common.params - data_loader.worker_init_fn = None\\n\",\n      \"2022-11-30 03:01:16,168 - INFO - allennlp.common.params - data_loader.multiprocessing_context = None\\n\",\n      \"2022-11-30 03:01:16,169 - INFO - allennlp.common.params - data_loader.batches_per_epoch = None\\n\",\n      \"2022-11-30 03:01:16,171 - INFO - allennlp.common.params - data_loader.sampler.type = random\\n\",\n      \"2022-11-30 03:01:16,172 - INFO - allennlp.common.params - data_loader.sampler.replacement = False\\n\",\n      \"2022-11-30 03:01:16,173 - INFO - allennlp.common.params - data_loader.sampler.num_samples = None\\n\",\n      \"2022-11-30 03:01:16,175 - INFO - allennlp.common.params - trainer.type = gradient_descent\\n\",\n      \"2022-11-30 03:01:16,177 - INFO - allennlp.common.params - trainer.patience = None\\n\",\n      \"2022-11-30 03:01:16,178 - INFO - allennlp.common.params - trainer.validation_metric = +MEAN__relation_f1\\n\",\n      \"2022-11-30 03:01:16,179 - INFO - allennlp.common.params - trainer.num_epochs = 10\\n\",\n      \"2022-11-30 03:01:16,181 - INFO - allennlp.common.params - trainer.cuda_device = 0\\n\",\n      \"2022-11-30 03:01:16,182 - INFO - allennlp.common.params - 
\"2022-11-30 03:01:16,182 - INFO - allennlp.common.params - trainer.grad_norm = 5\\n\",\n      \"2022-11-30 03:01:16,184 - INFO - allennlp.common.params - trainer.grad_clipping = None\\n\",\n      \"2022-11-30 03:01:16,185 - INFO - allennlp.common.params - trainer.distributed = False\\n\",\n      \"2022-11-30 03:01:16,186 - INFO - allennlp.common.params - trainer.world_size = 1\\n\",\n      \"2022-11-30 03:01:16,188 - INFO - allennlp.common.params - trainer.num_gradient_accumulation_steps = 1\\n\",\n      \"2022-11-30 03:01:16,189 - INFO - allennlp.common.params - trainer.use_amp = False\\n\",\n      \"2022-11-30 03:01:16,191 - INFO - allennlp.common.params - trainer.no_grad = None\\n\",\n      \"2022-11-30 03:01:16,192 - INFO - allennlp.common.params - trainer.momentum_scheduler = None\\n\",\n      \"2022-11-30 03:01:16,194 - INFO - allennlp.common.params - trainer.tensorboard_writer = <allennlp.common.lazy.Lazy object at 0x7f9692e41190>\\n\",\n      \"2022-11-30 03:01:16,195 - INFO - allennlp.common.params - trainer.moving_average = None\\n\",\n      \"2022-11-30 03:01:16,196 - INFO - allennlp.common.params - trainer.batch_callbacks = None\\n\",\n      \"2022-11-30 03:01:16,198 - INFO - allennlp.common.params - trainer.epoch_callbacks = None\\n\",\n      \"2022-11-30 03:01:16,199 - INFO - allennlp.common.params - trainer.end_callbacks = None\\n\",\n      \"2022-11-30 03:01:16,200 - INFO - allennlp.common.params - trainer.trainer_callbacks = None\\n\",\n      \"2022-11-30 03:01:16,508 - INFO - allennlp.common.params - trainer.optimizer.type = adamw\\n\",\n      \"2022-11-30 03:01:16,518 - INFO - allennlp.common.params - trainer.optimizer.lr = 0.001\\n\",\n      \"2022-11-30 03:01:16,520 - INFO - allennlp.common.params - trainer.optimizer.betas = (0.9, 0.999)\\n\",\n      \"2022-11-30 03:01:16,521 - INFO - allennlp.common.params - trainer.optimizer.eps = 1e-08\\n\",\n      \"2022-11-30 03:01:16,522 - INFO - allennlp.common.params - trainer.optimizer.weight_decay = 0\\n\",\n      \"2022-11-30 03:01:16,523 - INFO - allennlp.common.params - trainer.optimizer.amsgrad = False\\n\",\n      \"2022-11-30 03:01:16,526 - INFO - allennlp.training.optimizers - Done constructing parameter groups.\\n\",\n      \"2022-11-30 03:01:16,527 - INFO - allennlp.training.optimizers - Group 0: ['... names of all BERT embedding, encoder and pooler parameters, omitted for brevity ...'], {'finetune': True, 'lr': 5e-05, 'weight_decay': 0.01}\\n\",\n      \"2022-11-30 03:01:16,530 - INFO - allennlp.training.optimizers - Group 1: [], {'lr': 0.01}\\n\",
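These parameter groups implement a discriminative learning-rate scheme: Group 0 fine-tunes every BERT parameter at lr 5e-05 with weight decay 0.01, Group 1 (named `scalar_parameters`, per the warning that follows) matches nothing in this run, and the remaining task-specific layers fall into Group 2 at the base lr of 0.001. Below is a minimal sketch of how such groups are typically expressed with AllenNLP's `parameter_groups`; the values and the `scalar_parameters` name come from the log, while the `_embedder` regex and the overall layout are illustrative assumptions.

```jsonnet
// Sketch only: parameter_groups in AllenNLP is a list of
// [regex-list, overrides] pairs; parameters matching no regex fall
// through to the optimizer defaults (Group 2 here).
{
  trainer: {
    optimizer: {
      type: "adamw",
      lr: 0.001,        // base rate, applied to the task-specific layers
      weight_decay: 0,
      parameter_groups: [
        // Group 0: all BERT embedder parameters, fine-tuned at a lower rate
        // (the extra finetune flag is carried in the group as logged).
        [["_embedder"], { lr: 5e-05, weight_decay: 0.01, finetune: true }],
        // Group 1: matched no parameter names in this run (see the warning below).
        [["scalar_parameters"], { lr: 0.01 }],
      ],
    },
  },
}
```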
\"2022-11-30 03:01:16,531 - INFO - allennlp.training.optimizers - Group 2: ['_ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.weight', '_relation._relation_feedforwards.None__relation_labels._linear_layers.1.bias', '_relation._relation_scorers.None__relation_labels.bias', '_endpoint_span_extractor._span_width_embedding.weight', '_relation._relation_feedforwards.None__relation_labels._linear_layers.1.weight', '_ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.weight', '_relation._relation_feedforwards.None__relation_labels._linear_layers.0.bias', '_relation._relation_feedforwards.None__relation_labels._linear_layers.0.weight', '_relation.d_embedder.embedder.weight', '_ner._ner_scorers.None__ner_labels.1._module.bias', '_ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.bias', '_ner._ner_scorers.None__ner_labels.1._module.weight', '_ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.bias', '_relation._relation_scorers.None__relation_labels.weight'], {}\\n\",\n      \"2022-11-30 03:01:16,532 - WARNING - allennlp.training.optimizers - When constructing parameter groups, scalar_parameters does not match any parameter name\\n\",\n      \"2022-11-30 03:01:16,534 - INFO - allennlp.training.optimizers - Number of trainable parameters: 110249737\\n\",\n      \"2022-11-30 03:01:16,538 - INFO - allennlp.common.util - The following parameters are Frozen (without gradient):\\n\",\n      \"2022-11-30 03:01:16,542 - INFO - allennlp.common.util - The following parameters are Tunable (with gradient):\\n\",\n      \"2022-11-30 03:01:16,544 - INFO - allennlp.common.util - _endpoint_span_extractor._span_width_embedding.weight\\n\",\n      \"2022-11-30 03:01:16,545 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.word_embeddings.weight\\n\",\n      \"2022-11-30 03:01:16,547 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.position_embeddings.weight\\n\",\n      \"2022-11-30 03:01:16,548 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.token_type_embeddings.weight\\n\",\n      \"2022-11-30 03:01:16,549 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.weight\\n\",\n      \"2022-11-30 03:01:16,550 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.bias\\n\",\n      \"2022-11-30 03:01:16,552 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.weight\\n\",\n      \"2022-11-30 03:01:16,553 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.bias\\n\",\n      \"2022-11-30 03:01:16,554 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.weight\\n\",\n      \"2022-11-30 03:01:16,556 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.bias\\n\",\n      \"2022-11-30 03:01:16,557 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.weight\\n\",\n      \"2022-11-30 03:01:16,558 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.bias\\n\",\n      \"2022-11-30 03:01:16,559 - INFO - allennlp.common.util - 
_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.weight\n",
   \"2022-11-30 03:01:16,928 - INFO - allennlp.common.util - _relation._relation_scorers.None__relation_labels.bias\\n\",\n      \"2022-11-30 03:01:16,929 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.type = slanted_triangular\\n\",\n      \"2022-11-30 03:01:16,932 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.cut_frac = 0.1\\n\",\n      \"2022-11-30 03:01:16,933 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.ratio = 32\\n\",\n      \"2022-11-30 03:01:16,935 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.last_epoch = -1\\n\",\n      \"2022-11-30 03:01:16,936 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.gradual_unfreezing = False\\n\",\n      \"2022-11-30 03:01:16,941 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.discriminative_fine_tuning = False\\n\",\n      \"2022-11-30 03:01:16,942 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.decay_factor = 0.38\\n\",\n      \"2022-11-30 03:01:16,943 - INFO - allennlp.common.params - trainer.checkpointer.type = default\\n\",\n      \"2022-11-30 03:01:16,945 - INFO - allennlp.common.params - trainer.checkpointer.keep_serialized_model_every_num_seconds = None\\n\",\n      \"2022-11-30 03:01:16,946 - INFO - allennlp.common.params - trainer.checkpointer.num_serialized_models_to_keep = 1\\n\",\n      \"2022-11-30 03:01:16,948 - INFO - allennlp.common.params - trainer.checkpointer.model_save_interval = None\\n\",\n      \"2022-11-30 03:01:16,949 - INFO - allennlp.common.params - summary_interval = 100\\n\",\n      \"2022-11-30 03:01:16,950 - INFO - allennlp.common.params - histogram_interval = None\\n\",\n      \"2022-11-30 03:01:16,951 - INFO - allennlp.common.params - batch_size_interval = None\\n\",\n      \"2022-11-30 03:01:16,952 - INFO - allennlp.common.params - should_log_parameter_statistics = True\\n\",\n      \"2022-11-30 03:01:16,954 - INFO - allennlp.common.params - should_log_learning_rate = False\\n\",\n      \"2022-11-30 03:01:16,955 - INFO - allennlp.common.params - get_batch_num_total = None\\n\",\n      \"2022-11-30 03:01:16,967 - WARNING - allennlp.training.trainer - You provided a validation dataset but patience was set to None, meaning that early stopping is disabled\\n\",\n      \"2022-11-30 03:01:16,968 - INFO - allennlp.training.trainer - Beginning training.\\n\",\n      \"2022-11-30 03:01:16,969 - INFO - allennlp.training.trainer - Epoch 0/9\\n\",\n      \"2022-11-30 03:01:16,971 - INFO - allennlp.training.trainer - Worker 0 memory usage: 4.0G\\n\",\n      \"2022-11-30 03:01:16,972 - INFO - allennlp.training.trainer - GPU 0 memory usage: 844M\\n\",\n      \"2022-11-30 03:01:16,974 - INFO - allennlp.training.trainer - Training\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\r  0%|          | 0/906 [00:00<?, ?it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:01:17,437 - WARNING - allennlp.training.util - Metrics with names beginning with \\\"_\\\" will not be logged to the tqdm progress bar.\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.0735, MEAN__relation_recall: 0.0541, MEAN__relation_f1: 0.0623, batch_loss: 4.1855, loss: 19.9654 ||: 100%|##########| 906/906 [01:29<00:00, 10.17it/s]\\n\"\n     ]\n    },\n    {\n     \"output_type\": 
\"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:02:47,332 - INFO - allennlp.training.trainer - Validating\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.8919, MEAN__relation_recall: 0.0957, MEAN__relation_f1: 0.1728, batch_loss: 3.4678, loss: 9.2253 ||: 100%|##########| 219/219 [00:05<00:00, 40.59it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:02:52,748 - INFO - allennlp.training.tensorboard_writer -                               Training |  Validation\\n\",\n      \"2022-11-30 03:02:52,752 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_f1         |     0.062  |     0.173\\n\",\n      \"2022-11-30 03:02:52,759 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_precision  |     0.073  |     0.892\\n\",\n      \"2022-11-30 03:02:52,762 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_recall     |     0.054  |     0.096\\n\",\n      \"2022-11-30 03:02:52,764 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_f1             |     0.330  |     0.716\\n\",\n      \"2022-11-30 03:02:52,768 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_precision      |     0.320  |     0.707\\n\",\n      \"2022-11-30 03:02:52,770 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_recall         |     0.340  |     0.725\\n\",\n      \"2022-11-30 03:02:52,772 - INFO - allennlp.training.tensorboard_writer - _None__ner_f1             |     0.330  |     0.716\\n\",\n      \"2022-11-30 03:02:52,774 - INFO - allennlp.training.tensorboard_writer - _None__ner_precision      |     0.320  |     0.707\\n\",\n      \"2022-11-30 03:02:52,776 - INFO - allennlp.training.tensorboard_writer - _None__ner_recall         |     0.340  |     0.725\\n\",\n      \"2022-11-30 03:02:52,778 - INFO - allennlp.training.tensorboard_writer - _None__relation_f1        |     0.062  |     0.173\\n\",\n      \"2022-11-30 03:02:52,780 - INFO - allennlp.training.tensorboard_writer - _None__relation_precision |     0.073  |     0.892\\n\",\n      \"2022-11-30 03:02:52,782 - INFO - allennlp.training.tensorboard_writer - _None__relation_recall    |     0.054  |     0.096\\n\",\n      \"2022-11-30 03:02:52,784 - INFO - allennlp.training.tensorboard_writer - gpu_0_memory_MB           |   843.606  |       N/A\\n\",\n      \"2022-11-30 03:02:52,785 - INFO - allennlp.training.tensorboard_writer - loss                      |    19.965  |     9.225\\n\",\n      \"2022-11-30 03:02:52,787 - INFO - allennlp.training.tensorboard_writer - worker_0_memory_MB        |  4127.316  |       N/A\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:02:55,895 - INFO - allennlp.training.checkpointer - Best validation performance so far. 
Copying weights to 'outputs/14lap/seed_4/weights/best.th'.\\n\",\n      \"2022-11-30 03:02:57,626 - INFO - allennlp.training.trainer - Epoch duration: 0:01:40.656161\\n\",\n      \"2022-11-30 03:02:57,632 - INFO - allennlp.training.trainer - Estimated training time remaining: 0:15:05\\n\",\n      \"2022-11-30 03:02:57,636 - INFO - allennlp.training.trainer - Epoch 1/9\\n\",\n      \"2022-11-30 03:02:57,639 - INFO - allennlp.training.trainer - Worker 0 memory usage: 4.1G\\n\",\n      \"2022-11-30 03:02:57,644 - INFO - allennlp.training.trainer - GPU 0 memory usage: 1.8G\\n\",\n      \"2022-11-30 03:02:57,649 - INFO - allennlp.training.trainer - Training\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.4882, MEAN__relation_recall: 0.3404, MEAN__relation_f1: 0.4011, batch_loss: 10.0549, loss: 9.5514 ||: 100%|##########| 906/906 [01:28<00:00, 10.20it/s]\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:04:27,788 - INFO - allennlp.training.trainer - Validating\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.7639, MEAN__relation_recall: 0.1594, MEAN__relation_f1: 0.2638, batch_loss: 8.5782, loss: 8.9333 ||: 100%|##########| 219/219 [00:05<00:00, 43.13it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:04:32,879 - INFO - allennlp.training.tensorboard_writer -                               Training |  Validation\\n\",\n      \"2022-11-30 03:04:32,881 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_f1         |     0.401  |     0.264\\n\",\n      \"2022-11-30 03:04:32,884 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_precision  |     0.488  |     0.764\\n\",\n      \"2022-11-30 03:04:32,889 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_recall     |     0.340  |     0.159\\n\",\n      \"2022-11-30 03:04:32,895 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_f1             |     0.754  |     0.784\\n\",\n      \"2022-11-30 03:04:32,897 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_precision      |     0.777  |     0.775\\n\",\n      \"2022-11-30 03:04:32,900 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_recall         |     0.732  |     0.793\\n\",\n      \"2022-11-30 03:04:32,905 - INFO - allennlp.training.tensorboard_writer - _None__ner_f1             |     0.754  |     0.784\\n\",\n      \"2022-11-30 03:04:32,907 - INFO - allennlp.training.tensorboard_writer - _None__ner_precision      |     0.777  |     0.775\\n\",\n      \"2022-11-30 03:04:32,910 - INFO - allennlp.training.tensorboard_writer - _None__ner_recall         |     0.732  |     0.793\\n\",\n      \"2022-11-30 03:04:32,913 - INFO - allennlp.training.tensorboard_writer - _None__relation_f1        |     0.401  |     0.264\\n\",\n      \"2022-11-30 03:04:32,918 - INFO - allennlp.training.tensorboard_writer - _None__relation_precision |     0.488  |     0.764\\n\",\n      \"2022-11-30 03:04:32,923 - INFO - allennlp.training.tensorboard_writer - _None__relation_recall    |     0.340  |     0.159\\n\",\n      \"2022-11-30 03:04:32,950 - INFO - allennlp.training.tensorboard_writer - gpu_0_memory_MB           |  1871.789  |       N/A\\n\",\n      \"2022-11-30 03:04:32,952 - INFO - 
allennlp.training.tensorboard_writer - loss                      |     9.551  |     8.933\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:04:32,962 - INFO - allennlp.training.tensorboard_writer - worker_0_memory_MB        |  4215.652  |       N/A\\n\",\n      \"2022-11-30 03:04:36,059 - INFO - allennlp.training.checkpointer - Best validation performance so far. Copying weights to 'outputs/14lap/seed_4/weights/best.th'.\\n\",\n      \"2022-11-30 03:04:38,109 - INFO - allennlp.training.trainer - Epoch duration: 0:01:40.472822\\n\",\n      \"2022-11-30 03:04:38,110 - INFO - allennlp.training.trainer - Estimated training time remaining: 0:13:24\\n\",\n      \"2022-11-30 03:04:38,116 - INFO - allennlp.training.trainer - Epoch 2/9\\n\",\n      \"2022-11-30 03:04:38,120 - INFO - allennlp.training.trainer - Worker 0 memory usage: 4.1G\\n\",\n      \"2022-11-30 03:04:38,123 - INFO - allennlp.training.trainer - GPU 0 memory usage: 1.8G\\n\",\n      \"2022-11-30 03:04:38,129 - INFO - allennlp.training.trainer - Training\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.6161, MEAN__relation_recall: 0.5089, MEAN__relation_f1: 0.5574, batch_loss: 1.6394, loss: 7.2603 ||: 100%|##########| 906/906 [01:28<00:00, 10.23it/s]\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:06:07,972 - INFO - allennlp.training.trainer - Validating\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.6667, MEAN__relation_recall: 0.4058, MEAN__relation_f1: 0.5045, batch_loss: 5.9121, loss: 12.1476 ||: 100%|##########| 219/219 [00:05<00:00, 42.53it/s]\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:06:13,133 - INFO - allennlp.training.tensorboard_writer -                               Training |  Validation\\n\",\n      \"2022-11-30 03:06:13,139 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_f1         |     0.557  |     0.505\\n\",\n      \"2022-11-30 03:06:13,141 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_precision  |     0.616  |     0.667\\n\",\n      \"2022-11-30 03:06:13,144 - INFO - allennlp.training.tensorboard_writer - MEAN__relation_recall     |     0.509  |     0.406\\n\",\n      \"2022-11-30 03:06:13,147 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_f1             |     0.860  |     0.796\\n\",\n      \"2022-11-30 03:06:13,149 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_precision      |     0.879  |     0.795\\n\",\n      \"2022-11-30 03:06:13,152 - INFO - allennlp.training.tensorboard_writer - _MEAN__ner_recall         |     0.842  |     0.797\\n\",\n      \"2022-11-30 03:06:13,155 - INFO - allennlp.training.tensorboard_writer - _None__ner_f1             |     0.860  |     0.796\\n\",\n      \"2022-11-30 03:06:13,158 - INFO - allennlp.training.tensorboard_writer - _None__ner_precision      |     0.879  |     0.795\\n\",\n      \"2022-11-30 03:06:13,160 - INFO - allennlp.training.tensorboard_writer - _None__ner_recall         |     0.842  |     0.797\\n\",\n      \"2022-11-30 03:06:13,164 - INFO - 
allennlp.training.tensorboard_writer - _None__relation_f1        |     0.557  |     0.505\\n\",\n      \"2022-11-30 03:06:13,169 - INFO - allennlp.training.tensorboard_writer - _None__relation_precision |     0.616  |     0.667\\n\",\n      \"2022-11-30 03:06:13,172 - INFO - allennlp.training.tensorboard_writer - _None__relation_recall    |     0.509  |     0.406\\n\",\n      \"2022-11-30 03:06:13,182 - INFO - allennlp.training.tensorboard_writer - gpu_0_memory_MB           |  1871.789  |       N/A\\n\",\n      \"2022-11-30 03:06:13,189 - INFO - allennlp.training.tensorboard_writer - loss                      |     7.260  |    12.148\\n\",\n      \"2022-11-30 03:06:13,192 - INFO - allennlp.training.tensorboard_writer - worker_0_memory_MB        |  4215.652  |       N/A\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"2022-11-30 03:06:16,292 - INFO - allennlp.training.checkpointer - Best validation performance so far. Copying weights to 'outputs/14lap/seed_4/weights/best.th'.\\n\",\n      \"2022-11-30 03:06:18,593 - INFO - allennlp.training.trainer - Epoch duration: 0:01:40.476870\\n\",\n      \"2022-11-30 03:06:18,596 - INFO - allennlp.training.trainer - Estimated training time remaining: 0:11:43\\n\",\n      \"2022-11-30 03:06:18,599 - INFO - allennlp.training.trainer - Epoch 3/9\\n\",\n      \"2022-11-30 03:06:18,601 - INFO - allennlp.training.trainer - Worker 0 memory usage: 4.1G\\n\",\n      \"2022-11-30 03:06:18,604 - INFO - allennlp.training.trainer - GPU 0 memory usage: 1.8G\\n\",\n      \"2022-11-30 03:06:18,608 - INFO - allennlp.training.trainer - Training\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"MEAN__relation_precision: 0.7126, MEAN__relation_recall: 0.6330, MEAN__relation_f1: 0.6704, batch_loss: 11.1852, loss: 4.8294 ||:  14%|#3        | 126/906 [00:11<01:12, 10.76it/s]\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# Train SpanModel from scratch\\n\",\n    \"random_seed = 4\\n\",\n    \"path_train = f\\\"aste/data/triplet_data/{data_name}/train.txt\\\"\\n\",\n    \"path_dev = f\\\"aste/data/triplet_data/{data_name}/dev.txt\\\"\\n\",\n    \"save_dir = f\\\"outputs/{data_name}/seed_{random_seed}\\\"\\n\",\n    \"\\n\",\n    \"model = SpanModel(save_dir=save_dir, random_seed=random_seed)\\n\",\n    \"model.fit(path_train, path_dev)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"metadata\": {\n    \"id\": \"yjyiKWjSF7oZ\",\n    \"colab\": {\n     \"base_uri\": \"https://localhost:8080/\"\n    },\n    \"outputId\": \"d7475f09-d593-48a9-a708-da3186dd575a\"\n   },\n   \"outputs\": [\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"WARNING:allennlp.nn.initializers:Did not use initialization regex that was passed: .*weight_matrix\\n\",\n      \"WARNING:allennlp.nn.initializers:Did not use initialization regex that was passed: .*weight_matrix\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"################################################################################\\n\",\n      \"################################################################################\\n\",\n      \"{'span_model_unused_keys': dict_keys(['serialization_dir'])}\\n\",\n      \"{'locals': 
('span_extractor_type', 'endpoint')}\\n\",\n      \"{'locals': ('use_span_width_embeds', True)}\\n\",\n      \"{'ner_loss_fn': CrossEntropyLoss()}\\n\",\n      \"{'unused_keys': dict_keys([])}\\n\",\n      \"{'locals': {'self': ProperRelationExtractor(), 'make_feedforward': <function SpanModel.__init__.<locals>.make_feedforward at 0x7f26de145f80>, 'span_emb_dim': 1556, 'feature_size': 20, 'spans_per_word': 0.5, 'positive_label_weight': 1.0, 'regularizer': None, 'use_distance_embeds': True, 'use_pruning': True, 'kwargs': {}, 'vocab': Vocabulary with namespaces:  None__relation_labels, Size: 3 || None__ner_labels, Size: 3 || Non Padded Namespaces: {'*tags', '*labels'}, '__class__': <class 'span_model.models.relation_proper.ProperRelationExtractor'>}}\\n\",\n      \"{'token_emb_dim': 768, 'span_emb_dim': 1556, 'relation_scorer_dim': 3240}\\n\",\n      \"{'relation_loss_fn': CrossEntropyLoss()}\\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stderr\",\n     \"text\": [\n      \"reading instances: 328it [00:00, 802.08it/s] \\n\"\n     ]\n    },\n    {\n     \"output_type\": \"stream\",\n     \"name\": \"stdout\",\n     \"text\": [\n      \"{\\n\",\n      \"  \\\"path_pred\\\": \\\"pred.txt\\\",\\n\",\n      \"  \\\"path_gold\\\": \\\"aste/data/triplet_data/14lap/test.txt\\\",\\n\",\n      \"  \\\"precision\\\": 0.658695652173913,\\n\",\n      \"  \\\"recall\\\": 0.5580110497237569,\\n\",\n      \"  \\\"score\\\": 0.6041874376869392\\n\",\n      \"}\\n\"\n     ]\n    }\n   ],\n   \"source\": [\n    \"# Evaluate SpanModel F1 Score\\n\",\n    \"import json\\n\",\n    \"\\n\",\n    \"path_pred = \\\"pred.txt\\\"\\n\",\n    \"path_test = f\\\"aste/data/triplet_data/{data_name}/test.txt\\\"\\n\",\n    \"model.predict(path_in=path_test, path_out=path_pred)\\n\",\n    \"results = model.score(path_pred, path_test)\\n\",\n    \"print(json.dumps(results, indent=2))\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"accelerator\": \"GPU\",\n  \"colab\": {\n   \"provenance\": []\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"name\": \"python\"\n  },\n  \"widgets\": {\n   \"application/vnd.jupyter.widget-state+json\": {\n    \"f61ea767ae064779b77f7d206a90b765\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HBoxModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HBoxModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HBoxView\",\n      \"box_style\": \"\",\n      \"children\": [\n       \"IPY_MODEL_382e7815e6314a798943a7f71eab1dbd\",\n       \"IPY_MODEL_b3f970d5f20748d091d13d1d37e712e4\",\n       \"IPY_MODEL_3c925c25029e4e5a9515b525a819cb31\"\n      ],\n      \"layout\": \"IPY_MODEL_17148c3a40ae4572923f16f249179b9b\"\n     }\n    },\n    \"382e7815e6314a798943a7f71eab1dbd\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      
\"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_9cc1d9231ee34d33b65a88c4de3b213f\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_9afa9b48d00748739422b2e32763e57d\",\n      \"value\": \"Downloading: 100%\"\n     }\n    },\n    \"b3f970d5f20748d091d13d1d37e712e4\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"FloatProgressModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"FloatProgressModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"ProgressView\",\n      \"bar_style\": \"success\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_f6d16c6d56974ec88f55333b65e0f16a\",\n      \"max\": 433,\n      \"min\": 0,\n      \"orientation\": \"horizontal\",\n      \"style\": \"IPY_MODEL_e776ac7bd605497395d6cf45648c46e0\",\n      \"value\": 433\n     }\n    },\n    \"3c925c25029e4e5a9515b525a819cb31\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_87d171d5ff4d48bea3781c4185c53cd3\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_ac44150d1166470f944a1d0effeae80b\",\n      \"value\": \" 433/433 [00:00&lt;00:00, 13.3kB/s]\"\n     }\n    },\n    \"17148c3a40ae4572923f16f249179b9b\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": 
null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"9cc1d9231ee34d33b65a88c4de3b213f\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"9afa9b48d00748739422b2e32763e57d\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"f6d16c6d56974ec88f55333b65e0f16a\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": 
null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"e776ac7bd605497395d6cf45648c46e0\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"ProgressStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"ProgressStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"bar_color\": null,\n      \"description_width\": \"\"\n     }\n    },\n    \"87d171d5ff4d48bea3781c4185c53cd3\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"ac44150d1166470f944a1d0effeae80b\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"916c3c664f2348b5b608c368090945ac\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HBoxModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": 
\"1.5.0\",\n      \"_model_name\": \"HBoxModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HBoxView\",\n      \"box_style\": \"\",\n      \"children\": [\n       \"IPY_MODEL_f00d08843e1a4e9e911a8d9fd11f04d1\",\n       \"IPY_MODEL_de26b5b4f1be42cba2951f528f7715ba\",\n       \"IPY_MODEL_d5d290cde75d463ba7b9b220eed79ca7\"\n      ],\n      \"layout\": \"IPY_MODEL_e30953017bae40849979501dbb4647bc\"\n     }\n    },\n    \"f00d08843e1a4e9e911a8d9fd11f04d1\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_08b2e55d6325474da282c48e0f959a56\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_542e865145b547ffbe61dec7fb94bab7\",\n      \"value\": \"Downloading: 100%\"\n     }\n    },\n    \"de26b5b4f1be42cba2951f528f7715ba\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"FloatProgressModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"FloatProgressModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"ProgressView\",\n      \"bar_style\": \"success\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_1ac6bf7c4d7d4fbd8d1c85ec426854db\",\n      \"max\": 231508,\n      \"min\": 0,\n      \"orientation\": \"horizontal\",\n      \"style\": \"IPY_MODEL_808d2ba240c241e9a6989a03c4134a33\",\n      \"value\": 231508\n     }\n    },\n    \"d5d290cde75d463ba7b9b220eed79ca7\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_886551311e7d4ee9823ecd34dfc82811\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_988ec5ae620d4d67b6749ee92a2cb560\",\n      \"value\": \" 232k/232k [00:00&lt;00:00, 209kB/s]\"\n     }\n    },\n    \"e30953017bae40849979501dbb4647bc\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      
\"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"08b2e55d6325474da282c48e0f959a56\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"542e865145b547ffbe61dec7fb94bab7\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"1ac6bf7c4d7d4fbd8d1c85ec426854db\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": 
\"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"808d2ba240c241e9a6989a03c4134a33\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"ProgressStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"ProgressStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"bar_color\": null,\n      \"description_width\": \"\"\n     }\n    },\n    \"886551311e7d4ee9823ecd34dfc82811\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      
\"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"988ec5ae620d4d67b6749ee92a2cb560\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"9463e5ed29e74f05869715f4669d1fa5\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HBoxModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HBoxModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HBoxView\",\n      \"box_style\": \"\",\n      \"children\": [\n       \"IPY_MODEL_5e57195d10d7414c9f418af4e7eca84a\",\n       \"IPY_MODEL_e4bb3941e21d45f2b2327690b4d589bf\",\n       \"IPY_MODEL_321f61ce086b4ace9260a2d55afbdefa\"\n      ],\n      \"layout\": \"IPY_MODEL_94f8b1fb0c764cfa9af078bd238623d4\"\n     }\n    },\n    \"5e57195d10d7414c9f418af4e7eca84a\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_621130d9d8cf468abe5709a85f07d106\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_1453b743641b45758303e91bfedafe03\",\n      \"value\": \"Downloading: 100%\"\n     }\n    },\n    \"e4bb3941e21d45f2b2327690b4d589bf\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"FloatProgressModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"FloatProgressModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"ProgressView\",\n      \"bar_style\": \"success\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_a7aeaf582d15403dbe447d57789b691f\",\n      \"max\": 466062,\n      \"min\": 0,\n      \"orientation\": \"horizontal\",\n      \"style\": \"IPY_MODEL_d1d0b028b6c04d59ada3bdfb7efde504\",\n      \"value\": 466062\n     }\n    },\n    \"321f61ce086b4ace9260a2d55afbdefa\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": 
\"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_0a203635e4a54efd96b85633164a067d\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_c55350fd925a454eae62f9da4ed21962\",\n      \"value\": \" 466k/466k [00:01&lt;00:00, 503kB/s]\"\n     }\n    },\n    \"94f8b1fb0c764cfa9af078bd238623d4\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"621130d9d8cf468abe5709a85f07d106\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      
\"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"1453b743641b45758303e91bfedafe03\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"a7aeaf582d15403dbe447d57789b691f\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"d1d0b028b6c04d59ada3bdfb7efde504\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"ProgressStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"ProgressStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"bar_color\": null,\n      \"description_width\": \"\"\n     }\n    },\n    \"0a203635e4a54efd96b85633164a067d\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": 
\"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"c55350fd925a454eae62f9da4ed21962\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"21dd3d5e1468453ab2f81d5e184a990e\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HBoxModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HBoxModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HBoxView\",\n      \"box_style\": \"\",\n      \"children\": [\n       \"IPY_MODEL_a18149514fe94397abd4bcafd4df0807\",\n       \"IPY_MODEL_c73b50f20f5f4b6c8ccca8e1ec61e738\",\n       \"IPY_MODEL_fb17189f06074ca39d8251ea2ece15f3\"\n      ],\n      \"layout\": \"IPY_MODEL_b379be7248c84e88b3a5bc8362e56e2f\"\n     }\n    },\n    \"a18149514fe94397abd4bcafd4df0807\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_7e85b97fbf8642ecb7613a8b3646b6dc\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_f13fd92805504c0dae5b22e404c256fa\",\n      \"value\": \"Downloading: 100%\"\n     }\n    },\n    \"c73b50f20f5f4b6c8ccca8e1ec61e738\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"FloatProgressModel\",\n     \"model_module_version\": 
\"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"FloatProgressModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"ProgressView\",\n      \"bar_style\": \"success\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_86c67fc1ac0d47bbace0fe3b6f24c7ce\",\n      \"max\": 440473133,\n      \"min\": 0,\n      \"orientation\": \"horizontal\",\n      \"style\": \"IPY_MODEL_46568a6cac834c86854b6e41c7e7219a\",\n      \"value\": 440473133\n     }\n    },\n    \"fb17189f06074ca39d8251ea2ece15f3\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"HTMLModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_dom_classes\": [],\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"HTMLModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/controls\",\n      \"_view_module_version\": \"1.5.0\",\n      \"_view_name\": \"HTMLView\",\n      \"description\": \"\",\n      \"description_tooltip\": null,\n      \"layout\": \"IPY_MODEL_fd38ef9382a6488a8be23d5bdb1fb533\",\n      \"placeholder\": \"​\",\n      \"style\": \"IPY_MODEL_e01ecdf66c3143809825cbbad4aaeebb\",\n      \"value\": \" 440M/440M [00:07&lt;00:00, 52.8MB/s]\"\n     }\n    },\n    \"b379be7248c84e88b3a5bc8362e56e2f\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"7e85b97fbf8642ecb7613a8b3646b6dc\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      
\"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"f13fd92805504c0dae5b22e404c256fa\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    },\n    \"86c67fc1ac0d47bbace0fe3b6f24c7ce\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"46568a6cac834c86854b6e41c7e7219a\": {\n     \"model_module\": 
\"@jupyter-widgets/controls\",\n     \"model_name\": \"ProgressStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"ProgressStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"bar_color\": null,\n      \"description_width\": \"\"\n     }\n    },\n    \"fd38ef9382a6488a8be23d5bdb1fb533\": {\n     \"model_module\": \"@jupyter-widgets/base\",\n     \"model_name\": \"LayoutModel\",\n     \"model_module_version\": \"1.2.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/base\",\n      \"_model_module_version\": \"1.2.0\",\n      \"_model_name\": \"LayoutModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"LayoutView\",\n      \"align_content\": null,\n      \"align_items\": null,\n      \"align_self\": null,\n      \"border\": null,\n      \"bottom\": null,\n      \"display\": null,\n      \"flex\": null,\n      \"flex_flow\": null,\n      \"grid_area\": null,\n      \"grid_auto_columns\": null,\n      \"grid_auto_flow\": null,\n      \"grid_auto_rows\": null,\n      \"grid_column\": null,\n      \"grid_gap\": null,\n      \"grid_row\": null,\n      \"grid_template_areas\": null,\n      \"grid_template_columns\": null,\n      \"grid_template_rows\": null,\n      \"height\": null,\n      \"justify_content\": null,\n      \"justify_items\": null,\n      \"left\": null,\n      \"margin\": null,\n      \"max_height\": null,\n      \"max_width\": null,\n      \"min_height\": null,\n      \"min_width\": null,\n      \"object_fit\": null,\n      \"object_position\": null,\n      \"order\": null,\n      \"overflow\": null,\n      \"overflow_x\": null,\n      \"overflow_y\": null,\n      \"padding\": null,\n      \"right\": null,\n      \"top\": null,\n      \"visibility\": null,\n      \"width\": null\n     }\n    },\n    \"e01ecdf66c3143809825cbbad4aaeebb\": {\n     \"model_module\": \"@jupyter-widgets/controls\",\n     \"model_name\": \"DescriptionStyleModel\",\n     \"model_module_version\": \"1.5.0\",\n     \"state\": {\n      \"_model_module\": \"@jupyter-widgets/controls\",\n      \"_model_module_version\": \"1.5.0\",\n      \"_model_name\": \"DescriptionStyleModel\",\n      \"_view_count\": null,\n      \"_view_module\": \"@jupyter-widgets/base\",\n      \"_view_module_version\": \"1.2.0\",\n      \"_view_name\": \"StyleView\",\n      \"description_width\": \"\"\n     }\n    }\n   }\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "requirements.txt",
    "content": "Cython==0.29.21\nPYEVALB==0.1.3\nallennlp-models==1.2.2\nallennlp==1.2.2\nbotocore==1.19.46\nfire==0.3.1\nnltk==3.6.6\nnumpy==1.21.5\npandas==1.1.5\npydantic==1.6.2\nscikit-learn==0.22.2.post1\ntorch==1.7.0\ntorchvision==0.8.1\ntransformers==3.4.0\n"
  },
  {
    "path": "sample_data.txt",
    "content": "I charge it at night and skip taking the cord with me because of the good battery life .#### #### ####[([16, 17], [15], 'POS')]\nit is of high quality , has a killer GUI , is extremely stable , is highly expandable , is bundled with lots of very good applications , is easy to use , and is absolutely gorgeous .#### #### ####[([4], [3], 'POS'), ([9], [8], 'POS'), ([26], [25], 'POS'), ([31], [29], 'POS')]\nEasy to start up and does not overheat as much as other laptops .#### #### ####[([2, 3], [0], 'POS')]\nGreat laptop that offers many great features !#### #### ####[([6], [5], 'POS')]\nOne night I turned the freaking thing off after using it , the next day I turn it on , no GUI , screen all dark , power light steady , hard drive light steady and not flashing as it usually does .#### #### ####[([21], [20], 'NEG'), ([23], [25], 'NEG'), ([27, 28], [29], 'NEU'), ([31, 32, 33], [34], 'NEG')]\nHowever , the multi-touch gestures and large tracking area make having an external mouse unnecessary ( unless you 're gaming ) .#### #### ####[([12, 13], [14], 'NEU')]\nI love the way the entire suite of software works together .#### #### ####[([6, 7, 8], [1], 'POS')]\nThe speed is incredible and I am more than satisfied .#### #### ####[([1], [3], 'POS'), ([1], [9], 'POS')]\nI can barely use any usb devices because they will not stay connected properly .#### #### ####[([5, 6], [10, 11, 12, 13], 'NEG')]\nWhen I finally had everything running with all my software installed I plugged in my droid to recharge and the system crashed .#### #### ####[([20], [21], 'NEG')]\nPairing it with an iPhone is a pure pleasure - talk about painless syncing - used to take me forever - now it 's a snap .#### #### ####[([13], [12], 'POS')]\nI also got the added bonus of a 30 '' HD Monitor , which really helps to extend my screen and keep my eyes fresh !#### #### ####[([19], [17], 'NEU')]\nThe machine is slow to boot up and occasionally crashes completely .#### #### ####[([5, 6], [3], 'NEG')]\nAfter paying several hundred dollars for this service , it is frustrating that you can not get help after hours .#### #### ####[([7], [11], 'NEG')]\nI love the operating system and the preloaded software .#### #### ####[([3, 4], [1], 'POS'), ([7, 8], [1], 'POS')]\nThe best thing about this laptop is the price along with some of the newer features .#### #### ####[([8], [1], 'POS'), ([15], [1], 'POS'), ([15], [14], 'POS')]\nYOU WILL NOT BE ABLE TO TALK TO AN AMERICAN WARRANTY SERVICE IS OUT OF COUNTRY .#### #### ####[([10, 11], [2, 3, 4], 'NEG')]\nbut now i have realized its a problem with this brand .#### #### ####[([10], [7], 'NEG')]\nAlso kinda loud when the fan was running .#### #### ####[([5], [2], 'NEG')]\nThere also seemed to be a problem with the hard disc as certain times windows loads but claims to not be able to find any drivers or files .#### #### ####[([9, 10], [6], 'NEG')]\nSpeaking of the browser , it too has problems .#### #### ####[([3], [8], 'NEG')]\nThe keyboard is too slick .#### #### ####[([1], [4], 'NEG')]\nIt 's like 9 punds , but if you can look past it , it 's GREAT !#### #### ####[([3, 4], [16], 'NEG')]\nIt 's just as fast with one program open as it is with sixteen open .#### #### ####[([7], [4], 'NEU')]\nStill under warrenty so called Toshiba , no help at all .#### #### ####[([2], [7, 8], 'NEG')]\nAmazing Quality !#### #### ####[([1], [0], 'POS')]\nA month or so ago , the freaking motherboard just died .#### #### ####[([8], [7], 'NEG'), ([8], [10], 'NEG')]\nThe system it comes with does not work 
properly , so when trying to fix the problems with it it started not working at all .#### #### ####[([1], [6, 7, 8], 'NEG')]\nThen after 4 or so months the charger stopped working so I was forced to go out and buy new hardware just to keep this computer running .#### #### ####[([7], [8, 9], 'NEG')]\nIf a website ever freezes ( which is rare ) , its really easy to force quit .#### #### ####[([15, 16], [13], 'POS')]\nI also enjoy the fact that my MacBook Pro laptop allows me to run Windows 7 on it by using the VMWare program .#### #### ####[([14, 15], [2], 'POS'), ([21, 22], [2], 'NEU')]\nI wanted to purchase the extended warranty and they refused , because they knew it was trouble .#### #### ####[([5, 6], [9], 'NEU')]\nWe upgraded the memory to four gigabytes in order to take advantage of the performace increase in speed .#### #### ####[([17], [15], 'POS'), ([14], [15], 'POS')]\nThe reality was , it heated up very quickly , and took way too long to do simple things , like opening my Documents folder .#### #### ####[([21, 22, 23, 24], [14], 'NEG')]\nI had always used PCs and been constantly frustrated by the crashing and the poorly designed operating systems that were never very intuitive .#### #### ####[([16, 17], [14, 15], 'NEG'), ([16, 17], [22], 'NEG')]\nThen , within 5 months , the charger crapped out on me .#### #### ####[([7], [8], 'NEG')]\nAnd if you have a iphone or ipod touch you can connect and download songs to it at high speed .#### #### ####[([19], [18], 'POS')]\nI love the glass touchpad .#### #### ####[([3, 4], [1], 'POS')]\nI continued to take the computer in AGAIN and they replaced the hard drive and mother board yet again .#### #### ####[([12, 13], [10], 'NEG'), ([15, 16], [10], 'NEG')]\nThen HP sends it back to me with the hardware screwed up , not able to connect .#### #### ####[([9], [10, 11], 'NEG')]\nOh yeah , do n't forget the expensive shipping to and from HP .#### #### ####[([8], [7], 'NEG')]\nEverything is so easy to use , Mac software is just so much simpler than Microsoft software .#### #### ####[([7, 8], [3], 'POS'), ([15, 16], [13, 14], 'NEG'), ([5], [3], 'POS')]\nAnd if you do a lot of writing , editing is a problem since there is no forward delete key .#### #### ####[([9], [12], 'NEG')]\nIts ease of use and the top service from Apple- be it their phone assistance or bellying up to the genius bar -- can not be beat .#### #### ####[([3], [1], 'POS'), ([7], [6], 'POS'), ([13, 14], [23, 24, 25, 26], 'POS'), ([20, 21], [23, 24, 25, 26], 'POS')]\nIt has a 10 hour battery life when you 're doing web browsing and word editing , making it perfect for the classroom or office , and in terms of gaming and movie playing it 'll have a battery life of just over 5 hours .#### #### ####[([5, 6], [19], 'POS'), ([11, 12], [19], 'NEU'), ([14, 15], [19], 'NEU')]\nAcer has set me up with FREE recovery discs , when they are available since I asked .#### #### ####[([7, 8], [6], 'POS')]\nEnabling the battery timer is useless .#### #### ####[([2, 3], [5], 'NEG')]\nThere is no need to purchase virus protection for Mac , which saves me a lot of time and money .#### #### ####[([6, 7, 8, 9], [12], 'POS')]\nBut we had paid for bluetooth , and there was none .#### #### ####[([5], [10], 'NEG')]\nIt is always reliable , never bugged and responds well .#### #### ####[([8], [9], 'POS')]\nthey had to replace the motherboard in April#### #### ####[([5], [3], 'NEG')]\nAlso , if you need to talk to a representive at Microsoft , there is a charge , which I believe is robbery , since you are charged 
enormous amounts for a very badly designed system , which most people would have went with XP if they could .#### #### ####[([35], [33, 34], 'NEG')]\nI love WIndows 7 which is a vast improvment over Vista .#### #### ####[([2, 3], [1], 'POS')]\nDell 's customer disservice is an insult to it 's customers who pay good money for shoddy products .#### #### ####[([0, 1, 2, 3], [6], 'NEG')]\nAfter talking it over with the very knowledgeable sales associate , I chose the MacBook Pro over the white MacBook .#### #### ####[([8, 9], [7], 'POS')]\nIf you really want a bang-up system and do n't need to run Windows applications , go with an Apple ;#### #### ####[([6], [5], 'POS')]\nYou wo n't have to spend gobs of money on some inefficient virus program that needs to be updated every month and that constantly drains your wallet .#### #### ####[([12, 13], [11], 'NEG')]\nIt 's color is even cool .#### #### ####[([2], [5], 'POS')]\nkeys are all in weird places and is way too large for the way it is designed .#### #### ####[([0], [4], 'NEG'), ([0], [10], 'NEG')]\nYes , a Mac is much more money than the average laptop out there , but there is no comparison in style , speed and just cool factor .#### #### ####[([21], [18, 19], 'POS'), ([23], [18, 19], 'POS')]\nAnd not to mention after using it for a few months or so , the battery will slowly less and less hold a charge until you ca n't leave it unplugged for more than 5 minutes without the thing dying .#### #### ####[([15], [18], 'NEG')]\nBEST BUY - 5 STARS + + + ( sales , service , respect for old men who are n't familiar with the technology ) DELL COMPUTERS - 3 stars DELL SUPPORT - owes a me a couple#### #### ####[([9], [0], 'POS'), ([11], [0], 'POS')]\nno complaints with their desktop , and maybe because it just sits on your desktop , and you do n't carry it around , which could jar the hard drive , or the motherboard .#### #### ####[([28, 29], [26], 'NEU'), ([33], [26], 'NEU')]\nYes , they cost more , but they more than make up for it in speed , construction quality , and longevity .#### #### ####[([15], [10, 11], 'POS'), ([17, 18], [10, 11], 'POS'), ([21], [10, 11], 'POS')]\nIt absolutely is more expensive than most PC laptops , but the ease of use , security , and minimal problems that have arisen make it well worth the pricetag .#### #### ####[([14], [12], 'POS')]\nIt gets stuck all of the time you use it , and you have to keep tapping on it to get it to work .#### #### ####[([8], [2], 'NEG')]\nlots of preloaded software .#### #### ####[([2, 3], [0], 'POS')]\nI wish it had a webcam though , then it would be perfect !#### #### ####[([5], [1], 'NEG')]\nAnother thing I might add is the battery life is excellent .#### #### ####[([7, 8], [10], 'POS')]\nOne drawback , I wish the keys were backlit .#### #### ####[([6], [1], 'NEG')]\nI wish the volume could be louder and the mouse didnt break after only a month .#### #### ####[([3], [6], 'NEG'), ([9], [10, 11], 'NEG')]\nI play a lot of casual games online , and the touchpad is very responsive .#### #### ####[([11], [14], 'POS')]\nIt is everything I 'd hoped it would be from a look and feel standpoint , but somehow a bit more sturdy .#### #### ####[([11, 12, 13, 14], [21], 'POS')]\nthe only fact i dont like about apples is they generally use safari and i dont use safari but after i install Mozzilla firfox i love every single bit about it .#### #### ####[([17], [15, 16], 'NEG'), ([22, 23], [25], 'POS')]\nGreat battery , speed , display .#### #### ####[([1], [0], 'POS'), ([3], [0], 'POS'), ([5], [0], 'POS')]\nThe 
delivery was fast , and I would not hesitate to purchase this laptop again .#### #### ####[([1], [3], 'POS')]\nI 've been impressed with the battery life and the performance for such a small amount of memory .#### #### ####[([6, 7], [3], 'POS'), ([10], [14], 'POS'), ([17], [14], 'NEG')]\nIt 's applications are terrific , including the replacements for Microsoft office .#### #### ####[([2], [4], 'POS')]\nI got it back and my built-in webcam and built-in mic were shorting out anytime I touched the lid , ( mind you this was my means of communication with my fiance who was deployed ) but I suffered thru it and would constandly have to reset the computer to be able to use my cam and mic anytime they went out .#### #### ####[([6, 7], [12, 13], 'NEG'), ([9, 10], [12, 13], 'NEG'), ([55], [38], 'NEG'), ([10], [38], 'NEG')]\nThe board has a bad connector with the power supply and shortly after warrenty expires the power supply will start having issues .#### #### ####[([1], [4], 'NEG'), ([5], [4], 'NEG'), ([16, 17], [21], 'NEG')]\nMy dad has one of the very first Toshibas ever made , yes its abit slow now but still works well and i hooked to my ethernet !#### #### ####[([19], [20], 'POS')]\nMostly I love the drag and drop feature .#### #### ####[([4, 5, 6, 7], [2], 'POS')]\noh yeah , and if the fancy webcam breaks guess who you have to send it to to get it fixed ?#### #### ####[([7], [6], 'NEG')]\nI ordered through MacMall , which saved me the sales tax I would have incurred buying locally .#### #### ####[([9, 10], [6], 'POS')]\nOf course , I also have several great software packages that came for free including iWork , GarageBand , and iMovie .#### #### ####[([8, 9], [7], 'POS'), ([15], [13], 'POS'), ([17], [13], 'POS'), ([20], [13], 'POS')]\nThe screen is very large and crystal clear with amazing colors and resolution .#### #### ####[([1], [4], 'POS'), ([1], [7], 'POS'), ([10], [9], 'POS'), ([12], [9], 'POS')]\nAfter a little more than a year of owning my MacBook Pro , the monitor has completely died .#### #### ####[([14], [17], 'NEG')]\nThe brand of iTunes has just become ingrained in our lexicon now , but keep in mind that Apple started it all .#### #### ####[([3], [7], 'POS')]\nSize : I know 13 is small ( especially for a desktop replacement ) but with an external monitor , who cares .#### #### ####[([0], [6], 'NEG'), ([17, 18], [6], 'NEU')]\nThe display is incredibly bright , much brighter than my PowerBook and very crisp .#### #### ####[([1], [4], 'POS'), ([1], [13], 'POS')]\nThe start menu is not the easiest thing to navigate due to the stacking .#### #### ####[([1, 2], [4, 5, 6], 'NEG')]\nReally like the textured surface which shows no fingerprints .#### #### ####[([4], [1], 'POS')]\nThe screen is bright and the keyboard is nice ;#### #### ####[([1], [3], 'POS'), ([6], [8], 'POS')]\nBut the machine is awesome and iLife is great and I love Snow Leopard X .#### #### ####[([6], [8], 'POS'), ([12, 13, 14], [11], 'POS')]\nI thought learning the Mac OS would be hard , but it is easily picked up if you are familiar with a PC .#### #### ####[([4, 5], [13], 'POS')]\nIt is easy to use and lightweight .#### #### ####[([4], [2], 'POS')]\nThey also have a longer service life than other computers ( I have several friends who still use the older Apple PowerBooks ) .#### #### ####[([5, 6], [4], 'POS')]\nIf you check you will find the same notebook with the above missing ports and a dual core AMD or Intel processor .#### #### ####[([13], [12], 'NEU')]\nThis laptop is a great price and has a sleek look .#### 
#### ####[([5], [4], 'POS'), ([10], [9], 'POS')]\nI especially like the keyboard which has chiclet type keys .#### #### ####[([4], [2], 'POS'), ([9], [7, 8], 'POS')]\n"
  },
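  {
    "path": "examples/parse_sample_data_sketch.py",
    "content": "# Hypothetical usage sketch, not part of the original release: shows how a\n# line of sample_data.txt could be parsed, assuming the\n# sentence#### #### ####[triplet_0, ..., triplet_n] layout of that file.\n# The separator string and span conventions are read off the sample file\n# itself; the file name and helper functions here are illustrative only.\nimport ast\n\n\ndef parse_line(line: str):\n    # Split the raw sentence from the serialized triplet list.\n    sentence, _, raw = line.partition(\"#### #### ####\")\n    tokens = sentence.split()\n    # The triplet list is a Python literal such as [([16, 17], [15], 'POS')].\n    triplets = ast.literal_eval(raw.strip())\n    return tokens, triplets\n\n\ndef span_text(tokens, span):\n    # A span is [index] for one word or [first, last] (inclusive) otherwise.\n    return \" \".join(tokens[span[0] : span[-1] + 1])\n\n\nif __name__ == \"__main__\":\n    demo = \"The keyboard is too slick .#### #### ####[([1], [4], 'NEG')]\"\n    tokens, triplets = parse_line(demo)\n    for target, opinion, polarity in triplets:\n        print(span_text(tokens, target), span_text(tokens, opinion), polarity)\n"
  },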
  {
    "path": "setup.sh",
    "content": "#!/usr/bin/env bash\nset -e\n\n# Main Requirements\npip install -r requirements.txt\npip uninstall dataclasses -y # Compatibility issue with Python > 3.6\n\n# Optional: Set up NLTK packages\nif [[ -f punkt.zip ]]; then\n\tmkdir -p /home/admin/nltk_data/tokenizers\n\tcp punkt.zip /home/admin/nltk_data/tokenizers\nfi\nif [[ -f wordnet.zip ]]; then\n\tmkdir -p /home/admin/nltk_data/corpora\n\tcp wordnet.zip /home/admin/nltk_data/corpora\nfi\n\n# Make sample data for quick debugging\nunzip -n data.zip -d aste/\ncd aste/data/triplet_data\nmkdir -p sample\nhead -n 32 14lap/train.txt >sample/train.txt\nhead -n 32 14lap/dev.txt >sample/dev.txt\nhead -n 32 14lap/test.txt >sample/test.txt\ncd ../../..\n"
  },
  {
    "path": "span_model/__init__.py",
    "content": ""
  },
  {
    "path": "span_model/data/__init__.py",
    "content": "from span_model.data.dataset_readers.span_model import SpanModelReader\nfrom span_model.data.dataset_readers.document import Document\n"
  },
  {
    "path": "span_model/data/dataset_readers/document.py",
    "content": "from span_model.models.shared import fields_to_batches, batches_to_fields\nimport copy\nimport numpy as np\nimport re\nimport json\n\n\ndef format_float(x):\n    return round(x, 4)\n\n\nclass SpanCrossesSentencesError(ValueError):\n    pass\n\n\ndef get_sentence_of_span(span, sentence_starts, doc_tokens):\n    \"\"\"\n    Return the index of the sentence that the span is part of.\n    \"\"\"\n    # Inclusive sentence ends\n    sentence_ends = [x - 1 for x in sentence_starts[1:]] + [doc_tokens - 1]\n    in_between = [\n        span[0] >= start and span[1] <= end\n        for start, end in zip(sentence_starts, sentence_ends)\n    ]\n    if sum(in_between) != 1:\n        raise SpanCrossesSentencesError\n    the_sentence = in_between.index(True)\n    return the_sentence\n\n\nclass Dataset:\n    def __init__(self, documents):\n        self.documents = documents\n\n    def __getitem__(self, i):\n        return self.documents[i]\n\n    def __len__(self):\n        return len(self.documents)\n\n    def __repr__(self):\n        return f\"Dataset with {self.__len__()} documents.\"\n\n    @classmethod\n    def from_jsonl(cls, fname):\n        documents = []\n        with open(fname, \"r\") as f:\n            for line in f:\n                doc = Document.from_json(json.loads(line))\n                documents.append(doc)\n\n        return cls(documents)\n\n    def to_jsonl(self, fname):\n        to_write = [doc.to_json() for doc in self]\n        with open(fname, \"w\") as f:\n            for entry in to_write:\n                print(json.dumps(entry), file=f)\n\n\nclass Document:\n    def __init__(\n        self,\n        doc_key,\n        dataset,\n        sentences,\n        weight=None,\n    ):\n        self.doc_key = doc_key\n        self.dataset = dataset\n        self.sentences = sentences\n        self.weight = weight\n\n    @classmethod\n    def from_json(cls, js):\n        \"Read in from json-loaded dict.\"\n        cls._check_fields(js)\n        doc_key = js[\"doc_key\"]\n        dataset = js.get(\"dataset\")\n        entries = fields_to_batches(\n            js,\n            [\n                \"doc_key\",\n                \"dataset\",\n                \"weight\",\n            ],\n        )\n        sentence_lengths = [len(entry[\"sentences\"]) for entry in entries]\n        sentence_starts = np.cumsum(sentence_lengths)\n        sentence_starts = np.roll(sentence_starts, 1)\n        sentence_starts[0] = 0\n        sentence_starts = sentence_starts.tolist()\n        sentences = [\n            Sentence(entry, sentence_start, sentence_ix)\n            for sentence_ix, (entry, sentence_start) in enumerate(\n                zip(entries, sentence_starts)\n            )\n        ]\n\n        # Get the loss weight for this document.\n        weight = js.get(\"weight\", None)\n\n        return cls(\n            doc_key,\n            dataset,\n            sentences,\n            weight,\n        )\n\n    @staticmethod\n    def _check_fields(js):\n        \"Make sure we only have allowed fields.\"\n        allowed_field_regex = (\n            \"doc_key|dataset|sentences|weight|.*ner$|\"\n            \".*relations$|.*clusters$|.*events$|^_.*\"\n        )\n        allowed_field_regex = re.compile(allowed_field_regex)\n        unexpected = []\n        for field in js.keys():\n            if not allowed_field_regex.match(field):\n                unexpected.append(field)\n\n        if unexpected:\n            msg = f\"The following unexpected fields should be prefixed with an underscore: 
{', '.join(unexpected)}.\"\n            raise ValueError(msg)\n\n    def to_json(self):\n        \"Write to json dict.\"\n        res = {\"doc_key\": self.doc_key, \"dataset\": self.dataset}\n        sents_json = [sent.to_json() for sent in self]\n        fields_json = batches_to_fields(sents_json)\n        res.update(fields_json)\n\n        if self.weight is not None:\n            res[\"weight\"] = self.weight\n\n        return res\n\n    # TODO: Write a unit test to make sure this does the correct thing.\n    def split(self, max_tokens_per_doc):\n        \"\"\"\n        Greedily split a long document into smaller documents, each shorter than\n        `max_tokens_per_doc`. Each split document will get the same weight as its parent.\n        \"\"\"\n        # If the document is already short enough, return it as a list with a single item.\n        if self.n_tokens <= max_tokens_per_doc:\n            return [self]\n\n        sentences = copy.deepcopy(self.sentences)\n\n        sentence_groups = []\n        current_group = []\n        group_length = 0\n        sentence_tok_offset = 0\n        sentence_ix_offset = 0\n        for sentence in sentences:\n            # Can't deal with single sentences longer than the limit.\n            if len(sentence) > max_tokens_per_doc:\n                msg = f\"Sentence \\\"{''.join(sentence.text)}\\\" has more than {max_tokens_per_doc} tokens. Please split this sentence.\"\n                raise ValueError(msg)\n\n            if group_length + len(sentence) <= max_tokens_per_doc:\n                # If we're not at the limit, add it to the current sentence group.\n                sentence.sentence_start -= sentence_tok_offset\n                sentence.sentence_ix -= sentence_ix_offset\n                current_group.append(sentence)\n                group_length += len(sentence)\n            else:\n                # Otherwise, start a new sentence group and adjust sentence offsets.\n                sentence_groups.append(current_group)\n                sentence_tok_offset = sentence.sentence_start\n                sentence_ix_offset = sentence.sentence_ix\n                sentence.sentence_start -= sentence_tok_offset\n                sentence.sentence_ix -= sentence_ix_offset\n                current_group = [sentence]\n                group_length = len(sentence)\n\n        # Add the final sentence group.\n        sentence_groups.append(current_group)\n\n        # Create a separate document for each sentence group.\n        doc_keys = [f\"{self.doc_key}_SPLIT_{i}\" for i in range(len(sentence_groups))]\n        res = [\n            self.__class__(\n                doc_key,\n                self.dataset,\n                sentence_group,\n                self.weight,\n            )\n            for doc_key, sentence_group in zip(doc_keys, sentence_groups)\n        ]\n\n        return res\n\n    def __repr__(self):\n        return \"\\n\".join(\n            [\n                str(i) + \": \" + \" \".join(sent.text)\n                for i, sent in enumerate(self.sentences)\n            ]\n        )\n\n    def __getitem__(self, ix):\n        return self.sentences[ix]\n\n    def __len__(self):\n        return len(self.sentences)\n\n    def print_plaintext(self):\n        for sent in self:\n            print(\" \".join(sent.text))\n\n    @property\n    def n_tokens(self):\n        return sum([len(sent) for sent in self.sentences])\n\n\nclass Sentence:\n    def __init__(self, entry, sentence_start, sentence_ix):\n        self.sentence_start = sentence_start\n     
   self.sentence_ix = sentence_ix\n        self.text = entry[\"sentences\"]\n\n        # Metadata fields are prefixed with a `_`.\n        self.metadata = {k: v for k, v in entry.items() if re.match(\"^_\", k)}\n\n        if \"ner\" in entry:\n            self.ner = [NER(this_ner, self) for this_ner in entry[\"ner\"]]\n            self.ner_dict = {entry.span.span_sent: entry.label for entry in self.ner}\n        else:\n            self.ner = None\n            self.ner_dict = None\n\n        # Predicted ner.\n        if \"predicted_ner\" in entry:\n            self.predicted_ner = [\n                PredictedNER(this_ner, self) for this_ner in entry[\"predicted_ner\"]\n            ]\n        else:\n            self.predicted_ner = None\n\n        # Store relations.\n        if \"relations\" in entry:\n            self.relations = [\n                Relation(this_relation, self) for this_relation in entry[\"relations\"]\n            ]\n            relation_dict = {}\n            for rel in self.relations:\n                key = (rel.pair[0].span_sent, rel.pair[1].span_sent)\n                relation_dict[key] = rel.label\n            self.relation_dict = relation_dict\n        else:\n            self.relations = None\n            self.relation_dict = None\n\n        # Predicted relations.\n        if \"predicted_relations\" in entry:\n            self.predicted_relations = [\n                PredictedRelation(this_relation, self)\n                for this_relation in entry[\"predicted_relations\"]\n            ]\n        else:\n            self.predicted_relations = None\n\n    def to_json(self):\n        res = {\"sentences\": self.text}\n        if self.ner is not None:\n            res[\"ner\"] = [entry.to_json() for entry in self.ner]\n        if self.predicted_ner is not None:\n            res[\"predicted_ner\"] = [entry.to_json() for entry in self.predicted_ner]\n        if self.relations is not None:\n            res[\"relations\"] = [entry.to_json() for entry in self.relations]\n        if self.predicted_relations is not None:\n            res[\"predicted_relations\"] = [\n                entry.to_json() for entry in self.predicted_relations\n            ]\n\n        for k, v in self.metadata.items():\n            res[k] = v\n\n        return res\n\n    def __repr__(self):\n        the_text = \" \".join(self.text)\n        the_lengths = [len(x) for x in self.text]\n        tok_ixs = \"\"\n        for i, offset in enumerate(the_lengths):\n            true_offset = offset if i < 10 else offset - 1\n            tok_ixs += str(i)\n            tok_ixs += \" \" * true_offset\n\n        return the_text + \"\\n\" + tok_ixs\n\n    def __len__(self):\n        return len(self.text)\n\n\nclass Span:\n    def __init__(self, start, end, sentence, sentence_offsets=False):\n        # The `start` and `end` are relative to the document. 
We convert them to be relative to the\n        # sentence.\n        self.sentence = sentence\n        # Need to store the sentence text to make span objects hashable.\n        self.sentence_text = \" \".join(sentence.text)\n        self.start_sent = start if sentence_offsets else start - sentence.sentence_start\n        self.end_sent = end if sentence_offsets else end - sentence.sentence_start\n\n    @property\n    def start_doc(self):\n        return self.start_sent + self.sentence.sentence_start\n\n    @property\n    def end_doc(self):\n        return self.end_sent + self.sentence.sentence_start\n\n    @property\n    def span_doc(self):\n        return (self.start_doc, self.end_doc)\n\n    @property\n    def span_sent(self):\n        return (self.start_sent, self.end_sent)\n\n    @property\n    def text(self):\n        return self.sentence.text[self.start_sent : self.end_sent + 1]\n\n    def __repr__(self):\n        return str((self.start_sent, self.end_sent, self.text))\n\n    def __eq__(self, other):\n        return (\n            self.span_doc == other.span_doc\n            and self.span_sent == other.span_sent\n            and self.sentence == other.sentence\n        )\n\n    def __hash__(self):\n        tup = self.span_sent + (self.sentence_text,)\n        return hash(tup)\n\n\nclass Token:\n    def __init__(self, ix, sentence, sentence_offsets=False):\n        self.sentence = sentence\n        self.ix_sent = ix if sentence_offsets else ix - sentence.sentence_start\n\n    @property\n    def ix_doc(self):\n        return self.ix_sent + self.sentence.sentence_start\n\n    @property\n    def text(self):\n        return self.sentence.text[self.ix_sent]\n\n    def __repr__(self):\n        return str((self.ix_sent, self.text))\n\n\nclass NER:\n    def __init__(self, ner, sentence, sentence_offsets=False):\n        self.span = Span(ner[0], ner[1], sentence, sentence_offsets)\n        self.label = ner[2]\n\n    def __repr__(self):\n        return f\"{self.span.__repr__()}: {self.label}\"\n\n    def __eq__(self, other):\n        return self.span == other.span and self.label == other.label\n\n    def to_json(self):\n        return list(self.span.span_doc) + [self.label]\n\n\nclass PredictedNER(NER):\n    def __init__(self, ner, sentence, sentence_offsets=False):\n        \"The input should be a list: [span_start, span_end, label, raw_score, softmax_score].\"\n        super().__init__(ner, sentence, sentence_offsets)\n        self.raw_score = ner[3]\n        self.softmax_score = ner[4]\n\n    def __repr__(self):\n        return super().__repr__() + f\" with confidence {self.softmax_score:0.4f}\"\n\n    def to_json(self):\n        return super().to_json() + [\n            format_float(self.raw_score),\n            format_float(self.softmax_score),\n        ]\n\n\nclass Relation:\n    def __init__(self, relation, sentence, sentence_offsets=False):\n        start1, end1 = relation[0], relation[1]\n        start2, end2 = relation[2], relation[3]\n        label = relation[4]\n        span1 = Span(start1, end1, sentence, sentence_offsets)\n        span2 = Span(start2, end2, sentence, sentence_offsets)\n        self.pair = (span1, span2)\n        self.label = label\n\n    def __repr__(self):\n        return f\"{self.pair[0].__repr__()}, {self.pair[1].__repr__()}: {self.label}\"\n\n    def __eq__(self, other):\n        return (self.pair == other.pair) and (self.label == other.label)\n\n    def to_json(self):\n        return list(self.pair[0].span_doc) + list(self.pair[1].span_doc) + 
[self.label]\n\n\nclass PredictedRelation(Relation):\n    def __init__(self, relation, sentence, sentence_offsets=False):\n        \"Input format: [start_1, end_1, start_2, end_2, label, raw_score, softmax_score].\"\n        super().__init__(relation, sentence, sentence_offsets)\n        self.raw_score = relation[5]\n        self.softmax_score = relation[6]\n\n    def __repr__(self):\n        return super().__repr__() + f\" with confidence {self.softmax_score:0.4f}\"\n\n    def to_json(self):\n        return super().to_json() + [\n            format_float(self.raw_score),\n            format_float(self.softmax_score),\n        ]\n"
  },
  {
    "path": "span_model/data/dataset_readers/span_model.py",
    "content": "import json\nimport logging\nimport pickle as pkl\nimport warnings\nfrom typing import Any, Dict\n\nfrom allennlp.common.file_utils import cached_path\nfrom allennlp.data.dataset_readers.dataset_reader import DatasetReader\nfrom allennlp.data.dataset_readers.dataset_utils import enumerate_spans\nfrom allennlp.data.fields import (\n    AdjacencyField,\n    LabelField,\n    ListField,\n    MetadataField,\n    SpanField,\n    TextField,\n)\nfrom allennlp.data.instance import Instance\nfrom allennlp.data.token_indexers import SingleIdTokenIndexer, TokenIndexer\nfrom allennlp.data.tokenizers import Token\nfrom overrides import overrides\n\nfrom span_model.data.dataset_readers.document import Document, Sentence\n\nlogger = logging.getLogger(__name__)  # pylint: disable=invalid-name\n\n\nclass SpanModelDataException(Exception):\n    pass\n\n\n@DatasetReader.register(\"span_model\")\nclass SpanModelReader(DatasetReader):\n    \"\"\"\n    Reads a single JSON-formatted file. This is the same file format as used in the\n    scierc, but is preprocessed\n    \"\"\"\n\n    def __init__(\n        self,\n        max_span_width: int,\n        token_indexers: Dict[str, TokenIndexer] = None,\n        **kwargs,\n    ) -> None:\n        super().__init__(**kwargs)\n        # New\n        self.is_train = False\n\n        print(\"#\" * 80)\n\n        self._max_span_width = max_span_width\n        self._token_indexers = token_indexers or {\"tokens\": SingleIdTokenIndexer()}\n\n    @overrides\n    def _read(self, file_path: str):\n        # if `file_path` is a URL, redirect to the cache\n        file_path = cached_path(file_path)\n\n        with open(file_path, \"r\") as f:\n            lines = f.readlines()\n\n        self.is_train = \"train\" in file_path  # New\n        for line in lines:\n            # Loop over the documents.\n            doc_text = json.loads(line)\n            instance = self.text_to_instance(doc_text)\n            yield instance\n\n    def _too_long(self, span):\n        return span[1] - span[0] + 1 > self._max_span_width\n\n    def _process_ner(self, span_tuples, sent):\n        ner_labels = [\"\"] * len(span_tuples)\n\n        for span, label in sent.ner_dict.items():\n            if self._too_long(span):\n                continue\n            # New\n            if span not in span_tuples:\n                continue\n            ix = span_tuples.index(span)\n            ner_labels[ix] = label\n\n        return ner_labels\n\n    def _process_relations(self, span_tuples, sent):\n        relations = []\n        relation_indices = []\n\n        # Loop over the gold spans. 
Look up their indices in the list of span tuples and store\n        # values.\n        for (span1, span2), label in sent.relation_dict.items():\n            # If either span is beyond the max span width, skip it.\n            if self._too_long(span1) or self._too_long(span2):\n                continue\n            # New\n            if (span1 not in span_tuples) or (span2 not in span_tuples):\n                continue\n            ix1 = span_tuples.index(span1)\n            ix2 = span_tuples.index(span2)\n            relation_indices.append((ix1, ix2))\n            relations.append(label)\n\n        return relations, relation_indices\n\n    def _process_sentence(self, sent: Sentence, dataset: str):\n        # Get the sentence text and define the `text_field`.\n        sentence_text = [self._normalize_word(word) for word in sent.text]\n        text_field = TextField(\n            [Token(word) for word in sentence_text], self._token_indexers\n        )\n\n        # Enumerate spans.\n        spans = []\n        for start, end in enumerate_spans(\n            sentence_text, max_span_width=self._max_span_width\n        ):\n            spans.append(SpanField(start, end, text_field))\n\n        # New\n        # spans = spans[:len(spans)//2]  # bug: deliberately truncate\n        # labeled:Set[Tuple[int, int]] = set([span for span,label in sent.ner_dict.items()])\n        # for span_pair, label in sent.relation_dict.items():\n        #     labeled.update(span_pair)\n        # existing:Set[Tuple[int, int]] = set([(s.span_start, s.span_end) for s in spans])\n        # for start, end in labeled:\n        #     if (start, end) not in existing:\n        #         spans.append(SpanField(start, end, text_field))\n\n        span_field = ListField(spans)\n        span_tuples = [(span.span_start, span.span_end) for span in spans]\n\n        # Convert data to fields.\n        # NOTE: The `ner_labels` and `coref_labels` would ideally have type\n        # `ListField[SequenceLabelField]`, where the sequence labels are over the `SpanField` of\n        # `spans`. But calling `as_tensor_dict()` fails on this specific data type. 
Matt G\n        # recognized that this is an AllenNLP API issue and suggested that represent these as\n        # `ListField[ListField[LabelField]]` instead.\n        fields = {}\n        fields[\"text\"] = text_field\n        fields[\"spans\"] = span_field\n\n        if sent.ner is not None:\n            ner_labels = self._process_ner(span_tuples, sent)\n            fields[\"ner_labels\"] = ListField(\n                [\n                    LabelField(entry, label_namespace=f\"{dataset}__ner_labels\")\n                    for entry in ner_labels\n                ]\n            )\n        if sent.relations is not None:\n            relation_labels, relation_indices = self._process_relations(\n                span_tuples, sent\n            )\n            fields[\"relation_labels\"] = AdjacencyField(\n                indices=relation_indices,\n                sequence_field=span_field,\n                labels=relation_labels,\n                label_namespace=f\"{dataset}__relation_labels\",\n            )\n\n        return fields\n\n    def _process_sentence_fields(self, doc: Document):\n        # Process each sentence.\n        sentence_fields = [\n            self._process_sentence(sent, doc.dataset) for sent in doc.sentences\n        ]\n\n        # Make sure that all sentences have the same set of keys.\n        first_keys = set(sentence_fields[0].keys())\n        for entry in sentence_fields:\n            if set(entry.keys()) != first_keys:\n                raise SpanModelDataException(\n                    f\"Keys do not match across sentences for document {doc.doc_key}.\"\n                )\n\n        # For each field, store the data from all sentences together in a ListField.\n        fields = {}\n        keys = sentence_fields[0].keys()\n        for key in keys:\n            this_field = ListField([sent[key] for sent in sentence_fields])\n            fields[key] = this_field\n\n        return fields\n\n    @overrides\n    def text_to_instance(self, doc_text: Dict[str, Any]):\n        \"\"\"\n        Convert a Document object into an instance.\n        \"\"\"\n        doc = Document.from_json(doc_text)\n\n        # Make sure there are no single-token sentences; these break things.\n        sent_lengths = [len(x) for x in doc.sentences]\n        if min(sent_lengths) < 2:\n            msg = (\n                f\"Document {doc.doc_key} has a sentence with a single token or no tokens. \"\n                \"This may break the modeling code.\"\n            )\n            warnings.warn(msg)\n\n        fields = self._process_sentence_fields(doc)\n        fields[\"metadata\"] = MetadataField(doc)\n\n        return Instance(fields)\n\n    @overrides\n    def _instances_from_cache_file(self, cache_filename):\n        with open(cache_filename, \"rb\") as f:\n            for entry in pkl.load(f):\n                yield entry\n\n    @overrides\n    def _instances_to_cache_file(self, cache_filename, instances):\n        with open(cache_filename, \"wb\") as f:\n            pkl.dump(instances, f, protocol=pkl.HIGHEST_PROTOCOL)\n\n    @staticmethod\n    def _normalize_word(word):\n        if word == \"/.\" or word == \"/?\":\n            return word[1:]\n        else:\n            return word\n"
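\n\n# Example usage (a minimal sketch, added for illustration; the JSON-lines path\n# below is hypothetical):\n#\n#     reader = SpanModelReader(max_span_width=8)\n#     instances = list(reader.read(\"path/to/preprocessed/train.json\"))\n"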
  },
  {
    "path": "span_model/predictors/__init__.py",
    "content": "from span_model.predictors.span_model import SpanModelPredictor\n"
  },
  {
    "path": "span_model/predictors/span_model.py",
    "content": "from typing import List\nimport numpy as np\nimport warnings\n\nfrom overrides import overrides\nimport numpy\nimport json\n\nfrom allennlp.common.util import JsonDict\nfrom allennlp.nn import util\nfrom allennlp.data import Batch\nfrom allennlp.data import DatasetReader\nfrom allennlp.models import Model\nfrom allennlp.predictors.predictor import Predictor\n\n\n@Predictor.register(\"span_model\")\nclass SpanModelPredictor(Predictor):\n    \"\"\"\n    Predictor for SpanModel model.\n\n    If model was trained on coref, prediction is done on a whole document at\n    once. This risks overflowing memory on large documents.\n    If the model was trained without coref, prediction is done by sentence.\n    \"\"\"\n\n    def __init__(self, model: Model, dataset_reader: DatasetReader) -> None:\n        super().__init__(model, dataset_reader)\n\n    def predict(self, document):\n        return self.predict_json({\"document\": document})\n\n    def predict_tokenized(self, tokenized_document: List[str]) -> JsonDict:\n        instance = self._words_list_to_instance(tokenized_document)\n        return self.predict_instance(instance)\n\n    @overrides\n    def dump_line(self, outputs):\n        # Need to override to tell Python how to deal with Numpy ints.\n        return json.dumps(outputs, default=int) + \"\\n\"\n\n    # TODO: Can this be implemented in `forward_on_instance`  instead?\n    @overrides\n    def predict_instance(self, instance):\n        \"\"\"\n        An instance is an entire document, represented as a list of sentences.\n        \"\"\"\n        model = self._model\n        cuda_device = model._get_prediction_device()\n\n        # Try to predict this batch.\n        try:\n            dataset = Batch([instance])\n            dataset.index_instances(model.vocab)\n            model_input = util.move_to_device(dataset.as_tensor_dict(), cuda_device)\n            prediction = model.make_output_human_readable(\n                model(**model_input)\n            ).to_json()\n        # If we run out of GPU memory, warn user and indicate that this document failed.\n        # This way, prediction doesn't grind to a halt every time we run out of GPU.\n        except RuntimeError as err:\n            # doc_key, dataset, sentences, message\n            metadata = instance[\"metadata\"].metadata\n            doc_key = metadata.doc_key\n            msg = (\n                f\"Encountered a RunTimeError on document {doc_key}. Skipping this example.\"\n                f\" Error message:\\n{err.args[0]}.\"\n            )\n            warnings.warn(msg)\n            prediction = metadata.to_json()\n            prediction[\"_FAILED_PREDICTION\"] = True\n\n        return prediction\n"
  },
  {
    "path": "span_model/training/f1.py",
    "content": "\"\"\"\nFunction to compute F1 scores.\n\"\"\"\n\n\ndef safe_div(num, denom):\n    if denom > 0:\n        return num / denom\n    else:\n        return 0\n\n\ndef compute_f1(predicted, gold, matched):\n    precision = safe_div(matched, predicted)\n    recall = safe_div(matched, gold)\n    f1 = safe_div(2 * precision * recall, precision + recall)\n    return precision, recall, f1\n"
  },
  {
    "path": "span_model/training/ner_metrics.py",
    "content": "from overrides import overrides\nfrom typing import Optional\n\nimport torch\n\nfrom allennlp.training.metrics.metric import Metric\n\nfrom span_model.training.f1 import compute_f1\n\n# TODO: Need to use the decoded predictions so that we catch the gold examples longer than\n# the span boundary.\n\n\nclass NERMetrics(Metric):\n    \"\"\"\n    Computes precision, recall, and micro-averaged F1 from a list of predicted and gold labels.\n    \"\"\"\n\n    def __init__(self, number_of_classes: int, none_label: int = 0):\n        self.number_of_classes = number_of_classes\n        self.none_label = none_label\n        self.reset()\n\n    @overrides\n    def __call__(\n        self,\n        predictions: torch.Tensor,\n        gold_labels: torch.Tensor,\n        mask: Optional[torch.Tensor] = None,\n    ):\n        predictions = predictions.cpu()\n        gold_labels = gold_labels.cpu()\n        mask = mask.cpu()\n        for i in range(self.number_of_classes):\n            if i == self.none_label:\n                continue\n            self._true_positives += (\n                ((predictions == i) * (gold_labels == i) * mask.bool()).sum().item()\n            )\n            self._false_positives += (\n                ((predictions == i) * (gold_labels != i) * mask.bool()).sum().item()\n            )\n            self._true_negatives += (\n                ((predictions != i) * (gold_labels != i) * mask.bool()).sum().item()\n            )\n            self._false_negatives += (\n                ((predictions != i) * (gold_labels == i) * mask.bool()).sum().item()\n            )\n\n    @overrides\n    def get_metric(self, reset=False):\n        \"\"\"\n        Returns\n        -------\n        A tuple of the following metrics based on the accumulated count statistics:\n        precision : float\n        recall : float\n        f1-measure : float\n        \"\"\"\n        predicted = self._true_positives + self._false_positives\n        gold = self._true_positives + self._false_negatives\n        matched = self._true_positives\n        precision, recall, f1_measure = compute_f1(predicted, gold, matched)\n\n        # Reset counts if at end of epoch.\n        if reset:\n            self.reset()\n\n        return precision, recall, f1_measure\n\n    @overrides\n    def reset(self):\n        self._true_positives = 0\n        self._false_positives = 0\n        self._true_negatives = 0\n        self._false_negatives = 0\n"
  },
  {
    "path": "span_model/training/relation_metrics.py",
    "content": "from overrides import overrides\n\nfrom allennlp.training.metrics.metric import Metric\n\nfrom span_model.training.f1 import compute_f1\n\n\nclass RelationMetrics(Metric):\n    \"\"\"\n    Computes precision, recall, and micro-averaged F1 from a list of predicted and gold spans.\n    \"\"\"\n\n    def __init__(self):\n        self.reset()\n\n    # TODO: This requires decoding because the dataset reader gets rid of gold spans wider\n    # than the span width. So, I can't just compare the tensor of gold labels to the tensor of\n    # predicted labels.\n    @overrides\n    def __call__(self, predicted_relation_list, metadata_list):\n        for predicted_relations, metadata in zip(\n            predicted_relation_list, metadata_list\n        ):\n            gold_relations = metadata.relation_dict\n            self._total_gold += len(gold_relations)\n            self._total_predicted += len(predicted_relations)\n            for (span_1, span_2), label in predicted_relations.items():\n                ix = (span_1, span_2)\n                if ix in gold_relations and gold_relations[ix] == label:\n                    self._total_matched += 1\n\n    @overrides\n    def get_metric(self, reset=False):\n        precision, recall, f1 = compute_f1(\n            self._total_predicted, self._total_gold, self._total_matched\n        )\n\n        # Reset counts if at end of epoch.\n        if reset:\n            self.reset()\n\n        return precision, recall, f1\n\n    @overrides\n    def reset(self):\n        self._total_gold = 0\n        self._total_predicted = 0\n        self._total_matched = 0\n\n\nclass SpanPairMetrics(RelationMetrics):\n    @overrides\n    def __call__(self, predicted_relation_list, metadata_list):\n        for predicted_relations, metadata in zip(\n                predicted_relation_list, metadata_list\n        ):\n            gold_relations = metadata.relation_dict\n            self._total_gold += len(gold_relations)\n            self._total_predicted += len(predicted_relations)\n            for (span_1, span_2), label in predicted_relations.items():\n                ix = (span_1, span_2)\n                if ix in gold_relations:\n                    self._total_matched += 1\n"
  },
  {
    "path": "training_config/config.jsonnet",
    "content": "{\n  \"data_loader\": {\n    \"sampler\": {\n      \"type\": \"random\"\n    }\n  },\n  \"dataset_reader\": {\n    \"max_span_width\": 8,\n    \"token_indexers\": {\n      \"bert\": {\n        \"max_length\": 512,\n        \"model_name\": \"bert-base-uncased\",\n        \"type\": \"pretrained_transformer_mismatched\"\n      }\n    },\n    \"type\": \"span_model\"\n  },\n  \"model\": {\n    \"embedder\": {\n      \"token_embedders\": {\n        \"bert\": {\n          \"max_length\": 512,\n          \"model_name\": \"bert-base-uncased\",\n          \"type\": \"pretrained_transformer_mismatched\"\n        }\n      }\n    },\n    \"feature_size\": 20,\n    \"feedforward_params\": {\n      \"dropout\": 0.4,\n      \"hidden_dims\": 150,\n      \"num_layers\": 2\n    },\n    \"initializer\": {\n      \"regexes\": [\n        [\n          \"_span_width_embedding.weight\",\n          {\n            \"type\": \"xavier_normal\"\n          }\n        ]\n      ]\n    },\n    \"loss_weights\": {\n      \"ner\": 1.0,\n      \"relation\": 1\n    },\n    \"max_span_width\": 8,\n    \"module_initializer\": {\n      \"regexes\": [\n        [\n          \".*weight\",\n          {\n            \"type\": \"xavier_normal\"\n          }\n        ],\n        [\n          \".*weight_matrix\",\n          {\n            \"type\": \"xavier_normal\"\n          }\n        ]\n      ]\n    },\n    \"modules\": {\n      \"ner\": {\n      },\n      \"relation\": {\n        \"spans_per_word\": 0.5,\n        \"use_distance_embeds\": true,\n        \"use_pruning\": true,\n      }\n    },\n    \"span_extractor_type\": \"endpoint\",\n    \"target_task\": \"relation\",\n    \"type\": \"span_model\",\n    \"use_span_width_embeds\": true\n  },\n  \"trainer\": {\n    \"checkpointer\": {\n      \"num_serialized_models_to_keep\": 1\n    },\n    \"cuda_device\": 0,\n    \"grad_norm\": 5,\n    \"learning_rate_scheduler\": {\n      \"type\": \"slanted_triangular\"\n    },\n    \"num_epochs\": 10,\n    \"optimizer\": {\n      \"lr\": 0.001,\n      \"parameter_groups\": [\n        [\n          [\n            \"_matched_embedder\"\n          ],\n          {\n            \"finetune\": true,\n            \"lr\": 5e-05,\n            \"weight_decay\": 0.01\n          }\n        ],\n        [\n          [\n            \"scalar_parameters\"\n          ],\n          {\n            \"lr\": 0.01\n          }\n        ]\n      ],\n      \"type\": \"adamw\",\n      \"weight_decay\": 0\n    },\n    \"validation_metric\": \"+MEAN__relation_f1\"\n  },\n  \"numpy_seed\": 0,\n  \"pytorch_seed\": 0,\n  \"random_seed\": 0,\n  \"test_data_path\": \"\",\n  \"train_data_path\": \"\",\n  \"validation_data_path\": \"\"\n}\n"
  }
]