Repository: chiayewken/Span-ASTE
Branch: main
Commit: b252f8b1e402
Files: 22
Total size: 303.8 KB
Directory structure:
gitextract_j_aejt67/
├── .gitignore
├── LICENSE
├── README.md
├── aste/
│ ├── __init__.py
│ ├── analysis.py
│ ├── data_utils.py
│ ├── utils.py
│ └── wrapper.py
├── demo.ipynb
├── requirements.txt
├── sample_data.txt
├── setup.sh
├── span_model/
│ ├── __init__.py
│ ├── data/
│ │ ├── __init__.py
│ │ └── dataset_readers/
│ │ ├── document.py
│ │ └── span_model.py
│ ├── predictors/
│ │ ├── __init__.py
│ │ └── span_model.py
│ └── training/
│ ├── f1.py
│ ├── ner_metrics.py
│ └── relation_metrics.py
└── training_config/
└── config.jsonnet
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
*__pycache__
*.pyc
*.log
*.csv
*.swp
tags
punkt.zip
wordnet.zip
.idea/
aste/data/
models/
model_outputs/
outputs/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2022 Chia Yew Ken
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
## Span-ASTE: Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction
**\*\*\*\*\* New August 30th, 2022: Featured in a YouTube video by Xiaoqing Wan [](https://www.youtube.com/watch?v=rRTvsuGRnJ0) \*\*\*\*\***
**\*\*\*\*\* New March 31st, 2022: Scikit-Style API for Easy Usage \*\*\*\*\***
[](https://paperswithcode.com/sota/aspect-sentiment-triplet-extraction-on-aste)
[](https://colab.research.google.com/drive/1F9zW_nVkwfwIVXTOA_juFDrlPz5TLjpK?usp=sharing)
[](https://github.com/chiayewken/Span-ASTE/blob/main/demo.ipynb)
This repository implements our ACL 2021 research paper [Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction](https://aclanthology.org/2021.acl-long.367/).
Our goal is to extract sentiment triplets of the format `(aspect target, opinion expression, sentiment polarity)`, as shown in the diagram below.
<img src="https://github.com/chiayewken/Span-ASTE/blob/13a851b166998210a7cd2def5fa4aff20819b54d/assets/task_image.png" width="450" height="150" alt="">
### Installation
- Tested on Python 3.7 (using a virtual environment such as [Conda](https://docs.conda.io/en/latest/miniconda.html) is recommended)
- Install data and requirements: `bash setup.sh`
- Training config: [training_config/config.jsonnet](training_config/config.jsonnet)
- Modeling code: [span_model/models/span_model.py](span_model/models/span_model.py)
### Data Format
Our span-based model uses data files in which each line contains one input sentence and a list of output triplets.
The following data format is demonstrated in the [sample data file](sample_data.txt):
> sentence#### #### ####[triplet_0, ..., triplet_n]
Each triplet is a tuple `(span_a, span_b, label)`, where `span_a` is the aspect target span and `span_b` is the opinion span. Each span is a list of word indices: if the span covers a single word, the list contains only that word's index; if it covers multiple words, the list contains the indices of the first and last words (inclusive). For example:
> It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]
For prediction, the data can contain the input sentence only, with an empty list for triplets:
> sentence#### #### ####[]
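As a sanity check, each line can be parsed with Python's built-in `ast.literal_eval`, which is how `Sentence.from_line_format` in `aste/data_utils.py` reads this format. The sketch below is illustrative (the helper name `parse_line` is ours, not part of the repository API); it normalizes each span to an inclusive `(start, end)` word-index pair:

```python
import ast

def parse_line(line: str):
    # The sentence and the triplet list are separated by "#### #### ####"
    front, back = line.split("#### #### ####")
    tokens = front.split(" ")
    triplets = []
    for span_a, span_b, label in ast.literal_eval(back):
        # A one-element span [i] covers a single word; [i, j] covers words i..j inclusive
        triplets.append(((span_a[0], span_a[-1]), (span_b[0], span_b[-1]), label))
    return tokens, triplets
```

Parsing the example above yields `(6, 7)` for the target "Korean dishes" and `(10, 10)` for the opinion "affordable".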
### Predict Using Model Weights
- First, download and extract [pre-trained weights](https://github.com/chiayewken/Span-ASTE/releases) to `pretrained_dir`
- The input data file `path_in` and output data file `path_out` have the same [data format](#data-format).
```
from wrapper import SpanModel
model = SpanModel(save_dir=pretrained_dir, random_seed=0)
model.predict(path_in, path_out)
```
### Model Training
- Configure the model with save directory and random seed.
- Start training with the training and validation data, which use the same [data format](#data-format).
```
model = SpanModel(save_dir=save_dir, random_seed=random_seed)
model.fit(path_train, path_dev)
```
- To train with multiple random seeds from the command line, use the following command:
- Replace `14lap` with another dataset name (e.g. `14res`, `15res`, `16res`)
```
python aste/wrapper.py run_train_many \
--save_dir_template "outputs/14lap/seed_{}" \
--random_seeds [0,1,2,3,4] \
--path_train data/triplet_data/14lap/train.txt \
--path_dev data/triplet_data/14lap/dev.txt
```
### Model Evaluation
- Use the trained model to predict triplets for the test sentences, writing the output to `path_pred`.
- The model includes a scoring function that reports precision, recall, and F1 metrics for triplet extraction.
```
model.predict(path_in=path_test, path_out=path_pred)
results = model.score(path_pred, path_test)
```
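Under the hood, triplet scoring is standard exact-match micro F1: a predicted triplet counts as correct only if its target span, opinion span, and sentiment polarity all match a gold triplet. A minimal sketch that mirrors the `Scorer` logic in `aste/analysis.py` (the function name `triplet_f1` is illustrative, not the repository API; it takes per-sentence lists of hashable triplet tuples, aligned by sentence index):

```python
def triplet_f1(pred, gold):
    # pred/gold: one list of triplet tuples per sentence, aligned by index
    num_pred = sum(len(p) for p in pred)
    num_gold = sum(len(g) for g in gold)
    # Exact match: every field of the triplet must agree
    num_correct = sum(len(set(p) & set(g)) for p, g in zip(pred, gold))
    precision = num_correct / num_pred if num_pred else 0.0
    recall = num_correct / num_gold if num_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return dict(precision=precision, recall=recall, score=f1)
```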
- To evaluate with multiple random seeds from the command line, use the following command:
- Replace `14lap` with another dataset name (e.g. `14res`, `15res`, `16res`)
```
python aste/wrapper.py run_eval_many \
--save_dir_template "outputs/14lap/seed_{}" \
--random_seeds [0,1,2,3,4] \
--path_test data/triplet_data/14lap/test.txt
```
- To score your own prediction file from the command line, use the following command:
- Replace `14lap` with another dataset name (e.g. `14res`, `15res`, `16res`)
```
python aste/wrapper.py run_score --path_pred your_file --path_gold data/triplet_data/14lap/test.txt
```
### Research Citation
If the code is useful for your research project, we would appreciate it if you cite the following paper:
```
@inproceedings{xu-etal-2021-learning,
title = "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction",
author = "Xu, Lu and
Chia, Yew Ken and
Bing, Lidong",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.367",
doi = "10.18653/v1/2021.acl-long.367",
pages = "4755--4766",
abstract = "Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word. Thereby, they cannot perform well on targets and opinions which contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation. Thus, it can make predictions with the semantics of whole spans, ensuring better sentiment consistency. To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy by incorporating supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks. This strategy not only improves computational efficiency but also distinguishes the opinion and target spans more properly. Our framework simultaneously achieves strong performance for the ASTE as well as ATE and OTE tasks. In particular, our analysis shows that our span-level approach achieves more significant improvements over the baselines on triplets with multi-word targets or opinions.",
}
```
================================================
FILE: aste/__init__.py
================================================
================================================
FILE: aste/analysis.py
================================================
import json
import random
import sys
from pathlib import Path
from typing import List
import _jsonnet
import numpy as np
import torch
from allennlp.commands.train import train_model
from allennlp.common import Params
from allennlp.data import DatasetReader, Vocabulary, DataLoader
from allennlp.models import Model
from allennlp.training import Trainer
from fire import Fire
from tqdm import tqdm
from data_utils import Data, Sentence
from wrapper import SpanModel, safe_divide
def set_seed(seed: int):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
def test_load(
path: str = "training_config/config.jsonnet",
path_train: str = "outputs/14lap/seed_0/temp_data/train.json",
path_dev: str = "outputs/14lap/seed_0/temp_data/validation.json",
save_dir="outputs/temp",
):
# Register custom modules
sys.path.append(".")
from span_model.data.dataset_readers.span_model import SpanModelReader
assert SpanModelReader is not None
params = Params.from_file(
path,
params_overrides=dict(
train_data_path=path_train,
validation_data_path=path_dev,
test_data_path=path_dev,
),
)
train_model(params, serialization_dir=save_dir, force=True)
breakpoint()
config = json.loads(_jsonnet.evaluate_file(path))
set_seed(config["random_seed"])
reader = DatasetReader.from_params(Params(config["dataset_reader"]))
data_train = reader.read(path_train)
data_dev = reader.read(path_dev)
vocab = Vocabulary.from_instances(data_train + data_dev)
model = Model.from_params(Params(config["model"]), vocab=vocab)
data_train.index_with(vocab)
data_dev.index_with(vocab)
trainer = Trainer.from_params(
Params(config["trainer"]),
model=model,
data_loader=DataLoader.from_params(
Params(config["data_loader"]), dataset=data_train
),
validation_data_loader=DataLoader.from_params(
Params(config["data_loader"]), dataset=data_dev
),
serialization_dir=save_dir,
)
breakpoint()
trainer.train()
breakpoint()
class Scorer:
name: str = ""
def run(self, path_pred: str, path_gold: str) -> dict:
pred = Data.load_from_full_path(path_pred)
gold = Data.load_from_full_path(path_gold)
assert pred.sentences is not None
assert gold.sentences is not None
assert len(pred.sentences) == len(gold.sentences)
num_pred = 0
num_gold = 0
num_correct = 0
for i in range(len(gold.sentences)):
tuples_pred = self.make_tuples(pred.sentences[i])
tuples_gold = self.make_tuples(gold.sentences[i])
num_pred += len(tuples_pred)
num_gold += len(tuples_gold)
for p in tuples_pred:
for g in tuples_gold:
if p == g:
num_correct += 1
precision = safe_divide(num_correct, num_pred)
recall = safe_divide(num_correct, num_gold)
info = dict(
precision=precision,
recall=recall,
score=safe_divide(2 * precision * recall, precision + recall),
)
return info
def make_tuples(self, sent: Sentence) -> List[tuple]:
raise NotImplementedError
class SentimentTripletScorer(Scorer):
name: str = "sentiment triplet"
def make_tuples(self, sent: Sentence) -> List[tuple]:
return [(t.o_start, t.o_end, t.t_start, t.t_end, t.label) for t in sent.triples]
class TripletScorer(Scorer):
name: str = "triplet"
def make_tuples(self, sent: Sentence) -> List[tuple]:
return [(t.o_start, t.o_end, t.t_start, t.t_end) for t in sent.triples]
class OpinionScorer(Scorer):
name: str = "opinion"
def make_tuples(self, sent: Sentence) -> List[tuple]:
return sorted(set((t.o_start, t.o_end) for t in sent.triples))
class TargetScorer(Scorer):
name: str = "target"
def make_tuples(self, sent: Sentence) -> List[tuple]:
return sorted(set((t.t_start, t.t_end) for t in sent.triples))
class OrigScorer(Scorer):
name: str = "orig"
def make_tuples(self, sent: Sentence) -> List[tuple]:
raise NotImplementedError
def run(self, path_pred: str, path_gold: str) -> dict:
model = SpanModel(save_dir="", random_seed=0)
return model.score(path_pred, path_gold)
def run_eval_domains(
save_dir_template: str,
path_test_template: str,
random_seeds: List[int] = (0, 1, 2, 3, 4),
domain_names: List[str] = ("hotel", "restaurant", "laptop"),
):
print(locals())
all_results = {}
for domain in domain_names:
results = []
for seed in tqdm(random_seeds):
model = SpanModel(save_dir=save_dir_template.format(seed), random_seed=0)
path_pred = str(Path(model.save_dir, f"pred_{domain}.txt"))
path_test = path_test_template.format(domain)
if not Path(path_pred).exists():
model.predict(path_test, path_pred)
results.append(model.score(path_pred, path_test))
precision = sum(r["precision"] for r in results) / len(random_seeds)
recall = sum(r["recall"] for r in results) / len(random_seeds)
score = safe_divide(2 * precision * recall, precision + recall)
all_results[domain] = dict(p=precision, r=recall, f=score)
for k, v in all_results.items():
print(k, v)
def test_scorer(path_pred: str, path_gold: str):
for scorer in [
OpinionScorer(),
TargetScorer(),
TripletScorer(),
SentimentTripletScorer(),
OrigScorer(),
]:
print(scorer.name)
print(scorer.run(path_pred, path_gold))
if __name__ == "__main__":
Fire()
================================================
FILE: aste/data_utils.py
================================================
import ast
import copy
import json
import os
from collections import Counter
from enum import Enum
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple
import numpy as np
import pandas as pd
from fire import Fire
from pydantic import BaseModel
from sklearn.metrics import classification_report
from utils import count_joins, get_simple_stats
RawTriple = Tuple[List[int], int, int, int, int]
Span = Tuple[int, int]
class SplitEnum(str, Enum):
train = "train"
dev = "dev"
test = "test"
class LabelEnum(str, Enum):
positive = "POS"
negative = "NEG"
neutral = "NEU"
opinion = "OPINION"
target = "TARGET"
@classmethod
def as_list(cls):
return [cls.neutral, cls.positive, cls.negative]
@classmethod
def i_to_label(cls, i: int):
return cls.as_list()[i]
@classmethod
def label_to_i(cls, label) -> int:
return cls.as_list().index(label)
class SentimentTriple(BaseModel):
o_start: int
o_end: int
t_start: int
t_end: int
label: LabelEnum
@classmethod
def make_dummy(cls):
return cls(o_start=0, o_end=0, t_start=0, t_end=0, label=LabelEnum.neutral)
@property
def opinion(self) -> Tuple[int, int]:
return self.o_start, self.o_end
@property
def target(self) -> Tuple[int, int]:
return self.t_start, self.t_end
@classmethod
def from_raw_triple(cls, x: RawTriple):
(o_start, o_end), polarity, direction, gap_a, gap_b = x
# Refer: TagReader
if direction == 0:
t_end = o_start - gap_a
t_start = o_start - gap_b
elif direction == 1:
t_start = gap_a + o_start
t_end = gap_b + o_start
else:
raise ValueError
return cls(
o_start=o_start,
o_end=o_end,
t_start=t_start,
t_end=t_end,
label=LabelEnum.i_to_label(polarity),
)
def to_raw_triple(self) -> RawTriple:
polarity = LabelEnum.label_to_i(self.label)
if self.t_start < self.o_start:
direction = 0
gap_a, gap_b = self.o_start - self.t_end, self.o_start - self.t_start
else:
direction = 1
gap_a, gap_b = self.t_start - self.o_start, self.t_end - self.o_start
return [self.o_start, self.o_end], polarity, direction, gap_a, gap_b
def as_text(self, tokens: List[str]) -> str:
opinion = " ".join(tokens[self.o_start : self.o_end + 1])
target = " ".join(tokens[self.t_start : self.t_end + 1])
return f"{opinion}-{target} ({self.label})"
class TripleHeuristic(BaseModel):
@staticmethod
def run(
opinion_to_label: Dict[Span, LabelEnum],
target_to_label: Dict[Span, LabelEnum],
) -> List[SentimentTriple]:
# For each target, pair with the closest opinion (and vice versa)
spans_o = list(opinion_to_label.keys())
spans_t = list(target_to_label.keys())
pos_o = np.expand_dims(np.array(spans_o).mean(axis=-1), axis=1)
pos_t = np.expand_dims(np.array(spans_t).mean(axis=-1), axis=0)
dists = np.absolute(pos_o - pos_t)
raw_triples: Set[Tuple[int, int, LabelEnum]] = set()
closest = np.argmin(dists, axis=1)
for i, span in enumerate(spans_o):
raw_triples.add((i, int(closest[i]), opinion_to_label[span]))
closest = np.argmin(dists, axis=0)
for i, span in enumerate(spans_t):
raw_triples.add((int(closest[i]), i, target_to_label[span]))
triples = []
for i, j, label in raw_triples:
os, oe = spans_o[i]
ts, te = spans_t[j]
triples.append(
SentimentTriple(o_start=os, o_end=oe, t_start=ts, t_end=te, label=label)
)
return triples
class TagMaker(BaseModel):
@staticmethod
def run(spans: List[Span], labels: List[LabelEnum], num_tokens: int) -> List[str]:
raise NotImplementedError
class BioesTagMaker(TagMaker):
@staticmethod
def run(spans: List[Span], labels: List[LabelEnum], num_tokens: int) -> List[str]:
tags = ["O"] * num_tokens
for (start, end), lab in zip(spans, labels):
assert end >= start
length = end - start + 1
if length == 1:
tags[start] = f"S-{lab}"
else:
tags[start] = f"B-{lab}"
tags[end] = f"E-{lab}"
for i in range(start + 1, end):
tags[i] = f"I-{lab}"
return tags
class Sentence(BaseModel):
tokens: List[str]
pos: List[str]
weight: int
id: int
is_labeled: bool
triples: List[SentimentTriple]
spans: List[Tuple[int, int, LabelEnum]] = []
def extract_spans(self) -> List[Tuple[int, int, LabelEnum]]:
spans = []
for t in self.triples:
spans.append((t.o_start, t.o_end, LabelEnum.opinion))
spans.append((t.t_start, t.t_end, LabelEnum.target))
spans = sorted(set(spans))
return spans
def as_text(self) -> str:
tokens = list(self.tokens)
for t in self.triples:
tokens[t.o_start] = "(" + tokens[t.o_start]
tokens[t.o_end] = tokens[t.o_end] + ")"
tokens[t.t_start] = "[" + tokens[t.t_start]
tokens[t.t_end] = tokens[t.t_end] + "]"
return " ".join(tokens)
@classmethod
def from_line_format(cls, text: str):
front, back = text.split("#### #### ####")
tokens = front.split(" ")
triples = []
for a, b, label in ast.literal_eval(back):
t = SentimentTriple(
t_start=a[0],
t_end=a[0] if len(a) == 1 else a[-1],
o_start=b[0],
o_end=b[0] if len(b) == 1 else b[-1],
label=label,
)
triples.append(t)
return cls(
tokens=tokens, triples=triples, id=0, pos=[], weight=1, is_labeled=True
)
def to_line_format(self) -> str:
# ([1], [4], 'POS')
# ([1,2], [4], 'POS')
triplets = []
for t in self.triples:
parts = []
for start, end in [(t.t_start, t.t_end), (t.o_start, t.o_end)]:
if start == end:
parts.append([start])
else:
parts.append([start, end])
parts.append(f"{t.label}")
triplets.append(tuple(parts))
line = " ".join(self.tokens) + "#### #### ####" + str(triplets) + "\n"
assert self.from_line_format(line).tokens == self.tokens
assert self.from_line_format(line).triples == self.triples
return line
class Data(BaseModel):
root: Path
data_split: SplitEnum
sentences: Optional[List[Sentence]]
full_path: str = ""
num_instances: int = -1
opinion_offset: int = 3 # Refer: jet_o.py
is_labeled: bool = False
def load(self):
if self.sentences is None:
path = self.root / f"{self.data_split}.txt"
if self.full_path:
path = self.full_path
with open(path) as f:
self.sentences = [Sentence.from_line_format(line) for line in f]
@classmethod
def load_from_full_path(cls, path: str):
data = cls(full_path=path, root=Path(path).parent, data_split=SplitEnum.train)
data.load()
return data
def save_to_path(self, path: str):
assert self.sentences is not None
Path(path).parent.mkdir(exist_ok=True, parents=True)
with open(path, "w") as f:
for s in self.sentences:
f.write(s.to_line_format())
data = Data.load_from_full_path(path)
assert data.sentences is not None
for i, s in enumerate(data.sentences):
assert s.tokens == self.sentences[i].tokens
assert s.triples == self.sentences[i].triples
def analyze_spans(self):
print("\nHow often is target closer to opinion than any invalid target?")
records = []
for s in self.sentences:
valid_pairs = set([(a.opinion, a.target) for a in s.triples])
for a in s.triples:
closest = None
for b in s.triples:
dist_a = abs(np.mean(a.opinion) - np.mean(a.target))
dist_b = abs(np.mean(a.opinion) - np.mean(b.target))
if dist_b <= dist_a and (a.opinion, b.target) not in valid_pairs:
closest = b.target
spans = [a.opinion, a.target]
if closest is not None:
spans.append(closest)
tokens = list(s.tokens)
for start, end in spans:
tokens[start] = "[" + tokens[start]
tokens[end] = tokens[end] + "]"
start = min([s[0] for s in spans])
end = max([s[1] for s in spans])
tokens = tokens[start : end + 1]
records.append(dict(is_closest=closest is None, text=" ".join(tokens)))
df = pd.DataFrame(records)
print(df["is_closest"].mean())
print(df[~df["is_closest"]].head())
def analyze_joined_spans(self):
print("\nHow often are target/opinion spans joined?")
join_targets = 0
join_opinions = 0
total_targets = 0
total_opinions = 0
for s in self.sentences:
targets = set([t.target for t in s.triples])
opinions = set([t.opinion for t in s.triples])
total_targets += len(targets)
total_opinions += len(opinions)
join_targets += count_joins(targets)
join_opinions += count_joins(opinions)
print(
dict(
targets=join_targets / total_targets,
opinions=join_opinions / total_opinions,
)
)
def analyze_tag_counts(self):
print("\nHow many tokens are target/opinion/none?")
record = []
for s in self.sentences:
tags = [str(None) for _ in s.tokens]
for t in s.triples:
for i in range(t.o_start, t.o_end + 1):
tags[i] = "Opinion"
for i in range(t.t_start, t.t_end + 1):
tags[i] = "Target"
record.extend(tags)
print({k: v / len(record) for k, v in Counter(record).items()})
def analyze_span_distance(self):
print("\nHow far is the target/opinion from each other on average?")
distances = []
for s in self.sentences:
for t in s.triples:
x_opinion = (t.o_start + t.o_end) / 2
x_target = (t.t_start + t.t_end) / 2
distances.append(abs(x_opinion - x_target))
print(get_simple_stats(distances))
def analyze_opinion_labels(self):
print("\nFor opinion/target how often is it associated with only 1 polarity?")
for key in ["opinion", "target"]:
records = []
for s in self.sentences:
term_to_labels: Dict[Tuple[int, int], List[LabelEnum]] = {}
for t in s.triples:
term_to_labels.setdefault(getattr(t, key), []).append(t.label)
records.extend([len(set(labels)) for labels in term_to_labels.values()])
is_single_label = [n == 1 for n in records]
print(
dict(
key=key,
is_single_label=sum(is_single_label) / len(is_single_label),
stats=get_simple_stats(records),
)
)
def analyze_tag_score(self):
print("\nIf have all target and opinion terms (unpaired), what is max f_score?")
pred = copy.deepcopy(self.sentences)
for s in pred:
target_to_label = {t.target: t.label for t in s.triples}
opinion_to_label = {t.opinion: t.label for t in s.triples}
s.triples = TripleHeuristic().run(opinion_to_label, target_to_label)
analyzer = ResultAnalyzer()
analyzer.run(pred, gold=self.sentences, print_limit=0)
def analyze_ner(self):
print("\n How many opinion/target per sentence?")
num_o, num_t = [], []
for s in self.sentences:
opinions, targets = set(), set()
for t in s.triples:
opinions.add((t.o_start, t.o_end))
targets.add((t.t_start, t.t_end))
num_o.append(len(opinions))
num_t.append(len(targets))
print(
dict(
num_o=get_simple_stats(num_o),
num_t=get_simple_stats(num_t),
sentences=len(self.sentences),
)
)
def analyze_direction(self):
print("\n For targets, is opinion offset always positive/negative/both?")
records = []
for s in self.sentences:
span_to_offsets = {}
for t in s.triples:
off = np.mean(t.target) - np.mean(t.opinion)
span_to_offsets.setdefault(t.opinion, []).append(off)
for span, offsets in span_to_offsets.items():
labels = [
LabelEnum.positive if off > 0 else LabelEnum.negative
for off in offsets
]
lab = labels[0] if len(set(labels)) == 1 else LabelEnum.neutral
records.append(
dict(
span=" ".join(s.tokens[span[0] : span[1] + 1]),
text=s.as_text(),
offsets=lab,
)
)
df = pd.DataFrame(records)
print(df["offsets"].value_counts(normalize=True))
df = df[df["offsets"] == LabelEnum.neutral].drop(columns=["offsets"])
with pd.option_context("display.max_colwidth", 999):
print(df.head())
def analyze(self):
triples = [t for s in self.sentences for t in s.triples]
info = dict(
root=self.root,
sentences=len(self.sentences),
sentiments=Counter([t.label for t in triples]),
target_lengths=get_simple_stats(
[abs(t.t_start - t.t_end) + 1 for t in triples]
),
opinion_lengths=get_simple_stats(
[abs(t.o_start - t.o_end) + 1 for t in triples]
),
sentence_lengths=get_simple_stats([len(s.tokens) for s in self.sentences]),
)
for k, v in info.items():
print(k, v)
self.analyze_direction()
self.analyze_ner()
self.analyze_spans()
self.analyze_joined_spans()
self.analyze_tag_counts()
self.analyze_span_distance()
self.analyze_opinion_labels()
self.analyze_tag_score()
print("#" * 80)
def test_save_to_path(path: str = "aste/data/triplet_data/14lap/train.txt"):
print("\nEnsure that Data.save_to_path works properly")
path_temp = "temp.txt"
data = Data.load_from_full_path(path)
data.save_to_path(path_temp)
print("\nSamples")
with open(path_temp) as f:
for line in f.readlines()[:5]:
print(line)
os.remove(path_temp)
def merge_data(items: List[Data]) -> Data:
merged = Data(root=Path(), data_split=items[0].data_split, sentences=[])
for data in items:
data.load()
merged.sentences.extend(data.sentences)
return merged
class Result(BaseModel):
num_sentences: int
num_pred: int = 0
num_gold: int = 0
num_correct: int = 0
num_start_correct: int = 0
num_start_end_correct: int = 0
num_opinion_correct: int = 0
num_target_correct: int = 0
num_span_overlap: int = 0
precision: float = 0.0
recall: float = 0.0
f_score: float = 0.0
class ResultAnalyzer(BaseModel):
@staticmethod
def check_overlap(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
return (b_start <= a_start <= b_end) or (b_start <= a_end <= b_end)
@staticmethod
def run_sentence(pred: Sentence, gold: Sentence):
assert pred.tokens == gold.tokens
triples_gold = set([t.as_text(gold.tokens) for t in gold.triples])
triples_pred = set([t.as_text(pred.tokens) for t in pred.triples])
tp = triples_pred.intersection(triples_gold)
fp = triples_pred.difference(triples_gold)
fn = triples_gold.difference(triples_pred)
if fp or fn:
print(dict(gold=gold.as_text()))
print(dict(pred=pred.as_text()))
print(dict(tp=tp))
print(dict(fp=fp))
print(dict(fn=fn))
print("#" * 80)
@staticmethod
def analyze_labels(pred: List[Sentence], gold: List[Sentence]):
y_pred = []
y_gold = []
for i in range(len(pred)):
for p in pred[i].triples:
for g in gold[i].triples:
if (p.opinion, p.target) == (g.opinion, g.target):
y_pred.append(str(p.label))
y_gold.append(str(g.label))
print(dict(num_span_correct=len(y_pred)))
if y_pred:
print(classification_report(y_gold, y_pred))
@staticmethod
def analyze_spans(pred: List[Sentence], gold: List[Sentence]):
num_triples_gold, triples_found_o, triples_found_t = 0, set(), set()
for label in [LabelEnum.opinion, LabelEnum.target]:
num_correct, num_pred, num_gold = 0, 0, 0
is_target = {LabelEnum.opinion: False, LabelEnum.target: True}[label]
for i, (p, g) in enumerate(zip(pred, gold)):
spans_gold = set(g.spans if g.spans else g.extract_spans())
spans_pred = set(p.spans if p.spans else p.extract_spans())
spans_gold = set([s for s in spans_gold if s[-1] == label])
spans_pred = set([s for s in spans_pred if s[-1] == label])
num_gold += len(spans_gold)
num_pred += len(spans_pred)
num_correct += len(spans_gold.intersection(spans_pred))
for t in g.triples:
num_triples_gold += 1
span = (t.target if is_target else t.opinion) + (label,)
if span in spans_pred:
t_unique = (i,) + tuple(t.dict().items())
if is_target:
triples_found_t.add(t_unique)
else:
triples_found_o.add(t_unique)
if num_correct and num_pred and num_gold:
p = round(num_correct / num_pred, ndigits=4)
r = round(num_correct / num_gold, ndigits=4)
f = round(2 * p * r / (p + r), ndigits=4)
info = dict(label=label, p=p, r=r, f=f)
print(json.dumps(info, indent=2))
assert num_triples_gold % 2 == 0 # Was double-counted above
num_triples_gold = num_triples_gold // 2
num_triples_pred_ceiling = len(triples_found_o.intersection(triples_found_t))
triples_pred_recall_ceiling = num_triples_pred_ceiling / num_triples_gold
print("\n What is the upper bound for RE from predicted O & T?")
print(dict(recall=round(triples_pred_recall_ceiling, ndigits=4)))
@classmethod
def run(cls, pred: List[Sentence], gold: List[Sentence], print_limit=16):
assert len(pred) == len(gold)
cls.analyze_labels(pred, gold)
r = Result(num_sentences=len(pred))
for i in range(len(pred)):
if i < print_limit:
cls.run_sentence(pred[i], gold[i])
r.num_pred += len(pred[i].triples)
r.num_gold += len(gold[i].triples)
for p in pred[i].triples:
for g in gold[i].triples:
if p.dict() == g.dict():
r.num_correct += 1
if (p.o_start, p.t_start) == (g.o_start, g.t_start):
r.num_start_correct += 1
if (p.opinion, p.target) == (g.opinion, g.target):
r.num_start_end_correct += 1
if p.opinion == g.opinion:
r.num_opinion_correct += 1
if p.target == g.target:
r.num_target_correct += 1
if cls.check_overlap(*p.opinion, *g.opinion) and cls.check_overlap(
*p.target, *g.target
):
r.num_span_overlap += 1
e = 1e-9
r.precision = round(r.num_correct / (r.num_pred + e), 4)
r.recall = round(r.num_correct / (r.num_gold + e), 4)
r.f_score = round(2 * r.precision * r.recall / (r.precision + r.recall + e), 3)
print(r.json(indent=2))
cls.analyze_spans(pred, gold)
def test_merge(root="aste/data/triplet_data"):
unmerged = [Data(root=p, data_split=SplitEnum.train) for p in Path(root).iterdir()]
data = merge_data(unmerged)
data.analyze()
if __name__ == "__main__":
Fire()
================================================
FILE: aste/utils.py
================================================
import copy
import hashlib
import pickle
import subprocess
import time
from pathlib import Path
from typing import List, Set, Tuple, Union
from fire import Fire
from pydantic import BaseModel
class Shell(BaseModel):
verbose: bool = True
@classmethod
def format_kwargs(cls, **kwargs) -> str:
outputs = []
for k, v in kwargs.items():
k = k.replace("_", "-")
k = f"--{k}"
outputs.extend([k, str(v)])
return " ".join(outputs)
def run_command(self, command: str) -> str:
# Continuously print outputs for long-running commands
# Refer: https://fabianlee.org/2019/09/15/python-getting-live-output-from-subprocess-using-poll/
print(dict(command=command))
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
outputs = []
while True:
if process.poll() is not None:
break
o = process.stdout.readline().decode()
if o:
outputs.append(o)
if self.verbose:
print(o.strip())
return "".join(outputs)
def run(self, command: str, *args, **kwargs) -> str:
args = [str(a) for a in args]
command = " ".join([command] + args + [self.format_kwargs(**kwargs)])
return self.run_command(command)
def hash_text(x: str) -> str:
return hashlib.md5(x.encode()).hexdigest()
class Timer(BaseModel):
name: str = ""
start: float = 0.0
def __enter__(self):
self.start = time.time()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
duration = round(time.time() - self.start, 3)
print(f"Timer {self.name}: {duration}s")
class PickleSaver(BaseModel):
path: Path
def dump(self, obj):
if not self.path.parent.exists():
self.path.parent.mkdir(exist_ok=True)
with open(self.path, "wb") as f:
pickle.dump(obj, f)
def load(self):
with Timer(name=str(self.path)):
with open(self.path, "rb") as f:
return pickle.load(f)
class FlexiModel(BaseModel):
class Config:
arbitrary_types_allowed = True
def get_simple_stats(numbers: List[Union[int, float]]):
return dict(
min=min(numbers),
max=max(numbers),
avg=sum(numbers) / len(numbers),
)
# Count pairs of distinct spans that overlap or are directly adjacent
def count_joins(spans: Set[Tuple[int, int]]) -> int:
count = 0
for a_start, a_end in spans:
for b_start, b_end in spans:
if (a_start, a_end) == (b_start, b_end):
continue
if b_start <= a_start <= b_end + 1 or b_start - 1 <= a_end <= b_end:
count += 1
return count // 2
# Return a deep-copied d with the value at the sep-delimited key path k set to v
def update_nested_dict(d: dict, k: str, v, i=0, sep="__"):
d = copy.deepcopy(d)
keys = k.split(sep)
assert keys[i] in d.keys(), str(dict(keys=keys, d=d, i=i))
if i == len(keys) - 1:
orig = d[keys[i]]
if v != orig:
print(dict(updated_key=k, new_value=v, orig=orig))
d[keys[i]] = v
else:
d[keys[i]] = update_nested_dict(d=d[keys[i]], k=k, v=v, i=i + 1)
return d
def test_update_nested_dict():
d = dict(top=dict(middle_a=dict(last=1), middle_b=0))
print(update_nested_dict(d, k="top__middle_b", v=-1))
print(update_nested_dict(d, k="top__middle_a__last", v=-1))
print(update_nested_dict(d, k="top__middle_a__last", v=1))
def clean_up_triplet_data(path: str):
outputs = []
with open(path) as f:
for line in f:
sep = "####"
text, tags_t, tags_o, triplets = line.split(sep)
outputs.append(sep.join([text, " ", " ", triplets]))
with open(path, "w") as f:
f.write("".join(outputs))
def clean_up_many(pattern: str = "data/triplet_data/*/*.txt"):
for path in sorted(Path().glob(pattern)):
print(path)
clean_up_triplet_data(str(path))
def merge_data(
folders_in: List[str] = [
"aste/data/triplet_data/14res/",
"aste/data/triplet_data/15res/",
"aste/data/triplet_data/16res/",
],
folder_out: str = "aste/data/triplet_data/res_all/",
):
for name in ["train.txt", "dev.txt", "test.txt"]:
outputs = []
for folder in folders_in:
path = Path(folder) / name
with open(path) as f:
for line in f:
assert line.endswith("\n")
outputs.append(line)
path_out = Path(folder_out) / name
path_out.parent.mkdir(exist_ok=True, parents=True)
with open(path_out, "w") as f:
f.write("".join(outputs))
# Division that returns 0 instead of raising when either operand is zero
def safe_divide(a: float, b: float) -> float:
if a == 0 or b == 0:
return 0
return a / b
if __name__ == "__main__":
Fire()
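A quick usage sketch of two helpers defined above. The functions are inlined here (copied from `aste/utils.py`) so the snippet runs standalone without importing the package; the sample spans are made up for illustration.

```python
from typing import Set, Tuple

# Inlined mirror of aste/utils.py:safe_divide
def safe_divide(a: float, b: float) -> float:
    if a == 0 or b == 0:
        return 0
    return a / b

# Inlined mirror of aste/utils.py:count_joins
def count_joins(spans: Set[Tuple[int, int]]) -> int:
    count = 0
    for a_start, a_end in spans:
        for b_start, b_end in spans:
            if (a_start, a_end) == (b_start, b_end):
                continue
            # Overlapping or directly adjacent spans count as a join
            if b_start <= a_start <= b_end + 1 or b_start - 1 <= a_end <= b_end:
                count += 1
    return count // 2  # each pair was counted in both directions

# (0, 1) is adjacent to (2, 3); (5, 6) is isolated -> one join
print(count_joins({(0, 1), (2, 3), (5, 6)}))  # 1

# F1 from precision/recall without risking ZeroDivisionError
p, r = 0.5, 0.0
print(safe_divide(2 * p * r, p + r))  # 0
```

Note that `count_joins` treats spans separated by exactly one token as joined (`b_end + 1` / `b_start - 1`), not just strictly overlapping ones.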
================================================
FILE: aste/wrapper.py
================================================
import json
import os
import shutil
import sys
from argparse import Namespace
from pathlib import Path
from typing import List, Tuple, Optional
from allennlp.commands.predict import _predict
from allennlp.commands.train import train_model
from allennlp.common import Params
from fire import Fire
from pydantic import BaseModel
from tqdm import tqdm
from data_utils import Data, SentimentTriple, SplitEnum, Sentence, LabelEnum
from utils import safe_divide
class SpanModelDocument(BaseModel):
sentences: List[List[str]]
ner: List[List[Tuple[int, int, str]]]
relations: List[List[Tuple[int, int, int, int, str]]]
doc_key: str
@property
def is_valid(self) -> bool:
return len(set(map(len, [self.sentences, self.ner, self.relations]))) == 1
@classmethod
def from_sentence(cls, x: Sentence):
ner: List[Tuple[int, int, str]] = []
for t in x.triples:
ner.append((t.o_start, t.o_end, LabelEnum.opinion))
ner.append((t.t_start, t.t_end, LabelEnum.target))
ner = sorted(set(ner), key=lambda n: n[0])
relations = [
(t.o_start, t.o_end, t.t_start, t.t_end, t.label) for t in x.triples
]
return cls(
sentences=[x.tokens],
ner=[ner],
relations=[relations],
doc_key=str(x.id),
)
class SpanModelPrediction(SpanModelDocument):
predicted_ner: List[List[Tuple[int, int, LabelEnum, float, float]]] = [[]]  # default when loss_weights["ner"] == 0.0
predicted_relations: List[List[Tuple[int, int, int, int, LabelEnum, float, float]]]
def to_sentence(self) -> Sentence:
for lst in [self.sentences, self.predicted_ner, self.predicted_relations]:
assert len(lst) == 1
triples = [
SentimentTriple(o_start=os, o_end=oe, t_start=ts, t_end=te, label=label)
for os, oe, ts, te, label, value, prob in self.predicted_relations[0]
]
return Sentence(
id=int(self.doc_key),
tokens=self.sentences[0],
pos=[],
weight=1,
is_labeled=False,
triples=triples,
spans=[lst[:3] for lst in self.predicted_ner[0]],
)
class SpanModelData(BaseModel):
root: Path
data_split: SplitEnum
documents: Optional[List[SpanModelDocument]]
@classmethod
def read(cls, path: Path) -> List[SpanModelDocument]:
docs = []
with open(path) as f:
for line in f:
line = line.strip()
raw: dict = json.loads(line)
docs.append(SpanModelDocument(**raw))
return docs
def load(self):
if self.documents is None:
path = self.root / f"{self.data_split}.json"
self.documents = self.read(path)
def dump(self, path: Path, sep="\n"):
for d in self.documents:
assert d.is_valid
with open(path, "w") as f:
f.write(sep.join([d.json() for d in self.documents]))
assert all(
[a.dict() == b.dict() for a, b in zip(self.documents, self.read(path))]
)
@classmethod
def from_data(cls, x: Data):
data = cls(root=x.root, data_split=x.data_split)
data.documents = [SpanModelDocument.from_sentence(s) for s in x.sentences]
return data
class SpanModel(BaseModel):
save_dir: str
random_seed: int
path_config_base: str = "training_config/config.jsonnet"
def save_temp_data(self, path_in: str, name: str, is_test: bool = False) -> Path:
path_temp = Path(self.save_dir) / "temp_data" / f"{name}.json"
path_temp = path_temp.resolve()
path_temp.parent.mkdir(exist_ok=True, parents=True)
data = Data.load_from_full_path(path_in)
if is_test:
# SpanModel raises an error if s.triples is an empty list, so insert a dummy triple
assert data.sentences is not None
for s in data.sentences:
s.triples = [SentimentTriple.make_dummy()]
span_data = SpanModelData.from_data(data)
span_data.dump(path_temp)
return path_temp
def fit(self, path_train: str, path_dev: str):
weights_dir = Path(self.save_dir) / "weights"
weights_dir.mkdir(exist_ok=True, parents=True)
print(dict(weights_dir=weights_dir))
params = Params.from_file(
self.path_config_base,
params_overrides=dict(
random_seed=self.random_seed,
numpy_seed=self.random_seed,
pytorch_seed=self.random_seed,
train_data_path=str(self.save_temp_data(path_train, "train")),
validation_data_path=str(self.save_temp_data(path_dev, "dev")),
test_data_path=str(self.save_temp_data(path_dev, "dev")),
),
)
# Register custom modules
sys.path.append(".")
from span_model.data.dataset_readers.span_model import SpanModelReader
assert SpanModelReader is not None
train_model(params, serialization_dir=str(weights_dir))
def predict(self, path_in: str, path_out: str):
path_model = Path(self.save_dir) / "weights" / "model.tar.gz"
path_temp_in = self.save_temp_data(path_in, "pred_in", is_test=True)
path_temp_out = Path(self.save_dir) / "temp_data" / "pred_out.json"
if path_temp_out.exists():
os.remove(path_temp_out)
args = Namespace(
archive_file=str(path_model),
input_file=str(path_temp_in),
output_file=str(path_temp_out),
weights_file="",
batch_size=1,
silent=True,
cuda_device=0,
use_dataset_reader=True,
dataset_reader_choice="validation",
overrides="",
predictor="span_model",
file_friendly_logging=False,
)
# Register custom modules
sys.path.append(".")
from span_model.data.dataset_readers.span_model import SpanModelReader
from span_model.predictors.span_model import SpanModelPredictor
assert SpanModelReader is not None
assert SpanModelPredictor is not None
_predict(args)
with open(path_temp_out) as f:
preds = [SpanModelPrediction(**json.loads(line.strip())) for line in f]
data = Data(
root=Path(),
data_split=SplitEnum.test,
sentences=[p.to_sentence() for p in preds],
)
data.save_to_path(path_out)
# Micro-averaged precision/recall/F1 over exact-match triplets
@classmethod
def score(cls, path_pred: str, path_gold: str) -> dict:
pred = Data.load_from_full_path(path_pred)
gold = Data.load_from_full_path(path_gold)
assert pred.sentences is not None
assert gold.sentences is not None
assert len(pred.sentences) == len(gold.sentences)
num_pred = 0
num_gold = 0
num_correct = 0
for i in range(len(gold.sentences)):
num_pred += len(pred.sentences[i].triples)
num_gold += len(gold.sentences[i].triples)
for p in pred.sentences[i].triples:
for g in gold.sentences[i].triples:
if p.dict() == g.dict():
num_correct += 1
precision = safe_divide(num_correct, num_pred)
recall = safe_divide(num_correct, num_gold)
info = dict(
path_pred=path_pred,
path_gold=path_gold,
precision=precision,
recall=recall,
score=safe_divide(2 * precision * recall, precision + recall),
)
return info
def run_score(path_pred: str, path_gold: str) -> dict:
return SpanModel.score(path_pred, path_gold)
def run_train(path_train: str, path_dev: str, save_dir: str, random_seed: int):
print(dict(run_train=locals()))
if Path(save_dir).exists():
return
model = SpanModel(save_dir=save_dir, random_seed=random_seed)
model.fit(path_train, path_dev)
def run_train_many(save_dir_template: str, random_seeds: List[int], **kwargs):
for seed in tqdm(random_seeds):
save_dir = save_dir_template.format(seed)
run_train(save_dir=save_dir, random_seed=seed, **kwargs)
def run_eval(path_test: str, save_dir: str):
print(dict(run_eval=locals()))
model = SpanModel(save_dir=save_dir, random_seed=0)
path_pred = str(Path(save_dir) / "pred.txt")
model.predict(path_test, path_pred)
results = model.score(path_pred, path_test)
print(results)
return results
def run_eval_many(save_dir_template: str, random_seeds: List[int], **kwargs):
results = []
for seed in tqdm(random_seeds):
save_dir = save_dir_template.format(seed)
results.append(run_eval(save_dir=save_dir, **kwargs))
precision = sum(r["precision"] for r in results) / len(random_seeds)
recall = sum(r["recall"] for r in results) / len(random_seeds)
score = safe_divide(2 * precision * recall, precision + recall)
print(dict(precision=precision, recall=recall, score=score))
if __name__ == "__main__":
Fire()
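The `SpanModel.score` method above computes micro-averaged precision, recall, and F1, counting a predicted triplet as correct only when it exactly matches a gold triplet in the same sentence. A standalone sketch of that arithmetic, using toy tuples in place of the real `SentimentTriple` objects (the span values below are invented for illustration):

```python
from typing import Dict, List, Tuple

# Toy stand-in for a sentiment triplet: (opinion span, target span, polarity)
Triplet = Tuple[Tuple[int, int], Tuple[int, int], str]

def safe_divide(a: float, b: float) -> float:
    # Mirrors aste/utils.py: avoid ZeroDivisionError for empty predictions/gold
    if a == 0 or b == 0:
        return 0
    return a / b

def score(pred: List[List[Triplet]], gold: List[List[Triplet]]) -> Dict[str, float]:
    # Same micro-averaged counting as SpanModel.score: totals are summed
    # over all sentences before computing precision/recall.
    num_pred = sum(len(s) for s in pred)
    num_gold = sum(len(s) for s in gold)
    num_correct = sum(
        1 for p_sent, g_sent in zip(pred, gold) for p in p_sent if p in g_sent
    )
    precision = safe_divide(num_correct, num_pred)
    recall = safe_divide(num_correct, num_gold)
    f1 = safe_divide(2 * precision * recall, precision + recall)
    return dict(precision=precision, recall=recall, score=f1)

pred = [[((15, 15), (16, 17), "POS")],
        [((0, 0), (2, 3), "POS"), ((5, 5), (7, 7), "NEG")]]
gold = [[((15, 15), (16, 17), "POS")],
        [((0, 0), (2, 3), "POS")]]
print(score(pred, gold))  # precision 2/3, recall 1.0, F1 0.8
```

As in the original, there is no partial credit: a triplet with the right spans but the wrong polarity, or spans off by one token, counts as fully wrong.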
================================================
FILE: demo.ipynb
================================================
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "izKXA4b6-oIv",
"outputId": "1b436740-e1e0-4e01-e3f5-6325ce29907a"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Cloning into 'Span-ASTE'...\n",
"remote: Enumerating objects: 191, done.\u001B[K\n",
"remote: Counting objects: 100% (97/97), done.\u001B[K\n",
"remote: Compressing objects: 100% (59/59), done.\u001B[K\n",
"remote: Total 191 (delta 58), reused 61 (delta 36), pack-reused 94\u001B[K\n",
"Receiving objects: 100% (191/191), 626.87 KiB | 23.22 MiB/s, done.\n",
"Resolving deltas: 100% (80/80), done.\n",
"Note: checking out 'f53ec3c'.\n",
"\n",
"You are in 'detached HEAD' state. You can look around, make experimental\n",
"changes and commit them, and you can discard any commits you make in this\n",
"state without impacting any branches by performing another checkout.\n",
"\n",
"If you want to create a new branch to retain commits you create, you may\n",
"do so (now or later) by using -b with the checkout command again. Example:\n",
"\n",
" git checkout -b <new-branch-name>\n",
"\n",
"HEAD is now at f53ec3c Add command-line scoring instructions in README.md\n",
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting Cython==0.29.21\n",
" Downloading Cython-0.29.21-cp37-cp37m-manylinux1_x86_64.whl (2.0 MB)\n",
"\u001B[K |████████████████████████████████| 2.0 MB 18.3 MB/s \n",
"\u001B[?25hCollecting PYEVALB==0.1.3\n",
" Downloading PYEVALB-0.1.3-py3-none-any.whl (13 kB)\n",
"Collecting allennlp-models==1.2.2\n",
" Downloading allennlp_models-1.2.2-py3-none-any.whl (353 kB)\n",
"\u001B[K |████████████████████████████████| 353 kB 69.5 MB/s \n",
"\u001B[?25hCollecting allennlp==1.2.2\n",
" Downloading allennlp-1.2.2-py3-none-any.whl (505 kB)\n",
"\u001B[K |████████████████████████████████| 505 kB 39.7 MB/s \n",
"\u001B[?25hCollecting botocore==1.19.46\n",
" Downloading botocore-1.19.46-py2.py3-none-any.whl (7.2 MB)\n",
"\u001B[K |████████████████████████████████| 7.2 MB 34.0 MB/s \n",
"\u001B[?25hCollecting fire==0.3.1\n",
" Downloading fire-0.3.1.tar.gz (81 kB)\n",
"\u001B[K |████████████████████████████████| 81 kB 10.6 MB/s \n",
"\u001B[?25hCollecting nltk==3.6.6\n",
" Downloading nltk-3.6.6-py3-none-any.whl (1.5 MB)\n",
"\u001B[K |████████████████████████████████| 1.5 MB 57.0 MB/s \n",
"\u001B[?25hCollecting numpy==1.21.5\n",
" Downloading numpy-1.21.5-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)\n",
"\u001B[K |████████████████████████████████| 15.7 MB 49.4 MB/s \n",
"\u001B[?25hCollecting pandas==1.1.5\n",
" Downloading pandas-1.1.5-cp37-cp37m-manylinux1_x86_64.whl (9.5 MB)\n",
"\u001B[K |████████████████████████████████| 9.5 MB 53.1 MB/s \n",
"\u001B[?25hCollecting pydantic==1.6.2\n",
" Downloading pydantic-1.6.2-cp37-cp37m-manylinux2014_x86_64.whl (8.6 MB)\n",
"\u001B[K |████████████████████████████████| 8.6 MB 28.2 MB/s \n",
"\u001B[?25hCollecting scikit-learn==0.22.2.post1\n",
" Downloading scikit_learn-0.22.2.post1-cp37-cp37m-manylinux1_x86_64.whl (7.1 MB)\n",
"\u001B[K |████████████████████████████████| 7.1 MB 48.9 MB/s \n",
"\u001B[?25hCollecting torch==1.7.0\n",
" Downloading torch-1.7.0-cp37-cp37m-manylinux1_x86_64.whl (776.7 MB)\n",
"\u001B[K |████████████████████████████████| 776.7 MB 4.4 kB/s \n",
"\u001B[?25hCollecting torchvision==0.8.1\n",
" Downloading torchvision-0.8.1-cp37-cp37m-manylinux1_x86_64.whl (12.7 MB)\n",
"\u001B[K |████████████████████████████████| 12.7 MB 41.9 MB/s \n",
"\u001B[?25hCollecting transformers==3.4.0\n",
" Downloading transformers-3.4.0-py3-none-any.whl (1.3 MB)\n",
"\u001B[K |████████████████████████████████| 1.3 MB 58.1 MB/s \n",
"\u001B[?25hCollecting boto3==1.16.46\n",
" Downloading boto3-1.16.46-py2.py3-none-any.whl (130 kB)\n",
"\u001B[K |████████████████████████████████| 130 kB 75.8 MB/s \n",
"\u001B[?25hCollecting pytablewriter>=0.10.2\n",
" Downloading pytablewriter-0.64.2-py3-none-any.whl (106 kB)\n",
"\u001B[K |████████████████████████████████| 106 kB 71.7 MB/s \n",
"\u001B[?25hCollecting word2number>=1.1\n",
" Downloading word2number-1.1.zip (9.7 kB)\n",
"Collecting ftfy\n",
" Downloading ftfy-6.1.1-py3-none-any.whl (53 kB)\n",
"\u001B[K |████████████████████████████████| 53 kB 2.1 MB/s \n",
"\u001B[?25hCollecting conllu==4.2.1\n",
" Downloading conllu-4.2.1-py2.py3-none-any.whl (14 kB)\n",
"Collecting py-rouge==1.1\n",
" Downloading py_rouge-1.1-py3-none-any.whl (56 kB)\n",
"\u001B[K |████████████████████████████████| 56 kB 3.5 MB/s \n",
"\u001B[?25hCollecting overrides==3.1.0\n",
" Downloading overrides-3.1.0.tar.gz (11 kB)\n",
"Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (3.1.0)\n",
"Collecting jsonpickle\n",
" Downloading jsonpickle-2.2.0-py2.py3-none-any.whl (39 kB)\n",
"Requirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (3.6.4)\n",
"Collecting spacy<2.4,>=2.1.0\n",
" Downloading spacy-2.3.8-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)\n",
"\u001B[K |████████████████████████████████| 4.8 MB 50.8 MB/s \n",
"\u001B[?25hRequirement already satisfied: tqdm>=4.19 in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (4.64.1)\n",
"Collecting filelock<3.1,>=3.0\n",
" Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)\n",
"Requirement already satisfied: requests>=2.18 in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (2.23.0)\n",
"Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from allennlp==1.2.2->-r requirements.txt (line 4)) (1.7.3)\n",
"Collecting tensorboardX>=1.2\n",
" Downloading tensorboardX-2.5.1-py2.py3-none-any.whl (125 kB)\n",
"\u001B[K |████████████████████████████████| 125 kB 58.4 MB/s \n",
"\u001B[?25hCollecting jsonnet>=0.10.0\n",
" Downloading jsonnet-0.19.1.tar.gz (593 kB)\n",
"\u001B[K |████████████████████████████████| 593 kB 72.4 MB/s \n",
"\u001B[?25hCollecting jmespath<1.0.0,>=0.7.1\n",
" Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)\n",
"Collecting urllib3<1.27,>=1.25.4\n",
" Downloading urllib3-1.26.13-py2.py3-none-any.whl (140 kB)\n",
"\u001B[K |████████████████████████████████| 140 kB 67.8 MB/s \n",
"\u001B[?25hRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.7/dist-packages (from botocore==1.19.46->-r requirements.txt (line 5)) (2.8.2)\n",
"Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from fire==0.3.1->-r requirements.txt (line 6)) (1.15.0)\n",
"Requirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from fire==0.3.1->-r requirements.txt (line 6)) (2.1.0)\n",
"Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from nltk==3.6.6->-r requirements.txt (line 7)) (1.2.0)\n",
"Requirement already satisfied: regex>=2021.8.3 in /usr/local/lib/python3.7/dist-packages (from nltk==3.6.6->-r requirements.txt (line 7)) (2022.6.2)\n",
"Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from nltk==3.6.6->-r requirements.txt (line 7)) (7.1.2)\n",
"Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas==1.1.5->-r requirements.txt (line 9)) (2022.6)\n",
"Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 12)) (4.1.1)\n",
"Collecting dataclasses\n",
" Downloading dataclasses-0.6-py3-none-any.whl (14 kB)\n",
"Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 12)) (0.16.0)\n",
"Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.8.1->-r requirements.txt (line 13)) (7.1.2)\n",
"Requirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 14)) (3.19.6)\n",
"Collecting sacremoses\n",
" Downloading sacremoses-0.0.53.tar.gz (880 kB)\n",
"\u001B[K |████████████████████████████████| 880 kB 67.9 MB/s \n",
"\u001B[?25hCollecting tokenizers==0.9.2\n",
" Downloading tokenizers-0.9.2-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)\n",
"\u001B[K |████████████████████████████████| 2.9 MB 45.8 MB/s \n",
"\u001B[?25hCollecting sentencepiece!=0.1.92\n",
" Downloading sentencepiece-0.1.97-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\n",
"\u001B[K |████████████████████████████████| 1.3 MB 62.0 MB/s \n",
"\u001B[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 14)) (21.3)\n",
"Collecting s3transfer<0.4.0,>=0.3.0\n",
" Downloading s3transfer-0.3.7-py2.py3-none-any.whl (73 kB)\n",
"\u001B[K |████████████████████████████████| 73 kB 2.4 MB/s \n",
"\u001B[?25hCollecting typepy[datetime]<2,>=1.2.0\n",
" Downloading typepy-1.3.0-py3-none-any.whl (31 kB)\n",
"Collecting pathvalidate<3,>=2.3.0\n",
" Downloading pathvalidate-2.5.2-py3-none-any.whl (20 kB)\n",
"Collecting tabledata<2,>=1.3.0\n",
" Downloading tabledata-1.3.0-py3-none-any.whl (11 kB)\n",
"Collecting mbstrdecoder<2,>=1.0.0\n",
" Downloading mbstrdecoder-1.1.1-py3-none-any.whl (7.7 kB)\n",
"Collecting DataProperty<2,>=0.55.0\n",
" Downloading DataProperty-0.55.0-py3-none-any.whl (26 kB)\n",
"Collecting tcolorpy<1,>=0.0.5\n",
" Downloading tcolorpy-0.1.2-py3-none-any.whl (7.9 kB)\n",
"Requirement already satisfied: setuptools>=38.3.0 in /usr/local/lib/python3.7/dist-packages (from pytablewriter>=0.10.2->PYEVALB==0.1.3->-r requirements.txt (line 2)) (57.4.0)\n",
"Requirement already satisfied: chardet<6,>=3.0.4 in /usr/local/lib/python3.7/dist-packages (from mbstrdecoder<2,>=1.0.0->pytablewriter>=0.10.2->PYEVALB==0.1.3->-r requirements.txt (line 2)) (3.0.4)\n",
"Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.18->allennlp==1.2.2->-r requirements.txt (line 4)) (2.10)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.18->allennlp==1.2.2->-r requirements.txt (line 4)) (2022.9.24)\n",
"Collecting urllib3<1.27,>=1.25.4\n",
" Downloading urllib3-1.25.11-py2.py3-none-any.whl (127 kB)\n",
"\u001B[K |████████████████████████████████| 127 kB 73.6 MB/s \n",
"\u001B[?25hRequirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (0.10.1)\n",
"Collecting catalogue<1.1.0,>=0.0.7\n",
" Downloading catalogue-1.0.2-py2.py3-none-any.whl (16 kB)\n",
"Collecting srsly<1.1.0,>=1.0.2\n",
" Downloading srsly-1.0.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (208 kB)\n",
"\u001B[K |████████████████████████████████| 208 kB 71.2 MB/s \n",
"\u001B[?25hRequirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (2.0.7)\n",
"Collecting thinc<7.5.0,>=7.4.1\n",
" Downloading thinc-7.4.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.0 MB)\n",
"\u001B[K |████████████████████████████████| 1.0 MB 58.3 MB/s \n",
"\u001B[?25hRequirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (3.0.8)\n",
"Collecting plac<1.2.0,>=0.9.6\n",
" Downloading plac-1.1.3-py2.py3-none-any.whl (20 kB)\n",
"Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (1.0.9)\n",
"Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (0.7.9)\n",
"Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from catalogue<1.1.0,>=0.0.7->spacy<2.4,>=2.1.0->allennlp==1.2.2->-r requirements.txt (line 4)) (3.10.0)\n",
"Requirement already satisfied: wcwidth>=0.2.5 in /usr/local/lib/python3.7/dist-packages (from ftfy->allennlp-models==1.2.2->-r requirements.txt (line 3)) (0.2.5)\n",
"Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py->allennlp==1.2.2->-r requirements.txt (line 4)) (1.5.2)\n",
"Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from jsonpickle->allennlp==1.2.2->-r requirements.txt (line 4)) (4.13.0)\n",
"Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers==3.4.0->-r requirements.txt (line 14)) (3.0.9)\n",
"Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (1.4.1)\n",
"Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (1.11.0)\n",
"Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (0.7.1)\n",
"Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (9.0.0)\n",
"Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from pytest->allennlp==1.2.2->-r requirements.txt (line 4)) (22.1.0)\n",
"Building wheels for collected packages: fire, overrides, jsonnet, word2number, sacremoses\n",
" Building wheel for fire (setup.py) ... \u001B[?25l\u001B[?25hdone\n",
" Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111023 sha256=867877500d51fc1466978cad4ed0d27cb550966ead11c09f6ebb1c29d326c4d7\n",
" Stored in directory: /root/.cache/pip/wheels/95/38/e1/8b62337a8ecf5728bdc1017e828f253f7a9cf25db999861bec\n",
" Building wheel for overrides (setup.py) ... \u001B[?25l\u001B[?25hdone\n",
" Created wheel for overrides: filename=overrides-3.1.0-py3-none-any.whl size=10187 sha256=f29e5574b1a87b6159f1a10bd1d4884e547a8f0c981fdf36b32f876ba81e67fc\n",
" Stored in directory: /root/.cache/pip/wheels/3a/0d/38/01a9bc6e20dcfaf0a6a7b552d03137558ba1c38aea47644682\n",
" Building wheel for jsonnet (setup.py) ... \u001B[?25l\u001B[?25hdone\n",
" Created wheel for jsonnet: filename=jsonnet-0.19.1-cp37-cp37m-linux_x86_64.whl size=3997237 sha256=65b35b399530104bd0f8d4c9e9854bf5b1ea5fa5ef562a7a2991994f048d31c4\n",
" Stored in directory: /root/.cache/pip/wheels/03/6b/48/a168ed5f8d01c50268605eff341c29126286763607bf707e3b\n",
" Building wheel for word2number (setup.py) ... \u001B[?25l\u001B[?25hdone\n",
" Created wheel for word2number: filename=word2number-1.1-py3-none-any.whl size=5582 sha256=db776042f55344c643ceb582cc4583b9f7af2c165fd065777f58e72b1fbb68e4\n",
" Stored in directory: /root/.cache/pip/wheels/4b/c3/77/a5f48aeb0d3efb7cd5ad61cbd3da30bbf9ffc9662b07c9f879\n",
" Building wheel for sacremoses (setup.py) ... \u001B[?25l\u001B[?25hdone\n",
" Created wheel for sacremoses: filename=sacremoses-0.0.53-py3-none-any.whl size=895260 sha256=bf922fe785a540d21b8a0d5061ff77f1b7f7bdf172b948ea4af6e6b3817594b8\n",
" Stored in directory: /root/.cache/pip/wheels/87/39/dd/a83eeef36d0bf98e7a4d1933a4ad2d660295a40613079bafc9\n",
"Successfully built fire overrides jsonnet word2number sacremoses\n",
"Installing collected packages: mbstrdecoder, urllib3, typepy, numpy, jmespath, srsly, plac, catalogue, botocore, tokenizers, thinc, sentencepiece, sacremoses, s3transfer, filelock, DataProperty, dataclasses, transformers, torch, tensorboardX, tcolorpy, tabledata, spacy, scikit-learn, pathvalidate, overrides, nltk, jsonpickle, jsonnet, boto3, word2number, pytablewriter, py-rouge, ftfy, conllu, allennlp, torchvision, PYEVALB, pydantic, pandas, fire, Cython, allennlp-models\n",
" Attempting uninstall: urllib3\n",
" Found existing installation: urllib3 1.24.3\n",
" Uninstalling urllib3-1.24.3:\n",
" Successfully uninstalled urllib3-1.24.3\n",
" Attempting uninstall: numpy\n",
" Found existing installation: numpy 1.21.6\n",
" Uninstalling numpy-1.21.6:\n",
" Successfully uninstalled numpy-1.21.6\n",
" Attempting uninstall: srsly\n",
" Found existing installation: srsly 2.4.5\n",
" Uninstalling srsly-2.4.5:\n",
" Successfully uninstalled srsly-2.4.5\n",
" Attempting uninstall: catalogue\n",
" Found existing installation: catalogue 2.0.8\n",
" Uninstalling catalogue-2.0.8:\n",
" Successfully uninstalled catalogue-2.0.8\n",
" Attempting uninstall: thinc\n",
" Found existing installation: thinc 8.1.5\n",
" Uninstalling thinc-8.1.5:\n",
" Successfully uninstalled thinc-8.1.5\n",
" Attempting uninstall: filelock\n",
" Found existing installation: filelock 3.8.0\n",
" Uninstalling filelock-3.8.0:\n",
" Successfully uninstalled filelock-3.8.0\n",
" Attempting uninstall: torch\n",
" Found existing installation: torch 1.12.1+cu113\n",
" Uninstalling torch-1.12.1+cu113:\n",
" Successfully uninstalled torch-1.12.1+cu113\n",
" Attempting uninstall: spacy\n",
" Found existing installation: spacy 3.4.3\n",
" Uninstalling spacy-3.4.3:\n",
" Successfully uninstalled spacy-3.4.3\n",
" Attempting uninstall: scikit-learn\n",
" Found existing installation: scikit-learn 1.0.2\n",
" Uninstalling scikit-learn-1.0.2:\n",
" Successfully uninstalled scikit-learn-1.0.2\n",
" Attempting uninstall: nltk\n",
" Found existing installation: nltk 3.7\n",
" Uninstalling nltk-3.7:\n",
" Successfully uninstalled nltk-3.7\n",
" Attempting uninstall: torchvision\n",
" Found existing installation: torchvision 0.13.1+cu113\n",
" Uninstalling torchvision-0.13.1+cu113:\n",
" Successfully uninstalled torchvision-0.13.1+cu113\n",
" Attempting uninstall: pydantic\n",
" Found existing installation: pydantic 1.10.2\n",
" Uninstalling pydantic-1.10.2:\n",
" Successfully uninstalled pydantic-1.10.2\n",
" Attempting uninstall: pandas\n",
" Found existing installation: pandas 1.3.5\n",
" Uninstalling pandas-1.3.5:\n",
" Successfully uninstalled pandas-1.3.5\n",
" Attempting uninstall: Cython\n",
" Found existing installation: Cython 0.29.32\n",
" Uninstalling Cython-0.29.32:\n",
" Successfully uninstalled Cython-0.29.32\n",
"\u001B[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"yellowbrick 1.5 requires scikit-learn>=1.0.0, but you have scikit-learn 0.22.2.post1 which is incompatible.\n",
"torchtext 0.13.1 requires torch==1.12.1, but you have torch 1.7.0 which is incompatible.\n",
"torchaudio 0.12.1+cu113 requires torch==1.12.1, but you have torch 1.7.0 which is incompatible.\n",
"imbalanced-learn 0.8.1 requires scikit-learn>=0.24, but you have scikit-learn 0.22.2.post1 which is incompatible.\n",
"fastai 2.7.10 requires torchvision>=0.8.2, but you have torchvision 0.8.1 which is incompatible.\n",
"en-core-web-sm 3.4.1 requires spacy<3.5.0,>=3.4.0, but you have spacy 2.3.8 which is incompatible.\n",
"confection 0.0.3 requires pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4, but you have pydantic 1.6.2 which is incompatible.\n",
"confection 0.0.3 requires srsly<3.0.0,>=2.4.0, but you have srsly 1.0.6 which is incompatible.\u001B[0m\n",
"Successfully installed Cython-0.29.21 DataProperty-0.55.0 PYEVALB-0.1.3 allennlp-1.2.2 allennlp-models-1.2.2 boto3-1.16.46 botocore-1.19.46 catalogue-1.0.2 conllu-4.2.1 dataclasses-0.6 filelock-3.0.12 fire-0.3.1 ftfy-6.1.1 jmespath-0.10.0 jsonnet-0.19.1 jsonpickle-2.2.0 mbstrdecoder-1.1.1 nltk-3.6.6 numpy-1.21.5 overrides-3.1.0 pandas-1.1.5 pathvalidate-2.5.2 plac-1.1.3 py-rouge-1.1 pydantic-1.6.2 pytablewriter-0.64.2 s3transfer-0.3.7 sacremoses-0.0.53 scikit-learn-0.22.2.post1 sentencepiece-0.1.97 spacy-2.3.8 srsly-1.0.6 tabledata-1.3.0 tcolorpy-0.1.2 tensorboardX-2.5.1 thinc-7.4.6 tokenizers-0.9.2 torch-1.7.0 torchvision-0.8.1 transformers-3.4.0 typepy-1.3.0 urllib3-1.25.11 word2number-1.1\n",
"Found existing installation: dataclasses 0.6\n",
"Uninstalling dataclasses-0.6:\n",
" Successfully uninstalled dataclasses-0.6\n",
"Archive: data.zip\n",
" creating: aste/data/\n",
" creating: aste/data/triplet_data/\n",
" creating: aste/data/triplet_data/14lap/\n",
" inflating: aste/data/triplet_data/14lap/dev.txt \n",
" inflating: aste/data/triplet_data/14lap/test.txt \n",
" inflating: aste/data/triplet_data/14lap/train.txt \n",
" creating: aste/data/triplet_data/14res/\n",
" inflating: aste/data/triplet_data/14res/dev.txt \n",
" inflating: aste/data/triplet_data/14res/test.txt \n",
" inflating: aste/data/triplet_data/14res/train.txt \n",
" creating: aste/data/triplet_data/15res/\n",
" inflating: aste/data/triplet_data/15res/dev.txt \n",
" inflating: aste/data/triplet_data/15res/test.txt \n",
" inflating: aste/data/triplet_data/15res/train.txt \n",
" creating: aste/data/triplet_data/16res/\n",
" inflating: aste/data/triplet_data/16res/dev.txt \n",
" inflating: aste/data/triplet_data/16res/test.txt \n",
" inflating: aste/data/triplet_data/16res/train.txt \n"
]
}
],
"source": [
"!git clone https://github.com/chiayewken/Span-ASTE.git\n",
"!cd Span-ASTE && git checkout f53ec3c\n",
"!cp -a Span-ASTE/* .\n",
"!echo boto3==1.16.46 >> requirements.txt\n",
"!bash setup.sh"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "-pTnCgDxcSQ5",
"outputId": "a461cd5d-5ed6-4c38-9144-b4149ee62952"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tokens: ['I', 'charge', 'it', 'at', 'night', 'and', 'skip', 'taking', 'the', 'cord', 'with', 'me', 'because', 'of', 'the', 'good', 'battery', 'life', '.']\n",
"target: (16, 17)\n",
"opinion: (15, 15)\n",
"label: LabelEnum.positive\n",
"\n",
"tokens: ['it', 'is', 'of', 'high', 'quality', ',', 'has', 'a', 'killer', 'GUI', ',', 'is', 'extremely', 'stable', ',', 'is', 'highly', 'expandable', ',', 'is', 'bundled', 'with', 'lots', 'of', 'very', 'good', 'applications', ',', 'is', 'easy', 'to', 'use', ',', 'and', 'is', 'absolutely', 'gorgeous', '.']\n",
"target: (4, 4)\n",
"opinion: (3, 3)\n",
"label: LabelEnum.positive\n",
"target: (9, 9)\n",
"opinion: (8, 8)\n",
"label: LabelEnum.positive\n",
"target: (26, 26)\n",
"opinion: (25, 25)\n",
"label: LabelEnum.positive\n",
"target: (31, 31)\n",
"opinion: (29, 29)\n",
"label: LabelEnum.positive\n",
"\n",
"tokens: ['Easy', 'to', 'start', 'up', 'and', 'does', 'not', 'overheat', 'as', 'much', 'as', 'other', 'laptops', '.']\n",
"target: (2, 3)\n",
"opinion: (0, 0)\n",
"label: LabelEnum.positive\n",
"\n"
]
}
],
"source": [
"#@title Data Exploration\n",
"data_name = \"14lap\" #@param [\"14lap\", \"14res\", \"15res\", \"16res\"]\n",
"\n",
"import sys\n",
"sys.path.append(\"aste\")\n",
"from data_utils import Data\n",
"\n",
"path = f\"aste/data/triplet_data/{data_name}/train.txt\"\n",
"data = Data.load_from_full_path(path)\n",
"\n",
"for s in data.sentences[:3]:\n",
" print(\"tokens:\", s.tokens)\n",
" for t in s.triples:\n",
" print(\"target:\", (t.t_start, t.t_end))\n",
" print(\"opinion:\", (t.o_start, t.o_end))\n",
" print(\"label:\", t.label)\n",
" print()"
]
},
{
"cell_type": "code",
"source": [
"# Download pretrained SpanModel weights\n",
"from pathlib import Path\n",
"template = \"https://github.com/chiayewken/Span-ASTE/releases/download/v1.0.0/{}.tar\"\n",
"url = template.format(data_name)\n",
"model_tar = Path(url).name\n",
"model_dir = Path(url).stem\n",
"\n",
"!wget -nc $url\n",
"!tar -xf $model_tar"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "3LmrJekiPHpQ",
"outputId": "0f62b4a9-8dd1-4363-b66f-1ae1661e6cab"
},
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"--2022-11-30 07:39:51-- https://github.com/chiayewken/Span-ASTE/releases/download/v1.0.0/14lap.tar\n",
"Resolving github.com (github.com)... 20.205.243.166\n",
"Connecting to github.com (github.com)|20.205.243.166|:443... connected.\n",
"HTTP request sent, awaiting response... 302 Found\n",
"Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/371216048/70bb2013-2773-44c0-b0d9-8a2ec8e38515?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221130%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221130T073951Z&X-Amz-Expires=300&X-Amz-Signature=10727051f65ed91031b2e1e8b05cf44384aae0bdafd0171b1655ca6c72249494&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=371216048&response-content-disposition=attachment%3B%20filename%3D14lap.tar&response-content-type=application%2Foctet-stream [following]\n",
"--2022-11-30 07:39:51-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/371216048/70bb2013-2773-44c0-b0d9-8a2ec8e38515?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221130%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221130T073951Z&X-Amz-Expires=300&X-Amz-Signature=10727051f65ed91031b2e1e8b05cf44384aae0bdafd0171b1655ca6c72249494&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=371216048&response-content-disposition=attachment%3B%20filename%3D14lap.tar&response-content-type=application%2Foctet-stream\n",
"Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
"Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 409068544 (390M) [application/octet-stream]\n",
"Saving to: ‘14lap.tar’\n",
"\n",
"14lap.tar 100%[===================>] 390.12M 2.13MB/s in 2m 21s \n",
"\n",
"2022-11-30 07:42:13 (2.76 MB/s) - ‘14lap.tar’ saved [409068544/409068544]\n",
"\n"
]
}
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 488,
"referenced_widgets": [
"f61ea767ae064779b77f7d206a90b765",
"382e7815e6314a798943a7f71eab1dbd",
"b3f970d5f20748d091d13d1d37e712e4",
"3c925c25029e4e5a9515b525a819cb31",
"17148c3a40ae4572923f16f249179b9b",
"9cc1d9231ee34d33b65a88c4de3b213f",
"9afa9b48d00748739422b2e32763e57d",
"f6d16c6d56974ec88f55333b65e0f16a",
"e776ac7bd605497395d6cf45648c46e0",
"87d171d5ff4d48bea3781c4185c53cd3",
"ac44150d1166470f944a1d0effeae80b",
"916c3c664f2348b5b608c368090945ac",
"f00d08843e1a4e9e911a8d9fd11f04d1",
"de26b5b4f1be42cba2951f528f7715ba",
"d5d290cde75d463ba7b9b220eed79ca7",
"e30953017bae40849979501dbb4647bc",
"08b2e55d6325474da282c48e0f959a56",
"542e865145b547ffbe61dec7fb94bab7",
"1ac6bf7c4d7d4fbd8d1c85ec426854db",
"808d2ba240c241e9a6989a03c4134a33",
"886551311e7d4ee9823ecd34dfc82811",
"988ec5ae620d4d67b6749ee92a2cb560",
"9463e5ed29e74f05869715f4669d1fa5",
"5e57195d10d7414c9f418af4e7eca84a",
"e4bb3941e21d45f2b2327690b4d589bf",
"321f61ce086b4ace9260a2d55afbdefa",
"94f8b1fb0c764cfa9af078bd238623d4",
"621130d9d8cf468abe5709a85f07d106",
"1453b743641b45758303e91bfedafe03",
"a7aeaf582d15403dbe447d57789b691f",
"d1d0b028b6c04d59ada3bdfb7efde504",
"0a203635e4a54efd96b85633164a067d",
"c55350fd925a454eae62f9da4ed21962",
"21dd3d5e1468453ab2f81d5e184a990e",
"a18149514fe94397abd4bcafd4df0807",
"c73b50f20f5f4b6c8ccca8e1ec61e738",
"fb17189f06074ca39d8251ea2ece15f3",
"b379be7248c84e88b3a5bc8362e56e2f",
"7e85b97fbf8642ecb7613a8b3646b6dc",
"f13fd92805504c0dae5b22e404c256fa",
"86c67fc1ac0d47bbace0fe3b6f24c7ce",
"46568a6cac834c86854b6e41c7e7219a",
"fd38ef9382a6488a8be23d5bdb1fb533",
"e01ecdf66c3143809825cbbad4aaeebb"
]
},
"id": "r3i4rnIhapWe",
"outputId": "804a7c34-2089-4dab-b736-e2f92ba30f94"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"Downloading: 0%| | 0.00/433 [00:00<?, ?B/s]"
],
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "f61ea767ae064779b77f7d206a90b765"
}
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"Downloading: 0%| | 0.00/232k [00:00<?, ?B/s]"
],
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "916c3c664f2348b5b608c368090945ac"
}
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"Downloading: 0%| | 0.00/466k [00:00<?, ?B/s]"
],
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "9463e5ed29e74f05869715f4669d1fa5"
}
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"################################################################################\n",
"################################################################################\n"
]
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"Downloading: 0%| | 0.00/440M [00:00<?, ?B/s]"
],
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "21dd3d5e1468453ab2f81d5e184a990e"
}
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"WARNING:allennlp.nn.initializers:Did not use initialization regex that was passed: .*weight_matrix\n",
"WARNING:allennlp.nn.initializers:Did not use initialization regex that was passed: .*weight_matrix\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"{'span_model_unused_keys': dict_keys(['serialization_dir'])}\n",
"{'locals': ('span_extractor_type', 'endpoint')}\n",
"{'locals': ('use_span_width_embeds', True)}\n",
"{'ner_loss_fn': CrossEntropyLoss()}\n",
"{'unused_keys': dict_keys([])}\n",
"{'locals': {'self': ProperRelationExtractor(), 'make_feedforward': <function SpanModel.__init__.<locals>.make_feedforward at 0x7f26debbe680>, 'span_emb_dim': 1556, 'feature_size': 20, 'spans_per_word': 0.5, 'positive_label_weight': 1.0, 'regularizer': None, 'use_distance_embeds': True, 'use_pruning': True, 'kwargs': {}, 'vocab': Vocabulary with namespaces: None__relation_labels, Size: 3 || None__ner_labels, Size: 3 || Non Padded Namespaces: {'*tags', '*labels'}, '__class__': <class 'span_model.models.relation_proper.ProperRelationExtractor'>}}\n",
"{'token_emb_dim': 768, 'span_emb_dim': 1556, 'relation_scorer_dim': 3240}\n",
"{'relation_loss_fn': CrossEntropyLoss()}\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"reading instances: 1it [00:00, 350.58it/s]\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"{'target': 'Windows 8', 'opinion': 'Did not enjoy', 'sentiment': <LabelEnum.negative: 'NEG'>}\n",
"\n",
"{'target': 'touchscreen functions', 'opinion': 'Did not enjoy', 'sentiment': <LabelEnum.negative: 'NEG'>}\n",
"\n",
"{'target': 'Windows 8', 'opinion': 'new', 'sentiment': <LabelEnum.neutral: 'NEU'>}\n"
]
}
],
"source": [
"# Use pretrained SpanModel weights for prediction\n",
"import sys\n",
"sys.path.append(\"aste\")\n",
"from pathlib import Path\n",
"from data_utils import Data, Sentence, SplitEnum\n",
"from wrapper import SpanModel\n",
"\n",
"def predict_sentence(text: str, model: SpanModel) -> Sentence:\n",
" path_in = \"temp_in.txt\"\n",
" path_out = \"temp_out.txt\"\n",
" sent = Sentence(tokens=text.split(), triples=[], pos=[], is_labeled=False, weight=1, id=0)\n",
" data = Data(root=Path(), data_split=SplitEnum.test, sentences=[sent])\n",
" data.save_to_path(path_in)\n",
" model.predict(path_in, path_out)\n",
" data = Data.load_from_full_path(path_out)\n",
" return data.sentences[0]\n",
"\n",
"text = \"Did not enjoy the new Windows 8 and touchscreen functions .\"\n",
"model = SpanModel(save_dir=model_dir, random_seed=0)\n",
"sent = predict_sentence(text, model)\n",
"\n",
"for t in sent.triples:\n",
" target = \" \".join(sent.tokens[t.t_start:t.t_end+1])\n",
" opinion = \" \".join(sent.tokens[t.o_start:t.o_end+1])\n",
" print()\n",
" print(dict(target=target, opinion=opinion, sentiment=t.label))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"collapsed": true,
"id": "srSNwqUz-39x",
"outputId": "9a34cc00-477f-4002-c8ec-e357284c2bc5"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"{'weights_dir': PosixPath('outputs/14lap/seed_4/weights')}\n",
"2022-11-30 03:01:13,590 - INFO - allennlp.common.params - random_seed = 4\n",
"2022-11-30 03:01:13,592 - INFO - allennlp.common.params - numpy_seed = 4\n",
"2022-11-30 03:01:13,596 - INFO - allennlp.common.params - pytorch_seed = 4\n",
"2022-11-30 03:01:13,599 - INFO - allennlp.common.checks - Pytorch version: 1.7.0\n",
"2022-11-30 03:01:13,600 - INFO - allennlp.common.params - type = default\n",
"2022-11-30 03:01:13,604 - INFO - allennlp.common.params - dataset_reader.type = span_model\n",
"2022-11-30 03:01:13,606 - INFO - allennlp.common.params - dataset_reader.lazy = False\n",
"2022-11-30 03:01:13,608 - INFO - allennlp.common.params - dataset_reader.cache_directory = None\n",
"2022-11-30 03:01:13,610 - INFO - allennlp.common.params - dataset_reader.max_instances = None\n",
"2022-11-30 03:01:13,612 - INFO - allennlp.common.params - dataset_reader.manual_distributed_sharding = False\n",
"2022-11-30 03:01:13,613 - INFO - allennlp.common.params - dataset_reader.manual_multi_process_sharding = False\n",
"2022-11-30 03:01:13,615 - INFO - allennlp.common.params - dataset_reader.max_span_width = 8\n",
"2022-11-30 03:01:13,617 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.type = pretrained_transformer_mismatched\n",
"2022-11-30 03:01:13,618 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.token_min_padding_length = 0\n",
"2022-11-30 03:01:13,620 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.model_name = bert-base-uncased\n",
"2022-11-30 03:01:13,621 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.namespace = tags\n",
"2022-11-30 03:01:13,623 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.max_length = 512\n",
"2022-11-30 03:01:13,625 - INFO - allennlp.common.params - dataset_reader.token_indexers.bert.tokenizer_kwargs = None\n",
"################################################################################\n",
"2022-11-30 03:01:13,627 - INFO - allennlp.common.params - train_data_path = /content/outputs/14lap/seed_4/temp_data/train.json\n",
"2022-11-30 03:01:13,630 - INFO - allennlp.common.params - vocabulary = <allennlp.common.lazy.Lazy object at 0x7f9692de6c90>\n",
"2022-11-30 03:01:13,631 - INFO - allennlp.common.params - datasets_for_vocab_creation = None\n",
"2022-11-30 03:01:13,633 - INFO - allennlp.common.params - validation_dataset_reader = None\n",
"2022-11-30 03:01:13,634 - INFO - allennlp.common.params - validation_data_path = /content/outputs/14lap/seed_4/temp_data/dev.json\n",
"2022-11-30 03:01:13,636 - INFO - allennlp.common.params - validation_data_loader = None\n",
"2022-11-30 03:01:13,637 - INFO - allennlp.common.params - test_data_path = /content/outputs/14lap/seed_4/temp_data/dev.json\n",
"2022-11-30 03:01:13,639 - INFO - allennlp.common.params - evaluate_on_test = False\n",
"2022-11-30 03:01:13,640 - INFO - allennlp.common.params - batch_weight_key = \n",
"2022-11-30 03:01:13,642 - INFO - allennlp.training.util - Reading training data from /content/outputs/14lap/seed_4/temp_data/train.json\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"reading instances: 906it [00:01, 803.93it/s]"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"2022-11-30 03:01:14,774 - INFO - allennlp.training.util - Reading validation data from /content/outputs/14lap/seed_4/temp_data/dev.json\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n",
"reading instances: 219it [00:00, 1307.35it/s]"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"2022-11-30 03:01:14,953 - INFO - allennlp.training.util - Reading test data from /content/outputs/14lap/seed_4/temp_data/dev.json\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n",
"reading instances: 219it [00:00, 520.69it/s]"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"2022-11-30 03:01:15,382 - INFO - allennlp.common.params - type = from_instances\n",
"2022-11-30 03:01:15,386 - INFO - allennlp.common.params - min_count = None\n",
"2022-11-30 03:01:15,389 - INFO - allennlp.common.params - max_vocab_size = None\n",
"2022-11-30 03:01:15,391 - INFO - allennlp.common.params - non_padded_namespaces = ('*tags', '*labels')\n",
"2022-11-30 03:01:15,394 - INFO - allennlp.common.params - pretrained_files = None\n",
"2022-11-30 03:01:15,397 - INFO - allennlp.common.params - only_include_pretrained_words = False\n",
"2022-11-30 03:01:15,401 - INFO - allennlp.common.params - tokens_to_add = None\n",
"2022-11-30 03:01:15,402 - INFO - allennlp.common.params - min_pretrained_embeddings = None\n",
"2022-11-30 03:01:15,403 - INFO - allennlp.common.params - padding_token = @@PADDING@@\n",
"2022-11-30 03:01:15,405 - INFO - allennlp.common.params - oov_token = @@UNKNOWN@@\n",
"2022-11-30 03:01:15,406 - INFO - allennlp.data.vocabulary - Fitting token dictionary from dataset.\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n",
"building vocab: 1344it [00:00, 14370.57it/s]"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"2022-11-30 03:01:15,504 - INFO - allennlp.common.params - model.type = span_model\n",
"2022-11-30 03:01:15,507 - INFO - allennlp.common.params - model.regularizer = None\n",
"2022-11-30 03:01:15,513 - INFO - allennlp.common.params - model.embedder.type = basic\n",
"2022-11-30 03:01:15,515 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.type = pretrained_transformer_mismatched\n",
"2022-11-30 03:01:15,517 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.model_name = bert-base-uncased\n",
"2022-11-30 03:01:15,519 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.max_length = 512\n",
"2022-11-30 03:01:15,520 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.train_parameters = True\n",
"2022-11-30 03:01:15,521 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.last_layer_only = True\n",
"2022-11-30 03:01:15,523 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.gradient_checkpointing = None\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"2022-11-30 03:01:15,525 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.tokenizer_kwargs = None\n",
"2022-11-30 03:01:15,526 - INFO - allennlp.common.params - model.embedder.token_embedders.bert.transformer_kwargs = None\n",
"2022-11-30 03:01:15,653 - INFO - allennlp.common.params - model.modules.relation.spans_per_word = 0.5\n",
"2022-11-30 03:01:15,654 - INFO - allennlp.common.params - model.modules.relation.use_distance_embeds = True\n",
"2022-11-30 03:01:15,657 - INFO - allennlp.common.params - model.modules.relation.use_pruning = True\n",
"2022-11-30 03:01:15,658 - INFO - allennlp.common.params - model.feature_size = 20\n",
"2022-11-30 03:01:15,659 - INFO - allennlp.common.params - model.max_span_width = 8\n",
"2022-11-30 03:01:15,660 - INFO - allennlp.common.params - model.target_task = relation\n",
"2022-11-30 03:01:15,663 - INFO - allennlp.common.params - model.initializer.regexes.0.1.type = xavier_normal\n",
"2022-11-30 03:01:15,665 - INFO - allennlp.common.params - model.initializer.regexes.0.1.gain = 1.0\n",
"2022-11-30 03:01:15,667 - INFO - allennlp.common.params - model.initializer.prevent_regexes = None\n",
"2022-11-30 03:01:15,669 - INFO - allennlp.common.params - model.module_initializer.regexes.0.1.type = xavier_normal\n",
"2022-11-30 03:01:15,670 - INFO - allennlp.common.params - model.module_initializer.regexes.0.1.gain = 1.0\n",
"2022-11-30 03:01:15,672 - INFO - allennlp.common.params - model.module_initializer.regexes.1.1.type = xavier_normal\n",
"2022-11-30 03:01:15,673 - INFO - allennlp.common.params - model.module_initializer.regexes.1.1.gain = 1.0\n",
"2022-11-30 03:01:15,674 - INFO - allennlp.common.params - model.module_initializer.prevent_regexes = None\n",
"2022-11-30 03:01:15,676 - INFO - allennlp.common.params - model.display_metrics = None\n",
"2022-11-30 03:01:15,677 - INFO - allennlp.common.params - model.span_extractor_type = endpoint\n",
"2022-11-30 03:01:15,678 - INFO - allennlp.common.params - model.use_span_width_embeds = True\n",
"{'span_model_unused_keys': dict_keys(['serialization_dir'])}\n",
"{'locals': ('span_extractor_type', 'endpoint')}\n",
"{'locals': ('use_span_width_embeds', True)}\n",
"2022-11-30 03:01:15,680 - INFO - allennlp.common.params - ner.regularizer = None\n",
"2022-11-30 03:01:15,681 - INFO - allennlp.common.params - ner.name = ner_labels\n",
"{'ner_loss_fn': CrossEntropyLoss()}\n",
"2022-11-30 03:01:15,687 - INFO - allennlp.common.params - relation.regularizer = None\n",
"2022-11-30 03:01:15,688 - INFO - allennlp.common.params - relation.serialization_dir = None\n",
"2022-11-30 03:01:15,689 - INFO - allennlp.common.params - relation.spans_per_word = 0.5\n",
"2022-11-30 03:01:15,690 - INFO - allennlp.common.params - relation.positive_label_weight = 1.0\n",
"2022-11-30 03:01:15,692 - INFO - allennlp.common.params - relation.use_distance_embeds = True\n",
"2022-11-30 03:01:15,693 - INFO - allennlp.common.params - relation.use_pruning = True\n",
"{'unused_keys': dict_keys([])}\n",
"{'locals': {'self': ProperRelationExtractor(), 'make_feedforward': <function SpanModel.__init__.<locals>.make_feedforward at 0x7f95ec5935f0>, 'span_emb_dim': 1556, 'feature_size': 20, 'spans_per_word': 0.5, 'positive_label_weight': 1.0, 'regularizer': None, 'use_distance_embeds': True, 'use_pruning': True, 'kwargs': {}, 'vocab': Vocabulary with namespaces: None__ner_labels, Size: 3 || None__relation_labels, Size: 3 || Non Padded Namespaces: {'*labels', '*tags'}, '__class__': <class 'span_model.models.relation_proper.ProperRelationExtractor'>}}\n",
"{'token_emb_dim': 768, 'span_emb_dim': 1556, 'relation_scorer_dim': 3240}\n",
"{'relation_loss_fn': CrossEntropyLoss()}\n",
"2022-11-30 03:01:15,722 - INFO - allennlp.nn.initializers - Initializing parameters\n",
"2022-11-30 03:01:15,727 - INFO - allennlp.nn.initializers - Initializing _ner_scorers.None__ner_labels.0._module._linear_layers.0.weight using .*weight initializer\n",
"2022-11-30 03:01:15,733 - INFO - allennlp.nn.initializers - Initializing _ner_scorers.None__ner_labels.0._module._linear_layers.1.weight using .*weight initializer\n",
"2022-11-30 03:01:15,743 - INFO - allennlp.nn.initializers - Initializing _ner_scorers.None__ner_labels.1._module.weight using .*weight initializer\n",
"2022-11-30 03:01:15,745 - WARNING - allennlp.nn.initializers - Did not use initialization regex that was passed: .*weight_matrix\n",
"2022-11-30 03:01:15,746 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code\n",
"2022-11-30 03:01:15,748 - INFO - allennlp.nn.initializers - _ner_scorers.None__ner_labels.0._module._linear_layers.0.bias\n",
"2022-11-30 03:01:15,749 - INFO - allennlp.nn.initializers - _ner_scorers.None__ner_labels.0._module._linear_layers.1.bias\n",
"2022-11-30 03:01:15,750 - INFO - allennlp.nn.initializers - _ner_scorers.None__ner_labels.1._module.bias\n",
"2022-11-30 03:01:15,752 - INFO - allennlp.nn.initializers - Initializing parameters\n",
"2022-11-30 03:01:15,753 - INFO - allennlp.nn.initializers - Initializing d_embedder.embedder.weight using .*weight initializer\n",
"2022-11-30 03:01:15,754 - INFO - allennlp.nn.initializers - Initializing _relation_feedforwards.None__relation_labels._linear_layers.0.weight using .*weight initializer\n",
"2022-11-30 03:01:15,781 - INFO - allennlp.nn.initializers - Initializing _relation_feedforwards.None__relation_labels._linear_layers.1.weight using .*weight initializer\n",
"2022-11-30 03:01:15,783 - INFO - allennlp.nn.initializers - Initializing _relation_scorers.None__relation_labels.weight using .*weight initializer\n",
"2022-11-30 03:01:15,787 - WARNING - allennlp.nn.initializers - Did not use initialization regex that was passed: .*weight_matrix\n",
"2022-11-30 03:01:15,789 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code\n",
"2022-11-30 03:01:15,791 - INFO - allennlp.nn.initializers - _relation_feedforwards.None__relation_labels._linear_layers.0.bias\n",
"2022-11-30 03:01:15,794 - INFO - allennlp.nn.initializers - _relation_feedforwards.None__relation_labels._linear_layers.1.bias\n",
"2022-11-30 03:01:15,797 - INFO - allennlp.nn.initializers - _relation_scorers.None__relation_labels.bias\n",
"2022-11-30 03:01:15,799 - INFO - allennlp.nn.initializers - Initializing parameters\n",
"2022-11-30 03:01:15,802 - INFO - allennlp.nn.initializers - Initializing _endpoint_span_extractor._span_width_embedding.weight using _span_width_embedding.weight initializer\n",
"2022-11-30 03:01:15,806 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code\n",
"2022-11-30 03:01:15,809 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.bias\n",
"2022-11-30 03:01:15,811 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.weight\n",
"2022-11-30 03:01:15,813 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.position_embeddings.weight\n",
"2022-11-30 03:01:15,815 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.token_type_embeddings.weight\n",
"2022-11-30 03:01:15,818 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.word_embeddings.weight\n",
"2022-11-30 03:01:15,820 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,822 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,823 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.bias\n",
"2022-11-30 03:01:15,824 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.weight\n",
"2022-11-30 03:01:15,829 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.bias\n",
"2022-11-30 03:01:15,831 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.weight\n",
"2022-11-30 03:01:15,832 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.bias\n",
"2022-11-30 03:01:15,833 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.weight\n",
"2022-11-30 03:01:15,834 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.bias\n",
"2022-11-30 03:01:15,837 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.weight\n",
"2022-11-30 03:01:15,838 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.bias\n",
"2022-11-30 03:01:15,839 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.weight\n",
"2022-11-30 03:01:15,841 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,844 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,845 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.bias\n",
"2022-11-30 03:01:15,847 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.weight\n",
"2022-11-30 03:01:15,924 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,926 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,928 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.dense.bias\n",
"2022-11-30 03:01:15,929 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.dense.weight\n",
"2022-11-30 03:01:15,930 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,931 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,933 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.dense.bias\n",
"2022-11-30 03:01:15,934 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.dense.weight\n",
"2022-11-30 03:01:15,935 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.key.bias\n",
"2022-11-30 03:01:15,936 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.key.weight\n",
"2022-11-30 03:01:15,937 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.query.bias\n",
"2022-11-30 03:01:15,938 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.query.weight\n",
"2022-11-30 03:01:15,940 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.value.bias\n",
"2022-11-30 03:01:15,941 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.value.weight\n",
"2022-11-30 03:01:15,942 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.intermediate.dense.bias\n",
"2022-11-30 03:01:15,943 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.intermediate.dense.weight\n",
"2022-11-30 03:01:15,945 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,946 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,947 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.dense.bias\n",
"2022-11-30 03:01:15,948 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.dense.weight\n",
"2022-11-30 03:01:15,950 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,951 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,952 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.dense.bias\n",
"2022-11-30 03:01:15,954 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.dense.weight\n",
"2022-11-30 03:01:15,955 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.key.bias\n",
"2022-11-30 03:01:15,956 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.key.weight\n",
"2022-11-30 03:01:15,958 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.query.bias\n",
"2022-11-30 03:01:15,959 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.query.weight\n",
"2022-11-30 03:01:15,960 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.value.bias\n",
"2022-11-30 03:01:15,961 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.value.weight\n",
"2022-11-30 03:01:15,963 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.intermediate.dense.bias\n",
"2022-11-30 03:01:15,964 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.intermediate.dense.weight\n",
"2022-11-30 03:01:15,965 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,967 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,968 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.dense.bias\n",
"2022-11-30 03:01:15,969 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.dense.weight\n",
"2022-11-30 03:01:15,970 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,972 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,973 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.dense.bias\n",
"2022-11-30 03:01:15,974 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.dense.weight\n",
"2022-11-30 03:01:15,976 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.key.bias\n",
"2022-11-30 03:01:15,977 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.key.weight\n",
"2022-11-30 03:01:15,978 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.query.bias\n",
"2022-11-30 03:01:15,980 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.query.weight\n",
"2022-11-30 03:01:15,981 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.value.bias\n",
"2022-11-30 03:01:15,982 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.value.weight\n",
"2022-11-30 03:01:15,983 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.intermediate.dense.bias\n",
"2022-11-30 03:01:15,985 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.intermediate.dense.weight\n",
"2022-11-30 03:01:15,986 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,987 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,989 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.dense.bias\n",
"2022-11-30 03:01:15,990 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.dense.weight\n",
"2022-11-30 03:01:15,991 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:15,993 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:15,994 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.dense.bias\n",
"2022-11-30 03:01:15,995 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.dense.weight\n",
"2022-11-30 03:01:15,996 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.key.bias\n",
"2022-11-30 03:01:15,998 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.key.weight\n",
"2022-11-30 03:01:15,999 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.query.bias\n",
"2022-11-30 03:01:16,000 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.query.weight\n",
"2022-11-30 03:01:16,002 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.value.bias\n",
"2022-11-30 03:01:16,003 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.value.weight\n",
"2022-11-30 03:01:16,004 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.intermediate.dense.bias\n",
"2022-11-30 03:01:16,006 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.intermediate.dense.weight\n",
"2022-11-30 03:01:16,007 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,008 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,010 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.dense.bias\n",
"2022-11-30 03:01:16,011 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.dense.weight\n",
"2022-11-30 03:01:16,012 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,014 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,015 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.dense.bias\n",
"2022-11-30 03:01:16,016 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.dense.weight\n",
"2022-11-30 03:01:16,018 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.key.bias\n",
"2022-11-30 03:01:16,019 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.key.weight\n",
"2022-11-30 03:01:16,020 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.query.bias\n",
"2022-11-30 03:01:16,021 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.query.weight\n",
"2022-11-30 03:01:16,022 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.value.bias\n",
"2022-11-30 03:01:16,023 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.value.weight\n",
"2022-11-30 03:01:16,024 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.intermediate.dense.bias\n",
"2022-11-30 03:01:16,025 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.intermediate.dense.weight\n",
"2022-11-30 03:01:16,026 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,028 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,029 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.dense.bias\n",
"2022-11-30 03:01:16,030 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.dense.weight\n",
"2022-11-30 03:01:16,031 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,033 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,034 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.dense.bias\n",
"2022-11-30 03:01:16,035 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.dense.weight\n",
"2022-11-30 03:01:16,037 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.key.bias\n",
"2022-11-30 03:01:16,038 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.key.weight\n",
"2022-11-30 03:01:16,039 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.query.bias\n",
"2022-11-30 03:01:16,041 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.query.weight\n",
"2022-11-30 03:01:16,042 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.value.bias\n",
"2022-11-30 03:01:16,043 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.value.weight\n",
"2022-11-30 03:01:16,045 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.intermediate.dense.bias\n",
"2022-11-30 03:01:16,046 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.intermediate.dense.weight\n",
"2022-11-30 03:01:16,047 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,049 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,050 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.dense.bias\n",
"2022-11-30 03:01:16,051 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.dense.weight\n",
"2022-11-30 03:01:16,052 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,054 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,055 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.dense.bias\n",
"2022-11-30 03:01:16,056 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.dense.weight\n",
"2022-11-30 03:01:16,058 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.key.bias\n",
"2022-11-30 03:01:16,059 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.key.weight\n",
"2022-11-30 03:01:16,060 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.query.bias\n",
"2022-11-30 03:01:16,061 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.query.weight\n",
"2022-11-30 03:01:16,063 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.value.bias\n",
"2022-11-30 03:01:16,064 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.value.weight\n",
"2022-11-30 03:01:16,065 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.intermediate.dense.bias\n",
"2022-11-30 03:01:16,067 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.intermediate.dense.weight\n",
"2022-11-30 03:01:16,068 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,069 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,070 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.dense.bias\n",
"2022-11-30 03:01:16,072 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.dense.weight\n",
"2022-11-30 03:01:16,073 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,074 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,076 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.dense.bias\n",
"2022-11-30 03:01:16,077 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.dense.weight\n",
"2022-11-30 03:01:16,078 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.key.bias\n",
"2022-11-30 03:01:16,080 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.key.weight\n",
"2022-11-30 03:01:16,081 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.query.bias\n",
"2022-11-30 03:01:16,082 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.query.weight\n",
"2022-11-30 03:01:16,083 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.value.bias\n",
"2022-11-30 03:01:16,085 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.value.weight\n",
"2022-11-30 03:01:16,086 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.intermediate.dense.bias\n",
"2022-11-30 03:01:16,087 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.intermediate.dense.weight\n",
"2022-11-30 03:01:16,089 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,090 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,091 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.dense.bias\n",
"2022-11-30 03:01:16,093 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.dense.weight\n",
"2022-11-30 03:01:16,094 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.pooler.dense.bias\n",
"2022-11-30 03:01:16,095 - INFO - allennlp.nn.initializers - _embedder.token_embedder_bert._matched_embedder.transformer_model.pooler.dense.weight\n",
"2022-11-30 03:01:16,097 - INFO - allennlp.nn.initializers - _ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.bias\n",
"2022-11-30 03:01:16,098 - INFO - allennlp.nn.initializers - _ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.weight\n",
"2022-11-30 03:01:16,099 - INFO - allennlp.nn.initializers - _ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.bias\n",
"2022-11-30 03:01:16,100 - INFO - allennlp.nn.initializers - _ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.weight\n",
"2022-11-30 03:01:16,102 - INFO - allennlp.nn.initializers - _ner._ner_scorers.None__ner_labels.1._module.bias\n",
"2022-11-30 03:01:16,103 - INFO - allennlp.nn.initializers - _ner._ner_scorers.None__ner_labels.1._module.weight\n",
"2022-11-30 03:01:16,104 - INFO - allennlp.nn.initializers - _relation._relation_feedforwards.None__relation_labels._linear_layers.0.bias\n",
"2022-11-30 03:01:16,105 - INFO - allennlp.nn.initializers - _relation._relation_feedforwards.None__relation_labels._linear_layers.0.weight\n",
"2022-11-30 03:01:16,107 - INFO - allennlp.nn.initializers - _relation._relation_feedforwards.None__relation_labels._linear_layers.1.bias\n",
"2022-11-30 03:01:16,108 - INFO - allennlp.nn.initializers - _relation._relation_feedforwards.None__relation_labels._linear_layers.1.weight\n",
"2022-11-30 03:01:16,109 - INFO - allennlp.nn.initializers - _relation._relation_scorers.None__relation_labels.bias\n",
"2022-11-30 03:01:16,111 - INFO - allennlp.nn.initializers - _relation._relation_scorers.None__relation_labels.weight\n",
"2022-11-30 03:01:16,112 - INFO - allennlp.nn.initializers - _relation.d_embedder.embedder.weight\n",
"2022-11-30 03:01:16,113 - INFO - filelock - Lock 140284684846864 acquired on outputs/14lap/seed_4/weights/vocabulary/.lock\n",
"2022-11-30 03:01:16,115 - INFO - filelock - Lock 140284684846864 released on outputs/14lap/seed_4/weights/vocabulary/.lock\n",
"2022-11-30 03:01:16,116 - INFO - allennlp.common.params - data_loader.type = pytorch_dataloader\n",
"2022-11-30 03:01:16,118 - INFO - allennlp.common.params - data_loader.batch_size = 1\n",
"2022-11-30 03:01:16,119 - INFO - allennlp.common.params - data_loader.shuffle = False\n",
"2022-11-30 03:01:16,120 - INFO - allennlp.common.params - data_loader.batch_sampler = None\n",
"2022-11-30 03:01:16,122 - INFO - allennlp.common.params - data_loader.num_workers = 0\n",
"2022-11-30 03:01:16,123 - INFO - allennlp.common.params - data_loader.pin_memory = False\n",
"2022-11-30 03:01:16,124 - INFO - allennlp.common.params - data_loader.drop_last = False\n",
"2022-11-30 03:01:16,125 - INFO - allennlp.common.params - data_loader.timeout = 0\n",
"2022-11-30 03:01:16,127 - INFO - allennlp.common.params - data_loader.worker_init_fn = None\n",
"2022-11-30 03:01:16,128 - INFO - allennlp.common.params - data_loader.multiprocessing_context = None\n",
"2022-11-30 03:01:16,129 - INFO - allennlp.common.params - data_loader.batches_per_epoch = None\n",
"2022-11-30 03:01:16,131 - INFO - allennlp.common.params - data_loader.sampler.type = random\n",
"2022-11-30 03:01:16,132 - INFO - allennlp.common.params - data_loader.sampler.replacement = False\n",
"2022-11-30 03:01:16,134 - INFO - allennlp.common.params - data_loader.sampler.num_samples = None\n",
"2022-11-30 03:01:16,175 - INFO - allennlp.common.params - trainer.type = gradient_descent\n",
"2022-11-30 03:01:16,177 - INFO - allennlp.common.params - trainer.patience = None\n",
"2022-11-30 03:01:16,178 - INFO - allennlp.common.params - trainer.validation_metric = +MEAN__relation_f1\n",
"2022-11-30 03:01:16,179 - INFO - allennlp.common.params - trainer.num_epochs = 10\n",
"2022-11-30 03:01:16,181 - INFO - allennlp.common.params - trainer.cuda_device = 0\n",
"2022-11-30 03:01:16,182 - INFO - allennlp.common.params - trainer.grad_norm = 5\n",
"2022-11-30 03:01:16,184 - INFO - allennlp.common.params - trainer.grad_clipping = None\n",
"2022-11-30 03:01:16,185 - INFO - allennlp.common.params - trainer.distributed = False\n",
"2022-11-30 03:01:16,186 - INFO - allennlp.common.params - trainer.world_size = 1\n",
"2022-11-30 03:01:16,188 - INFO - allennlp.common.params - trainer.num_gradient_accumulation_steps = 1\n",
"2022-11-30 03:01:16,189 - INFO - allennlp.common.params - trainer.use_amp = False\n",
"2022-11-30 03:01:16,191 - INFO - allennlp.common.params - trainer.no_grad = None\n",
"2022-11-30 03:01:16,192 - INFO - allennlp.common.params - trainer.momentum_scheduler = None\n",
"2022-11-30 03:01:16,194 - INFO - allennlp.common.params - trainer.tensorboard_writer = <allennlp.common.lazy.Lazy object at 0x7f9692e41190>\n",
"2022-11-30 03:01:16,195 - INFO - allennlp.common.params - trainer.moving_average = None\n",
"2022-11-30 03:01:16,196 - INFO - allennlp.common.params - trainer.batch_callbacks = None\n",
"2022-11-30 03:01:16,198 - INFO - allennlp.common.params - trainer.epoch_callbacks = None\n",
"2022-11-30 03:01:16,199 - INFO - allennlp.common.params - trainer.end_callbacks = None\n",
"2022-11-30 03:01:16,200 - INFO - allennlp.common.params - trainer.trainer_callbacks = None\n",
"2022-11-30 03:01:16,508 - INFO - allennlp.common.params - trainer.optimizer.type = adamw\n",
"2022-11-30 03:01:16,518 - INFO - allennlp.common.params - trainer.optimizer.lr = 0.001\n",
"2022-11-30 03:01:16,520 - INFO - allennlp.common.params - trainer.optimizer.betas = (0.9, 0.999)\n",
"2022-11-30 03:01:16,521 - INFO - allennlp.common.params - trainer.optimizer.eps = 1e-08\n",
"2022-11-30 03:01:16,522 - INFO - allennlp.common.params - trainer.optimizer.weight_decay = 0\n",
"2022-11-30 03:01:16,523 - INFO - allennlp.common.params - trainer.optimizer.amsgrad = False\n",
"2022-11-30 03:01:16,526 - INFO - allennlp.training.optimizers - Done constructing parameter groups.\n",
"2022-11-30 03:01:16,527 - INFO - allennlp.training.optimizers - Group 0: ['_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.pooler.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.LayerNorm.weight', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.dense.weight', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.pooler.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.query.weight', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.key.bias', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.value.bias', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.LayerNorm.weight', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.value.bias', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.word_embeddings.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.dense.weight', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.position_embeddings.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.dense.bias', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.token_type_embeddings.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.11.attention.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.query.bias', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.LayerNorm.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.value.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.output.LayerNorm.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.key.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.dense.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.query.weight', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.dense.weight', 
'_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.key.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.intermediate.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.query.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.dense.bias', '_embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.query.weight'], {'finetune': True, 'lr': 5e-05, 'weight_decay': 0.01}\n",
"2022-11-30 03:01:16,530 - INFO - allennlp.training.optimizers - Group 1: [], {'lr': 0.01}\n",
"2022-11-30 03:01:16,531 - INFO - allennlp.training.optimizers - Group 2: ['_ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.weight', '_relation._relation_feedforwards.None__relation_labels._linear_layers.1.bias', '_relation._relation_scorers.None__relation_labels.bias', '_endpoint_span_extractor._span_width_embedding.weight', '_relation._relation_feedforwards.None__relation_labels._linear_layers.1.weight', '_ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.weight', '_relation._relation_feedforwards.None__relation_labels._linear_layers.0.bias', '_relation._relation_feedforwards.None__relation_labels._linear_layers.0.weight', '_relation.d_embedder.embedder.weight', '_ner._ner_scorers.None__ner_labels.1._module.bias', '_ner._ner_scorers.None__ner_labels.0._module._linear_layers.0.bias', '_ner._ner_scorers.None__ner_labels.1._module.weight', '_ner._ner_scorers.None__ner_labels.0._module._linear_layers.1.bias', '_relation._relation_scorers.None__relation_labels.weight'], {}\n",
"2022-11-30 03:01:16,532 - WARNING - allennlp.training.optimizers - When constructing parameter groups, scalar_parameters does not match any parameter name\n",
"2022-11-30 03:01:16,534 - INFO - allennlp.training.optimizers - Number of trainable parameters: 110249737\n",
"2022-11-30 03:01:16,538 - INFO - allennlp.common.util - The following parameters are Frozen (without gradient):\n",
"2022-11-30 03:01:16,542 - INFO - allennlp.common.util - The following parameters are Tunable (with gradient):\n",
"2022-11-30 03:01:16,544 - INFO - allennlp.common.util - _endpoint_span_extractor._span_width_embedding.weight\n",
"2022-11-30 03:01:16,545 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.word_embeddings.weight\n",
"2022-11-30 03:01:16,547 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.position_embeddings.weight\n",
"2022-11-30 03:01:16,548 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.token_type_embeddings.weight\n",
"2022-11-30 03:01:16,549 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.weight\n",
"2022-11-30 03:01:16,550 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.embeddings.LayerNorm.bias\n",
"2022-11-30 03:01:16,552 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.weight\n",
"2022-11-30 03:01:16,553 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.query.bias\n",
"2022-11-30 03:01:16,554 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.weight\n",
"2022-11-30 03:01:16,556 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.key.bias\n",
"2022-11-30 03:01:16,557 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.weight\n",
"2022-11-30 03:01:16,558 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.self.value.bias\n",
"2022-11-30 03:01:16,559 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.weight\n",
"2022-11-30 03:01:16,561 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.dense.bias\n",
"2022-11-30 03:01:16,562 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,563 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,564 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.weight\n",
"2022-11-30 03:01:16,566 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.intermediate.dense.bias\n",
"2022-11-30 03:01:16,567 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.weight\n",
"2022-11-30 03:01:16,568 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.dense.bias\n",
"2022-11-30 03:01:16,570 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,571 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.0.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,572 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.query.weight\n",
"2022-11-30 03:01:16,573 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.query.bias\n",
"2022-11-30 03:01:16,575 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.key.weight\n",
"2022-11-30 03:01:16,576 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.key.bias\n",
"2022-11-30 03:01:16,577 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.value.weight\n",
"2022-11-30 03:01:16,578 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.self.value.bias\n",
"2022-11-30 03:01:16,580 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.dense.weight\n",
"2022-11-30 03:01:16,581 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.dense.bias\n",
"2022-11-30 03:01:16,582 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,583 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,585 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.intermediate.dense.weight\n",
"2022-11-30 03:01:16,586 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.intermediate.dense.bias\n",
"2022-11-30 03:01:16,587 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.dense.weight\n",
"2022-11-30 03:01:16,589 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.dense.bias\n",
"2022-11-30 03:01:16,590 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,591 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.1.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,592 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.query.weight\n",
"2022-11-30 03:01:16,594 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.query.bias\n",
"2022-11-30 03:01:16,595 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.key.weight\n",
"2022-11-30 03:01:16,596 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.key.bias\n",
"2022-11-30 03:01:16,598 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.value.weight\n",
"2022-11-30 03:01:16,599 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.self.value.bias\n",
"2022-11-30 03:01:16,600 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.dense.weight\n",
"2022-11-30 03:01:16,602 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.dense.bias\n",
"2022-11-30 03:01:16,603 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,604 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,605 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.intermediate.dense.weight\n",
"2022-11-30 03:01:16,607 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.intermediate.dense.bias\n",
"2022-11-30 03:01:16,608 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.dense.weight\n",
"2022-11-30 03:01:16,609 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.dense.bias\n",
"2022-11-30 03:01:16,611 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,612 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.2.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,613 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.query.weight\n",
"2022-11-30 03:01:16,614 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.query.bias\n",
"2022-11-30 03:01:16,616 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.key.weight\n",
"2022-11-30 03:01:16,617 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.key.bias\n",
"2022-11-30 03:01:16,618 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.value.weight\n",
"2022-11-30 03:01:16,620 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.self.value.bias\n",
"2022-11-30 03:01:16,621 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.dense.weight\n",
"2022-11-30 03:01:16,622 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.dense.bias\n",
"2022-11-30 03:01:16,623 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,625 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,626 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.intermediate.dense.weight\n",
"2022-11-30 03:01:16,627 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.intermediate.dense.bias\n",
"2022-11-30 03:01:16,628 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.dense.weight\n",
"2022-11-30 03:01:16,629 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.dense.bias\n",
"2022-11-30 03:01:16,631 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,632 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.3.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,633 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.query.weight\n",
"2022-11-30 03:01:16,634 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.query.bias\n",
"2022-11-30 03:01:16,635 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.key.weight\n",
"2022-11-30 03:01:16,637 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.key.bias\n",
"2022-11-30 03:01:16,638 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.value.weight\n",
"2022-11-30 03:01:16,639 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.self.value.bias\n",
"2022-11-30 03:01:16,640 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.dense.weight\n",
"2022-11-30 03:01:16,641 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.dense.bias\n",
"2022-11-30 03:01:16,642 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,643 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,644 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.intermediate.dense.weight\n",
"2022-11-30 03:01:16,645 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.intermediate.dense.bias\n",
"2022-11-30 03:01:16,647 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.dense.weight\n",
"2022-11-30 03:01:16,648 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.dense.bias\n",
"2022-11-30 03:01:16,649 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,650 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.4.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,651 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.query.weight\n",
"2022-11-30 03:01:16,653 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.query.bias\n",
"2022-11-30 03:01:16,654 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.key.weight\n",
"2022-11-30 03:01:16,655 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.key.bias\n",
"2022-11-30 03:01:16,656 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.value.weight\n",
"2022-11-30 03:01:16,658 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.self.value.bias\n",
"2022-11-30 03:01:16,659 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.dense.weight\n",
"2022-11-30 03:01:16,660 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.dense.bias\n",
"2022-11-30 03:01:16,661 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,661 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,663 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.intermediate.dense.weight\n",
"2022-11-30 03:01:16,664 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.intermediate.dense.bias\n",
"2022-11-30 03:01:16,665 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.dense.weight\n",
"2022-11-30 03:01:16,666 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.dense.bias\n",
"2022-11-30 03:01:16,667 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,668 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.5.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,669 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.query.weight\n",
"2022-11-30 03:01:16,670 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.query.bias\n",
"2022-11-30 03:01:16,671 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.key.weight\n",
"2022-11-30 03:01:16,674 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.key.bias\n",
"2022-11-30 03:01:16,675 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.value.weight\n",
"2022-11-30 03:01:16,676 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.self.value.bias\n",
"2022-11-30 03:01:16,677 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.dense.weight\n",
"2022-11-30 03:01:16,679 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.dense.bias\n",
"2022-11-30 03:01:16,680 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,681 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,682 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.intermediate.dense.weight\n",
"2022-11-30 03:01:16,683 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.intermediate.dense.bias\n",
"2022-11-30 03:01:16,685 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.dense.weight\n",
"2022-11-30 03:01:16,686 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.dense.bias\n",
"2022-11-30 03:01:16,687 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,688 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.6.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,689 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.query.weight\n",
"2022-11-30 03:01:16,690 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.query.bias\n",
"2022-11-30 03:01:16,691 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.key.weight\n",
"2022-11-30 03:01:16,692 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.key.bias\n",
"2022-11-30 03:01:16,694 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.value.weight\n",
"2022-11-30 03:01:16,695 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.self.value.bias\n",
"2022-11-30 03:01:16,696 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.dense.weight\n",
"2022-11-30 03:01:16,697 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.dense.bias\n",
"2022-11-30 03:01:16,698 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,700 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,701 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.intermediate.dense.weight\n",
"2022-11-30 03:01:16,702 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.intermediate.dense.bias\n",
"2022-11-30 03:01:16,703 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.dense.weight\n",
"2022-11-30 03:01:16,705 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.dense.bias\n",
"2022-11-30 03:01:16,706 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,707 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.7.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,708 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.query.weight\n",
"2022-11-30 03:01:16,710 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.query.bias\n",
"2022-11-30 03:01:16,711 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.key.weight\n",
"2022-11-30 03:01:16,712 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.key.bias\n",
"2022-11-30 03:01:16,713 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.value.weight\n",
"2022-11-30 03:01:16,714 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.self.value.bias\n",
"2022-11-30 03:01:16,716 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.dense.weight\n",
"2022-11-30 03:01:16,717 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.dense.bias\n",
"2022-11-30 03:01:16,718 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,720 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,721 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.intermediate.dense.weight\n",
"2022-11-30 03:01:16,722 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.intermediate.dense.bias\n",
"2022-11-30 03:01:16,723 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.dense.weight\n",
"2022-11-30 03:01:16,725 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.dense.bias\n",
"2022-11-30 03:01:16,726 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,727 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.8.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,728 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.query.weight\n",
"2022-11-30 03:01:16,730 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.query.bias\n",
"2022-11-30 03:01:16,731 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.key.weight\n",
"2022-11-30 03:01:16,732 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.key.bias\n",
"2022-11-30 03:01:16,733 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.value.weight\n",
"2022-11-30 03:01:16,735 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.self.value.bias\n",
"2022-11-30 03:01:16,736 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.dense.weight\n",
"2022-11-30 03:01:16,737 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.dense.bias\n",
"2022-11-30 03:01:16,738 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,740 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,741 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.intermediate.dense.weight\n",
"2022-11-30 03:01:16,742 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.intermediate.dense.bias\n",
"2022-11-30 03:01:16,744 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.dense.weight\n",
"2022-11-30 03:01:16,746 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.dense.bias\n",
"2022-11-30 03:01:16,747 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,748 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.9.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,749 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.query.weight\n",
"2022-11-30 03:01:16,751 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.query.bias\n",
"2022-11-30 03:01:16,752 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.key.weight\n",
"2022-11-30 03:01:16,753 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.key.bias\n",
"2022-11-30 03:01:16,754 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.value.weight\n",
"2022-11-30 03:01:16,756 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.self.value.bias\n",
"2022-11-30 03:01:16,757 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.dense.weight\n",
"2022-11-30 03:01:16,758 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.dense.bias\n",
"2022-11-30 03:01:16,760 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.LayerNorm.weight\n",
"2022-11-30 03:01:16,761 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.attention.output.LayerNorm.bias\n",
"2022-11-30 03:01:16,762 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.transformer_model.encoder.layer.10.intermediate.dense.weight\n",
"2022-11-30 03:01:16,763 - INFO - allennlp.common.util - _embedder.token_embedder_bert._matched_embedder.tran
SYMBOL INDEX (197 symbols across 10 files)
FILE: aste/analysis.py
function set_seed (line 22) | def set_seed(seed: int):
function test_load (line 28) | def test_load(
class Scorer (line 77) | class Scorer:
method run (line 80) | def run(self, path_pred: str, path_gold: str) -> dict:
method make_tuples (line 110) | def make_tuples(self, sent: Sentence) -> List[tuple]:
class SentimentTripletScorer (line 114) | class SentimentTripletScorer(Scorer):
method make_tuples (line 117) | def make_tuples(self, sent: Sentence) -> List[tuple]:
class TripletScorer (line 121) | class TripletScorer(Scorer):
method make_tuples (line 124) | def make_tuples(self, sent: Sentence) -> List[tuple]:
class OpinionScorer (line 128) | class OpinionScorer(Scorer):
method make_tuples (line 131) | def make_tuples(self, sent: Sentence) -> List[tuple]:
class TargetScorer (line 135) | class TargetScorer(Scorer):
method make_tuples (line 138) | def make_tuples(self, sent: Sentence) -> List[tuple]:
class OrigScorer (line 142) | class OrigScorer(Scorer):
method make_tuples (line 145) | def make_tuples(self, sent: Sentence) -> List[tuple]:
method run (line 148) | def run(self, path_pred: str, path_gold: str) -> dict:
function run_eval_domains (line 153) | def run_eval_domains(
function test_scorer (line 180) | def test_scorer(path_pred: str, path_gold: str):
FILE: aste/data_utils.py
class SplitEnum (line 22) | class SplitEnum(str, Enum):
class LabelEnum (line 28) | class LabelEnum(str, Enum):
method as_list (line 36) | def as_list(cls):
method i_to_label (line 40) | def i_to_label(cls, i: int):
method label_to_i (line 44) | def label_to_i(cls, label) -> int:
class SentimentTriple (line 48) | class SentimentTriple(BaseModel):
method make_dummy (line 56) | def make_dummy(cls):
method opinion (line 60) | def opinion(self) -> Tuple[int, int]:
method target (line 64) | def target(self) -> Tuple[int, int]:
method from_raw_triple (line 68) | def from_raw_triple(cls, x: RawTriple):
method to_raw_triple (line 88) | def to_raw_triple(self) -> RawTriple:
method as_text (line 98) | def as_text(self, tokens: List[str]) -> str:
class TripleHeuristic (line 104) | class TripleHeuristic(BaseModel):
method run (line 106) | def run(
class TagMaker (line 135) | class TagMaker(BaseModel):
method run (line 137) | def run(spans: List[Span], labels: List[LabelEnum], num_tokens: int) -...
class BioesTagMaker (line 141) | class BioesTagMaker(TagMaker):
method run (line 143) | def run(spans: List[Span], labels: List[LabelEnum], num_tokens: int) -...
class Sentence (line 158) | class Sentence(BaseModel):
method extract_spans (line 167) | def extract_spans(self) -> List[Tuple[int, int, LabelEnum]]:
method as_text (line 175) | def as_text(self) -> str:
method from_line_format (line 185) | def from_line_format(cls, text: str):
method to_line_format (line 204) | def to_line_format(self) -> str:
class Data (line 224) | class Data(BaseModel):
method load (line 233) | def load(self):
method load_from_full_path (line 243) | def load_from_full_path(cls, path: str):
method save_to_path (line 248) | def save_to_path(self, path: str):
method analyze_spans (line 261) | def analyze_spans(self):
method analyze_joined_spans (line 292) | def analyze_joined_spans(self):
method analyze_tag_counts (line 314) | def analyze_tag_counts(self):
method analyze_span_distance (line 327) | def analyze_span_distance(self):
method analyze_opinion_labels (line 337) | def analyze_opinion_labels(self):
method analyze_tag_score (line 355) | def analyze_tag_score(self):
method analyze_ner (line 366) | def analyze_ner(self):
method analyze_direction (line 384) | def analyze_direction(self):
method analyze (line 411) | def analyze(self):
function test_save_to_path (line 439) | def test_save_to_path(path: str = "aste/data/triplet_data/14lap/train.tx...
function merge_data (line 451) | def merge_data(items: List[Data]) -> Data:
class Result (line 459) | class Result(BaseModel):
class ResultAnalyzer (line 474) | class ResultAnalyzer(BaseModel):
method check_overlap (line 476) | def check_overlap(a_start: int, a_end: int, b_start: int, b_end: int) ...
method run_sentence (line 480) | def run_sentence(pred: Sentence, gold: Sentence):
method analyze_labels (line 496) | def analyze_labels(pred: List[Sentence], gold: List[Sentence]):
method analyze_spans (line 511) | def analyze_spans(pred: List[Sentence], gold: List[Sentence]):
method run (line 551) | def run(cls, pred: List[Sentence], gold: List[Sentence], print_limit=16):
function test_merge (line 586) | def test_merge(root="aste/data/triplet_data"):
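The `from_line_format` / `to_line_format` methods indexed above read and write the serialized triplet data. As a minimal sketch (the exact format lives in aste/data_utils.py; the `####` separator, token-index span lists, and polarity strings below are assumptions based on the common ASTE data layout, and the sentence is invented):

```python
import ast

# Hypothetical line in the assumed ASTE triplet format: the raw sentence and
# the triplet list are separated by "####"; each triplet is
# ([target token indices], [opinion token indices], polarity).
line = (
    "The screen is great but the battery drains fast ."
    "####[([1], [3], 'POS'), ([6], [8], 'NEG')]"
)

def parse_line(text: str):
    """Split a line into tokens and (target_span, opinion_span, label) triples."""
    sentence, _, raw = text.partition("####")
    tokens = sentence.split()
    triples = []
    for target, opinion, label in ast.literal_eval(raw):
        # Spans are stored as token-index lists; convert to inclusive
        # (start, end) pairs as Sentence.extract_spans appears to use.
        triples.append(((target[0], target[-1]), (opinion[0], opinion[-1]), label))
    return tokens, triples

tokens, triples = parse_line(line)
print(tokens[1], triples[0])  # screen ((1, 1), (3, 3), 'POS')
```

`ast.literal_eval` is used instead of `eval` so only literal Python structures are accepted from the data file.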
FILE: aste/utils.py
class Shell (line 13) | class Shell(BaseModel):
method format_kwargs (line 17) | def format_kwargs(cls, **kwargs) -> str:
method run_command (line 25) | def run_command(self, command: str) -> str:
method run (line 43) | def run(self, command: str, *args, **kwargs) -> str:
function hash_text (line 49) | def hash_text(x: str) -> str:
class Timer (line 53) | class Timer(BaseModel):
method __enter__ (line 57) | def __enter__(self):
method __exit__ (line 61) | def __exit__(self, exc_type, exc_val, exc_tb):
class PickleSaver (line 66) | class PickleSaver(BaseModel):
method dump (line 69) | def dump(self, obj):
method load (line 75) | def load(self):
class FlexiModel (line 81) | class FlexiModel(BaseModel):
class Config (line 82) | class Config:
function get_simple_stats (line 86) | def get_simple_stats(numbers: List[Union[int, float]]):
function count_joins (line 94) | def count_joins(spans: Set[Tuple[int, int]]) -> int:
function update_nested_dict (line 106) | def update_nested_dict(d: dict, k: str, v, i=0, sep="__"):
function test_update_nested_dict (line 120) | def test_update_nested_dict():
function clean_up_triplet_data (line 127) | def clean_up_triplet_data(path: str):
function clean_up_many (line 139) | def clean_up_many(pattern: str = "data/triplet_data/*/*.txt"):
function merge_data (line 145) | def merge_data(
function safe_divide (line 168) | def safe_divide(a: float, b: float) -> float:
FILE: aste/wrapper.py
class SpanModelDocument (line 20) | class SpanModelDocument(BaseModel):
method is_valid (line 27) | def is_valid(self) -> bool:
method from_sentence (line 31) | def from_sentence(cls, x: Sentence):
class SpanModelPrediction (line 48) | class SpanModelPrediction(SpanModelDocument):
method to_sentence (line 54) | def to_sentence(self) -> Sentence:
class SpanModelData (line 73) | class SpanModelData(BaseModel):
method read (line 79) | def read(cls, path: Path) -> List[SpanModelDocument]:
method load (line 88) | def load(self):
method dump (line 93) | def dump(self, path: Path, sep="\n"):
method from_data (line 103) | def from_data(cls, x: Data):
class SpanModel (line 109) | class SpanModel(BaseModel):
method save_temp_data (line 114) | def save_temp_data(self, path_in: str, name: str, is_test: bool = Fals...
method fit (line 130) | def fit(self, path_train: str, path_dev: str):
method predict (line 154) | def predict(self, path_in: str, path_out: str):
method score (line 195) | def score(cls, path_pred: str, path_gold: str) -> dict:
function run_score (line 226) | def run_score(path_pred: str, path_gold: str) -> dict:
function run_train (line 230) | def run_train(path_train: str, path_dev: str, save_dir: str, random_seed...
function run_train_many (line 239) | def run_train_many(save_dir_template: str, random_seeds: List[int], **kw...
function run_eval (line 245) | def run_eval(path_test: str, save_dir: str):
function run_eval_many (line 255) | def run_eval_many(save_dir_template: str, random_seeds: List[int], **kwa...
FILE: span_model/data/dataset_readers/document.py
function format_float (line 8) | def format_float(x):
class SpanCrossesSentencesError (line 12) | class SpanCrossesSentencesError(ValueError):
function get_sentence_of_span (line 16) | def get_sentence_of_span(span, sentence_starts, doc_tokens):
class Dataset (line 32) | class Dataset:
method __init__ (line 33) | def __init__(self, documents):
method __getitem__ (line 36) | def __getitem__(self, i):
method __len__ (line 39) | def __len__(self):
method __repr__ (line 42) | def __repr__(self):
method from_jsonl (line 46) | def from_jsonl(cls, fname):
method to_jsonl (line 55) | def to_jsonl(self, fname):
class Document (line 62) | class Document:
method __init__ (line 63) | def __init__(
method from_json (line 76) | def from_json(cls, js):
method _check_fields (line 112) | def _check_fields(js):
method to_json (line 128) | def to_json(self):
method split (line 141) | def split(self, max_tokens_per_doc):
method __repr__ (line 196) | def __repr__(self):
method __getitem__ (line 204) | def __getitem__(self, ix):
method __len__ (line 207) | def __len__(self):
method print_plaintext (line 210) | def print_plaintext(self):
method n_tokens (line 215) | def n_tokens(self):
class Sentence (line 219) | class Sentence:
method __init__ (line 220) | def __init__(self, entry, sentence_start, sentence_ix):
method to_json (line 266) | def to_json(self):
method __repr__ (line 284) | def __repr__(self):
method __len__ (line 295) | def __len__(self):
class Span (line 299) | class Span:
method __init__ (line 300) | def __init__(self, start, end, sentence, sentence_offsets=False):
method start_doc (line 310) | def start_doc(self):
method end_doc (line 314) | def end_doc(self):
method span_doc (line 318) | def span_doc(self):
method span_sent (line 322) | def span_sent(self):
method text (line 326) | def text(self):
method __repr__ (line 329) | def __repr__(self):
method __eq__ (line 332) | def __eq__(self, other):
method __hash__ (line 339) | def __hash__(self):
class Token (line 344) | class Token:
method __init__ (line 345) | def __init__(self, ix, sentence, sentence_offsets=False):
method ix_doc (line 350) | def ix_doc(self):
method text (line 354) | def text(self):
method __repr__ (line 357) | def __repr__(self):
class NER (line 361) | class NER:
method __init__ (line 362) | def __init__(self, ner, sentence, sentence_offsets=False):
method __repr__ (line 366) | def __repr__(self):
method __eq__ (line 369) | def __eq__(self, other):
method to_json (line 372) | def to_json(self):
class PredictedNER (line 376) | class PredictedNER(NER):
method __init__ (line 377) | def __init__(self, ner, sentence, sentence_offsets=False):
method __repr__ (line 383) | def __repr__(self):
method to_json (line 386) | def to_json(self):
class Relation (line 393) | class Relation:
method __init__ (line 394) | def __init__(self, relation, sentence, sentence_offsets=False):
method __repr__ (line 403) | def __repr__(self):
method __eq__ (line 406) | def __eq__(self, other):
method to_json (line 409) | def to_json(self):
class PredictedRelation (line 413) | class PredictedRelation(Relation):
method __init__ (line 414) | def __init__(self, relation, sentence, sentence_offsets=False):
method __repr__ (line 420) | def __repr__(self):
method to_json (line 423) | def to_json(self):
FILE: span_model/data/dataset_readers/span_model.py
class SpanModelDataException (line 28) | class SpanModelDataException(Exception):
class SpanModelReader (line 33) | class SpanModelReader(DatasetReader):
method __init__ (line 39) | def __init__(
method _read (line 55) | def _read(self, file_path: str):
method _too_long (line 69) | def _too_long(self, span):
method _process_ner (line 72) | def _process_ner(self, span_tuples, sent):
method _process_relations (line 86) | def _process_relations(self, span_tuples, sent):
method _process_sentence (line 106) | def _process_sentence(self, sent: Sentence, dataset: str):
method _process_sentence_fields (line 164) | def _process_sentence_fields(self, doc: Document):
method text_to_instance (line 188) | def text_to_instance(self, doc_text: Dict[str, Any]):
method _instances_from_cache_file (line 209) | def _instances_from_cache_file(self, cache_filename):
method _instances_to_cache_file (line 215) | def _instances_to_cache_file(self, cache_filename, instances):
method _normalize_word (line 220) | def _normalize_word(word):
FILE: span_model/predictors/span_model.py
class SpanModelPredictor (line 18) | class SpanModelPredictor(Predictor):
method __init__ (line 27) | def __init__(self, model: Model, dataset_reader: DatasetReader) -> None:
method predict (line 30) | def predict(self, document):
method predict_tokenized (line 33) | def predict_tokenized(self, tokenized_document: List[str]) -> JsonDict:
method dump_line (line 38) | def dump_line(self, outputs):
method predict_instance (line 44) | def predict_instance(self, instance):
FILE: span_model/training/f1.py
function safe_div (line 6) | def safe_div(num, denom):
function compute_f1 (line 13) | def compute_f1(predicted, gold, matched):
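The two utilities indexed above have the standard count-based F1 shape. A minimal sketch: the division guard follows the `safe_div` preview shown later in this file, while the precision/recall formulas are the conventional triplet-matching pattern and are an assumption, not a copy of the repository's exact code.

```python
def safe_div(num, denom):
    # Guard against ZeroDivisionError when there are no predicted
    # or no gold items (mirrors the safe_div preview in f1.py).
    return num / denom if denom > 0 else 0


def compute_f1(predicted, gold, matched):
    # predicted: number of predicted items
    # gold:      number of gold items
    # matched:   number of predictions that exactly match a gold item
    precision = safe_div(matched, predicted)
    recall = safe_div(matched, gold)
    f1 = safe_div(2 * precision * recall, precision + recall)
    return precision, recall, f1


# e.g. compute_f1(predicted=8, gold=10, matched=6)
# -> precision 0.75, recall 0.6, f1 = 0.9 / 1.35
```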
FILE: span_model/training/ner_metrics.py
class NERMetrics (line 14) | class NERMetrics(Metric):
method __init__ (line 19) | def __init__(self, number_of_classes: int, none_label: int = 0):
method __call__ (line 25) | def __call__(
method get_metric (line 51) | def get_metric(self, reset=False):
method reset (line 72) | def reset(self):
FILE: span_model/training/relation_metrics.py
class RelationMetrics (line 8) | class RelationMetrics(Metric):
method __init__ (line 13) | def __init__(self):
method __call__ (line 20) | def __call__(self, predicted_relation_list, metadata_list):
method get_metric (line 33) | def get_metric(self, reset=False):
method reset (line 45) | def reset(self):
class SpanPairMetrics (line 51) | class SpanPairMetrics(RelationMetrics):
method __call__ (line 53) | def __call__(self, predicted_relation_list, metadata_list):
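The `sample_data.txt` entries in the directory listing pair each sentence with a trailing annotation of `(aspect_indices, opinion_indices, polarity)` triplets, separated by `####` markers. A hedged sketch of splitting one such line, with a hypothetical input; the actual parsing in `aste/data_utils.py` may handle the separators differently.

```python
import ast


def parse_aste_line(line: str):
    # The sample data places the sentence before the first "####"
    # and a Python-literal list of triplets after the last one.
    parts = line.split("####")
    sentence = parts[0].strip()
    # Each triplet is (aspect_token_ids, opinion_token_ids, polarity).
    triplets = ast.literal_eval(parts[-1].strip())
    return sentence, triplets


# Hypothetical line in the sample_data.txt style:
line = "It has good battery life .#### #### ####[([3, 4], [2], 'POS')]"
sentence, triplets = parse_aste_line(line)
# sentence -> "It has good battery life ."
# triplets -> [([3, 4], [2], 'POS')]
```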
About this extraction
This file contains the full source code of the chiayewken/Span-ASTE GitHub repository, extracted as plain text: 22 files (303.8 KB, approximately 89.9k tokens) plus a symbol index of 197 extracted functions, classes, methods, constants, and types. Extracted by GitExtract, built by Nikandr Surkov.