Repository: abhishekkrthakur/long-text-token-classification
Branch: main
Commit: 8f636ea23b7e
Files: 5
Total size: 28.0 KB
Directory structure:
gitextract_8v4yrbgd/
├── .gitignore
├── LICENSE
├── README.md
├── train.py
└── utils.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
.vscode/
# local stuff
input/
*.csv
*.bin
*.pkl
*.h5
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2022 Abhishek Thakur
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Long text token classification using Longformer

The data comes from: https://www.kaggle.com/c/feedback-prize-2021/

To train the model for 5 folds, you can run:

```bash
python train.py --fold 0 --model allenai/longformer-large-4096 --lr 1e-5 --epochs 10 --max_len 1536 --batch_size 4 --valid_batch_size 4
python train.py --fold 1 --model allenai/longformer-large-4096 --lr 1e-5 --epochs 10 --max_len 1536 --batch_size 4 --valid_batch_size 4
python train.py --fold 2 --model allenai/longformer-large-4096 --lr 1e-5 --epochs 10 --max_len 1536 --batch_size 4 --valid_batch_size 4
python train.py --fold 3 --model allenai/longformer-large-4096 --lr 1e-5 --epochs 10 --max_len 1536 --batch_size 4 --valid_batch_size 4
python train.py --fold 4 --model allenai/longformer-large-4096 --lr 1e-5 --epochs 10 --max_len 1536 --batch_size 4 --valid_batch_size 4
```

Note that you need a `kfold` column in the training data (`train_folds.csv`).
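
The repository does not ship a fold-creation script. A minimal sketch of one way to build `train_folds.csv`, assuming the competition's `train.csv` sits in `input/` and folds are assigned per essay `id` (paths are illustrative):

```python
import pandas as pd
from sklearn.model_selection import KFold

df = pd.read_csv("input/train.csv")
ids = df["id"].unique()

# assign each essay id (not each CSV row) to one of 5 folds
df["kfold"] = -1
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (_, valid_idx) in enumerate(kf.split(ids)):
    df.loc[df["id"].isin(ids[valid_idx]), "kfold"] = fold

df.to_csv("input/train_folds.csv", index=False)
```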
================================================
FILE: train.py
================================================
import argparse
import os
import random
import warnings
import numpy as np
import pandas as pd
import tez
import torch
import torch.nn as nn
from sklearn import metrics
from torch.nn import functional as F
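# NOTE: transformers.AdamW has long been deprecated and is removed in recent
# transformers releases; with newer versions, use torch.optim.AdamW instead.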
from transformers import AdamW, AutoConfig, AutoModel, AutoTokenizer, get_cosine_schedule_with_warmup
from utils import EarlyStopping, prepare_training_data, target_id_map
warnings.filterwarnings("ignore")
def seed_everything(seed: int):
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False  # benchmark autotuning can select non-deterministic kernels; keep it off for reproducibility
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--fold", type=int, required=True)
parser.add_argument("--model", type=str, required=True)
parser.add_argument("--lr", type=float, required=True)
parser.add_argument("--output", type=str, default="../model", required=False)
parser.add_argument("--input", type=str, default="../input", required=False)
parser.add_argument("--max_len", type=int, default=1024, required=False)
parser.add_argument("--batch_size", type=int, default=8, required=False)
parser.add_argument("--valid_batch_size", type=int, default=8, required=False)
parser.add_argument("--epochs", type=int, default=20, required=False)
parser.add_argument("--accumulation_steps", type=int, default=1, required=False)
return parser.parse_args()
class FeedbackDataset:
def __init__(self, samples, max_len, tokenizer):
self.samples = samples
self.max_len = max_len
self.tokenizer = tokenizer
self.length = len(samples)
def __len__(self):
return self.length
def __getitem__(self, idx):
input_ids = self.samples[idx]["input_ids"]
input_labels = self.samples[idx]["input_labels"]
input_labels = [target_id_map[x] for x in input_labels]
other_label_id = target_id_map["O"]
padding_label_id = target_id_map["PAD"]
# print(input_ids)
# print(input_labels)
# add start token id to the input_ids
input_ids = [self.tokenizer.cls_token_id] + input_ids
input_labels = [other_label_id] + input_labels
if len(input_ids) > self.max_len - 1:
input_ids = input_ids[: self.max_len - 1]
input_labels = input_labels[: self.max_len - 1]
# add end token id to the input_ids
input_ids = input_ids + [self.tokenizer.sep_token_id]
input_labels = input_labels + [other_label_id]
attention_mask = [1] * len(input_ids)
padding_length = self.max_len - len(input_ids)
if padding_length > 0:
if self.tokenizer.padding_side == "right":
input_ids = input_ids + [self.tokenizer.pad_token_id] * padding_length
input_labels = input_labels + [padding_label_id] * padding_length
attention_mask = attention_mask + [0] * padding_length
else:
input_ids = [self.tokenizer.pad_token_id] * padding_length + input_ids
input_labels = [padding_label_id] * padding_length + input_labels
attention_mask = [0] * padding_length + attention_mask
return {
"ids": torch.tensor(input_ids, dtype=torch.long),
"mask": torch.tensor(attention_mask, dtype=torch.long),
"targets": torch.tensor(input_labels, dtype=torch.long),
}
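# Every item is padded to a fixed max_len, so no custom collate function is
# needed at train time; tez's Model.fit builds the DataLoader internally
# (hypothetically, torch.utils.data.DataLoader(train_dataset, batch_size=4)
# would batch these items just as well).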
class FeedbackModel(tez.Model):
def __init__(self, model_name, num_train_steps, learning_rate, num_labels, steps_per_epoch):
super().__init__()
self.learning_rate = learning_rate
self.model_name = model_name
self.num_train_steps = num_train_steps
self.num_labels = num_labels
self.steps_per_epoch = steps_per_epoch
self.step_scheduler_after = "batch"
hidden_dropout_prob: float = 0.1
layer_norm_eps: float = 1e-7
config = AutoConfig.from_pretrained(model_name)
config.update(
{
"output_hidden_states": True,
"hidden_dropout_prob": hidden_dropout_prob,
"layer_norm_eps": layer_norm_eps,
"add_pooling_layer": False,
"num_labels": self.num_labels,
}
)
self.transformer = AutoModel.from_pretrained(model_name, config=config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
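        # Multi-sample dropout: one shared linear head is applied under five
        # different dropout rates and the resulting logits/losses are averaged,
        # a common regularization trick for transformer fine-tuning.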
self.dropout1 = nn.Dropout(0.1)
self.dropout2 = nn.Dropout(0.2)
self.dropout3 = nn.Dropout(0.3)
self.dropout4 = nn.Dropout(0.4)
self.dropout5 = nn.Dropout(0.5)
self.output = nn.Linear(config.hidden_size, self.num_labels)
def fetch_optimizer(self):
param_optimizer = list(self.named_parameters())
no_decay = ["bias", "LayerNorm.bias"]
optimizer_parameters = [
{
"params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
"weight_decay": 0.01,
},
{
"params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
opt = AdamW(optimizer_parameters, lr=self.learning_rate)
return opt
def fetch_scheduler(self):
sch = get_cosine_schedule_with_warmup(
self.optimizer,
num_warmup_steps=int(0.1 * self.num_train_steps),
num_training_steps=self.num_train_steps,
num_cycles=1,
last_epoch=-1,
)
return sch
    def loss(self, outputs, targets, attention_mask):
        # Compute cross-entropy only over positions with attention_mask == 1;
        # PAD targets (-100) would also be skipped via the loss's default ignore_index.
        loss_fct = nn.CrossEntropyLoss()
        active_loss = attention_mask.view(-1) == 1
        active_logits = outputs.view(-1, self.num_labels)[active_loss]
        true_labels = targets.view(-1)[active_loss].to(torch.long)
        loss = loss_fct(active_logits, true_labels)
        return loss
def monitor_metrics(self, outputs, targets, attention_mask):
active_loss = (attention_mask.view(-1) == 1).cpu().numpy()
active_logits = outputs.view(-1, self.num_labels)
true_labels = targets.view(-1).cpu().numpy()
outputs = active_logits.argmax(dim=-1).cpu().numpy()
idxs = np.where(active_loss == 1)[0]
f1_score = metrics.f1_score(true_labels[idxs], outputs[idxs], average="macro")
return {"f1": f1_score}
    def forward(self, ids, mask, token_type_ids=None, targets=None):
        # truth-testing a multi-element tensor raises an error, so compare against None explicitly
        if token_type_ids is not None:
transformer_out = self.transformer(ids, mask, token_type_ids)
else:
transformer_out = self.transformer(ids, mask)
sequence_output = transformer_out.last_hidden_state
sequence_output = self.dropout(sequence_output)
logits1 = self.output(self.dropout1(sequence_output))
logits2 = self.output(self.dropout2(sequence_output))
logits3 = self.output(self.dropout3(sequence_output))
logits4 = self.output(self.dropout4(sequence_output))
logits5 = self.output(self.dropout5(sequence_output))
logits = (logits1 + logits2 + logits3 + logits4 + logits5) / 5
logits = torch.softmax(logits, dim=-1)
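        # the averaged logits become per-token probabilities; EarlyStopping
        # later thresholds the mean probability of each predicted phrase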
loss = 0
if targets is not None:
loss1 = self.loss(logits1, targets, attention_mask=mask)
loss2 = self.loss(logits2, targets, attention_mask=mask)
loss3 = self.loss(logits3, targets, attention_mask=mask)
loss4 = self.loss(logits4, targets, attention_mask=mask)
loss5 = self.loss(logits5, targets, attention_mask=mask)
loss = (loss1 + loss2 + loss3 + loss4 + loss5) / 5
f1_1 = self.monitor_metrics(logits1, targets, attention_mask=mask)["f1"]
f1_2 = self.monitor_metrics(logits2, targets, attention_mask=mask)["f1"]
f1_3 = self.monitor_metrics(logits3, targets, attention_mask=mask)["f1"]
f1_4 = self.monitor_metrics(logits4, targets, attention_mask=mask)["f1"]
f1_5 = self.monitor_metrics(logits5, targets, attention_mask=mask)["f1"]
f1 = (f1_1 + f1_2 + f1_3 + f1_4 + f1_5) / 5
metric = {"f1": f1}
return logits, loss, metric
return logits, loss, {}
if __name__ == "__main__":
NUM_JOBS = 12
args = parse_args()
seed_everything(42)
os.makedirs(args.output, exist_ok=True)
df = pd.read_csv(os.path.join(args.input, "train_folds.csv"))
train_df = df[df["kfold"] != args.fold].reset_index(drop=True)
valid_df = df[df["kfold"] == args.fold].reset_index(drop=True)
tokenizer = AutoTokenizer.from_pretrained(args.model)
training_samples = prepare_training_data(train_df, tokenizer, args, num_jobs=NUM_JOBS)
valid_samples = prepare_training_data(valid_df, tokenizer, args, num_jobs=NUM_JOBS)
train_dataset = FeedbackDataset(training_samples, args.max_len, tokenizer)
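    # total optimizer steps over the whole run: per-epoch steps (after gradient
    # accumulation) times epochs; this drives the warmup/cosine schedule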
num_train_steps = int(len(train_dataset) / args.batch_size / args.accumulation_steps * args.epochs)
print(num_train_steps)
model = FeedbackModel(
model_name=args.model,
num_train_steps=num_train_steps,
learning_rate=args.lr,
num_labels=len(target_id_map) - 1,
steps_per_epoch=len(train_dataset) / args.batch_size,
)
es = EarlyStopping(
model_path=os.path.join(args.output, f"model_{args.fold}.bin"),
valid_df=valid_df,
valid_samples=valid_samples,
batch_size=args.valid_batch_size,
patience=5,
mode="max",
delta=0.001,
save_weights_only=True,
tokenizer=tokenizer,
)
model.fit(
train_dataset,
train_bs=args.batch_size,
device="cuda",
epochs=args.epochs,
callbacks=[es],
fp16=True,
accumulation_steps=args.accumulation_steps,
)
================================================
FILE: utils.py
================================================
import os
import numpy as np
import pandas as pd
import torch
from joblib import Parallel, delayed
from tez import enums
from tez.callbacks import Callback
from tqdm import tqdm
target_id_map = {
"B-Lead": 0,
"I-Lead": 1,
"B-Position": 2,
"I-Position": 3,
"B-Evidence": 4,
"I-Evidence": 5,
"B-Claim": 6,
"I-Claim": 7,
"B-Concluding Statement": 8,
"I-Concluding Statement": 9,
"B-Counterclaim": 10,
"I-Counterclaim": 11,
"B-Rebuttal": 12,
"I-Rebuttal": 13,
"O": 14,
"PAD": -100,
}
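# BIO tags over the seven discourse types, plus "O" for background tokens;
# "PAD" maps to -100, which nn.CrossEntropyLoss ignores by default (ignore_index=-100).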
id_target_map = {v: k for k, v in target_id_map.items()}
def _prepare_training_data_helper(args, tokenizer, df, train_ids):
training_samples = []
for idx in tqdm(train_ids):
filename = os.path.join(args.input, "train", idx + ".txt")
with open(filename, "r") as f:
text = f.read()
encoded_text = tokenizer.encode_plus(
text,
add_special_tokens=False,
return_offsets_mapping=True,
)
input_ids = encoded_text["input_ids"]
        offset_mapping = encoded_text["offset_mapping"]
        # offset_mapping holds each token's (char_start, char_end) in the raw
        # text, used below to project character-level discourse spans onto tokens.
        # Every token starts as "O"; discourse tokens are overwritten below.
        input_labels = ["O"] * len(input_ids)
sample = {
"id": idx,
"input_ids": input_ids,
"text": text,
"offset_mapping": offset_mapping,
}
temp_df = df[df["id"] == idx]
for _, row in temp_df.iterrows():
text_labels = [0] * len(text)
discourse_start = int(row["discourse_start"])
discourse_end = int(row["discourse_end"])
prediction_label = row["discourse_type"]
text_labels[discourse_start:discourse_end] = [1] * (discourse_end - discourse_start)
            target_idx = []
            for map_idx, (offset1, offset2) in enumerate(encoded_text["offset_mapping"]):
                if sum(text_labels[offset1:offset2]) > 0:
                    if len(text[offset1:offset2].split()) > 0:
                        target_idx.append(map_idx)
            if len(target_idx) == 0:
                # no token overlaps this span (possible with noisy character
                # offsets in the CSV); skip the annotation instead of crashing
                continue
            targets_start = target_idx[0]
targets_end = target_idx[-1]
pred_start = "B-" + prediction_label
pred_end = "I-" + prediction_label
input_labels[targets_start] = pred_start
input_labels[targets_start + 1 : targets_end + 1] = [pred_end] * (targets_end - targets_start)
sample["input_ids"] = input_ids
sample["input_labels"] = input_labels
training_samples.append(sample)
return training_samples
def prepare_training_data(df, tokenizer, args, num_jobs):
training_samples = []
train_ids = df["id"].unique()
train_ids_splits = np.array_split(train_ids, num_jobs)
results = Parallel(n_jobs=num_jobs, backend="multiprocessing")(
delayed(_prepare_training_data_helper)(args, tokenizer, df, idx) for idx in train_ids_splits
)
for result in results:
training_samples.extend(result)
return training_samples
def calc_overlap(row):
"""
Calculates the overlap between prediction and
ground truth and overlap percentages used for determining
true positives.
"""
set_pred = set(row.predictionstring_pred.split(" "))
set_gt = set(row.predictionstring_gt.split(" "))
# Length of each and intersection
len_gt = len(set_gt)
len_pred = len(set_pred)
inter = len(set_gt.intersection(set_pred))
overlap_1 = inter / len_gt
overlap_2 = inter / len_pred
return [overlap_1, overlap_2]
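# Worked example: pred word indices {"3","4","5","6"} vs gt {"4","5","6","7"}
# give inter = 3, so both overlaps are 3/4 = 0.75 and the pair can count as a match.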
def score_feedback_comp_micro(pred_df, gt_df):
"""
A function that scores for the kaggle
Student Writing Competition
Uses the steps in the evaluation page here:
https://www.kaggle.com/c/feedback-prize-2021/overview/evaluation
This code is from Rob Mulla's Kaggle kernel.
"""
gt_df = gt_df[["id", "discourse_type", "predictionstring"]].reset_index(drop=True).copy()
pred_df = pred_df[["id", "class", "predictionstring"]].reset_index(drop=True).copy()
pred_df["pred_id"] = pred_df.index
gt_df["gt_id"] = gt_df.index
# Step 1. all ground truths and predictions for a given class are compared.
joined = pred_df.merge(
gt_df,
left_on=["id", "class"],
right_on=["id", "discourse_type"],
how="outer",
suffixes=("_pred", "_gt"),
)
joined["predictionstring_gt"] = joined["predictionstring_gt"].fillna(" ")
joined["predictionstring_pred"] = joined["predictionstring_pred"].fillna(" ")
joined["overlaps"] = joined.apply(calc_overlap, axis=1)
# 2. If the overlap between the ground truth and prediction is >= 0.5,
# and the overlap between the prediction and the ground truth >= 0.5,
# the prediction is a match and considered a true positive.
# If multiple matches exist, the match with the highest pair of overlaps is taken.
joined["overlap1"] = joined["overlaps"].apply(lambda x: eval(str(x))[0])
joined["overlap2"] = joined["overlaps"].apply(lambda x: eval(str(x))[1])
joined["potential_TP"] = (joined["overlap1"] >= 0.5) & (joined["overlap2"] >= 0.5)
joined["max_overlap"] = joined[["overlap1", "overlap2"]].max(axis=1)
tp_pred_ids = (
joined.query("potential_TP")
.sort_values("max_overlap", ascending=False)
.groupby(["id", "predictionstring_gt"])
.first()["pred_id"]
.values
)
# 3. Any unmatched ground truths are false negatives
# and any unmatched predictions are false positives.
fp_pred_ids = [p for p in joined["pred_id"].unique() if p not in tp_pred_ids]
matched_gt_ids = joined.query("potential_TP")["gt_id"].unique()
unmatched_gt_ids = [c for c in joined["gt_id"].unique() if c not in matched_gt_ids]
# Get numbers of each type
TP = len(tp_pred_ids)
FP = len(fp_pred_ids)
FN = len(unmatched_gt_ids)
# calc microf1
my_f1_score = TP / (TP + 0.5 * (FP + FN))
return my_f1_score
def score_feedback_comp(pred_df, gt_df, return_class_scores=False):
class_scores = {}
pred_df = pred_df[["id", "class", "predictionstring"]].reset_index(drop=True).copy()
for discourse_type, gt_subset in gt_df.groupby("discourse_type"):
pred_subset = pred_df.loc[pred_df["class"] == discourse_type].reset_index(drop=True).copy()
class_score = score_feedback_comp_micro(pred_subset, gt_subset)
class_scores[discourse_type] = class_score
f1 = np.mean([v for v in class_scores.values()])
if return_class_scores:
return f1, class_scores
return f1
class FeedbackDatasetValid:
def __init__(self, samples, max_len, tokenizer):
self.samples = samples
self.max_len = max_len
self.tokenizer = tokenizer
self.length = len(samples)
def __len__(self):
return self.length
def __getitem__(self, idx):
input_ids = self.samples[idx]["input_ids"]
input_ids = [self.tokenizer.cls_token_id] + input_ids
if len(input_ids) > self.max_len - 1:
input_ids = input_ids[: self.max_len - 1]
# add end token id to the input_ids
input_ids = input_ids + [self.tokenizer.sep_token_id]
attention_mask = [1] * len(input_ids)
return {
"ids": input_ids,
"mask": attention_mask,
}
class Collate:
def __init__(self, tokenizer):
self.tokenizer = tokenizer
def __call__(self, batch):
output = dict()
output["ids"] = [sample["ids"] for sample in batch]
output["mask"] = [sample["mask"] for sample in batch]
# calculate max token length of this batch
batch_max = max([len(ids) for ids in output["ids"]])
# add padding
if self.tokenizer.padding_side == "right":
output["ids"] = [s + (batch_max - len(s)) * [self.tokenizer.pad_token_id] for s in output["ids"]]
output["mask"] = [s + (batch_max - len(s)) * [0] for s in output["mask"]]
else:
output["ids"] = [(batch_max - len(s)) * [self.tokenizer.pad_token_id] + s for s in output["ids"]]
output["mask"] = [(batch_max - len(s)) * [0] + s for s in output["mask"]]
# convert to tensors
output["ids"] = torch.tensor(output["ids"], dtype=torch.long)
output["mask"] = torch.tensor(output["mask"], dtype=torch.long)
return output
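# Collate pads each batch to its own longest sequence; a hypothetical usage:
# DataLoader(valid_dataset, batch_size=8, collate_fn=Collate(tokenizer)).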
class EarlyStopping(Callback):
def __init__(
self,
model_path,
valid_df,
valid_samples,
batch_size,
tokenizer,
patience=5,
mode="max",
delta=0.001,
save_weights_only=True,
):
self.patience = patience
self.counter = 0
self.mode = mode
self.best_score = None
self.early_stop = False
self.delta = delta
self.save_weights_only = save_weights_only
self.model_path = model_path
self.valid_samples = valid_samples
self.batch_size = batch_size
self.valid_df = valid_df
self.tokenizer = tokenizer
if self.mode == "min":
self.val_score = np.Inf
else:
self.val_score = -np.Inf
def on_epoch_end(self, model):
model.eval()
valid_dataset = FeedbackDatasetValid(self.valid_samples, 4096, self.tokenizer)
collate = Collate(self.tokenizer)
preds_iter = model.predict(
valid_dataset,
batch_size=self.batch_size,
n_jobs=-1,
collate_fn=collate,
)
final_preds = []
final_scores = []
for preds in preds_iter:
pred_class = np.argmax(preds, axis=2)
pred_scrs = np.max(preds, axis=2)
for pred, pred_scr in zip(pred_class, pred_scrs):
final_preds.append(pred.tolist())
final_scores.append(pred_scr.tolist())
for j in range(len(self.valid_samples)):
tt = [id_target_map[p] for p in final_preds[j][1:]]
tt_score = final_scores[j][1:]
self.valid_samples[j]["preds"] = tt
self.valid_samples[j]["pred_scores"] = tt_score
submission = []
min_thresh = {
"Lead": 9,
"Position": 5,
"Evidence": 14,
"Claim": 3,
"Concluding Statement": 11,
"Counterclaim": 6,
"Rebuttal": 4,
}
proba_thresh = {
"Lead": 0.7,
"Position": 0.55,
"Evidence": 0.65,
"Claim": 0.55,
"Concluding Statement": 0.7,
"Counterclaim": 0.5,
"Rebuttal": 0.55,
}
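        # Per-class post-processing: a phrase is kept only if its mean token
        # probability clears proba_thresh, and it is later dropped if its word
        # count falls under min_thresh (constants as defined above).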
for _, sample in enumerate(self.valid_samples):
preds = sample["preds"]
offset_mapping = sample["offset_mapping"]
sample_id = sample["id"]
sample_text = sample["text"]
sample_pred_scores = sample["pred_scores"]
# pad preds to same length as offset_mapping
if len(preds) < len(offset_mapping):
preds = preds + ["O"] * (len(offset_mapping) - len(preds))
sample_pred_scores = sample_pred_scores + [0] * (len(offset_mapping) - len(sample_pred_scores))
idx = 0
phrase_preds = []
while idx < len(offset_mapping):
                # take both ends of the current token so that single-token
                # phrases still yield a valid (start, end) span
                start, end = offset_mapping[idx]
if preds[idx] != "O":
label = preds[idx][2:]
else:
label = "O"
phrase_scores = []
phrase_scores.append(sample_pred_scores[idx])
idx += 1
while idx < len(offset_mapping):
if label == "O":
matching_label = "O"
else:
matching_label = f"I-{label}"
if preds[idx] == matching_label:
_, end = offset_mapping[idx]
phrase_scores.append(sample_pred_scores[idx])
idx += 1
else:
break
if "end" in locals():
phrase = sample_text[start:end]
phrase_preds.append((phrase, start, end, label, phrase_scores))
temp_df = []
for phrase_idx, (phrase, start, end, label, phrase_scores) in enumerate(phrase_preds):
word_start = len(sample_text[:start].split())
word_end = word_start + len(sample_text[start:end].split())
word_end = min(word_end, len(sample_text.split()))
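                # predictionstring = space-separated word indices covered by
                # the phrase, matching the competition's submission format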
ps = " ".join([str(x) for x in range(word_start, word_end)])
if label != "O":
if sum(phrase_scores) / len(phrase_scores) >= proba_thresh[label]:
temp_df.append((sample_id, label, ps))
temp_df = pd.DataFrame(temp_df, columns=["id", "class", "predictionstring"])
submission.append(temp_df)
submission = pd.concat(submission).reset_index(drop=True)
submission["len"] = submission.predictionstring.apply(lambda x: len(x.split()))
def threshold(df):
df = df.copy()
for key, value in min_thresh.items():
index = df.loc[df["class"] == key].query(f"len<{value}").index
df.drop(index, inplace=True)
return df
submission = threshold(submission)
# drop len
submission = submission.drop(columns=["len"])
scr = score_feedback_comp(submission, self.valid_df, return_class_scores=True)
print(scr)
model.train()
epoch_score = scr[0]
if self.mode == "min":
score = -1.0 * epoch_score
else:
score = np.copy(epoch_score)
if self.best_score is None:
self.best_score = score
self.save_checkpoint(epoch_score, model)
elif score < self.best_score + self.delta:
self.counter += 1
print("EarlyStopping counter: {} out of {}".format(self.counter, self.patience))
if self.counter >= self.patience:
model.model_state = enums.ModelState.END
else:
self.best_score = score
self.save_checkpoint(epoch_score, model)
self.counter = 0
def save_checkpoint(self, epoch_score, model):
        if np.isfinite(epoch_score):  # skip saving when the score is inf or NaN
print("Validation score improved ({} --> {}). Saving model!".format(self.val_score, epoch_score))
model.save(self.model_path, weights_only=self.save_weights_only)
self.val_score = epoch_score