Repository: kentonl/e2e-coref
Branch: master
Commit: 9d1ee1972f6e
Files: 24
Total size: 94.5 KB
Directory structure:
gitextract_zs1s9vox/
├── .gitignore
├── LICENSE
├── README.md
├── cache_elmo.py
├── conll.py
├── continuous_evaluate.py
├── coref_kernels.cc
├── coref_model.py
├── coref_ops.py
├── demo.py
├── evaluate.py
├── experiments.conf
├── filter_embeddings.py
├── get_char_vocab.py
├── metrics.py
├── minimize.py
├── predict.py
├── ps.py
├── requirements.txt
├── setup_all.sh
├── setup_training.sh
├── train.py
├── util.py
└── worker.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
*.pyc
*.so
*.jsonlines
logs
conll-2012
char_vocab*.txt
glove*.txt
glove*.txt.filtered
*.v*_*_conll
*.hdf5
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017 Kenton Lee
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# Higher-order Coreference Resolution with Coarse-to-fine Inference
## Introduction
This repository contains the code for replicating results from
* [Higher-order Coreference Resolution with Coarse-to-fine Inference](https://arxiv.org/abs/1804.05392)
* [Kenton Lee](http://kentonl.com/), [Luheng He](https://homes.cs.washington.edu/~luheng), and [Luke Zettlemoyer](https://www.cs.washington.edu/people/faculty/lsz)
* In NAACL 2018
## Getting Started
* Install python (either 2 or 3) requirements: `pip install -r requirements.txt`
* Download pretrained models at https://drive.google.com/file/d/1fkifqZzdzsOEo0DXMzCFjiNXqsKG_cHi
* Move the downloaded file to the root of the repo and extract: `tar -xzvf e2e-coref.tgz`
* Download GloVe embeddings and build custom kernels by running `setup_all.sh`.
* There are 3 platform-dependent ways to build custom TensorFlow kernels. Please comment/uncomment the appropriate lines in the script.
* To train your own models, run `setup_training.sh`
* This assumes access to OntoNotes 5.0. Please edit the `ontonotes_path` variable.
## Training Instructions
* Experiment configurations are found in `experiments.conf`
* Choose an experiment that you would like to run, e.g. `best`
* Training: `python train.py <experiment>`
* Results are stored in the `logs` directory and can be viewed via TensorBoard.
* Evaluation: `python evaluate.py <experiment>`
## Demo Instructions
* Command-line demo: `python demo.py final`
* To run the demo with other experiments, replace `final` with your configuration name.
## Batched Prediction Instructions
* Create a file in which each line is a single well-formed JSON object in the following format (each document must fit on one line, with no internal newlines):
```
{
"clusters": [],
"doc_key": "nw",
"sentences": [["This", "is", "the", "first", "sentence", "."], ["This", "is", "the", "second", "."]],
"speakers": [["spk1", "spk1", "spk1", "spk1", "spk1", "spk1"], ["spk2", "spk2", "spk2", "spk2", "spk2"]]
}
```
* `clusters` should be left empty and is only used for evaluation purposes.
* `doc_key` indicates the genre, which can be one of the following: `"bc", "bn", "mz", "nw", "pt", "tc", "wb"`
* `speakers` indicates the speaker of each word. These can be all empty strings if there is only one known speaker.
* Run `python predict.py <experiment> <input_file> <output_file>`, which outputs the input jsonlines with predicted clusters.
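The input format above can be generated programmatically. Below is a minimal sketch that writes one document to a jsonlines file; the `make_example` helper is hypothetical (not part of this repo), and tokenization and speaker assignment are left to you:

```python
import json

def make_example(sentences, speakers, genre="nw"):
    # Build one document in the expected input format.
    # "clusters" is left empty; it is only consulted during evaluation.
    return {
        "clusters": [],
        "doc_key": genre,
        "sentences": sentences,
        "speakers": speakers,
    }

example = make_example(
    [["This", "is", "the", "first", "sentence", "."],
     ["This", "is", "the", "second", "."]],
    [["spk1"] * 6, ["spk2"] * 5],
)
with open("input.jsonlines", "w") as f:
    # json.dumps emits no newlines by default, so each line stays well-formed.
    f.write(json.dumps(example) + "\n")
```

Each list in `speakers` must be the same length as the corresponding sentence.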
## Other Quirks
* It does not use GPUs by default. Instead, it looks for the `GPU` environment variable, which the code treats as shorthand for `CUDA_VISIBLE_DEVICES`.
* The training runs indefinitely and needs to be terminated manually. The model generally converges at about 400k steps.
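The `GPU` shorthand can be emulated as follows. This is a minimal sketch of the mapping, not the repo's actual implementation (which lives in `util.py` and may differ); `set_gpus_from_env` is a hypothetical name:

```python
import os

def set_gpus_from_env():
    # Map the GPU shorthand onto the standard CUDA variable.
    # If GPU is unset, hide all devices so TensorFlow falls back to CPU.
    os.environ["CUDA_VISIBLE_DEVICES"] = os.environ.get("GPU", "")

os.environ["GPU"] = "0"
set_gpus_from_env()
```

With this convention, `GPU=0 python train.py best` restricts training to the first GPU.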
================================================
FILE: cache_elmo.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import h5py
import json
import sys
def build_elmo():
token_ph = tf.placeholder(tf.string, [None, None])
len_ph = tf.placeholder(tf.int32, [None])
elmo_module = hub.Module("https://tfhub.dev/google/elmo/2")
lm_embeddings = elmo_module(
inputs={"tokens": token_ph, "sequence_len": len_ph},
signature="tokens", as_dict=True)
word_emb = lm_embeddings["word_emb"]
lm_emb = tf.stack([tf.concat([word_emb, word_emb], -1),
lm_embeddings["lstm_outputs1"],
lm_embeddings["lstm_outputs2"]], -1)
return token_ph, len_ph, lm_emb
def cache_dataset(data_path, session, token_ph, len_ph, lm_emb, out_file):
with open(data_path) as in_file:
for doc_num, line in enumerate(in_file.readlines()):
example = json.loads(line)
sentences = example["sentences"]
max_sentence_length = max(len(s) for s in sentences)
tokens = [[""] * max_sentence_length for _ in sentences]
text_len = np.array([len(s) for s in sentences])
for i, sentence in enumerate(sentences):
for j, word in enumerate(sentence):
tokens[i][j] = word
tokens = np.array(tokens)
tf_lm_emb = session.run(lm_emb, feed_dict={
token_ph: tokens,
len_ph: text_len
})
file_key = example["doc_key"].replace("/", ":")
group = out_file.create_group(file_key)
for i, (e, l) in enumerate(zip(tf_lm_emb, text_len)):
e = e[:l, :, :]
group[str(i)] = e
if doc_num % 10 == 0:
print("Cached {} documents in {}".format(doc_num + 1, data_path))
if __name__ == "__main__":
token_ph, len_ph, lm_emb = build_elmo()
with tf.Session() as session:
session.run(tf.global_variables_initializer())
with h5py.File("elmo_cache.hdf5", "w") as out_file:
for json_filename in sys.argv[1:]:
cache_dataset(json_filename, session, token_ph, len_ph, lm_emb, out_file)
================================================
FILE: conll.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
import os
import sys
import json
import tempfile
import subprocess
import operator
import collections
BEGIN_DOCUMENT_REGEX = re.compile(r"#begin document \((.*)\); part (\d+)")
COREF_RESULTS_REGEX = re.compile(r".*Coreference: Recall: \([0-9.]+ / [0-9.]+\) ([0-9.]+)%\tPrecision: \([0-9.]+ / [0-9.]+\) ([0-9.]+)%\tF1: ([0-9.]+)%.*", re.DOTALL)
def get_doc_key(doc_id, part):
return "{}_{}".format(doc_id, int(part))
def output_conll(input_file, output_file, predictions):
prediction_map = {}
for doc_key, clusters in predictions.items():
start_map = collections.defaultdict(list)
end_map = collections.defaultdict(list)
word_map = collections.defaultdict(list)
for cluster_id, mentions in enumerate(clusters):
for start, end in mentions:
if start == end:
word_map[start].append(cluster_id)
else:
start_map[start].append((cluster_id, end))
end_map[end].append((cluster_id, start))
for k,v in start_map.items():
start_map[k] = [cluster_id for cluster_id, end in sorted(v, key=operator.itemgetter(1), reverse=True)]
for k,v in end_map.items():
end_map[k] = [cluster_id for cluster_id, start in sorted(v, key=operator.itemgetter(1), reverse=True)]
prediction_map[doc_key] = (start_map, end_map, word_map)
word_index = 0
for line in input_file.readlines():
row = line.split()
if len(row) == 0:
output_file.write("\n")
elif row[0].startswith("#"):
begin_match = re.match(BEGIN_DOCUMENT_REGEX, line)
if begin_match:
doc_key = get_doc_key(begin_match.group(1), begin_match.group(2))
start_map, end_map, word_map = prediction_map[doc_key]
word_index = 0
output_file.write(line)
output_file.write("\n")
else:
assert get_doc_key(row[0], row[1]) == doc_key
coref_list = []
if word_index in end_map:
for cluster_id in end_map[word_index]:
coref_list.append("{})".format(cluster_id))
if word_index in word_map:
for cluster_id in word_map[word_index]:
coref_list.append("({})".format(cluster_id))
if word_index in start_map:
for cluster_id in start_map[word_index]:
coref_list.append("({}".format(cluster_id))
if len(coref_list) == 0:
row[-1] = "-"
else:
row[-1] = "|".join(coref_list)
output_file.write(" ".join(row))
output_file.write("\n")
word_index += 1
def official_conll_eval(gold_path, predicted_path, metric, official_stdout=False):
cmd = ["conll-2012/scorer/v8.01/scorer.pl", metric, gold_path, predicted_path, "none"]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
process.wait()
stdout = stdout.decode("utf-8")
if stderr is not None:
print(stderr)
if official_stdout:
print("Official result for {}".format(metric))
print(stdout)
coref_results_match = re.match(COREF_RESULTS_REGEX, stdout)
recall = float(coref_results_match.group(1))
precision = float(coref_results_match.group(2))
f1 = float(coref_results_match.group(3))
return { "r": recall, "p": precision, "f": f1 }
def evaluate_conll(gold_path, predictions, official_stdout=False):
with tempfile.NamedTemporaryFile(delete=False, mode="w") as prediction_file:
with open(gold_path, "r") as gold_file:
output_conll(gold_file, prediction_file, predictions)
print("Predicted conll file: {}".format(prediction_file.name))
return { m: official_conll_eval(gold_file.name, prediction_file.name, m, official_stdout) for m in ("muc", "bcub", "ceafe") }
================================================
FILE: continuous_evaluate.py
================================================
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import re
import time
import shutil
import tensorflow as tf
import coref_model as cm
import util
def copy_checkpoint(source, target):
for ext in (".index", ".data-00000-of-00001"):
shutil.copyfile(source + ext, target + ext)
if __name__ == "__main__":
config = util.initialize_from_env()
model = cm.CorefModel(config)
saver = tf.train.Saver()
log_dir = config["log_dir"]
writer = tf.summary.FileWriter(log_dir, flush_secs=20)
evaluated_checkpoints = set()
max_f1 = 0
checkpoint_pattern = re.compile(r".*model\.ckpt-([0-9]*)\Z")
with tf.Session() as session:
while True:
ckpt = tf.train.get_checkpoint_state(log_dir)
if ckpt and ckpt.model_checkpoint_path and ckpt.model_checkpoint_path not in evaluated_checkpoints:
print("Evaluating {}".format(ckpt.model_checkpoint_path))
# Move it to a temporary location to avoid being deleted by the training supervisor.
tmp_checkpoint_path = os.path.join(log_dir, "model.tmp.ckpt")
copy_checkpoint(ckpt.model_checkpoint_path, tmp_checkpoint_path)
global_step = int(checkpoint_pattern.match(ckpt.model_checkpoint_path).group(1))
saver.restore(session, ckpt.model_checkpoint_path)
eval_summary, f1 = model.evaluate(session)
if f1 > max_f1:
max_f1 = f1
copy_checkpoint(tmp_checkpoint_path, os.path.join(log_dir, "model.max.ckpt"))
print("Current max F1: {:.2f}".format(max_f1))
writer.add_summary(eval_summary, global_step)
print("Evaluation written to {} at step {}".format(log_dir, global_step))
evaluated_checkpoints.add(ckpt.model_checkpoint_path)
sleep_time = 60
else:
sleep_time = 10
print("Waiting for {} seconds before looking for next checkpoint.".format(sleep_time))
time.sleep(sleep_time)
================================================
FILE: coref_kernels.cc
================================================
#include <algorithm>
#include <map>
#include <numeric>
#include <unordered_map>
#include <vector>
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
using namespace tensorflow;
REGISTER_OP("ExtractSpans")
.Input("span_scores: float32")
.Input("candidate_starts: int32")
.Input("candidate_ends: int32")
.Input("num_output_spans: int32")
.Input("max_sentence_length: int32")
.Attr("sort_spans: bool")
.Output("output_span_indices: int32");
class ExtractSpansOp : public OpKernel {
public:
explicit ExtractSpansOp(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("sort_spans", &_sort_spans));
}
void Compute(OpKernelContext* context) override {
TTypes<float>::ConstMatrix span_scores = context->input(0).matrix<float>();
TTypes<int32>::ConstMatrix candidate_starts = context->input(1).matrix<int32>();
TTypes<int32>::ConstMatrix candidate_ends = context->input(2).matrix<int32>();
TTypes<int32>::ConstVec num_output_spans = context->input(3).vec<int32>();
int max_sentence_length = context->input(4).scalar<int32>()();
int num_sentences = span_scores.dimension(0);
int num_input_spans = span_scores.dimension(1);
int max_num_output_spans = 0;
for (int i = 0; i < num_sentences; i++) {
if (num_output_spans(i) > max_num_output_spans) {
max_num_output_spans = num_output_spans(i);
}
}
Tensor* output_span_indices_tensor = nullptr;
TensorShape output_span_indices_shape({num_sentences, max_num_output_spans});
OP_REQUIRES_OK(context, context->allocate_output(0, output_span_indices_shape,
&output_span_indices_tensor));
TTypes<int32>::Matrix output_span_indices = output_span_indices_tensor->matrix<int32>();
std::vector<std::vector<int>> sorted_input_span_indices(num_sentences,
std::vector<int>(num_input_spans));
for (int i = 0; i < num_sentences; i++) {
std::iota(sorted_input_span_indices[i].begin(), sorted_input_span_indices[i].end(), 0);
std::sort(sorted_input_span_indices[i].begin(), sorted_input_span_indices[i].end(),
[&span_scores, &i](int j1, int j2) {
return span_scores(i, j2) < span_scores(i, j1);
});
}
for (int l = 0; l < num_sentences; l++) {
std::vector<int> top_span_indices;
std::unordered_map<int, int> end_to_earliest_start;
std::unordered_map<int, int> start_to_latest_end;
int current_span_index = 0,
num_selected_spans = 0;
while (num_selected_spans < num_output_spans(l) && current_span_index < num_input_spans) {
int i = sorted_input_span_indices[l][current_span_index];
bool any_crossing = false;
const int start = candidate_starts(l, i);
const int end = candidate_ends(l, i);
for (int j = start; j <= end; ++j) {
auto latest_end_iter = start_to_latest_end.find(j);
if (latest_end_iter != start_to_latest_end.end() && j > start && latest_end_iter->second > end) {
// Given (), exists [], such that ( [ ) ]
any_crossing = true;
break;
}
auto earliest_start_iter = end_to_earliest_start.find(j);
if (earliest_start_iter != end_to_earliest_start.end() && j < end && earliest_start_iter->second < start) {
// Given (), exists [], such that [ ( ] )
any_crossing = true;
break;
}
}
if (!any_crossing) {
if (_sort_spans) {
top_span_indices.push_back(i);
} else {
output_span_indices(l, num_selected_spans) = i;
}
++num_selected_spans;
// Update data struct.
auto latest_end_iter = start_to_latest_end.find(start);
if (latest_end_iter == start_to_latest_end.end() || end > latest_end_iter->second) {
start_to_latest_end[start] = end;
}
auto earliest_start_iter = end_to_earliest_start.find(end);
if (earliest_start_iter == end_to_earliest_start.end() || start < earliest_start_iter->second) {
end_to_earliest_start[end] = start;
}
}
++current_span_index;
}
// Sort and populate selected span indices.
if (_sort_spans) {
std::sort(top_span_indices.begin(), top_span_indices.end(),
[&candidate_starts, &candidate_ends, &l] (int i1, int i2) {
if (candidate_starts(l, i1) < candidate_starts(l, i2)) {
return true;
} else if (candidate_starts(l, i1) > candidate_starts(l, i2)) {
return false;
} else if (candidate_ends(l, i1) < candidate_ends(l, i2)) {
return true;
} else if (candidate_ends(l, i1) > candidate_ends(l, i2)) {
return false;
} else {
return i1 < i2;
}
});
for (int i = 0; i < num_output_spans(l); ++i) {
output_span_indices(l, i) = top_span_indices[i];
}
}
// Pad with the first span index.
for (int i = num_selected_spans; i < max_num_output_spans; ++i) {
output_span_indices(l, i) = output_span_indices(l, 0);
}
}
}
private:
bool _sort_spans;
};
REGISTER_KERNEL_BUILDER(Name("ExtractSpans").Device(DEVICE_CPU), ExtractSpansOp);
================================================
FILE: coref_model.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import operator
import random
import math
import json
import threading
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import h5py
import util
import coref_ops
import conll
import metrics
class CorefModel(object):
def __init__(self, config):
self.config = config
self.context_embeddings = util.EmbeddingDictionary(config["context_embeddings"])
self.head_embeddings = util.EmbeddingDictionary(config["head_embeddings"], maybe_cache=self.context_embeddings)
self.char_embedding_size = config["char_embedding_size"]
self.char_dict = util.load_char_dict(config["char_vocab_path"])
self.max_span_width = config["max_span_width"]
self.genres = { g:i for i,g in enumerate(config["genres"]) }
if config["lm_path"]:
self.lm_file = h5py.File(self.config["lm_path"], "r")
else:
self.lm_file = None
self.lm_layers = self.config["lm_layers"]
self.lm_size = self.config["lm_size"]
self.eval_data = None # Load eval data lazily.
input_props = []
input_props.append((tf.string, [None, None])) # Tokens.
input_props.append((tf.float32, [None, None, self.context_embeddings.size])) # Context embeddings.
input_props.append((tf.float32, [None, None, self.head_embeddings.size])) # Head embeddings.
input_props.append((tf.float32, [None, None, self.lm_size, self.lm_layers])) # LM embeddings.
input_props.append((tf.int32, [None, None, None])) # Character indices.
input_props.append((tf.int32, [None])) # Text lengths.
input_props.append((tf.int32, [None])) # Speaker IDs.
input_props.append((tf.int32, [])) # Genre.
input_props.append((tf.bool, [])) # Is training.
input_props.append((tf.int32, [None])) # Gold starts.
input_props.append((tf.int32, [None])) # Gold ends.
input_props.append((tf.int32, [None])) # Cluster ids.
self.queue_input_tensors = [tf.placeholder(dtype, shape) for dtype, shape in input_props]
dtypes, shapes = zip(*input_props)
queue = tf.PaddingFIFOQueue(capacity=10, dtypes=dtypes, shapes=shapes)
self.enqueue_op = queue.enqueue(self.queue_input_tensors)
self.input_tensors = queue.dequeue()
self.predictions, self.loss = self.get_predictions_and_loss(*self.input_tensors)
self.global_step = tf.Variable(0, name="global_step", trainable=False)
self.reset_global_step = tf.assign(self.global_step, 0)
learning_rate = tf.train.exponential_decay(self.config["learning_rate"], self.global_step,
self.config["decay_frequency"], self.config["decay_rate"], staircase=True)
trainable_params = tf.trainable_variables()
gradients = tf.gradients(self.loss, trainable_params)
gradients, _ = tf.clip_by_global_norm(gradients, self.config["max_gradient_norm"])
optimizers = {
"adam" : tf.train.AdamOptimizer,
"sgd" : tf.train.GradientDescentOptimizer
}
optimizer = optimizers[self.config["optimizer"]](learning_rate)
self.train_op = optimizer.apply_gradients(zip(gradients, trainable_params), global_step=self.global_step)
def start_enqueue_thread(self, session):
with open(self.config["train_path"]) as f:
train_examples = [json.loads(jsonline) for jsonline in f.readlines()]
def _enqueue_loop():
while True:
random.shuffle(train_examples)
for example in train_examples:
tensorized_example = self.tensorize_example(example, is_training=True)
feed_dict = dict(zip(self.queue_input_tensors, tensorized_example))
session.run(self.enqueue_op, feed_dict=feed_dict)
enqueue_thread = threading.Thread(target=_enqueue_loop)
enqueue_thread.daemon = True
enqueue_thread.start()
def restore(self, session):
# Don't try to restore unused variables from the TF-Hub ELMo module.
vars_to_restore = [v for v in tf.global_variables() if "module/" not in v.name]
saver = tf.train.Saver(vars_to_restore)
checkpoint_path = os.path.join(self.config["log_dir"], "model.max.ckpt")
print("Restoring from {}".format(checkpoint_path))
session.run(tf.global_variables_initializer())
saver.restore(session, checkpoint_path)
def load_lm_embeddings(self, doc_key):
if self.lm_file is None:
return np.zeros([0, 0, self.lm_size, self.lm_layers])
file_key = doc_key.replace("/", ":")
group = self.lm_file[file_key]
num_sentences = len(list(group.keys()))
sentences = [group[str(i)][...] for i in range(num_sentences)]
lm_emb = np.zeros([num_sentences, max(s.shape[0] for s in sentences), self.lm_size, self.lm_layers])
for i, s in enumerate(sentences):
lm_emb[i, :s.shape[0], :, :] = s
return lm_emb
def tensorize_mentions(self, mentions):
if len(mentions) > 0:
starts, ends = zip(*mentions)
else:
starts, ends = [], []
return np.array(starts), np.array(ends)
def tensorize_span_labels(self, tuples, label_dict):
if len(tuples) > 0:
starts, ends, labels = zip(*tuples)
else:
starts, ends, labels = [], [], []
return np.array(starts), np.array(ends), np.array([label_dict[c] for c in labels])
def tensorize_example(self, example, is_training):
clusters = example["clusters"]
gold_mentions = sorted(tuple(m) for m in util.flatten(clusters))
gold_mention_map = {m:i for i,m in enumerate(gold_mentions)}
cluster_ids = np.zeros(len(gold_mentions))
for cluster_id, cluster in enumerate(clusters):
for mention in cluster:
cluster_ids[gold_mention_map[tuple(mention)]] = cluster_id + 1
sentences = example["sentences"]
num_words = sum(len(s) for s in sentences)
speakers = util.flatten(example["speakers"])
assert num_words == len(speakers)
max_sentence_length = max(len(s) for s in sentences)
max_word_length = max(max(max(len(w) for w in s) for s in sentences), max(self.config["filter_widths"]))
text_len = np.array([len(s) for s in sentences])
tokens = [[""] * max_sentence_length for _ in sentences]
context_word_emb = np.zeros([len(sentences), max_sentence_length, self.context_embeddings.size])
head_word_emb = np.zeros([len(sentences), max_sentence_length, self.head_embeddings.size])
char_index = np.zeros([len(sentences), max_sentence_length, max_word_length])
for i, sentence in enumerate(sentences):
for j, word in enumerate(sentence):
tokens[i][j] = word
context_word_emb[i, j] = self.context_embeddings[word]
head_word_emb[i, j] = self.head_embeddings[word]
char_index[i, j, :len(word)] = [self.char_dict[c] for c in word]
tokens = np.array(tokens)
speaker_dict = { s:i for i,s in enumerate(set(speakers)) }
speaker_ids = np.array([speaker_dict[s] for s in speakers])
doc_key = example["doc_key"]
genre = self.genres[doc_key[:2]]
gold_starts, gold_ends = self.tensorize_mentions(gold_mentions)
lm_emb = self.load_lm_embeddings(doc_key)
example_tensors = (tokens, context_word_emb, head_word_emb, lm_emb, char_index, text_len, speaker_ids, genre, is_training, gold_starts, gold_ends, cluster_ids)
if is_training and len(sentences) > self.config["max_training_sentences"]:
return self.truncate_example(*example_tensors)
else:
return example_tensors
def truncate_example(self, tokens, context_word_emb, head_word_emb, lm_emb, char_index, text_len, speaker_ids, genre, is_training, gold_starts, gold_ends, cluster_ids):
max_training_sentences = self.config["max_training_sentences"]
num_sentences = context_word_emb.shape[0]
assert num_sentences > max_training_sentences
sentence_offset = random.randint(0, num_sentences - max_training_sentences)
word_offset = text_len[:sentence_offset].sum()
num_words = text_len[sentence_offset:sentence_offset + max_training_sentences].sum()
tokens = tokens[sentence_offset:sentence_offset + max_training_sentences, :]
context_word_emb = context_word_emb[sentence_offset:sentence_offset + max_training_sentences, :, :]
head_word_emb = head_word_emb[sentence_offset:sentence_offset + max_training_sentences, :, :]
lm_emb = lm_emb[sentence_offset:sentence_offset + max_training_sentences, :, :, :]
char_index = char_index[sentence_offset:sentence_offset + max_training_sentences, :, :]
text_len = text_len[sentence_offset:sentence_offset + max_training_sentences]
speaker_ids = speaker_ids[word_offset: word_offset + num_words]
gold_spans = np.logical_and(gold_ends >= word_offset, gold_starts < word_offset + num_words)
gold_starts = gold_starts[gold_spans] - word_offset
gold_ends = gold_ends[gold_spans] - word_offset
cluster_ids = cluster_ids[gold_spans]
return tokens, context_word_emb, head_word_emb, lm_emb, char_index, text_len, speaker_ids, genre, is_training, gold_starts, gold_ends, cluster_ids
def get_candidate_labels(self, candidate_starts, candidate_ends, labeled_starts, labeled_ends, labels):
same_start = tf.equal(tf.expand_dims(labeled_starts, 1), tf.expand_dims(candidate_starts, 0)) # [num_labeled, num_candidates]
same_end = tf.equal(tf.expand_dims(labeled_ends, 1), tf.expand_dims(candidate_ends, 0)) # [num_labeled, num_candidates]
same_span = tf.logical_and(same_start, same_end) # [num_labeled, num_candidates]
candidate_labels = tf.matmul(tf.expand_dims(labels, 0), tf.to_int32(same_span)) # [1, num_candidates]
candidate_labels = tf.squeeze(candidate_labels, 0) # [num_candidates]
return candidate_labels
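get_candidate_labels routes gold labels to candidate spans with a boolean same-span matrix and a single matmul, so candidates that match no gold span get label 0. The same trick in plain NumPy, on hypothetical toy spans:

```python
import numpy as np

# Hypothetical gold spans and their cluster ids.
labeled_starts = np.array([0, 3])
labeled_ends   = np.array([1, 3])
labels         = np.array([1, 2])

# Candidate spans; only the second one matches a gold span, (3, 3).
candidate_starts = np.array([0, 3, 4])
candidate_ends   = np.array([2, 3, 5])

# same_span[i, j] is True iff gold span i equals candidate span j.
same_span = (labeled_starts[:, None] == candidate_starts[None, :]) & \
            (labeled_ends[:, None] == candidate_ends[None, :])

# The matmul routes each gold label to its candidate; non-gold spans get 0.
candidate_labels = (labels[None, :] @ same_span.astype(int)).squeeze(0)
```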
def get_dropout(self, dropout_rate, is_training):
return 1 - (tf.to_float(is_training) * dropout_rate)
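Note the returned value is a keep probability, not a drop probability: 1 - rate while training and exactly 1.0 at eval time, so the same graph is a no-op at inference. The same arithmetic in plain Python:

```python
def get_keep_prob(dropout_rate, is_training):
    # Keep probability for tf.nn.dropout-style APIs:
    # 1 - rate while training, exactly 1.0 (a no-op) at eval time.
    return 1 - (float(is_training) * dropout_rate)
```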
def coarse_to_fine_pruning(self, top_span_emb, top_span_mention_scores, c):
k = util.shape(top_span_emb, 0)
top_span_range = tf.range(k) # [k]
antecedent_offsets = tf.expand_dims(top_span_range, 1) - tf.expand_dims(top_span_range, 0) # [k, k]
antecedents_mask = antecedent_offsets >= 1 # [k, k]
fast_antecedent_scores = tf.expand_dims(top_span_mention_scores, 1) + tf.expand_dims(top_span_mention_scores, 0) # [k, k]
fast_antecedent_scores += tf.log(tf.to_float(antecedents_mask)) # [k, k]
fast_antecedent_scores += self.get_fast_antecedent_scores(top_span_emb) # [k, k]
_, top_antecedents = tf.nn.top_k(fast_antecedent_scores, c, sorted=False) # [k, c]
top_antecedents_mask = util.batch_gather(antecedents_mask, top_antecedents) # [k, c]
top_fast_antecedent_scores = util.batch_gather(fast_antecedent_scores, top_antecedents) # [k, c]
top_antecedent_offsets = util.batch_gather(antecedent_offsets, top_antecedents) # [k, c]
return top_antecedents, top_antecedents_mask, top_fast_antecedent_scores, top_antecedent_offsets
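coarse_to_fine_pruning relies on the log-mask trick: adding log(mask) leaves valid scores unchanged (log 1 = 0) and pushes invalid ones to -inf (log 0), so a later top-k never prefers a masked antecedent. A NumPy sketch with toy scores (rows with no valid antecedent still return indices, which is why the code above gathers the mask alongside them):

```python
import numpy as np

k, c = 4, 2
scores = np.arange(k * k, dtype=float).reshape(k, k)  # toy pairwise scores

# Span j may only be an antecedent of span i if it precedes it (offset >= 1).
offsets = np.arange(k)[:, None] - np.arange(k)[None, :]
mask = offsets >= 1  # [k, k]

# log(1) = 0 leaves valid scores alone; log(0) = -inf buries invalid ones.
with np.errstate(divide="ignore"):
    masked = scores + np.log(mask.astype(float))

top = np.argsort(-masked, axis=1)[:, :c]  # c best antecedents per span
```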
def distance_pruning(self, top_span_emb, top_span_mention_scores, c):
k = util.shape(top_span_emb, 0)
top_antecedent_offsets = tf.tile(tf.expand_dims(tf.range(c) + 1, 0), [k, 1]) # [k, c]
raw_top_antecedents = tf.expand_dims(tf.range(k), 1) - top_antecedent_offsets # [k, c]
top_antecedents_mask = raw_top_antecedents >= 0 # [k, c]
top_antecedents = tf.maximum(raw_top_antecedents, 0) # [k, c]
top_fast_antecedent_scores = tf.expand_dims(top_span_mention_scores, 1) + tf.gather(top_span_mention_scores, top_antecedents) # [k, c]
top_fast_antecedent_scores += tf.log(tf.to_float(top_antecedents_mask)) # [k, c]
return top_antecedents, top_antecedents_mask, top_fast_antecedent_scores, top_antecedent_offsets
def get_predictions_and_loss(self, tokens, context_word_emb, head_word_emb, lm_emb, char_index, text_len, speaker_ids, genre, is_training, gold_starts, gold_ends, cluster_ids):
self.dropout = self.get_dropout(self.config["dropout_rate"], is_training)
self.lexical_dropout = self.get_dropout(self.config["lexical_dropout_rate"], is_training)
self.lstm_dropout = self.get_dropout(self.config["lstm_dropout_rate"], is_training)
num_sentences = tf.shape(context_word_emb)[0]
max_sentence_length = tf.shape(context_word_emb)[1]
context_emb_list = [context_word_emb]
head_emb_list = [head_word_emb]
if self.config["char_embedding_size"] > 0:
char_emb = tf.gather(tf.get_variable("char_embeddings", [len(self.char_dict), self.config["char_embedding_size"]]), char_index) # [num_sentences, max_sentence_length, max_word_length, emb]
flattened_char_emb = tf.reshape(char_emb, [num_sentences * max_sentence_length, util.shape(char_emb, 2), util.shape(char_emb, 3)]) # [num_sentences * max_sentence_length, max_word_length, emb]
flattened_aggregated_char_emb = util.cnn(flattened_char_emb, self.config["filter_widths"], self.config["filter_size"]) # [num_sentences * max_sentence_length, emb]
aggregated_char_emb = tf.reshape(flattened_aggregated_char_emb, [num_sentences, max_sentence_length, util.shape(flattened_aggregated_char_emb, 1)]) # [num_sentences, max_sentence_length, emb]
context_emb_list.append(aggregated_char_emb)
head_emb_list.append(aggregated_char_emb)
if not self.lm_file:
elmo_module = hub.Module("https://tfhub.dev/google/elmo/2")
lm_embeddings = elmo_module(
inputs={"tokens": tokens, "sequence_len": text_len},
signature="tokens", as_dict=True)
word_emb = lm_embeddings["word_emb"] # [num_sentences, max_sentence_length, 512]
lm_emb = tf.stack([tf.concat([word_emb, word_emb], -1),
lm_embeddings["lstm_outputs1"],
lm_embeddings["lstm_outputs2"]], -1) # [num_sentences, max_sentence_length, 1024, 3]
lm_emb_size = util.shape(lm_emb, 2)
lm_num_layers = util.shape(lm_emb, 3)
with tf.variable_scope("lm_aggregation"):
self.lm_weights = tf.nn.softmax(tf.get_variable("lm_scores", [lm_num_layers], initializer=tf.constant_initializer(0.0)))
self.lm_scaling = tf.get_variable("lm_scaling", [], initializer=tf.constant_initializer(1.0))
flattened_lm_emb = tf.reshape(lm_emb, [num_sentences * max_sentence_length * lm_emb_size, lm_num_layers])
flattened_aggregated_lm_emb = tf.matmul(flattened_lm_emb, tf.expand_dims(self.lm_weights, 1)) # [num_sentences * max_sentence_length * emb, 1]
aggregated_lm_emb = tf.reshape(flattened_aggregated_lm_emb, [num_sentences, max_sentence_length, lm_emb_size])
aggregated_lm_emb *= self.lm_scaling
context_emb_list.append(aggregated_lm_emb)
context_emb = tf.concat(context_emb_list, 2) # [num_sentences, max_sentence_length, emb]
head_emb = tf.concat(head_emb_list, 2) # [num_sentences, max_sentence_length, emb]
context_emb = tf.nn.dropout(context_emb, self.lexical_dropout) # [num_sentences, max_sentence_length, emb]
head_emb = tf.nn.dropout(head_emb, self.lexical_dropout) # [num_sentences, max_sentence_length, emb]
    text_len_mask = tf.sequence_mask(text_len, maxlen=max_sentence_length) # [num_sentences, max_sentence_length]
context_outputs = self.lstm_contextualize(context_emb, text_len, text_len_mask) # [num_words, emb]
num_words = util.shape(context_outputs, 0)
genre_emb = tf.gather(tf.get_variable("genre_embeddings", [len(self.genres), self.config["feature_size"]]), genre) # [emb]
sentence_indices = tf.tile(tf.expand_dims(tf.range(num_sentences), 1), [1, max_sentence_length]) # [num_sentences, max_sentence_length]
flattened_sentence_indices = self.flatten_emb_by_sentence(sentence_indices, text_len_mask) # [num_words]
flattened_head_emb = self.flatten_emb_by_sentence(head_emb, text_len_mask) # [num_words]
candidate_starts = tf.tile(tf.expand_dims(tf.range(num_words), 1), [1, self.max_span_width]) # [num_words, max_span_width]
candidate_ends = candidate_starts + tf.expand_dims(tf.range(self.max_span_width), 0) # [num_words, max_span_width]
candidate_start_sentence_indices = tf.gather(flattened_sentence_indices, candidate_starts) # [num_words, max_span_width]
candidate_end_sentence_indices = tf.gather(flattened_sentence_indices, tf.minimum(candidate_ends, num_words - 1)) # [num_words, max_span_width]
candidate_mask = tf.logical_and(candidate_ends < num_words, tf.equal(candidate_start_sentence_indices, candidate_end_sentence_indices)) # [num_words, max_span_width]
flattened_candidate_mask = tf.reshape(candidate_mask, [-1]) # [num_words * max_span_width]
candidate_starts = tf.boolean_mask(tf.reshape(candidate_starts, [-1]), flattened_candidate_mask) # [num_candidates]
candidate_ends = tf.boolean_mask(tf.reshape(candidate_ends, [-1]), flattened_candidate_mask) # [num_candidates]
candidate_sentence_indices = tf.boolean_mask(tf.reshape(candidate_start_sentence_indices, [-1]), flattened_candidate_mask) # [num_candidates]
candidate_cluster_ids = self.get_candidate_labels(candidate_starts, candidate_ends, gold_starts, gold_ends, cluster_ids) # [num_candidates]
candidate_span_emb = self.get_span_emb(flattened_head_emb, context_outputs, candidate_starts, candidate_ends) # [num_candidates, emb]
    candidate_mention_scores = self.get_mention_scores(candidate_span_emb) # [num_candidates, 1]
    candidate_mention_scores = tf.squeeze(candidate_mention_scores, 1) # [num_candidates]
k = tf.to_int32(tf.floor(tf.to_float(tf.shape(context_outputs)[0]) * self.config["top_span_ratio"]))
top_span_indices = coref_ops.extract_spans(tf.expand_dims(candidate_mention_scores, 0),
tf.expand_dims(candidate_starts, 0),
tf.expand_dims(candidate_ends, 0),
tf.expand_dims(k, 0),
util.shape(context_outputs, 0),
True) # [1, k]
top_span_indices.set_shape([1, None])
top_span_indices = tf.squeeze(top_span_indices, 0) # [k]
top_span_starts = tf.gather(candidate_starts, top_span_indices) # [k]
top_span_ends = tf.gather(candidate_ends, top_span_indices) # [k]
top_span_emb = tf.gather(candidate_span_emb, top_span_indices) # [k, emb]
top_span_cluster_ids = tf.gather(candidate_cluster_ids, top_span_indices) # [k]
top_span_mention_scores = tf.gather(candidate_mention_scores, top_span_indices) # [k]
top_span_sentence_indices = tf.gather(candidate_sentence_indices, top_span_indices) # [k]
top_span_speaker_ids = tf.gather(speaker_ids, top_span_starts) # [k]
c = tf.minimum(self.config["max_top_antecedents"], k)
if self.config["coarse_to_fine"]:
top_antecedents, top_antecedents_mask, top_fast_antecedent_scores, top_antecedent_offsets = self.coarse_to_fine_pruning(top_span_emb, top_span_mention_scores, c)
else:
top_antecedents, top_antecedents_mask, top_fast_antecedent_scores, top_antecedent_offsets = self.distance_pruning(top_span_emb, top_span_mention_scores, c)
dummy_scores = tf.zeros([k, 1]) # [k, 1]
for i in range(self.config["coref_depth"]):
with tf.variable_scope("coref_layer", reuse=(i > 0)):
top_antecedent_emb = tf.gather(top_span_emb, top_antecedents) # [k, c, emb]
top_antecedent_scores = top_fast_antecedent_scores + self.get_slow_antecedent_scores(top_span_emb, top_antecedents, top_antecedent_emb, top_antecedent_offsets, top_span_speaker_ids, genre_emb) # [k, c]
top_antecedent_weights = tf.nn.softmax(tf.concat([dummy_scores, top_antecedent_scores], 1)) # [k, c + 1]
top_antecedent_emb = tf.concat([tf.expand_dims(top_span_emb, 1), top_antecedent_emb], 1) # [k, c + 1, emb]
attended_span_emb = tf.reduce_sum(tf.expand_dims(top_antecedent_weights, 2) * top_antecedent_emb, 1) # [k, emb]
with tf.variable_scope("f"):
f = tf.sigmoid(util.projection(tf.concat([top_span_emb, attended_span_emb], 1), util.shape(top_span_emb, -1))) # [k, emb]
top_span_emb = f * attended_span_emb + (1 - f) * top_span_emb # [k, emb]
top_antecedent_scores = tf.concat([dummy_scores, top_antecedent_scores], 1) # [k, c + 1]
top_antecedent_cluster_ids = tf.gather(top_span_cluster_ids, top_antecedents) # [k, c]
    top_antecedent_cluster_ids += tf.to_int32(tf.log(tf.to_float(top_antecedents_mask))) # [k, c]; masked entries become a large negative id that never matches a gold cluster
same_cluster_indicator = tf.equal(top_antecedent_cluster_ids, tf.expand_dims(top_span_cluster_ids, 1)) # [k, c]
non_dummy_indicator = tf.expand_dims(top_span_cluster_ids > 0, 1) # [k, 1]
pairwise_labels = tf.logical_and(same_cluster_indicator, non_dummy_indicator) # [k, c]
dummy_labels = tf.logical_not(tf.reduce_any(pairwise_labels, 1, keepdims=True)) # [k, 1]
top_antecedent_labels = tf.concat([dummy_labels, pairwise_labels], 1) # [k, c + 1]
loss = self.softmax_loss(top_antecedent_scores, top_antecedent_labels) # [k]
loss = tf.reduce_sum(loss) # []
return [candidate_starts, candidate_ends, candidate_mention_scores, top_span_starts, top_span_ends, top_antecedents, top_antecedent_scores], loss
def get_span_emb(self, head_emb, context_outputs, span_starts, span_ends):
span_emb_list = []
span_start_emb = tf.gather(context_outputs, span_starts) # [k, emb]
span_emb_list.append(span_start_emb)
span_end_emb = tf.gather(context_outputs, span_ends) # [k, emb]
span_emb_list.append(span_end_emb)
span_width = 1 + span_ends - span_starts # [k]
if self.config["use_features"]:
span_width_index = span_width - 1 # [k]
span_width_emb = tf.gather(tf.get_variable("span_width_embeddings", [self.config["max_span_width"], self.config["feature_size"]]), span_width_index) # [k, emb]
span_width_emb = tf.nn.dropout(span_width_emb, self.dropout)
span_emb_list.append(span_width_emb)
if self.config["model_heads"]:
span_indices = tf.expand_dims(tf.range(self.config["max_span_width"]), 0) + tf.expand_dims(span_starts, 1) # [k, max_span_width]
span_indices = tf.minimum(util.shape(context_outputs, 0) - 1, span_indices) # [k, max_span_width]
span_text_emb = tf.gather(head_emb, span_indices) # [k, max_span_width, emb]
with tf.variable_scope("head_scores"):
self.head_scores = util.projection(context_outputs, 1) # [num_words, 1]
span_head_scores = tf.gather(self.head_scores, span_indices) # [k, max_span_width, 1]
span_mask = tf.expand_dims(tf.sequence_mask(span_width, self.config["max_span_width"], dtype=tf.float32), 2) # [k, max_span_width, 1]
span_head_scores += tf.log(span_mask) # [k, max_span_width, 1]
span_attention = tf.nn.softmax(span_head_scores, 1) # [k, max_span_width, 1]
span_head_emb = tf.reduce_sum(span_attention * span_text_emb, 1) # [k, emb]
span_emb_list.append(span_head_emb)
span_emb = tf.concat(span_emb_list, 1) # [k, emb]
return span_emb # [k, emb]
def get_mention_scores(self, span_emb):
with tf.variable_scope("mention_scores"):
return util.ffnn(span_emb, self.config["ffnn_depth"], self.config["ffnn_size"], 1, self.dropout) # [k, 1]
def softmax_loss(self, antecedent_scores, antecedent_labels):
gold_scores = antecedent_scores + tf.log(tf.to_float(antecedent_labels)) # [k, max_ant + 1]
marginalized_gold_scores = tf.reduce_logsumexp(gold_scores, [1]) # [k]
log_norm = tf.reduce_logsumexp(antecedent_scores, [1]) # [k]
return log_norm - marginalized_gold_scores # [k]
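softmax_loss is a marginalized cross-entropy: it maximizes the total probability mass assigned to all correct antecedents, again using the log-mask trick to zero out non-gold entries. A self-contained NumPy sketch on toy scores:

```python
import numpy as np

def logsumexp(x):
    # Numerically stable log(sum(exp(x))).
    m = x.max()
    return m + np.log(np.sum(np.exp(x - m)))

def marginal_softmax_loss(antecedent_scores, antecedent_labels):
    # loss = log sum_j exp(s_j) - log sum_{j in gold} exp(s_j):
    # the negative log of the probability mass on all correct antecedents.
    with np.errstate(divide="ignore"):
        gold_scores = antecedent_scores + np.log(antecedent_labels.astype(float))
    return logsumexp(antecedent_scores) - logsumexp(gold_scores)

# One span with three antecedent candidates; the first two are gold.
scores = np.array([2.0, 1.0, 0.0])
loss = marginal_softmax_loss(scores, np.array([True, True, False]))
perfect = marginal_softmax_loss(scores, np.array([True, True, True]))
```

When every candidate is gold, the two logsumexp terms coincide and the loss is exactly zero.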
def bucket_distance(self, distances):
"""
Places the given values (designed for distances) into 10 semi-logscale buckets:
[0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].
"""
logspace_idx = tf.to_int32(tf.floor(tf.log(tf.to_float(distances))/math.log(2))) + 3
use_identity = tf.to_int32(distances <= 4)
combined_idx = use_identity * distances + (1 - use_identity) * logspace_idx
return tf.clip_by_value(combined_idx, 0, 9)
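A NumPy re-implementation of the same bucketing, useful for checking the bucket boundaries in the docstring (it clamps the log argument to 1; the TF version tolerates log(0) only because the identity branch wins for small distances):

```python
import numpy as np

def bucket_distance_np(distances):
    # Same 10 buckets: [0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].
    distances = np.asarray(distances)
    logspace_idx = np.floor(np.log2(np.maximum(distances, 1))).astype(int) + 3
    use_identity = (distances <= 4).astype(int)
    combined = use_identity * distances + (1 - use_identity) * logspace_idx
    return np.clip(combined, 0, 9)

buckets = bucket_distance_np([0, 1, 4, 5, 7, 8, 15, 16, 63, 64, 1000])
```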
def get_slow_antecedent_scores(self, top_span_emb, top_antecedents, top_antecedent_emb, top_antecedent_offsets, top_span_speaker_ids, genre_emb):
k = util.shape(top_span_emb, 0)
c = util.shape(top_antecedents, 1)
feature_emb_list = []
if self.config["use_metadata"]:
top_antecedent_speaker_ids = tf.gather(top_span_speaker_ids, top_antecedents) # [k, c]
same_speaker = tf.equal(tf.expand_dims(top_span_speaker_ids, 1), top_antecedent_speaker_ids) # [k, c]
speaker_pair_emb = tf.gather(tf.get_variable("same_speaker_emb", [2, self.config["feature_size"]]), tf.to_int32(same_speaker)) # [k, c, emb]
feature_emb_list.append(speaker_pair_emb)
tiled_genre_emb = tf.tile(tf.expand_dims(tf.expand_dims(genre_emb, 0), 0), [k, c, 1]) # [k, c, emb]
feature_emb_list.append(tiled_genre_emb)
if self.config["use_features"]:
antecedent_distance_buckets = self.bucket_distance(top_antecedent_offsets) # [k, c]
      antecedent_distance_emb = tf.gather(tf.get_variable("antecedent_distance_emb", [10, self.config["feature_size"]]), antecedent_distance_buckets) # [k, c, emb]
feature_emb_list.append(antecedent_distance_emb)
feature_emb = tf.concat(feature_emb_list, 2) # [k, c, emb]
feature_emb = tf.nn.dropout(feature_emb, self.dropout) # [k, c, emb]
target_emb = tf.expand_dims(top_span_emb, 1) # [k, 1, emb]
similarity_emb = top_antecedent_emb * target_emb # [k, c, emb]
target_emb = tf.tile(target_emb, [1, c, 1]) # [k, c, emb]
pair_emb = tf.concat([target_emb, top_antecedent_emb, similarity_emb, feature_emb], 2) # [k, c, emb]
with tf.variable_scope("slow_antecedent_scores"):
slow_antecedent_scores = util.ffnn(pair_emb, self.config["ffnn_depth"], self.config["ffnn_size"], 1, self.dropout) # [k, c, 1]
slow_antecedent_scores = tf.squeeze(slow_antecedent_scores, 2) # [k, c]
return slow_antecedent_scores # [k, c]
def get_fast_antecedent_scores(self, top_span_emb):
with tf.variable_scope("src_projection"):
source_top_span_emb = tf.nn.dropout(util.projection(top_span_emb, util.shape(top_span_emb, -1)), self.dropout) # [k, emb]
target_top_span_emb = tf.nn.dropout(top_span_emb, self.dropout) # [k, emb]
return tf.matmul(source_top_span_emb, target_top_span_emb, transpose_b=True) # [k, k]
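get_fast_antecedent_scores computes a bilinear score for every span pair with one projection and one matmul. A NumPy sketch with random toy embeddings (W stands in for the projection weights), cross-checked against an explicit pair loop:

```python
import numpy as np

rng = np.random.default_rng(0)
k, emb = 5, 8  # toy sizes
span_emb = rng.standard_normal((k, emb))
W = rng.standard_normal((emb, emb))

# One matmul scores every ordered pair at once: score(i, j) = (W x_i) . x_j.
fast_scores = (span_emb @ W) @ span_emb.T  # [k, k]

# Cross-check against an explicit loop over pairs.
loop = np.array([[(span_emb[i] @ W) @ span_emb[j] for j in range(k)]
                 for i in range(k)])
max_abs_err = float(np.abs(fast_scores - loop).max())
```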
def flatten_emb_by_sentence(self, emb, text_len_mask):
num_sentences = tf.shape(emb)[0]
max_sentence_length = tf.shape(emb)[1]
emb_rank = len(emb.get_shape())
if emb_rank == 2:
flattened_emb = tf.reshape(emb, [num_sentences * max_sentence_length])
elif emb_rank == 3:
flattened_emb = tf.reshape(emb, [num_sentences * max_sentence_length, util.shape(emb, 2)])
else:
raise ValueError("Unsupported rank: {}".format(emb_rank))
return tf.boolean_mask(flattened_emb, tf.reshape(text_len_mask, [num_sentences * max_sentence_length]))
def lstm_contextualize(self, text_emb, text_len, text_len_mask):
num_sentences = tf.shape(text_emb)[0]
current_inputs = text_emb # [num_sentences, max_sentence_length, emb]
for layer in range(self.config["contextualization_layers"]):
with tf.variable_scope("layer_{}".format(layer)):
with tf.variable_scope("fw_cell"):
cell_fw = util.CustomLSTMCell(self.config["contextualization_size"], num_sentences, self.lstm_dropout)
with tf.variable_scope("bw_cell"):
cell_bw = util.CustomLSTMCell(self.config["contextualization_size"], num_sentences, self.lstm_dropout)
state_fw = tf.contrib.rnn.LSTMStateTuple(tf.tile(cell_fw.initial_state.c, [num_sentences, 1]), tf.tile(cell_fw.initial_state.h, [num_sentences, 1]))
state_bw = tf.contrib.rnn.LSTMStateTuple(tf.tile(cell_bw.initial_state.c, [num_sentences, 1]), tf.tile(cell_bw.initial_state.h, [num_sentences, 1]))
(fw_outputs, bw_outputs), _ = tf.nn.bidirectional_dynamic_rnn(
cell_fw=cell_fw,
cell_bw=cell_bw,
inputs=current_inputs,
sequence_length=text_len,
initial_state_fw=state_fw,
initial_state_bw=state_bw)
text_outputs = tf.concat([fw_outputs, bw_outputs], 2) # [num_sentences, max_sentence_length, emb]
text_outputs = tf.nn.dropout(text_outputs, self.lstm_dropout)
if layer > 0:
highway_gates = tf.sigmoid(util.projection(text_outputs, util.shape(text_outputs, 2))) # [num_sentences, max_sentence_length, emb]
text_outputs = highway_gates * text_outputs + (1 - highway_gates) * current_inputs
current_inputs = text_outputs
return self.flatten_emb_by_sentence(text_outputs, text_len_mask)
def get_predicted_antecedents(self, antecedents, antecedent_scores):
predicted_antecedents = []
for i, index in enumerate(np.argmax(antecedent_scores, axis=1) - 1):
if index < 0:
predicted_antecedents.append(-1)
else:
predicted_antecedents.append(antecedents[i, index])
return predicted_antecedents
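The dummy antecedent occupies column 0 of the score matrix, so argmax - 1 maps "dummy wins" to -1 (no antecedent). A toy NumPy illustration of that decoding:

```python
import numpy as np

# Column 0 is the dummy antecedent score.
antecedent_scores = np.array([[0.0, -1.0, -2.0],   # dummy best: no antecedent
                              [0.0,  3.0,  1.0]])  # candidate 0 best
antecedents = np.array([[0, 1], [0, 1]])           # toy candidate indices

predicted = []
for i, index in enumerate(np.argmax(antecedent_scores, axis=1) - 1):
    predicted.append(-1 if index < 0 else int(antecedents[i, index]))
```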
def get_predicted_clusters(self, top_span_starts, top_span_ends, predicted_antecedents):
mention_to_predicted = {}
predicted_clusters = []
for i, predicted_index in enumerate(predicted_antecedents):
if predicted_index < 0:
continue
assert i > predicted_index
predicted_antecedent = (int(top_span_starts[predicted_index]), int(top_span_ends[predicted_index]))
if predicted_antecedent in mention_to_predicted:
predicted_cluster = mention_to_predicted[predicted_antecedent]
else:
predicted_cluster = len(predicted_clusters)
predicted_clusters.append([predicted_antecedent])
mention_to_predicted[predicted_antecedent] = predicted_cluster
mention = (int(top_span_starts[i]), int(top_span_ends[i]))
predicted_clusters[predicted_cluster].append(mention)
mention_to_predicted[mention] = predicted_cluster
predicted_clusters = [tuple(pc) for pc in predicted_clusters]
mention_to_predicted = { m:predicted_clusters[i] for m,i in mention_to_predicted.items() }
return predicted_clusters, mention_to_predicted
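get_predicted_clusters greedily attaches each span to its predicted antecedent's cluster, creating a new cluster when the antecedent has not been seen yet. A standalone sketch of the same procedure on hypothetical spans:

```python
def build_clusters(spans, predicted_antecedents):
    # -1 means "no antecedent" (the dummy), which starts nothing.
    mention_to_cluster = {}
    clusters = []
    for i, ant in enumerate(predicted_antecedents):
        if ant < 0:
            continue
        antecedent = spans[ant]
        if antecedent in mention_to_cluster:
            cid = mention_to_cluster[antecedent]
        else:
            cid = len(clusters)
            clusters.append([antecedent])
            mention_to_cluster[antecedent] = cid
        clusters[cid].append(spans[i])
        mention_to_cluster[spans[i]] = cid
    return clusters

spans = [(0, 0), (2, 3), (5, 5), (7, 8)]
# Span 2 links back to span 0; span 3 links back to span 2; span 1 is alone.
clusters = build_clusters(spans, [-1, -1, 0, 2])
```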
def evaluate_coref(self, top_span_starts, top_span_ends, predicted_antecedents, gold_clusters, evaluator):
gold_clusters = [tuple(tuple(m) for m in gc) for gc in gold_clusters]
mention_to_gold = {}
for gc in gold_clusters:
for mention in gc:
mention_to_gold[mention] = gc
predicted_clusters, mention_to_predicted = self.get_predicted_clusters(top_span_starts, top_span_ends, predicted_antecedents)
evaluator.update(predicted_clusters, gold_clusters, mention_to_predicted, mention_to_gold)
return predicted_clusters
def load_eval_data(self):
if self.eval_data is None:
def load_line(line):
example = json.loads(line)
return self.tensorize_example(example, is_training=False), example
with open(self.config["eval_path"]) as f:
self.eval_data = [load_line(l) for l in f.readlines()]
      num_words = sum(tensorized_example[5].sum() for tensorized_example, _ in self.eval_data)  # index 5 is text_len
      print("Loaded {} eval examples ({} words).".format(len(self.eval_data), num_words))
def evaluate(self, session, official_stdout=False):
self.load_eval_data()
coref_predictions = {}
coref_evaluator = metrics.CorefEvaluator()
for example_num, (tensorized_example, example) in enumerate(self.eval_data):
_, _, _, _, _, _, _, _, _, gold_starts, gold_ends, _ = tensorized_example
feed_dict = {i:t for i,t in zip(self.input_tensors, tensorized_example)}
candidate_starts, candidate_ends, candidate_mention_scores, top_span_starts, top_span_ends, top_antecedents, top_antecedent_scores = session.run(self.predictions, feed_dict=feed_dict)
predicted_antecedents = self.get_predicted_antecedents(top_antecedents, top_antecedent_scores)
coref_predictions[example["doc_key"]] = self.evaluate_coref(top_span_starts, top_span_ends, predicted_antecedents, example["clusters"], coref_evaluator)
if example_num % 10 == 0:
print("Evaluated {}/{} examples.".format(example_num + 1, len(self.eval_data)))
summary_dict = {}
conll_results = conll.evaluate_conll(self.config["conll_eval_path"], coref_predictions, official_stdout)
average_f1 = sum(results["f"] for results in conll_results.values()) / len(conll_results)
summary_dict["Average F1 (conll)"] = average_f1
print("Average F1 (conll): {:.2f}%".format(average_f1))
p,r,f = coref_evaluator.get_prf()
summary_dict["Average F1 (py)"] = f
print("Average F1 (py): {:.2f}%".format(f * 100))
summary_dict["Average precision (py)"] = p
print("Average precision (py): {:.2f}%".format(p * 100))
summary_dict["Average recall (py)"] = r
print("Average recall (py): {:.2f}%".format(r * 100))
return util.make_summary(summary_dict), average_f1
================================================
FILE: coref_ops.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.python import pywrap_tensorflow
coref_op_library = tf.load_op_library("./coref_kernels.so")
extract_spans = coref_op_library.extract_spans
tf.NotDifferentiable("ExtractSpans")
================================================
FILE: demo.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from six.moves import input
import tensorflow as tf
import coref_model as cm
import util
import nltk
nltk.download("punkt")
from nltk.tokenize import sent_tokenize, word_tokenize
def create_example(text):
raw_sentences = sent_tokenize(text)
sentences = [word_tokenize(s) for s in raw_sentences]
speakers = [["" for _ in sentence] for sentence in sentences]
return {
"doc_key": "nw",
"clusters": [],
"sentences": sentences,
"speakers": speakers,
}
def print_predictions(example):
words = util.flatten(example["sentences"])
for cluster in example["predicted_clusters"]:
print(u"Predicted cluster: {}".format([" ".join(words[m[0]:m[1]+1]) for m in cluster]))
def make_predictions(text, model):
example = create_example(text)
tensorized_example = model.tensorize_example(example, is_training=False)
feed_dict = {i:t for i,t in zip(model.input_tensors, tensorized_example)}
_, _, _, mention_starts, mention_ends, antecedents, antecedent_scores, head_scores = session.run(model.predictions + [model.head_scores], feed_dict=feed_dict)
predicted_antecedents = model.get_predicted_antecedents(antecedents, antecedent_scores)
example["predicted_clusters"], _ = model.get_predicted_clusters(mention_starts, mention_ends, predicted_antecedents)
  example["top_spans"] = list(zip((int(i) for i in mention_starts), (int(i) for i in mention_ends)))  # materialize so the result survives reuse under Python 3
example["head_scores"] = head_scores.tolist()
return example
if __name__ == "__main__":
config = util.initialize_from_env()
model = cm.CorefModel(config)
with tf.Session() as session:
model.restore(session)
while True:
text = input("Document text: ")
if len(text) > 0:
print_predictions(make_predictions(text, model))
================================================
FILE: evaluate.py
================================================
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow as tf
import coref_model as cm
import util
if __name__ == "__main__":
config = util.initialize_from_env()
model = cm.CorefModel(config)
with tf.Session() as session:
model.restore(session)
model.evaluate(session, official_stdout=True)
================================================
FILE: experiments.conf
================================================
# Word embeddings.
glove_300d {
path = glove.840B.300d.txt
size = 300
}
glove_300d_filtered {
path = glove.840B.300d.txt.filtered
size = 300
}
glove_300d_2w {
path = glove_50_300_2.txt
size = 300
}
# Distributed training configurations.
two_local_gpus {
addresses {
ps = [localhost:2222]
worker = [localhost:2223, localhost:2224]
}
gpus = [0, 1]
}
# Main configuration.
best {
# Computation limits.
max_top_antecedents = 50
max_training_sentences = 50
top_span_ratio = 0.4
# Model hyperparameters.
filter_widths = [3, 4, 5]
filter_size = 50
char_embedding_size = 8
char_vocab_path = "char_vocab.english.txt"
context_embeddings = ${glove_300d_filtered}
head_embeddings = ${glove_300d_2w}
contextualization_size = 200
contextualization_layers = 3
ffnn_size = 150
ffnn_depth = 2
feature_size = 20
max_span_width = 30
use_metadata = true
use_features = true
model_heads = true
coref_depth = 2
lm_layers = 3
lm_size = 1024
coarse_to_fine = true
# Learning hyperparameters.
max_gradient_norm = 5.0
lstm_dropout_rate = 0.4
lexical_dropout_rate = 0.5
dropout_rate = 0.2
optimizer = adam
learning_rate = 0.001
decay_rate = 0.999
decay_frequency = 100
# Other.
train_path = train.english.jsonlines
eval_path = dev.english.jsonlines
conll_eval_path = dev.english.v4_gold_conll
lm_path = elmo_cache.hdf5
genres = ["bc", "bn", "mz", "nw", "pt", "tc", "wb"]
eval_frequency = 5000
report_frequency = 100
log_root = logs
cluster = ${two_local_gpus}
}
# For evaluation only (predict.py, evaluate.py, and demo.py); do not use for training. Rename the `best` log directory to `final` before using it.
final = ${best} {
context_embeddings = ${glove_300d}
head_embeddings = ${glove_300d_2w}
lm_path = ""
eval_path = test.english.jsonlines
conll_eval_path = test.english.v4_gold_conll
}
# Baselines.
c2f_100_ant = ${best} {
max_top_antecedents = 100
}
c2f_250_ant = ${best} {
max_top_antecedents = 250
}
c2f_1_layer = ${best} {
coref_depth = 1
}
c2f_3_layer = ${best} {
coref_depth = 3
}
distance_50_ant = ${best} {
max_top_antecedents = 50
coarse_to_fine = false
coref_depth = 1
}
distance_100_ant = ${distance_50_ant} {
max_top_antecedents = 100
}
distance_250_ant = ${distance_50_ant} {
max_top_antecedents = 250
}
================================================
FILE: filter_embeddings.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import json
if __name__ == "__main__":
if len(sys.argv) < 3:
sys.exit("Usage: {} <embeddings> <json1> <json2> ...".format(sys.argv[0]))
words_to_keep = set()
for json_filename in sys.argv[2:]:
with open(json_filename) as json_file:
for line in json_file.readlines():
for sentence in json.loads(line)["sentences"]:
words_to_keep.update(sentence)
print("Found {} words in {} dataset(s).".format(len(words_to_keep), len(sys.argv) - 2))
total_lines = 0
kept_lines = 0
out_filename = "{}.filtered".format(sys.argv[1])
with open(sys.argv[1]) as in_file:
with open(out_filename, "w") as out_file:
for line in in_file.readlines():
total_lines += 1
word = line.split()[0]
if word in words_to_keep:
kept_lines += 1
out_file.write(line)
print("Kept {} out of {} lines.".format(kept_lines, total_lines))
print("Wrote result to {}.".format(out_filename))
================================================
FILE: get_char_vocab.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import json
import io
def get_char_vocab(input_filenames, output_filename):
vocab = set()
for filename in input_filenames:
with open(filename) as f:
for line in f.readlines():
for sentence in json.loads(line)["sentences"]:
for word in sentence:
vocab.update(word)
vocab = sorted(list(vocab))
with io.open(output_filename, mode="w", encoding="utf8") as f:
for char in vocab:
f.write(char)
f.write(u"\n")
print("Wrote {} characters to {}".format(len(vocab), output_filename))
def get_char_vocab_language(language):
get_char_vocab(["{}.{}.jsonlines".format(partition, language) for partition in ("train", "dev", "test")], "char_vocab.{}.txt".format(language))
get_char_vocab_language("english")
get_char_vocab_language("chinese")
get_char_vocab_language("arabic")
================================================
FILE: metrics.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from collections import Counter
# sklearn.utils.linear_assignment_ was removed in scikit-learn 0.23; fall back
# to scipy's equivalent Hungarian solver, matching the old return format.
try:
  from sklearn.utils.linear_assignment_ import linear_assignment
except ImportError:
  from scipy.optimize import linear_sum_assignment
  def linear_assignment(cost_matrix):
    rows, cols = linear_sum_assignment(cost_matrix)
    return np.column_stack((rows, cols))
def f1(p_num, p_den, r_num, r_den, beta=1):
p = 0 if p_den == 0 else p_num / float(p_den)
r = 0 if r_den == 0 else r_num / float(r_den)
return 0 if p + r == 0 else (1 + beta * beta) * p * r / (beta * beta * p + r)
class CorefEvaluator(object):
def __init__(self):
self.evaluators = [Evaluator(m) for m in (muc, b_cubed, ceafe)]
def update(self, predicted, gold, mention_to_predicted, mention_to_gold):
for e in self.evaluators:
e.update(predicted, gold, mention_to_predicted, mention_to_gold)
def get_f1(self):
return sum(e.get_f1() for e in self.evaluators) / len(self.evaluators)
def get_recall(self):
return sum(e.get_recall() for e in self.evaluators) / len(self.evaluators)
def get_precision(self):
return sum(e.get_precision() for e in self.evaluators) / len(self.evaluators)
def get_prf(self):
return self.get_precision(), self.get_recall(), self.get_f1()
class Evaluator(object):
def __init__(self, metric, beta=1):
self.p_num = 0
self.p_den = 0
self.r_num = 0
self.r_den = 0
self.metric = metric
self.beta = beta
def update(self, predicted, gold, mention_to_predicted, mention_to_gold):
if self.metric == ceafe:
pn, pd, rn, rd = self.metric(predicted, gold)
else:
pn, pd = self.metric(predicted, mention_to_gold)
rn, rd = self.metric(gold, mention_to_predicted)
self.p_num += pn
self.p_den += pd
self.r_num += rn
self.r_den += rd
def get_f1(self):
return f1(self.p_num, self.p_den, self.r_num, self.r_den, beta=self.beta)
def get_recall(self):
return 0 if self.r_num == 0 else self.r_num / float(self.r_den)
def get_precision(self):
return 0 if self.p_num == 0 else self.p_num / float(self.p_den)
def get_prf(self):
return self.get_precision(), self.get_recall(), self.get_f1()
def get_counts(self):
return self.p_num, self.p_den, self.r_num, self.r_den
def evaluate_documents(documents, metric, beta=1):
evaluator = Evaluator(metric, beta=beta)
for document in documents:
    evaluator.update(*document)  # each document is a (predicted, gold, mention_to_predicted, mention_to_gold) tuple
return evaluator.get_precision(), evaluator.get_recall(), evaluator.get_f1()
def b_cubed(clusters, mention_to_gold):
num, dem = 0, 0
for c in clusters:
if len(c) == 1:
continue
gold_counts = Counter()
correct = 0
for m in c:
if m in mention_to_gold:
gold_counts[tuple(mention_to_gold[m])] += 1
for c2, count in gold_counts.items():
if len(c2) != 1:
correct += count * count
num += correct / float(len(c))
dem += len(c)
return num, dem
def muc(clusters, mention_to_gold):
tp, p = 0, 0
for c in clusters:
p += len(c) - 1
tp += len(c)
linked = set()
for m in c:
if m in mention_to_gold:
linked.add(mention_to_gold[m])
else:
tp -= 1
tp -= len(linked)
return tp, p
def phi4(c1, c2):
return 2 * len([m for m in c1 if m in c2]) / float(len(c1) + len(c2))
def ceafe(clusters, gold_clusters):
clusters = [c for c in clusters if len(c) != 1]
scores = np.zeros((len(gold_clusters), len(clusters)))
for i in range(len(gold_clusters)):
for j in range(len(clusters)):
scores[i, j] = phi4(gold_clusters[i], clusters[j])
matching = linear_assignment(-scores)
similarity = sum(scores[matching[:, 0], matching[:, 1]])
return similarity, len(clusters), similarity, len(gold_clusters)
def lea(clusters, mention_to_gold):
num, dem = 0, 0
for c in clusters:
if len(c) == 1:
continue
common_links = 0
all_links = len(c) * (len(c) - 1) / 2.0
for i, m in enumerate(c):
if m in mention_to_gold:
for m2 in c[i + 1:]:
if m2 in mention_to_gold and mention_to_gold[m] == mention_to_gold[m2]:
common_links += 1
num += len(c) * common_links / float(all_links)
dem += len(c)
return num, dem
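To make the update logic concrete, here is a toy worked example of the MUC precision/recall counts. The `muc` and `f1` helpers are restated from this file; the cluster data is invented, modeling a system that splits one gold cluster and adds a spurious mention:

```python
def muc(clusters, mention_to_gold):
    # Restated from metrics.muc: links found minus partitions needed.
    tp, p = 0, 0
    for c in clusters:
        p += len(c) - 1
        tp += len(c)
        linked = set()
        for m in c:
            if m in mention_to_gold:
                linked.add(mention_to_gold[m])
            else:
                tp -= 1
        tp -= len(linked)
    return tp, p

def f1(p_num, p_den, r_num, r_den):
    # Restated from metrics.f1 with beta = 1.
    p = 0 if p_den == 0 else p_num / float(p_den)
    r = 0 if r_den == 0 else r_num / float(r_den)
    return 0 if p + r == 0 else 2 * p * r / (p + r)

# One gold cluster of three mentions; the system splits it in two and
# hallucinates mention (3, 3). Clusters are tuples so they are hashable.
gold = (((0, 0), (1, 1), (2, 2)),)
predicted = (((0, 0), (1, 1)), ((2, 2), (3, 3)))
mention_to_gold = {m: c for c in gold for m in c}
mention_to_predicted = {m: c for c in predicted for m in c}

p_num, p_den = muc(predicted, mention_to_gold)   # (1, 2)
r_num, r_den = muc(gold, mention_to_predicted)   # (1, 2)
score = f1(p_num, p_den, r_num, r_den)           # 0.5
```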
================================================
FILE: minimize.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
import os
import sys
import json
import tempfile
import subprocess
import collections
import util
import conll
class DocumentState(object):
def __init__(self):
self.doc_key = None
self.text = []
self.text_speakers = []
self.speakers = []
self.sentences = []
self.constituents = {}
self.const_stack = []
self.ner = {}
self.ner_stack = []
self.clusters = collections.defaultdict(list)
self.coref_stacks = collections.defaultdict(list)
def assert_empty(self):
assert self.doc_key is None
assert len(self.text) == 0
assert len(self.text_speakers) == 0
assert len(self.speakers) == 0
assert len(self.sentences) == 0
assert len(self.constituents) == 0
assert len(self.const_stack) == 0
assert len(self.ner) == 0
assert len(self.ner_stack) == 0
assert len(self.coref_stacks) == 0
assert len(self.clusters) == 0
def assert_finalizable(self):
assert self.doc_key is not None
assert len(self.text) == 0
assert len(self.text_speakers) == 0
assert len(self.speakers) > 0
assert len(self.sentences) > 0
assert len(self.constituents) > 0
assert len(self.const_stack) == 0
assert len(self.ner_stack) == 0
assert all(len(s) == 0 for s in self.coref_stacks.values())
def span_dict_to_list(self, span_dict):
return [(s,e,l) for (s,e),l in span_dict.items()]
def finalize(self):
merged_clusters = []
for c1 in self.clusters.values():
existing = None
for m in c1:
for c2 in merged_clusters:
if m in c2:
existing = c2
break
if existing is not None:
break
if existing is not None:
print("Merging clusters (shouldn't happen very often.)")
existing.update(c1)
else:
merged_clusters.append(set(c1))
merged_clusters = [list(c) for c in merged_clusters]
all_mentions = util.flatten(merged_clusters)
assert len(all_mentions) == len(set(all_mentions))
return {
"doc_key": self.doc_key,
"sentences": self.sentences,
"speakers": self.speakers,
"constituents": self.span_dict_to_list(self.constituents),
"ner": self.span_dict_to_list(self.ner),
"clusters": merged_clusters
}
def normalize_word(word, language):
  if language == "arabic":
    # Assumes every Arabic token carries a "#"-delimited suffix; note that
    # find() returns -1 when "#" is absent, which would drop the last character.
    word = word[:word.find("#")]
  if word == "/." or word == "/?":
    return word[1:]
  else:
    return word
def handle_bit(word_index, bit, stack, spans):
asterisk_idx = bit.find("*")
if asterisk_idx >= 0:
open_parens = bit[:asterisk_idx]
close_parens = bit[asterisk_idx + 1:]
else:
open_parens = bit[:-1]
close_parens = bit[-1]
current_idx = open_parens.find("(")
while current_idx >= 0:
next_idx = open_parens.find("(", current_idx + 1)
if next_idx >= 0:
label = open_parens[current_idx + 1:next_idx]
else:
label = open_parens[current_idx + 1:]
stack.append((word_index, label))
current_idx = next_idx
for c in close_parens:
assert c == ")"
open_index, label = stack.pop()
current_span = (open_index, word_index)
"""
if current_span in spans:
spans[current_span] += "_" + label
else:
spans[current_span] = label
"""
spans[current_span] = label
def handle_line(line, document_state, language, labels, stats):
begin_document_match = re.match(conll.BEGIN_DOCUMENT_REGEX, line)
if begin_document_match:
document_state.assert_empty()
document_state.doc_key = conll.get_doc_key(begin_document_match.group(1), begin_document_match.group(2))
return None
elif line.startswith("#end document"):
document_state.assert_finalizable()
finalized_state = document_state.finalize()
stats["num_clusters"] += len(finalized_state["clusters"])
stats["num_mentions"] += sum(len(c) for c in finalized_state["clusters"])
labels["{}_const_labels".format(language)].update(l for _, _, l in finalized_state["constituents"])
labels["ner"].update(l for _, _, l in finalized_state["ner"])
return finalized_state
else:
row = line.split()
if len(row) == 0:
stats["max_sent_len_{}".format(language)] = max(len(document_state.text), stats["max_sent_len_{}".format(language)])
stats["num_sents_{}".format(language)] += 1
document_state.sentences.append(tuple(document_state.text))
del document_state.text[:]
document_state.speakers.append(tuple(document_state.text_speakers))
del document_state.text_speakers[:]
return None
assert len(row) >= 12
doc_key = conll.get_doc_key(row[0], row[1])
word = normalize_word(row[3], language)
parse = row[5]
speaker = row[9]
ner = row[10]
coref = row[-1]
word_index = len(document_state.text) + sum(len(s) for s in document_state.sentences)
document_state.text.append(word)
document_state.text_speakers.append(speaker)
handle_bit(word_index, parse, document_state.const_stack, document_state.constituents)
handle_bit(word_index, ner, document_state.ner_stack, document_state.ner)
if coref != "-":
for segment in coref.split("|"):
if segment[0] == "(":
if segment[-1] == ")":
cluster_id = int(segment[1:-1])
document_state.clusters[cluster_id].append((word_index, word_index))
else:
cluster_id = int(segment[1:])
document_state.coref_stacks[cluster_id].append(word_index)
else:
cluster_id = int(segment[:-1])
start = document_state.coref_stacks[cluster_id].pop()
document_state.clusters[cluster_id].append((start, word_index))
return None
def minimize_partition(name, language, extension, labels, stats):
input_path = "{}.{}.{}".format(name, language, extension)
output_path = "{}.{}.jsonlines".format(name, language)
count = 0
print("Minimizing {}".format(input_path))
with open(input_path, "r") as input_file:
with open(output_path, "w") as output_file:
document_state = DocumentState()
for line in input_file.readlines():
document = handle_line(line, document_state, language, labels, stats)
if document is not None:
output_file.write(json.dumps(document))
output_file.write("\n")
count += 1
document_state = DocumentState()
print("Wrote {} documents to {}".format(count, output_path))
def minimize_language(language, labels, stats):
minimize_partition("dev", language, "v4_gold_conll", labels, stats)
minimize_partition("train", language, "v4_gold_conll", labels, stats)
minimize_partition("test", language, "v4_gold_conll", labels, stats)
if __name__ == "__main__":
labels = collections.defaultdict(set)
stats = collections.defaultdict(int)
minimize_language("english", labels, stats)
minimize_language("chinese", labels, stats)
minimize_language("arabic", labels, stats)
for k, v in labels.items():
print("{} = [{}]".format(k, ", ".join("\"{}\"".format(label) for label in v)))
for k, v in stats.items():
print("{} = {}".format(k, v))
================================================
FILE: predict.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import json
import tensorflow as tf
import coref_model as cm
import util
if __name__ == "__main__":
config = util.initialize_from_env()
# Input file in .jsonlines format.
input_filename = sys.argv[2]
# Predictions will be written to this file in .jsonlines format.
output_filename = sys.argv[3]
model = cm.CorefModel(config)
with tf.Session() as session:
model.restore(session)
with open(output_filename, "w") as output_file:
with open(input_filename) as input_file:
for example_num, line in enumerate(input_file.readlines()):
example = json.loads(line)
tensorized_example = model.tensorize_example(example, is_training=False)
feed_dict = {i:t for i,t in zip(model.input_tensors, tensorized_example)}
_, _, _, top_span_starts, top_span_ends, top_antecedents, top_antecedent_scores = session.run(model.predictions, feed_dict=feed_dict)
predicted_antecedents = model.get_predicted_antecedents(top_antecedents, top_antecedent_scores)
example["predicted_clusters"], _ = model.get_predicted_clusters(top_span_starts, top_span_ends, predicted_antecedents)
output_file.write(json.dumps(example))
output_file.write("\n")
if example_num % 100 == 0:
print("Decoded {} examples.".format(example_num + 1))
================================================
FILE: ps.py
================================================
#!/usr/bin/env python
import os
import tensorflow as tf
import util
if __name__ == "__main__":
config = util.initialize_from_env()
report_frequency = config["report_frequency"]
cluster_config = config["cluster"]
util.set_gpus()
cluster = tf.train.ClusterSpec(cluster_config["addresses"])
server = tf.train.Server(cluster, job_name="ps", task_index=0)
server.join()
================================================
FILE: requirements.txt
================================================
tensorflow-gpu>=1.13.1
tensorflow-hub>=0.4.0
h5py
nltk
pyhocon
scipy
scikit-learn
================================================
FILE: setup_all.sh
================================================
#!/bin/bash
# Download pretrained embeddings.
curl -O http://downloads.cs.stanford.edu/nlp/data/glove.840B.300d.zip
unzip glove.840B.300d.zip
rm glove.840B.300d.zip
# Build custom kernels.
TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
# Linux (pip)
g++ -std=c++11 -shared coref_kernels.cc -o coref_kernels.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2 -D_GLIBCXX_USE_CXX11_ABI=0
# Linux (build from source)
#g++ -std=c++11 -shared coref_kernels.cc -o coref_kernels.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2
# Mac
#g++ -std=c++11 -shared coref_kernels.cc -o coref_kernels.so -I -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2 -D_GLIBCXX_USE_CXX11_ABI=0 -undefined dynamic_lookup
================================================
FILE: setup_training.sh
================================================
#!/bin/bash
dlx() {
wget $1/$2
tar -xvzf $2
rm $2
}
conll_url=http://conll.cemantix.org/2012/download
dlx $conll_url conll-2012-train.v4.tar.gz
dlx $conll_url conll-2012-development.v4.tar.gz
dlx $conll_url/test conll-2012-test-key.tar.gz
dlx $conll_url/test conll-2012-test-official.v9.tar.gz
dlx $conll_url conll-2012-scripts.v3.tar.gz
dlx http://conll.cemantix.org/download reference-coreference-scorers.v8.01.tar.gz
mv reference-coreference-scorers conll-2012/scorer
ontonotes_path=/projects/WebWare6/ontonotes-release-5.0
bash conll-2012/v3/scripts/skeleton2conll.sh -D $ontonotes_path/data/files/data conll-2012
function compile_partition() {
rm -f $2.$5.$3$4
cat conll-2012/$3/data/$1/data/$5/annotations/*/*/*/*.$3$4 >> $2.$5.$3$4
}
function compile_language() {
compile_partition development dev v4 _gold_conll $1
compile_partition train train v4 _gold_conll $1
compile_partition test test v4 _gold_conll $1
}
compile_language english
compile_language chinese
compile_language arabic
python minimize.py
python get_char_vocab.py
python filter_embeddings.py glove.840B.300d.txt train.english.jsonlines dev.english.jsonlines
python cache_elmo.py train.english.jsonlines dev.english.jsonlines
================================================
FILE: train.py
================================================
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import tensorflow as tf
import coref_model as cm
import util
if __name__ == "__main__":
config = util.initialize_from_env()
report_frequency = config["report_frequency"]
eval_frequency = config["eval_frequency"]
model = cm.CorefModel(config)
saver = tf.train.Saver()
log_dir = config["log_dir"]
writer = tf.summary.FileWriter(log_dir, flush_secs=20)
max_f1 = 0
with tf.Session() as session:
session.run(tf.global_variables_initializer())
model.start_enqueue_thread(session)
accumulated_loss = 0.0
ckpt = tf.train.get_checkpoint_state(log_dir)
if ckpt and ckpt.model_checkpoint_path:
print("Restoring from: {}".format(ckpt.model_checkpoint_path))
saver.restore(session, ckpt.model_checkpoint_path)
initial_time = time.time()
while True:
tf_loss, tf_global_step, _ = session.run([model.loss, model.global_step, model.train_op])
accumulated_loss += tf_loss
if tf_global_step % report_frequency == 0:
total_time = time.time() - initial_time
steps_per_second = tf_global_step / total_time
average_loss = accumulated_loss / report_frequency
print("[{}] loss={:.2f}, steps/s={:.2f}".format(tf_global_step, average_loss, steps_per_second))
writer.add_summary(util.make_summary({"loss": average_loss}), tf_global_step)
accumulated_loss = 0.0
if tf_global_step % eval_frequency == 0:
saver.save(session, os.path.join(log_dir, "model"), global_step=tf_global_step)
eval_summary, eval_f1 = model.evaluate(session)
if eval_f1 > max_f1:
max_f1 = eval_f1
util.copy_checkpoint(os.path.join(log_dir, "model-{}".format(tf_global_step)), os.path.join(log_dir, "model.max.ckpt"))
writer.add_summary(eval_summary, tf_global_step)
writer.add_summary(util.make_summary({"max_eval_f1": max_f1}), tf_global_step)
print("[{}] evaL_f1={:.2f}, max_f1={:.2f}".format(tf_global_step, eval_f1, max_f1))
================================================
FILE: util.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import errno
import codecs
import collections
import json
import math
import shutil
import sys
import numpy as np
import tensorflow as tf
import pyhocon
def initialize_from_env():
if "GPU" in os.environ:
set_gpus(int(os.environ["GPU"]))
else:
set_gpus()
name = sys.argv[1]
print("Running experiment: {}".format(name))
config = pyhocon.ConfigFactory.parse_file("experiments.conf")[name]
config["log_dir"] = mkdirs(os.path.join(config["log_root"], name))
print(pyhocon.HOCONConverter.convert(config, "hocon"))
return config
def copy_checkpoint(source, target):
for ext in (".index", ".data-00000-of-00001"):
shutil.copyfile(source + ext, target + ext)
def make_summary(value_dict):
return tf.Summary(value=[tf.Summary.Value(tag=k, simple_value=v) for k,v in value_dict.items()])
def flatten(l):
return [item for sublist in l for item in sublist]
def set_gpus(*gpus):
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(g) for g in gpus)
print("Setting CUDA_VISIBLE_DEVICES to: {}".format(os.environ["CUDA_VISIBLE_DEVICES"]))
def mkdirs(path):
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
return path
def load_char_dict(char_vocab_path):
vocab = [u"<unk>"]
with codecs.open(char_vocab_path, encoding="utf-8") as f:
vocab.extend(l.strip() for l in f.readlines())
char_dict = collections.defaultdict(int)
char_dict.update({c:i for i, c in enumerate(vocab)})
return char_dict
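As a quick illustration (the file path and vocabulary are invented), the mapping reserves index 0 for unknown characters:

```python
import codecs
import collections
import os
import tempfile

def load_char_dict(char_vocab_path):
    # Mirrors util.load_char_dict: index 0 is reserved for <unk>, and the
    # defaultdict sends any unseen character to that index.
    vocab = [u"<unk>"]
    with codecs.open(char_vocab_path, encoding="utf-8") as f:
        vocab.extend(l.strip() for l in f.readlines())
    char_dict = collections.defaultdict(int)
    char_dict.update({c: i for i, c in enumerate(vocab)})
    return char_dict

# Hypothetical three-character vocabulary file.
path = os.path.join(tempfile.mkdtemp(), "char_vocab.toy.txt")
with codecs.open(path, "w", encoding="utf-8") as f:
    f.write(u"a\nb\nc\n")

char_dict = load_char_dict(path)
# Known characters get 1-based slots; anything unseen maps to 0 (<unk>).
```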
def maybe_divide(x, y):
return 0 if y == 0 else x / float(y)
def projection(inputs, output_size, initializer=None):
return ffnn(inputs, 0, -1, output_size, dropout=None, output_weights_initializer=initializer)
def highway(inputs, num_layers, dropout):
for i in range(num_layers):
with tf.variable_scope("highway_{}".format(i)):
j, f = tf.split(projection(inputs, 2 * shape(inputs, -1)), 2, -1)
f = tf.sigmoid(f)
j = tf.nn.relu(j)
if dropout is not None:
j = tf.nn.dropout(j, dropout)
inputs = f * j + (1 - f) * inputs
return inputs
def shape(x, dim):
return x.get_shape()[dim].value or tf.shape(x)[dim]
def ffnn(inputs, num_hidden_layers, hidden_size, output_size, dropout, output_weights_initializer=None):
if len(inputs.get_shape()) > 3:
raise ValueError("FFNN with rank {} not supported".format(len(inputs.get_shape())))
if len(inputs.get_shape()) == 3:
batch_size = shape(inputs, 0)
seqlen = shape(inputs, 1)
emb_size = shape(inputs, 2)
current_inputs = tf.reshape(inputs, [batch_size * seqlen, emb_size])
else:
current_inputs = inputs
for i in range(num_hidden_layers):
hidden_weights = tf.get_variable("hidden_weights_{}".format(i), [shape(current_inputs, 1), hidden_size])
hidden_bias = tf.get_variable("hidden_bias_{}".format(i), [hidden_size])
current_outputs = tf.nn.relu(tf.nn.xw_plus_b(current_inputs, hidden_weights, hidden_bias))
if dropout is not None:
current_outputs = tf.nn.dropout(current_outputs, dropout)
current_inputs = current_outputs
output_weights = tf.get_variable("output_weights", [shape(current_inputs, 1), output_size], initializer=output_weights_initializer)
output_bias = tf.get_variable("output_bias", [output_size])
outputs = tf.nn.xw_plus_b(current_inputs, output_weights, output_bias)
if len(inputs.get_shape()) == 3:
outputs = tf.reshape(outputs, [batch_size, seqlen, output_size])
return outputs
def cnn(inputs, filter_sizes, num_filters):
num_words = shape(inputs, 0)
num_chars = shape(inputs, 1)
input_size = shape(inputs, 2)
outputs = []
for i, filter_size in enumerate(filter_sizes):
with tf.variable_scope("conv_{}".format(i)):
w = tf.get_variable("w", [filter_size, input_size, num_filters])
b = tf.get_variable("b", [num_filters])
conv = tf.nn.conv1d(inputs, w, stride=1, padding="VALID") # [num_words, num_chars - filter_size, num_filters]
h = tf.nn.relu(tf.nn.bias_add(conv, b)) # [num_words, num_chars - filter_size, num_filters]
pooled = tf.reduce_max(h, 1) # [num_words, num_filters]
outputs.append(pooled)
return tf.concat(outputs, 1) # [num_words, num_filters * len(filter_sizes)]
def batch_gather(emb, indices):
batch_size = shape(emb, 0)
seqlen = shape(emb, 1)
if len(emb.get_shape()) > 2:
emb_size = shape(emb, 2)
else:
emb_size = 1
flattened_emb = tf.reshape(emb, [batch_size * seqlen, emb_size]) # [batch_size * seqlen, emb]
offset = tf.expand_dims(tf.range(batch_size) * seqlen, 1) # [batch_size, 1]
gathered = tf.gather(flattened_emb, indices + offset) # [batch_size, num_indices, emb]
if len(emb.get_shape()) == 2:
gathered = tf.squeeze(gathered, 2) # [batch_size, num_indices]
return gathered
class RetrievalEvaluator(object):
def __init__(self):
self._num_correct = 0
self._num_gold = 0
self._num_predicted = 0
def update(self, gold_set, predicted_set):
self._num_correct += len(gold_set & predicted_set)
self._num_gold += len(gold_set)
self._num_predicted += len(predicted_set)
def recall(self):
return maybe_divide(self._num_correct, self._num_gold)
def precision(self):
return maybe_divide(self._num_correct, self._num_predicted)
def metrics(self):
recall = self.recall()
precision = self.precision()
f1 = maybe_divide(2 * recall * precision, precision + recall)
return recall, precision, f1
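`RetrievalEvaluator` is plain Python, so its behavior is easy to check in isolation. Here is a toy run with invented gold/predicted span sets (the class and `maybe_divide` are restated from this file):

```python
def maybe_divide(x, y):
    # Restated from util.maybe_divide.
    return 0 if y == 0 else x / float(y)

class RetrievalEvaluator(object):
    def __init__(self):
        self._num_correct = 0
        self._num_gold = 0
        self._num_predicted = 0
    def update(self, gold_set, predicted_set):
        # Set intersection counts the correctly retrieved items.
        self._num_correct += len(gold_set & predicted_set)
        self._num_gold += len(gold_set)
        self._num_predicted += len(predicted_set)
    def metrics(self):
        recall = maybe_divide(self._num_correct, self._num_gold)
        precision = maybe_divide(self._num_correct, self._num_predicted)
        f1 = maybe_divide(2 * recall * precision, precision + recall)
        return recall, precision, f1

# Two of three gold spans retrieved, plus one spurious span.
evaluator = RetrievalEvaluator()
evaluator.update({(0, 1), (2, 3), (5, 7)}, {(2, 3), (5, 7), (8, 8)})
recall, precision, f1 = evaluator.metrics()  # each 2/3
```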
class EmbeddingDictionary(object):
def __init__(self, info, normalize=True, maybe_cache=None):
self._size = info["size"]
self._normalize = normalize
self._path = info["path"]
if maybe_cache is not None and maybe_cache._path == self._path:
assert self._size == maybe_cache._size
self._embeddings = maybe_cache._embeddings
else:
self._embeddings = self.load_embedding_dict(self._path)
@property
def size(self):
return self._size
def load_embedding_dict(self, path):
print("Loading word embeddings from {}...".format(path))
default_embedding = np.zeros(self.size)
embedding_dict = collections.defaultdict(lambda:default_embedding)
if len(path) > 0:
vocab_size = None
with open(path) as f:
for i, line in enumerate(f.readlines()):
word_end = line.find(" ")
word = line[:word_end]
          embedding = np.array(line[word_end + 1:].split(), dtype=np.float32)  # np.fromstring with sep is deprecated
assert len(embedding) == self.size
embedding_dict[word] = embedding
if vocab_size is not None:
assert vocab_size == len(embedding_dict)
print("Done loading word embeddings.")
return embedding_dict
def __getitem__(self, key):
embedding = self._embeddings[key]
if self._normalize:
embedding = self.normalize(embedding)
return embedding
def normalize(self, v):
norm = np.linalg.norm(v)
if norm > 0:
return v / norm
else:
return v
class CustomLSTMCell(tf.contrib.rnn.RNNCell):
def __init__(self, num_units, batch_size, dropout):
self._num_units = num_units
self._dropout = dropout
self._dropout_mask = tf.nn.dropout(tf.ones([batch_size, self.output_size]), dropout)
self._initializer = self._block_orthonormal_initializer([self.output_size] * 3)
initial_cell_state = tf.get_variable("lstm_initial_cell_state", [1, self.output_size])
initial_hidden_state = tf.get_variable("lstm_initial_hidden_state", [1, self.output_size])
self._initial_state = tf.contrib.rnn.LSTMStateTuple(initial_cell_state, initial_hidden_state)
@property
def state_size(self):
return tf.contrib.rnn.LSTMStateTuple(self.output_size, self.output_size)
@property
def output_size(self):
return self._num_units
@property
def initial_state(self):
return self._initial_state
def __call__(self, inputs, state, scope=None):
"""Long short-term memory cell (LSTM)."""
with tf.variable_scope(scope or type(self).__name__): # "CustomLSTMCell"
c, h = state
h *= self._dropout_mask
concat = projection(tf.concat([inputs, h], 1), 3 * self.output_size, initializer=self._initializer)
i, j, o = tf.split(concat, num_or_size_splits=3, axis=1)
i = tf.sigmoid(i)
new_c = (1 - i) * c + i * tf.tanh(j)
new_h = tf.tanh(new_c) * tf.sigmoid(o)
new_state = tf.contrib.rnn.LSTMStateTuple(new_c, new_h)
return new_h, new_state
def _orthonormal_initializer(self, scale=1.0):
def _initializer(shape, dtype=tf.float32, partition_info=None):
M1 = np.random.randn(shape[0], shape[0]).astype(np.float32)
M2 = np.random.randn(shape[1], shape[1]).astype(np.float32)
Q1, R1 = np.linalg.qr(M1)
Q2, R2 = np.linalg.qr(M2)
Q1 = Q1 * np.sign(np.diag(R1))
Q2 = Q2 * np.sign(np.diag(R2))
n_min = min(shape[0], shape[1])
params = np.dot(Q1[:, :n_min], Q2[:n_min, :]) * scale
return params
return _initializer
def _block_orthonormal_initializer(self, output_sizes):
def _initializer(shape, dtype=np.float32, partition_info=None):
assert len(shape) == 2
assert sum(output_sizes) == shape[1]
initializer = self._orthonormal_initializer()
params = np.concatenate([initializer([shape[0], o], dtype, partition_info) for o in output_sizes], 1)
return params
return _initializer
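The block-orthonormal initializer is easiest to sanity-check outside TensorFlow. This NumPy-only restatement of `_orthonormal_initializer` confirms the column-orthonormality property the LSTM relies on:

```python
import numpy as np

def orthonormal(shape, scale=1.0):
    # NumPy restatement of CustomLSTMCell._orthonormal_initializer: take the
    # QR factors of two random square matrices, fix their signs, and combine.
    M1 = np.random.randn(shape[0], shape[0]).astype(np.float32)
    M2 = np.random.randn(shape[1], shape[1]).astype(np.float32)
    Q1, R1 = np.linalg.qr(M1)
    Q2, R2 = np.linalg.qr(M2)
    Q1 = Q1 * np.sign(np.diag(R1))  # make the factorization deterministic in sign
    Q2 = Q2 * np.sign(np.diag(R2))
    n_min = min(shape[0], shape[1])
    return np.dot(Q1[:, :n_min], Q2[:n_min, :]) * scale

np.random.seed(0)
W = orthonormal([5, 3])
# Columns are orthonormal: W^T W is (approximately) the 3x3 identity.
```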
================================================
FILE: worker.py
================================================
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import time
import tensorflow as tf
import coref_model as cm
import util
if __name__ == "__main__":
config = util.initialize_from_env()
task_index = int(os.environ["TASK"])
report_frequency = config["report_frequency"]
cluster_config = config["cluster"]
util.set_gpus(cluster_config["gpus"][task_index])
cluster = tf.train.ClusterSpec(cluster_config["addresses"])
server = tf.train.Server(cluster,
job_name="worker",
task_index=task_index)
# Assigns ops to the local worker by default.
with tf.device(tf.train.replica_device_setter(worker_device="/job:worker/task:%d" % task_index, cluster=cluster)):
model = cm.CorefModel(config)
saver = tf.train.Saver()
init_op = tf.global_variables_initializer()
log_dir = config["log_dir"]
writer = tf.summary.FileWriter(os.path.join(log_dir, "w{}".format(task_index)), flush_secs=20)
is_chief = (task_index == 0)
# Create a "supervisor", which oversees the training process.
sv = tf.train.Supervisor(is_chief=is_chief,
logdir=log_dir,
init_op=init_op,
saver=saver,
global_step=model.global_step,
save_model_secs=120)
# The supervisor takes care of session initialization, restoring from
# a checkpoint, and closing when done or an error occurs.
with sv.managed_session(server.target) as session:
model.start_enqueue_thread(session)
accumulated_loss = 0.0
initial_time = time.time()
while not sv.should_stop():
tf_loss, tf_global_step, _ = session.run([model.loss, model.global_step, model.train_op])
accumulated_loss += tf_loss
if tf_global_step % report_frequency == 0:
total_time = time.time() - initial_time
steps_per_second = tf_global_step / total_time
average_loss = accumulated_loss / report_frequency
print("[{}] loss={:.2f}, steps/s={:.2f}".format(tf_global_step, tf_loss, steps_per_second))
accumulated_loss = 0.0
writer.add_summary(util.make_summary({
"Train Loss": average_loss,
"Steps per second": steps_per_second
}))
# Ask for all the services to stop.
sv.stop()
SYMBOL INDEX (109 symbols across 10 files)
FILE: cache_elmo.py
function build_elmo (line 12) | def build_elmo():
function cache_dataset (line 25) | def cache_dataset(data_path, session, token_ph, len_ph, lm_emb, out_file):
FILE: conll.py
function get_doc_key (line 17) | def get_doc_key(doc_id, part):
function output_conll (line 20) | def output_conll(input_file, output_file, predictions):
function official_conll_eval (line 74) | def official_conll_eval(gold_path, predicted_path, metric, official_stdo...
function evaluate_conll (line 94) | def evaluate_conll(gold_path, predictions, official_stdout=False):
FILE: continuous_evaluate.py
function copy_checkpoint (line 15) | def copy_checkpoint(source, target):
FILE: coref_kernels.cc
class ExtractSpansOp (line 18) | class ExtractSpansOp : public OpKernel {
method ExtractSpansOp (line 20) | explicit ExtractSpansOp(OpKernelConstruction* context) : OpKernel(cont...
method Compute (line 24) | void Compute(OpKernelContext* context) override {
FILE: coref_model.py
class CorefModel (line 21) | class CorefModel(object):
method __init__ (line 22) | def __init__(self, config):
method start_enqueue_thread (line 73) | def start_enqueue_thread(self, session):
method restore (line 87) | def restore(self, session):
method load_lm_embeddings (line 96) | def load_lm_embeddings(self, doc_key):
method tensorize_mentions (line 108) | def tensorize_mentions(self, mentions):
method tensorize_span_labels (line 115) | def tensorize_span_labels(self, tuples, label_dict):
method tensorize_example (line 122) | def tensorize_example(self, example, is_training):
method truncate_example (line 170) | def truncate_example(self, tokens, context_word_emb, head_word_emb, lm...
method get_candidate_labels (line 193) | def get_candidate_labels(self, candidate_starts, candidate_ends, label...
method get_dropout (line 201) | def get_dropout(self, dropout_rate, is_training):
method coarse_to_fine_pruning (line 204) | def coarse_to_fine_pruning(self, top_span_emb, top_span_mention_scores...
method distance_pruning (line 219) | def distance_pruning(self, top_span_emb, top_span_mention_scores, c):
method get_predictions_and_loss (line 230) | def get_predictions_and_loss(self, tokens, context_word_emb, head_word...
method get_span_emb (line 352) | def get_span_emb(self, head_emb, context_outputs, span_starts, span_en...
method get_mention_scores (line 385) | def get_mention_scores(self, span_emb):
method softmax_loss (line 389) | def softmax_loss(self, antecedent_scores, antecedent_labels):
method bucket_distance (line 395) | def bucket_distance(self, distances):
method get_slow_antecedent_scores (line 405) | def get_slow_antecedent_scores(self, top_span_emb, top_antecedents, to...
method get_fast_antecedent_scores (line 439) | def get_fast_antecedent_scores(self, top_span_emb):
method flatten_emb_by_sentence (line 445) | def flatten_emb_by_sentence(self, emb, text_len_mask):
method lstm_contextualize (line 458) | def lstm_contextualize(self, text_emb, text_len, text_len_mask):
method get_predicted_antecedents (line 489) | def get_predicted_antecedents(self, antecedents, antecedent_scores):
method get_predicted_clusters (line 498) | def get_predicted_clusters(self, top_span_starts, top_span_ends, predi...
method evaluate_coref (line 522) | def evaluate_coref(self, top_span_starts, top_span_ends, predicted_ant...
method load_eval_data (line 533) | def load_eval_data(self):
method evaluate (line 543) | def evaluate(self, session, official_stdout=False):
FILE: demo.py
function create_example (line 14) | def create_example(text):
function print_predictions (line 25) | def print_predictions(example):
function make_predictions (line 30) | def make_predictions(text, model):
FILE: get_char_vocab.py
function get_char_vocab (line 9) | def get_char_vocab(input_filenames, output_filename):
function get_char_vocab_language (line 24) | def get_char_vocab_language(language):
FILE: metrics.py
function f1 (line 10) | def f1(p_num, p_den, r_num, r_den, beta=1):
class CorefEvaluator (line 15) | class CorefEvaluator(object):
method __init__ (line 16) | def __init__(self):
method update (line 19) | def update(self, predicted, gold, mention_to_predicted, mention_to_gold):
method get_f1 (line 23) | def get_f1(self):
method get_recall (line 26) | def get_recall(self):
method get_precision (line 29) | def get_precision(self):
method get_prf (line 32) | def get_prf(self):
class Evaluator (line 35) | class Evaluator(object):
method __init__ (line 36) | def __init__(self, metric, beta=1):
method update (line 44) | def update(self, predicted, gold, mention_to_predicted, mention_to_gold):
method get_f1 (line 55) | def get_f1(self):
method get_recall (line 58) | def get_recall(self):
method get_precision (line 61) | def get_precision(self):
method get_prf (line 64) | def get_prf(self):
method get_counts (line 67) | def get_counts(self):
function evaluate_documents (line 71) | def evaluate_documents(documents, metric, beta=1):
function b_cubed (line 78) | def b_cubed(clusters, mention_to_gold):
function muc (line 100) | def muc(clusters, mention_to_gold):
function phi4 (line 115) | def phi4(c1, c2):
function ceafe (line 119) | def ceafe(clusters, gold_clusters):
function lea (line 130) | def lea(clusters, mention_to_gold):
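The metrics.py index lists the standard link- and entity-based coreference metrics (MUC, B³, CEAFe, LEA). To make the shape of these functions concrete, here is a hedged sketch of the MUC metric and the `f1` combiner, matching the listed signatures: each predicted cluster contributes `|c| - 1` possible links, of which `|c| - |partitions(c)|` are correct, where `partitions(c)` splits the cluster by gold cluster and treats each unlinked mention as its own partition. This is the textbook formulation, not necessarily line-for-line the repository's code:

```python
def muc(clusters, mention_to_gold):
    # mention_to_gold maps each gold mention to (any hashable id of)
    # its gold cluster.  Returns (numerator, denominator) for one side
    # of the MUC precision/recall computation.
    true_p, all_p = 0, 0
    for c in clusters:
        all_p += len(c) - 1
        linked = set()
        unlinked = 0
        for m in c:
            if m in mention_to_gold:
                linked.add(mention_to_gold[m])
            else:
                unlinked += 1
        # |c| - |partitions(c)| correct links in this cluster.
        true_p += len(c) - len(linked) - unlinked
    return true_p, all_p

def f1(p_num, p_den, r_num, r_den, beta=1):
    # Combine precision and recall fractions into F-beta.
    p = 0 if p_den == 0 else p_num / float(p_den)
    r = 0 if r_den == 0 else r_num / float(r_den)
    return 0 if p + r == 0 else (1 + beta * beta) * p * r / (beta * beta * p + r)
```

Swapping the roles of predicted and gold clusters in `muc` yields the recall counts, which is why the metric functions return count pairs rather than final scores.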
FILE: minimize.py
class DocumentState (line 16) | class DocumentState(object):
method __init__ (line 17) | def __init__(self):
method assert_empty (line 30) | def assert_empty(self):
method assert_finalizable (line 43) | def assert_finalizable(self):
method span_dict_to_list (line 54) | def span_dict_to_list(self, span_dict):
method finalize (line 57) | def finalize(self):
function normalize_word (line 86) | def normalize_word(word, language):
function handle_bit (line 94) | def handle_bit(word_index, bit, stack, spans):
function handle_line (line 125) | def handle_line(line, document_state, language, labels, stats):
function minimize_partition (line 180) | def minimize_partition(name, language, extension, labels, stats):
function minimize_language (line 197) | def minimize_language(language, labels, stats):
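The minimize.py functions above convert CoNLL-2012 annotations into the model's jsonlines format; `handle_bit` in particular deals with the bracketed coreference column, where spans open with `(id` and close with `id)`. A hedged sketch of that stack-based parse, using a hypothetical helper name (the repository's `handle_bit` has a different, lower-level interface):

```python
def parse_coref_column(labels):
    # Parse CoNLL-style coreference labels such as "(0", "0)", "(1)"
    # or "-" into (start, end, cluster_id) spans.  A per-cluster stack
    # handles nested spans of the same cluster.
    stacks, spans = {}, []
    for i, label in enumerate(labels):
        if label == "-":
            continue
        for part in label.split("|"):
            cluster = int(part.strip("()"))
            if part.startswith("("):
                stacks.setdefault(cluster, []).append(i)
            if part.endswith(")"):
                start = stacks[cluster].pop()
                spans.append((start, i, cluster))
    return spans
```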
FILE: util.py
function initialize_from_env (line 19) | def initialize_from_env():
function copy_checkpoint (line 34) | def copy_checkpoint(source, target):
function make_summary (line 38) | def make_summary(value_dict):
function flatten (line 41) | def flatten(l):
function set_gpus (line 44) | def set_gpus(*gpus):
function mkdirs (line 48) | def mkdirs(path):
function load_char_dict (line 56) | def load_char_dict(char_vocab_path):
function maybe_divide (line 64) | def maybe_divide(x, y):
function projection (line 67) | def projection(inputs, output_size, initializer=None):
function highway (line 70) | def highway(inputs, num_layers, dropout):
function shape (line 81) | def shape(x, dim):
function ffnn (line 84) | def ffnn(inputs, num_hidden_layers, hidden_size, output_size, dropout, o...
function cnn (line 113) | def cnn(inputs, filter_sizes, num_filters):
function batch_gather (line 128) | def batch_gather(emb, indices):
class RetrievalEvaluator (line 142) | class RetrievalEvaluator(object):
method __init__ (line 143) | def __init__(self):
method update (line 148) | def update(self, gold_set, predicted_set):
method recall (line 153) | def recall(self):
method precision (line 156) | def precision(self):
method metrics (line 159) | def metrics(self):
class EmbeddingDictionary (line 165) | class EmbeddingDictionary(object):
method __init__ (line 166) | def __init__(self, info, normalize=True, maybe_cache=None):
method size (line 177) | def size(self):
method load_embedding_dict (line 180) | def load_embedding_dict(self, path):
method __getitem__ (line 198) | def __getitem__(self, key):
method normalize (line 204) | def normalize(self, v):
class CustomLSTMCell (line 211) | class CustomLSTMCell(tf.contrib.rnn.RNNCell):
method __init__ (line 212) | def __init__(self, num_units, batch_size, dropout):
method state_size (line 222) | def state_size(self):
method output_size (line 226) | def output_size(self):
method initial_state (line 230) | def initial_state(self):
method __call__ (line 233) | def __call__(self, inputs, state, scope=None):
method _orthonormal_initializer (line 246) | def _orthonormal_initializer(self, scale=1.0):
method _block_orthonormal_initializer (line 259) | def _block_orthonormal_initializer(self, output_sizes):
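Among the util.py helpers indexed above, `batch_gather(emb, indices)` selects, for each batch row, the entries of `emb` named by that row of `indices`. The repository implements it as a TensorFlow op; as a pure-Python analogue to show the indexing semantics (an illustrative sketch, not the repository's code):

```python
def batch_gather(emb, indices):
    # emb: [batch][seq][...]; indices: [batch][k].
    # Returns out[b][i] == emb[b][indices[b][i]].
    return [[emb[b][j] for j in row] for b, row in enumerate(indices)]
```

The model uses this pattern to pull out the embeddings of the top-scoring spans and their candidate antecedents without flattening the batch dimension by hand.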
About this extraction
This page contains the full source code of the kentonl/e2e-coref GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 24 files (94.5 KB, approximately 24.9k tokens) and a symbol index of 109 functions, classes, methods, constants, and types.
Extracted by GitExtract, a GitHub-repo-to-text converter built by Nikandr Surkov.