[
  {
    "path": ".gitignore",
    "content": "# Custom configuration\n\nwork/\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Masatoshi Suzuki\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Wikipedia Entity Vectors\n\n\n## Introduction\n\n**Wikipedia Entity Vectors** [1] is a distributed representation of words and named entities (NEs).\nThe words and NEs are mapped to the same vector space.\nThe vectors are trained with skip-gram algorithm using preprocessed Wikipedia text as the corpus.\n\n\n## Downloads\n\nPre-trained vectors are downloadable from the [Releases](https://github.com/singletongue/WikiEntVec/releases) page.\n\nSeveral old versions are available at [this site](http://www.cl.ecei.tohoku.ac.jp/~m-suzuki/jawiki_vector/).\n\n\n## Specs\n\nEach version of the pre-trained vectors contains three files: `word_vectors.txt`, `entity_vectors.txt`, and `all_vectors.txt`.\nThese files are text files in word2vec output format, in which the first line declares the vocabulary size and the vector size (number of dimensions), followed by lines of tokens and their corresponding vectors.\n\n`word_vectors.txt` and `entity_vectors.txt` contains vectors for words and NEs, respectively.\nFor `entity_vectors.txt`, white spaces within names of NEs are replaced with underscores like `United_States`.\n`all_vectors.txt` contains vectors of both words and embeddings in one file, where each NE token is formatted with `##` like `##United_States##`\n(Note that some old versions use square brackets `[]` for formatting NE tokens.)\n\nPre-trained vectors are trained under the configurations below (see Manual training for details):\n\n### `make_corpus.py`\n\n#### Japanese\n\nWe used [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) (v0.0.6) for tokenizing Japanese texts.\n\n|Option              |Value                                                 |\n|:-------------------|:-----------------------------------------------------|\n|`--cirrus_file`     |path to `jawiki-20190520-cirrussearch-content.json.gz`|\n|`--output_file`     |path to the output file                               |\n|`--tokenizer`       |`mecab`                                               |\n|`--tokenizer_option`|`-d <directory of mecab-ipadic-NEologd dictionary>`   |\n\n### `train.py`\n\n|Option         |Value                                           |\n|:--------------|:-----------------------------------------------|\n|`--corpus_file`|path to a corpus file made with `make_corpus.py`|\n|`--output_dir` |path to the output directory                    |\n|`--embed_size` |`100`, `200`, or `300`                          |\n|`--window_size`|`10`                                            |\n|`--sample_size`|`10`                                            |\n|`--min_count`  |`10`                                            |\n|`--epoch`      |`5`                                             |\n|`--workers`    |`20`                                            |\n\n\n## Manual training\n\nYou can manually process Wikipedia dump file and train a skip-gram model on the preprocessed file.\n\n\n### Requirements\n\n- Python 3.6\n- gensim\n- logzero\n- MeCab and its Python binding (mecab-python3) (optional: required for tokenizing Japanese texts)\n\n\n### Steps\n\n1. Download Wikipedia Cirrussearch dump file from [here](https://dumps.wikimedia.org/other/cirrussearch/).\n    - Make sure to choose a file named like `**wiki-YYYYMMDD-cirrussearch-content.json.gz`.\n2. Clone this repository.\n3. Preprocess the downloaded dump file.\n    ```\n    $ python make_corpus.py --cirrus_file <dump file> --output_file <corpus file>\n    ```\n    If you're processing Japanese version of Wikipedia, make sure to use MeCab tokenizer by setting `--tokenizer mecab` option.\n    Otherwise, the text will be tokenized by a simple rule based on regular expression.\n4. Train the model\n    ```\n    $ python train.py --corpus_file <corpus file> --output_dir <output directory>\n    ```\n\n    You can configure options below for training a model.\n\n    ```\n    usage: train.py [-h] --corpus_file CORPUS_FILE --output_dir OUTPUT_DIR\n                    [--embed_size EMBED_SIZE] [--window_size WINDOW_SIZE]\n                    [--sample_size SAMPLE_SIZE] [--min_count MIN_COUNT]\n                    [--epoch EPOCH] [--workers WORKERS]\n\n    optional arguments:\n      -h, --help            show this help message and exit\n      --corpus_file CORPUS_FILE\n                            Corpus file (.txt)\n      --output_dir OUTPUT_DIR\n                            Output directory to save embedding files\n      --embed_size EMBED_SIZE\n                            Dimensionality of the word/entity vectors [100]\n      --window_size WINDOW_SIZE\n                            Maximum distance between the current and predicted\n                            word within a sentence [5]\n      --sample_size SAMPLE_SIZE\n                            Number of negative samples [5]\n      --min_count MIN_COUNT\n                            Ignores all words/entities with total frequency lower\n                            than this [5]\n      --epoch EPOCH         number of training epochs [5]\n      --workers WORKERS     Use these many worker threads to train the model [2]\n    ```\n\n\n## Concepts\n\nThere are several methods to learn distributed representations (or embeddings) of words, such as CBOW and skip-gram [2].\nThese methods train a neural network to predict contextual words given a word in a sentence from large corpora in an unsupervised way.\n\nHowever, there are a couple of problems when applying these methods to learning distributed representations of NEs.\nOne problem is that many NEs consist of multiple words (such as \"New York\" and \"George Washington\"), which makes a simple tokenization of text undesirable.\n\nOther problems are the diversity and ambiguity of NE mentions.\nFor each NE, several expression can be used to mention the NE.\nFor example, \"USA\", \"US\", \"United States\", and \"America\" can all express the same country.\nOn the other hand, the same words and phrases can refer to different entities.\nFor example, the word \"Mercury\" may represent a planet or an element or even a person (such as \"Freddie Mercury\", the vocalist for the rock group Queen).\nTherefore, in order to learn distributed representations of NEs, one must identify the spans of NEs in the text and recognize the mentioned NEs so that they are not treated just as a sequence of words.\n\nTo address these problems, we used Wikipedia as the corpus and utilized its internal hyperlinks for identifying mentions of NEs in article text.\n\nFor each article in Wikipedia, we performed the following preprocessing.\n\nFirst, we extracted all hyperlinks (pairs of anchor text and the linked article) from the source text (a.k.a wikitext) of an article.\n\nNext, for each hyperlink, we replaced the appearances of anchor text in the article body with special tokens representing the linked articles.\n\nFor instance, if an article has a hyperlink to \"Mercury (planet)\" assigned to the anchor text \"Mercury\", we replace all the other appearances of \"Mercury\" in the same article with the special token `##Mercury_(planet)##`.\n\nNote that the diversity of NE mentions is resolved by replacing possibly diverse anchor texts with special tokens which are unique to NEs.\nMoreover, the ambiguity of NE mentions is also addressed by making \"one sense per discourse\" assumption; we assume that NEs mentioned by possibly ambiguous phrases can be determined by the context or the document.\nWith this assumption, the phrases \"Mercury\" in the above example are neither replaced with `##Mercury_(element)##` nor `##Freddie_Mercury##`, since the article does not have such mentions as hyperlinks.\n\nWe used the preprocessed Wikipedia articles as the corpus and applied skip-gram algorithm to learn distributed representations of words and NEs.\nThis means that words and NEs are mapped to the same vector space.\n\n\n## Licenses\n\nThe pre-trained vectors are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).\n\nThe codes in this repository are distributed under the MIT License.\n\n\n## References\n\n[1] Masatoshi Suzuki, Koji Matsuda, Satoshi Sekine, Naoaki Okazaki and Kentaro\nInui. A Joint Neural Model for Fine-Grained Named Entity Classification of\nWikipedia Articles. IEICE Transactions on Information and Systems, Special\nSection on Semantic Web and Linked Data, Vol. E101-D, No.1, pp.73-81, 2018.\n\n[2] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean. Efficient Estimation\nof Word Representations in Vector Space. ICLR, 2013.\n\n[3] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean.\nDistributed Representations of Words and Phrases and their Compositionality.\nNIPS, 2013.\n\n\n## Acknowledgments\n\nThis work was partially supported by Research and Development on Real World Big Data Integration and Analysis.\n"
  },
  {
    "path": "make_corpus.py",
    "content": "import re\nimport json\nimport gzip\nimport argparse\nfrom collections import OrderedDict\n\nfrom logzero import logger\n\nfrom tokenization import RegExpTokenizer, NLTKTokenizer, MeCabTokenizer\n\n\nregex_spaces = re.compile(r'\\s+')\nregex_title_paren = re.compile(r' \\([^()].+?\\)$')\nregex_hyperlink = re.compile(r'\\[\\[(.+?)\\]\\]')\nregex_entity = re.compile(r'##[^#]+?##')\n\n\ndef main(args):\n    logger.info('initializing a tokenizer')\n    if args.tokenizer == 'regexp':\n        tokenizer = RegExpTokenizer(do_lower_case=args.do_lower_case,\n                                    preserved_pattern=regex_entity)\n    elif args.tokenizer == 'nltk':\n        tokenizer = NLTKTokenizer(do_lower_case=args.do_lower_case,\n                                  preserved_pattern=regex_entity)\n    elif args.tokenizer == 'mecab':\n        tokenizer = MeCabTokenizer(mecab_option=args.tokenizer_option,\n                                   do_lower_case=args.do_lower_case,\n                                   preserved_pattern=regex_entity)\n    else:\n        raise RuntimeError(f'Invalid tokenizer: {args.tokenizer}')\n\n\n    redirects = dict()\n    if args.do_resolve_redirects:\n        logger.info('loading redirect information')\n        with gzip.open(args.cirrus_file, 'rt') as fi:\n            for line in fi:\n                json_item = json.loads(line)\n                if 'title' not in json_item:\n                    continue\n\n                if 'redirect' not in json_item:\n                    continue\n\n                dst_title = json_item['title']\n                redirects[dst_title] = dst_title\n                for redirect_item in json_item['redirect']:\n                    if redirect_item['namespace'] == 0:\n                        src_title = redirect_item['title']\n                        redirects.setdefault(src_title, dst_title)\n\n    logger.info('generating corpus for training')\n    n_processed = 0\n    with gzip.open(args.cirrus_file, 'rt') as fi, \\\n         open(args.output_file, 'wt') as fo:\n        for line in fi:\n            json_item = json.loads(line)\n            if 'title' not in json_item:\n                continue\n\n            title = json_item['title']\n            text = regex_spaces.sub(' ', json_item['text'])\n\n            hyperlinks = dict()\n            title_without_paren = regex_title_paren.sub('', title)\n            hyperlinks.setdefault(title_without_paren, title)\n            for match in regex_hyperlink.finditer(json_item['source_text']):\n                if '|' in match.group(1):\n                    (entity, anchor) = match.group(1).split('|', maxsplit=1)\n                else:\n                    entity = anchor = match.group(1)\n\n                if '#' in entity:\n                    entity = entity[:entity.find('#')]\n\n                anchor = anchor.strip()\n                entity = entity.strip()\n\n                if args.do_resolve_redirects:\n                    entity = redirects.get(entity, '')\n\n                if len(anchor) > 0 and len(entity) > 0:\n                    hyperlinks.setdefault(anchor, entity)\n\n            hyperlinks_sorted = OrderedDict(sorted(\n                hyperlinks.items(), key=lambda t: len(t[0]), reverse=True))\n\n            replacement_flags = [0] * len(text)\n            for (anchor, entity) in hyperlinks_sorted.items():\n                cursor = 0\n                while cursor < len(text) and anchor in text[cursor:]:\n                    start = text.index(anchor, cursor)\n                    end = start + len(anchor)\n                    if not any(replacement_flags[start:end]):\n                        entity_token = f'##{entity}##'.replace(' ', '_')\n                        text = text[:start] + entity_token + text[end:]\n                        replacement_flags = replacement_flags[:start] \\\n                            + [1] * len(entity_token) + replacement_flags[end:]\n                        assert len(text) == len(replacement_flags)\n                        cursor = start + len(entity_token)\n                    else:\n                        cursor = end\n\n            text = ' '.join(tokenizer.tokenize(text))\n\n            print(text, file=fo)\n            n_processed += 1\n\n            if n_processed <= 10:\n                logger.info('*** Example ***')\n                example_text = text[:400] + '...' if len(text) > 400 else text\n                logger.info(example_text)\n\n            if n_processed % 10000 == 0:\n                logger.info(f'processed: {n_processed}')\n\n    if n_processed % 10000 != 0:\n        logger.info(f'processed: {n_processed}')\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--cirrus_file', type=str, required=True,\n        help='Wikipedia Cirrussearch content dump file (.json.gz)')\n    parser.add_argument('--output_file', type=str, required=True,\n        help='output corpus file (.txt)')\n    parser.add_argument('--tokenizer', default='regexp',\n        help='tokenizer type [regexp]')\n    parser.add_argument('--do_lower_case', action='store_true',\n        help='lowercase words (not applied to NEs)')\n    parser.add_argument('--do_resolve_redirects', action='store_true',\n        help='resolve redirects of entity names')\n    parser.add_argument('--tokenizer_option', type=str, default='',\n        help='option string passed to the tokenizer')\n    args = parser.parse_args()\n    main(args)\n"
  },
  {
    "path": "tokenization.py",
    "content": "import re\n\n\nclass BaseTokenizer(object):\n    def __init__(self, do_lower_case=False, preserved_pattern=None):\n        self.do_lower_case = do_lower_case\n        self.preserved_pattern = preserved_pattern\n\n    def tokenize_words(self, text):\n        raise NotImplementedError\n\n    def tokenize(self, text):\n        if self.preserved_pattern is not None:\n            tokens = []\n            split_texts = self.preserved_pattern.split(text)\n            matched_texts = \\\n                [m.group(0) for m in self.preserved_pattern.finditer(text)] + [None]\n            assert len(split_texts) == len(matched_texts)\n            for (split_text, matched_text) in zip(split_texts, matched_texts):\n                if self.do_lower_case:\n                    tokens += [t.lower() for t in self.tokenize_words(split_text)]\n                else:\n                    tokens += self.tokenize_words(split_text)\n\n                if matched_text is not None:\n                    tokens += [matched_text]\n        else:\n            if self.do_lower_case:\n                tokens = [t.lower() for t in self.tokenize_words(text)]\n            else:\n                tokens = self.tokenize_words(text)\n\n        return tokens\n\n\nclass RegExpTokenizer(BaseTokenizer):\n    def __init__(self, pattern=r'\\w+|\\S', do_lower_case=False, preserved_pattern=None):\n        super(RegExpTokenizer, self).__init__(do_lower_case, preserved_pattern)\n        self.pattern = re.compile(pattern)\n\n    def tokenize_words(self, text):\n        tokens = [t.strip() for t in self.pattern.findall(text) if t.strip()]\n        return tokens\n\n\nclass NLTKTokenizer(BaseTokenizer):\n    def __init__(self, do_lower_case=False, preserved_pattern=None):\n        super(NLTKTokenizer, self).__init__(do_lower_case, preserved_pattern)\n        from nltk import word_tokenize\n        self.nltk_tokenize = word_tokenize\n\n    def tokenize_words(self, text):\n        tokens = [t.strip() for t in self.nltk_tokenize(text) if t.strip()]\n        return tokens\n\n\nclass MeCabTokenizer(BaseTokenizer):\n    def __init__(self, mecab_option='', do_lower_case=False, preserved_pattern=None):\n        super(MeCabTokenizer, self).__init__(do_lower_case, preserved_pattern)\n        import MeCab\n        self.mecab_option = mecab_option\n        self.mecab = MeCab.Tagger(self.mecab_option)\n\n    def tokenize_words(self, text):\n        tokens = []\n        for line in self.mecab.parse(text).split('\\n'):\n            if line == 'EOS':\n                break\n\n            token = line.split('\\t')[0].strip()\n            if not token:\n                continue\n\n            tokens.append(token)\n\n        return tokens\n"
  },
  {
    "path": "train.py",
    "content": "import re\nimport argparse\nfrom pathlib import Path\n\nimport logzero\nfrom logzero import logger\nfrom gensim.models.word2vec import LineSentence, Word2Vec\n\n\nlogger_word2vec = logzero.setup_logger(name='gensim.models.word2vec')\nlogger_base_any2vec = logzero.setup_logger(name='gensim.models.base_any2vec')\n\nregex_entity = re.compile(r'##[^#]+?##')\n\n\ndef main(args):\n    output_dir = Path(args.output_dir)\n    output_dir.mkdir(parents=True, exist_ok=True)\n\n    word_vectors_file = output_dir / 'word_vectors.txt'\n    entity_vectors_file = output_dir / 'entity_vectors.txt'\n    all_vectors_file = output_dir / 'all_vectors.txt'\n\n    logger.info('training the model')\n    model = Word2Vec(sentences=LineSentence(args.corpus_file),\n                     size=args.embed_size,\n                     window=args.window_size,\n                     negative=args.sample_size,\n                     min_count=args.min_count,\n                     workers=args.workers,\n                     sg=1,\n                     hs=0,\n                     iter=args.epoch)\n\n    word_vocab_size = 0\n    entity_vocab_size = 0\n    for token in model.wv.vocab:\n        if regex_entity.match(token):\n            entity_vocab_size += 1\n        else:\n            word_vocab_size += 1\n\n    total_vocab_size = word_vocab_size + entity_vocab_size\n    logger.info(f'word vocabulary size: {word_vocab_size}')\n    logger.info(f'entity vocabulary size: {entity_vocab_size}')\n    logger.info(f'total vocabulary size: {total_vocab_size}')\n\n    logger.info('writing word/entity vectors to files')\n    with open(word_vectors_file, 'w') as fo_word, \\\n         open(entity_vectors_file, 'w') as fo_entity, \\\n         open(all_vectors_file, 'w') as fo_all:\n\n        # write word2vec headers to each file\n        print(word_vocab_size, args.embed_size, file=fo_word)\n        print(entity_vocab_size, args.embed_size, file=fo_entity)\n        print(total_vocab_size, args.embed_size, file=fo_all)\n\n        # write tokens and vectors\n        for (token, _) in sorted(model.wv.vocab.items(), key=lambda t: -t[1].count):\n            vector = model.wv[token]\n\n            if regex_entity.match(token):\n                print(token[2:-2], *vector, file=fo_entity)\n            else:\n                print(token, *vector, file=fo_word)\n\n            print(token, *vector, file=fo_all)\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--corpus_file', type=str, required=True,\n        help='Corpus file (.txt)')\n    parser.add_argument('--output_dir', type=str, required=True,\n        help='Output directory to save embedding files')\n    parser.add_argument('--embed_size', type=int, default=100,\n        help='Dimensionality of the word/entity vectors [100]')\n    parser.add_argument('--window_size', type=int, default=5,\n        help='Maximum distance between the current and '\n             'predicted word within a sentence [5]')\n    parser.add_argument('--sample_size', type=int, default=5,\n        help='Number of negative samples [5]')\n    parser.add_argument('--min_count', type=int, default=5,\n        help='Ignores all words/entities with total frequency lower than this [5]')\n    parser.add_argument('--epoch', type=int, default=5,\n        help='number of training epochs [5]')\n    parser.add_argument('--workers', type=int, default=2,\n        help='Use these many worker threads to train the model [2]')\n    args = parser.parse_args()\n    main(args)\n"
  }
]