Full Code of singletongue/WikiEntVec for AI

Repository: singletongue/WikiEntVec
Branch: master
Commit: acf6b08c962b
Files: 6
Total size: 21.9 KB

Directory structure:
WikiEntVec/

├── .gitignore
├── LICENSE
├── README.md
├── make_corpus.py
├── tokenization.py
└── train.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Custom configuration

work/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2018 Masatoshi Suzuki

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# Wikipedia Entity Vectors


## Introduction

**Wikipedia Entity Vectors** [1] is a distributed representation of words and named entities (NEs).
The words and NEs are mapped to the same vector space.
The vectors are trained with the skip-gram algorithm, using preprocessed Wikipedia text as the corpus.


## Downloads

Pre-trained vectors are downloadable from the [Releases](https://github.com/singletongue/WikiEntVec/releases) page.

Several old versions are available at [this site](http://www.cl.ecei.tohoku.ac.jp/~m-suzuki/jawiki_vector/).


## Specs

Each version of the pre-trained vectors contains three files: `word_vectors.txt`, `entity_vectors.txt`, and `all_vectors.txt`.
These files are text files in word2vec output format, in which the first line declares the vocabulary size and the vector size (number of dimensions), followed by lines of tokens and their corresponding vectors.

`word_vectors.txt` and `entity_vectors.txt` contain vectors for words and NEs, respectively.
In `entity_vectors.txt`, whitespace within NE names is replaced with underscores, as in `United_States`.
`all_vectors.txt` contains vectors of both words and NEs in one file, where each NE token is wrapped in `##`, like `##United_States##`
(note that some old versions use square brackets `[]` for formatting NE tokens instead).
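
Because these files are in the standard word2vec text format, they can be loaded with common tooling. Below is a minimal sketch using gensim's `KeyedVectors` (the file paths and query tokens are illustrative and assumed, not guaranteed to be in every release's vocabulary):

```
from gensim.models import KeyedVectors

# Load vectors from the word2vec-format text files.
word_vectors = KeyedVectors.load_word2vec_format('word_vectors.txt', binary=False)
all_vectors = KeyedVectors.load_word2vec_format('all_vectors.txt', binary=False)

# Query a word and an NE token (assumes both tokens appear in the vocabulary).
print(word_vectors.most_similar('Tokyo', topn=5))
print(all_vectors.most_similar('##United_States##', topn=5))
```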

The pre-trained vectors were trained with the following configurations (see the Manual training section below for details):

### `make_corpus.py`

#### Japanese

We used [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) (v0.0.6) for tokenizing Japanese texts.

|Option              |Value                                                 |
|:-------------------|:-----------------------------------------------------|
|`--cirrus_file`     |path to `jawiki-20190520-cirrussearch-content.json.gz`|
|`--output_file`     |path to the output file                               |
|`--tokenizer`       |`mecab`                                               |
|`--tokenizer_option`|`-d <directory of mecab-ipadic-NEologd dictionary>`   |

### `train.py`

|Option         |Value                                           |
|:--------------|:-----------------------------------------------|
|`--corpus_file`|path to a corpus file made with `make_corpus.py`|
|`--output_dir` |path to the output directory                    |
|`--embed_size` |`100`, `200`, or `300`                          |
|`--window_size`|`10`                                            |
|`--sample_size`|`10`                                            |
|`--min_count`  |`10`                                            |
|`--epoch`      |`5`                                             |
|`--workers`    |`20`                                            |


## Manual training

You can manually preprocess a Wikipedia dump file and train a skip-gram model on the resulting corpus.


### Requirements

- Python 3.6
- gensim
- logzero
- MeCab and its Python binding (mecab-python3) (optional: required for tokenizing Japanese texts)


### Steps

1. Download a Wikipedia Cirrussearch dump file from [here](https://dumps.wikimedia.org/other/cirrussearch/).
    - Make sure to choose a file named like `**wiki-YYYYMMDD-cirrussearch-content.json.gz`.
2. Clone this repository.
3. Preprocess the downloaded dump file.
    ```
    $ python make_corpus.py --cirrus_file <dump file> --output_file <corpus file>
    ```
    If you're processing the Japanese version of Wikipedia, make sure to use the MeCab tokenizer by setting the `--tokenizer mecab` option.
    Otherwise, the text will be tokenized by a simple rule based on regular expressions (see the tokenizer sketch after these steps).
4. Train the model.
    ```
    $ python train.py --corpus_file <corpus file> --output_dir <output directory>
    ```

    You can configure the options below when training a model.

    ```
    usage: train.py [-h] --corpus_file CORPUS_FILE --output_dir OUTPUT_DIR
                    [--embed_size EMBED_SIZE] [--window_size WINDOW_SIZE]
                    [--sample_size SAMPLE_SIZE] [--min_count MIN_COUNT]
                    [--epoch EPOCH] [--workers WORKERS]

    optional arguments:
      -h, --help            show this help message and exit
      --corpus_file CORPUS_FILE
                            Corpus file (.txt)
      --output_dir OUTPUT_DIR
                            Output directory to save embedding files
      --embed_size EMBED_SIZE
                            Dimensionality of the word/entity vectors [100]
      --window_size WINDOW_SIZE
                            Maximum distance between the current and predicted
                            word within a sentence [5]
      --sample_size SAMPLE_SIZE
                            Number of negative samples [5]
      --min_count MIN_COUNT
                            Ignores all words/entities with total frequency lower
                            than this [5]
      --epoch EPOCH         number of training epochs [5]
      --workers WORKERS     Use these many worker threads to train the model [2]
    ```
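
The sketch below shows how the bundled tokenizers treat `##Entity##` tokens as preserved spans, using `RegExpTokenizer` from `tokenization.py` (a minimal example; the input sentence is made up):

```
import re

from tokenization import RegExpTokenizer

# ##Entity## tokens are kept as single tokens; the surrounding text is
# tokenized (and optionally lowercased) by the regular-expression rule.
tokenizer = RegExpTokenizer(do_lower_case=True,
                            preserved_pattern=re.compile(r'##[^#]+?##'))

print(tokenizer.tokenize('##Mercury_(planet)## is the smallest planet.'))
# ['##Mercury_(planet)##', 'is', 'the', 'smallest', 'planet', '.']
```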


## Concepts

There are several methods to learn distributed representations (or embeddings) of words, such as CBOW and skip-gram [2].
These methods train a neural network on large corpora, in an unsupervised manner, to predict the words surrounding a given word in a sentence.

However, there are a couple of problems when applying these methods to learning distributed representations of NEs.
One problem is that many NEs consist of multiple words (such as "New York" and "George Washington"), which makes a simple tokenization of text undesirable.

Other problems are the diversity and ambiguity of NE mentions.
For each NE, several expressions can be used to mention it.
For example, "USA", "US", "United States", and "America" can all express the same country.
On the other hand, the same words and phrases can refer to different entities.
For example, the word "Mercury" may refer to a planet, an element, or even a person (such as "Freddie Mercury", the vocalist of the rock band Queen).
Therefore, in order to learn distributed representations of NEs, one must identify the spans of NEs in the text and recognize the mentioned NEs so that they are not treated just as a sequence of words.

To address these problems, we used Wikipedia as the corpus and utilized its internal hyperlinks for identifying mentions of NEs in article text.

For each article in Wikipedia, we performed the following preprocessing.

First, we extracted all hyperlinks (pairs of anchor text and linked article) from the source text (a.k.a. wikitext) of an article.

Next, for each hyperlink, we replaced the appearances of anchor text in the article body with special tokens representing the linked articles.

For instance, if an article has a hyperlink to "Mercury (planet)" assigned to the anchor text "Mercury", we replace all the other appearances of "Mercury" in the same article with the special token `##Mercury_(planet)##`.

Note that the diversity of NE mentions is resolved by replacing possibly diverse anchor texts with special tokens that are unique to each NE.
Moreover, the ambiguity of NE mentions is addressed by making the "one sense per discourse" assumption: we assume that the NE mentioned by a possibly ambiguous phrase can be determined from the context of the document.
With this assumption, the occurrences of "Mercury" in the above example are replaced with neither `##Mercury_(element)##` nor `##Freddie_Mercury##`, since the article does not contain hyperlinks to those entities.
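
A simplified sketch of this replacement (illustrative only; `make_corpus.py` additionally matches longer anchors first and avoids overlapping replacements):

```
# Toy example of the anchor-to-entity-token replacement.
text = 'Mercury is the closest planet to the Sun. Mercury has no moons.'
hyperlinks = {'Mercury': 'Mercury (planet)'}  # anchor text -> linked article

for anchor, entity in hyperlinks.items():
    entity_token = '##{}##'.format(entity).replace(' ', '_')
    text = text.replace(anchor, entity_token)

print(text)
# ##Mercury_(planet)## is the closest planet to the Sun. ##Mercury_(planet)## has no moons.
```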

We used the preprocessed Wikipedia articles as the corpus and applied the skip-gram algorithm to learn distributed representations of words and NEs.
This means that words and NEs are mapped to the same vector space.


## Licenses

The pre-trained vectors are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).

The code in this repository is distributed under the MIT License.


## References

[1] Masatoshi Suzuki, Koji Matsuda, Satoshi Sekine, Naoaki Okazaki and Kentaro
Inui. A Joint Neural Model for Fine-Grained Named Entity Classification of
Wikipedia Articles. IEICE Transactions on Information and Systems, Special
Section on Semantic Web and Linked Data, Vol. E101-D, No.1, pp.73-81, 2018.

[2] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean. Efficient Estimation
of Word Representations in Vector Space. ICLR, 2013.

[3] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean.
Distributed Representations of Words and Phrases and their Compositionality.
NIPS, 2013.


## Acknowledgments

This work was partially supported by Research and Development on Real World Big Data Integration and Analysis.


================================================
FILE: make_corpus.py
================================================
import re
import json
import gzip
import argparse
from collections import OrderedDict

from logzero import logger

from tokenization import RegExpTokenizer, NLTKTokenizer, MeCabTokenizer


regex_spaces = re.compile(r'\s+')
regex_title_paren = re.compile(r' \([^()].+?\)$')
regex_hyperlink = re.compile(r'\[\[(.+?)\]\]')
regex_entity = re.compile(r'##[^#]+?##')


def main(args):
    logger.info('initializing a tokenizer')
    if args.tokenizer == 'regexp':
        tokenizer = RegExpTokenizer(do_lower_case=args.do_lower_case,
                                    preserved_pattern=regex_entity)
    elif args.tokenizer == 'nltk':
        tokenizer = NLTKTokenizer(do_lower_case=args.do_lower_case,
                                  preserved_pattern=regex_entity)
    elif args.tokenizer == 'mecab':
        tokenizer = MeCabTokenizer(mecab_option=args.tokenizer_option,
                                   do_lower_case=args.do_lower_case,
                                   preserved_pattern=regex_entity)
    else:
        raise RuntimeError(f'Invalid tokenizer: {args.tokenizer}')


    redirects = dict()
    if args.do_resolve_redirects:
        logger.info('loading redirect information')
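        # `redirects` maps each article title (and the titles of its
        # namespace-0 redirect pages) to the canonical article title.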
        with gzip.open(args.cirrus_file, 'rt') as fi:
            for line in fi:
                json_item = json.loads(line)
                if 'title' not in json_item:
                    continue

                if 'redirect' not in json_item:
                    continue

                dst_title = json_item['title']
                redirects[dst_title] = dst_title
                for redirect_item in json_item['redirect']:
                    if redirect_item['namespace'] == 0:
                        src_title = redirect_item['title']
                        redirects.setdefault(src_title, dst_title)

    logger.info('generating corpus for training')
    n_processed = 0
    with gzip.open(args.cirrus_file, 'rt') as fi, \
         open(args.output_file, 'wt') as fo:
        for line in fi:
            json_item = json.loads(line)
            if 'title' not in json_item:
                continue

            title = json_item['title']
            text = regex_spaces.sub(' ', json_item['text'])

            hyperlinks = dict()
            title_without_paren = regex_title_paren.sub('', title)
            hyperlinks.setdefault(title_without_paren, title)
            for match in regex_hyperlink.finditer(json_item['source_text']):
                if '|' in match.group(1):
                    (entity, anchor) = match.group(1).split('|', maxsplit=1)
                else:
                    entity = anchor = match.group(1)

                if '#' in entity:
                    entity = entity[:entity.find('#')]

                anchor = anchor.strip()
                entity = entity.strip()

                if args.do_resolve_redirects:
                    entity = redirects.get(entity, '')

                if len(anchor) > 0 and len(entity) > 0:
                    hyperlinks.setdefault(anchor, entity)

            hyperlinks_sorted = OrderedDict(sorted(
                hyperlinks.items(), key=lambda t: len(t[0]), reverse=True))

            replacement_flags = [0] * len(text)
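            # Replace anchor occurrences with ##Entity## tokens, longest anchors
            # first; replacement_flags marks character positions already rewritten
            # so that later (shorter) anchors cannot overwrite them.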
            for (anchor, entity) in hyperlinks_sorted.items():
                cursor = 0
                while cursor < len(text) and anchor in text[cursor:]:
                    start = text.index(anchor, cursor)
                    end = start + len(anchor)
                    if not any(replacement_flags[start:end]):
                        entity_token = f'##{entity}##'.replace(' ', '_')
                        text = text[:start] + entity_token + text[end:]
                        replacement_flags = replacement_flags[:start] \
                            + [1] * len(entity_token) + replacement_flags[end:]
                        assert len(text) == len(replacement_flags)
                        cursor = start + len(entity_token)
                    else:
                        cursor = end

            text = ' '.join(tokenizer.tokenize(text))

            print(text, file=fo)
            n_processed += 1

            if n_processed <= 10:
                logger.info('*** Example ***')
                example_text = text[:400] + '...' if len(text) > 400 else text
                logger.info(example_text)

            if n_processed % 10000 == 0:
                logger.info(f'processed: {n_processed}')

    if n_processed % 10000 != 0:
        logger.info(f'processed: {n_processed}')


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--cirrus_file', type=str, required=True,
        help='Wikipedia Cirrussearch content dump file (.json.gz)')
    parser.add_argument('--output_file', type=str, required=True,
        help='output corpus file (.txt)')
    parser.add_argument('--tokenizer', default='regexp',
        help='tokenizer type [regexp]')
    parser.add_argument('--do_lower_case', action='store_true',
        help='lowercase words (not applied to NEs)')
    parser.add_argument('--do_resolve_redirects', action='store_true',
        help='resolve redirects of entity names')
    parser.add_argument('--tokenizer_option', type=str, default='',
        help='option string passed to the tokenizer')
    args = parser.parse_args()
    main(args)


================================================
FILE: tokenization.py
================================================
import re


class BaseTokenizer(object):
    def __init__(self, do_lower_case=False, preserved_pattern=None):
        self.do_lower_case = do_lower_case
        self.preserved_pattern = preserved_pattern

    def tokenize_words(self, text):
        raise NotImplementedError

    def tokenize(self, text):
        if self.preserved_pattern is not None:
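            # Split the text on preserved spans (e.g. ##Entity## tokens), tokenize
            # only the intervening text, and re-insert each preserved span as a
            # single, untouched token.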
            tokens = []
            split_texts = self.preserved_pattern.split(text)
            matched_texts = \
                [m.group(0) for m in self.preserved_pattern.finditer(text)] + [None]
            assert len(split_texts) == len(matched_texts)
            for (split_text, matched_text) in zip(split_texts, matched_texts):
                if self.do_lower_case:
                    tokens += [t.lower() for t in self.tokenize_words(split_text)]
                else:
                    tokens += self.tokenize_words(split_text)

                if matched_text is not None:
                    tokens += [matched_text]
        else:
            if self.do_lower_case:
                tokens = [t.lower() for t in self.tokenize_words(text)]
            else:
                tokens = self.tokenize_words(text)

        return tokens


class RegExpTokenizer(BaseTokenizer):
    def __init__(self, pattern=r'\w+|\S', do_lower_case=False, preserved_pattern=None):
        super(RegExpTokenizer, self).__init__(do_lower_case, preserved_pattern)
        self.pattern = re.compile(pattern)

    def tokenize_words(self, text):
        tokens = [t.strip() for t in self.pattern.findall(text) if t.strip()]
        return tokens


class NLTKTokenizer(BaseTokenizer):
    def __init__(self, do_lower_case=False, preserved_pattern=None):
        super(NLTKTokenizer, self).__init__(do_lower_case, preserved_pattern)
        from nltk import word_tokenize
        self.nltk_tokenize = word_tokenize

    def tokenize_words(self, text):
        tokens = [t.strip() for t in self.nltk_tokenize(text) if t.strip()]
        return tokens


class MeCabTokenizer(BaseTokenizer):
    def __init__(self, mecab_option='', do_lower_case=False, preserved_pattern=None):
        super(MeCabTokenizer, self).__init__(do_lower_case, preserved_pattern)
        import MeCab
        self.mecab_option = mecab_option
        self.mecab = MeCab.Tagger(self.mecab_option)

    def tokenize_words(self, text):
        tokens = []
        for line in self.mecab.parse(text).split('\n'):
            if line == 'EOS':
                break

            token = line.split('\t')[0].strip()
            if not token:
                continue

            tokens.append(token)

        return tokens


================================================
FILE: train.py
================================================
import re
import argparse
from pathlib import Path

import logzero
from logzero import logger
from gensim.models.word2vec import LineSentence, Word2Vec


logger_word2vec = logzero.setup_logger(name='gensim.models.word2vec')
logger_base_any2vec = logzero.setup_logger(name='gensim.models.base_any2vec')

regex_entity = re.compile(r'##[^#]+?##')


def main(args):
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    word_vectors_file = output_dir / 'word_vectors.txt'
    entity_vectors_file = output_dir / 'entity_vectors.txt'
    all_vectors_file = output_dir / 'all_vectors.txt'

    logger.info('training the model')
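    # Note: this uses the gensim 3.x API (`size`, `iter`, `model.wv.vocab`);
    # gensim 4 renamed these to `vector_size`, `epochs`, and `wv.key_to_index`.
    # sg=1 with hs=0 selects skip-gram with negative sampling.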
    model = Word2Vec(sentences=LineSentence(args.corpus_file),
                     size=args.embed_size,
                     window=args.window_size,
                     negative=args.sample_size,
                     min_count=args.min_count,
                     workers=args.workers,
                     sg=1,
                     hs=0,
                     iter=args.epoch)

    word_vocab_size = 0
    entity_vocab_size = 0
    for token in model.wv.vocab:
        if regex_entity.match(token):
            entity_vocab_size += 1
        else:
            word_vocab_size += 1

    total_vocab_size = word_vocab_size + entity_vocab_size
    logger.info(f'word vocabulary size: {word_vocab_size}')
    logger.info(f'entity vocabulary size: {entity_vocab_size}')
    logger.info(f'total vocabulary size: {total_vocab_size}')

    logger.info('writing word/entity vectors to files')
    with open(word_vectors_file, 'w') as fo_word, \
         open(entity_vectors_file, 'w') as fo_entity, \
         open(all_vectors_file, 'w') as fo_all:

        # write word2vec headers to each file
        print(word_vocab_size, args.embed_size, file=fo_word)
        print(entity_vocab_size, args.embed_size, file=fo_entity)
        print(total_vocab_size, args.embed_size, file=fo_all)

        # write tokens and vectors
        for (token, _) in sorted(model.wv.vocab.items(), key=lambda t: -t[1].count):
            vector = model.wv[token]

            if regex_entity.match(token):
                print(token[2:-2], *vector, file=fo_entity)
            else:
                print(token, *vector, file=fo_word)

            print(token, *vector, file=fo_all)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--corpus_file', type=str, required=True,
        help='Corpus file (.txt)')
    parser.add_argument('--output_dir', type=str, required=True,
        help='Output directory to save embedding files')
    parser.add_argument('--embed_size', type=int, default=100,
        help='Dimensionality of the word/entity vectors [100]')
    parser.add_argument('--window_size', type=int, default=5,
        help='Maximum distance between the current and '
             'predicted word within a sentence [5]')
    parser.add_argument('--sample_size', type=int, default=5,
        help='Number of negative samples [5]')
    parser.add_argument('--min_count', type=int, default=5,
        help='Ignores all words/entities with total frequency lower than this [5]')
    parser.add_argument('--epoch', type=int, default=5,
        help='number of training epochs [5]')
    parser.add_argument('--workers', type=int, default=2,
        help='Use these many worker threads to train the model [2]')
    args = parser.parse_args()
    main(args)