[
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: CI\n\non:\n  push:\n    branches: [master]\n  pull_request:\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - name: Set up JDK 21\n        uses: actions/setup-java@v5\n        with:\n          distribution: temurin\n          java-version: 21\n          cache: maven\n\n      - run: mvn --batch-mode verify\n"
  },
  {
    "path": ".gitignore",
    "content": "*.versionsBackup\r\ntmp/\r\ndist/\r\ntarget/\r\n*.patch\r\n.eclipse/\r\n.project\r\n.classpath\r\n.settings\r\n*.name\r\n*.iml\r\n.idea/"
  },
  {
    "path": "CHANGES.txt",
    "content": "\r\nMorfologik, Change Log\r\n======================\r\n\r\nFor an up-to-date CHANGES file see \r\nhttps://github.com/morfologik/morfologik-stemming/blob/master/CHANGES\r\n\r\n======================= morfologik-stemming 2.2.0 =======================\r\n\r\nBug Fixes\r\n\r\n * PR #121: fix bug in replacements: s>ss, ss>s (Jaume Ortolà).\r\n\r\n * PR #118: fix HMatrix not being reset between calls to\r\n   Speller.findReplacementCandidates(), causing incorrect candidates to be\r\n   returned on repeated calls (Jaume Ortolà).\r\n\r\n * GH-38: support ^ (start) and $ (end) anchors and _ (space) in\r\n   replacement-pairs, following hunspell REP conventions.\r\n\r\n * GH-75: Fix incorrect and incomplete CharsetDecoder usage in Speller.findRepl():\r\n   missing charBuffer.clear() before decode and missing decoder.flush() after\r\n   decode, which could produce wrong candidates for stateful encodings.\r\n\r\nOther Changes\r\n\r\n * apply spotless (google java format) formatting to sources.\r\n\r\n * switch to junit5/ jupiter and randomizedtesting-jupiter\r\n\r\n * Update Maven build plugins to current versions.\r\n\r\n * Require Java 21 for compiling the project. The output jar remains Java 11 \r\n   compatible.\r\n\r\n======================= morfologik-stemming 2.1.9 =======================\r\n\r\nOther Changes\r\n\r\n * PR #114: improve run-on suggestions for camel case words (Jaume Ortolà)\r\n\r\n======================= morfologik-stemming 2.1.8 =======================\r\n\r\nOther Changes\r\n\r\n * GH-112: Add automatic module name to all JARs.\r\n * Upgrade selected build dependencies.\r\n\r\n======================= morfologik-stemming 2.1.7 =======================\r\n\r\nBug Fixes\r\n\r\n * PR #103: fix distance value in the result of `Speller.findReplacementCandidates`\r\n   (Daniel Naber).\r\n\r\n * GH-102: upgrade jcommander to newest version. 
(Dawid Weiss)\r\n\r\nOther Changes\r\n\r\n * PR #103: introduce `Speller.replaceRunOnWordCandidates()` which returns\r\n   `CandidateData` (Daniel Naber).\r\n\r\n======================= morfologik-stemming 2.1.6 =======================\r\n\r\nOther Changes\r\n\r\n * PR #101: fix replaceRunOnWords() not working for words that are uppercase at\r\n   sentence start (Daniel Naber).\r\n\r\n======================= morfologik-stemming 2.1.5 =======================\r\n\r\nBug Fixes\r\n\r\n * PR #96: incorrect logic in runOnWords (Jaume Ortolà).\r\n\r\n * PR #97: micro performance optimization (Daniel Naber).\r\n\r\nOther Changes\r\n\r\n * GH-95: Speller: findReplacementCandidates returns full CandidateData. This \r\n          commit also refactors the Speller to use a stateless returned array\r\n          list rather than reuse an internal field. Should not make a \r\n          practical difference. (Dawid Weiss)\r\n\r\n======================= morfologik-stemming 2.1.4 =======================\r\n\r\nBug Fixes\r\n\r\n * PR #93: Case-changed words are always good suggestions (Jaume Ortolà).\r\n\r\n * GH-92: FSATraversal may return NOT_FOUND instead of AUTOMATON_HAS_PREFIX\r\n          (stevendolg via Dawid Weiss)\r\n\r\nOther Changes\r\n\r\n * Updated build and test plugins to newer versions.\r\n\r\n======================= morfologik-stemming 2.1.3 =======================\r\n\r\nBug Fixes\r\n\r\n * GH-86: Speller: words containing the dictionary separator are not handled\r\n          properly (Jaume Ortolà via Dawid Weiss).\r\n\r\n======================= morfologik-stemming 2.1.2 =======================\r\n\r\nBug Fixes\r\n\r\n * GH-85: Encoded sequences can clash with separator byte and cause assertion \r\n   errors. 
(Daniel Naber, Dawid Weiss).\r\n\r\n======================= morfologik-stemming 2.1.1 =======================\r\n\r\nBug Fixes\r\n\r\n * PR #78: Fix dependency issue in morfologik-speller (Alden Quimby).\r\n\r\n * GH-84: Dictionary resources not found with security manager.\r\n   (Uwe Schindler)\r\n\r\nOther Changes\r\n\r\n * GH-79: Corrected a corner case in DictCompileTest. (Dawid Weiss)\r\n\r\n * GH-77: Trailing spaces in encoder name can lead to illegal argument exception.\r\n   (Jaume Ortolà, Dawid Weiss)\r\n\r\n======================= morfologik-stemming 2.1.0 =======================\r\n\r\nNew Features\r\n\r\n * GH-74: Add dict_apply tool to apply a dictionary to a file or stdin. \r\n   (Dawid Weiss)\r\n\r\n * GH-73: Update Polish stemming dictionaries to polimorfologik 2.1. (Dawid Weiss)\r\n\r\nBug Fixes\r\n\r\n * GH-76: Consolidate and fix character encoding and decoding. (Dawid Weiss)\r\n\r\nOther Changes\r\n\r\n * GH-63: BufferUtils.ensureCapacity now clears the input buffer. This also\r\n   affects WordData methods that accept a reusable byte buffer -- it is now\r\n   always cleared prior to being flipped and returned. (Dawid Weiss)\r\n\r\n======================= morfologik-stemming 2.0.2 =======================\r\n\r\nBug Fixes\r\n\r\n * GH-68: WordData.clone() should be public. (Dawid Weiss)\r\n\r\nOther Changes\r\n\r\n * GH-64: reverted back OSGi annotations (bundle packaging). (Dawid Weiss)\r\n\r\n * GH-72: Rename tools: fsa_dump to fsa_decompile and fsa_build to fsa_compile.\r\n   Existing names remain as aliases but will be removed in 2.1.0. (Dawid Weiss)\r\n\r\n======================= morfologik-stemming 2.0.1 =======================\r\n\r\nBug Fixes\r\n\r\n * GH-65: Dictionary.read(URL) ends in NPE when reading from a JAR resource\r\n   (Dawid Weiss)\r\n\r\n======================= morfologik-stemming 2.0.0 =======================\r\n\r\nThis release comes with a cleanup of the API for Java 1.7. 
There are\r\nseveral aspects of the code that have been dropped (or added):\r\n\r\n  - NIO is used extensively, mostly for better error reporting.\r\n\r\n  - There is a simplified lookup of resources, no class-relative loading\r\n    of dictionaries, for example. The caller is in charge of looking\r\n    up either a URL to the dictionary or providing an InputStream to it.\r\n\r\n  - Removed internal caching of dictionaries from Dictionary. The \r\n    Polish stemmer is initialized lazily and reuses its dictionary \r\n    internally.\r\n\r\n  - Numerous minor tweaks of parameters. JavaDocs.\r\n\r\n  - A complete rewrite of the tools to compile (and decompile) FSA automata\r\n    and complete stemming dictionaries. The tools now assert the validity\r\n    of input data files and ensure no corrupt dictionaries can be produced.\r\n\r\nChanges in backwards compatibility policy\r\n\r\n * GH-64: Removed OSGi support because of Maven issues (forks build\r\n   phases, tests, etc.).\r\n\r\n * GH-62: Recompress Polish dictionary to use ';' as the separator.\r\n   (Dawid Weiss)\r\n\r\n * GH-59: Moved Dictionary.convertText utility to \r\n   DictionaryLookup.applyReplacements and fixed current reliance on map \r\n   ordering. (Dawid Weiss)\r\n\r\n * GH-55: Removed the \"distribution\" module entirely. The tools module\r\n   should be self-organizing. Complete overhaul of all the tools. \r\n   Examples. Simplified syntax, options and assumptions. \r\n   Input sanity checks and validation. (Dawid Weiss)\r\n\r\n * GH-57: Restructured the project into FSA traversal/ reading (only)\r\n   and FSA Builders (construction). This cleans up dependency\r\n   structure as well (HPPC is not required for FSA traversals).\r\n   (Dawid Weiss)\r\n\r\n * GH-54: Make Java 1.7 the minimum required version. Certain methods\r\n   that relied on File as arguments have been removed or changed to\r\n   accept Path. 
(Dawid Weiss)\r\n\r\nNew Features\r\n\r\n * GH-53: Review library dependencies and bring them up to date. \r\n   (Dawid Weiss)\r\n\r\n * Added OSGi support (Michal Hlavac)\r\n\r\n * GH-51: Remove and fail on deprecated metadata (fsa.dict.uses-*).\r\n   (Dawid Weiss)\r\n\r\nOptimizations\r\n\r\n * GH-61: Refactored the code to use one encoding/ decoding routine\r\n   and ByteBuffers. Removed dependency on Guava.\r\n\r\nBug Fixes\r\n\r\n * GH-32: make replaceRunOnWords return \"a lot\" for \"alot\", etc. \r\n   (Daniel Naber)\r\n\r\n * GH-34: ArrayIndexOutOfBoundsException with replacement-pairs. \r\n   (Jaume Ortolà, Daniel Naber)\r\n\r\n======================= morfologik-stemming 1.10.0 =======================\r\n\r\nChanges in backwards compatibility policy\r\n\r\nNew Features\r\n \r\n * Added OSGi support (Michal Hlavac)\r\n\r\nBug Fixes\r\n\r\n * GH-32: make replaceRunOnWords return \"a lot\" for \"alot\", etc. \r\n   (Daniel Naber)\r\n\r\n * GH-34: ArrayIndexOutOfBoundsException with replacement-pairs. \r\n   (Jaume Ortolà, Daniel Naber)\r\n\r\n======================= morfologik-stemming 1.9.1 =======================\r\n\r\nChanges in backwards compatibility policy\r\n\r\nNew Features\r\n\r\nBug Fixes\r\n\r\n * Now only the longest replacement key is selected when using replacement\r\n   pairs (thanks to Jaume Ortolà). This fixes a subtle regression\r\n   introduced in 1.9.0.\r\n\r\nOptimizations\r\n\r\n======================= morfologik-stemming 1.9.0 =======================\r\n\r\nChanges in backwards compatibility policy\r\n\r\nNew Features\r\n\r\n* Added capability to normalize input and output strings for dictionaries.\r\n  This is useful for dictionaries that do not support ligatures, for example.\r\n  To specify input conversion, use the property 'fsa.dict.input-conversion'\r\n  in the .info file. The output conversion (for example, to use ligatures)\r\n  is specified by 'fsa.dict.output-conversion'. 
Note that lengthy \r\n  conversion tables may negatively affect performance.\r\n\r\nBug Fixes\r\n\r\nOptimizations\r\n\r\n * The suggestion search for the speller is now performed directly by traversing\r\n   the dictionary automaton, which makes it much more time-efficient (thanks\r\n   to Jaume Ortolà).\r\n\r\n * Suggestions are generated faster by avoiding unnecessary case conversions.\r\n\r\n======================= morfologik-stemming 1.8.3 =======================\r\n\r\nBug Fixes\r\n\r\n* Fixed a bug for spelling dictionaries in non-UTF encodings with \r\n  separators: strings with non-encodable characters might have been \r\n  accepted as spelled correctly even if they were missing in the \r\n  dictionary.\r\n\r\n======================= morfologik-stemming 1.8.2 =======================\r\n\r\nNew Features\r\n\r\n* Added the option of using frequencies of words for sorting spelling \r\n  replacements. It can be used in both spelling and tagging dictionaries.\r\n  'fsa.dict.frequency-included=true' must be added to the .info file.\r\n  For building the dictionary, add at the end of each entry a separator and \r\n  a character between A and Z (A: less frequently used words; \r\n  Z: more frequently used words). (Jaume Ortolà)\r\n\r\n======================= morfologik-stemming 1.8.1 =======================\r\n\r\nChanges in backwards compatibility policy\r\n\r\n* MorphEncodingTool will *fail* if it detects data/lines that contain the \r\n  separator annotation byte. This is because such lines get encoded into\r\n  something that the decoder cannot process. You can use \\u0000 as the \r\n  annotation byte to avoid clashes with any existing data.\r\n\r\n======================= morfologik-stemming 1.8.0 =======================\r\n\r\nChanges in backwards compatibility policy\r\n\r\n* Command-line option changes to MorphEncodingTool - it now accepts an explicit\r\n  name of the sequence encoder, not infix/suffix/prefix booleans.  
\r\n\r\n* Updating dependencies to their newest versions.\r\n\r\nNew Features\r\n\r\n* Dictionary .info files can specify the sequence decoder explicitly:\r\n  suffix, prefix, infix, none are supported. For backwards compatibility,\r\n  fsa.dict.uses-prefixes, fsa.dict.uses-infixes and fsa.dict.uses-suffixes\r\n  are still supported, but will be removed in the next major version.\r\n\r\n* Rewritten implementation of tab-separated data files (tab2morph tool).\r\n  The output should yield smaller files, especially for prefix encoding\r\n  and infix encoding. This does *not* necessarily mean smaller automata\r\n  but we're working on getting these as well.\r\n\r\n  Example output before and after refactoring:\r\n  \r\n  Prefix coder:\r\n  postmodernizm|modernizm|xyz => [before] postmodernizm+ANmodernizm+xyz\r\n                              => [after ] postmodernizm+EA+xyz\r\n  \r\n  Infix coder:\r\n  laquelle|lequel|D f s       => [before] laquelle+AAHequel+D f s\r\n                              => [after ] laquelle+AGAquel+D f s\r\n\r\n* Changed the default format of the Polish dictionary from infix\r\n  encoded to prefix encoded (smaller output size).\r\n\r\nOptimizations\r\n\r\n* A number of internal implementation cleanups and refactorings.\r\n\r\n======================= morfologik-stemming 1.7.2 =======================\r\n\r\n* A quick fix for incorrect decoding of certain suffixes (long suffixes).\r\n\r\n* Increased max. recursion level in Speller to 6 from 4. 
(Jaume Ortolà)\r\n\r\n======================= morfologik-stemming 1.7.1 =======================\r\n\r\n* Fixed a couple of bugs in morfologik-speller (Jaume Ortolà).\r\n\r\n======================= morfologik-stemming 1.7.0 =======================\r\n\r\n* Changed DictionaryMetadata API (access methods for encoder/decoder).\r\n\r\n* Initial version of morfologik-speller component.\r\n\r\n* Minor changes to the FSADumpTool: the header block is always UTF-8 \r\n  encoded, the default platform encoding does not matter. This is done to \r\n  always support certain attributes that may be unicode (and would be \r\n  incorrectly dumped otherwise).\r\n\r\n* Metadata *.info files can now be encoded in UTF-8 to support text \r\n  attributes that otherwise would require text2ascii conversion.\r\n\r\n======================= morfologik-stemming 1.6.0 =======================\r\n\r\n* Update morfologik-polish data to Morfologik 2.0 PoliMorf (08.03.2013). \r\n  Deprecated DICTIONARY constants (unified dictionary only).\r\n          \r\n* Important! The format of encoding tags has changed and is now \r\n  multiple-tags-per-lemma. The value returned from WordData#getTag \r\n  may be a number of tags concatenated with a \"+\" character. Previously\r\n  the same lemma/stem would be returned multiple times, each time with \r\n  a different tag.\r\n\r\n* Moving code from SourceForge to github.\r\n\r\n======================= morfologik-stemming 1.5.5 =======================\r\n\r\n* Made hppc an optional component of morfologik-fsa. 
It is required\r\n  for constructing FSA automata only and causes problems with javac.\r\n  http://stackoverflow.com/questions/3800462/can-i-prevent-javac-accessing-the-class-path-from-the-manifests-of-our-third-par\r\n\r\n======================= morfologik-stemming 1.5.4 =======================\r\n\r\n* Replaced byte-based speller with CharBasedSpeller.\r\n\r\n* Warn about UTF-8 files with BOM.\r\n \r\n* Fixed a typo in package name (speller).\r\n\r\n======================= morfologik-stemming 1.5.3 =======================\r\n\r\n* Initial release of spelling correction submodule.\r\n\r\n* Updated morfologik-polish data to morfologik 1.9 [12.06.2012]\r\n\r\n* Updated morfologik-polish licensing info to BSD (yay).\r\n\r\n======================= morfologik-stemming 1.5.2 =======================\r\n\r\n* An alternative Polish dictionary added (BSD licensed): SGJP (Morfeusz). \r\n  PolishStemmer can now take an enum switching between the dictionary to \r\n  be used or combine both.\r\n\r\n* Project split into modules. A single jar version (no external \r\n  dependencies) added by transforming via proguard.\r\n\r\n* Enabled use of escaped special characters in the tab2morph tool.\r\n\r\n* Added guards against the input term having separator character \r\n  somewhere (this will now return an empty list of matches). Added \r\n  getSeparatorChar to DictionaryLookup so that one can check for this \r\n  condition manually, if needed.\r\n\r\n======================= morfologik-stemming 1.5.1 =======================\r\n\r\n* Build system switch to Maven (tested with Maven2).\r\n\r\n======================= morfologik-stemming 1.5.0 =======================\r\n\r\n* Major size saving improvements in CFSA2. Built in Polish dictionary \r\n  size decreased from 2,811,345 to 1,806,661 (CFSA2 format).\r\n\r\n* FSABuilder returns a ready-to-be-used FSA (ConstantArcSizeFSA). 
\r\n  Construction overhead for this automaton is around zero (it is \r\n  immediately serialized in-memory).\r\n\r\n* Polish dictionary updated to Morfologik 1.7. [19.11.2010]\r\n\r\n* Added an option to serialize automaton to CFSA2 or FSA5 directly from \r\n  fsa_build.\r\n\r\n* CFSA is now deprecated for serialization (the code still reads CFSA \r\n  automata, but will not be able to serialize them). Use CFSA2.\r\n\r\n* Added immediate state interning. Speedup in automaton construction by \r\n  about 30%, memory use decreased significantly (did not perform exact \r\n  measurements, but incremental construction from presorted data should \r\n  consume way less memory).\r\n\r\n* Added an option to build FSA from already sorted data (--sorted). \r\n  Avoids in-memory sorting. Pipe the input through shell sort if \r\n  building FSA from large data.\r\n\r\n* Changed the default ordering from Java signed-byte to C-like unsigned \r\n  byte value. This lets one use GNU sort to sort the input using \r\n  'export LC_ALL=C; sort input'.  \r\n\r\n* Added traversal routines to calculate perfect hashing based on \r\n  FSA with NUMBERS.\r\n\r\n* Changed the order of serialized arcs in the binary serializer for FSA5 \r\n  to lexicographic (consistent with the input). Depth-first traversal \r\n  recreates the input, in other words.\r\n\r\n* Removed character-based automata.\r\n\r\n* Incompatible API changes to FSA builders (moved to morfologik.fsa).\r\n\r\n* Incompatible API changes to FSATraversalHelper. Cleaned up match \r\n  types, added unit tests. 
\r\n\r\n* An external dependency HPPC (high performance primitive collections) \r\n  is now required\r\n\r\n======================= morfologik-stemming 1.4.1 =======================\r\n\r\n* Upgrade of the built-in Morfologik dictionary for Polish (in CFSA \r\n  format).\r\n\r\n* Added options to define custom FILLER and ANNOT_SEPARATOR bytes in the \r\n  fsa_build tool.\r\n\r\n* Corrected an inconsistency with the C fsa package -- FILLER and \r\n  ANNOT_SEPARATOR characters are now identical with the C version.\r\n          \r\n* Cleanups to the tools' launcher -- will complain about missing JARs, \r\n  if any.\r\n\r\n======================= morfologik-stemming 1.4.0 =======================\r\n\r\n* Added FSA5 construction in Java (on byte sequences). Added preliminary \r\n  support for character sequences. Added a command line tool for FSA5\r\n  construction from unsorted data (sorting is done in-memory).\r\n\r\n* Added a tool to encode tab-delimited dictionaries to the format \r\n  accepted by fsa_build and FSA5 construction tool.\r\n\r\n* Added a new version of Morfologik dictionary for Polish (in CFSA format).\r\n\r\n======================= morfologik-stemming 1.3.0 =======================\r\n\r\n* Added runtime checking for tools availability so that unavailable tools \r\n  don't show up in the list.\r\n\r\n* Recompressed the built-in Polish dictionary to CFSA. \r\n\r\n* Cleaned up FSA/Dictionary separation. FSAs don't store encoding any more \r\n  (because it does not make sense for them to do so). The FSA is a purely \r\n  abstract class pushing functionality to sub-classes. Input stream \r\n  reading cleaned up.\r\n\r\n* Added initial code for CFSA (compressed FSA). Reduces automata size \r\n  about 10%. \r\n\r\n* Changes in the public API. Implementation classes renamed (FSAVer5Impl \r\n  into FSA5). 
Major tweaks and tunes to the API.\r\n\r\n* Added support for version 5 automata built with NUMBERS flag (an extra \r\n  field stored for each node).\r\n\r\n======================= morfologik-stemming 1.2.2 =======================\r\n\r\n* License switch to plain BSD (removed the patent clause which did not \r\n  make much sense anyway).\r\n\r\n* The build ZIP now includes licenses for individual JARs (prevents \r\n  confusion). \r\n\r\n======================= morfologik-stemming 1.2.1 =======================\r\n\r\n* Fixed tool launching routines.\r\n\r\n======================= morfologik-stemming 1.2.0 =======================\r\n\r\n* Package hierarchy reorganized.\r\n\r\n* Removed stempel (heuristic stemmer for Polish).\r\n\r\n* Code updated to Java 1.5. \r\n\r\n* The API has changed in many places (enums instead of constants, \r\n  generics, iterables, removed explicit Arc and Node classes and replaced \r\n  by int pointers).\r\n\r\n* FSA traversal in version 1.2 is implemented on top of primitive data \r\n  structures (int pointers) to keep memory usage minimal. The speed \r\n  boost gained from this is enormous and justifies less readable code. We\r\n  strongly advise using the provided iterators and helper functions \r\n  for matching state sequences in the FSA.\r\n\r\n* Tools updated. Dumping existing FSAs is much, much faster now.        \r\n\r\n======================= morfologik-stemming 1.1.4 =======================\r\n\r\n* Fixed a bug that caused UTF-8 dictionaries to be garbled. Now it \r\n  should be relatively safe to use UTF-8 dictionaries (note: separators \r\n  cannot be multibyte UTF-8 characters, yet this is probably a very \r\n  rare case).\r\n\r\n======================= morfologik-stemming 1.1.3 =======================\r\n\r\n* Fixed a bug causing NPE when the library is called with null context \r\n  class loader (happens when the JVM is invoked from a JNI-attached \r\n  thread). 
Thanks to Patrick Luby for the report and detailed analysis.\r\n\r\n* Updated the built-in dictionary to the newest version available. \r\n\r\n======================= morfologik-stemming 1.1.2 =======================\r\n\r\n* Fixed a bug causing JAR file locking (by implementing a workaround).\r\n\r\n* Fixed the build script (manifest file was broken).\r\n\r\n======================= morfologik-stemming 1.1.1 =======================\r\n\r\n* Distribution script fixes. The final JAR does not contain test classes \r\n  and resources. Size trimmed almost twice compared to release 1.1.\r\n\r\n* Updated the dump tool to accept dictionary metadata files.\r\n\r\n======================= morfologik-stemming 1.1 =========================\r\n\r\n* Introduced auxiliary \"meta\" information files about compressed \r\n  dictionaries. Such information includes the delimiter symbol, encoding \r\n  and infix/prefix/postfix decoding info.\r\n\r\n* The API has changed (repackaging). Some deprecated methods have been \r\n  removed. This is a major redesign/ upgrade; you will have to adjust \r\n  your source code.\r\n\r\n* Cleaned up APIs and interfaces.\r\n\r\n* Added infrastructure for command-line tool launching.\r\n\r\n* Cleaned up tests.\r\n\r\n* Changed project name to morfologik-stemmers and ownership to \r\n  (c) Morfologik.\r\n\r\n======================= morfologik-stemming 1.0.7 =======================\r\n\r\n* Removed one bug in fsa 'compression' decoding.\r\n\r\n======================= morfologik-stemming 1.0.6 =======================\r\n\r\n* Customized version of stempel replaced with a standard distribution.\r\n\r\n* Removed deprecated methods and classes.\r\n          \r\n* Added infix and prefix encoding support for fsa dictionaries.\r\n\r\n======================= morfologik-stemming 1.0.5 =======================\r\n\r\n* Added filler and separator char dumps to FSADump.\r\n          \r\n* A major bug in automaton traversal corrected. 
Upgrade when possible.\r\n          \r\n* Certain API changes were introduced; older methods are now deprecated\r\n  and will be removed in the future.\r\n\r\n======================= morfologik-stemming 1.0.4 =======================\r\n\r\n* Licenses for full and no-dict versions.\r\n\r\n======================= morfologik-stemming 1.0.3 =======================\r\n\r\n* Project code moved to SourceForge (subproject of Morfologik).\r\n  LICENSE CHANGED FROM PUBLIC DOMAIN TO BSD (doesn't change much, but \r\n  clarifies legal issues).\r\n\r\n======================= morfologik-stemming 1.0.2 =======================\r\n\r\n* Added a Lametyzator constructor which allows custom dictionary stream, \r\n  field delimiters and encoding. Added an option for building stand-alone \r\n  JAR that does not include the default polish dictionary.\r\n\r\n======================= morfologik-stemming 1.0.1 =======================\r\n\r\n* Code cleanups. Added a method that returns the third automaton's column\r\n  (form).\r\n\r\n======================= morfologik-stemming 1.0 =========================\r\n\r\n* Initial release\r\n"
  },
  {
    "path": "CONTRIBUTING.txt",
    "content": "Contributions are welcome!\r\n\r\nUse a modern Java version for compilation and testing (JDK 21+ recommended).\r\n\r\nIf you use Eclipse, set up project formatting and validation with:\r\n\r\nmvn -Peclipse"
  },
  {
    "path": "LICENSE.txt",
    "content": "\r\nCopyright (c) 2006 Dawid Weiss\r\nCopyright (c) 2007-2015 Dawid Weiss, Marcin Miłkowski\r\nAll rights reserved.\r\n\r\nRedistribution and use in source and binary forms, with or without modification, \r\nare permitted provided that the following conditions are met:\r\n\r\n    * Redistributions of source code must retain the above copyright notice, \r\n    this list of conditions and the following disclaimer.\r\n    \r\n    * Redistributions in binary form must reproduce the above copyright notice, \r\n    this list of conditions and the following disclaimer in the documentation \r\n    and/or other materials provided with the distribution.\r\n    \r\n    * Neither the name of Morfologik nor the names of its contributors \r\n    may be used to endorse or promote products derived from this software \r\n    without specific prior written permission.\r\n\r\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND \r\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED \r\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE \r\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR \r\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES \r\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; \r\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON \r\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT \r\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS \r\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE."
  },
  {
    "path": "README.txt",
    "content": "MORFOLOGIK\r\n==========\r\n\r\nTools for finite state automata construction and dictionary-based \r\nmorphological analysis.\r\n\r\nMorphosyntactic dictionary for the Polish language.\r\n\r\nSee the following for more information:\r\n  Wiki: https://github.com/morfologik/morfologik-stemming/wiki\r\n  Bugs: https://github.com/morfologik/morfologik-stemming/issues\r\n\r\nSee CONTRIBUTING.txt if you'd like to add or change something.\r\n\r\nSee LICENSE.txt to make your company's lawyer happy.\r\n\r\nSee CHANGES.txt for API changes and updates.\r\n\r\n(c) Marcin Miłkowski, Dawid Weiss\r\n"
  },
  {
    "path": "etc/eclipse/settings/org.eclipse.jdt.core.prefs",
    "content": "eclipse.preferences.version=1\r\norg.eclipse.jdt.core.compiler.annotation.inheritNullAnnotations=disabled\r\norg.eclipse.jdt.core.compiler.annotation.missingNonNullByDefaultAnnotation=ignore\r\norg.eclipse.jdt.core.compiler.annotation.nonnull=org.eclipse.jdt.annotation.NonNull\r\norg.eclipse.jdt.core.compiler.annotation.nonnullbydefault=org.eclipse.jdt.annotation.NonNullByDefault\r\norg.eclipse.jdt.core.compiler.annotation.nullable=org.eclipse.jdt.annotation.Nullable\r\norg.eclipse.jdt.core.compiler.annotation.nullanalysis=disabled\r\norg.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled\r\norg.eclipse.jdt.core.compiler.codegen.methodParameters=do not generate\r\norg.eclipse.jdt.core.compiler.codegen.targetPlatform=1.7\r\norg.eclipse.jdt.core.compiler.codegen.unusedLocal=preserve\r\norg.eclipse.jdt.core.compiler.compliance=1.7\r\norg.eclipse.jdt.core.compiler.debug.lineNumber=generate\r\norg.eclipse.jdt.core.compiler.debug.localVariable=generate\r\norg.eclipse.jdt.core.compiler.debug.sourceFile=generate\r\norg.eclipse.jdt.core.compiler.doc.comment.support=enabled\r\norg.eclipse.jdt.core.compiler.problem.annotationSuperInterface=warning\r\norg.eclipse.jdt.core.compiler.problem.assertIdentifier=error\r\norg.eclipse.jdt.core.compiler.problem.autoboxing=ignore\r\norg.eclipse.jdt.core.compiler.problem.comparingIdentical=warning\r\norg.eclipse.jdt.core.compiler.problem.deadCode=warning\r\norg.eclipse.jdt.core.compiler.problem.deprecation=warning\r\norg.eclipse.jdt.core.compiler.problem.deprecationInDeprecatedCode=disabled\r\norg.eclipse.jdt.core.compiler.problem.deprecationWhenOverridingDeprecatedMethod=disabled\r\norg.eclipse.jdt.core.compiler.problem.discouragedReference=warning\r\norg.eclipse.jdt.core.compiler.problem.emptyStatement=ignore\r\norg.eclipse.jdt.core.compiler.problem.enumIdentifier=error\r\norg.eclipse.jdt.core.compiler.problem.explicitlyClosedAutoCloseable=ignore\r\norg.eclipse.jdt.core.compiler.problem.fallthroughCase=ignore\r\n
org.eclipse.jdt.core.compiler.problem.fatalOptionalError=disabled\r\norg.eclipse.jdt.core.compiler.problem.fieldHiding=ignore\r\norg.eclipse.jdt.core.compiler.problem.finalParameterBound=warning\r\norg.eclipse.jdt.core.compiler.problem.finallyBlockNotCompletingNormally=warning\r\norg.eclipse.jdt.core.compiler.problem.forbiddenReference=warning\r\norg.eclipse.jdt.core.compiler.problem.hiddenCatchBlock=warning\r\norg.eclipse.jdt.core.compiler.problem.includeNullInfoFromAsserts=disabled\r\norg.eclipse.jdt.core.compiler.problem.incompatibleNonInheritedInterfaceMethod=warning\r\norg.eclipse.jdt.core.compiler.problem.incompleteEnumSwitch=warning\r\norg.eclipse.jdt.core.compiler.problem.indirectStaticAccess=ignore\r\norg.eclipse.jdt.core.compiler.problem.invalidJavadoc=error\r\norg.eclipse.jdt.core.compiler.problem.invalidJavadocTags=enabled\r\norg.eclipse.jdt.core.compiler.problem.invalidJavadocTagsDeprecatedRef=disabled\r\norg.eclipse.jdt.core.compiler.problem.invalidJavadocTagsNotVisibleRef=disabled\r\norg.eclipse.jdt.core.compiler.problem.invalidJavadocTagsVisibility=protected\r\norg.eclipse.jdt.core.compiler.problem.localVariableHiding=ignore\r\norg.eclipse.jdt.core.compiler.problem.methodWithConstructorName=warning\r\norg.eclipse.jdt.core.compiler.problem.missingDefaultCase=ignore\r\norg.eclipse.jdt.core.compiler.problem.missingDeprecatedAnnotation=ignore\r\norg.eclipse.jdt.core.compiler.problem.missingEnumCaseDespiteDefault=disabled\r\norg.eclipse.jdt.core.compiler.problem.missingHashCodeMethod=ignore\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocComments=ignore\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocCommentsOverriding=disabled\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocCommentsVisibility=public\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocTagDescription=return_tag\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocTags=error\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocTagsMethodTypeParameters=disabled\r\
norg.eclipse.jdt.core.compiler.problem.missingJavadocTagsOverriding=disabled\r\norg.eclipse.jdt.core.compiler.problem.missingJavadocTagsVisibility=protected\r\norg.eclipse.jdt.core.compiler.problem.missingOverrideAnnotation=ignore\r\norg.eclipse.jdt.core.compiler.problem.missingOverrideAnnotationForInterfaceMethodImplementation=enabled\r\norg.eclipse.jdt.core.compiler.problem.missingSerialVersion=warning\r\norg.eclipse.jdt.core.compiler.problem.missingSynchronizedOnInheritedMethod=ignore\r\norg.eclipse.jdt.core.compiler.problem.noEffectAssignment=warning\r\norg.eclipse.jdt.core.compiler.problem.noImplicitStringConversion=warning\r\norg.eclipse.jdt.core.compiler.problem.nonExternalizedStringLiteral=ignore\r\norg.eclipse.jdt.core.compiler.problem.nonnullParameterAnnotationDropped=warning\r\norg.eclipse.jdt.core.compiler.problem.nullAnnotationInferenceConflict=error\r\norg.eclipse.jdt.core.compiler.problem.nullReference=warning\r\norg.eclipse.jdt.core.compiler.problem.nullSpecViolation=error\r\norg.eclipse.jdt.core.compiler.problem.nullUncheckedConversion=warning\r\norg.eclipse.jdt.core.compiler.problem.overridingPackageDefaultMethod=warning\r\norg.eclipse.jdt.core.compiler.problem.parameterAssignment=ignore\r\norg.eclipse.jdt.core.compiler.problem.possibleAccidentalBooleanAssignment=ignore\r\norg.eclipse.jdt.core.compiler.problem.potentialNullReference=ignore\r\norg.eclipse.jdt.core.compiler.problem.potentiallyUnclosedCloseable=ignore\r\norg.eclipse.jdt.core.compiler.problem.rawTypeReference=warning\r\norg.eclipse.jdt.core.compiler.problem.redundantNullAnnotation=warning\r\norg.eclipse.jdt.core.compiler.problem.redundantNullCheck=ignore\r\norg.eclipse.jdt.core.compiler.problem.redundantSpecificationOfTypeArguments=ignore\r\norg.eclipse.jdt.core.compiler.problem.redundantSuperinterface=ignore\r\norg.eclipse.jdt.core.compiler.problem.reportMethodCanBePotentiallyStatic=ignore\r\norg.eclipse.jdt.core.compiler.problem.reportMethodCanBeStatic=ignore\r\norg.eclipse.jdt.core.
compiler.problem.specialParameterHidingField=disabled\r\norg.eclipse.jdt.core.compiler.problem.staticAccessReceiver=warning\r\norg.eclipse.jdt.core.compiler.problem.suppressOptionalErrors=disabled\r\norg.eclipse.jdt.core.compiler.problem.suppressWarnings=enabled\r\norg.eclipse.jdt.core.compiler.problem.syntacticNullAnalysisForFields=disabled\r\norg.eclipse.jdt.core.compiler.problem.syntheticAccessEmulation=ignore\r\norg.eclipse.jdt.core.compiler.problem.typeParameterHiding=warning\r\norg.eclipse.jdt.core.compiler.problem.unavoidableGenericTypeProblems=enabled\r\norg.eclipse.jdt.core.compiler.problem.uncheckedTypeOperation=warning\r\norg.eclipse.jdt.core.compiler.problem.unclosedCloseable=warning\r\norg.eclipse.jdt.core.compiler.problem.undocumentedEmptyBlock=ignore\r\norg.eclipse.jdt.core.compiler.problem.unhandledWarningToken=warning\r\norg.eclipse.jdt.core.compiler.problem.unnecessaryElse=ignore\r\norg.eclipse.jdt.core.compiler.problem.unnecessaryTypeCheck=ignore\r\norg.eclipse.jdt.core.compiler.problem.unqualifiedFieldAccess=ignore\r\norg.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownException=ignore\r\norg.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownExceptionExemptExceptionAndThrowable=enabled\r\norg.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownExceptionIncludeDocCommentReference=enabled\r\norg.eclipse.jdt.core.compiler.problem.unusedDeclaredThrownExceptionWhenOverriding=disabled\r\norg.eclipse.jdt.core.compiler.problem.unusedImport=warning\r\norg.eclipse.jdt.core.compiler.problem.unusedLabel=warning\r\norg.eclipse.jdt.core.compiler.problem.unusedLocal=warning\r\norg.eclipse.jdt.core.compiler.problem.unusedObjectAllocation=ignore\r\norg.eclipse.jdt.core.compiler.problem.unusedParameter=ignore\r\norg.eclipse.jdt.core.compiler.problem.unusedParameterIncludeDocCommentReference=enabled\r\norg.eclipse.jdt.core.compiler.problem.unusedParameterWhenImplementingAbstract=disabled\r\norg.eclipse.jdt.core.compiler.problem.unusedParameterWhenOverri
dingConcrete=disabled\r\norg.eclipse.jdt.core.compiler.problem.unusedPrivateMember=warning\r\norg.eclipse.jdt.core.compiler.problem.unusedTypeParameter=ignore\r\norg.eclipse.jdt.core.compiler.problem.unusedWarningToken=warning\r\norg.eclipse.jdt.core.compiler.problem.varargsArgumentNeedCast=warning\r\norg.eclipse.jdt.core.compiler.source=1.7\r\norg.eclipse.jdt.core.formatter.align_type_members_on_columns=false\r\norg.eclipse.jdt.core.formatter.alignment_for_arguments_in_allocation_expression=16\r\norg.eclipse.jdt.core.formatter.alignment_for_arguments_in_annotation=0\r\norg.eclipse.jdt.core.formatter.alignment_for_arguments_in_enum_constant=16\r\norg.eclipse.jdt.core.formatter.alignment_for_arguments_in_explicit_constructor_call=16\r\norg.eclipse.jdt.core.formatter.alignment_for_arguments_in_method_invocation=16\r\norg.eclipse.jdt.core.formatter.alignment_for_arguments_in_qualified_allocation_expression=16\r\norg.eclipse.jdt.core.formatter.alignment_for_assignment=0\r\norg.eclipse.jdt.core.formatter.alignment_for_binary_expression=16\r\norg.eclipse.jdt.core.formatter.alignment_for_compact_if=16\r\norg.eclipse.jdt.core.formatter.alignment_for_conditional_expression=80\r\norg.eclipse.jdt.core.formatter.alignment_for_enum_constants=0\r\norg.eclipse.jdt.core.formatter.alignment_for_expressions_in_array_initializer=16\r\norg.eclipse.jdt.core.formatter.alignment_for_method_declaration=0\r\norg.eclipse.jdt.core.formatter.alignment_for_multiple_fields=16\r\norg.eclipse.jdt.core.formatter.alignment_for_parameters_in_constructor_declaration=16\r\norg.eclipse.jdt.core.formatter.alignment_for_parameters_in_method_declaration=16\r\norg.eclipse.jdt.core.formatter.alignment_for_resources_in_try=80\r\norg.eclipse.jdt.core.formatter.alignment_for_selector_in_method_invocation=16\r\norg.eclipse.jdt.core.formatter.alignment_for_superclass_in_type_declaration=16\r\norg.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_enum_declaration=16\r\norg.eclipse.jdt.core.formatter.alig
nment_for_superinterfaces_in_type_declaration=16\r\norg.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_constructor_declaration=16\r\norg.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_method_declaration=16\r\norg.eclipse.jdt.core.formatter.alignment_for_union_type_in_multicatch=16\r\norg.eclipse.jdt.core.formatter.blank_lines_after_imports=1\r\norg.eclipse.jdt.core.formatter.blank_lines_after_package=1\r\norg.eclipse.jdt.core.formatter.blank_lines_before_field=0\r\norg.eclipse.jdt.core.formatter.blank_lines_before_first_class_body_declaration=0\r\norg.eclipse.jdt.core.formatter.blank_lines_before_imports=1\r\norg.eclipse.jdt.core.formatter.blank_lines_before_member_type=1\r\norg.eclipse.jdt.core.formatter.blank_lines_before_method=1\r\norg.eclipse.jdt.core.formatter.blank_lines_before_new_chunk=1\r\norg.eclipse.jdt.core.formatter.blank_lines_before_package=0\r\norg.eclipse.jdt.core.formatter.blank_lines_between_import_groups=1\r\norg.eclipse.jdt.core.formatter.blank_lines_between_type_declarations=1\r\norg.eclipse.jdt.core.formatter.brace_position_for_annotation_type_declaration=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_anonymous_type_declaration=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_array_initializer=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_block=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_block_in_case=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_constructor_declaration=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_enum_constant=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_enum_declaration=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_lambda_body=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_method_declaration=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_switch=end_of_line\r\norg.eclipse.jdt.core.formatter.brace_position_for_type_decl
aration=end_of_line\r\norg.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_block_comment=false\r\norg.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_javadoc_comment=false\r\norg.eclipse.jdt.core.formatter.comment.format_block_comments=false\r\norg.eclipse.jdt.core.formatter.comment.format_header=false\r\norg.eclipse.jdt.core.formatter.comment.format_html=true\r\norg.eclipse.jdt.core.formatter.comment.format_javadoc_comments=true\r\norg.eclipse.jdt.core.formatter.comment.format_line_comments=false\r\norg.eclipse.jdt.core.formatter.comment.format_source_code=true\r\norg.eclipse.jdt.core.formatter.comment.indent_parameter_description=true\r\norg.eclipse.jdt.core.formatter.comment.indent_root_tags=true\r\norg.eclipse.jdt.core.formatter.comment.insert_new_line_before_root_tags=insert\r\norg.eclipse.jdt.core.formatter.comment.insert_new_line_for_parameter=insert\r\norg.eclipse.jdt.core.formatter.comment.line_length=80\r\norg.eclipse.jdt.core.formatter.comment.new_lines_at_block_boundaries=true\r\norg.eclipse.jdt.core.formatter.comment.new_lines_at_javadoc_boundaries=true\r\norg.eclipse.jdt.core.formatter.comment.preserve_white_space_between_code_and_line_comments=false\r\norg.eclipse.jdt.core.formatter.compact_else_if=true\r\norg.eclipse.jdt.core.formatter.continuation_indentation=2\r\norg.eclipse.jdt.core.formatter.continuation_indentation_for_array_initializer=2\r\norg.eclipse.jdt.core.formatter.disabling_tag=@formatter\\:off\r\norg.eclipse.jdt.core.formatter.enabling_tag=@formatter\\:on\r\norg.eclipse.jdt.core.formatter.format_guardian_clause_on_one_line=false\r\norg.eclipse.jdt.core.formatter.format_line_comment_starting_on_first_column=true\r\norg.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_annotation_declaration_header=true\r\norg.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_constant_header=true\r\norg.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_declaration_header=true\r\norg.eclipse.
jdt.core.formatter.indent_body_declarations_compare_to_type_header=true\r\norg.eclipse.jdt.core.formatter.indent_breaks_compare_to_cases=true\r\norg.eclipse.jdt.core.formatter.indent_empty_lines=false\r\norg.eclipse.jdt.core.formatter.indent_statements_compare_to_block=true\r\norg.eclipse.jdt.core.formatter.indent_statements_compare_to_body=true\r\norg.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_cases=true\r\norg.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_switch=true\r\norg.eclipse.jdt.core.formatter.indentation.size=2\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_field=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_local_variable=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_method=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_package=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_parameter=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_type=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_label=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_opening_brace_in_array_initializer=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_after_type_annotation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_at_end_of_file_if_missing=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_before_catch_in_try_statement=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_before_closing_brace_in_array_initializer=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_before_else_in_if_statement=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_before_finally_in_try_statement=do not insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_before_while_in_do_statement=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_annotation_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_anonymous_type_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_block=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_constant=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_method_body=insert\r\norg.eclipse.jdt.core.formatter.insert_new_line_in_empty_type_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_and_in_type_parameter=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_assignment_operator=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation_type_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_binary_operator=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_arguments=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_parameters=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_closing_brace_in_block=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_closing_paren_in_cast=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_colon_in_assert=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_colon_in_case=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_colon_in_conditional=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_colon_in_for=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_colon_in_labeled_statement=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_allocation_expression=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_annotation=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_ar
ray_initializer=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_parameters=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_throws=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_constant_arguments=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_declarations=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_explicitconstructorcall_arguments=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_increments=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_inits=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_parameters=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_throws=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_invocation_arguments=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_field_declarations=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_local_declarations=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_parameterized_type_reference=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_superinterfaces=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_arguments=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_parameters=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_ellipsis=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_lambda_arrow=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_parameterized_type_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_parameters=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_brace_in_array_initializer=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_allocation_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_annotation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_cast=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_catch=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_constructor_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_enum_constant=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_for=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_if=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_invocation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_parenthesized_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_switch=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_synchronized=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_try=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_while=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_postfix_operator=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_prefix_operator=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_question_in_conditional=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_question_in_wildcard=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_for=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_try_resources=insert\r\norg.eclipse.jdt.core.formatter.insert_space_after_unary_operator=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_and_in_type_parameter=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_assignment_operator=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_at_in_annotation_type_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_binary_operator=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_parameterized_type_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_parameters=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_brace_in_array_initializer=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_allocation_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_annotation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_cast=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_catch=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_constructor_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_enum_constant=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_for=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_if=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_declaration=do 
not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_invocation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_parenthesized_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_switch=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_synchronized=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_try=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_while=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_colon_in_assert=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_colon_in_case=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_colon_in_conditional=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_colon_in_default=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_colon_in_for=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_colon_in_labeled_statement=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_allocation_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_annotation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_array_initializer=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_parameters=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_throws=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_constant_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_declarations=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_explicitconstructorcall_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_increments=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_inits=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_parameters=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_throws=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_invocation_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_field_declarations=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_local_declarations=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_parameterized_type_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_superinterfaces=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_parameters=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_ellipsis=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_lambda_arrow=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_parameterized_type_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_arguments=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_parameters=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_annotation_type_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_anonymous_type_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_array_initializer=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_block=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_constructor_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_constant=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_method_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_switch=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_type_declaration=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_allocation_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_type_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation_type_member_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_catch=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_constructor_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_enum_constant=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_for=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_if=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_invocation=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_parenthesized_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_switch=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_synchronized=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_try=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_while=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_return=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_throw=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_postfix_operator=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_prefix_operator=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_question_in_conditional=insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_question_in_wildcard=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_semicolon=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_for=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_try_resources=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_before_unary_operator=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_brackets_in_array_type_reference=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_braces_in_array_initializer=do not 
insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_brackets_in_array_allocation_expression=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_annotation_type_member_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_constructor_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_enum_constant=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_declaration=do not insert\r\norg.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_invocation=do not insert\r\norg.eclipse.jdt.core.formatter.join_lines_in_comments=true\r\norg.eclipse.jdt.core.formatter.join_wrapped_lines=true\r\norg.eclipse.jdt.core.formatter.keep_else_statement_on_same_line=false\r\norg.eclipse.jdt.core.formatter.keep_empty_array_initializer_on_one_line=false\r\norg.eclipse.jdt.core.formatter.keep_imple_if_on_one_line=false\r\norg.eclipse.jdt.core.formatter.keep_then_statement_on_same_line=false\r\norg.eclipse.jdt.core.formatter.lineSplit=120\r\norg.eclipse.jdt.core.formatter.never_indent_block_comments_on_first_column=false\r\norg.eclipse.jdt.core.formatter.never_indent_line_comments_on_first_column=false\r\norg.eclipse.jdt.core.formatter.number_of_blank_lines_at_beginning_of_method_body=0\r\norg.eclipse.jdt.core.formatter.number_of_empty_lines_to_preserve=1\r\norg.eclipse.jdt.core.formatter.put_empty_statement_on_new_line=true\r\norg.eclipse.jdt.core.formatter.tabulation.char=space\r\norg.eclipse.jdt.core.formatter.tabulation.size=2\r\norg.eclipse.jdt.core.formatter.use_on_off_tags=true\r\norg.eclipse.jdt.core.formatter.use_tabs_only_for_leading_indentations=false\r\norg.eclipse.jdt.core.formatter.wrap_before_binary_operator=true\r\norg.eclipse.jdt.core.formatter.wrap_before_or_operator_multicatch=true\r\norg.eclipse.jdt.core.formatter.wrap_outer_expressions_when_nested=true\r\n"
  },
  {
    "path": "etc/eclipse/settings/org.eclipse.m2e.core.prefs",
    "content": "activeProfiles=eclipse\r\neclipse.preferences.version=1\r\nresolveWorkspaceProjects=true\r\nversion=1\r\n"
  },
  {
    "path": "etc/forbidden-apis/signatures.txt",
    "content": "@defaultMessage Convert to URI\njava.net.URL#getPath()\njava.net.URL#getFile()\n\n@defaultMessage spawns threads with vague names; use a custom thread factory and name threads so that you can tell (by its name) which executor it is associated with\njava.util.concurrent.Executors#newFixedThreadPool(int)\njava.util.concurrent.Executors#newSingleThreadExecutor()\njava.util.concurrent.Executors#newCachedThreadPool()\njava.util.concurrent.Executors#newSingleThreadScheduledExecutor()\njava.util.concurrent.Executors#newScheduledThreadPool(int)\njava.util.concurrent.Executors#defaultThreadFactory()\njava.util.concurrent.Executors#privilegedThreadFactory()\n\njava.lang.Character#codePointBefore(char[],int) @ Implicit start offset is error-prone when the char[] is a buffer and the first chars are random chars\njava.lang.Character#codePointAt(char[],int) @ Implicit end offset is error-prone when the char[] is a buffer and the last chars are random chars\n\n@defaultMessage Please do not try to stop the world\njava.lang.System#gc()\n\n@defaultMessage Use Channels.* methods to write to channels. 
Do not write directly.\njava.nio.channels.WritableByteChannel#write(java.nio.ByteBuffer)\njava.nio.channels.FileChannel#write(java.nio.ByteBuffer, long)\njava.nio.channels.GatheringByteChannel#write(java.nio.ByteBuffer[], int, int)\njava.nio.channels.GatheringByteChannel#write(java.nio.ByteBuffer[])\njava.nio.channels.ReadableByteChannel#read(java.nio.ByteBuffer)\njava.nio.channels.ScatteringByteChannel#read(java.nio.ByteBuffer[])\njava.nio.channels.ScatteringByteChannel#read(java.nio.ByteBuffer[], int, int)\njava.nio.channels.FileChannel#read(java.nio.ByteBuffer, long)\n\n@defaultMessage Filters are trappy (add suppression or make sure all read methods are redelegated).\njava.io.FilterInputStream#<init>(java.io.InputStream)\njava.io.FilterOutputStream#<init>(java.io.OutputStream)\njava.io.FilterReader#<init>(java.io.Reader)\njava.io.FilterWriter#<init>(java.io.Writer)\n\n@defaultMessage Do not use context class loaders, prefer explicit ClassLoader argument.\njava.lang.Thread#getContextClassLoader()\njava.lang.Thread#setContextClassLoader(java.lang.ClassLoader)\n"
  },
  {
    "path": "morfologik-fsa/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.carrot2</groupId>\n    <artifactId>morfologik-parent</artifactId>\n    <version>2.2.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>morfologik-fsa</artifactId>\n  <packaging>bundle</packaging>\n\n  <name>Morfologik FSA (Traversal)</name>\n  <description>Morfologik Finite State Automata Traversal.</description>\n  \n  <properties>\n    <forbiddenapis.signaturefile>../etc/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n    <project.moduleId>org.carrot2.morfologik.fsa</project.moduleId>\n  </properties>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.felix</groupId>\n        <artifactId>maven-bundle-plugin</artifactId>\n        <configuration>\n          <instructions>\n            <Export-Package>morfologik.fsa</Export-Package>\n            <Import-Package>*</Import-Package>\n          </instructions>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/ByteSequenceIterator.java",
    "content": "package morfologik.fsa;\n\nimport java.nio.ByteBuffer;\nimport java.util.*;\n\n/**\n * An iterator that traverses the right language of a given node (all sequences reachable from a\n * given node).\n */\npublic final class ByteSequenceIterator implements Iterator<ByteBuffer> {\n  /**\n   * Default expected depth of the recursion stack (estimated longest sequence in the automaton).\n   * Buffers expand by the same value if exceeded.\n   */\n  private static final int EXPECTED_MAX_STATES = 15;\n\n  /** The FSA to which this iterator belongs. */\n  private final FSA fsa;\n\n  /** An internal cache for the next element in the FSA */\n  private ByteBuffer nextElement;\n\n  /** A buffer for the current sequence of bytes from the current node to the root. */\n  private byte[] buffer = new byte[EXPECTED_MAX_STATES];\n\n  /** Reusable byte buffer wrapper around {@link #buffer}. */\n  private ByteBuffer bufferWrapper = ByteBuffer.wrap(buffer);\n\n  /** An arc stack for DFS when processing the automaton. */\n  private int[] arcs = new int[EXPECTED_MAX_STATES];\n\n  /** Current processing depth in {@link #arcs}. */\n  private int position;\n\n  /**\n   * Create an instance of the iterator iterating over all automaton sequences.\n   *\n   * @param fsa The automaton to iterate over.\n   */\n  public ByteSequenceIterator(FSA fsa) {\n    this(fsa, fsa.getRootNode());\n  }\n\n  /**\n   * Create an instance of the iterator for a given node.\n   *\n   * @param fsa The automaton to iterate over.\n   * @param node The starting node's identifier (can be the {@link FSA#getRootNode()}).\n   */\n  public ByteSequenceIterator(FSA fsa, int node) {\n    this.fsa = fsa;\n\n    if (fsa.getFirstArc(node) != 0) {\n      restartFrom(node);\n    }\n  }\n\n  /**\n   * Restart walking from <code>node</code>. 
Allows iterator reuse.\n   *\n   * @param node Restart the iterator from <code>node</code>.\n   * @return Returns <code>this</code> for call chaining.\n   */\n  public ByteSequenceIterator restartFrom(int node) {\n    position = 0;\n    bufferWrapper.clear();\n    nextElement = null;\n\n    pushNode(node);\n    return this;\n  }\n\n  /** Returns <code>true</code> if there are still elements in this iterator. */\n  @Override\n  public boolean hasNext() {\n    if (nextElement == null) {\n      nextElement = advance();\n    }\n\n    return nextElement != null;\n  }\n\n  /**\n   * @return Returns a {@link ByteBuffer} with the sequence corresponding to the next final state in\n   *     the automaton.\n   */\n  @Override\n  public ByteBuffer next() {\n    if (nextElement != null) {\n      final ByteBuffer cache = nextElement;\n      nextElement = null;\n      return cache;\n    } else {\n      final ByteBuffer cache = advance();\n      if (cache == null) {\n        throw new NoSuchElementException();\n      }\n      return cache;\n    }\n  }\n\n  /** Advances to the next available final state. 
*/\n  private final ByteBuffer advance() {\n    if (position == 0) {\n      return null;\n    }\n\n    while (position > 0) {\n      final int lastIndex = position - 1;\n      final int arc = arcs[lastIndex];\n\n      if (arc == 0) {\n        // Remove the current node from the queue.\n        position--;\n        continue;\n      }\n\n      // Go to the next arc, but leave it on the stack\n      // so that we keep the recursion depth level accurate.\n      arcs[lastIndex] = fsa.getNextArc(arc);\n\n      // Expand buffer if needed.\n      final int bufferLength = this.buffer.length;\n      if (lastIndex >= bufferLength) {\n        this.buffer = Arrays.copyOf(buffer, bufferLength + EXPECTED_MAX_STATES);\n        this.bufferWrapper = ByteBuffer.wrap(buffer);\n      }\n      buffer[lastIndex] = fsa.getArcLabel(arc);\n\n      if (!fsa.isArcTerminal(arc)) {\n        // Recursively descend into the arc's node.\n        pushNode(fsa.getEndNode(arc));\n      }\n\n      if (fsa.isArcFinal(arc)) {\n        bufferWrapper.clear();\n        bufferWrapper.limit(lastIndex + 1);\n        return bufferWrapper;\n      }\n    }\n\n    return null;\n  }\n\n  /** Not implemented in this iterator. */\n  @Override\n  public void remove() {\n    throw new UnsupportedOperationException(\"Read-only iterator.\");\n  }\n\n  /** Descends to a given node, adds its arcs to the stack to be traversed. */\n  private void pushNode(int node) {\n    // Expand buffers if needed.\n    if (position == arcs.length) {\n      arcs = Arrays.copyOf(arcs, arcs.length + EXPECTED_MAX_STATES);\n    }\n\n    arcs[position++] = fsa.getFirstArc(node);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/CFSA.java",
    "content": "package morfologik.fsa;\n\nimport static morfologik.fsa.FSAFlags.*;\n\nimport java.io.*;\nimport java.util.*;\n\n/**\n * CFSA (Compact Finite State Automaton) binary format implementation. This is a slightly\n * reorganized version of {@link FSA5} offering smaller automata size at some (minor) performance\n * penalty.\n *\n * <p><b>Note:</b> Serialize to {@link CFSA2} for new code.\n *\n * <p>The encoding of automaton body is as follows.\n *\n * <pre>\n * ---- FSA header (standard)\n * Byte                            Description\n *       +-+-+-+-+-+-+-+-+\\\n *     0 | | | | | | | | | +------ '\\'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     1 | | | | | | | | | +------ 'f'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     2 | | | | | | | | | +------ 's'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     3 | | | | | | | | | +------ 'a'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     4 | | | | | | | | | +------ version (fixed 0xc5)\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     5 | | | | | | | | | +------ filler character\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     6 | | | | | | | | | +------ annot character\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     7 |C|C|C|C|G|G|G|G| +------ C - node data size (ctl), G - address size (gotoLength)\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *  8-32 | | | | | | | | | +------ labels mapped for type (1) of arc encoding.\n *       : : : : : : : : : |\n *       +-+-+-+-+-+-+-+-+/\n *\n * ---- Start of a node; only if automaton was compiled with NUMBERS option.\n *\n * Byte\n *        +-+-+-+-+-+-+-+-+\\\n *      0 | | | | | | | | | \\  LSB\n *        +-+-+-+-+-+-+-+-+  +\n *      1 | | | | | | | | |  |      number of strings recognized\n *        +-+-+-+-+-+-+-+-+  +----- by the automaton starting\n *        : : : : : : : : :  |      from this node.\n *        
+-+-+-+-+-+-+-+-+  +\n *  ctl-1 | | | | | | | | | /  MSB\n *        +-+-+-+-+-+-+-+-+/\n *\n * ---- A vector of node's arcs. Conditional format, depending on flags.\n *\n * 1) NEXT bit set, mapped arc label.\n *\n *                +--------------- arc's label mapped in M bits if M's field value &gt; 0\n *                | +------------- node pointed to is next\n *                | | +----------- the last arc of the node\n *         _______| | | +--------- the arc is final\n *        /       | | | |\n *       +-+-+-+-+-+-+-+-+\\\n *     0 |M|M|M|M|M|1|L|F| +------ flags + (M) index of the mapped label.\n *       +-+-+-+-+-+-+-+-+/\n *\n * 2) NEXT bit set, label separate.\n *\n *                +--------------- arc's label stored separately (M's field is zero).\n *                | +------------- node pointed to is next\n *                | | +----------- the last arc of the node\n *                | | | +--------- the arc is final\n *                | | | |\n *       +-+-+-+-+-+-+-+-+\\\n *     0 |0|0|0|0|0|1|L|F| +------ flags\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     1 | | | | | | | | | +------ label\n *       +-+-+-+-+-+-+-+-+/\n *\n * 3) NEXT bit not set. Full arc.\n *\n *                  +------------- node pointed to is next\n *                  | +----------- the last arc of the node\n *                  | | +--------- the arc is final\n *                  | | |\n *       +-+-+-+-+-+-+-+-+\\\n *     0 |A|A|A|A|A|0|L|F| +------ flags + (A) address field, lower bits\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     1 | | | | | | | | | +------ label\n *       +-+-+-+-+-+-+-+-+/\n *       : : : : : : : : :\n *       +-+-+-+-+-+-+-+-+\\\n * gtl-1 |A|A|A|A|A|A|A|A| +------ address, continuation (MSB)\n *       +-+-+-+-+-+-+-+-+/\n * </pre>\n */\npublic final class CFSA extends FSA {\n  /** Automaton header version value. 
*/\n  public static final byte VERSION = (byte) 0xC5;\n\n  /**\n   * Bitmask indicating that an arc corresponds to the last character of a sequence available when\n   * building the automaton.\n   */\n  public static final int BIT_FINAL_ARC = 1 << 0;\n\n  /**\n   * Bitmask indicating that an arc is the last one of the node's list and the following one belongs\n   * to another node.\n   */\n  public static final int BIT_LAST_ARC = 1 << 1;\n\n  /**\n   * Bitmask indicating that the target node of this arc follows it in the compressed automaton\n   * structure (no goto field).\n   */\n  public static final int BIT_TARGET_NEXT = 1 << 2;\n\n  /**\n   * An array of bytes with the internal representation of the automaton. Please see the\n   * documentation of this class for more information on how this structure is organized.\n   */\n  public byte[] arcs;\n\n  /**\n   * The length of the node header structure (if the automaton was compiled with <code>NUMBERS\n   * </code> option). Otherwise zero.\n   */\n  public final int nodeDataLength;\n\n  /** Flags for this automaton version. */\n  private final Set<FSAFlags> flags;\n\n  /** Number of bytes each address takes in full, expanded form (goto length). */\n  public final int gtl;\n\n  /**\n   * Label mapping for arcs of type (1) (see class documentation). The array is indexed by mapped\n   * label's value and contains the original label.\n   */\n  public final byte[] labelMapping;\n\n  /** Creates a new automaton, reading it from a file in FSA format, version 5. */\n  CFSA(InputStream stream) throws IOException {\n    DataInputStream in = new DataInputStream(stream);\n\n    // Skip legacy header fields.\n    in.readByte(); // filler\n    in.readByte(); // annotation\n    final byte hgtl = in.readByte();\n\n    /*\n     * Determine if the automaton was compiled with NUMBERS. 
If so, modify\n     * ctl and goto fields accordingly.\n     */\n    flags = EnumSet.of(FLEXIBLE, STOPBIT, NEXTBIT);\n    if ((hgtl & 0xf0) != 0) {\n      this.nodeDataLength = (hgtl >>> 4) & 0x0f;\n      this.gtl = hgtl & 0x0f;\n      flags.add(NUMBERS);\n    } else {\n      this.nodeDataLength = 0;\n      this.gtl = hgtl & 0x0f;\n    }\n\n    /*\n     * Read mapping dictionary.\n     */\n    labelMapping = new byte[1 << 5];\n    in.readFully(labelMapping);\n\n    /*\n     * Read arcs' data.\n     */\n    arcs = readRemaining(in);\n  }\n\n  /**\n   * Returns the start node of this automaton. May return <code>0</code> if the start node is also\n   * an end node.\n   */\n  @Override\n  public int getRootNode() {\n    // Skip dummy node marking terminating state.\n    final int epsilonNode = skipArc(getFirstArc(0));\n\n    // And follow the epsilon node's first (and only) arc.\n    return getDestinationNodeOffset(getFirstArc(epsilonNode));\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public final int getFirstArc(int node) {\n    return nodeDataLength + node;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public final int getNextArc(int arc) {\n    if (isArcLast(arc)) return 0;\n    else return skipArc(arc);\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getArc(int node, byte label) {\n    for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n      if (getArcLabel(arc) == label) return arc;\n    }\n\n    // An arc labeled with \"label\" not found.\n    return 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getEndNode(int arc) {\n    final int nodeOffset = getDestinationNodeOffset(arc);\n    if (0 == nodeOffset) {\n      throw new RuntimeException(\"This is a terminal arc [\" + arc + \"]\");\n    }\n    return nodeOffset;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public byte getArcLabel(int arc) {\n    if (isNextSet(arc) && isLabelCompressed(arc)) {\n      return this.labelMapping[(arcs[arc] >>> 3) & 0x1f];\n    } else {\n 
     return arcs[arc + 1];\n    }\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getRightLanguageCount(int node) {\n    assert getFlags().contains(FSAFlags.NUMBERS) : \"This FSA was not compiled with NUMBERS.\";\n    return FSA5.decodeFromBytes(arcs, node, nodeDataLength);\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public boolean isArcFinal(int arc) {\n    return (arcs[arc] & BIT_FINAL_ARC) != 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public boolean isArcTerminal(int arc) {\n    return (0 == getDestinationNodeOffset(arc));\n  }\n\n  /**\n   * Returns <code>true</code> if this arc has the <code>LAST</code> bit set.\n   *\n   * @see #BIT_LAST_ARC\n   * @param arc The node's arc identifier.\n   * @return Returns true if the argument is the last arc of a node.\n   */\n  public boolean isArcLast(int arc) {\n    return (arcs[arc] & BIT_LAST_ARC) != 0;\n  }\n\n  /**\n   * @see #BIT_TARGET_NEXT\n   * @param arc The node's arc identifier.\n   * @return Returns true if {@link #BIT_TARGET_NEXT} is set for this arc.\n   */\n  public boolean isNextSet(int arc) {\n    return (arcs[arc] & BIT_TARGET_NEXT) != 0;\n  }\n\n  /**\n   * @param arc The node's arc identifier.\n   * @return Returns <code>true</code> if the label is compressed inside flags byte.\n   */\n  public boolean isLabelCompressed(int arc) {\n    assert isNextSet(arc) : \"Only applicable to arcs with NEXT bit.\";\n    return (arcs[arc] & (-1 << 3)) != 0;\n  }\n\n  /**\n   * {@inheritDoc}\n   *\n   * <p>For this automaton version, an additional {@link FSAFlags#NUMBERS} flag may be set to\n   * indicate the automaton contains extra fields for each node.\n   */\n  @Override\n  public Set<FSAFlags> getFlags() {\n    return Collections.unmodifiableSet(flags);\n  }\n\n  /** Returns the address of the node pointed to by this arc. */\n  final int getDestinationNodeOffset(int arc) {\n    if (isNextSet(arc)) {\n      /* The destination node follows this arc in the array. 
*/\n      return skipArc(arc);\n    } else {\n      /*\n       * The destination node address has to be extracted from the arc's\n       * goto field.\n       */\n      int r = 0;\n      for (int i = gtl; --i >= 1; ) {\n        r = r << 8 | (arcs[arc + 1 + i] & 0xff);\n      }\n      r = r << 8 | (arcs[arc] & 0xff);\n      return r >>> 3;\n    }\n  }\n\n  /** Read the arc's layout and skip as many bytes, as needed, to skip it. */\n  private int skipArc(int offset) {\n    if (isNextSet(offset)) {\n      if (isLabelCompressed(offset)) {\n        offset++;\n      } else {\n        offset += 1 + 1;\n      }\n    } else {\n      offset += 1 + gtl;\n    }\n    return offset;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/CFSA2.java",
    "content": "package morfologik.fsa;\n\nimport java.io.DataInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.EnumSet;\nimport java.util.Set;\n\n/**\n * CFSA (Compact Finite State Automaton) binary format implementation, version 2:\n *\n * <ul>\n *   <li>{@link #BIT_TARGET_NEXT} applicable on all arcs, not necessarily the last one.\n *   <li>v-coded goto field\n *   <li>v-coded perfect hashing numbers, if any\n *   <li>31 most frequent labels integrated with flags byte\n * </ul>\n *\n * <p>The encoding of automaton body is as follows.\n *\n * <pre>\n * ---- CFSA header\n * Byte                            Description\n *       +-+-+-+-+-+-+-+-+\\\n *     0 | | | | | | | | | +------ '\\'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     1 | | | | | | | | | +------ 'f'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     2 | | | | | | | | | +------ 's'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     3 | | | | | | | | | +------ 'a'\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     4 | | | | | | | | | +------ version (fixed 0xc6)\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     5 | | | | | | | | | +----\\\n *       +-+-+-+-+-+-+-+-+/      \\ flags [MSB first]\n *       +-+-+-+-+-+-+-+-+\\      /\n *     6 | | | | | | | | | +----/\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     7 | | | | | | | | | +------ label lookup table size\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *  8-32 | | | | | | | | | +------ label value lookup table\n *       : : : : : : : : : |\n *       +-+-+-+-+-+-+-+-+/\n *\n * ---- Start of a node; only if automaton was compiled with NUMBERS option.\n *\n * Byte\n *        +-+-+-+-+-+-+-+-+\\\n *      0 | | | | | | | | | \\\n *        +-+-+-+-+-+-+-+-+  +\n *      1 | | | | | | | | |  |      number of strings recognized\n *        +-+-+-+-+-+-+-+-+  +----- by the automaton starting\n *        
: : : : : : : : :  |      from this node. v-coding\n *        +-+-+-+-+-+-+-+-+  +\n *        | | | | | | | | | /\n *        +-+-+-+-+-+-+-+-+/\n *\n * ---- A vector of this node's arcs. An arc's layout depends on the combination of flags.\n *\n * 1) NEXT bit set, mapped arc label.\n *\n *        +----------------------- node pointed to is next\n *        | +--------------------- the last arc of the node\n *        | | +------------------- this arc leads to a final state (acceptor)\n *        | | |  _______+--------- arc's label; indexed if M &gt; 0, otherwise explicit label follows\n *        | | | / | | | |\n *       +-+-+-+-+-+-+-+-+\\\n *     0 |N|L|F|M|M|M|M|M| +------ flags + (M) index of the mapped label.\n *       +-+-+-+-+-+-+-+-+/\n *       +-+-+-+-+-+-+-+-+\\\n *     1 | | | | | | | | | +------ optional label if M == 0\n *       +-+-+-+-+-+-+-+-+/\n *       : : : : : : : : :\n *       +-+-+-+-+-+-+-+-+\\\n *       |A|A|A|A|A|A|A|A| +------ v-coded goto address\n *       +-+-+-+-+-+-+-+-+/\n * </pre>\n */\npublic final class CFSA2 extends FSA {\n  /** Automaton header version value. */\n  public static final byte VERSION = (byte) 0xc6;\n\n  /** The target node of this arc follows the last arc of the current state (no goto field). */\n  public static final int BIT_TARGET_NEXT = 1 << 7;\n\n  /** The arc is the last one from the current node's arcs list. */\n  public static final int BIT_LAST_ARC = 1 << 6;\n\n  /**\n   * The arc corresponds to the last character of a sequence available when building the automaton\n   * (acceptor transition).\n   */\n  public static final int BIT_FINAL_ARC = 1 << 5;\n\n  /** The count of bits assigned to storing an indexed label. */\n  static final int LABEL_INDEX_BITS = 5;\n\n  /** Masks only the M bits of a flag byte. */\n  static final int LABEL_INDEX_MASK = (1 << LABEL_INDEX_BITS) - 1;\n\n  /** Maximum size of the labels index. 
*/\n  public static final int LABEL_INDEX_SIZE = (1 << LABEL_INDEX_BITS) - 1;\n\n  /**\n   * An array of bytes with the internal representation of the automaton. Please see the\n   * documentation of this class for more information on how this structure is organized.\n   */\n  public byte[] arcs;\n\n  /** Flags for this automaton version. */\n  private final EnumSet<FSAFlags> flags;\n\n  /** Label mapping for M-indexed labels. */\n  public final byte[] labelMapping;\n\n  /** If <code>true</code> states are prepended with numbers. */\n  private final boolean hasNumbers;\n\n  /** Epsilon node's offset. */\n  private final int epsilon = 0;\n\n  /** Reads an automaton from a byte stream. */\n  CFSA2(InputStream stream) throws IOException {\n    DataInputStream in = new DataInputStream(stream);\n\n    // Read flags.\n    short flagBits = in.readShort();\n    flags = EnumSet.noneOf(FSAFlags.class);\n    for (FSAFlags f : FSAFlags.values()) {\n      if (f.isSet(flagBits)) {\n        flags.add(f);\n      }\n    }\n\n    if (flagBits != FSAFlags.asShort(flags)) {\n      throw new IOException(\"Unrecognized flags: 0x\" + Integer.toHexString(flagBits));\n    }\n\n    this.hasNumbers = flags.contains(FSAFlags.NUMBERS);\n\n    /*\n     * Read mapping dictionary.\n     */\n    int labelMappingSize = in.readByte() & 0xff;\n    labelMapping = new byte[labelMappingSize];\n    in.readFully(labelMapping);\n\n    /*\n     * Read arcs' data.\n     */\n    arcs = readRemaining(in);\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getRootNode() {\n    // Skip dummy node marking terminating state.\n    return getDestinationNodeOffset(getFirstArc(epsilon));\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public final int getFirstArc(int node) {\n    if (hasNumbers) {\n      return skipVInt(node);\n    } else {\n      return node;\n    }\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public final int getNextArc(int arc) {\n    if (isArcLast(arc)) {\n      return 0;\n    } else {\n   
   return skipArc(arc);\n    }\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getArc(int node, byte label) {\n    for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n      if (getArcLabel(arc) == label) {\n        return arc;\n      }\n    }\n\n    // An arc labeled with \"label\" not found.\n    return 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getEndNode(int arc) {\n    final int nodeOffset = getDestinationNodeOffset(arc);\n    assert nodeOffset != 0 : \"Can't follow a terminal arc: \" + arc;\n    assert nodeOffset < arcs.length : \"Node out of bounds.\";\n    return nodeOffset;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public byte getArcLabel(int arc) {\n    int index = arcs[arc] & LABEL_INDEX_MASK;\n    if (index > 0) {\n      return this.labelMapping[index];\n    } else {\n      return arcs[arc + 1];\n    }\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getRightLanguageCount(int node) {\n    assert getFlags().contains(FSAFlags.NUMBERS) : \"This FSA was not compiled with NUMBERS.\";\n    return readVInt(arcs, node);\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public boolean isArcFinal(int arc) {\n    return (arcs[arc] & BIT_FINAL_ARC) != 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public boolean isArcTerminal(int arc) {\n    return (0 == getDestinationNodeOffset(arc));\n  }\n\n  /**\n   * Returns <code>true</code> if this arc has the <code>LAST</code> bit set.\n   *\n   * @see #BIT_LAST_ARC\n   * @param arc The node's arc identifier.\n   * @return Returns true if the argument is the last arc of a node.\n   */\n  public boolean isArcLast(int arc) {\n    return (arcs[arc] & BIT_LAST_ARC) != 0;\n  }\n\n  /**\n   * @see #BIT_TARGET_NEXT\n   * @param arc The node's arc identifier.\n   * @return Returns true if {@link #BIT_TARGET_NEXT} is set for this arc.\n   */\n  public boolean isNextSet(int arc) {\n    return (arcs[arc] & BIT_TARGET_NEXT) != 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public Set<FSAFlags> getFlags() {\n    return flags;\n  }\n\n  /** Returns the address of the node pointed to by this arc. */\n  final int getDestinationNodeOffset(int arc) {\n    if (isNextSet(arc)) {\n      /* Follow until the last arc of this state. */\n      while (!isArcLast(arc)) {\n        arc = getNextArc(arc);\n      }\n\n      /* And return the byte right after it. */\n      return skipArc(arc);\n    } else {\n      /*\n       * The destination node address is v-coded. v-code starts either\n       * at the next byte (label indexed) or after the next byte (label explicit).\n       */\n      return readVInt(arcs, arc + ((arcs[arc] & LABEL_INDEX_MASK) == 0 ? 2 : 1));\n    }\n  }\n\n  /** Read the arc's layout and skip as many bytes, as needed, to skip it. */\n  private int skipArc(int offset) {\n    int flag = arcs[offset++];\n\n    // Explicit label?\n    if ((flag & LABEL_INDEX_MASK) == 0) {\n      offset++;\n    }\n\n    // Explicit goto?\n    if ((flag & BIT_TARGET_NEXT) == 0) {\n      offset = skipVInt(offset);\n    }\n\n    assert offset < this.arcs.length;\n    return offset;\n  }\n\n  /** Read a v-int. */\n  static int readVInt(byte[] array, int offset) {\n    byte b = array[offset];\n    int value = b & 0x7F;\n\n    for (int shift = 7; b < 0; shift += 7) {\n      b = array[++offset];\n      value |= (b & 0x7F) << shift;\n    }\n\n    return value;\n  }\n\n  /** Return the byte-length of a v-coded int. */\n  static int vIntLength(int value) {\n    assert value >= 0 : \"Can't v-code negative ints.\";\n\n    int bytes;\n    for (bytes = 1; value >= 0x80; bytes++) {\n      value >>= 7;\n    }\n\n    return bytes;\n  }\n\n  /** Skip a v-int. */\n  private int skipVInt(int offset) {\n    while (arcs[offset++] < 0) {\n      // Do nothing.\n    }\n    return offset;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/FSA.java",
    "content": "package morfologik.fsa;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.nio.ByteBuffer;\nimport java.util.BitSet;\nimport java.util.Collections;\nimport java.util.Iterator;\nimport java.util.Locale;\nimport java.util.Set;\n\n/**\n * This is a top abstract class for handling finite state automata. These automata are arc-based, a\n * design described in Jan Daciuk's <i>Incremental Construction of Finite-State Automata and\n * Transducers, and Their Use in the Natural Language Processing</i> (PhD thesis, Technical\n * University of Gdansk).\n */\npublic abstract class FSA implements Iterable<ByteBuffer> {\n  /**\n   * @return Returns the identifier of the root node of this automaton. Returns 0 if the start node\n   *     is also the end node (the automaton is empty).\n   */\n  public abstract int getRootNode();\n\n  /**\n   * @param node Identifier of the node.\n   * @return Returns the identifier of the first arc leaving <code>node</code> or 0 if the node has\n   *     no outgoing arcs.\n   */\n  public abstract int getFirstArc(int node);\n\n  /**\n   * @param arc The arc's identifier.\n   * @return Returns the identifier of the next arc after <code>arc</code> and leaving <code>node\n   *     </code>. Zero is returned if no more arcs are available for the node.\n   */\n  public abstract int getNextArc(int arc);\n\n  /**\n   * @param node Identifier of the node.\n   * @param label The arc's label.\n   * @return Returns the identifier of an arc leaving <code>node</code> and labeled with <code>label\n   *     </code>. 
An identifier equal to 0 means the node has no outgoing arc labeled <code>label\n   *     </code>.\n   */\n  public abstract int getArc(int node, byte label);\n\n  /**\n   * @param arc The arc's identifier.\n   * @return Return the label associated with a given <code>arc</code>.\n   */\n  public abstract byte getArcLabel(int arc);\n\n  /**\n   * @param arc The arc's identifier.\n   * @return Returns <code>true</code> if the destination node at the end of this <code>arc</code>\n   *     corresponds to an input sequence created when building this automaton.\n   */\n  public abstract boolean isArcFinal(int arc);\n\n  /**\n   * @param arc The arc's identifier.\n   * @return Returns <code>true</code> if this <code>arc</code> does not have a terminating node\n   *     ({@link #getEndNode(int)} will throw an exception). Implies {@link #isArcFinal(int)}.\n   */\n  public abstract boolean isArcTerminal(int arc);\n\n  /**\n   * @param arc The arc's identifier.\n   * @return Return the end node pointed to by a given <code>arc</code>. Terminal arcs (those that\n   *     point to a terminal state) have no end node representation and throw a runtime exception.\n   */\n  public abstract int getEndNode(int arc);\n\n  /**\n   * @return Returns a set of flags for this FSA instance.\n   */\n  public abstract Set<FSAFlags> getFlags();\n\n  /**\n   * @param node Identifier of the node.\n   * @return Calculates and returns the number of arcs of a given node.\n   */\n  public int getArcCount(int node) {\n    int count = 0;\n    for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n      count++;\n    }\n    return count;\n  }\n\n  /**\n   * @param node Identifier of the node.\n   * @return Returns the number of sequences reachable from the given state if the automaton was\n   *     compiled with {@link FSAFlags#NUMBERS}. 
The size of the right language of the state, in\n   *     other words.\n   * @throws UnsupportedOperationException If the automaton was not compiled with {@link\n   *     FSAFlags#NUMBERS}. The value can then be computed by manual count of {@link #getSequences}.\n   */\n  public int getRightLanguageCount(int node) {\n    throw new UnsupportedOperationException(\"Automaton not compiled with \" + FSAFlags.NUMBERS);\n  }\n\n  /**\n   * Returns an iterator over all binary sequences starting at the given FSA state (node) and ending\n   * in final nodes. This corresponds to a set of suffixes of a given prefix from all sequences\n   * stored in the automaton.\n   *\n   * <p>The returned iterator yields a {@link ByteBuffer} whose contents change on each call to {@link\n   * Iterator#next()}. To keep the contents between calls to {@link Iterator#next()}, one must copy\n   * the buffer to some other location.\n   *\n   * <p><b>Important.</b> It is guaranteed that the returned byte buffer is backed by a byte array\n   * and that the content of the byte buffer starts at the array's index 0.\n   *\n   * @param node Identifier of the starting node from which to return subsequences.\n   * @return An iterable over all sequences encoded starting at the given node.\n   */\n  public Iterable<ByteBuffer> getSequences(final int node) {\n    if (node == 0) {\n      return Collections.<ByteBuffer>emptyList();\n    }\n\n    return new Iterable<ByteBuffer>() {\n      public Iterator<ByteBuffer> iterator() {\n        return new ByteSequenceIterator(FSA.this, node);\n      }\n    };\n  }\n\n  /**\n   * An alias of calling {@link #iterator} directly ({@link FSA} is also {@link Iterable}).\n   *\n   * @return Returns all sequences encoded in the automaton.\n   */\n  public final Iterable<ByteBuffer> getSequences() {\n    return getSequences(getRootNode());\n  }\n\n  /**\n   * Returns an iterator over all binary sequences starting from the initial FSA state (node) and\n   * ending in final nodes. 
The returned iterator yields a {@link ByteBuffer} whose contents change on\n   * each call to {@link Iterator#next()}. To keep the contents between calls to {@link\n   * Iterator#next()}, one must copy the buffer to some other location.\n   *\n   * <p><b>Important.</b> It is guaranteed that the returned byte buffer is backed by a byte array\n   * and that the content of the byte buffer starts at the array's index 0.\n   */\n  public final Iterator<ByteBuffer> iterator() {\n    return getSequences().iterator();\n  }\n\n  /**\n   * Visit all states. The order of visiting is undefined. This method may be faster than traversing\n   * the automaton in post or preorder since it can scan states linearly. Returning false from\n   * {@link StateVisitor#accept(int)} immediately terminates the traversal.\n   *\n   * @param v Visitor to receive traversal calls.\n   * @param <T> A subclass of {@link StateVisitor}.\n   * @return Returns the argument (for access to anonymous class fields).\n   */\n  public <T extends StateVisitor> T visitAllStates(T v) {\n    return visitInPostOrder(v);\n  }\n\n  /**\n   * Same as {@link #visitInPostOrder(StateVisitor, int)}, starting from root automaton node.\n   *\n   * @param v Visitor to receive traversal calls.\n   * @param <T> A subclass of {@link StateVisitor}.\n   * @return Returns the argument (for access to anonymous class fields).\n   */\n  public <T extends StateVisitor> T visitInPostOrder(T v) {\n    return visitInPostOrder(v, getRootNode());\n  }\n\n  /**\n   * Visits all states reachable from <code>node</code> in postorder. 
Returning false from {@link\n   * StateVisitor#accept(int)} immediately terminates the traversal.\n   *\n   * @param v Visitor to receive traversal calls.\n   * @param <T> A subclass of {@link StateVisitor}.\n   * @param node Identifier of the node.\n   * @return Returns the argument (for access to anonymous class fields).\n   */\n  public <T extends StateVisitor> T visitInPostOrder(T v, int node) {\n    visitInPostOrder(v, node, new BitSet());\n    return v;\n  }\n\n  /** Private recursion. */\n  private boolean visitInPostOrder(StateVisitor v, int node, BitSet visited) {\n    if (visited.get(node)) return true;\n    visited.set(node);\n\n    for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n      if (!isArcTerminal(arc)) {\n        if (!visitInPostOrder(v, getEndNode(arc), visited)) return false;\n      }\n    }\n\n    return v.accept(node);\n  }\n\n  /**\n   * Same as {@link #visitInPreOrder(StateVisitor, int)}, starting from root automaton node.\n   *\n   * @param v Visitor to receive traversal calls.\n   * @param <T> A subclass of {@link StateVisitor}.\n   * @return Returns the argument (for access to anonymous class fields).\n   */\n  public <T extends StateVisitor> T visitInPreOrder(T v) {\n    return visitInPreOrder(v, getRootNode());\n  }\n\n  /**\n   * Visits all states in preorder. 
Returning false from {@link StateVisitor#accept(int)} skips\n   * traversal of all sub-states of a given state.\n   *\n   * @param v Visitor to receive traversal calls.\n   * @param <T> A subclass of {@link StateVisitor}.\n   * @param node Identifier of the node.\n   * @return Returns the argument (for access to anonymous class fields).\n   */\n  public <T extends StateVisitor> T visitInPreOrder(T v, int node) {\n    visitInPreOrder(v, node, new BitSet());\n    return v;\n  }\n\n  /**\n   * @param in The input stream.\n   * @return Reads all remaining bytes from an input stream and returns them as a byte array.\n   * @throws IOException Rethrown if an I/O exception occurs.\n   */\n  protected static final byte[] readRemaining(InputStream in) throws IOException {\n    ByteArrayOutputStream baos = new ByteArrayOutputStream();\n    byte[] buffer = new byte[1024 * 8];\n    int len;\n    while ((len = in.read(buffer)) >= 0) {\n      baos.write(buffer, 0, len);\n    }\n    return baos.toByteArray();\n  }\n\n  /** Private recursion. */\n  private void visitInPreOrder(StateVisitor v, int node, BitSet visited) {\n    if (visited.get(node)) {\n      return;\n    }\n    visited.set(node);\n\n    if (v.accept(node)) {\n      for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n        if (!isArcTerminal(arc)) {\n          visitInPreOrder(v, getEndNode(arc), visited);\n        }\n      }\n    }\n  }\n\n  /**\n   * A factory for reading automata in any of the supported versions.\n   *\n   * @param stream The input stream to read automaton data from. The stream is not closed.\n   * @return Returns an instantiated automaton. 
Never null.\n   * @throws IOException If the input stream does not represent an automaton or is otherwise\n   *     invalid.\n   */\n  public static FSA read(InputStream stream) throws IOException {\n    final FSAHeader header = FSAHeader.read(stream);\n\n    switch (header.version) {\n      case FSA5.VERSION:\n        return new FSA5(stream);\n      case CFSA.VERSION:\n        return new CFSA(stream);\n      case CFSA2.VERSION:\n        return new CFSA2(stream);\n      default:\n        throw new IOException(\n            String.format(\n                Locale.ROOT, \"Unsupported automaton version: 0x%02x\", header.version & 0xFF));\n    }\n  }\n\n  /**\n   * A factory for reading a specific FSA subclass, including proper casting.\n   *\n   * @param stream The input stream to read automaton data from. The stream is not closed.\n   * @param clazz A subclass of {@link FSA} to cast the read automaton to.\n   * @param <T> A subclass of {@link FSA} to cast the read automaton to.\n   * @return Returns an instantiated automaton. Never null.\n   * @throws IOException If the input stream does not represent an automaton, is otherwise invalid\n   *     or the class of the automaton read from the input stream is not assignable to <code>clazz\n   *     </code>.\n   */\n  public static <T extends FSA> T read(InputStream stream, Class<? extends T> clazz)\n      throws IOException {\n    FSA fsa = read(stream);\n    if (!clazz.isInstance(fsa)) {\n      throw new IOException(\n          String.format(\n              Locale.ROOT,\n              \"Expected FSA type %s, but read an incompatible type %s.\",\n              clazz.getName(),\n              fsa.getClass().getName()));\n    }\n    return clazz.cast(fsa);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/FSA5.java",
    "content": "package morfologik.fsa;\n\nimport static morfologik.fsa.FSAFlags.*;\n\nimport java.io.DataInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.Collections;\nimport java.util.EnumSet;\nimport java.util.Set;\n\n/**\n * FSA binary format implementation for version 5.\n *\n * <p>Version 5 indicates the dictionary was built with these flags: {@link FSAFlags#FLEXIBLE},\n * {@link FSAFlags#STOPBIT} and {@link FSAFlags#NEXTBIT}. The internal representation of the FSA\n * must therefore follow this description (please note this format describes only a single\n * transition (arc), not the entire dictionary file).\n *\n * <pre>\n * ---- this node header present only if automaton was compiled with NUMBERS option.\n * Byte\n *        +-+-+-+-+-+-+-+-+\\\n *      0 | | | | | | | | | \\  LSB\n *        +-+-+-+-+-+-+-+-+  +\n *      1 | | | | | | | | |  |      number of strings recognized\n *        +-+-+-+-+-+-+-+-+  +----- by the automaton starting\n *        : : : : : : : : :  |      from this node.\n *        +-+-+-+-+-+-+-+-+  +\n *  ctl-1 | | | | | | | | | /  MSB\n *        +-+-+-+-+-+-+-+-+/\n *\n * ---- remaining part of the node\n *\n * Byte\n *       +-+-+-+-+-+-+-+-+\\\n *     0 | | | | | | | | | +------ label\n *       +-+-+-+-+-+-+-+-+/\n *\n *                  +------------- node pointed to is next\n *                  | +----------- the last arc of the node\n *                  | | +--------- the arc is final\n *                  | | |\n *             +-----------+\n *             |    | | |  |\n *         ___+___  | | |  |\n *        /       \\ | | |  |\n *       MSB           LSB |\n *        7 6 5 4 3 2 1 0  |\n *       +-+-+-+-+-+-+-+-+ |\n *     1 | | | | | | | | | \\ \\\n *       +-+-+-+-+-+-+-+-+  \\ \\  LSB\n *       +-+-+-+-+-+-+-+-+     +\n *     2 | | | | | | | | |     |\n *       +-+-+-+-+-+-+-+-+     |\n *     3 | | | | | | | | |     +----- target node address (in bytes)\n *       +-+-+-+-+-+-+-+-+     
|      (not present except for the byte\n *       : : : : : : : : :     |       with flags if the node pointed to\n *       +-+-+-+-+-+-+-+-+     +       is next)\n *   gtl | | | | | | | | |    /  MSB\n *       +-+-+-+-+-+-+-+-+   /\n * gtl+1                           (gtl = gotoLength)\n * </pre>\n */\npublic final class FSA5 extends FSA {\n  /** Default filler byte. */\n  public static final byte DEFAULT_FILLER = '_';\n\n  /** Default annotation byte. */\n  public static final byte DEFAULT_ANNOTATION = '+';\n\n  /** Automaton version as in the file header. */\n  public static final byte VERSION = 5;\n\n  /**\n   * Bit indicating that an arc corresponds to the last character of a sequence available when\n   * building the automaton.\n   */\n  public static final int BIT_FINAL_ARC = 1 << 0;\n\n  /**\n   * Bit indicating that an arc is the last one of the node's list and the following one belongs to\n   * another node.\n   */\n  public static final int BIT_LAST_ARC = 1 << 1;\n\n  /**\n   * Bit indicating that the target node of this arc follows it in the compressed automaton\n   * structure (no goto field).\n   */\n  public static final int BIT_TARGET_NEXT = 1 << 2;\n\n  /**\n   * An offset in the arc structure, where the address and flags field begins. In version 5 of FSA\n   * automata, this value is constant (1, skip label).\n   */\n  public static final int ADDRESS_OFFSET = 1;\n\n  /**\n   * An array of bytes with the internal representation of the automaton. Please see the\n   * documentation of this class for more information on how this structure is organized.\n   */\n  public final byte[] arcs;\n\n  /**\n   * The length of the node header structure (if the automaton was compiled with <code>NUMBERS\n   * </code> option). Otherwise zero.\n   */\n  public final int nodeDataLength;\n\n  /** Flags for this automaton version. */\n  private Set<FSAFlags> flags;\n\n  /** Number of bytes each address takes in full, expanded form (goto length). 
*/\n  public final int gtl;\n\n  /** Filler character. */\n  public final byte filler;\n\n  /** Annotation character. */\n  public final byte annotation;\n\n  /** Read and wrap a binary automaton in FSA version 5. */\n  FSA5(InputStream stream) throws IOException {\n    DataInputStream in = new DataInputStream(stream);\n\n    this.filler = in.readByte();\n    this.annotation = in.readByte();\n    final byte hgtl = in.readByte();\n\n    /*\n     * Determine if the automaton was compiled with NUMBERS. If so, modify\n     * ctl and goto fields accordingly.\n     */\n    flags = EnumSet.of(FLEXIBLE, STOPBIT, NEXTBIT);\n    if ((hgtl & 0xf0) != 0) {\n      flags.add(NUMBERS);\n    }\n\n    flags = Collections.unmodifiableSet(flags);\n\n    this.nodeDataLength = (hgtl >>> 4) & 0x0f;\n    this.gtl = hgtl & 0x0f;\n\n    arcs = readRemaining(in);\n  }\n\n  /** Returns the start node of this automaton. */\n  @Override\n  public int getRootNode() {\n    // Skip dummy node marking terminating state.\n    final int epsilonNode = skipArc(getFirstArc(0));\n\n    // And follow the epsilon node's first (and only) arc.\n    return getDestinationNodeOffset(getFirstArc(epsilonNode));\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public final int getFirstArc(int node) {\n    return nodeDataLength + node;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public final int getNextArc(int arc) {\n    if (isArcLast(arc)) return 0;\n    else return skipArc(arc);\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getArc(int node, byte label) {\n    for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n      if (getArcLabel(arc) == label) return arc;\n    }\n\n    // An arc labeled with \"label\" not found.\n    return 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public int getEndNode(int arc) {\n    final int nodeOffset = getDestinationNodeOffset(arc);\n    assert nodeOffset != 0 : \"No target node for terminal arcs.\";\n    return nodeOffset;\n  }\n\n  /** 
{@inheritDoc} */\n  @Override\n  public byte getArcLabel(int arc) {\n    return arcs[arc];\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public boolean isArcFinal(int arc) {\n    return (arcs[arc + ADDRESS_OFFSET] & BIT_FINAL_ARC) != 0;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public boolean isArcTerminal(int arc) {\n    return (0 == getDestinationNodeOffset(arc));\n  }\n\n  /**\n   * Returns the number encoded at the given node. The number equals the count of the set of\n   * suffixes reachable from <code>node</code> (called its right language).\n   */\n  @Override\n  public int getRightLanguageCount(int node) {\n    assert getFlags().contains(FSAFlags.NUMBERS) : "This FSA was not compiled with NUMBERS.";\n    return decodeFromBytes(arcs, node, nodeDataLength);\n  }\n\n  /**\n   * {@inheritDoc}\n   *\n   * <p>For this automaton version, an additional {@link FSAFlags#NUMBERS} flag may be set to\n   * indicate the automaton contains extra fields for each node.\n   */\n  @Override\n  public Set<FSAFlags> getFlags() {\n    return flags;\n  }\n\n  /**\n   * Returns <code>true</code> if this arc has the <code>LAST</code> bit set.\n   *\n   * @see #BIT_LAST_ARC\n   * @param arc The node's arc identifier.\n   * @return Returns true if the argument is the last arc of a node.\n   */\n  public boolean isArcLast(int arc) {\n    return (arcs[arc + ADDRESS_OFFSET] & BIT_LAST_ARC) != 0;\n  }\n\n  /**\n   * @see #BIT_TARGET_NEXT\n   * @param arc The node's arc identifier.\n   * @return Returns true if {@link #BIT_TARGET_NEXT} is set for this arc.\n   */\n  public boolean isNextSet(int arc) {\n    return (arcs[arc + ADDRESS_OFFSET] & BIT_TARGET_NEXT) != 0;\n  }\n\n  /** Returns an n-byte integer encoded in byte-packed representation. 
*/\n  static final int decodeFromBytes(final byte[] arcs, final int start, final int n) {\n    int r = 0;\n    for (int i = n; --i >= 0; ) {\n      r = r << 8 | (arcs[start + i] & 0xff);\n    }\n    return r;\n  }\n\n  /** Returns the address of the node pointed to by this arc. */\n  final int getDestinationNodeOffset(int arc) {\n    if (isNextSet(arc)) {\n      /* The destination node follows this arc in the array. */\n      return skipArc(arc);\n    } else {\n      /*\n       * The destination node address has to be extracted from the arc's\n       * goto field.\n       */\n      return decodeFromBytes(arcs, arc + ADDRESS_OFFSET, gtl) >>> 3;\n    }\n  }\n\n  /** Read the arc's layout and skip as many bytes as needed. */\n  private int skipArc(int offset) {\n    return offset\n        + (isNextSet(offset) ? 1 + 1 /* label + flags */ : 1 + gtl /* label + flags/address */);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/FSAFlags.java",
    "content": "package morfologik.fsa;\n\nimport java.util.Set;\n\n/** FSA automaton flags. Where applicable, flags follow Daciuk's <code>fsa</code> package. */\npublic enum FSAFlags {\n  /** Daciuk: flexible FSA encoding. */\n  FLEXIBLE(1 << 0),\n\n  /** Daciuk: stop bit in use. */\n  STOPBIT(1 << 1),\n\n  /** Daciuk: next bit in use. */\n  NEXTBIT(1 << 2),\n\n  /** Daciuk: tails compression. */\n  TAILS(1 << 3),\n\n  /*\n   * These flags are outside of byte range (never occur in Daciuk's FSA).\n   */\n\n  /**\n   * The FSA contains right-language count numbers on states.\n   *\n   * @see FSA#getRightLanguageCount(int)\n   */\n  NUMBERS(1 << 8),\n\n  /**\n   * The FSA supports legacy built-in separator and filler characters (Daciuk's FSA package\n   * compatibility).\n   */\n  SEPARATORS(1 << 9);\n\n  /** Bit mask for the corresponding flag. */\n  public final int bits;\n\n  /** */\n  private FSAFlags(int bits) {\n    this.bits = bits;\n  }\n\n  /**\n   * @param flags The bitset with flags.\n   * @return Returns <code>true</code> iff this flag is set in <code>flags</code>.\n   */\n  public boolean isSet(int flags) {\n    return (flags & bits) != 0;\n  }\n\n  /**\n   * @param flags A set of flags to encode.\n   * @return Returns the set of flags encoded as packed <code>short</code>.\n   */\n  public static short asShort(Set<FSAFlags> flags) {\n    short value = 0;\n    for (FSAFlags f : flags) {\n      value |= f.bits;\n    }\n    return value;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/FSAHeader.java",
    "content": "package morfologik.fsa;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\n\n/** Standard FSA file header, as described in <code>fsa</code> package documentation. */\npublic final class FSAHeader {\n  /** FSA magic (4 bytes). */\n  static final int FSA_MAGIC = ('\\\\' << 24) | ('f' << 16) | ('s' << 8) | ('a');\n\n  /** Maximum length of the header block. */\n  static final int MAX_HEADER_LENGTH = 4 + 8;\n\n  /** FSA version number. */\n  final byte version;\n\n  FSAHeader(byte version) {\n    this.version = version;\n  }\n\n  /**\n   * Read FSA header and version from a stream, consuming read bytes.\n   *\n   * @param in The input stream to read data from.\n   * @return Returns a valid {@link FSAHeader} with version information.\n   * @throws IOException If the stream ends prematurely or if it contains invalid data.\n   */\n  public static FSAHeader read(InputStream in) throws IOException {\n    if (in.read() != ((FSA_MAGIC >>> 24))\n        || in.read() != ((FSA_MAGIC >>> 16) & 0xff)\n        || in.read() != ((FSA_MAGIC >>> 8) & 0xff)\n        || in.read() != ((FSA_MAGIC) & 0xff)) {\n      throw new IOException(\"Invalid file header, probably not an FSA.\");\n    }\n\n    int version = in.read();\n    if (version == -1) {\n      throw new IOException(\"Truncated file, no version number.\");\n    }\n\n    return new FSAHeader((byte) version);\n  }\n\n  /**\n   * Writes FSA magic bytes and version information.\n   *\n   * @param os The stream to write to.\n   * @param version Automaton version.\n   * @throws IOException Rethrown if writing fails.\n   */\n  public static void write(OutputStream os, byte version) throws IOException {\n    os.write(FSA_MAGIC >> 24);\n    os.write(FSA_MAGIC >> 16);\n    os.write(FSA_MAGIC >> 8);\n    os.write(FSA_MAGIC);\n    os.write(version);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/FSATraversal.java",
    "content": "package morfologik.fsa;\n\nimport static morfologik.fsa.MatchResult.*;\n\n/** This class implements some common matching and scanning operations on a generic FSA. */\npublic final class FSATraversal {\n  /** Target automaton. */\n  private final FSA fsa;\n\n  /**\n   * Traversals of the given FSA.\n   *\n   * @param fsa The target automaton for traversals.\n   */\n  public FSATraversal(FSA fsa) {\n    this.fsa = fsa;\n  }\n\n  /**\n   * Calculate perfect hash for a given input sequence of bytes. The perfect hash requires that\n   * {@link FSA} is built with {@link FSAFlags#NUMBERS} and corresponds to the sequential order of\n   * input sequences used at automaton construction time.\n   *\n   * @param sequence The byte sequence to calculate perfect hash for.\n   * @param start Start index in the sequence array.\n   * @param length Length of the byte sequence, must be at least 1.\n   * @param node The node to start traversal from, typically the {@linkplain FSA#getRootNode() root\n   *     node}.\n   * @return Returns a unique integer assigned to the input sequence in the automaton (reflecting\n   *     the number of that sequence in the input used to build the automaton). Returns a negative\n   *     integer if the input sequence was not part of the input from which the automaton was\n   *     created. 
The type of mismatch is a constant defined in {@link MatchResult}.\n   */\n  public int perfectHash(byte[] sequence, int start, int length, int node) {\n    assert fsa.getFlags().contains(FSAFlags.NUMBERS) : \"FSA not built with NUMBERS option.\";\n    assert length > 0 : \"Must be a non-empty sequence.\";\n\n    int hash = 0;\n    final int end = start + length - 1;\n\n    int seqIndex = start;\n    byte label = sequence[seqIndex];\n\n    // Seek through the current node's labels, looking for 'label', update hash.\n    for (int arc = fsa.getFirstArc(node); arc != 0; ) {\n      if (fsa.getArcLabel(arc) == label) {\n        if (fsa.isArcFinal(arc)) {\n          if (seqIndex == end) {\n            return hash;\n          }\n\n          hash++;\n        }\n\n        if (fsa.isArcTerminal(arc)) {\n          /* The automaton contains a prefix of the input sequence. */\n          return AUTOMATON_HAS_PREFIX;\n        }\n\n        // The sequence is a prefix of one of the sequences stored in the automaton.\n        if (seqIndex == end) {\n          return SEQUENCE_IS_A_PREFIX;\n        }\n\n        // Make a transition along the arc, go the target node's first arc.\n        arc = fsa.getFirstArc(fsa.getEndNode(arc));\n        label = sequence[++seqIndex];\n        continue;\n      } else {\n        if (fsa.isArcFinal(arc)) {\n          hash++;\n        }\n        if (!fsa.isArcTerminal(arc)) {\n          hash += fsa.getRightLanguageCount(fsa.getEndNode(arc));\n        }\n      }\n\n      arc = fsa.getNextArc(arc);\n    }\n\n    if (seqIndex > start) {\n      return AUTOMATON_HAS_PREFIX;\n    } else {\n      // Labels of this node ended without a match on the sequence.\n      // Perfect hash does not exist.\n      return NO_MATCH;\n    }\n  }\n\n  /**\n   * @param sequence The byte sequence to calculate perfect hash for.\n   * @see #perfectHash(byte[], int, int, int)\n   * @return Returns a unique integer assigned to the input sequence in the automaton (reflecting\n   *    
 the number of that sequence in the input used to build the automaton). Returns a negative\n   *     integer if the input sequence was not part of the input from which the automaton was\n   *     created. The type of mismatch is a constant defined in {@link MatchResult}.\n   */\n  public int perfectHash(byte[] sequence) {\n    return perfectHash(sequence, 0, sequence.length, fsa.getRootNode());\n  }\n\n  /**\n   * Same as {@link #match(byte[], int, int, int)}, but allows passing a reusable {@link\n   * MatchResult} object so that no intermediate garbage is produced.\n   *\n   * @param reuse The {@link MatchResult} to reuse.\n   * @param sequence Input sequence to look for in the automaton.\n   * @param start Start index in the sequence array.\n   * @param length Length of the byte sequence, must be at least 1.\n   * @param node The node to start traversal from, typically the {@linkplain FSA#getRootNode() root\n   *     node}.\n   * @return The same object as <code>reuse</code>, but with updated match {@link MatchResult#kind}\n   *     and other relevant fields.\n   */\n  public MatchResult match(MatchResult reuse, byte[] sequence, int start, int length, int node) {\n    if (node == 0) {\n      reuse.reset(NO_MATCH, start, node);\n      return reuse;\n    }\n\n    final FSA fsa = this.fsa;\n    final int end = start + length;\n    for (int i = start; i < end; i++) {\n      final int arc = fsa.getArc(node, sequence[i]);\n      if (arc != 0) {\n        if (i + 1 == end && fsa.isArcFinal(arc)) {\n          /* The automaton has an exact match of the input sequence. */\n          reuse.reset(EXACT_MATCH, i, node);\n          return reuse;\n        }\n\n        if (fsa.isArcTerminal(arc)) {\n          /* The automaton contains a prefix of the input sequence. 
*/\n          reuse.reset(AUTOMATON_HAS_PREFIX, i + 1, node);\n          return reuse;\n        }\n\n        // Make a transition along the arc.\n        node = fsa.getEndNode(arc);\n      } else {\n        if (i > start) {\n          reuse.reset(AUTOMATON_HAS_PREFIX, i, node);\n        } else {\n          reuse.reset(NO_MATCH, i, node);\n        }\n        return reuse;\n      }\n    }\n\n    /* The sequence is a prefix of at least one sequence in the automaton. */\n    reuse.reset(SEQUENCE_IS_A_PREFIX, 0, node);\n    return reuse;\n  }\n\n  /**\n   * Finds a matching path in the dictionary for a given sequence of labels from <code>sequence\n   * </code> and starting at node <code>node</code>.\n   *\n   * @param sequence Input sequence to look for in the automaton.\n   * @param start Start index in the sequence array.\n   * @param length Length of the byte sequence, must be at least 1.\n   * @param node The node to start traversal from, typically the {@linkplain FSA#getRootNode() root\n   *     node}.\n   * @see #match(byte [], int)\n   * @return {@link MatchResult} with updated match {@link MatchResult#kind}.\n   */\n  public MatchResult match(byte[] sequence, int start, int length, int node) {\n    return match(new MatchResult(), sequence, start, length, node);\n  }\n\n  /**\n   * @param sequence Input sequence to look for in the automaton.\n   * @param node The node to start traversal from, typically the {@linkplain FSA#getRootNode() root\n   *     node}.\n   * @see #match(byte [], int)\n   * @return {@link MatchResult} with updated match {@link MatchResult#kind}.\n   */\n  public MatchResult match(byte[] sequence, int node) {\n    return match(sequence, 0, sequence.length, node);\n  }\n\n  /**\n   * @param sequence Input sequence to look for in the automaton.\n   * @see #match(byte [], int)\n   * @return {@link MatchResult} with updated match {@link MatchResult#kind}.\n   */\n  public MatchResult match(byte[] sequence) {\n    return match(sequence, 
fsa.getRootNode());\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/MatchResult.java",
    "content": "package morfologik.fsa;\n\n/**\n * A matching result returned from {@link FSATraversal}.\n *\n * @see FSATraversal\n */\npublic final class MatchResult {\n  /** The automaton has exactly one match for the input sequence. */\n  public static final int EXACT_MATCH = 0;\n\n  /**\n   * The automaton has no match for the input sequence and no sequence in the automaton is a prefix\n   * of the input.\n   *\n   * <p>Note that to check for a general \"input does not exist in the automaton\" you have to check\n   * for both {@link #NO_MATCH} and {@link #AUTOMATON_HAS_PREFIX}.\n   */\n  public static final int NO_MATCH = -1;\n\n  /**\n   * The automaton contains a prefix of the input sequence (but the full sequence does not exist).\n   * This translates to: one of the input sequences used to build the automaton is a prefix of the\n   * input sequence, but the input sequence contains a non-existent suffix.\n   *\n   * <p>{@link MatchResult#index} will contain an index of the first character of the input sequence\n   * not present in the dictionary.\n   */\n  public static final int AUTOMATON_HAS_PREFIX = -3;\n\n  /**\n   * The sequence is a prefix of at least one sequence in the automaton. {@link MatchResult#node}\n   * returns the node from which all sequences with the given prefix start in the automaton.\n   */\n  public static final int SEQUENCE_IS_A_PREFIX = -4;\n\n  /**\n   * One of the match types defined in this class.\n   *\n   * @see #NO_MATCH\n   * @see #EXACT_MATCH\n   * @see #AUTOMATON_HAS_PREFIX\n   * @see #SEQUENCE_IS_A_PREFIX\n   */\n  public int kind;\n\n  /** Input sequence's index, interpretation depends on {@link #kind}. */\n  public int index;\n\n  /** Automaton node, interpretation depends on the {@link #kind}. 
*/\n  public int node;\n\n  MatchResult(int kind, int index, int node) {\n    reset(kind, index, node);\n  }\n\n  MatchResult(int kind) {\n    reset(kind, 0, 0);\n  }\n\n  public MatchResult() {\n    reset(NO_MATCH, 0, 0);\n  }\n\n  final void reset(int kind, int index, int node) {\n    this.kind = kind;\n    this.index = index;\n    this.node = node;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa/src/main/java/morfologik/fsa/StateVisitor.java",
    "content": "package morfologik.fsa;\n\n/**\n * State visitor.\n *\n * @see FSA#visitInPostOrder(StateVisitor)\n * @see FSA#visitInPreOrder(StateVisitor)\n */\npublic interface StateVisitor {\n  public boolean accept(int state);\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.carrot2</groupId>\n    <artifactId>morfologik-parent</artifactId>\n    <version>2.2.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>morfologik-fsa-builders</artifactId>\n  <packaging>bundle</packaging>\n\n  <name>Morfologik FSA (Builder)</name>\n  <description>Morfologik Finite State Automata Builder</description>\n\n  <properties>\n    <forbiddenapis.signaturefile>../etc/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n    <project.moduleId>org.carrot2.morfologik.fsa_builders</project.moduleId>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-fsa</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    \n    <dependency>\n      <groupId>com.carrotsearch</groupId>\n      <artifactId>hppc</artifactId>\n    </dependency>    \n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.felix</groupId>\n        <artifactId>maven-bundle-plugin</artifactId>\n        <configuration>\n          <instructions>\n            <Export-Package>morfologik.fsa.builders</Export-Package>\n            <Import-Package>*</Import-Package>\n          </instructions>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/CFSA2Serializer.java",
    "content": "package morfologik.fsa.builders;\n\nimport static morfologik.fsa.CFSA2.*;\nimport static morfologik.fsa.FSAFlags.*;\n\nimport com.carrotsearch.hppc.BoundedProportionalArraySizingStrategy;\nimport com.carrotsearch.hppc.IntArrayList;\nimport com.carrotsearch.hppc.IntIntHashMap;\nimport com.carrotsearch.hppc.IntStack;\nimport com.carrotsearch.hppc.cursors.IntCursor;\nimport com.carrotsearch.hppc.cursors.IntIntCursor;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.util.BitSet;\nimport java.util.Comparator;\nimport java.util.EnumSet;\nimport java.util.Locale;\nimport java.util.PriorityQueue;\nimport java.util.Set;\nimport java.util.TreeSet;\nimport java.util.logging.Level;\nimport java.util.logging.Logger;\nimport morfologik.fsa.CFSA2;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSAFlags;\nimport morfologik.fsa.FSAHeader;\nimport morfologik.fsa.StateVisitor;\nimport morfologik.fsa.builders.FSAUtils.IntIntHolder;\n\n/**\n * Serializes in-memory {@link FSA} graphs to {@link CFSA2}.\n *\n * <p>It is possible to serialize the automaton with numbers required for perfect hashing. See\n * {@link #withNumbers()} method.\n *\n * @see CFSA2\n */\npublic final class CFSA2Serializer implements FSASerializer {\n  private final Logger logger = Logger.getLogger(getClass().getName());\n\n  /** Supported flags. */\n  private static final EnumSet<FSAFlags> flags = EnumSet.of(NUMBERS, FLEXIBLE, STOPBIT, NEXTBIT);\n\n  /** No-state id. */\n  private static final int NO_STATE = -1;\n\n  /**\n   * <code>true</code> if we should serialize with numbers.\n   *\n   * @see #withNumbers()\n   */\n  private boolean withNumbers;\n\n  /** A hash map of [state, offset] pairs. */\n  private IntIntHashMap offsets = new IntIntHashMap();\n\n  /** A hash map of [state, right-language-count] pairs. */\n  private IntIntHashMap numbers = new IntIntHashMap();\n\n  /** Scratch array for serializing vints. 
*/\n  private final byte[] scratch = new byte[5];\n\n  /** The most frequent labels for integrating with the flags field. */\n  private byte[] labelsIndex;\n\n  /**\n   * Inverted index of labels to be integrated with the flags field. The entry for label\n   * <code>i</code> holds that label's position in the labels index, or zero (no integration).\n   */\n  private int[] labelsInvIndex;\n\n  /**\n   * Serialize the automaton with the number of right-language sequences in each node. This is\n   * required to implement perfect hashing. The numbering also preserves the order of input\n   * sequences.\n   *\n   * @return Returns the same object for easier call chaining.\n   */\n  public CFSA2Serializer withNumbers() {\n    withNumbers = true;\n    return this;\n  }\n\n  /**\n   * Serializes any {@link FSA} to {@link CFSA2} stream.\n   *\n   * @see #withNumbers()\n   * @return Returns <code>os</code> for chaining.\n   */\n  @Override\n  public <T extends OutputStream> T serialize(final FSA fsa, T os) throws IOException {\n    /*\n     * Calculate the most frequent labels and build indexed labels dictionary.\n     */\n    computeLabelsIndex(fsa);\n\n    /*\n     * Calculate the number of bytes required for the node data, if\n     * serializing with numbers.\n     */\n    if (withNumbers) {\n      this.numbers = FSAUtils.rightLanguageForAllStates(fsa);\n    }\n\n    /*\n     * Linearize all the states, optimizing their layout.\n     */\n    IntArrayList linearized = linearize(fsa);\n\n    /*\n     * Emit the header.\n     */\n    FSAHeader.write(os, CFSA2.VERSION);\n\n    EnumSet<FSAFlags> fsaFlags = EnumSet.of(FLEXIBLE, STOPBIT, NEXTBIT);\n    if (withNumbers) {\n      fsaFlags.add(NUMBERS);\n    }\n\n    final short sflags = FSAFlags.asShort(fsaFlags);\n    os.write((sflags >> 8) & 0xFF);\n    os.write((sflags) & 0xFF);\n\n    /*\n     * Emit labels index.\n     */\n    os.write(labelsIndex.length);\n    os.write(labelsIndex);\n\n    /*\n     * Emit the automaton.\n     */\n    int size = emitNodes(fsa, os, 
linearized);\n    assert size == 0 : \"Size changed in the final pass?\";\n\n    return os;\n  }\n\n  /** Compute a set of labels to be integrated with the flags field. */\n  private void computeLabelsIndex(final FSA fsa) {\n    // Compute labels count.\n    final int[] countByValue = new int[256];\n\n    fsa.visitAllStates(\n        new StateVisitor() {\n          public boolean accept(int state) {\n            for (int arc = fsa.getFirstArc(state); arc != 0; arc = fsa.getNextArc(arc))\n              countByValue[fsa.getArcLabel(arc) & 0xff]++;\n            return true;\n          }\n        });\n\n    // Order by descending frequency of counts and increasing label value.\n    Comparator<IntIntHolder> comparator =\n        new Comparator<IntIntHolder>() {\n          public int compare(IntIntHolder o1, IntIntHolder o2) {\n            int countDiff = o2.b - o1.b;\n            if (countDiff == 0) {\n              countDiff = o1.a - o2.a;\n            }\n            return countDiff;\n          }\n        };\n\n    TreeSet<IntIntHolder> labelAndCount = new TreeSet<IntIntHolder>(comparator);\n    for (int label = 0; label < countByValue.length; label++) {\n      if (countByValue[label] > 0) {\n        labelAndCount.add(new IntIntHolder(label, countByValue[label]));\n      }\n    }\n\n    labelsIndex = new byte[1 + Math.min(labelAndCount.size(), CFSA2.LABEL_INDEX_SIZE)];\n    labelsInvIndex = new int[256];\n    for (int i = labelsIndex.length - 1; i > 0 && !labelAndCount.isEmpty(); i--) {\n      IntIntHolder p = labelAndCount.first();\n      labelAndCount.remove(p);\n      labelsInvIndex[p.a] = i;\n      labelsIndex[i] = (byte) p.a;\n    }\n  }\n\n  /** Return supported flags. */\n  @Override\n  public Set<FSAFlags> getFlags() {\n    return flags;\n  }\n\n  /** Linearization of states. */\n  private IntArrayList linearize(final FSA fsa) throws IOException {\n    /*\n     * Compute the states with most inlinks. 
These should be placed as close to the\n     * start of the automaton as possible, so that v-coded addresses are tiny.\n     */\n    final IntIntHashMap inlinkCount = computeInlinkCount(fsa);\n\n    /*\n     * An array of ordered states for serialization.\n     */\n    final IntArrayList linearized =\n        new IntArrayList(0, new BoundedProportionalArraySizingStrategy(1000, 10000, 1.5f));\n\n    /*\n     * Determine which states should be linearized first (at fixed positions) so as to\n     * minimize the space occupied by goto fields.\n     */\n    int maxStates = Integer.MAX_VALUE;\n    int minInlinkCount = 2;\n    int[] states = computeFirstStates(inlinkCount, maxStates, minInlinkCount);\n\n    /*\n     * Compute initial addresses, without node rearrangements.\n     */\n    int serializedSize = linearizeAndCalculateOffsets(fsa, new IntArrayList(), linearized, offsets);\n\n    /*\n     * Probe for better node arrangements by selecting between [lower, upper]\n     * nodes from the potential candidate nodes list.\n     */\n    IntArrayList sublist = new IntArrayList();\n    sublist.buffer = states;\n    sublist.elementsCount = states.length;\n\n    /*\n     * Probe the initial region a little bit, looking for an optimal cut. 
It can't be binary search\n     * because the result isn't monotonic.\n     */\n    log(Level.FINE, \"Compacting, initial output size: %,d\", serializedSize);\n    int cutAt = 0;\n    for (int cut = Math.min(25, states.length); cut <= Math.min(150, states.length); cut += 25) {\n      sublist.elementsCount = cut;\n      int newSize = linearizeAndCalculateOffsets(fsa, sublist, linearized, offsets);\n      log(Level.FINE, \"Moved %,d states, output size: %,d\", sublist.size(), newSize);\n      if (newSize >= serializedSize) {\n        break;\n      }\n      cutAt = cut;\n    }\n\n    /*\n     * Cut at the calculated point and repeat linearization.\n     */\n    sublist.elementsCount = cutAt;\n    int size = linearizeAndCalculateOffsets(fsa, sublist, linearized, offsets);\n    log(Level.FINE, \"%,d states moved, final size: %,d\", sublist.size(), size);\n    return linearized;\n  }\n\n  private void log(Level level, String msg, Object... args) {\n    logger.log(level, String.format(Locale.ROOT, msg, args));\n  }\n\n  /**\n   * Linearize all states, putting <code>states</code> in front of the automaton and calculating\n   * stable state offsets.\n   */\n  private int linearizeAndCalculateOffsets(\n      FSA fsa, IntArrayList states, IntArrayList linearized, IntIntHashMap offsets)\n      throws IOException {\n    final BitSet visited = new BitSet();\n    final IntStack nodes = new IntStack();\n    linearized.clear();\n\n    /*\n     * Linearize states with most inlinks first.\n     */\n    for (int i = 0; i < states.size(); i++) {\n      linearizeState(fsa, nodes, linearized, visited, states.get(i));\n    }\n\n    /*\n     * Linearize the remaining states by chaining them one after another, in depth-order.\n     */\n    nodes.push(fsa.getRootNode());\n    while (!nodes.isEmpty()) {\n      final int node = nodes.pop();\n      if (visited.get(node)) continue;\n\n      linearizeState(fsa, nodes, linearized, visited, node);\n    }\n\n    /*\n     * Calculate new state 
offsets. This is iterative. We start with\n     * maximum potential offsets and recalculate until converged.\n     */\n    int MAX_OFFSET = Integer.MAX_VALUE;\n    for (IntCursor c : linearized) {\n      offsets.put(c.value, MAX_OFFSET);\n    }\n\n    int i, j = 0;\n    while ((i = emitNodes(fsa, null, linearized)) > 0) {\n      j = i;\n    }\n    return j;\n  }\n\n  /** Add a state to linearized list. */\n  private void linearizeState(\n      final FSA fsa, IntStack nodes, IntArrayList linearized, BitSet visited, int node) {\n    linearized.add(node);\n    visited.set(node);\n    for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n      if (!fsa.isArcTerminal(arc)) {\n        final int target = fsa.getEndNode(arc);\n        if (!visited.get(target)) nodes.push(target);\n      }\n    }\n  }\n\n  /**\n   * Compute the set of states that should be linearized first to minimize other states goto length.\n   */\n  private int[] computeFirstStates(IntIntHashMap inlinkCount, int maxStates, int minInlinkCount) {\n    Comparator<IntIntHolder> comparator =\n        new Comparator<FSAUtils.IntIntHolder>() {\n          public int compare(IntIntHolder o1, IntIntHolder o2) {\n            int v = o1.a - o2.a;\n            return v == 0 ? 
(o1.b - o2.b) : v;\n          }\n        };\n\n    PriorityQueue<IntIntHolder> stateInlink = new PriorityQueue<IntIntHolder>(1, comparator);\n    IntIntHolder scratch = new IntIntHolder();\n    for (IntIntCursor c : inlinkCount) {\n      if (c.value > minInlinkCount) {\n        scratch.a = c.value;\n        scratch.b = c.key;\n\n        if (stateInlink.size() < maxStates || comparator.compare(scratch, stateInlink.peek()) > 0) {\n          stateInlink.add(new IntIntHolder(c.value, c.key));\n          if (stateInlink.size() > maxStates) {\n            stateInlink.remove();\n          }\n        }\n      }\n    }\n\n    int[] states = new int[stateInlink.size()];\n    for (int position = states.length; !stateInlink.isEmpty(); ) {\n      IntIntHolder i = stateInlink.remove();\n      states[--position] = i.b;\n    }\n\n    return states;\n  }\n\n  /** Compute in-link count for each state. */\n  private IntIntHashMap computeInlinkCount(final FSA fsa) {\n    IntIntHashMap inlinkCount = new IntIntHashMap();\n    BitSet visited = new BitSet();\n    IntStack nodes = new IntStack();\n    nodes.push(fsa.getRootNode());\n\n    while (!nodes.isEmpty()) {\n      final int node = nodes.pop();\n      if (visited.get(node)) continue;\n\n      visited.set(node);\n\n      for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n        if (!fsa.isArcTerminal(arc)) {\n          final int target = fsa.getEndNode(arc);\n          inlinkCount.putOrAdd(target, 1, 1);\n          if (!visited.get(target)) nodes.push(target);\n        }\n      }\n    }\n\n    return inlinkCount;\n  }\n\n  /** Emit all linearized nodes; with a null output stream, only recompute and store state offsets. 
*/\n  private int emitNodes(FSA fsa, OutputStream os, IntArrayList linearized) throws IOException {\n    int offset = 0;\n\n    // Add epsilon state.\n    offset += emitNodeData(os, 0);\n    if (fsa.getRootNode() != 0)\n      offset += emitArc(os, BIT_LAST_ARC, (byte) '^', offsets.get(fsa.getRootNode()));\n    else offset += emitArc(os, BIT_LAST_ARC, (byte) '^', 0);\n\n    boolean offsetsChanged = false;\n    final int max = linearized.size();\n    for (IntCursor c : linearized) {\n      final int state = c.value;\n      final int nextState = c.index + 1 < max ? linearized.get(c.index + 1) : NO_STATE;\n\n      if (os == null) {\n        offsetsChanged |= (offsets.get(state) != offset);\n        offsets.put(state, offset);\n      } else {\n        assert offsets.get(state) == offset : state + \" \" + offsets.get(state) + \" \" + offset;\n      }\n\n      offset += emitNodeData(os, withNumbers ? numbers.get(state) : 0);\n      offset += emitNodeArcs(fsa, os, state, nextState);\n    }\n\n    return offsetsChanged ? offset : 0;\n  }\n\n  /** Emit all arcs of a single node. 
*/\n  private int emitNodeArcs(FSA fsa, OutputStream os, final int state, final int nextState)\n      throws IOException {\n    int offset = 0;\n    for (int arc = fsa.getFirstArc(state); arc != 0; arc = fsa.getNextArc(arc)) {\n      int targetOffset;\n      final int target;\n\n      if (fsa.isArcTerminal(arc)) {\n        target = 0;\n        targetOffset = 0;\n      } else {\n        target = fsa.getEndNode(arc);\n        targetOffset = offsets.get(target);\n      }\n\n      int flags = 0;\n\n      if (fsa.isArcFinal(arc)) {\n        flags |= BIT_FINAL_ARC;\n      }\n\n      if (fsa.getNextArc(arc) == 0) {\n        flags |= BIT_LAST_ARC;\n      }\n\n      if (targetOffset != 0 && target == nextState) {\n        flags |= BIT_TARGET_NEXT;\n        targetOffset = 0;\n      }\n\n      offset += emitArc(os, flags, fsa.getArcLabel(arc), targetOffset);\n    }\n\n    return offset;\n  }\n\n  /** */\n  private int emitArc(OutputStream os, int flags, byte label, int targetOffset) throws IOException {\n    int length = 0;\n\n    int labelIndex = labelsInvIndex[label & 0xff];\n    if (labelIndex > 0) {\n      if (os != null) os.write(flags | labelIndex);\n      length++;\n    } else {\n      if (os != null) {\n        os.write(flags);\n        os.write(label);\n      }\n      length += 2;\n    }\n\n    if ((flags & BIT_TARGET_NEXT) == 0) {\n      int len = writeVInt(scratch, 0, targetOffset);\n      if (os != null) {\n        os.write(scratch, 0, len);\n      }\n      length += len;\n    }\n\n    return length;\n  }\n\n  /** */\n  private int emitNodeData(OutputStream os, int number) throws IOException {\n    int size = 0;\n\n    if (withNumbers) {\n      size = writeVInt(scratch, 0, number);\n      if (os != null) {\n        os.write(scratch, 0, size);\n      }\n    }\n\n    return size;\n  }\n\n  /** */\n  @Override\n  public CFSA2Serializer withFiller(byte filler) {\n    throw new UnsupportedOperationException(\"CFSA2 does not support filler. 
Use .info file.\");\n  }\n\n  /** */\n  @Override\n  public CFSA2Serializer withAnnotationSeparator(byte annotationSeparator) {\n    throw new UnsupportedOperationException(\"CFSA2 does not support separator. Use .info file.\");\n  }\n\n  /** Write a v-int to a byte array. */\n  static int writeVInt(byte[] array, int offset, int value) {\n    assert value >= 0 : \"Can't v-code negative ints.\";\n\n    while (value > 0x7F) {\n      array[offset++] = (byte) (0x80 | (value & 0x7F));\n      value >>= 7;\n    }\n    array[offset++] = (byte) value;\n\n    return offset;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/ConstantArcSizeFSA.java",
    "content": "package morfologik.fsa.builders;\n\nimport java.util.Collections;\nimport java.util.Set;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSAFlags;\n\n/**\n * An FSA with constant-size arc representation produced directly by {@link FSABuilder}.\n *\n * @see FSABuilder\n */\nfinal class ConstantArcSizeFSA extends FSA {\n  /** Size of the target address field (constant for the builder). */\n  public static final int TARGET_ADDRESS_SIZE = 4;\n\n  /** Size of the flags field (constant for the builder). */\n  public static final int FLAGS_SIZE = 1;\n\n  /** Size of the label field (constant for the builder). */\n  public static final int LABEL_SIZE = 1;\n\n  /** Size of a single arc structure. */\n  public static final int ARC_SIZE = FLAGS_SIZE + LABEL_SIZE + TARGET_ADDRESS_SIZE;\n\n  /** Offset of the flags field inside an arc. */\n  public static final int FLAGS_OFFSET = 0;\n\n  /** Offset of the label field inside an arc. */\n  public static final int LABEL_OFFSET = FLAGS_SIZE;\n\n  /** Offset of the address field inside an arc. */\n  public static final int ADDRESS_OFFSET = LABEL_OFFSET + LABEL_SIZE;\n\n  /** A dummy address of the terminal state. */\n  static final int TERMINAL_STATE = 0;\n\n  /** An arc flag indicating the target node of an arc corresponds to a final state. */\n  public static final int BIT_ARC_FINAL = 1 << 1;\n\n  /** An arc flag indicating the arc is last within its state. */\n  public static final int BIT_ARC_LAST = 1 << 0;\n\n  /**\n   * An epsilon state. The first and only arc of this state points either to the root or to the\n   * terminal state, indicating an empty automaton.\n   */\n  private final int epsilon;\n\n  /** FSA data, serialized as a byte array. */\n  private final byte[] data;\n\n  /**\n   * @param data FSA data. 
There must be no trailing bytes after the last state.\n   */\n  ConstantArcSizeFSA(byte[] data, int epsilon) {\n    assert epsilon == 0 : \"Epsilon is not zero?\";\n\n    this.epsilon = epsilon;\n    this.data = data;\n  }\n\n  @Override\n  public int getRootNode() {\n    return getEndNode(getFirstArc(epsilon));\n  }\n\n  @Override\n  public int getFirstArc(int node) {\n    return node;\n  }\n\n  @Override\n  public int getArc(int node, byte label) {\n    for (int arc = getFirstArc(node); arc != 0; arc = getNextArc(arc)) {\n      if (getArcLabel(arc) == label) return arc;\n    }\n    return 0;\n  }\n\n  @Override\n  public int getNextArc(int arc) {\n    if (isArcLast(arc)) return 0;\n    return arc + ARC_SIZE;\n  }\n\n  @Override\n  public byte getArcLabel(int arc) {\n    return data[arc + LABEL_OFFSET];\n  }\n\n  /** Returns the target state address of an arc. */\n  private int getArcTarget(int arc) {\n    arc += ADDRESS_OFFSET;\n    return (data[arc]) << 24\n        | (data[arc + 1] & 0xff) << 16\n        | (data[arc + 2] & 0xff) << 8\n        | (data[arc + 3] & 0xff);\n  }\n\n  @Override\n  public boolean isArcFinal(int arc) {\n    return (data[arc + FLAGS_OFFSET] & BIT_ARC_FINAL) != 0;\n  }\n\n  @Override\n  public boolean isArcTerminal(int arc) {\n    return getArcTarget(arc) == 0;\n  }\n\n  private boolean isArcLast(int arc) {\n    return (data[arc + FLAGS_OFFSET] & BIT_ARC_LAST) != 0;\n  }\n\n  @Override\n  public int getEndNode(int arc) {\n    return getArcTarget(arc);\n  }\n\n  @Override\n  public Set<FSAFlags> getFlags() {\n    return Collections.emptySet();\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/FSA5Serializer.java",
    "content": "package morfologik.fsa.builders;\n\nimport static morfologik.fsa.FSAFlags.*;\n\nimport com.carrotsearch.hppc.IntIntHashMap;\nimport com.carrotsearch.hppc.IntStack;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.util.Arrays;\nimport java.util.BitSet;\nimport java.util.EnumSet;\nimport java.util.Set;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSA5;\nimport morfologik.fsa.FSAFlags;\nimport morfologik.fsa.FSAHeader;\n\n/**\n * Serializes in-memory {@link FSA} graphs to a binary format compatible with Jan Daciuk's <code>fsa\n * </code>'s package <code>FSA5</code> format.\n *\n * <p>It is possible to serialize the automaton with numbers required for perfect hashing. See\n * {@link #withNumbers()} method.\n *\n * @see FSA5\n * @see FSA#read(java.io.InputStream)\n */\npublic final class FSA5Serializer implements FSASerializer {\n  /** Maximum number of bytes for a serialized arc. */\n  private static final int MAX_ARC_SIZE = 1 + 5;\n\n  /** Maximum number of bytes for per-node data. */\n  private static final int MAX_NODE_DATA_SIZE = 16;\n\n  /** Number of bytes for the arc's flags header (arc representation without the goto address). */\n  private static final int SIZEOF_FLAGS = 1;\n\n  /** Supported flags. */\n  private static final EnumSet<FSAFlags> flags =\n      EnumSet.of(NUMBERS, SEPARATORS, FLEXIBLE, STOPBIT, NEXTBIT);\n\n  /**\n   * @see FSA5#filler\n   */\n  public byte fillerByte = FSA5.DEFAULT_FILLER;\n\n  /**\n   * @see FSA5#annotation\n   */\n  public byte annotationByte = FSA5.DEFAULT_ANNOTATION;\n\n  /**\n   * <code>true</code> if we should serialize with numbers.\n   *\n   * @see #withNumbers()\n   */\n  private boolean withNumbers;\n\n  /** A hash map of [state, offset] pairs. */\n  private IntIntHashMap offsets = new IntIntHashMap();\n\n  /** A hash map of [state, right-language-count] pairs. 
*/\n  private IntIntHashMap numbers = new IntIntHashMap();\n\n  /**\n   * Serialize the automaton with the number of right-language sequences in each node. This is\n   * required to implement perfect hashing. The numbering also preserves the order of input\n   * sequences.\n   *\n   * @return Returns the same object for easier call chaining.\n   */\n  public FSA5Serializer withNumbers() {\n    withNumbers = true;\n    return this;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public FSA5Serializer withFiller(byte filler) {\n    this.fillerByte = filler;\n    return this;\n  }\n\n  /** {@inheritDoc} */\n  @Override\n  public FSA5Serializer withAnnotationSeparator(byte annotationSeparator) {\n    this.annotationByte = annotationSeparator;\n    return this;\n  }\n\n  /**\n   * Serialize the automaton to an output stream in <code>FSA5</code> format.\n   *\n   * @see #withNumbers()\n   * @return Returns <code>os</code> for chaining.\n   */\n  @Override\n  public <T extends OutputStream> T serialize(final FSA fsa, T os) throws IOException {\n\n    // Prepare space for arc offsets and linearize all the states.\n    int[] linearized = linearize(fsa);\n\n    /*\n     * Calculate the number of bytes required for the node data, if\n     * serializing with numbers.\n     */\n    int nodeDataLength = 0;\n    if (withNumbers) {\n      this.numbers = FSAUtils.rightLanguageForAllStates(fsa);\n      int maxNumber = numbers.get(fsa.getRootNode());\n      while (maxNumber > 0) {\n        nodeDataLength++;\n        maxNumber >>>= 8;\n      }\n    }\n\n    // Calculate minimal goto length.\n    int gtl = 1;\n    while (true) {\n      // First pass: calculate offsets of states.\n      if (!emitArcs(fsa, null, linearized, gtl, nodeDataLength)) {\n        gtl++;\n        continue;\n      }\n\n      // Second pass: check if goto overflows anywhere.\n      if (emitArcs(fsa, null, linearized, gtl, nodeDataLength)) break;\n\n      gtl++;\n    }\n\n    /*\n     * Emit the 
header.\n     */\n    FSAHeader.write(os, FSA5.VERSION);\n    os.write(fillerByte);\n    os.write(annotationByte);\n    os.write((nodeDataLength << 4) | gtl);\n\n    /*\n     * Emit the automaton.\n     */\n    boolean gtlUnchanged = emitArcs(fsa, os, linearized, gtl, nodeDataLength);\n    assert gtlUnchanged : \"gtl changed in the final pass.\";\n\n    return os;\n  }\n\n  /** Return supported flags. */\n  @Override\n  public Set<FSAFlags> getFlags() {\n    return flags;\n  }\n\n  /** Linearization of states. */\n  private int[] linearize(final FSA fsa) {\n    int[] linearized = new int[0];\n    int last = 0;\n\n    BitSet visited = new BitSet();\n    IntStack nodes = new IntStack();\n    nodes.push(fsa.getRootNode());\n\n    while (!nodes.isEmpty()) {\n      final int node = nodes.pop();\n      if (visited.get(node)) {\n        continue;\n      }\n\n      if (last >= linearized.length) {\n        linearized = Arrays.copyOf(linearized, linearized.length + 100000);\n      }\n\n      visited.set(node);\n      linearized[last++] = node;\n\n      for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n        if (!fsa.isArcTerminal(arc)) {\n          int target = fsa.getEndNode(arc);\n          if (!visited.get(target)) nodes.push(target);\n        }\n      }\n    }\n\n    return Arrays.copyOf(linearized, last);\n  }\n\n  /** Update arc offsets assuming the given goto length. 
*/\n  private boolean emitArcs(FSA fsa, OutputStream os, int[] linearized, int gtl, int nodeDataLength)\n      throws IOException {\n    final ByteBuffer bb = ByteBuffer.allocate(Math.max(MAX_NODE_DATA_SIZE, MAX_ARC_SIZE));\n\n    int offset = 0;\n\n    // Add dummy terminal state.\n    offset += emitNodeData(bb, os, nodeDataLength, 0);\n    offset += emitArc(bb, os, gtl, 0, (byte) 0, 0);\n\n    // Add epsilon state.\n    offset += emitNodeData(bb, os, nodeDataLength, 0);\n    if (fsa.getRootNode() != 0)\n      offset += emitArc(bb, os, gtl, FSA5.BIT_LAST_ARC | FSA5.BIT_TARGET_NEXT, (byte) '^', 0);\n    else offset += emitArc(bb, os, gtl, FSA5.BIT_LAST_ARC, (byte) '^', 0);\n\n    int maxStates = linearized.length;\n    for (int j = 0; j < maxStates; j++) {\n      final int s = linearized[j];\n\n      if (os == null) {\n        offsets.put(s, offset);\n      } else {\n        assert offsets.get(s) == offset : s + \" \" + offsets.get(s) + \" \" + offset;\n      }\n\n      offset += emitNodeData(bb, os, nodeDataLength, withNumbers ? numbers.get(s) : 0);\n\n      for (int arc = fsa.getFirstArc(s); arc != 0; arc = fsa.getNextArc(arc)) {\n        int targetOffset;\n        final int target;\n        if (fsa.isArcTerminal(arc)) {\n          targetOffset = 0;\n          target = 0;\n        } else {\n          target = fsa.getEndNode(arc);\n          targetOffset = offsets.get(target);\n        }\n\n        int flags = 0;\n        if (fsa.isArcFinal(arc)) {\n          flags |= FSA5.BIT_FINAL_ARC;\n        }\n\n        if (fsa.getNextArc(arc) == 0) {\n          flags |= FSA5.BIT_LAST_ARC;\n\n          if (j + 1 < maxStates && target == linearized[j + 1] && targetOffset != 0) {\n            flags |= FSA5.BIT_TARGET_NEXT;\n            targetOffset = 0;\n          }\n        }\n\n        int bytes = emitArc(bb, os, gtl, flags, fsa.getArcLabel(arc), targetOffset);\n        if (bytes < 0)\n          // gtl too small. 
interrupt eagerly.\n          return false;\n\n        offset += bytes;\n      }\n    }\n\n    return true;\n  }\n\n  /** */\n  private int emitArc(\n      ByteBuffer bb, OutputStream os, int gtl, int flags, byte label, int targetOffset)\n      throws IOException {\n    int arcBytes = (flags & FSA5.BIT_TARGET_NEXT) != 0 ? SIZEOF_FLAGS : gtl;\n\n    flags |= (targetOffset << 3);\n    bb.put(label);\n    for (int b = 0; b < arcBytes; b++) {\n      bb.put((byte) flags);\n      flags >>>= 8;\n    }\n\n    if (flags != 0) {\n      // gtl too small. interrupt eagerly.\n      return -1;\n    }\n\n    bb.flip();\n    int bytes = bb.remaining();\n    if (os != null) {\n      os.write(bb.array(), bb.position(), bb.remaining());\n    }\n    bb.clear();\n\n    return bytes;\n  }\n\n  /** */\n  private int emitNodeData(ByteBuffer bb, OutputStream os, int nodeDataLength, int number)\n      throws IOException {\n    if (nodeDataLength > 0 && os != null) {\n      for (int i = 0; i < nodeDataLength; i++) {\n        bb.put((byte) number);\n        number >>>= 8;\n      }\n\n      bb.flip();\n      os.write(bb.array(), bb.position(), bb.remaining());\n      bb.clear();\n    }\n\n    return nodeDataLength;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/FSABuilder.java",
    "content": "package morfologik.fsa.builders;\n\nimport static morfologik.fsa.builders.ConstantArcSizeFSA.*;\n\nimport java.util.*;\nimport morfologik.fsa.FSA;\n\n/**\n * Fast, memory-conservative finite state automaton builder, returning an in-memory {@link FSA} that\n * is a tradeoff between construction speed and memory consumption. Use serializers to compress the\n * returned automaton into more compact form.\n *\n * @see FSASerializer\n */\npublic final class FSABuilder {\n  /**\n   * Debug and information constants.\n   *\n   * @see FSABuilder#getInfo()\n   */\n  public enum InfoEntry {\n    SERIALIZATION_BUFFER_SIZE(\"Serialization buffer size\"),\n    SERIALIZATION_BUFFER_REALLOCATIONS(\"Serialization buffer reallocs\"),\n    CONSTANT_ARC_AUTOMATON_SIZE(\"Constant arc FSA size\"),\n    MAX_ACTIVE_PATH_LENGTH(\"Max active path\"),\n    STATE_REGISTRY_TABLE_SLOTS(\"Registry hash slots\"),\n    STATE_REGISTRY_SIZE(\"Registry hash entries\"),\n    ESTIMATED_MEMORY_CONSUMPTION_MB(\"Estimated mem consumption (MB)\");\n\n    private final String stringified;\n\n    InfoEntry(String stringified) {\n      this.stringified = stringified;\n    }\n\n    @Override\n    public String toString() {\n      return stringified;\n    }\n  }\n\n  /** A megabyte. */\n  private static final int MB = 1024 * 1024;\n\n  /** Internal serialized FSA buffer expand ratio. */\n  private static final int BUFFER_GROWTH_SIZE = 5 * MB;\n\n  /** Maximum number of labels from a single state. */\n  private static final int MAX_LABELS = 256;\n\n  /** A comparator comparing full byte arrays. Unsigned byte comparisons ('C'-locale). */\n  public static final Comparator<byte[]> LEXICAL_ORDERING =\n      new Comparator<byte[]>() {\n        public int compare(byte[] o1, byte[] o2) {\n          return FSABuilder.compare(o1, 0, o1.length, o2, 0, o2.length);\n        }\n      };\n\n  /** Internal serialized FSA buffer expand ratio. 
*/\n  private final int bufferGrowthSize;\n\n  /**\n   * Holds serialized and mutable states. Each state is a sequential list of arcs, the last arc is\n   * marked with {@link #BIT_ARC_LAST}.\n   */\n  private byte[] serialized = new byte[0];\n\n  /**\n   * Number of bytes already taken in {@link #serialized}. Start from 1 to keep 0 a sentinel value\n   * (for the hash set and final state).\n   */\n  private int size;\n\n  /**\n   * States on the \"active path\" (still mutable). Values are addresses of each state's first arc.\n   */\n  private int[] activePath = new int[0];\n\n  /** Current length of the active path. */\n  private int activePathLen;\n\n  /** The next offset at which an arc will be added to the given state on {@link #activePath}. */\n  private int[] nextArcOffset = new int[0];\n\n  /** Root state. If negative, the automaton has been built already and cannot be extended. */\n  private int root;\n\n  /**\n   * An epsilon state. The first and only arc of this state points either to the root or to the\n   * terminal state, indicating an empty automaton.\n   */\n  private int epsilon;\n\n  /**\n   * Hash set of state addresses in {@link #serialized}, hashed by {@link #hash(int, int)}. Zero\n   * reserved for an unoccupied slot.\n   */\n  private int[] hashSet = new int[2];\n\n  /** Number of entries currently stored in {@link #hashSet}. */\n  private int hashSize = 0;\n\n  /**\n   * Previous sequence added to the automaton in {@link #add(byte[], int, int)}. Used in assertions\n   * only.\n   */\n  private byte[] previous;\n\n  /** Information about the automaton and its compilation. */\n  private TreeMap<InfoEntry, Object> info;\n\n  /** {@link #previous} sequence's length, used in assertions only. 
*/\n  private int previousLength;\n\n  /** Creates a builder with the default buffer growth size. */\n  public FSABuilder() {\n    this(BUFFER_GROWTH_SIZE);\n  }\n\n  /**\n   * @param bufferGrowthSize Buffer growth size (in bytes) when constructing the automaton.\n   */\n  public FSABuilder(int bufferGrowthSize) {\n    this.bufferGrowthSize = Math.max(bufferGrowthSize, ARC_SIZE * MAX_LABELS);\n\n    // Allocate epsilon state.\n    epsilon = allocateState(1);\n    serialized[epsilon + FLAGS_OFFSET] |= BIT_ARC_LAST;\n\n    // Allocate root, with an initial empty set of output arcs.\n    expandActivePath(1);\n    root = activePath[0];\n  }\n\n  /**\n   * Add a single sequence of bytes to the FSA. The input must be lexicographically greater than any\n   * previously added sequence.\n   *\n   * @param sequence The array holding input sequence of bytes.\n   * @param start Starting offset (inclusive).\n   * @param len Length of the input sequence (at least 1 byte).\n   */\n  public void add(byte[] sequence, int start, int len) {\n    assert serialized != null : \"Automaton already built.\";\n    assert previous == null\n            || len == 0\n            || compare(previous, 0, previousLength, sequence, start, len) <= 0\n        : \"Input must be sorted: \"\n            + Arrays.toString(Arrays.copyOf(previous, previousLength))\n            + \" >= \"\n            + Arrays.toString(Arrays.copyOfRange(sequence, start, start + len));\n    assert setPrevious(sequence, start, len);\n\n    // Determine common prefix length.\n    final int commonPrefix = commonPrefix(sequence, start, len);\n\n    // Make room for extra states on active path, if needed.\n    expandActivePath(len);\n\n    // Freeze all the states after the common prefix.\n    for (int i = activePathLen - 1; i > commonPrefix; i--) {\n      final int frozenState = freezeState(i);\n      setArcTarget(nextArcOffset[i - 1] - ARC_SIZE, frozenState);\n      nextArcOffset[i] = activePath[i];\n    }\n\n    // Create arcs to new suffix states.\n    for (int i = commonPrefix + 1, j = 
start + commonPrefix; i <= len; i++) {\n      final int p = nextArcOffset[i - 1];\n\n      serialized[p + FLAGS_OFFSET] = (byte) (i == len ? BIT_ARC_FINAL : 0);\n      serialized[p + LABEL_OFFSET] = sequence[j++];\n      setArcTarget(p, i == len ? TERMINAL_STATE : activePath[i]);\n\n      nextArcOffset[i - 1] = p + ARC_SIZE;\n    }\n\n    // Save last sequence's length so that we don't need to calculate it again.\n    this.activePathLen = len;\n  }\n\n  /** Number of serialization buffer reallocations. */\n  private int serializationBufferReallocations;\n\n  /**\n   * @return Finalizes the construction of the automaton and returns it.\n   */\n  public FSA complete() {\n    add(new byte[0], 0, 0);\n\n    if (nextArcOffset[0] - activePath[0] == 0) {\n      // An empty FSA.\n      setArcTarget(epsilon, TERMINAL_STATE);\n    } else {\n      // An automaton with at least a single arc from root.\n      root = freezeState(0);\n      setArcTarget(epsilon, root);\n    }\n\n    info = new TreeMap<InfoEntry, Object>();\n    info.put(InfoEntry.SERIALIZATION_BUFFER_SIZE, serialized.length);\n    info.put(InfoEntry.SERIALIZATION_BUFFER_REALLOCATIONS, serializationBufferReallocations);\n    info.put(InfoEntry.CONSTANT_ARC_AUTOMATON_SIZE, size);\n    info.put(InfoEntry.MAX_ACTIVE_PATH_LENGTH, activePath.length);\n    info.put(InfoEntry.STATE_REGISTRY_TABLE_SLOTS, hashSet.length);\n    info.put(InfoEntry.STATE_REGISTRY_SIZE, hashSize);\n    info.put(\n        InfoEntry.ESTIMATED_MEMORY_CONSUMPTION_MB,\n        (this.serialized.length + this.hashSet.length * 4) / (double) MB);\n\n    final FSA fsa =\n        new ConstantArcSizeFSA(java.util.Arrays.copyOf(this.serialized, this.size), epsilon);\n    this.serialized = null;\n    this.hashSet = null;\n    return fsa;\n  }\n\n  /**\n   * Build a minimal, deterministic automaton from a sorted list of byte sequences.\n   *\n   * @param input Input sequences to build automaton from.\n   * @return Returns the automaton encoding all input 
sequences.\n   */\n  public static FSA build(byte[][] input) {\n    final FSABuilder builder = new FSABuilder();\n\n    for (byte[] chs : input) {\n      builder.add(chs, 0, chs.length);\n    }\n\n    return builder.complete();\n  }\n\n  /**\n   * Build a minimal, deterministic automaton from an iterable list of byte sequences.\n   *\n   * @param input Input sequences to build automaton from.\n   * @return Returns the automaton encoding all input sequences.\n   */\n  public static FSA build(Iterable<byte[]> input) {\n    final FSABuilder builder = new FSABuilder();\n\n    for (byte[] chs : input) {\n      builder.add(chs, 0, chs.length);\n    }\n\n    return builder.complete();\n  }\n\n  /**\n   * @return Returns various statistics concerning the FSA and its compilation.\n   * @see InfoEntry\n   */\n  public Map<InfoEntry, Object> getInfo() {\n    return info;\n  }\n\n  /** Is this arc the state's last? */\n  private boolean isArcLast(int arc) {\n    return (serialized[arc + FLAGS_OFFSET] & BIT_ARC_LAST) != 0;\n  }\n\n  /** Is this arc final? */\n  private boolean isArcFinal(int arc) {\n    return (serialized[arc + FLAGS_OFFSET] & BIT_ARC_FINAL) != 0;\n  }\n\n  /** Get an arc's label. */\n  private byte getArcLabel(int arc) {\n    return serialized[arc + LABEL_OFFSET];\n  }\n\n  /** Fills the target state address of an arc. */\n  private void setArcTarget(int arc, int state) {\n    arc += ADDRESS_OFFSET + TARGET_ADDRESS_SIZE;\n    for (int i = 0; i < TARGET_ADDRESS_SIZE; i++) {\n      serialized[--arc] = (byte) state;\n      state >>>= 8;\n    }\n  }\n\n  /** Returns the target state address of an arc. 
*/\n  private int getArcTarget(int arc) {\n    arc += ADDRESS_OFFSET;\n    return (serialized[arc]) << 24\n        | (serialized[arc + 1] & 0xff) << 16\n        | (serialized[arc + 2] & 0xff) << 8\n        | (serialized[arc + 3] & 0xff);\n  }\n\n  /**\n   * @return The number of common prefix characters with the previous sequence.\n   */\n  private int commonPrefix(byte[] sequence, int start, int len) {\n    // Empty root state case.\n    final int max = Math.min(len, activePathLen);\n    int i;\n    for (i = 0; i < max; i++) {\n      final int lastArc = nextArcOffset[i] - ARC_SIZE;\n      if (sequence[start++] != getArcLabel(lastArc)) {\n        break;\n      }\n    }\n\n    return i;\n  }\n\n  /**\n   * Freeze a state: first try to find an equivalent state in the interned states dictionary; if\n   * found, return it. Otherwise, serialize the mutable state at <code>activePathIndex</code> and\n   * return it.\n   */\n  private int freezeState(final int activePathIndex) {\n    final int start = activePath[activePathIndex];\n    final int end = nextArcOffset[activePathIndex];\n    final int len = end - start;\n\n    // Set the last arc flag on the current active path's state.\n    serialized[end - ARC_SIZE + FLAGS_OFFSET] |= BIT_ARC_LAST;\n\n    // Try to locate a state with identical content in the hash set.\n    final int bucketMask = (hashSet.length - 1);\n    int slot = hash(start, len) & bucketMask;\n    for (int i = 0; ; ) {\n      int state = hashSet[slot];\n      if (state == 0) {\n        state = hashSet[slot] = serialize(activePathIndex);\n        if (++hashSize > hashSet.length / 2) expandAndRehash();\n        return state;\n      } else if (equivalent(state, start, len)) {\n        return state;\n      }\n\n      slot = (slot + (++i)) & bucketMask;\n    }\n  }\n\n  /** Reallocate and rehash the hash set. 
*/\n  private void expandAndRehash() {\n    final int[] newHashSet = new int[hashSet.length * 2];\n    final int bucketMask = (newHashSet.length - 1);\n\n    for (int j = 0; j < hashSet.length; j++) {\n      final int state = hashSet[j];\n      if (state > 0) {\n        int slot = hash(state, stateLength(state)) & bucketMask;\n        for (int i = 0; newHashSet[slot] > 0; ) {\n          slot = (slot + (++i)) & bucketMask;\n        }\n        newHashSet[slot] = state;\n      }\n    }\n    this.hashSet = newHashSet;\n  }\n\n  /** The total length of the serialized state data (all arcs). */\n  private int stateLength(int state) {\n    int arc = state;\n    while (!isArcLast(arc)) {\n      arc += ARC_SIZE;\n    }\n    return arc - state + ARC_SIZE;\n  }\n\n  /** Return <code>true</code> if two regions in {@link #serialized} are identical. */\n  private boolean equivalent(int start1, int start2, int len) {\n    if (start1 + len > size || start2 + len > size) return false;\n\n    while (len-- > 0) if (serialized[start1++] != serialized[start2++]) return false;\n\n    return true;\n  }\n\n  /** Serialize a given state on the active path. */\n  private int serialize(final int activePathIndex) {\n    expandBuffers();\n\n    final int newState = size;\n    final int start = activePath[activePathIndex];\n    final int len = nextArcOffset[activePathIndex] - start;\n    System.arraycopy(serialized, start, serialized, newState, len);\n\n    size += len;\n    return newState;\n  }\n\n  /** Hash code of a fragment of {@link #serialized} array. */\n  private int hash(int start, int byteCount) {\n    assert byteCount % ARC_SIZE == 0 : \"Not a multiple of the arc size.\";\n\n    int h = 0;\n    for (int arcs = byteCount / ARC_SIZE; --arcs >= 0; start += ARC_SIZE) {\n      h = 17 * h + getArcLabel(start);\n      h = 17 * h + getArcTarget(start);\n      if (isArcFinal(start)) h += 17;\n    }\n\n    return h;\n  }\n\n  /** Append a new mutable state to the active path. 
*/\n  private void expandActivePath(int size) {\n    if (activePath.length < size) {\n      final int p = activePath.length;\n      activePath = java.util.Arrays.copyOf(activePath, size);\n      nextArcOffset = java.util.Arrays.copyOf(nextArcOffset, size);\n\n      for (int i = p; i < size; i++) {\n        nextArcOffset[i] = activePath[i] = allocateState(/* assume max labels count */ MAX_LABELS);\n      }\n    }\n  }\n\n  /** Expand internal buffers for the next state. */\n  private void expandBuffers() {\n    if (this.serialized.length < size + ARC_SIZE * MAX_LABELS) {\n      serialized = java.util.Arrays.copyOf(serialized, serialized.length + bufferGrowthSize);\n      serializationBufferReallocations++;\n    }\n  }\n\n  /**\n   * Allocate space for a state with the given number of outgoing labels.\n   *\n   * @return state offset\n   */\n  private int allocateState(int labels) {\n    expandBuffers();\n    final int state = size;\n    size += labels * ARC_SIZE;\n    return state;\n  }\n\n  /** Copy <code>current</code> into an internal buffer. */\n  private boolean setPrevious(byte[] sequence, int start, int length) {\n    if (previous == null || previous.length < length) {\n      previous = new byte[length];\n    }\n\n    System.arraycopy(sequence, start, previous, 0, length);\n    previousLength = length;\n    return true;\n  }\n\n  /**\n   * Lexicographic order of input sequences. By default, consistent with the \"C\" sort (absolute\n   * value of bytes, 0-255).\n   */\n  private static int compare(byte[] s1, int start1, int lens1, byte[] s2, int start2, int lens2) {\n    final int max = Math.min(lens1, lens2);\n\n    for (int i = 0; i < max; i++) {\n      final byte c1 = s1[start1++];\n      final byte c2 = s2[start2++];\n      if (c1 != c2) return (c1 & 0xff) - (c2 & 0xff);\n    }\n\n    return lens1 - lens2;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/FSAInfo.java",
    "content": "package morfologik.fsa.builders;\n\nimport com.carrotsearch.hppc.IntIntHashMap;\nimport java.util.BitSet;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSA5;\n\n/** Compute additional information about an FSA: number of arcs, nodes, etc. */\npublic final class FSAInfo {\n  /** Computes the exact number of states and nodes by recursively traversing the FSA. */\n  private static class NodeVisitor {\n    final BitSet visitedArcs = new BitSet();\n    final BitSet visitedNodes = new BitSet();\n\n    int nodes;\n    int arcs;\n    int totalArcs;\n\n    private final FSA fsa;\n\n    NodeVisitor(FSA fsa) {\n      this.fsa = fsa;\n    }\n\n    public void visitNode(final int node) {\n      if (visitedNodes.get(node)) {\n        return;\n      }\n      visitedNodes.set(node);\n\n      nodes++;\n      for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n        if (!visitedArcs.get(arc)) {\n          arcs++;\n        }\n        totalArcs++;\n        visitedArcs.set(arc);\n\n        if (!fsa.isArcTerminal(arc)) {\n          visitNode(fsa.getEndNode(arc));\n        }\n      }\n    }\n  }\n\n  /** Computes the exact number of final states. */\n  private static class FinalStateVisitor {\n    final IntIntHashMap visitedNodes = new IntIntHashMap();\n\n    private final FSA fsa;\n\n    FinalStateVisitor(FSA fsa) {\n      this.fsa = fsa;\n    }\n\n    public int visitNode(int node) {\n      int index = visitedNodes.indexOf(node);\n      if (index >= 0) {\n        return visitedNodes.indexGet(index);\n      }\n\n      int fromHere = 0;\n      for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n        if (fsa.isArcFinal(arc)) fromHere++;\n\n        if (!fsa.isArcTerminal(arc)) {\n          fromHere += visitNode(fsa.getEndNode(arc));\n        }\n      }\n      visitedNodes.put(node, fromHere);\n      return fromHere;\n    }\n  }\n\n  /** Number of nodes in the automaton. 
*/\n  public final int nodeCount;\n\n  /**\n   * Number of arcs in the automaton, excluding arcs from the zero (initial) node and the arc from\n   * the start node to the root node.\n   */\n  public final int arcsCount;\n\n  /** Total number of arcs, counting arcs that physically overlap due to merging. */\n  public final int arcsCountTotal;\n\n  /** Number of final states (number of input sequences stored in the automaton). */\n  public final int finalStatesCount;\n\n  /** Arcs size (in serialized form). */\n  public final int size;\n\n  /*\n   *\n   */\n  public FSAInfo(FSA fsa) {\n    final NodeVisitor w = new NodeVisitor(fsa);\n    int root = fsa.getRootNode();\n    if (root > 0) {\n      w.visitNode(root);\n    }\n\n    this.nodeCount = 1 + w.nodes;\n    this.arcsCount = 1 + w.arcs;\n    this.arcsCountTotal = 1 + w.totalArcs;\n\n    final FinalStateVisitor fsv = new FinalStateVisitor(fsa);\n    this.finalStatesCount = fsv.visitNode(fsa.getRootNode());\n\n    if (fsa instanceof FSA5) {\n      this.size = ((FSA5) fsa).arcs.length;\n    } else {\n      this.size = 0;\n    }\n  }\n\n  /*\n   *\n   */\n  public FSAInfo(int nodeCount, int arcsCount, int arcsCountTotal, int finalStatesCount) {\n    this.nodeCount = nodeCount;\n    this.arcsCount = arcsCount;\n    this.arcsCountTotal = arcsCountTotal;\n    this.finalStatesCount = finalStatesCount;\n    this.size = 0;\n  }\n\n  /*\n   *\n   */\n  @Override\n  public String toString() {\n    return \"Nodes: \"\n        + nodeCount\n        + \", arcs visited: \"\n        + arcsCount\n        + \", arcs total: \"\n        + arcsCountTotal\n        + \", final states: \"\n        + finalStatesCount\n        + \", size: \"\n        + size;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/FSASerializer.java",
    "content": "package morfologik.fsa.builders;\n\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.util.Set;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSAFlags;\n\n/** All FSA serializers (to binary formats) will implement this interface. */\npublic interface FSASerializer {\n  /**\n   * Serialize a finite state automaton to an output stream.\n   *\n   * @param fsa The automaton to serialize.\n   * @param os The output stream to serialize to.\n   * @param <T> A subclass of {@link OutputStream}, returned for chaining.\n   * @return Returns <code>T</code> for chaining.\n   * @throws IOException Rethrown if an I/O error occurs.\n   */\n  public <T extends OutputStream> T serialize(FSA fsa, T os) throws IOException;\n\n  /**\n   * @return Returns the set of flags supported by the serializer (and the output automaton).\n   */\n  public Set<FSAFlags> getFlags();\n\n  /**\n   * Sets the filler separator (only if {@link #getFlags()} returns {@link FSAFlags#SEPARATORS}).\n   *\n   * @param filler The filler separator byte.\n   * @return Returns <code>this</code> for call chaining.\n   */\n  public FSASerializer withFiller(byte filler);\n\n  /**\n   * Sets the annotation separator (only if {@link #getFlags()} returns {@link\n   * FSAFlags#SEPARATORS}).\n   *\n   * @param annotationSeparator The annotation separator byte.\n   * @return Returns <code>this</code> for call chaining.\n   */\n  public FSASerializer withAnnotationSeparator(byte annotationSeparator);\n\n  /**\n   * Enables support for right language count on nodes, speeding up perfect hash counts (only if\n   * {@link #getFlags()} returns {@link FSAFlags#NUMBERS}).\n   *\n   * @return Returns <code>this</code> for call chaining.\n   */\n  public FSASerializer withNumbers();\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/main/java/morfologik/fsa/builders/FSAUtils.java",
    "content": "package morfologik.fsa.builders;\n\nimport com.carrotsearch.hppc.IntIntHashMap;\nimport java.io.IOException;\nimport java.io.StringWriter;\nimport java.io.Writer;\nimport java.util.BitSet;\nimport java.util.TreeMap;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSA5;\nimport morfologik.fsa.FSAFlags;\nimport morfologik.fsa.StateVisitor;\n\n/** Other FSA-related utilities not directly associated with the class hierarchy. */\npublic final class FSAUtils {\n  public static final class IntIntHolder {\n    public int a;\n    public int b;\n\n    public IntIntHolder(int a, int b) {\n      this.a = a;\n      this.b = b;\n    }\n\n    public IntIntHolder() {}\n  }\n\n  /**\n   * Returns the right-language reachable from a given FSA node, formatted as an input for the\n   * graphviz package (expressed in the <code>dot</code> language).\n   *\n   * @param fsa The automaton to visualize.\n   * @param node Starting node (subgraph will be visualized unless it's the automaton's root node).\n   * @return Returns the dot language description of the automaton.\n   */\n  public static String toDot(FSA fsa, int node) {\n    try {\n      StringWriter w = new StringWriter();\n      toDot(w, fsa, node);\n      return w.toString();\n    } catch (IOException e) {\n      throw new RuntimeException(e);\n    }\n  }\n\n  /**\n   * Saves the right-language reachable from a given FSA node, formatted as an input for the\n   * graphviz package (expressed in the <code>dot</code> language), to the given writer.\n   *\n   * @param w The writer to write dot language description of the automaton.\n   * @param fsa The automaton to visualize.\n   * @param node Starting node (subgraph will be visualized unless it's the automaton's root node).\n   * @throws IOException Rethrown if an I/O exception occurs.\n   */\n  public static void toDot(Writer w, FSA fsa, int node) throws IOException {\n    w.write(\"digraph Automaton {\\n\");\n    w.write(\"  rankdir = LR;\\n\");\n\n    final 
BitSet visited = new BitSet();\n\n    w.write(\"  stop [shape=doublecircle,label=\\\"\\\"];\\n\");\n    w.write(\"  initial [shape=plaintext,label=\\\"\\\"];\\n\");\n    w.write(\"  initial -> \" + node + \"\\n\\n\");\n\n    visitNode(w, 0, fsa, node, visited);\n    w.write(\"}\\n\");\n  }\n\n  private static void visitNode(Writer w, int d, FSA fsa, int s, BitSet visited)\n      throws IOException {\n    visited.set(s);\n    w.write(\"  \");\n    w.write(Integer.toString(s));\n\n    if (fsa.getFlags().contains(FSAFlags.NUMBERS)) {\n      int nodeNumber = fsa.getRightLanguageCount(s);\n      w.write(\" [shape=circle,label=\\\"\" + nodeNumber + \"\\\"];\\n\");\n    } else {\n      w.write(\" [shape=circle,label=\\\"\\\"];\\n\");\n    }\n\n    for (int arc = fsa.getFirstArc(s); arc != 0; arc = fsa.getNextArc(arc)) {\n      w.write(\"  \");\n      w.write(Integer.toString(s));\n      w.write(\" -> \");\n      if (fsa.isArcTerminal(arc)) {\n        w.write(\"stop\");\n      } else {\n        w.write(Integer.toString(fsa.getEndNode(arc)));\n      }\n\n      final byte label = fsa.getArcLabel(arc);\n      w.write(\" [label=\\\"\");\n      if (Character.isLetterOrDigit(label)) w.write((char) label);\n      else {\n        w.write(\"0x\");\n        w.write(Integer.toHexString(label & 0xFF));\n      }\n      w.write(\"\\\"\");\n      if (fsa.isArcFinal(arc)) w.write(\" arrowhead=\\\"tee\\\"\");\n      if (fsa instanceof FSA5) {\n        if (((FSA5) fsa).isNextSet(arc)) {\n          w.write(\" color=\\\"blue\\\"\");\n        }\n      }\n\n      w.write(\"]\\n\");\n    }\n\n    for (int arc = fsa.getFirstArc(s); arc != 0; arc = fsa.getNextArc(arc)) {\n      if (!fsa.isArcTerminal(arc)) {\n        int endNode = fsa.getEndNode(arc);\n        if (!visited.get(endNode)) {\n          visitNode(w, d + 1, fsa, endNode, visited);\n        }\n      }\n    }\n  }\n\n  /**\n   * Calculate fan-out ratio (how many nodes have a given number of outgoing arcs).\n   *\n   * @param fsa The 
automaton to calculate fanout for.\n   * @param root The starting node for calculations.\n   * @return The returned map contains keys for the number of outgoing arcs and an associated value\n   *     being the number of nodes with that arc number.\n   */\n  public static TreeMap<Integer, Integer> calculateFanOuts(final FSA fsa, int root) {\n    final int[] result = new int[256];\n    fsa.visitInPreOrder(\n        new StateVisitor() {\n          public boolean accept(int state) {\n            int count = 0;\n            for (int arc = fsa.getFirstArc(state); arc != 0; arc = fsa.getNextArc(arc)) {\n              count++;\n            }\n            result[count]++;\n            return true;\n          }\n        });\n\n    TreeMap<Integer, Integer> output = new TreeMap<Integer, Integer>();\n\n    int low = 1; // Omit #0, there is always a single node like that (dummy).\n    while (low < result.length && result[low] == 0) {\n      low++;\n    }\n\n    int high = result.length - 1;\n    while (high >= 0 && result[high] == 0) {\n      high--;\n    }\n\n    for (int i = low; i <= high; i++) {\n      output.put(i, result[i]);\n    }\n\n    return output;\n  }\n\n  /**\n   * Calculate the size of \"right language\" for each state in an FSA. The right language is the\n   * number of sequences encoded from a given node in the automaton.\n   *\n   * @param fsa The automaton to calculate right language for.\n   * @return Returns a map with node identifiers as keys and their right language counts as\n   *     associated values.\n   */\n  public static IntIntHashMap rightLanguageForAllStates(final FSA fsa) {\n    final IntIntHashMap numbers = new IntIntHashMap();\n\n    fsa.visitInPostOrder(\n        new StateVisitor() {\n          public boolean accept(int state) {\n            int thisNodeNumber = 0;\n            for (int arc = fsa.getFirstArc(state); arc != 0; arc = fsa.getNextArc(arc)) {\n              thisNodeNumber +=\n                  (fsa.isArcFinal(arc) ? 
1 : 0)\n                      + (fsa.isArcTerminal(arc) ? 0 : numbers.get(fsa.getEndNode(arc)));\n            }\n            numbers.put(state, thisNodeNumber);\n\n            return true;\n          }\n        });\n\n    return numbers;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/CFSA2SerializerTest.java",
    "content": "package morfologik.fsa.builders;\n\n/** */\npublic class CFSA2SerializerTest extends SerializerTestBase {\n  protected CFSA2Serializer createSerializer() {\n    return new CFSA2Serializer();\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/FSA5SerializerTest.java",
    "content": "package morfologik.fsa.builders;\n\n/** */\npublic class FSA5SerializerTest extends SerializerTestBase {\n  protected FSA5Serializer createSerializer() {\n    return new FSA5Serializer();\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/FSA5Test.java",
    "content": "package morfologik.fsa.builders;\n\nimport static morfologik.fsa.FSAFlags.*;\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.List;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSA5;\nimport morfologik.fsa.FSAFlags;\nimport org.junit.jupiter.api.Test;\n\n/** Additional tests for {@link FSA5}. */\npublic final class FSA5Test extends TestBase {\n  public List<String> expected = Arrays.asList(\"a\", \"aba\", \"ac\", \"b\", \"ba\", \"c\");\n\n  @Test\n  public void testVersion5() throws IOException {\n    final FSA fsa = FSA.read(this.getClass().getResourceAsStream(\"abc.fsa\"));\n    assertFalse(fsa.getFlags().contains(FSAFlags.NUMBERS));\n    verifyContent(expected, fsa);\n  }\n\n  @Test\n  public void testVersion5WithNumbers() throws IOException {\n    final FSA fsa = FSA.read(this.getClass().getResourceAsStream(\"abc-numbers.fsa\"));\n\n    verifyContent(expected, fsa);\n    assertTrue(fsa.getFlags().contains(FSAFlags.NUMBERS));\n  }\n\n  @Test\n  public void testArcsAndNodes() throws IOException {\n    final FSA fsa1 = FSA.read(this.getClass().getResourceAsStream(\"abc.fsa\"));\n    final FSA fsa2 = FSA.read(this.getClass().getResourceAsStream(\"abc-numbers.fsa\"));\n\n    FSAInfo info1 = new FSAInfo(fsa1);\n    FSAInfo info2 = new FSAInfo(fsa2);\n\n    assertEquals(info1.arcsCount, info2.arcsCount);\n    assertEquals(info1.nodeCount, info2.nodeCount);\n\n    assertEquals(4, info2.nodeCount);\n    assertEquals(7, info2.arcsCount);\n  }\n\n  @Test\n  public void testNumbers() throws IOException {\n    final FSA fsa = FSA.read(this.getClass().getResourceAsStream(\"abc-numbers.fsa\"));\n\n    assertTrue(fsa.getFlags().contains(NEXTBIT));\n\n    // Get all numbers for nodes.\n    byte[] buffer = new byte[128];\n    final ArrayList<String> result = new ArrayList<String>();\n    
walkNode(buffer, 0, fsa, fsa.getRootNode(), 0, result);\n\n    Collections.sort(result);\n    assertEquals(Arrays.asList(\"0 c\", \"1 b\", \"2 ba\", \"3 a\", \"4 ac\", \"5 aba\"), result);\n  }\n\n  public static void walkNode(\n      byte[] buffer, int depth, FSA fsa, int node, int cnt, List<String> result)\n      throws IOException {\n    for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n      buffer[depth] = fsa.getArcLabel(arc);\n\n      if (fsa.isArcFinal(arc) || fsa.isArcTerminal(arc)) {\n        result.add(cnt + \" \" + new String(buffer, 0, depth + 1, \"UTF-8\"));\n      }\n\n      if (fsa.isArcFinal(arc)) {\n        cnt++;\n      }\n\n      if (!fsa.isArcTerminal(arc)) {\n        walkNode(buffer, depth + 1, fsa, fsa.getEndNode(arc), cnt, result);\n        cnt += fsa.getRightLanguageCount(fsa.getEndNode(arc));\n      }\n    }\n  }\n\n  private static void verifyContent(List<String> expected, FSA fsa) throws IOException {\n    final ArrayList<String> actual = new ArrayList<String>();\n\n    int count = 0;\n    for (ByteBuffer bb : fsa.getSequences()) {\n      assertEquals(0, bb.arrayOffset());\n      assertEquals(0, bb.position());\n      actual.add(new String(bb.array(), 0, bb.remaining(), \"UTF-8\"));\n      count++;\n    }\n    assertEquals(expected.size(), count);\n    Collections.sort(actual);\n    assertEquals(expected, actual);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/FSABuilderTest.java",
    "content": "package morfologik.fsa.builders;\n\nimport static morfologik.fsa.builders.FSATestUtils.*;\nimport static org.junit.jupiter.api.Assertions.assertEquals;\n\nimport java.io.IOException;\nimport java.util.Arrays;\nimport java.util.Random;\nimport morfologik.fsa.FSA;\nimport org.junit.jupiter.api.BeforeAll;\nimport org.junit.jupiter.api.Test;\n\npublic class FSABuilderTest extends TestBase {\n  private static byte[][] input;\n  private static byte[][] input2;\n\n  @BeforeAll\n  public static void prepareByteInput(Random rnd) {\n    input = generateRandom(rnd, 25000, new MinMax(1, 20), new MinMax(0, 255));\n    input2 = generateRandom(rnd, 40, new MinMax(1, 20), new MinMax(0, 3));\n  }\n\n  @Test\n  public void testEmptyInput() {\n    byte[][] input = {};\n    checkCorrect(input, FSABuilder.build(input));\n  }\n\n  @Test\n  public void testHashResizeBug() throws Exception {\n    byte[][] input = {\n      {0, 1}, {0, 2}, {1, 1}, {2, 1},\n    };\n\n    FSA fsa = FSABuilder.build(input);\n    checkCorrect(input, FSABuilder.build(input));\n    checkMinimal(fsa);\n  }\n\n  @Test\n  public void testSmallInput() throws Exception {\n    byte[][] input = {\n      \"abc\".getBytes(\"UTF-8\"), \"bbc\".getBytes(\"UTF-8\"), \"d\".getBytes(\"UTF-8\"),\n    };\n    checkCorrect(input, FSABuilder.build(input));\n  }\n\n  @Test\n  public void testLexicographicOrder() throws IOException {\n    byte[][] input = {\n      {0}, {1}, {(byte) 0xff},\n    };\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n\n    // Check if lexical ordering is consistent with absolute byte value.\n    assertEquals(0, input[0][0]);\n    assertEquals(1, input[1][0]);\n    assertEquals((byte) 0xff, input[2][0]);\n\n    final FSA fsa;\n    checkCorrect(input, fsa = FSABuilder.build(input));\n\n    int arc = fsa.getFirstArc(fsa.getRootNode());\n    assertEquals(0, fsa.getArcLabel(arc));\n    arc = fsa.getNextArc(arc);\n    assertEquals(1, fsa.getArcLabel(arc));\n    arc = fsa.getNextArc(arc);\n  
  assertEquals((byte) 0xff, fsa.getArcLabel(arc));\n  }\n\n  @Test\n  public void testRandom25000_largerAlphabet() {\n    FSA fsa = FSABuilder.build(input);\n    checkCorrect(input, fsa);\n    checkMinimal(fsa);\n  }\n\n  @Test\n  public void testRandom25000_smallAlphabet() throws IOException {\n    FSA fsa = FSABuilder.build(input2);\n    checkCorrect(input2, fsa);\n    checkMinimal(fsa);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/FSATestUtils.java",
    "content": "package morfologik.fsa.builders;\n\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport java.nio.ByteBuffer;\nimport java.util.*;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.StateVisitor;\n\npublic class FSATestUtils {\n  /*\n   * Generate a sorted list of random sequences.\n   */\n  public static byte[][] generateRandom(Random rnd, int count, MinMax length, MinMax alphabet) {\n    final byte[][] input = new byte[count][];\n    for (int i = 0; i < count; i++) {\n      input[i] = randomByteSequence(rnd, length, alphabet);\n    }\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    return input;\n  }\n\n  /** Generate a random string. */\n  private static byte[] randomByteSequence(Random rnd, MinMax length, MinMax alphabet) {\n    byte[] bytes = new byte[length.min + rnd.nextInt(length.range())];\n    for (int i = 0; i < bytes.length; i++) {\n      bytes[i] = (byte) (alphabet.min + rnd.nextInt(alphabet.range()));\n    }\n    return bytes;\n  }\n\n  /*\n   * Check if the DFSA is correct with respect to the given input.\n   */\n  public static void checkCorrect(byte[][] input, FSA fsa) {\n    // (1) All input sequences are in the right language.\n    HashSet<ByteBuffer> rl = new HashSet<ByteBuffer>();\n    for (ByteBuffer bb : fsa) {\n      rl.add(ByteBuffer.wrap(Arrays.copyOf(bb.array(), bb.remaining())));\n    }\n\n    HashSet<ByteBuffer> uniqueInput = new HashSet<ByteBuffer>();\n    for (byte[] sequence : input) {\n      uniqueInput.add(ByteBuffer.wrap(sequence));\n    }\n\n    for (ByteBuffer sequence : uniqueInput) {\n      if (!rl.remove(sequence)) {\n        fail(\"Not present in the right language: \" + SerializerTestBase.toString(sequence));\n      }\n    }\n\n    // (2) No other sequence _other_ than the input is in the right language.\n    assertEquals(0, rl.size());\n  }\n\n  /*\n   * Check if the DFSA reachable from a given state is minimal. 
This means no\n   * two states have the same right language.\n   */\n  public static void checkMinimal(final FSA fsa) {\n    final HashMap<String, Integer> stateLanguages = new HashMap<String, Integer>();\n\n    fsa.visitInPostOrder(\n        new StateVisitor() {\n          private StringBuilder b = new StringBuilder();\n\n          public boolean accept(int state) {\n            List<byte[]> rightLanguage = allSequences(fsa, state);\n            Collections.sort(rightLanguage, FSABuilder.LEXICAL_ORDERING);\n\n            b.setLength(0);\n            for (byte[] seq : rightLanguage) {\n              b.append(Arrays.toString(seq));\n              b.append(',');\n            }\n\n            String full = b.toString();\n            assertFalse(\n                stateLanguages.containsKey(full),\n                \"State exists: \" + state + \" \" + full + \" \" + stateLanguages.get(full));\n            stateLanguages.put(full, state);\n\n            return true;\n          }\n        });\n  }\n\n  static List<byte[]> allSequences(FSA fsa, int state) {\n    ArrayList<byte[]> seq = new ArrayList<byte[]>();\n    for (ByteBuffer bb : fsa.getSequences(state)) {\n      seq.add(Arrays.copyOf(bb.array(), bb.remaining()));\n    }\n    return seq;\n  }\n\n  /*\n   * Check if two FSAs are identical.\n   */\n  public static void checkIdentical(FSA fsa1, FSA fsa2) {\n    ArrayDeque<String> fromRoot = new ArrayDeque<String>();\n    checkIdentical(\n        fromRoot, fsa1, fsa1.getRootNode(), new BitSet(), fsa2, fsa2.getRootNode(), new BitSet());\n  }\n\n  /*\n   *\n   */\n  static void checkIdentical(\n      ArrayDeque<String> fromRoot,\n      FSA fsa1,\n      int node1,\n      BitSet visited1,\n      FSA fsa2,\n      int node2,\n      BitSet visited2) {\n    int arc1 = fsa1.getFirstArc(node1);\n    int arc2 = fsa2.getFirstArc(node2);\n\n    if (visited1.get(node1) != visited2.get(node2)) {\n      throw new RuntimeException(\n          \"Two nodes should either be visited or not 
visited: \"\n              + Arrays.toString(fromRoot.toArray())\n              + \" \"\n              + \" node1: \"\n              + node1\n              + \" \"\n              + \" node2: \"\n              + node2);\n    }\n    visited1.set(node1);\n    visited2.set(node2);\n\n    TreeSet<Character> labels1 = new TreeSet<Character>();\n    TreeSet<Character> labels2 = new TreeSet<Character>();\n    while (true) {\n      labels1.add((char) fsa1.getArcLabel(arc1));\n      labels2.add((char) fsa2.getArcLabel(arc2));\n\n      arc1 = fsa1.getNextArc(arc1);\n      arc2 = fsa2.getNextArc(arc2);\n\n      if (arc1 == 0 || arc2 == 0) {\n        if (arc1 != arc2) {\n          throw new RuntimeException(\n              \"Different number of labels at path: \" + Arrays.toString(fromRoot.toArray()));\n        }\n        break;\n      }\n    }\n\n    if (!labels1.equals(labels2)) {\n      throw new RuntimeException(\n          \"Different sets of labels at path: \"\n              + Arrays.toString(fromRoot.toArray())\n              + \":\\n\"\n              + labels1\n              + \"\\n\"\n              + labels2);\n    }\n\n    // recurse.\n    for (char chr : labels1) {\n      byte label = (byte) chr;\n      fromRoot.push(\n          Character.isLetterOrDigit(chr) ? 
Character.toString(chr) : Integer.toString(chr));\n\n      arc1 = fsa1.getArc(node1, label);\n      arc2 = fsa2.getArc(node2, label);\n\n      if (fsa1.isArcFinal(arc1) != fsa2.isArcFinal(arc2)) {\n        throw new RuntimeException(\n            \"Different final flag on arcs at: \"\n                + Arrays.toString(fromRoot.toArray())\n                + \", label: \"\n                + label);\n      }\n\n      if (fsa1.isArcTerminal(arc1) != fsa2.isArcTerminal(arc2)) {\n        throw new RuntimeException(\n            \"Different terminal flag on arcs at: \"\n                + Arrays.toString(fromRoot.toArray())\n                + \", label: \"\n                + label);\n      }\n\n      if (!fsa1.isArcTerminal(arc1)) {\n        checkIdentical(\n            fromRoot, fsa1, fsa1.getEndNode(arc1), visited1, fsa2, fsa2.getEndNode(arc2), visited2);\n      }\n\n      fromRoot.pop();\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/FSATraversalTest.java",
    "content": "package morfologik.fsa.builders;\n\nimport static java.nio.charset.StandardCharsets.*;\nimport static morfologik.fsa.MatchResult.*;\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.util.Arrays;\nimport java.util.HashSet;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSA5;\nimport morfologik.fsa.FSATraversal;\nimport morfologik.fsa.MatchResult;\nimport org.junit.jupiter.api.Assertions;\nimport org.junit.jupiter.api.BeforeEach;\nimport org.junit.jupiter.api.Test;\n\n/** Tests {@link FSATraversal}. */\npublic final class FSATraversalTest extends TestBase {\n  private FSA fsa;\n\n  @BeforeEach\n  public void setUp() throws Exception {\n    fsa = FSA.read(this.getClass().getResourceAsStream(\"en_tst.dict\"));\n  }\n\n  @Test\n  public void testAutomatonHasPrefixBug() throws Exception {\n    FSA fsa =\n        FSABuilder.build(\n            Arrays.asList(\n                \"a\".getBytes(UTF_8),\n                \"ab\".getBytes(UTF_8),\n                \"abc\".getBytes(UTF_8),\n                \"ad\".getBytes(UTF_8),\n                \"bcd\".getBytes(UTF_8),\n                \"bce\".getBytes(UTF_8)));\n\n    FSATraversal fsaTraversal = new FSATraversal(fsa);\n    assertEquals(EXACT_MATCH, fsaTraversal.match(\"a\".getBytes(UTF_8)).kind);\n    assertEquals(EXACT_MATCH, fsaTraversal.match(\"ab\".getBytes(UTF_8)).kind);\n    assertEquals(EXACT_MATCH, fsaTraversal.match(\"abc\".getBytes(UTF_8)).kind);\n    assertEquals(EXACT_MATCH, fsaTraversal.match(\"ad\".getBytes(UTF_8)).kind);\n\n    assertEquals(SEQUENCE_IS_A_PREFIX, fsaTraversal.match(\"b\".getBytes(UTF_8)).kind);\n    assertEquals(SEQUENCE_IS_A_PREFIX, fsaTraversal.match(\"bc\".getBytes(UTF_8)).kind);\n\n    MatchResult m;\n\n    m = fsaTraversal.match(\"abcd\".getBytes(UTF_8));\n    assertEquals(AUTOMATON_HAS_PREFIX, m.kind);\n    assertEquals(3, 
m.index);\n\n    m = fsaTraversal.match(\"ade\".getBytes(UTF_8));\n    assertEquals(AUTOMATON_HAS_PREFIX, m.kind);\n    assertEquals(2, m.index);\n\n    m = fsaTraversal.match(\"ax\".getBytes(UTF_8));\n    assertEquals(AUTOMATON_HAS_PREFIX, m.kind);\n    assertEquals(1, m.index);\n\n    assertEquals(NO_MATCH, fsaTraversal.match(\"d\".getBytes(UTF_8)).kind);\n  }\n\n  @Test\n  public void testTraversalWithIterable() {\n    int count = 0;\n    for (ByteBuffer bb : fsa.getSequences()) {\n      assertEquals(0, bb.arrayOffset());\n      assertEquals(0, bb.position());\n      count++;\n    }\n    assertEquals(346773, count);\n  }\n\n  @Test\n  public void testPerfectHash() throws IOException {\n    byte[][] input =\n        new byte[][] {\n          {'a'}, {'a', 'b', 'a'}, {'a', 'c'}, {'b'}, {'b', 'a'}, {'c'},\n        };\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    final byte[] fsaData =\n        new FSA5Serializer().withNumbers().serialize(s, new ByteArrayOutputStream()).toByteArray();\n\n    final FSA5 fsa = FSA.read(new ByteArrayInputStream(fsaData), FSA5.class);\n    final FSATraversal traversal = new FSATraversal(fsa);\n\n    int i = 0;\n    for (byte[] seq : input) {\n      Assertions.assertEquals(i++, traversal.perfectHash(seq));\n    }\n\n    // Check if the total number of sequences is encoded at the root node.\n    assertEquals(6, fsa.getRightLanguageCount(fsa.getRootNode()));\n\n    // Check sub/super sequence scenarios.\n    assertEquals(AUTOMATON_HAS_PREFIX, traversal.perfectHash(\"abax\".getBytes(UTF_8)));\n    assertEquals(AUTOMATON_HAS_PREFIX, traversal.perfectHash(\"abx\".getBytes(UTF_8)));\n    assertEquals(SEQUENCE_IS_A_PREFIX, traversal.perfectHash(\"ab\".getBytes(UTF_8)));\n    assertEquals(NO_MATCH, traversal.perfectHash(\"d\".getBytes(UTF_8)));\n    assertEquals(NO_MATCH, traversal.perfectHash(new byte[] {0}));\n\n    assertTrue(AUTOMATON_HAS_PREFIX < 0);\n    assertTrue(SEQUENCE_IS_A_PREFIX 
< 0);\n    assertTrue(NO_MATCH < 0);\n  }\n\n  /** */\n  @Test\n  public void testRecursiveTraversal() {\n    final int[] counter = new int[] {0};\n\n    class Recursion {\n      public void dumpNode(final int node) {\n        int arc = fsa.getFirstArc(node);\n        do {\n          if (fsa.isArcFinal(arc)) {\n            counter[0]++;\n          }\n\n          if (!fsa.isArcTerminal(arc)) {\n            dumpNode(fsa.getEndNode(arc));\n          }\n\n          arc = fsa.getNextArc(arc);\n        } while (arc != 0);\n      }\n    }\n\n    new Recursion().dumpNode(fsa.getRootNode());\n\n    assertEquals(346773, counter[0]);\n  }\n\n  @Test\n  public void testMatch() throws IOException {\n    final FSA fsa = FSA.read(this.getClass().getResourceAsStream(\"abc.fsa\"));\n    final FSATraversal traversalHelper = new FSATraversal(fsa);\n\n    MatchResult m = traversalHelper.match(\"ax\".getBytes());\n    assertEquals(AUTOMATON_HAS_PREFIX, m.kind);\n    assertEquals(1, m.index);\n    assertEquals(new HashSet<String>(Arrays.asList(\"ba\", \"c\")), suffixes(fsa, m.node));\n\n    assertEquals(EXACT_MATCH, traversalHelper.match(\"aba\".getBytes()).kind);\n\n    m = traversalHelper.match(\"abalonger\".getBytes());\n    assertEquals(AUTOMATON_HAS_PREFIX, m.kind);\n    assertEquals(\"longer\", \"abalonger\".substring(m.index));\n\n    m = traversalHelper.match(\"ab\".getBytes());\n    assertEquals(SEQUENCE_IS_A_PREFIX, m.kind);\n    assertEquals(new HashSet<String>(Arrays.asList(\"a\")), suffixes(fsa, m.node));\n  }\n\n  /** Return all sequences reachable from a given node, as strings. */\n  private HashSet<String> suffixes(FSA fsa, int node) {\n    HashSet<String> result = new HashSet<String>();\n    for (ByteBuffer bb : fsa.getSequences(node)) {\n      result.add(new String(bb.array(), bb.position(), bb.remaining(), UTF_8));\n    }\n    return result;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/MinMax.java",
    "content": "package morfologik.fsa.builders;\n\n/** Minimum/maximum and range. */\nfinal class MinMax {\n  public final int min;\n  public final int max;\n\n  MinMax(int min, int max) {\n    this.min = Math.min(min, max);\n    this.max = Math.max(min, max);\n  }\n\n  public int range() {\n    return max - min;\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/SerializerTestBase.java",
    "content": "package morfologik.fsa.builders;\n\nimport static morfologik.fsa.FSAFlags.*;\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashSet;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSAFlags;\nimport org.junit.jupiter.api.Assumptions;\nimport org.junit.jupiter.api.Test;\n\npublic abstract class SerializerTestBase extends TestBase {\n  @Test\n  public void testA() throws IOException {\n    byte[][] input =\n        new byte[][] {\n          {'a'},\n        };\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    checkSerialization(input, s);\n  }\n\n  @Test\n  public void testArcsSharing() throws IOException {\n    byte[][] input =\n        new byte[][] {\n          {'a', 'c', 'f'},\n          {'a', 'd', 'g'},\n          {'a', 'e', 'h'},\n          {'b', 'd', 'g'},\n          {'b', 'e', 'h'},\n        };\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    checkSerialization(input, s);\n  }\n\n  @Test\n  public void testFSA5SerializerSimple() throws IOException {\n    byte[][] input =\n        new byte[][] {\n          {'a'}, {'a', 'b', 'a'}, {'a', 'c'}, {'b'}, {'b', 'a'}, {'c'},\n        };\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    checkSerialization(input, s);\n  }\n\n  @Test\n  public void testNotMinimal() throws IOException {\n    byte[][] input =\n        new byte[][] {\n          {'a', 'b', 'a'},\n          {'b'},\n          {'b', 'a'}\n        };\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    checkSerialization(input, s);\n  }\n\n  @Test\n  public void testFSA5Bug0() throws IOException {\n    
checkCorrect(\n        new String[] {\n          \"3-D+A+JJ\", \"3-D+A+NN\", \"4-F+A+NN\", \"z+A+NN\",\n        });\n  }\n\n  @Test\n  public void testFSA5Bug1() throws IOException {\n    checkCorrect(\n        new String[] {\n          \"+NP\", \"n+N\", \"n+NP\",\n        });\n  }\n\n  private void checkCorrect(String[] strings) throws IOException {\n    byte[][] input = new byte[strings.length][];\n    for (int i = 0; i < strings.length; i++) {\n      input[i] = strings[i].getBytes(\"ISO8859-1\");\n    }\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    checkSerialization(input, s);\n  }\n\n  @Test\n  public void testEmptyInput() throws IOException {\n    byte[][] input = new byte[][] {};\n    FSA s = FSABuilder.build(input);\n\n    checkSerialization(input, s);\n  }\n\n  @Test\n  public void test_abc() throws IOException {\n    testBuiltIn(FSA.read(FSA5Test.class.getResourceAsStream(\"abc.fsa\")));\n  }\n\n  @Test\n  public void test_minimal() throws IOException {\n    testBuiltIn(FSA.read(FSA5Test.class.getResourceAsStream(\"minimal.fsa\")));\n  }\n\n  @Test\n  public void test_minimal2() throws IOException {\n    testBuiltIn(FSA.read(FSA5Test.class.getResourceAsStream(\"minimal2.fsa\")));\n  }\n\n  @Test\n  public void test_en_tst() throws IOException {\n    testBuiltIn(FSA.read(FSA5Test.class.getResourceAsStream(\"en_tst.dict\")));\n  }\n\n  private void testBuiltIn(FSA fsa) throws IOException {\n    final ArrayList<byte[]> sequences = new ArrayList<byte[]>();\n\n    sequences.clear();\n    for (ByteBuffer bb : fsa) {\n      sequences.add(Arrays.copyOf(bb.array(), bb.remaining()));\n    }\n\n    Collections.sort(sequences, FSABuilder.LEXICAL_ORDERING);\n\n    final byte[][] in = sequences.toArray(new byte[sequences.size()][]);\n    FSA root = FSABuilder.build(in);\n\n    // Check if the DFSA is correct first.\n    FSATestUtils.checkCorrect(in, root);\n\n    // Check serialization.\n    checkSerialization(in, 
root);\n  }\n\n  private void checkSerialization(byte[][] input, FSA root) throws IOException {\n    checkSerialization0(createSerializer(), input, root);\n    if (createSerializer().getFlags().contains(FSAFlags.NUMBERS)) {\n      checkSerialization0(createSerializer().withNumbers(), input, root);\n    }\n  }\n\n  private void checkSerialization0(FSASerializer serializer, final byte[][] in, FSA root)\n      throws IOException {\n    final byte[] fsaData = serializer.serialize(root, new ByteArrayOutputStream()).toByteArray();\n\n    FSA fsa = FSA.read(new ByteArrayInputStream(fsaData));\n    checkCorrect(in, fsa);\n  }\n\n  /*\n   * Check if the FSA is correct with respect to the given input.\n   */\n  protected void checkCorrect(byte[][] input, FSA fsa) {\n    // (1) All input sequences are in the right language.\n    HashSet<ByteBuffer> rl = new HashSet<ByteBuffer>();\n    for (ByteBuffer bb : fsa) {\n      byte[] array = bb.array();\n      int length = bb.remaining();\n      rl.add(ByteBuffer.wrap(Arrays.copyOf(array, length)));\n    }\n\n    HashSet<ByteBuffer> uniqueInput = new HashSet<ByteBuffer>();\n    for (byte[] sequence : input) {\n      uniqueInput.add(ByteBuffer.wrap(sequence));\n    }\n\n    for (ByteBuffer sequence : uniqueInput) {\n      if (!rl.remove(sequence)) {\n        fail(\"Not present in the right language: \" + toString(sequence));\n      }\n    }\n\n    // (2) No sequence _other_ than the input is in the right\n    // language.\n    assertEquals(0, rl.size());\n  }\n\n  @Test\n  public void testAutomatonWithNodeNumbers() throws IOException {\n    Assumptions.assumeTrue(createSerializer().getFlags().contains(FSAFlags.NUMBERS));\n\n    byte[][] input =\n        new byte[][] {\n          {'a'}, {'a', 'b', 'a'}, {'a', 'c'}, {'b'}, {'b', 'a'}, {'c'},\n        };\n\n    Arrays.sort(input, FSABuilder.LEXICAL_ORDERING);\n    FSA s = FSABuilder.build(input);\n\n    final byte[] fsaData =\n        createSerializer().withNumbers().serialize(s, 
new ByteArrayOutputStream()).toByteArray();\n\n    FSA fsa = FSA.read(new ByteArrayInputStream(fsaData));\n\n    // Ensure we have the NUMBERS flag set.\n    assertTrue(fsa.getFlags().contains(NUMBERS));\n\n    // Get all numbers from nodes.\n    byte[] buffer = new byte[128];\n    final ArrayList<String> result = new ArrayList<String>();\n    FSA5Test.walkNode(buffer, 0, fsa, fsa.getRootNode(), 0, result);\n\n    Collections.sort(result);\n    assertEquals(Arrays.asList(\"0 a\", \"1 aba\", \"2 ac\", \"3 b\", \"4 ba\", \"5 c\"), result);\n  }\n\n  protected abstract FSASerializer createSerializer();\n\n  /*\n   * Drain bytes from a byte buffer to a string.\n   */\n  public static String toString(ByteBuffer sequence) {\n    byte[] bytes = new byte[sequence.remaining()];\n    sequence.get(bytes);\n    return Arrays.toString(bytes);\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/java/morfologik/fsa/builders/TestBase.java",
    "content": "package morfologik.fsa.builders;\n\nimport com.carrotsearch.randomizedtesting.jupiter.DetectThreadLeaks;\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport java.util.function.Predicate;\n\n@Randomized\n@DetectThreadLeaks(scope = DetectThreadLeaks.Scope.SUITE)\n@DetectThreadLeaks.LingerTime(millis = 5 * 1000)\n@DetectThreadLeaks.ExcludeThreads(TestBase.CustomThreadFilter.class)\npublic abstract class TestBase {\n  /** Any custom thread filters we should ignore. */\n  public static class CustomThreadFilter implements Predicate<Thread> {\n    @Override\n    public boolean test(Thread t) {\n      // IBM J9 bogus threads.\n      String threadName = t.getName();\n      if (\"Attach API wait loop\".equals(threadName)\n          || \"file lock watchdog\".equals(threadName)\n          || \"ClassCache Reaper\".equals(threadName)) {\n        return true;\n      }\n\n      return false;\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/resources/morfologik/fsa/builders/abc.in",
    "content": "a\naba\nac\nb\nba\nc\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/resources/morfologik/fsa/builders/minimal.in",
    "content": "+NP\nn+N\nn+NP\n"
  },
  {
    "path": "morfologik-fsa-builders/src/test/resources/morfologik/fsa/builders/minimal2.in",
    "content": "3-D+A+JJ\n3-D+A+NN\n4-F+A+NN\n4-H+A+JJ\nz+A+NN\nz-axis+A+NN\nzB+A+NN\nzZt+A+NNP\nza-zen+A+NN\nzabaglione+A+NN\nzabagliones+B+NNS\nzabajone+A+NN\nzabajones+B+NNS\nzabaione+A+NN\nzabaiones+B+NNS\nzabra+A+NN\nzabras+B+NNS\nzack+A+NN\nzacaton+A+NN\nzacatons+B+NNS\nzacatun+A+NN\nzaddik+A+NN\nzaddiks+B+NNS\nzaffar+A+NN"
  },
  {
    "path": "morfologik-polish/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.carrot2</groupId>\n    <artifactId>morfologik-parent</artifactId>\n    <version>2.2.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>morfologik-polish</artifactId>\n  <packaging>bundle</packaging>\n\n  <name>Morfologik Stemming (Polish Dictionary)</name>\n  <description>Morfologik Stemming (Polish Dictionary)</description>\n\n  <properties>\n    <forbiddenapis.signaturefile>../etc/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n    <project.moduleId>org.carrot2.morfologik.polish</project.moduleId>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-stemming</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.felix</groupId>\n        <artifactId>maven-bundle-plugin</artifactId>\n        <configuration>\n          <instructions>\n            <Export-Package>morfologik.stemming.polish</Export-Package>\n            <Import-Package>*</Import-Package>\n          </instructions>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "morfologik-polish/src/main/java/morfologik/stemming/polish/PolishStemmer.java",
    "content": "package morfologik.stemming.polish;\n\nimport java.io.IOException;\nimport java.net.URL;\nimport java.security.AccessController;\nimport java.security.PrivilegedActionException;\nimport java.security.PrivilegedExceptionAction;\nimport java.util.Iterator;\nimport java.util.List;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.IStemmer;\nimport morfologik.stemming.WordData;\n\n/**\n * A dictionary-based stemmer for the Polish language. Instances of this class are not thread safe.\n *\n * @see morfologik.stemming.DictionaryLookup\n */\npublic final class PolishStemmer implements IStemmer, Iterable<WordData> {\n  /** The underlying dictionary, loaded once (lazily). */\n  private static Dictionary dictionary;\n\n  /** Dictionary lookup delegate. */\n  private final DictionaryLookup lookup;\n\n  public PolishStemmer() {\n    synchronized (getClass()) {\n      if (dictionary == null) {\n        try {\n          dictionary =\n              AccessController.doPrivileged(\n                  new PrivilegedExceptionAction<Dictionary>() {\n                    @Override\n                    public Dictionary run() throws Exception {\n                      URL dictResource = getClass().getResource(\"polish.dict\");\n                      if (dictResource == null) {\n                        throw new IOException(\"Polish dictionary resource not found.\");\n                      }\n                      return Dictionary.read(dictResource);\n                    }\n                  });\n        } catch (PrivilegedActionException e) {\n          throw new RuntimeException(\"Could not read dictionary data.\", e.getException());\n        }\n      }\n    }\n\n    lookup = new DictionaryLookup(dictionary);\n  }\n\n  /**\n   * @return Return the underlying {@link Dictionary} driving the stemmer.\n   */\n  public Dictionary getDictionary() {\n    return dictionary;\n  }\n\n  /** {@inheritDoc} */\n  public 
List<WordData> lookup(CharSequence word) {\n    return lookup.lookup(word);\n  }\n\n  /** Iterates over all dictionary forms stored in this stemmer. */\n  public Iterator<WordData> iterator() {\n    return lookup.iterator();\n  }\n}\n"
  },
  {
    "path": "morfologik-polish/src/main/resources/morfologik/stemming/polish/polish.LICENSE.Polish.txt",
    "content": "Morfologik\n\nVERSION: 2.1 PoliMorf\nBUILD:   2016-02-13 19:37:51+01:00\nGIT:     6e63b53\n\nCopyright (c) 2016, Marcin Miłkowski\nWszelkie prawa zastrzeżone\n\nRedystrybucja i używanie, czy to w formie kodu źródłowego, czy w formie kodu \nwykonawczego, są dozwolone pod warunkiem spełnienia poniższych warunków:\n\n1. Redystrybucja kodu źródłowego musi zawierać powyższą notę copyrightową, \n   niniejszą listę warunków oraz poniższe oświadczenie o wyłączeniu \n   odpowiedzialności.\n2. Redystrybucja kodu wykonawczego musi zawierać powyższą notę copyrightową, \n   niniejszą listę warunków oraz poniższe oświadczenie o wyłączeniu \n   odpowiedzialności w dokumentacji i/lub w innych materiałach dostarczanych \n   wraz z kopią oprogramowania.\n\nTO OPROGRAMOWANIE JEST DOSTARCZONE PRZEZ <POSIADACZA PRAW AUTORSKICH> \n„TAKIM, JAKIE JEST”. KAŻDA, DOROZUMIANA LUB BEZPOŚREDNIO WYRAŻONA GWARANCJA,\nNIE WYŁĄCZAJĄC DOROZUMIANEJ GWARANCJI PRZYDATNOŚCI HANDLOWEJ I PRZYDATNOŚCI\nDO OKREŚLONEGO ZASTOSOWANIA, JEST WYŁĄCZONA. W ŻADNYM WYPADKU \n<POSIADACZE PRAW AUTORSKICH> NIE MOGĄ BYĆ ODPOWIEDZIALNI ZA JAKIEKOLWIEK \nBEZPOŚREDNIE, POŚREDNIE, INCYDENTALNE, SPECJALNE, UBOCZNE I WTÓRNE SZKODY \n(NIE WYŁĄCZAJĄC OBOWIĄZKU DOSTARCZENIA PRODUKTU ZASTĘPCZEGO LUB SERWISU, \nODPOWIEDZIALNOŚCI Z TYTUŁU UTRATY WALORÓW UŻYTKOWYCH, UTRATY DANYCH LUB \nKORZYŚCI, A TAKŻE PRZERW W PRACY PRZEDSIĘBIORSTWA) SPOWODOWANE W JAKIKOLWIEK \nSPOSÓB I NA PODSTAWIE ISTNIEJĄCEJ W TEORII ODPOWIEDZIALNOŚCI KONTRAKTOWEJ, \nCAŁKOWITEJ LUB DELIKTOWEJ (WYNIKŁEJ ZARÓWNO Z NIEDBALSTWA JAK INNYCH POSTACI \nWINY), POWSTAŁE W JAKIKOLWIEK SPOSÓB W WYNIKU UŻYWANIA LUB MAJĄCE ZWIĄZEK \nZ UŻYWANIEM OPROGRAMOWANIA, NAWET JEŚLI O MOŻLIWOŚCI POWSTANIA TAKICH SZKÓD \nOSTRZEŻONO.\n"
  },
  {
    "path": "morfologik-polish/src/main/resources/morfologik/stemming/polish/polish.LICENSE.txt",
    "content": "Morfologik\n\nVERSION: 2.1 PoliMorf\nBUILD:   2016-02-13 19:37:50+01:00\nGIT:     6e63b53\n\nCopyright (c) 2016, Marcin Miłkowski\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met: \n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer. \n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution. \n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "morfologik-polish/src/main/resources/morfologik/stemming/polish/polish.README.Polish.txt",
    "content": "Morfologik to projekt tworzenia polskich słowników morfosyntaktycznych (stąd \nnazwa) służących do znakowania morfosyntaktycznego i syntezy gramatycznej.\n\nWERSJA:    2.1 PoliMorf\nUTWORZONA: 2016-02-15 17:46:00+01:00\nGIT:       d3b2fe7\n\n\nŹRÓDŁO\n======\n\nDane pochodzą ze słownika sjp.pl oraz słownika PoliMorf i są licencjonowane na \nlicencji zawartej w pliku LICENSE.Polish.txt. Dane źródłowe pochodzą z \npolskiego słownika ispell, następnie redagowanego na stronach \nkurnik.pl/slownik i sjp.pl, a także Słownika gramatycznego języka polskiego. \n\nAutorzy:\n\n  (1) ispell: Mirosław Prywata, Piotr Gackiewicz, Włodzimierz Macewicz, \n      Łukasz Szałkiewicz, Marek Futrega.\n  (2) SGJP: Zygmunt Saloni, Włodzimierz Gruszczyński, Marcin Woliński, \n      Robert Wołosz.\n\nWersja PoliMorf została opracowana w ramach projektu CESAR realizowanego w \nZespole Inżynierii Lingwistycznej IPI PAN. W przygotowaniu ostatecznej \nwersji 2.0 dopomogli Jan Szejko i Adam Radziszewski.\n\n\nPLIKI\n=====\n\n1. polish.dict oraz polish.info to pliki słownika morfologicznego dla programu\n   morfologik-stemming (zob. [3]), wykorzystywanego również przez projekt\n   LanguageTool (zob. [2]).\n\n2. polish_synth.dict oraz polish_synth.info to pliki słownika syntezy \n   gramatycznej dla LanguageTool (zob. [2]). Aby uzyskać formę odmienioną,\n   należy używać następującej składni \"zapytania\" do słownika:\n\n     <wyraz>|<znacznik>\n\n   Przykład:\n\n     niemiecki|adjp\n\n   daje \"niemiecku\".\n\n3. fsa_morph/polish.dict i fsa_morph/polish_synth.dict to pliki słowników jak\n   powyżej, ale przeznaczone dla programu fsa_morph z pakietu fsa\n   Janka Daciuka (zob. [1]). 
Słowniki te zawierają te same dane, co słowniki\n   powyżej, różnią się jednak metodą kompresji oraz:\n   - mają separator w automacie ustawiony na sztywno na '+',\n   - mają znaczniki morfosyntaktyczne rozdzielone znakiem '|',\n   - mają kodowanie \"prefiksowe\", które wymaga podania flagi \"-P\" do fsa_morph,\n   - znaki diakrytyczne są kodowane w UTF-8 (ma znaczenie, jeśli terminal ma\n     ustawione inne).\n\n   Przykład:\n\n     $ echo \"krowami\" | ./fsa_morph -P -d polish.dict\n     krowami: krowa+subst:pl:inst:f\n     $ echo \"zamek\"   | ./fsa_morph -P -d polish.dict\n     zamek: zamek+subst:sg:acc:m3|subst:sg:nom:m3\n\n   Synteza:\n\n     $ echo \"niemiecki|adjp\" | ./fsa_morph -P -d polish_synth.dict\n     niemiecki|adjp: niemiecku\n     \n4. polimorfologik-2.1 PoliMorf.txt to zwykły plik tekstowy w kodowaniu UTF-8 o formacie:\n    forma podstawowa;forma odmieniona;znaczniki gramatyczne\n\n[1] http://www.eti.pg.gda.pl/katedry/kiw/pracownicy/Jan.Daciuk/personal/fsa.html\n[2] https://languagetool.org/\n[3] https://github.com/morfologik/morfologik-stemming\n\n\nZNACZNIKI MORFOSYNTAKTYCZNE\n===========================\n\nZestaw znaczników jest zbliżony do zestawu korpusu NKJP (www.nkjp.pl).\n\n    * adj - przymiotnik (np. „niemiecki”)\n    * adja - przymiotnik przyprzymiotnikowy (np. „niemiecko”, w wyrażeniach typu „niemiecko-chiński”)\n    * adjc - przymiotnik predykatywny (np. „ciekaw”, „dłużen”)\n    * adjp - przymiotnik poprzyimkowy (np. „niemiecku”)\n    * adv - przysłówek (np. „głupio”)\n    * burk - burkinostka (np. 
„Burkina Faso”)\n    * depr - forma deprecjatywna\n    * ger - rzeczownik odsłowny\n    * conj - spójnik łączący zdania współrzędne\n    * comp - spójnik wprowadzający zdanie podrzędne\n    * num - liczebnik\n    * pact - imiesłów przymiotnikowy czynny\n    * pant - imiesłów przysłówkowy uprzedni\n    * pcon - imiesłów przysłówkowy współczesny\n    * ppas - imiesłów przymiotnikowy bierny\n    * ppron12 - zaimek nietrzecioosobowy\n    * ppron3 - zaimek trzecioosobowy\n    * pred - predykatyw (np. „trzeba”)\n    * prep - przyimek\n    * siebie - zaimek \"siebie\"\n    * subst - rzeczownik\n    * verb - czasownik\n    * brev - skrót\n    * interj - wykrzyknienie\n    * qub - kublik (np. „nie” lub „tak”)\n\nAtrybuty podstawowych form:\n\n    * sg / pl - liczba pojedyncza / liczba mnoga    \n    * nom - mianownik\n    * gen - dopełniacz\n    * acc - biernik\n    * dat - celownik\n    * inst - narzędnik\n    * loc - miejscownik\n    * voc - wołacz\n    * pos - stopień równy\n    * com - stopień wyższy\n    * sup - stopień najwyższy\n    * m1, m2, m3 - rodzaje męskie\n    * n1, n2 - rodzaje nijakie\n    * p1, p2, p3 - rodzaje rzeczowników mających tylko liczbę mnogą (pluralium tantum)\n    * f - rodzaj żeński\n    * pri - pierwsza osoba\n    * sec - druga osoba\n    * ter - trzecia osoba\n    * aff - forma niezanegowana\n    * neg - forma zanegowana\n    * refl - forma zwrotna czasownika\n    * nonrefl - forma niezwrotna czasownika\n    * refl.nonrefl - forma może być zwrotna lub niezwrotna\n    * perf - czasownik dokonany\n    * imperf - czasownik niedokonany\n    * imperf.perf - czasownik, który może występować zarówno jako dokonany, jak i jako niedokonany\n    * nakc - forma nieakcentowana zaimka (ppron lub siebie)\n    * akc - forma akcentowana zaimka\n    * praep - forma poprzyimkowa\n    * npraep - forma niepoprzyimkowa\n    * ger - rzeczownik odsłowny\n    * imps - forma bezosobowa\n    * impt - tryb rozkazujący\n    * inf - bezokolicznik\n    * fin - forma 
nieprzeszła\n    * bedzie - forma przyszła \"być\"\n    * praet - forma przeszła czasownika (pseudoimiesłów)\n    * pot - tryb przypuszczający [nie występuje w znacznikach NKJP]\n    * pun - skrót z kropką [za NKJP]\n    * npun - bez kropki [za NKJP]\t\n    * wok / nwok: forma wokaliczna / niewokaliczna\n\nUwaga: formy trybu przypuszczającego są jednolicie oznaczone tylko znacznikiem \npot, bez znacznika praet.\n\nW znacznikach Morfologika nie występuje i nie będzie występować znacznik \naglt, a to ze względu na inną zasadę segmentacji wyrazów.\n"
  },
  {
    "path": "morfologik-polish/src/main/resources/morfologik/stemming/polish/polish.README.txt",
    "content": "Morfologik is a project aiming at generating Polish morphosyntactic\ndictionaries (hence the name) used for part-of-speech tagging and\npart-of-speech synthesis.\n\nSee LICENSE.txt for license restrictions.\n\nSee README.Polish.txt for more information concerning authorship and\ndictionary data format.\n\nVERSION: 2.1 PoliMorf\nBUILD:   2016-02-13 19:37:50+01:00\nGIT:     6e63b53\n"
  },
  {
    "path": "morfologik-polish/src/main/resources/morfologik/stemming/polish/polish.info",
    "content": "#\n# Morfologik Polish (stemming dictionary)\n# Version: 2.1 PoliMorf\n# Date: 2016-02-13 19:32:15+01:00\n# Git: 6e63b53\n#\n# Copyright (c) 2016, Marcin Miłkowski\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met: \n#\n# 1. Redistributions of source code must retain the above copyright notice, this\n#    list of conditions and the following disclaimer. \n# 2. Redistributions in binary form must reproduce the above copyright notice,\n#    this list of conditions and the following disclaimer in the documentation\n#    and/or other materials provided with the distribution. \n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\n# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n#\n\nfsa.dict.author=morfologik.blogspot.com\nfsa.dict.created=2016-02-13 19:32:15+01:00\nfsa.dict.license=BSD. http://morfologik.blogspot.com\n\nfsa.dict.separator=;\nfsa.dict.encoding=UTF-8\n\nfsa.dict.encoder=PREFIX\n"
  },
  {
    "path": "morfologik-polish/src/test/java/morfologik/stemming/polish/Gh27Test.java",
    "content": "package morfologik.stemming.polish;\n\nimport java.io.IOException;\nimport java.util.Locale;\nimport morfologik.stemming.WordData;\nimport org.junit.jupiter.api.Test;\n\n/*\n *\n */\npublic class Gh27Test {\n  /* */\n  @Test\n  public void gh27() throws IOException {\n    PolishStemmer stemmer = new PolishStemmer();\n\n    String in =\n        \"Nie zabrakło oczywiście wpadek. Największym zaskoczeniem okazał się dla nas strój\"\n            + \" Katarzyny Zielińskiej, której ewidentnie o coś chodziło, ale wciąż nie wiemy o co.\";\n    for (String t : in.toLowerCase(new Locale(\"pl\")).split(\"[\\\\s\\\\.\\\\,]+\")) {\n      System.out.println(\"> '\" + t + \"'\");\n      for (WordData wd : stemmer.lookup(t)) {\n        System.out.print(\n            \"  - \" + (wd.getStem() == null ? \"<null>\" : wd.getStem()) + \", \" + wd.getTag());\n      }\n      System.out.println();\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-polish/src/test/java/morfologik/stemming/polish/PolishMorfologikStemmerTest.java",
    "content": "package morfologik.stemming.polish;\n\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.HashSet;\nimport java.util.List;\nimport morfologik.stemming.IStemmer;\nimport morfologik.stemming.WordData;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\n\n/*\n *\n */\npublic class PolishMorfologikStemmerTest {\n  /* */\n  @Test\n  public void testLexemes() {\n    PolishStemmer s = new PolishStemmer();\n\n    assertEquals(\"żywotopisarstwo\", stem(s, \"żywotopisarstwie\")[0]);\n    assertEquals(\"abradować\", stem(s, \"abradowałoby\")[0]);\n\n    assertArrayEquals(\n        new String[] {\"żywotopisarstwo\", \"subst:sg:loc:n2\"}, stem(s, \"żywotopisarstwie\"));\n    assertArrayEquals(new String[] {\"bazia\", \"subst:pl:inst:f\"}, stem(s, \"baziami\"));\n\n    // This word is not in the dictionary.\n    assertNoStemFor(s, \"martygalski\");\n  }\n\n  /* */\n  @Test\n  public void listUniqueTags() {\n    HashSet<String> forms = new HashSet<>();\n    boolean hadMissing = false;\n    for (WordData wd : new PolishStemmer()) {\n      final CharSequence chs = wd.getTag();\n      if (chs == null) {\n        System.err.println(\"Missing tag for: \" + wd.getWord());\n        hadMissing = true;\n        continue;\n      }\n      forms.add(chs.toString());\n    }\n\n    Assertions.assertThat(hadMissing).isFalse();\n  }\n\n  /* */\n  @Test\n  public void testWordDataFields() throws IOException {\n    final IStemmer s = new PolishStemmer();\n\n    final String word = \"liga\";\n    final List<WordData> response = s.lookup(word);\n    assertEquals(2, response.size());\n\n    final HashSet<String> stems = new HashSet<String>();\n    final HashSet<String> tags = new HashSet<String>();\n    for (WordData wd : response) {\n      stems.add(wd.getStem().toString());\n      tags.add(wd.getTag().toString());\n      assertSame(word, 
wd.getWord());\n    }\n    assertTrue(stems.contains(\"ligać\"));\n    assertTrue(stems.contains(\"liga\"));\n    assertTrue(tags.contains(\"subst:sg:nom:f\"));\n    assertTrue(tags.contains(\"verb:fin:sg:ter:imperf:nonrefl+verb:fin:sg:ter:imperf:refl.nonrefl\"));\n\n    // Repeat to make sure we get the same values consistently.\n    for (WordData wd : response) {\n      assertTrue(stems.contains(wd.getStem().toString()));\n      assertTrue(tags.contains(wd.getTag().toString()));\n    }\n\n    final String ENCODING = \"UTF-8\";\n\n    // Run the same consistency check for the returned buffers.\n    final ByteBuffer temp = ByteBuffer.allocate(100);\n    for (WordData wd : response) {\n      // Buffer should be copied.\n      final ByteBuffer copy = wd.getStemBytes(null);\n      final String stem =\n          new String(\n              copy.array(), copy.arrayOffset() + copy.position(), copy.remaining(), ENCODING);\n      // The buffer should be present in stems set.\n      Assertions.assertThat(stems.contains(stem)).as(stem).isTrue();\n      // Buffer large enough to hold the contents.\n      assertSame(temp, wd.getStemBytes(temp));\n      // The copy and the clone should be identical.\n      assertEquals(0, copy.compareTo(temp));\n    }\n\n    for (WordData wd : response) {\n      // Buffer should be copied.\n      final ByteBuffer copy = wd.getTagBytes(null);\n      final String tag =\n          new String(\n              copy.array(), copy.arrayOffset() + copy.position(), copy.remaining(), ENCODING);\n      // The buffer should be present in tags set.\n      Assertions.assertThat(tags.contains(tag)).as(tag).isTrue();\n      // Buffer large enough to hold the contents.\n      temp.clear();\n      assertSame(temp, wd.getTagBytes(temp));\n      // The copy and the clone should be identical.\n      assertEquals(0, copy.compareTo(temp));\n    }\n\n    for (WordData wd : response) {\n      // Buffer should be copied.\n      final ByteBuffer copy = wd.getWordBytes(null);\n      
assertNotNull(copy);\n      assertEquals(0, copy.compareTo(ByteBuffer.wrap(word.getBytes(ENCODING))));\n    }\n  }\n\n  /* */\n  public static String asString(CharSequence s) {\n    if (s == null) return null;\n    return s.toString();\n  }\n\n  /* */\n  public static String[] stem(IStemmer s, String word) {\n    ArrayList<String> result = new ArrayList<>();\n    for (WordData wd : s.lookup(word)) {\n      result.add(asString(wd.getStem()));\n      result.add(asString(wd.getTag()));\n    }\n    return result.toArray(new String[result.size()]);\n  }\n\n  /* */\n  public static void assertNoStemFor(IStemmer s, String word) {\n    assertArrayEquals(new String[] {}, stem(s, word));\n  }\n}\n"
  },
  {
    "path": "morfologik-speller/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.carrot2</groupId>\n    <artifactId>morfologik-parent</artifactId>\n    <version>2.2.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>morfologik-speller</artifactId>\n  <packaging>bundle</packaging>\n\n  <name>Morfologik Speller</name>\n  <description>Morfologik Speller</description>\n\n  <properties>\n    <forbiddenapis.signaturefile>../etc/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n    <project.moduleId>org.carrot2.morfologik.speller</project.moduleId>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-stemming</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.felix</groupId>\n        <artifactId>maven-bundle-plugin</artifactId>\n        <configuration>\n          <instructions>\n            <Export-Package>morfologik.speller</Export-Package>\n            <Import-Package>*</Import-Package>\n          </instructions>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "morfologik-speller/src/main/java/morfologik/speller/HMatrix.java",
    "content": "package morfologik.speller;\n\nimport java.util.Arrays;\n\n/**\n * Keeps track of already computed values of edit distance. Remarks: To save space, the matrix is\n * kept in a vector.\n */\npublic class HMatrix {\n  private int[] p; /* the vector */\n  private int rowLength; /* row length of matrix */\n  int columnHeight; /* column height of matrix */\n  int editDistance; /* edit distance */\n\n  /**\n   * Allocates memory and initializes matrix (constructor).\n   *\n   * @param distance (int) max edit distance allowed for candidates;\n   * @param maxLength (int) max length of words.\n   *     <p>Remarks: See Oflazer. To save space, the matrix is stored as a vector. To save time,\n   *     additional rows and columns are added. They are initialized to their distance in the\n   *     matrix, so that no bound checking is necessary during access.\n   */\n  public HMatrix(final int distance, final int maxLength) {\n    rowLength = maxLength + 2;\n    columnHeight = 2 * distance + 3;\n    editDistance = distance;\n    final int size = rowLength * columnHeight;\n    p = new int[size];\n    init();\n  }\n\n  private void init() {\n    final int size = p.length;\n    // Initialize edges of the diagonal band to distance + 1 (i.e. 
distance too big)\n    for (int i = 0; i < rowLength - editDistance - 1; i++) {\n      p[i] = editDistance + 1; // H(distance + j, j) = distance + 1\n      p[size - i - 1] = editDistance + 1; // H(i, distance + i) = distance + 1\n    }\n    // Initialize items H(i,j) with at least one index equal to zero to |i - j|\n    for (int j = 0; j < editDistance + 2; j++) {\n      p[j * rowLength] = editDistance + 1 - j; // H(i=0..distance+1,0)=i\n      p[(j + editDistance + 1) * rowLength + j] = j; // H(0,j=0..distance+1)=j\n    }\n  }\n\n  public void reset() {\n    Arrays.fill(p, 0);\n    init();\n  }\n\n  /**\n   * Provide an item of hMatrix indexed by indices.\n   *\n   * @param i - (int) row number;\n   * @param j - (int) column number.\n   * @return Item <code>H[i][j]</code>. Remarks: H matrix is really simulated. What is needed is\n   *     only <code>edit_distance + 2</code> wideband around the diagonal. In fact this diagonal has\n   *     been pushed up to the upper border of the matrix.\n   *     <p>The matrix in the vector looks like this:\n   *     <pre>\n   * \t    +---------------------+\n   * \t0   |#####################| j=i-e-1\n   * \t1   |                     | j=i-e\n   * \t    :                     :\n   * \te+1 |                     | j=i-1\n   * \t    +---------------------+\n   * \te+2 |                     | j=i\n   * \t    +---------------------+\n   * \te+3 |                     | j=i+1\n   * \t    :                     :\n   * \t2e+2|                     | j=i+e\n   * \t2e+3|#####################| j=i+e+1\n   * \t    +---------------------+\n   * </pre>\n   */\n  public int get(final int i, final int j) {\n    return p[(j - i + editDistance + 1) * rowLength + j];\n  }\n\n  /**\n   * Set an item in hMatrix. No checking for i &amp; j is done. 
They must be correct.\n   *\n   * @param i - (int) row number;\n   * @param j - (int) column number;\n   * @param val - (int) value to put there.\n   */\n  public void set(final int i, final int j, final int val) {\n    p[(j - i + editDistance + 1) * rowLength + j] = val;\n  }\n}\n"
  },
  {
    "path": "morfologik-speller/src/main/java/morfologik/speller/Speller.java",
    "content": "package morfologik.speller;\n\nimport static morfologik.fsa.MatchResult.EXACT_MATCH;\nimport static morfologik.fsa.MatchResult.SEQUENCE_IS_A_PREFIX;\n\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.CharsetDecoder;\nimport java.nio.charset.CharsetEncoder;\nimport java.nio.charset.CoderResult;\nimport java.text.Normalizer;\nimport java.text.Normalizer.Form;\nimport java.util.*;\nimport morfologik.fsa.ByteSequenceIterator;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSATraversal;\nimport morfologik.fsa.MatchResult;\nimport morfologik.stemming.BufferUtils;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.DictionaryMetadata;\nimport morfologik.stemming.UnmappableInputException;\n\n/**\n * Finds spelling suggestions. Implements K. Oflazer's algorithm as described in: Oflazer, Kemal.\n * 1996. \"Error-Tolerant Finite-State Recognition with Applications to Morphological Analysis and\n * Spelling Correction.\" <i>Computational Linguistics</i> 22 (1): 73–89.\n *\n * <p>See Jan Daciuk's <code>s_fsa</code> package.\n */\npublic class Speller {\n  /** Maximum length of the word to be checked. 
*/\n  public static final int MAX_WORD_LENGTH = 120;\n\n  static final int FREQ_RANGES = 'Z' - 'A' + 1;\n  static final int FIRST_RANGE_CODE = 'A'; // less frequent words\n\n  // FIXME: this is an upper limit for replacement searches, we need\n  // proper tree traversal instead of generation of all possible candidates\n  static final int UPPER_SEARCH_LIMIT = 15;\n  private static final int MIN_WORD_LENGTH = 4;\n  private static final int MAX_RECURSION_LEVEL = 6;\n\n  private final int editDistance;\n  private int effectEditDistance; // effective edit distance\n\n  private final HMatrix hMatrix;\n\n  private char[] candidate; /* current replacement */\n  private int candLen;\n  private int wordLen; /* length of word being processed */\n  private char[] wordProcessed; /* word being processed */\n\n  /** Replacement pattern with optional start/end anchor. */\n  private static final class Pattern {\n    final char[] chars;\n    final boolean startAnchor;\n    final boolean endAnchor;\n\n    Pattern(char[] chars, boolean startAnchor, boolean endAnchor) {\n      this.chars = chars;\n      this.startAnchor = startAnchor;\n      this.endAnchor = endAnchor;\n    }\n  }\n\n  private Map<Character, List<Pattern>> replacementsAnyToOne = new HashMap<>();\n  private Map<String, List<Pattern>> replacementsAnyToTwo = new HashMap<>();\n\n  /** Keys may carry ^ / $ anchors; values are the replacement strings. */\n  private Map<String, List<String>> replacementsTheRest = new HashMap<>();\n\n  private boolean containsSeparators = true;\n\n  /** Internal reusable buffer for encoding words into byte arrays using {@link #encoder}. */\n  private ByteBuffer byteBuffer = ByteBuffer.allocate(MAX_WORD_LENGTH);\n\n  /** Internal reusable buffer for encoding words into byte arrays using {@link #encoder}. */\n  private CharBuffer charBuffer = CharBuffer.allocate(MAX_WORD_LENGTH);\n\n  /** Reusable match result. 
*/\n  private final MatchResult matchResult = new MatchResult();\n\n  /**\n   * Features of the compiled dictionary.\n   *\n   * @see DictionaryMetadata\n   */\n  private final DictionaryMetadata dictionaryMetadata;\n\n  /** Charset encoder for the FSA. */\n  private final CharsetEncoder encoder;\n\n  /** Charset decoder for the FSA. */\n  private final CharsetDecoder decoder;\n\n  /** An FSA used for lookups. */\n  private final FSATraversal matcher;\n\n  /** FSA's root node. */\n  private final int rootNode;\n\n  /** The FSA we are using. */\n  private final FSA fsa;\n\n  /** An iterator for walking along the final states of {@link #fsa}. */\n  private final ByteSequenceIterator finalStatesIterator;\n\n  public Speller(final Dictionary dictionary) {\n    this(dictionary, 1);\n  }\n\n  public Speller(final Dictionary dictionary, final int editDistance) {\n    this.editDistance = editDistance;\n    this.hMatrix = new HMatrix(editDistance, MAX_WORD_LENGTH);\n\n    this.dictionaryMetadata = dictionary.metadata;\n    this.rootNode = dictionary.fsa.getRootNode();\n    this.fsa = dictionary.fsa;\n    this.matcher = new FSATraversal(fsa);\n    this.finalStatesIterator = new ByteSequenceIterator(fsa, rootNode);\n\n    if (rootNode == 0) {\n      throw new IllegalArgumentException(\"Dictionary must have at least the root node.\");\n    }\n\n    if (dictionaryMetadata == null) {\n      throw new IllegalArgumentException(\"Dictionary metadata must not be null.\");\n    }\n\n    encoder = dictionaryMetadata.getEncoder();\n    decoder = dictionaryMetadata.getDecoder();\n\n    // Multibyte separator will result in an exception here.\n    dictionaryMetadata.getSeparatorAsChar();\n\n    this.createReplacementsMaps();\n  }\n\n  private static boolean isStartAnchored(String key) {\n    return key.startsWith(\"^\");\n  }\n\n  private static boolean isEndAnchored(String key) {\n    return key.endsWith(\"$\");\n  }\n\n  private static String stripAnchors(String key) {\n    int start = 
key.startsWith(\"^\") ? 1 : 0;\n    int end = key.endsWith(\"$\") ? key.length() - 1 : key.length();\n    return key.substring(start, end);\n  }\n\n  private void createReplacementsMaps() {\n    for (Map.Entry<String, List<String>> entry :\n        dictionaryMetadata.getReplacementPairs().entrySet()) {\n      String rawKey = entry.getKey();\n      boolean startAnchor = isStartAnchored(rawKey);\n      boolean endAnchor = isEndAnchored(rawKey);\n      String strippedKey = stripAnchors(rawKey);\n\n      for (String s : entry.getValue()) {\n        // replacements any to one: key is the 1-char replacement target\n        if (s.length() == 1) {\n          Pattern p = new Pattern(strippedKey.toCharArray(), startAnchor, endAnchor);\n          if (!replacementsAnyToOne.containsKey(s.charAt(0))) {\n            List<Pattern> list = new ArrayList<>();\n            list.add(p);\n            replacementsAnyToOne.put(s.charAt(0), list);\n          } else {\n            replacementsAnyToOne.get(s.charAt(0)).add(p);\n          }\n        }\n        // replacements any to two: key is the 2-char replacement target\n        else if (s.length() == 2) {\n          Pattern p = new Pattern(strippedKey.toCharArray(), startAnchor, endAnchor);\n          if (!replacementsAnyToTwo.containsKey(s)) {\n            List<Pattern> list = new ArrayList<>();\n            list.add(p);\n            replacementsAnyToTwo.put(s, list);\n          } else {\n            replacementsAnyToTwo.get(s).add(p);\n          }\n        } else {\n          // replacements with longer targets: key keeps anchors for getAllReplacements\n          if (!replacementsTheRest.containsKey(rawKey)) {\n            List<String> list = new ArrayList<>();\n            list.add(s);\n            replacementsTheRest.put(rawKey, list);\n          } else {\n            replacementsTheRest.get(rawKey).add(s);\n          }\n        }\n      }\n    }\n  }\n\n  private ByteBuffer charSequenceToBytes(final CharSequence word) throws 
UnmappableInputException {\n    // Encode word characters into bytes in the same encoding as the FSA's.\n    charBuffer = BufferUtils.clearAndEnsureCapacity(charBuffer, word.length());\n    for (int i = 0; i < word.length(); i++) {\n      final char chr = word.charAt(i);\n      charBuffer.put(chr);\n    }\n    charBuffer.flip();\n\n    return BufferUtils.charsToBytes(encoder, charBuffer, byteBuffer);\n  }\n\n  /**\n   * Checks whether the word is misspelled, by performing a series of checks according to properties\n   * of the dictionary.\n   *\n   * <p>If the flag <code>fsa.dict.speller.ignore-punctuation</code> is set, then all non-alphabetic\n   * characters are considered to be correctly spelled.\n   *\n   * <p>If the flag <code>fsa.dict.speller.ignore-numbers</code> is set, then all words containing\n   * decimal digits are considered to be correctly spelled.\n   *\n   * <p>If the flag <code>fsa.dict.speller.ignore-camel-case</code> is set, then all CamelCase words\n   * are considered to be correctly spelled.\n   *\n   * <p>If the flag <code>fsa.dict.speller.ignore-all-uppercase</code> is set, then all alphabetic\n   * words composed of only uppercase characters are considered to be correctly spelled.\n   *\n   * <p>Otherwise, the word is checked in the dictionary. If the test fails, and the dictionary does\n   * not perform any case conversions (as set by <code>fsa.dict.speller.convert-case</code> flag),\n   * then the method returns false. 
In case of case conversions, it is checked whether a non-mixed\n   * case word is found in its lowercase version in the dictionary, and for all-uppercase words,\n   * whether the word is found in the dictionary with the initial uppercase letter.\n   *\n   * @param word - the word to be checked\n   * @return true if the word is misspelled\n   */\n  public boolean isMisspelled(final String word) {\n    // dictionaries usually do not contain punctuation\n    String wordToCheck = word;\n    if (!dictionaryMetadata.getInputConversionPairs().isEmpty()) {\n      wordToCheck =\n          DictionaryLookup.applyReplacements(word, dictionaryMetadata.getInputConversionPairs());\n    }\n    boolean isAlphabetic = wordToCheck.length() != 1 || isAlphabetic(wordToCheck.charAt(0));\n    return wordToCheck.length() > 0\n        && (!dictionaryMetadata.isIgnoringPunctuation() || isAlphabetic)\n        && (!dictionaryMetadata.isIgnoringNumbers() || containsNoDigit(wordToCheck))\n        && !(dictionaryMetadata.isIgnoringCamelCase() && isCamelCase(wordToCheck))\n        && !(dictionaryMetadata.isIgnoringAllUppercase()\n            && isAlphabetic\n            && isAllUppercase(wordToCheck))\n        && !isInDictionary(wordToCheck)\n        && (!dictionaryMetadata.isConvertingCase()\n            || !(!isMixedCase(wordToCheck)\n                && (isInDictionary(wordToCheck.toLowerCase(dictionaryMetadata.getLocale()))\n                    || isAllUppercase(wordToCheck)\n                        && isInDictionary(initialUppercase(wordToCheck)))));\n  }\n\n  private CharSequence initialUppercase(final String wordToCheck) {\n    return wordToCheck.substring(0, 1)\n        + wordToCheck.substring(1).toLowerCase(dictionaryMetadata.getLocale());\n  }\n\n  /**\n   * Test whether the word is found in the dictionary.\n   *\n   * @param word the word to be tested\n   * @return True if it is found.\n   */\n  public boolean isInDictionary(final CharSequence word) {\n    try {\n      byteBuffer = 
charSequenceToBytes(word);\n    } catch (UnmappableInputException e) {\n      return false;\n    }\n\n    // Try to find a partial match in the dictionary.\n    final MatchResult match =\n        matcher.match(matchResult, byteBuffer.array(), 0, byteBuffer.remaining(), rootNode);\n\n    // Make sure the word doesn't contain a separator if there is an exact match\n    if (containsSeparators && match.kind == EXACT_MATCH) {\n      containsSeparators = false;\n      for (int i = 0; i < word.length(); i++) {\n        if (word.charAt(i) == dictionaryMetadata.getSeparator()) {\n          containsSeparators = true;\n          break;\n        }\n      }\n    }\n\n    if (match.kind == EXACT_MATCH && !containsSeparators) {\n      return true;\n    }\n\n    return containsSeparators\n        && match.kind == SEQUENCE_IS_A_PREFIX\n        && byteBuffer.remaining() > 0\n        && fsa.getArc(match.node, dictionaryMetadata.getSeparator()) != 0;\n  }\n\n  /**\n   * Get the frequency value for a word form. 
It is taken from the first entry with this word form.\n   *\n   * @param word the word to be tested\n   * @return frequency value in range: 0..FREQ_RANGE-1 (0: less frequent).\n   */\n  public int getFrequency(final CharSequence word) {\n    if (!dictionaryMetadata.isFrequencyIncluded()) {\n      return 0;\n    }\n\n    final byte separator = dictionaryMetadata.getSeparator();\n    try {\n      byteBuffer = charSequenceToBytes(word);\n    } catch (UnmappableInputException e) {\n      return 0;\n    }\n\n    final MatchResult match =\n        matcher.match(matchResult, byteBuffer.array(), 0, byteBuffer.remaining(), rootNode);\n    if (match.kind == SEQUENCE_IS_A_PREFIX) {\n      final int arc = fsa.getArc(match.node, separator);\n      if (arc != 0 && !fsa.isArcFinal(arc)) {\n        finalStatesIterator.restartFrom(fsa.getEndNode(arc));\n        if (finalStatesIterator.hasNext()) {\n          final ByteBuffer bb = finalStatesIterator.next();\n          final byte[] ba = bb.array();\n          final int bbSize = bb.remaining();\n          // the last byte contains the frequency after a separator\n          return ba[bbSize - 1] - FIRST_RANGE_CODE;\n        }\n      }\n    }\n    return 0;\n  }\n\n  /**\n   * Propose suggestions for misspelled run-on words. 
This algorithm is inspired by spell.cc in\n   * s_fsa package by Jan Daciuk.\n   *\n   * @param original The original misspelled word.\n   * @return The list of suggested pairs, as CandidateData with space-concatenated strings.\n   */\n  public List<CandidateData> replaceRunOnWordCandidates(final String original) {\n    final List<CandidateData> candidates = new ArrayList<>();\n    String wordToCheck = original;\n    if (!dictionaryMetadata.getInputConversionPairs().isEmpty()) {\n      wordToCheck =\n          DictionaryLookup.applyReplacements(\n              original, dictionaryMetadata.getInputConversionPairs());\n    }\n    if (!isInDictionary(wordToCheck) && dictionaryMetadata.isSupportingRunOnWords()) {\n      Locale locale = dictionaryMetadata.getLocale();\n      for (int i = 1; i < wordToCheck.length(); i++) {\n        // chop from left to right\n        final String prefix = wordToCheck.substring(0, i);\n        final String suffix = wordToCheck.substring(i);\n        if (isInDictionary(suffix)\n            // camel case words: e.g. GreatElephant\n            || (!isNotCapitalizedWord(suffix) && isInDictionary(suffix.toLowerCase(locale)))) {\n          if (isInDictionary(prefix)) {\n            addReplacement(candidates, prefix + \" \" + suffix);\n          } else if (Character.isUpperCase(prefix.charAt(0))\n              && isInDictionary(prefix.toLowerCase(locale))) {\n            // a word that's uppercase just because used at sentence start\n            addReplacement(candidates, prefix + \" \" + suffix);\n          }\n        }\n      }\n    }\n    return candidates;\n  }\n\n  /**\n   * Propose suggestions for misspelled run-on words. 
This algorithm is inspired by spell.cc in\n   * s_fsa package by Jan Daciuk.\n   *\n   * @param original The original misspelled word.\n   * @return The list of suggested pairs, as space-concatenated strings.\n   */\n  public List<String> replaceRunOnWords(final String original) {\n    final List<CandidateData> candidateData = replaceRunOnWordCandidates(original);\n    final List<String> candidates = new ArrayList<>();\n    for (CandidateData candidate : candidateData) {\n      candidates.add(candidate.word);\n    }\n    return candidates;\n  }\n\n  private void addReplacement(List<CandidateData> candidates, String replacement) {\n    if (dictionaryMetadata.getOutputConversionPairs().isEmpty()) {\n      candidates.add(new CandidateData(replacement, 1));\n    } else {\n      candidates.add(\n          new CandidateData(\n              DictionaryLookup.applyReplacements(\n                  replacement, dictionaryMetadata.getOutputConversionPairs()),\n              1));\n    }\n  }\n\n  /**\n   * Find similar words even if the original word is a correct word that exists in the dictionary\n   *\n   * @param word The original word.\n   * @return A list of suggested candidate replacements.\n   */\n  public ArrayList<CandidateData> findSimilarWordCandidates(String word) {\n    return findReplacementCandidates(word, true);\n  }\n\n  public ArrayList<String> findSimilarWords(String word) {\n    final List<CandidateData> result = findSimilarWordCandidates(word);\n    final ArrayList<String> resultSuggestions = new ArrayList<>(result.size());\n    for (CandidateData cd : result) {\n      resultSuggestions.add(cd.getWord());\n    }\n    return resultSuggestions;\n  }\n\n  /**\n   * Find suggestions by using K. Oflazer's algorithm. 
See Jan Daciuk's s_fsa package, spell.cc for\n   * further explanation.\n   *\n   * @param word The original misspelled word.\n   * @return A list of suggested replacements.\n   */\n  public ArrayList<String> findReplacements(String word) {\n    final List<CandidateData> result = findReplacementCandidates(word);\n\n    final ArrayList<String> resultSuggestions = new ArrayList<>(result.size());\n    for (CandidateData cd : result) {\n      resultSuggestions.add(cd.getWord());\n    }\n    return resultSuggestions;\n  }\n\n  /**\n   * Find and return suggestions by using K. Oflazer's algorithm. See Jan Daciuk's s_fsa package,\n   * spell.cc for further explanation. This method is identical to {@link #findReplacements}, but\n   * returns candidate terms with their edit distance scores.\n   *\n   * @param word The original misspelled word.\n   * @return A list of suggested candidate replacements.\n   */\n  public ArrayList<CandidateData> findReplacementCandidates(String word) {\n    return findReplacementCandidates(word, false);\n  }\n\n  private ArrayList<CandidateData> findReplacementCandidates(\n      String word, boolean evenIfWordInDictionary) {\n    hMatrix.reset();\n    if (!dictionaryMetadata.getInputConversionPairs().isEmpty()) {\n      word = DictionaryLookup.applyReplacements(word, dictionaryMetadata.getInputConversionPairs());\n    }\n\n    // candidate strings, including some additional data such as edit distance from the original\n    // word.\n    List<CandidateData> candidates = new ArrayList<>();\n\n    if (word.length() > 0\n        && word.length() < MAX_WORD_LENGTH\n        && (!isInDictionary(word) || evenIfWordInDictionary)) {\n      List<String> wordsToCheck = new ArrayList<>();\n      if (replacementsTheRest != null && word.length() > 1) {\n        for (final String wordChecked : getAllReplacements(word, 0, 0)) {\n          if (isInDictionary(wordChecked)) {\n            candidates.add(new CandidateData(wordChecked, 0));\n          } else {\n     
       String lowerWord = wordChecked.toLowerCase(dictionaryMetadata.getLocale());\n            String upperWord = wordChecked.toUpperCase(dictionaryMetadata.getLocale());\n            if (isInDictionary(lowerWord)) {\n              // add the word as it is in the dictionary, not mixed-case versions of it\n              candidates.add(new CandidateData(lowerWord, 0));\n            }\n            if (isInDictionary(upperWord)) {\n              candidates.add(new CandidateData(upperWord, 0));\n            }\n            if (lowerWord.length() > 1) {\n              String firstUpperWord =\n                  Character.toUpperCase(lowerWord.charAt(0)) + lowerWord.substring(1);\n              if (isInDictionary(firstUpperWord)) {\n                candidates.add(new CandidateData(firstUpperWord, 0));\n              }\n            }\n          }\n          wordsToCheck.add(wordChecked);\n        }\n      } else {\n        wordsToCheck.add(word);\n      }\n\n      // Even if a candidate was found with the replacement pairs (which are usual errors),\n      // there might be more good candidates (see issue #94):\n      int i = 1;\n      for (final String wordChecked : wordsToCheck) {\n        i++;\n        if (i > UPPER_SEARCH_LIMIT) { // for performance reasons, do not search too deeply\n          break;\n        }\n        wordProcessed = wordChecked.toCharArray();\n        wordLen = wordProcessed.length;\n        if (wordLen < MIN_WORD_LENGTH\n            && i > 2) { // three-letter replacements make little sense anyway\n          break;\n        }\n        candidate = new char[MAX_WORD_LENGTH];\n        candLen = candidate.length;\n        effectEditDistance = wordLen <= editDistance ? 
wordLen - 1 : editDistance;\n        charBuffer = BufferUtils.clearAndEnsureCapacity(charBuffer, MAX_WORD_LENGTH);\n        byteBuffer = BufferUtils.clearAndEnsureCapacity(byteBuffer, MAX_WORD_LENGTH);\n        final byte[] prevBytes = new byte[0];\n        findRepl(candidates, 0, fsa.getRootNode(), prevBytes, 0, 0, -1, null, '\\0');\n      }\n    }\n\n    Collections.sort(candidates);\n\n    // Apply replacements, prune duplicates while preserving the candidate order.\n    final Set<String> words = new HashSet<>();\n    final ArrayList<CandidateData> result = new ArrayList<>(candidates.size());\n    for (final CandidateData cd : candidates) {\n      String replaced =\n          DictionaryLookup.applyReplacements(\n              cd.getWord(), dictionaryMetadata.getOutputConversionPairs());\n      // Add only the first occurrence of a given word.\n      if (words.add(replaced) && !replaced.equals(word)) {\n        result.add(new CandidateData(replaced, cd.origDistance));\n      }\n    }\n\n    return result;\n  }\n\n  private void findRepl(\n      List<CandidateData> candidates,\n      final int depth,\n      final int node,\n      final byte[] prevBytes,\n      final int wordIndex,\n      final int candIndex,\n      final int minLookbackWordIndex,\n      final String lastAnyToOneSource,\n      final char lastAnyToOneTarget) {\n    int dist = 0;\n    for (int arc = fsa.getFirstArc(node); arc != 0; arc = fsa.getNextArc(arc)) {\n      byteBuffer = BufferUtils.clearAndEnsureCapacity(byteBuffer, prevBytes.length + 1);\n      byteBuffer.put(prevBytes);\n      byteBuffer.put(fsa.getArcLabel(arc));\n      final int bufPos = byteBuffer.position();\n      byteBuffer.flip();\n      charBuffer.clear();\n      decoder.reset();\n      final CoderResult c = decoder.decode(byteBuffer, charBuffer, true);\n      if (c.isMalformed()) { // incomplete multi-byte sequence: accumulate bytes and descend\n        final byte[] prev = new byte[bufPos];\n        byteBuffer.position(0);\n      
  byteBuffer.get(prev);\n        if (!fsa.isArcTerminal(arc)) {\n          findRepl(\n              candidates,\n              depth,\n              fsa.getEndNode(arc),\n              prev,\n              wordIndex,\n              candIndex,\n              minLookbackWordIndex,\n              lastAnyToOneSource,\n              lastAnyToOneTarget); // note: depth is not incremented\n        }\n        byteBuffer.clear();\n      } else if (!c.isError()) { // unmappable characters are silently discarded\n        decoder.flush(charBuffer);\n        charBuffer.flip();\n        candidate[candIndex] = charBuffer.get();\n        charBuffer.clear();\n        byteBuffer.clear();\n\n        int lengthReplacement;\n        // replacement \"any to two\"\n        if ((lengthReplacement =\n                matchAnyToTwo(\n                    wordIndex,\n                    candIndex,\n                    minLookbackWordIndex,\n                    lastAnyToOneSource,\n                    lastAnyToOneTarget))\n            > 0) {\n          // the replacement takes place at the end of the candidate\n          if (isEndOfCandidate(arc, wordIndex)\n              && (dist = hMatrix.get(depth - 1, depth - 1)) <= effectEditDistance) {\n            if (Math.abs(wordLen - 1 - (wordIndex + lengthReplacement - 2)) > 0) {\n              // there are extra letters in the word after the replacement\n              dist = dist + Math.abs(wordLen - 1 - (wordIndex + lengthReplacement - 2));\n            }\n            if (dist <= effectEditDistance) {\n              candidates.add(new CandidateData(String.valueOf(candidate, 0, candIndex + 1), dist));\n            }\n          }\n          if (isArcNotTerminal(arc, candIndex)) {\n            int x = hMatrix.get(depth, depth);\n            hMatrix.set(depth, depth, hMatrix.get(depth - 1, depth - 1));\n            findRepl(\n                candidates,\n                Math.max(0, depth),\n                fsa.getEndNode(arc),\n                new 
byte[0],\n                wordIndex + lengthReplacement - 1,\n                candIndex + 1,\n                minLookbackWordIndex,\n                lastAnyToOneSource,\n                lastAnyToOneTarget);\n            hMatrix.set(depth, depth, x);\n          }\n        }\n        // replacement \"any to one\"\n        if ((lengthReplacement = matchAnyToOne(wordIndex, candIndex)) > 0) {\n          // the replacement takes place at the end of the candidate\n          if (isEndOfCandidate(arc, wordIndex)\n              && (dist = hMatrix.get(depth, depth)) <= effectEditDistance) {\n            if (Math.abs(wordLen - 1 - (wordIndex + lengthReplacement - 1)) > 0) {\n              // there are extra letters in the word after the replacement\n              dist = dist + Math.abs(wordLen - 1 - (wordIndex + lengthReplacement - 1));\n            }\n            if (dist <= effectEditDistance) {\n              candidates.add(new CandidateData(String.valueOf(candidate, 0, candIndex + 1), dist));\n            }\n          }\n          if (isArcNotTerminal(arc, candIndex)) {\n            String newAnyToOneSource = new String(wordProcessed, wordIndex, lengthReplacement);\n            findRepl(\n                candidates,\n                depth,\n                fsa.getEndNode(arc),\n                new byte[0],\n                wordIndex + lengthReplacement,\n                candIndex + 1,\n                wordIndex + lengthReplacement,\n                newAnyToOneSource,\n                candidate[candIndex]);\n          }\n        }\n        // general\n        if (cuted(depth, wordIndex, candIndex) <= effectEditDistance) {\n          if ((isEndOfCandidate(arc, wordIndex))\n              && (dist = ed(wordLen - 1 - (wordIndex - depth), depth, wordLen - 1, candIndex))\n                  <= effectEditDistance) {\n            candidates.add(new CandidateData(String.valueOf(candidate, 0, candIndex + 1), dist));\n          }\n          if (isArcNotTerminal(arc, candIndex)) {\n     
       findRepl(\n                candidates,\n                depth + 1,\n                fsa.getEndNode(arc),\n                new byte[0],\n                wordIndex + 1,\n                candIndex + 1,\n                minLookbackWordIndex,\n                lastAnyToOneSource,\n                lastAnyToOneTarget);\n          }\n        }\n      }\n    }\n  }\n\n  private boolean isArcNotTerminal(final int arc, final int candIndex) {\n    return !fsa.isArcTerminal(arc)\n        && !(containsSeparators && candidate[candIndex] == dictionaryMetadata.getSeparatorAsChar());\n  }\n\n  private boolean isEndOfCandidate(final int arc, final int wordIndex) {\n    return (fsa.isArcFinal(arc) || isBeforeSeparator(arc))\n        // candidate has proper length\n        && (Math.abs(wordLen - 1 - (wordIndex)) <= effectEditDistance);\n  }\n\n  private boolean isBeforeSeparator(final int arc) {\n    if (containsSeparators) {\n      final int arc1 = fsa.getArc(fsa.getEndNode(arc), dictionaryMetadata.getSeparator());\n      return arc1 != 0 && !fsa.isArcTerminal(arc1);\n    }\n    return false;\n  }\n\n  /**\n   * Calculates edit distance.\n   *\n   * @param i length of first word (here: misspelled) - 1;\n   * @param j length of second word (here: candidate) - 1.\n   * @param wordIndex (TODO: javadoc?)\n   * @param candIndex (TODO: javadoc?)\n   * @return Edit distance between the two words. Remarks: See Oflazer.\n   */\n  public int ed(final int i, final int j, final int wordIndex, final int candIndex) {\n    int result;\n    int a, b, c;\n\n    if (areEqual(wordProcessed[wordIndex], candidate[candIndex])) {\n      // last characters are the same\n      result = hMatrix.get(i, j);\n    } else if (wordIndex > 0\n        && candIndex > 0\n        && wordProcessed[wordIndex] == candidate[candIndex - 1]\n        && wordProcessed[wordIndex - 1] == candidate[candIndex]) {\n      // last two characters are transposed\n      a = hMatrix.get(i - 1, j - 1); // transposition, e.g. 
ababab, ababba\n      b = hMatrix.get(i + 1, j); // deletion, e.g. abab, aba\n      c = hMatrix.get(i, j + 1); // insertion e.g. aba, abab\n      result = 1 + min(a, b, c);\n    } else {\n      // otherwise\n      a = hMatrix.get(i, j); // replacement, e.g. ababa, ababb\n      b = hMatrix.get(i + 1, j); // deletion, e.g. ab, a\n      c = hMatrix.get(i, j + 1); // insertion e.g. a, ab\n      result = 1 + min(a, b, c);\n    }\n\n    hMatrix.set(i + 1, j + 1, result);\n    return result;\n  }\n\n  // by Jaume Ortola\n  private boolean areEqual(final char x, final char y) {\n    if (x == y) {\n      return true;\n    }\n    if (dictionaryMetadata.getEquivalentChars() != null) {\n      List<Character> chars = dictionaryMetadata.getEquivalentChars().get(x);\n      if (chars != null && chars.contains(y)) {\n        return true;\n      }\n    }\n    if (dictionaryMetadata.isIgnoringDiacritics()) {\n      String xn = Normalizer.normalize(Character.toString(x), Form.NFD);\n      String yn = Normalizer.normalize(Character.toString(y), Form.NFD);\n      if (xn.charAt(0) == yn.charAt(0)) { // avoid case conversion, if possible\n        return true;\n      }\n      if (dictionaryMetadata.isConvertingCase()) {\n        // again case conversion only when needed -- we\n        // do not need String.lowercase because we only check\n        // single characters, so a cheaper method is enough\n        if (Character.isLetter(xn.charAt(0))) {\n          boolean testNeeded =\n              Character.isLowerCase(xn.charAt(0)) != Character.isLowerCase(yn.charAt(0));\n          if (testNeeded) {\n            return Character.toLowerCase(xn.charAt(0)) == Character.toLowerCase(yn.charAt(0));\n          }\n        }\n      }\n      return xn.charAt(0) == yn.charAt(0);\n    }\n    return false;\n  }\n\n  /**\n   * Calculates cut-off edit distance.\n   *\n   * @param depth current length of candidates.\n   * @param wordIndex index in the misspelled word corresponding to the current depth.\n   * @param candIndex index of the current character in the candidate.\n   * @return 
Cut-off edit distance. Remarks: See Oflazer.\n   */\n  public int cuted(final int depth, final int wordIndex, final int candIndex) {\n    final int l = Math.max(0, depth - effectEditDistance); // min chars from word to consider - 1\n    final int u =\n        Math.min(\n            wordLen - 1 - (wordIndex - depth),\n            depth + effectEditDistance); // max chars from word to\n    // consider - 1\n    int minEd = effectEditDistance + 1; // what is to be computed\n    int wi = wordIndex + l - depth;\n    int d;\n\n    for (int i = l; i <= u; i++, wi++) {\n      if ((d = ed(i, depth, wi, candIndex)) < minEd) {\n        minEd = d;\n      }\n    }\n    return minEd;\n  }\n\n  // Match the last letter of the candidate against two or more letters of the word.\n  private int matchAnyToOne(final int wordIndex, final int candIndex) {\n    if (replacementsAnyToOne.containsKey(candidate[candIndex])) {\n      for (final Pattern p : replacementsAnyToOne.get(candidate[candIndex])) {\n        if (p.startAnchor && wordIndex != 0) continue;\n        int i = 0;\n        while (i < p.chars.length\n            && (wordIndex + i) < wordLen\n            && p.chars[i] == wordProcessed[wordIndex + i]) {\n          i++;\n        }\n        if (i == p.chars.length) {\n          if (p.endAnchor && wordIndex + i != wordLen) continue;\n          return i;\n        }\n      }\n    }\n    return 0;\n  }\n\n  private int matchAnyToTwo(\n      final int wordIndex,\n      final int candIndex,\n      final int minLookbackWordIndex,\n      final String lastAnyToOneSource,\n      final char lastAnyToOneTarget) {\n    if (candIndex > 0 && candIndex < candidate.length && wordIndex > 0) {\n      char[] twoChar = {candidate[candIndex - 1], candidate[candIndex]};\n      String sTwoChar = new String(twoChar);\n      if (replacementsAnyToTwo.containsKey(sTwoChar)) {\n        for (final Pattern p : replacementsAnyToTwo.get(sTwoChar)) {\n          if (p.startAnchor && wordIndex - 1 != 0) continue;\n     
     if (p.chars.length == 2\n              && wordIndex < wordLen\n              && candidate[candIndex - 1] == wordProcessed[wordIndex - 1]\n              && candidate[candIndex] == wordProcessed[wordIndex]) {\n            return 0; // unnecessary replacements\n          }\n          int i = 0;\n          while (i < p.chars.length\n              && (wordIndex - 1 + i) < wordLen\n              && p.chars[i] == wordProcessed[wordIndex - 1 + i]) {\n            i++;\n          }\n          if (i == p.chars.length) {\n            if (p.endAnchor && wordIndex - 1 + i != wordLen) continue;\n            // Reject if this match directly reverses a previous anyToOne match at an overlapping\n            // position\n            if (wordIndex - 1 < minLookbackWordIndex\n                && lastAnyToOneSource != null\n                && p.chars.length == 1\n                && p.chars[0] == lastAnyToOneTarget\n                && sTwoChar.equals(lastAnyToOneSource)) {\n              continue;\n            }\n            return i;\n          }\n        }\n      }\n    }\n    return 0;\n  }\n\n  private static int min(final int a, final int b, final int c) {\n    return Math.min(a, Math.min(b, c));\n  }\n\n  /**\n   * Copy-paste of Character.isAlphabetic() (needed as we require only 1.6)\n   *\n   * @param codePoint The input character.\n   * @return True if the character is a Unicode alphabetic character.\n   */\n  static boolean isAlphabetic(final int codePoint) {\n    return ((1 << Character.UPPERCASE_LETTER\n                    | 1 << Character.LOWERCASE_LETTER\n                    | 1 << Character.TITLECASE_LETTER\n                    | 1 << Character.MODIFIER_LETTER\n                    | 1 << Character.OTHER_LETTER\n                    | 1 << Character.LETTER_NUMBER)\n                >> Character.getType(codePoint)\n            & 1)\n        != 0;\n  }\n\n  /**\n   * Checks whether a string contains a digit. 
Used for ignoring words with numbers\n   *\n   * @param s Word to be checked.\n   * @return True if there is no digit inside the word.\n   */\n  static boolean containsNoDigit(final String s) {\n    for (int k = 0; k < s.length(); k++) {\n      if (Character.isDigit(s.charAt(k))) {\n        return false;\n      }\n    }\n    return true;\n  }\n\n  /**\n   * Returns true if <code>str</code> is made up of all-uppercase characters (ignoring characters\n   * for which no upper-/lowercase distinction exists).\n   */\n  boolean isAllUppercase(final String str) {\n    for (int i = 0; i < str.length(); i++) {\n      char c = str.charAt(i);\n      if (Character.isLetter(c) && Character.isLowerCase(c)) {\n        return false;\n      }\n    }\n    return true;\n  }\n\n  /**\n   * Returns true if <code>str</code> contains at least one letter that is not lowercase (ignoring\n   * characters for which no upper-/lowercase distinction exists).\n   */\n  boolean isNotAllLowercase(final String str) {\n    for (int i = 0; i < str.length(); i++) {\n      char c = str.charAt(i);\n      if (Character.isLetter(c) && !Character.isLowerCase(c)) {\n        return true;\n      }\n    }\n    return false;\n  }\n\n  /**\n   * @param str input string\n   * @return true unless <code>str</code> is a capitalized word (an uppercase letter followed\n   *     only by lowercase characters).\n   */\n  boolean isNotCapitalizedWord(final String str) {\n    if (isNotEmpty(str) && Character.isUpperCase(str.charAt(0))) {\n      for (int i = 1; i < str.length(); i++) {\n        char c = str.charAt(i);\n        if (Character.isLetter(c) && !Character.isLowerCase(c)) {\n          return true;\n        }\n      }\n      return false;\n    }\n    return true;\n  }\n\n  /**\n   * Helper method to replace calls to \"\".equals().\n   *\n   * @param str String to check\n   * @return true if the string is neither null nor empty\n   */\n  static boolean isNotEmpty(final String str) {\n    return str != null && str.length() != 0;\n  }\n\n  /**\n   * @param str input str\n   * @return Returns true if str is MixedCase.\n   */\n  boolean isMixedCase(final String str) {\n    
return !isAllUppercase(str) && isNotCapitalizedWord(str) && isNotAllLowercase(str);\n  }\n\n  /**\n   * @param str The string to check.\n   * @return Returns true if str is CamelCase. Note that German compounds with a dash (like\n   *     \"Waschmaschinen-Test\") are also considered camel case by this method.\n   */\n  public boolean isCamelCase(final String str) {\n    return isNotEmpty(str)\n        && !isAllUppercase(str)\n        && isNotCapitalizedWord(str)\n        && Character.isUpperCase(str.charAt(0))\n        && (!(str.length() > 1) || Character.isLowerCase(str.charAt(1)))\n        && isNotAllLowercase(str);\n  }\n\n  /**\n   * Used to determine whether the dictionary supports case conversions.\n   *\n   * @return boolean value that answers this question in a deep and meaningful way.\n   * @since 1.9\n   */\n  public boolean convertsCase() {\n    return dictionaryMetadata.isConvertingCase();\n  }\n\n  /**\n   * @param str The string to find the replacements for.\n   * @param fromIndex The index from which replacements are found.\n   * @param level The recursion level. 
The search stops if level is &gt; MAX_RECURSION_LEVEL.\n   * @return A list of all possible replacements of the given string {@code str}.\n   */\n  public List<String> getAllReplacements(final String str, final int fromIndex, final int level) {\n    List<String> replaced = new ArrayList<>();\n    if (level > MAX_RECURSION_LEVEL) { // Stop searching at some point\n      replaced.add(str);\n      return replaced;\n    }\n    StringBuilder sb = new StringBuilder();\n    sb.append(str);\n    int index = MAX_WORD_LENGTH;\n    String key = \"\";\n    int keyLength = 0;\n    boolean found = false;\n    // find first possible replacement after fromIndex position\n    String strippedKeyForSelected = \"\";\n    for (final String auxKey : replacementsTheRest.keySet()) {\n      boolean startAnchor = isStartAnchored(auxKey);\n      boolean endAnchor = isEndAnchored(auxKey);\n      String stripped = (startAnchor || endAnchor) ? stripAnchors(auxKey) : auxKey;\n      int auxIndex;\n      if (startAnchor && fromIndex > 0) {\n        continue; // ^ anchor only valid from the beginning\n      } else if (startAnchor) {\n        auxIndex = sb.indexOf(stripped, 0) == 0 ? 0 : -1;\n      } else if (endAnchor) {\n        int expectedIndex = sb.length() - stripped.length();\n        auxIndex =\n            (expectedIndex >= fromIndex && sb.indexOf(stripped, expectedIndex) == expectedIndex)\n                ? 
expectedIndex\n                : -1;\n      } else {\n        auxIndex = sb.indexOf(auxKey, fromIndex);\n      }\n      if (auxIndex > -1\n          && (auxIndex < index\n              || (auxIndex == index\n                  && !(stripped.length() < keyLength)))) { // select the longest possible key\n        index = auxIndex;\n        key = auxKey;\n        keyLength = stripped.length();\n        strippedKeyForSelected = stripped;\n      }\n    }\n    if (index < MAX_WORD_LENGTH) {\n      for (final String rep : replacementsTheRest.get(key)) {\n        // start a branch without replacement (only once per key)\n        if (!found) {\n          replaced.addAll(\n              getAllReplacements(str, index + strippedKeyForSelected.length(), level + 1));\n          found = true;\n        }\n        // avoid unnecessary replacements (ex. don't replace L by L·L when L·L already present)\n        int ind = sb.indexOf(rep, fromIndex - rep.length() + 1);\n        if (rep.length() > strippedKeyForSelected.length()\n            && ind > -1\n            && (ind == index || ind == index - rep.length() + 1)) {\n          continue;\n        }\n        // start a branch with replacement\n        sb.replace(index, index + strippedKeyForSelected.length(), rep);\n        replaced.addAll(getAllReplacements(sb.toString(), index + rep.length(), level + 1));\n        sb.setLength(0);\n        sb.append(str);\n      }\n    }\n    if (!found) {\n      replaced.add(sb.toString());\n    }\n    return replaced;\n  }\n\n  /**\n   * Sets up the word and candidate. 
Used only to test the edit distance in JUnit tests.\n   *\n   * @param word the first word\n   * @param candidate the second word used for edit distance calculation\n   */\n  void setWordAndCandidate(final String word, final String candidate) {\n    wordProcessed = word.toCharArray();\n    wordLen = wordProcessed.length;\n    this.candidate = candidate.toCharArray();\n    candLen = this.candidate.length;\n    effectEditDistance = wordLen <= editDistance ? wordLen - 1 : editDistance;\n  }\n\n  public final int getWordLen() {\n    return wordLen;\n  }\n\n  public final int getCandLen() {\n    return candLen;\n  }\n\n  public final int getEffectiveED() {\n    return effectEditDistance;\n  }\n\n  /**\n   * Used to sort candidates according to edit distance, and possibly according to their frequency\n   * in the future.\n   */\n  public final class CandidateData implements Comparable<CandidateData> {\n    private final String word;\n    private final int origDistance;\n    private final int distance;\n\n    CandidateData(final String word, final int distance) {\n      this.word = word;\n      this.origDistance = distance;\n      this.distance = distance * FREQ_RANGES + FREQ_RANGES - getFrequency(word) - 1;\n    }\n\n    public final String getWord() {\n      return word;\n    }\n\n    public final int getDistance() {\n      return distance;\n    }\n\n    @Override\n    public int compareTo(final CandidateData cd) {\n      // Assume no overflow.\n      return Integer.compare(this.distance, cd.getDistance());\n    }\n\n    @Override\n    public String toString() {\n      return word + '/' + distance;\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-speller/src/test/java/morfologik/speller/HMatrixTest.java",
    "content": "package morfologik.speller;\n\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport org.junit.jupiter.api.Test;\n\npublic class HMatrixTest {\n  private static final int MAX_WORD_LENGTH = 120;\n\n  @Test\n  public void stressTestInit() {\n    for (int i = 0; i < 10; i++) { // test if we don't get beyond array limits etc.\n      HMatrix H = new HMatrix(i, MAX_WORD_LENGTH);\n      assertEquals(0, H.get(1, 1));\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-speller/src/test/java/morfologik/speller/SpellerTest.java",
    "content": "package morfologik.speller;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport static org.junit.jupiter.api.Assertions.assertTrue;\n\nimport java.io.IOException;\nimport java.net.URL;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport morfologik.stemming.Dictionary;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.BeforeAll;\nimport org.junit.jupiter.api.Test;\n\npublic class SpellerTest {\n  private static Dictionary dictionary;\n\n  @BeforeAll\n  public static void setup() throws Exception {\n    final URL url = SpellerTest.class.getResource(\"slownik.dict\");\n    dictionary = Dictionary.read(url);\n  }\n\n  /*\n   @Test\n   public void testAbka() throws Exception {\n       final Speller spell = new Speller(dictionary, 2);\n       System.out.println(\"Replacements:\");\n       for (String s : spell.findReplacements(\"abka\")) {\n           System.out.println(s);\n       }\n   }\n  */\n\n  @Test\n  public void testRunonWords() throws IOException {\n    final Speller spell = new Speller(dictionary);\n    Assertions.assertThat(spell.replaceRunOnWords(\"abaka\")).isEmpty();\n    Assertions.assertThat(spell.replaceRunOnWords(\"abakaabace\")).contains(\"abaka abace\");\n    Assertions.assertThat(spell.replaceRunOnWords(\"Abakaabace\")).contains(\"Abaka abace\");\n    Assertions.assertThat(spell.replaceRunOnWords(\"AbakaAbace\")).contains(\"Abaka Abace\");\n    Assertions.assertThat(spell.replaceRunOnWords(\"abakaAbace\")).contains(\"abaka Abace\");\n\n    // Test on a morphological dictionary - should work as well\n    final URL url1 = getClass().getResource(\"test-infix.dict\");\n    final Speller spell1 = new Speller(Dictionary.read(url1));\n    assertTrue(spell1.replaceRunOnWords(\"Rzekunia\").isEmpty());\n    assertTrue(\n        spell1.replaceRunOnWords(\"RzekuniaRzeczypospolitej\").contains(\"Rzekunia Rzeczypospolitej\"));\n    assertTrue(\n        
spell1.replaceRunOnWords(\"RzekuniaRze\").isEmpty()); // Rze is not found but is a prefix\n\n    final URL url2 = getClass().getResource(\"single-char-word.dict\");\n    final Speller spell2 = new Speller(Dictionary.read(url2));\n    assertTrue(spell2.replaceRunOnWords(\"alot\").contains(\"a lot\"));\n    assertTrue(spell2.replaceRunOnWords(\"Alot\").contains(\"A lot\"));\n    assertTrue(spell2.replaceRunOnWords(\"ALot\").contains(\"A Lot\"));\n    assertTrue(spell2.replaceRunOnWords(\"LotAmusement\").contains(\"Lot Amusement\"));\n    // TODO? assertTrue(spell2.replaceRunOnWords(\"LOTAMUSEMENT\").contains(\"LOT AMUSEMENT\"));\n    assertTrue(spell2.replaceRunOnWords(\"aalot\").contains(\"aa lot\"));\n    assertTrue(spell2.replaceRunOnWords(\"aamusement\").contains(\"a amusement\"));\n    assertTrue(spell2.replaceRunOnWords(\"clot\").isEmpty());\n    assertTrue(spell2.replaceRunOnWords(\"foobar\").isEmpty());\n  }\n\n  @Test\n  public void testIsInDictionary() throws IOException {\n    // Test on a morphological dictionary, including separators\n    final URL url1 = getClass().getResource(\"test-infix.dict\");\n    final Speller spell1 = new Speller(Dictionary.read(url1));\n    assertTrue(spell1.isInDictionary(\"Rzekunia\"));\n    assertTrue(!spell1.isInDictionary(\"Rzekunia+\"));\n    assertTrue(!spell1.isInDictionary(\"Rzekunia+aaa\"));\n    // test UTF-8 dictionary\n    final URL url = getClass().getResource(\"test-utf-spell.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n    assertTrue(spell.isInDictionary(\"jaźń\"));\n    assertTrue(spell.isInDictionary(\"zażółć\"));\n    assertTrue(spell.isInDictionary(\"żółwiową\"));\n    assertTrue(spell.isInDictionary(\"ćwikła\"));\n    assertTrue(spell.isInDictionary(\"Żebrowski\"));\n    assertTrue(spell.isInDictionary(\"Święto\"));\n    assertTrue(spell.isInDictionary(\"Świerczewski\"));\n    assertTrue(spell.isInDictionary(\"abc\"));\n  }\n\n  @Test\n  public void testFindReplacements() throws 
IOException {\n    final Speller spell = new Speller(dictionary, 1);\n    assertTrue(spell.findReplacements(\"abka\").contains(\"abak\"));\n    // check if we get only dictionary words...\n    List<String> reps = spell.findReplacements(\"bak\");\n    for (final String word : reps) {\n      assertTrue(spell.isInDictionary(word));\n    }\n    assertTrue(\n        spell.findReplacements(\"abka~~\").isEmpty()); // 2 characters more -> edit distance too large\n    assertTrue(!spell.findReplacements(\"Rezkunia\").contains(\"Rzekunia\"));\n\n    final URL url1 = getClass().getResource(\"test-infix.dict\");\n    final Speller spell1 = new Speller(Dictionary.read(url1));\n    assertTrue(spell1.findReplacements(\"Rezkunia\").contains(\"Rzekunia\"));\n    // diacritics\n    assertTrue(spell1.findReplacements(\"Rzękunia\").contains(\"Rzekunia\"));\n    // we should get no candidates for correct words\n    assertTrue(spell1.isInDictionary(\"Rzekunia\"));\n    assertTrue(spell1.findReplacements(\"Rzekunia\").isEmpty());\n    // and no for things that are too different from the dictionary\n    assertTrue(spell1.findReplacements(\"Strefakibica\").isEmpty());\n    // nothing for nothing\n    assertTrue(spell1.findReplacements(\"\").isEmpty());\n    // nothing for weird characters\n    assertTrue(spell1.findReplacements(\"\\u0000\").isEmpty());\n    // nothing for other characters\n    assertTrue(spell1.findReplacements(\"«…»\").isEmpty());\n    // nothing for separator\n    assertTrue(spell1.findReplacements(\"+\").isEmpty());\n  }\n\n  @Test\n  public void testFrequencyNonUTFDictionary() throws IOException {\n    final URL url1 = getClass().getResource(\"test_freq_iso.dict\");\n    final Speller spell = new Speller(Dictionary.read(url1));\n    assertTrue(spell.isInDictionary(\"a\"));\n    assertTrue(!spell.isInDictionary(\"aõh\")); // non-encodable in UTF-8\n  }\n\n  @Test\n  public void testFindReplacementsInUTF() throws IOException {\n    final URL url = 
getClass().getResource(\"test-utf-spell.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n    assertTrue(spell.findReplacements(\"gęslą\").contains(\"gęślą\"));\n    assertTrue(spell.findReplacements(\"ćwikla\").contains(\"ćwikła\"));\n    assertTrue(spell.findReplacements(\"Swierczewski\").contains(\"Świerczewski\"));\n    assertTrue(spell.findReplacements(\"zółwiową\").contains(\"żółwiową\"));\n    assertTrue(spell.findReplacements(\"Żebrowsk\").contains(\"Żebrowski\"));\n    assertTrue(spell.findReplacements(\"święto\").contains(\"Święto\"));\n    // note: no diacritics here, but we still get matches!\n    assertTrue(spell.findReplacements(\"gesla\").contains(\"gęślą\"));\n    assertTrue(spell.findReplacements(\"swieto\").contains(\"Święto\"));\n    assertTrue(spell.findReplacements(\"zolwiowa\").contains(\"żółwiową\"));\n    // using equivalent characters 'x' = 'ź'\n    assertTrue(spell.findReplacements(\"jexn\").contains(\"jaźń\"));\n    // 'u' = 'ó', so the edit distance is still small...\n    assertTrue(spell.findReplacements(\"zażulv\").contains(\"zażółć\"));\n    // 'rz' = 'ż', so the edit distance is still small, but with string replacements...\n    assertTrue(spell.findReplacements(\"zarzulv\").contains(\"zażółć\"));\n    assertTrue(spell.findReplacements(\"Rzebrowski\").contains(\"Żebrowski\"));\n    assertTrue(spell.findReplacements(\"rzółw\").contains(\"żółw\"));\n    assertTrue(spell.findReplacements(\"Świento\").contains(\"Święto\"));\n    // avoid mixed-case words as suggestions when using replacements ('rz' = 'ż')\n    assertTrue(spell.findReplacements(\"zArzółć\").get(0).equals(\"zażółć\"));\n  }\n\n  @Test\n  public void testFindReplacementsUsingFrequency() throws IOException {\n    final URL url = getClass().getResource(\"dict-with-freq.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n\n    // check if we get only dictionary words...\n    List<String> reps = spell.findReplacements(\"jist\");\n    for 
(final String word : reps) {\n      assertTrue(spell.isInDictionary(word));\n    }\n    // get replacements ordered by frequency\n    assertTrue(reps.get(0).equals(\"just\"));\n    assertTrue(reps.get(1).equals(\"list\"));\n    assertTrue(reps.get(2).equals(\"fist\"));\n    assertTrue(reps.get(3).equals(\"mist\"));\n    assertTrue(reps.get(4).equals(\"jest\"));\n    assertTrue(reps.get(5).equals(\"dist\"));\n    assertTrue(reps.get(6).equals(\"gist\"));\n  }\n\n  @Test\n  public void testFindSimilarWords() throws IOException {\n    final URL url = getClass().getResource(\"dict-with-freq.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n\n    List<String> reps = spell.findSimilarWords(\"fist\");\n    assertTrue(reps.toString().equals(\"[list, mist, dist, gist, wist, hist]\"));\n    reps = spell.findSimilarWords(\"mist\");\n    assertTrue(reps.toString().equals(\"[list, fist, dist, gist, wist, hist]\"));\n    reps = spell.findSimilarWords(\"Fist\");\n    assertTrue(reps.toString().equals(\"[fist, list, mist, dist, gist, wist, hist]\"));\n    reps = spell.findSimilarWords(\"licit\");\n    assertTrue(reps.toString().equals(\"[list, fist, mist, dist, gist, wist, hist]\"));\n  }\n\n  @Test\n  public void testConcurrentReplacements() throws IOException {\n    final URL url = getClass().getResource(\"dict-with-freq.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n    // only the longest key is selected in replacement pairs\n    List<String> reps = spell.getAllReplacements(\"teached\", 0, 0);\n    assertTrue(reps.contains(\"teached\"));\n    assertTrue(reps.contains(\"taught\"));\n    assertTrue(!reps.contains(\"tgheached\"));\n  }\n\n  @Test\n  public void testIsMisspelled() throws IOException {\n    final URL url = getClass().getResource(\"test-utf-spell.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n    assertTrue(!spell.isMisspelled(\"Paragraf22\")); // we ignore numbers\n    
assertTrue(!spell.isMisspelled(\"!\")); // we ignore punctuation\n    assertTrue(spell.isMisspelled(\"dziekie\")); // check that we do find an error\n    assertTrue(!spell.isMisspelled(\"SłowozGarbem\")); // we ignore camel-case words\n    assertTrue(!spell.isMisspelled(\"Ćwikła\")); // and lowercase letters\n    assertTrue(!spell.isMisspelled(\"TOJESTTEST\")); // and uppercase letters\n    final Speller oldStyleSpell = new Speller(dictionary, 1);\n    assertTrue(oldStyleSpell.isMisspelled(\"Paragraf22\")); // we do not ignore numbers\n    assertTrue(oldStyleSpell.isMisspelled(\"!\")); // we do not ignore punctuation\n    // assertTrue(oldStyleSpell.isMisspelled(\"SłowozGarbem\"));  // we ignore camel-case words\n    assertTrue(oldStyleSpell.isMisspelled(\"Abaka\")); // and lowercase letters\n    final URL url1 = getClass().getResource(\"test-infix.dict\");\n    final Speller spell1 = new Speller(Dictionary.read(url1));\n    assertTrue(!spell1.isMisspelled(\"Rzekunia\"));\n    assertTrue(spell1.isAllUppercase(\"RZEKUNIA\"));\n    assertTrue(spell1.isMisspelled(\"RZEKUNIAA\")); // finds a typo here\n    assertTrue(!spell1.isMisspelled(\"RZEKUNIA\")); // but not here\n  }\n\n  @Test\n  public void testCamelCase() {\n    final Speller spell = new Speller(dictionary, 1);\n    assertTrue(spell.isCamelCase(\"CamelCase\"));\n    assertTrue(!spell.isCamelCase(\"Camel\"));\n    assertTrue(!spell.isCamelCase(\"CAMEL\"));\n    assertTrue(!spell.isCamelCase(\"camel\"));\n    assertTrue(!spell.isCamelCase(\"cAmel\"));\n    assertTrue(!spell.isCamelCase(\"CAmel\"));\n    assertTrue(!spell.isCamelCase(\"\"));\n    assertTrue(!spell.isCamelCase(null));\n  }\n\n  @Test\n  public void testCapitalizedWord() {\n    final Speller spell = new Speller(dictionary, 1);\n    assertTrue(spell.isNotCapitalizedWord(\"CamelCase\"));\n    assertTrue(!spell.isNotCapitalizedWord(\"Camel\"));\n    assertTrue(spell.isNotCapitalizedWord(\"CAMEL\"));\n    
assertTrue(spell.isNotCapitalizedWord(\"camel\"));\n    assertTrue(spell.isNotCapitalizedWord(\"cAmel\"));\n    assertTrue(spell.isNotCapitalizedWord(\"CAmel\"));\n    assertTrue(spell.isNotCapitalizedWord(\"\"));\n  }\n\n  @Test\n  public void testGetAllReplacements() throws IOException {\n    final URL url = getClass().getResource(\"test-utf-spell.dict\");\n    final Speller spell = new Speller(Dictionary.read(url));\n    assertTrue(spell.isMisspelled(\"rzarzerzarzu\"));\n    assertEquals(\n        \"[rzarzerzarzu]\",\n        Arrays.toString(spell.getAllReplacements(\"rzarzerzarzu\", 0, 0).toArray()));\n  }\n\n  @Test\n  public void testEditDistanceCalculation() throws IOException {\n    final Speller spell = new Speller(dictionary, 5);\n    // test examples from Oflazer's paper\n    assertTrue(getEditDistance(spell, \"recoginze\", \"recognize\") == 1);\n    assertTrue(getEditDistance(spell, \"sailn\", \"failing\") == 3);\n    assertTrue(getEditDistance(spell, \"abc\", \"abcd\") == 1);\n    assertTrue(getEditDistance(spell, \"abc\", \"abcde\") == 2);\n    // test words from fsa_spell output\n    assertTrue(getEditDistance(spell, \"abka\", \"abaka\") == 1);\n    assertTrue(getEditDistance(spell, \"abka\", \"abakan\") == 2);\n    assertTrue(getEditDistance(spell, \"abka\", \"abaką\") == 2);\n    assertTrue(getEditDistance(spell, \"abka\", \"abaki\") == 2);\n  }\n\n  @Test\n  public void testCutOffEditDistance() throws IOException {\n    final Speller spell2 = new Speller(dictionary, 2); // note: threshold = 2\n    // test cut edit distance - reprter / repo from Oflazer\n    assertTrue(getCutOffDistance(spell2, \"repo\", \"reprter\") == 1);\n    assertTrue(getCutOffDistance(spell2, \"reporter\", \"reporter\") == 0);\n  }\n\n  @Test\n  public void testReplacementsAndDistance2() throws Exception {\n    /*File infoFile = new File(\"/tmp/morfologik.info\");\n    FileWriter fw1 = new FileWriter(infoFile);\n    fw1.write(\"fsa.dict.separator=+\\n\");\n    
fw1.write(\"fsa.dict.encoding=utf-8\\n\");\n    fw1.write(\"fsa.dict.speller.replacement-pairs=s ss,t d,R Rh,y ij,ę em,em ę\\n\");\n    fw1.close();\n\n    File inputFile = new File(\"/tmp/morfologik.txt\");\n    FileWriter fw2 = new FileWriter(inputFile);\n    fw2.write(\"Mitmuss\\n\");\n    fw2.write(\"Rhythmus\\n\");\n    fw2.write(\"Wald\\n\");\n    fw2.write(\"Band\\n\");\n    fw2.write(\"ijo\\n\");\n    fw2.write(\"ijond\\n\");\n    fw2.write(\"youd\\n\");\n    fw2.write(\"ijoussud\\n\");\n    fw2.write(\"ijoussuud\\n\");\n    fw2.write(\"ijussuud\\n\");\n    fw2.write(\"ijousod\\n\");\n    fw2.write(\"ij\\n\");\n    fw2.write(\"ijo\\n\");\n    fw2.write(\"Ciarkę\\n\");\n    fw2.write(\"Czarkę\\n\");\n    fw2.write(\"Clarke\\n\");\n    fw2.write(\"Clarkiem\\n\");\n    fw2.write(\"Clarkom\\n\");\n\n    fw2.close();\n\n    File dictFile = new File(\"/tmp/morfologik.dict\");\n    String[] buildToolOptions =\n            {\"-i\", inputFile.getAbsolutePath(), \"-o\", dictFile.getAbsolutePath()};\n    FSABuildTool.main(buildToolOptions);\n    Dictionary dictionary = Dictionary.read(dictFile);\n    Speller speller = new Speller(dictionary, 3);*/\n\n    final URL url = getClass().getResource(\"reps_dist2.dict\");\n    final Speller speller = new Speller(Dictionary.read(url), 3);\n\n    List<String> reps = speller.findReplacements(\"Rytmus\");\n    assertTrue(reps.get(0).equals(\"Rhythmus\"));\n    assertTrue(reps.get(1).equals(\"Mitmuss\"));\n    reps = speller.findReplacements(\"Walt\");\n    assertTrue(reps.get(0).equals(\"Wald\"));\n    assertTrue(reps.get(1).equals(\"Band\"));\n    reps = speller.findReplacements(\"yout\");\n    assertTrue(reps.get(0).equals(\"youd\"));\n    assertTrue(reps.get(1).equals(\"ijond\"));\n    assertTrue(reps.get(2).equals(\"ijo\"));\n    reps = speller.findReplacements(\"yousut\");\n    assertTrue(reps.get(0).equals(\"ijoussud\"));\n    assertTrue(reps.get(1).equals(\"ijousod\"));\n    assertTrue(reps.get(2).equals(\"ijoussuud\"));\n 
   assertTrue(reps.get(3).equals(\"youd\"));\n    reps = speller.findReplacements(\"yo\");\n    assertTrue(reps.get(0).equals(\"ijo\"));\n    assertTrue(reps.get(1).equals(\"ij\"));\n    reps = speller.findReplacements(\"Clarkem\");\n    assertTrue(reps.get(0).equals(\"Ciarkę\"));\n    assertTrue(reps.get(1).equals(\"Clarke\"));\n    assertTrue(reps.get(2).equals(\"Clarkiem\"));\n    assertTrue(reps.get(3).equals(\"Clarkom\"));\n    assertTrue(reps.get(4).equals(\"Czarkę\"));\n  }\n\n  @Test\n  public void testFindReplacementsConsistentAcrossRepeatedCalls() throws IOException {\n    // HMatrix must be reset at the start of each findReplacementCandidates call.\n    // Without the reset, stale edit-distance values left by a previous traversal\n    // corrupt results: a reused Speller returns different candidates than a\n    // freshly constructed one.\n    final List<String> expected = new Speller(dictionary, 3).findReplacements(\"bak\");\n\n    final Speller reused = new Speller(dictionary, 3);\n    reused.findReplacements(\"abka\"); // dirties the hMatrix\n    final List<String> actual = reused.findReplacements(\"bak\");\n\n    assertEquals(expected, actual);\n  }\n\n  @Test\n  public void testIssue38AnchoredReplacementPairs() throws Exception {\n    // GH-38: support for ^ (start), $ (end) anchors and _ (space) in replacement-pairs.\n    // editDistance=0 ensures candidates are only found via replacement pairs, not by\n    // coincidental edit distance (e.g. 
\"alot\"/\"a lot\" differ by just 1).\n    final URL url = getClass().getResource(\"issue38.dict\");\n    final Speller speller = new Speller(Dictionary.read(url), 0);\n\n    // ^Ij IJ: start-anchored 2-char replacement; \"Ijsland\" -> \"IJsland\"\n    assertTrue(speller.findReplacements(\"Ijsland\").contains(\"IJsland\"));\n\n    // ^alot a_lot: start-anchored replacement with _ as space; \"alot\" -> \"a lot\"\n    assertTrue(speller.findReplacements(\"alot\").contains(\"a lot\"));\n\n    // ^påny$ på_ny: both anchors + _ as space; whole-word replacement \"påny\" -> \"på ny\"\n    assertTrue(speller.findReplacements(\"påny\").contains(\"på ny\"));\n  }\n\n  @Test\n  public void testIssue94() throws Exception {\n    final URL url = getClass().getResource(\"issue94.dict\");\n    final Speller speller = new Speller(Dictionary.read(url));\n    List<String> reps = speller.findReplacements(\"schänken\");\n    assertTrue(reps.get(0).equals(\"Schänken\"));\n    assertTrue(reps.get(1).equals(\"schenken\"));\n  }\n\n  @Test\n  public void testReciprocalReplacementPairsDoNotProduceZeroDistance() throws IOException {\n    // Searching for \"pissara\" in a dictionary containing \"pissarra\", \"passara\", \"passarà\".\n    // With reciprocal replacement pairs ss↔s, the bug causes matchAnyToOne (ss→s) followed by\n    // matchAnyToTwo (s→ss) to double-consume word[3]='s', corrupting the HMatrix and making\n    // \"passara\"/\"passarà\" appear as distance=0 candidates instead of distance=1.\n    final URL url = getClass().getResource(\"pissara-test.dict\");\n    final Speller speller = new Speller(Dictionary.read(url), 2);\n\n    List<Speller.CandidateData> candidates = speller.findReplacementCandidates(\"pissara\");\n\n    // \"pissarra\" (one extra 'r') and \"passara\" (i→a, ss→s) are both valid distance-1 candidates\n    List<String> words = new ArrayList<>();\n    for (Speller.CandidateData cd : candidates) {\n      words.add(cd.getWord());\n    }\n    
assertTrue(words.contains(\"pissarra\"), \"pissarra should be a suggestion for pissara\");\n    assertTrue(words.contains(\"passara\"), \"passara should be a suggestion for pissara\");\n    assertTrue(words.contains(\"passarà\"), \"passarà should be a suggestion for pissara\");\n\n    // No candidate should have origDistance=0: that would indicate the double-consumption bug.\n    // With FREQ_RANGES=26 and freq=0: origDistance=0 → distance=25, origDistance=1 → distance=51.\n    for (Speller.CandidateData cd : candidates) {\n      int origDistance = cd.getDistance() / Speller.FREQ_RANGES;\n      assertTrue(\n          origDistance > 0, \"Candidate '\" + cd.getWord() + \"' has unexpected origDistance=0\");\n    }\n  }\n\n  private int getCutOffDistance(final Speller spell, final String word, final String candidate) {\n    // assuming there is no pair-replacement\n    spell.setWordAndCandidate(word, candidate);\n    final int[] ced = new int[spell.getCandLen() - spell.getWordLen()];\n    for (int i = 0; i < spell.getCandLen() - spell.getWordLen(); i++) {\n      ced[i] = spell.cuted(spell.getWordLen() + i, spell.getWordLen() + i, spell.getWordLen() + i);\n    }\n    Arrays.sort(ced);\n    // and the min value...\n    if (ced.length > 0) {\n      return ced[0];\n    }\n    return 0;\n  }\n\n  private int getEditDistance(final Speller spell, final String word, final String candidate) {\n    // assuming there is no pair-replacement\n    spell.setWordAndCandidate(word, candidate);\n    final int maxDistance = spell.getEffectiveED();\n    final int candidateLen = spell.getCandLen();\n    final int wordLen = spell.getWordLen();\n    int ed = 0;\n    for (int i = 0; i < candidateLen; i++) {\n      if (spell.cuted(i, i, i) <= maxDistance) {\n        if (Math.abs(wordLen - 1 - i) <= maxDistance) {\n          ed = spell.ed(wordLen - 1, i, wordLen - 1, i);\n        }\n      }\n    }\n    return ed;\n  }\n}\n
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/dict-with-freq.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso-8859-2\r\n\r\nfsa.dict.encoder=suffix\r\n\r\nfsa.dict.frequency-included=true\r\n\r\nfsa.dict.speller.locale=en_US\r\nfsa.dict.speller.ignore-diacritics=true\r\nfsa.dict.speller.replacement-pairs=ninties 1990s, teached taught, t tgh, rised rose, a ei, ei a, a ey, ey a, ai ie, ie ai, are air, are ear, are eir, air are, air ere, ere air, ere ear, ere eir, ear are, ear air, ear ere, eir are, eir ere, ch te, te ch, ch ti, ti ch, ch tu, tu ch, ch s, s ch, ch k, k ch, f ph, ph f, gh f, f gh, i igh, igh i, i uy, uy i, i ee, ee i, j di, di j, j gg, gg j, j ge, ge j, s ti, ti s, s ci, ci s, k cc, cc k, k qu, qu k, kw qu, o eau, eau o, o ew, ew o, oo ew, ew oo, ew ui, ui ew, oo ui, ui oo, ew u, u ew, oo u, u oo, u oe, oe u, u ieu, ieu u, ue ew, ew ue, uff ough, oo ieu, ieu oo, ier ear, ear ier, ear air, air ear, w qu, qu w, z ss, ss z, shun tion, shun sion, shun cion"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/dict-with-freq.txt",
    "content": "ageist+C\ndeist+G\ndidst+A\ndigest+J\ndirest+E\ndist+G\ndivest+I\nfist+J\ngist+G\ngrist+I\nheist+I\nhist+A\njest+H\njilt+D\njoist+F\njust+P\nlicit+F\nlist+O\nmist+J\nweest+A\nwist+C\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/issue38.info",
    "content": "fsa.dict.separator=+\nfsa.dict.encoding=utf-8\nfsa.dict.encoder=suffix\nfsa.dict.speller.replacement-pairs=^Ij IJ,^alot a_lot,^påny$ på_ny\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/issue38.input",
    "content": "IJsland+IJsland\na lot+a lot\npå ny+på ny\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/issue94.info",
    "content": "fsa.dict.speller.replacement-pairs=ä e\nfsa.dict.encoder=SUFFIX\nfsa.dict.separator=+\nfsa.dict.encoding=utf-8\nfsa.dict.speller.ignore-diacritics=false\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/pissara-test.info",
    "content": "fsa.dict.separator=+\nfsa.dict.encoding=utf-8\nfsa.dict.encoder=NONE\nfsa.dict.speller.replacement-pairs=s ss,ss s\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/pissara-test.txt",
    "content": "passara\npassarà\npissarra\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/reps_dist2.info",
    "content": "fsa.dict.separator=+\nfsa.dict.encoding=utf-8\nfsa.dict.speller.replacement-pairs=s ss,t d,R Rh,y ij,ę em,em ę\nfsa.dict.encoder=suffix"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/reps_dist2.txt",
    "content": "Mitmuss\nRhythmus\nWald\nBand\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/single-char-word.info",
    "content": "#\n# Dictionary properties.\n#\n\nfsa.dict.separator=+\nfsa.dict.encoding=Cp1250\n\nfsa.dict.encoder=suffix\n\nfsa.dict.speller.ignore-diacritics=false\nfsa.dict.speller.ignore-numbers=false\nfsa.dict.speller.convert-case=false\nfsa.dict.speller.ignore-punctuation=false"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/slownik.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=Cp1250\r\n\r\nfsa.dict.encoder=suffix\r\n\r\nfsa.dict.speller.ignore-diacritics=false\r\nfsa.dict.speller.ignore-numbers=false\r\nfsa.dict.speller.convert-case=false\r\nfsa.dict.speller.ignore-punctuation=false"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/test-infix.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso-8859-2\r\n\r\nfsa.dict.encoder=infix\r\n\r\nfsa.dict.speller.ignore-all-uppercase=false"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/test-utf-spell.info",
    "content": "#\r\n# Dictionary properties.\r\n# UTF-8 encoding or native2ascii has to be used for non-ASCII data.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=utf-8\r\n\r\nfsa.dict.encoder=suffix\r\n\r\nfsa.dict.speller.locale=pl_PL\r\nfsa.dict.speller.ignore-diacritics=true\r\nfsa.dict.speller.equivalent-chars=x ź, l ł, u ó, ó u\r\nfsa.dict.speller.replacement-pairs=rz ż, ż rz, ch h, h ch, ę en, en ę\r\n"
  },
  {
    "path": "morfologik-speller/src/test/resources/morfologik/speller/test_freq_iso.info",
    "content": "#\n# Dictionary properties.\n#\n\nfsa.dict.separator=+\nfsa.dict.encoding=iso-8859-2\n\nfsa.dict.encoder=suffix\n\nfsa.dict.frequency-included=true\n\nfsa.dict.speller.locale=pl_PL\nfsa.dict.speller.ignore-diacritics=true\nfsa.dict.speller.equivalent-chars=x ź, l ł, u ó, ó u\nfsa.dict.speller.replacement-pairs=ź zi, ł eu, ć ci, ć dż, ć dź, ć dz, c dz, ch h, ci ć, cz czy, dź ć, dź dzi, dż ć, dz ć, dzi dź, edzil ędził, ę em, ę en, ei eja, eja ei, em ę, en ę, eu ł, h ch, he chę, śi ś, ii ija, ija ii, iosc ość, ise się, loz łos, ni ń, ńi ń, ń ni, ą oł, oł ą, oi oja, oja oi, ą om, om ą, ą on, on ą, ru kró, ż rz, rz ż, rz sz, scia ścią, ś si, si ś, sić ść, s sną, sz ż, sz rz, tro rot, u y, wu wy, yi yja, yja yi, zal rzał, zekac rzekać, zi ź, zl azł, z żn, z rz, chłopcowi chłopcu, bratowi bratu, aleji alei, lubieć lubić, nei nie, źmie zmie, piatek piątek, pokuj pokój, poszłem poszedłem, prosze proszę, rząda żąda, sa są, sei się, standart standard, trzcionk czcionk, szłem szedłem, pry przy"
  },
  {
    "path": "morfologik-stemming/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.carrot2</groupId>\n    <artifactId>morfologik-parent</artifactId>\n    <version>2.2.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>morfologik-stemming</artifactId>\n  <packaging>bundle</packaging>\n\n  <name>Morfologik Stemming APIs</name>\n  <description>Morfologik Stemming APIs.</description>\n\n  <properties>\n    <forbiddenapis.signaturefile>../etc/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n    <project.moduleId>org.carrot2.morfologik.stemming</project.moduleId>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-fsa</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.felix</groupId>\n        <artifactId>maven-bundle-plugin</artifactId>\n        <configuration>\n          <instructions>\n            <Export-Package>morfologik.stemming</Export-Package>\n            <Import-Package>*</Import-Package>\n          </instructions>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/ArrayViewList.java",
    "content": "package morfologik.stemming;\n\nimport java.util.*;\n\n/** A view over a range of an array. */\n@SuppressWarnings(\"serial\")\nfinal class ArrayViewList<E> extends AbstractList<E> implements RandomAccess, java.io.Serializable {\n  /** Backing array. */\n  private E[] a;\n\n  private int start;\n  private int length;\n\n  /*\n   *\n   */\n  ArrayViewList(E[] array, int start, int length) {\n    if (array == null) throw new IllegalArgumentException();\n    wrap(array, start, length);\n  }\n\n  /*\n   *\n   */\n  public int size() {\n    return length;\n  }\n\n  /*\n   *\n   */\n  public E get(int index) {\n    return a[start + index];\n  }\n\n  /*\n   *\n   */\n  public E set(int index, E element) {\n    throw new UnsupportedOperationException();\n  }\n\n  /*\n   *\n   */\n  public void add(int index, E element) {\n    throw new UnsupportedOperationException();\n  }\n\n  /*\n   *\n   */\n  public E remove(int index) {\n    throw new UnsupportedOperationException();\n  }\n\n  /*\n   *\n   */\n  public boolean addAll(int index, Collection<? extends E> c) {\n    throw new UnsupportedOperationException();\n  }\n\n  /*\n   *\n   */\n  public int indexOf(Object o) {\n    if (o == null) {\n      for (int i = start; i < start + length; i++) if (a[i] == null) return i - start;\n    } else {\n      for (int i = start; i < start + length; i++) if (o.equals(a[i])) return i - start;\n    }\n    return -1;\n  }\n\n  public ListIterator<E> listIterator() {\n    return listIterator(0);\n  }\n\n  /*\n   *\n   */\n  public ListIterator<E> listIterator(final int index) {\n    return Arrays.asList(a).subList(start, start + length).listIterator(index);\n  }\n\n  /*\n   *\n   */\n  public boolean contains(Object o) {\n    return indexOf(o) != -1;\n  }\n\n  /*\n   *\n   */\n  void wrap(E[] array, int start, int length) {\n    this.a = array;\n    this.start = start;\n    this.length = length;\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/BufferUtils.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.CharacterCodingException;\nimport java.nio.charset.Charset;\nimport java.nio.charset.CharsetDecoder;\nimport java.nio.charset.CharsetEncoder;\nimport java.nio.charset.CoderResult;\nimport java.nio.charset.CodingErrorAction;\nimport java.util.Arrays;\n\npublic final class BufferUtils {\n  /** No instances. */\n  private BufferUtils() {\n    // empty\n  }\n\n  /**\n   * Ensure the buffer's capacity is large enough to hold a given number of elements. If the input\n   * buffer is not large enough, a new buffer is allocated and returned.\n   *\n   * @param elements The required number of elements to be appended to the buffer.\n   * @param buffer The buffer to check or <code>null</code> if a new buffer should be allocated.\n   * @return Returns the same buffer or a new buffer with the given capacity.\n   */\n  public static ByteBuffer clearAndEnsureCapacity(ByteBuffer buffer, int elements) {\n    if (buffer == null || buffer.capacity() < elements) {\n      buffer = ByteBuffer.allocate(elements);\n    } else {\n      buffer.clear();\n    }\n    return buffer;\n  }\n\n  /**\n   * Ensure the buffer's capacity is large enough to hold a given number of elements. 
If the input\n   * buffer is not large enough, a new buffer is allocated and returned.\n   *\n   * @param elements The required number of elements to be appended to the buffer.\n   * @param buffer The buffer to check or <code>null</code> if a new buffer should be allocated.\n   * @return Returns the same buffer or a new buffer with the given capacity.\n   */\n  public static CharBuffer clearAndEnsureCapacity(CharBuffer buffer, int elements) {\n    if (buffer == null || buffer.capacity() < elements) {\n      buffer = CharBuffer.allocate(elements);\n    } else {\n      buffer.clear();\n    }\n    return buffer;\n  }\n\n  /**\n   * @param buffer The buffer to convert to a string.\n   * @param charset The charset to use when converting bytes to characters.\n   * @return A string representation of buffer's content.\n   */\n  public static String toString(ByteBuffer buffer, Charset charset) {\n    buffer = buffer.slice();\n    byte[] buf = new byte[buffer.remaining()];\n    buffer.get(buf);\n    return new String(buf, charset);\n  }\n\n  public static String toString(CharBuffer buffer) {\n    buffer = buffer.slice();\n    char[] buf = new char[buffer.remaining()];\n    buffer.get(buf);\n    return new String(buf);\n  }\n\n  /**\n   * @param buffer The buffer to read from.\n   * @return Returns the remaining bytes from the buffer copied to an array.\n   */\n  public static byte[] toArray(ByteBuffer buffer) {\n    byte[] dst = new byte[buffer.remaining()];\n    buffer.mark();\n    buffer.get(dst);\n    buffer.reset();\n    return dst;\n  }\n\n  /** Compute the length of the shared prefix between two byte sequences. 
*/\n  static int sharedPrefixLength(ByteBuffer a, int aStart, ByteBuffer b, int bStart) {\n    int i = 0;\n    final int max = Math.min(a.remaining() - aStart, b.remaining() - bStart);\n    aStart += a.position();\n    bStart += b.position();\n    while (i < max && a.get(aStart++) == b.get(bStart++)) {\n      i++;\n    }\n    return i;\n  }\n\n  /** Compute the length of the shared prefix between two byte sequences. */\n  static int sharedPrefixLength(ByteBuffer a, ByteBuffer b) {\n    return sharedPrefixLength(a, 0, b, 0);\n  }\n\n  /**\n   * Convert byte buffer's content into characters. The input buffer's bytes are not consumed (mark\n   * is set and reset).\n   */\n  public static CharBuffer bytesToChars(\n      CharsetDecoder decoder, ByteBuffer bytes, CharBuffer chars) {\n    assert decoder.malformedInputAction() == CodingErrorAction.REPORT;\n\n    chars = clearAndEnsureCapacity(chars, (int) (bytes.remaining() * decoder.maxCharsPerByte()));\n\n    bytes.mark();\n    decoder.reset();\n    CoderResult cr = decoder.decode(bytes, chars, true);\n    if (cr.isError()) {\n      bytes.reset();\n      try {\n        cr.throwException();\n      } catch (CharacterCodingException e) {\n        throw new RuntimeException(\n            \"Input bytes cannot be decoded to characters using encoding \"\n                + decoder.charset().name()\n                + \": \"\n                + Arrays.toString(toArray(bytes)),\n            e);\n      }\n    }\n\n    assert cr.isUnderflow(); // This should be guaranteed by ensuring max. capacity.\n    cr = decoder.flush(chars);\n    assert cr.isUnderflow();\n\n    chars.flip();\n    bytes.reset();\n\n    return chars;\n  }\n\n  /** Convert chars into bytes. 
*/\n  public static ByteBuffer charsToBytes(CharsetEncoder encoder, CharBuffer chars, ByteBuffer bytes)\n      throws UnmappableInputException {\n    assert encoder.malformedInputAction() == CodingErrorAction.REPORT;\n\n    bytes = clearAndEnsureCapacity(bytes, (int) (chars.remaining() * encoder.maxBytesPerChar()));\n\n    chars.mark();\n    encoder.reset();\n\n    CoderResult cr = encoder.encode(chars, bytes, true);\n    if (cr.isError()) {\n      chars.reset();\n      try {\n        cr.throwException();\n      } catch (CharacterCodingException e) {\n        // Report the offending input characters, not the partially written output bytes.\n        throw new UnmappableInputException(\n            \"Input cannot be encoded to bytes using encoding \"\n                + encoder.charset().name()\n                + \": \"\n                + toString(chars),\n            e);\n      }\n    }\n\n    assert cr.isUnderflow(); // This should be guaranteed by ensuring max. capacity.\n    cr = encoder.flush(bytes);\n    assert cr.isUnderflow();\n\n    bytes.flip();\n    chars.reset();\n\n    return bytes;\n  }\n}\n
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/Dictionary.java",
    "content": "package morfologik.stemming;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport morfologik.fsa.FSA;\n\n/**\n * A dictionary combines {@link FSA} automaton and {@link DictionaryMetadata} describing the way\n * terms are encoded in the automaton.\n *\n * <p>A dictionary consists of two files:\n *\n * <ul>\n *   <li>an actual compressed FSA file,\n *   <li>{@link DictionaryMetadata}, describing the way terms are encoded.\n * </ul>\n */\npublic final class Dictionary {\n  /** {@link FSA} automaton with the compiled dictionary data. */\n  public final FSA fsa;\n\n  /** Metadata associated with the dictionary. */\n  public final DictionaryMetadata metadata;\n\n  /**\n   * It is strongly recommended to use static methods in this class for reading dictionaries.\n   *\n   * @param fsa An instantiated {@link FSA} instance.\n   * @param metadata A map of attributes describing the compression format and other settings not\n   *     contained in the FSA automaton. 
For an explanation of available attributes and their\n   *     possible values, see {@link DictionaryMetadata}.\n   */\n  public Dictionary(FSA fsa, DictionaryMetadata metadata) {\n    this.fsa = fsa;\n    this.metadata = metadata;\n  }\n\n  /**\n   * Attempts to load a dictionary using the path to the FSA file and the expected metadata\n   * extension.\n   *\n   * @param location The location of the dictionary file (<code>*.dict</code>).\n   * @return An instantiated dictionary.\n   * @throws IOException if an I/O error occurs.\n   */\n  public static Dictionary read(Path location) throws IOException {\n    final Path metadata = DictionaryMetadata.getExpectedMetadataLocation(location);\n\n    try (InputStream fsaStream = Files.newInputStream(location);\n        InputStream metadataStream = Files.newInputStream(metadata)) {\n      return read(fsaStream, metadataStream);\n    }\n  }\n\n  /**\n   * Attempts to load a dictionary using the URL to the FSA file and the expected metadata\n   * extension.\n   *\n   * @param dictURL The URL pointing to the dictionary file (<code>*.dict</code>).\n   * @return An instantiated dictionary.\n   * @throws IOException if an I/O error occurs.\n   */\n  public static Dictionary read(URL dictURL) throws IOException {\n    final URL expectedMetadataURL;\n    try {\n      String external = dictURL.toExternalForm();\n      expectedMetadataURL = new URL(DictionaryMetadata.getExpectedMetadataFileName(external));\n    } catch (MalformedURLException e) {\n      throw new IOException(\"Couldn't construct relative feature map URL for: \" + dictURL, e);\n    }\n\n    try (InputStream fsaStream = dictURL.openStream();\n        InputStream metadataStream = expectedMetadataURL.openStream()) {\n      return read(fsaStream, metadataStream);\n    }\n  }\n\n  /**\n   * Attempts to load a dictionary from opened streams of FSA dictionary data and associated\n   * metadata. 
Input streams are not closed automatically.\n   *\n   * @param fsaStream The stream with FSA data\n   * @param metadataStream The stream with metadata\n   * @return Returns an instantiated {@link Dictionary}.\n   * @throws IOException if an I/O error occurs.\n   */\n  public static Dictionary read(InputStream fsaStream, InputStream metadataStream)\n      throws IOException {\n    return new Dictionary(FSA.read(fsaStream), DictionaryMetadata.read(metadataStream));\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/DictionaryAttribute.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.charset.Charset;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Locale;\nimport java.util.Map;\n\n/** Attributes applying to {@link Dictionary} and {@link DictionaryMetadata}. */\npublic enum DictionaryAttribute {\n  /** Logical fields separator inside the FSA. */\n  SEPARATOR(\"fsa.dict.separator\") {\n    @Override\n    public Character fromString(String separator) {\n      if (separator == null || separator.length() != 1) {\n        throw new IllegalArgumentException(\n            \"Attribute \" + propertyName + \" must be a single character.\");\n      }\n\n      char charValue = separator.charAt(0);\n      if (Character.isHighSurrogate(charValue) || Character.isLowSurrogate(charValue)) {\n        throw new IllegalArgumentException(\n            \"Field separator character cannot be part of a surrogate pair: \" + separator);\n      }\n\n      return charValue;\n    }\n  },\n\n  /** Character to byte encoding used for strings inside the FSA. */\n  ENCODING(\"fsa.dict.encoding\") {\n    @Override\n    public Charset fromString(String charsetName) {\n      return Charset.forName(charsetName);\n    }\n  },\n\n  /** If the FSA dictionary includes frequency data. */\n  FREQUENCY_INCLUDED(\"fsa.dict.frequency-included\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** If the spelling dictionary is supposed to ignore words containing digits */\n  IGNORE_NUMBERS(\"fsa.dict.speller.ignore-numbers\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** If the spelling dictionary is supposed to ignore punctuation. 
*/\n  IGNORE_PUNCTUATION(\"fsa.dict.speller.ignore-punctuation\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** If the spelling dictionary is supposed to ignore CamelCase words. */\n  IGNORE_CAMEL_CASE(\"fsa.dict.speller.ignore-camel-case\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** If the spelling dictionary is supposed to ignore ALL UPPERCASE words. */\n  IGNORE_ALL_UPPERCASE(\"fsa.dict.speller.ignore-all-uppercase\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /**\n   * If the spelling dictionary is supposed to ignore diacritics, so that 'a' would be treated as\n   * equivalent to 'ą'.\n   */\n  IGNORE_DIACRITICS(\"fsa.dict.speller.ignore-diacritics\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** If the spelling dictionary is supposed to treat upper and lower case as equivalent. */\n  CONVERT_CASE(\"fsa.dict.speller.convert-case\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** If the spelling dictionary is supposed to split runOnWords. */\n  RUN_ON_WORDS(\"fsa.dict.speller.runon-words\") {\n    @Override\n    public Boolean fromString(String value) {\n      return booleanValue(value);\n    }\n  },\n\n  /** Locale associated with the dictionary. */\n  LOCALE(\"fsa.dict.speller.locale\") {\n    @Override\n    public Locale fromString(String value) {\n      return new Locale(value);\n    }\n  },\n\n  /** Type of the sequence encoder used for the dictionary. 
*/\n  ENCODER(\"fsa.dict.encoder\") {\n    @Override\n    public EncoderType fromString(String value) {\n      try {\n        return EncoderType.valueOf(value.trim().toUpperCase(Locale.ROOT));\n      } catch (IllegalArgumentException e) {\n        throw new IllegalArgumentException(\n            \"Invalid encoder name '\"\n                + value.trim()\n                + \"', only these coders are valid: \"\n                + Arrays.toString(EncoderType.values()));\n      }\n    }\n  },\n\n  /**\n   * Input conversion pairs to replace non-standard characters before search in a speller\n   * dictionary. For example, common ligatures can be replaced here.\n   */\n  INPUT_CONVERSION(\"fsa.dict.input-conversion\") {\n    @Override\n    public LinkedHashMap<String, String> fromString(String value) throws IllegalArgumentException {\n      LinkedHashMap<String, String> conversionPairs = new LinkedHashMap<>();\n      final String[] replacements = value.split(\",\\\\s*\");\n      for (final String stringPair : replacements) {\n        final String[] twoStrings = stringPair.trim().split(\" \");\n        if (twoStrings.length == 2) {\n          if (!conversionPairs.containsKey(twoStrings[0])) {\n            conversionPairs.put(twoStrings[0], twoStrings[1]);\n          } else {\n            throw new IllegalArgumentException(\n                \"Input conversion cannot specify different values for the same input string: \"\n                    + twoStrings[0]);\n          }\n        } else {\n          throw new IllegalArgumentException(\n              \"Attribute \" + propertyName + \" is not in the proper format: \" + value);\n        }\n      }\n      return conversionPairs;\n    }\n  },\n\n  /**\n   * Output conversion pairs to replace non-standard characters before search in a speller\n   * dictionary. 
For example, standard characters can be replaced here with ligatures.\n   *\n   * <p>Useful for dictionaries that do have certain standards imposed.\n   */\n  OUTPUT_CONVERSION(\"fsa.dict.output-conversion\") {\n    @Override\n    public LinkedHashMap<String, String> fromString(String value) throws IllegalArgumentException {\n      LinkedHashMap<String, String> conversionPairs = new LinkedHashMap<String, String>();\n      final String[] replacements = value.split(\",\\\\s*\");\n      for (final String stringPair : replacements) {\n        final String[] twoStrings = stringPair.trim().split(\" \");\n        if (twoStrings.length == 2) {\n          if (!conversionPairs.containsKey(twoStrings[0])) {\n            conversionPairs.put(twoStrings[0], twoStrings[1]);\n          } else {\n            throw new IllegalArgumentException(\n                \"Output conversion cannot specify different values for the same input string: \"\n                    + twoStrings[0]);\n          }\n        } else {\n          throw new IllegalArgumentException(\n              \"Attribute \" + propertyName + \" is not in the proper format: \" + value);\n        }\n      }\n      return conversionPairs;\n    }\n  },\n\n  /**\n   * Replacement pairs for non-obvious candidate search in a speller dictionary. 
For example, Polish\n   * <code>rz</code> is phonetically equivalent to <code>ż</code>, and this may be specified here to\n   * allow looking for replacements of <code>rz</code> with <code>ż</code> and vice versa.\n   */\n  REPLACEMENT_PAIRS(\"fsa.dict.speller.replacement-pairs\") {\n    @Override\n    public LinkedHashMap<String, List<String>> fromString(String value)\n        throws IllegalArgumentException {\n      LinkedHashMap<String, List<String>> replacementPairs = new LinkedHashMap<>();\n      final String[] replacements = value.split(\",\\\\s*\");\n      for (final String stringPair : replacements) {\n        final String[] twoStrings = stringPair.trim().split(\" \");\n        if (twoStrings.length == 2) {\n          // _ represents a space (hunspell REP convention)\n          String key = twoStrings[0].replace('_', ' ');\n          String val = twoStrings[1].replace('_', ' ');\n          if (!replacementPairs.containsKey(key)) {\n            List<String> strList = new ArrayList<String>();\n            strList.add(val);\n            replacementPairs.put(key, strList);\n          } else {\n            replacementPairs.get(key).add(val);\n          }\n        } else {\n          throw new IllegalArgumentException(\n              \"Attribute \" + propertyName + \" is not in the proper format: \" + value);\n        }\n      }\n      return replacementPairs;\n    }\n  },\n\n  /**\n   * Equivalent characters (treated similarly as equivalent chars with and without diacritics). 
For\n   * example, Polish <code>ł</code> can be specified as equivalent to <code>l</code>.\n   *\n   * <p>This implements a feature similar to hunspell MAP in the affix file.\n   */\n  EQUIVALENT_CHARS(\"fsa.dict.speller.equivalent-chars\") {\n    @Override\n    public LinkedHashMap<Character, List<Character>> fromString(String value)\n        throws IllegalArgumentException {\n      LinkedHashMap<Character, List<Character>> equivalentCharacters = new LinkedHashMap<>();\n      final String[] eqChars = value.split(\",\\\\s*\");\n      for (final String characterPair : eqChars) {\n        final String[] twoChars = characterPair.trim().split(\" \");\n        if (twoChars.length == 2 && twoChars[0].length() == 1 && twoChars[1].length() == 1) {\n          char fromChar = twoChars[0].charAt(0);\n          char toChar = twoChars[1].charAt(0);\n          if (!equivalentCharacters.containsKey(fromChar)) {\n            List<Character> chList = new ArrayList<Character>();\n            equivalentCharacters.put(fromChar, chList);\n          }\n          equivalentCharacters.get(fromChar).add(toChar);\n        } else {\n          throw new IllegalArgumentException(\n              \"Attribute \" + propertyName + \" is not in the proper format: \" + value);\n        }\n      }\n      return equivalentCharacters;\n    }\n  },\n\n  /** Dictionary license attribute. */\n  LICENSE(\"fsa.dict.license\"),\n\n  /** Dictionary author. */\n  AUTHOR(\"fsa.dict.author\"),\n\n  /** Dictionary creation date. */\n  CREATION_DATE(\"fsa.dict.created\");\n\n  /** Property name for this attribute. 
*/\n  public final String propertyName;\n\n  /**\n   * Converts a string to the given attribute's value.\n   *\n   * @param value The value to convert to an attribute value.\n   * @return Returns the attribute's value converted from a string.\n   * @throws IllegalArgumentException If the input string cannot be converted to the attribute's\n   *     value.\n   */\n  public Object fromString(String value) throws IllegalArgumentException {\n    return value;\n  }\n\n  /**\n   * @param propertyName The property of a {@link DictionaryAttribute}.\n   * @return Return a {@link DictionaryAttribute} associated with a given {@link #propertyName}.\n   */\n  public static DictionaryAttribute fromPropertyName(String propertyName) {\n    DictionaryAttribute value = attrsByPropertyName.get(propertyName);\n    if (value == null) {\n      throw new IllegalArgumentException(\"No attribute for property: \" + propertyName);\n    }\n    return value;\n  }\n\n  private static final Map<String, DictionaryAttribute> attrsByPropertyName;\n\n  static {\n    attrsByPropertyName = new HashMap<String, DictionaryAttribute>();\n    for (DictionaryAttribute attr : DictionaryAttribute.values()) {\n      if (attrsByPropertyName.put(attr.propertyName, attr) != null) {\n        throw new RuntimeException(\"Duplicate property key for: \" + attr);\n      }\n    }\n  }\n\n  /** Private enum instance constructor. */\n  private DictionaryAttribute(String propertyName) {\n    this.propertyName = propertyName;\n  }\n\n  private static Boolean booleanValue(String value) {\n    value = value.toLowerCase(Locale.ROOT);\n    if (\"true\".equals(value) || \"yes\".equals(value) || \"on\".equals(value)) {\n      return Boolean.TRUE;\n    }\n    if (\"false\".equals(value) || \"no\".equals(value) || \"off\".equals(value)) {\n      return Boolean.FALSE;\n    }\n    throw new IllegalArgumentException(\"Not a boolean value: \" + value);\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/DictionaryIterator.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.CharsetDecoder;\nimport java.util.Iterator;\n\n/**\n * An iterator over {@link WordData} entries of a {@link Dictionary}. The stems can be decoded from\n * compressed format or the compressed form can be preserved.\n */\npublic final class DictionaryIterator implements Iterator<WordData> {\n  private final CharsetDecoder decoder;\n  private final Iterator<ByteBuffer> entriesIter;\n  private final WordData entry;\n  private final byte separator;\n  private final boolean decodeStems;\n\n  private ByteBuffer inflectedBuffer = ByteBuffer.allocate(0);\n  private CharBuffer inflectedCharBuffer = CharBuffer.allocate(0);\n  private ByteBuffer temp = ByteBuffer.allocate(0);\n  private final ISequenceEncoder sequenceEncoder;\n\n  public DictionaryIterator(Dictionary dictionary, CharsetDecoder decoder, boolean decodeStems) {\n    this.entriesIter = dictionary.fsa.iterator();\n    this.separator = dictionary.metadata.getSeparator();\n    this.sequenceEncoder = dictionary.metadata.getSequenceEncoderType().get();\n    this.decoder = decoder;\n    this.entry = new WordData(decoder);\n    this.decodeStems = decodeStems;\n  }\n\n  public boolean hasNext() {\n    return entriesIter.hasNext();\n  }\n\n  public WordData next() {\n    final ByteBuffer entryBuffer = entriesIter.next();\n\n    /*\n     * Entries are typically: inflected<SEP>codedBase<SEP>tag so try to find this split.\n     */\n    byte[] ba = entryBuffer.array();\n    int bbSize = entryBuffer.remaining();\n\n    int sepPos;\n    for (sepPos = 0; sepPos < bbSize; sepPos++) {\n      if (ba[sepPos] == separator) {\n        break;\n      }\n    }\n\n    if (sepPos == bbSize) {\n      throw new RuntimeException(\"Invalid dictionary \" + \"entry format (missing separator).\");\n    }\n\n    inflectedBuffer = BufferUtils.clearAndEnsureCapacity(inflectedBuffer, sepPos);\n    inflectedBuffer.put(ba, 
0, sepPos);\n    inflectedBuffer.flip();\n\n    inflectedCharBuffer = BufferUtils.bytesToChars(decoder, inflectedBuffer, inflectedCharBuffer);\n    entry.update(inflectedBuffer, inflectedCharBuffer);\n\n    temp = BufferUtils.clearAndEnsureCapacity(temp, bbSize - sepPos);\n    sepPos++;\n    temp.put(ba, sepPos, bbSize - sepPos);\n    temp.flip();\n\n    ba = temp.array();\n    bbSize = temp.remaining();\n\n    /*\n     * Find the next separator byte's position splitting word form and tag.\n     */\n    assert sequenceEncoder.prefixBytes() <= bbSize : sequenceEncoder.getClass() + \" >? \" + bbSize;\n    sepPos = sequenceEncoder.prefixBytes();\n    for (; sepPos < bbSize; sepPos++) {\n      if (ba[sepPos] == separator) break;\n    }\n\n    /*\n     * Decode the stem into stem buffer.\n     */\n    if (decodeStems) {\n      entry.stemBuffer =\n          sequenceEncoder.decode(entry.stemBuffer, inflectedBuffer, ByteBuffer.wrap(ba, 0, sepPos));\n    } else {\n      entry.stemBuffer = BufferUtils.clearAndEnsureCapacity(entry.stemBuffer, sepPos);\n      entry.stemBuffer.put(ba, 0, sepPos);\n      entry.stemBuffer.flip();\n    }\n\n    // Skip separator character, if present.\n    if (sepPos + 1 <= bbSize) {\n      sepPos++;\n    }\n\n    /*\n     * Decode the tag data.\n     */\n    entry.tagBuffer = BufferUtils.clearAndEnsureCapacity(entry.tagBuffer, bbSize - sepPos);\n    entry.tagBuffer.put(ba, sepPos, bbSize - sepPos);\n    entry.tagBuffer.flip();\n\n    return entry;\n  }\n\n  public void remove() {\n    throw new UnsupportedOperationException();\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/DictionaryLookup.java",
    "content": "package morfologik.stemming;\n\nimport static morfologik.fsa.MatchResult.SEQUENCE_IS_A_PREFIX;\n\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.CharsetDecoder;\nimport java.nio.charset.CharsetEncoder;\nimport java.util.Arrays;\nimport java.util.Iterator;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Map;\nimport morfologik.fsa.ByteSequenceIterator;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSATraversal;\nimport morfologik.fsa.MatchResult;\n\n/**\n * This class implements a dictionary lookup of an inflected word over a dictionary previously\n * compiled using the <code>dict_compile</code> tool.\n */\npublic final class DictionaryLookup implements IStemmer, Iterable<WordData> {\n  /** An FSA used for lookups. */\n  private final FSATraversal matcher;\n\n  /** An iterator for walking along the final states of {@link #fsa}. */\n  private final ByteSequenceIterator finalStatesIterator;\n\n  /** FSA's root node. */\n  private final int rootNode;\n\n  /** Expand buffers and arrays by this constant. */\n  private static final int EXPAND_SIZE = 10;\n\n  /** Private internal array of reusable word data objects. */\n  private WordData[] forms = new WordData[0];\n\n  /** A \"view\" over an array implementing */\n  private final ArrayViewList<WordData> formsList =\n      new ArrayViewList<WordData>(forms, 0, forms.length);\n\n  /**\n   * Features of the compiled dictionary.\n   *\n   * @see DictionaryMetadata\n   */\n  private final DictionaryMetadata dictionaryMetadata;\n\n  /** Charset encoder for the FSA. */\n  private final CharsetEncoder encoder;\n\n  /** Charset decoder for the FSA. */\n  private final CharsetDecoder decoder;\n\n  /** The FSA we are using. */\n  private final FSA fsa;\n\n  /**\n   * @see #getSeparatorChar()\n   */\n  private final char separatorChar;\n\n  /** Internal reusable buffer for encoding words into byte arrays using {@link #encoder}. 
*/\n  private ByteBuffer byteBuffer = ByteBuffer.allocate(0);\n\n  /** Internal reusable buffer for encoding words into byte arrays using {@link #encoder}. */\n  private CharBuffer charBuffer = CharBuffer.allocate(0);\n\n  /** Reusable match result. */\n  private final MatchResult matchResult = new MatchResult();\n\n  /** The {@link Dictionary} this lookup is using. */\n  private final Dictionary dictionary;\n\n  private final ISequenceEncoder sequenceEncoder;\n\n  /**\n   * Creates a new object of this class using the given FSA for word lookups and encoding for\n   * converting characters to bytes.\n   *\n   * @param dictionary The dictionary to use for lookups.\n   * @throws IllegalArgumentException if FSA's root node cannot be acquired (dictionary is empty).\n   */\n  public DictionaryLookup(Dictionary dictionary) throws IllegalArgumentException {\n    this.dictionary = dictionary;\n    this.dictionaryMetadata = dictionary.metadata;\n    this.sequenceEncoder = dictionary.metadata.getSequenceEncoderType().get();\n    this.rootNode = dictionary.fsa.getRootNode();\n    this.fsa = dictionary.fsa;\n    this.matcher = new FSATraversal(fsa);\n    this.finalStatesIterator = new ByteSequenceIterator(fsa, fsa.getRootNode());\n\n    if (dictionaryMetadata == null) {\n      throw new IllegalArgumentException(\"Dictionary metadata must not be null.\");\n    }\n\n    decoder = dictionary.metadata.getDecoder();\n    encoder = dictionary.metadata.getEncoder();\n    separatorChar = dictionary.metadata.getSeparatorAsChar();\n  }\n\n  /**\n   * Searches the automaton for a symbol sequence equal to <code>word</code>, followed by a\n   * separator. 
The result is a stem (decompressed accordingly to the dictionary's specification)\n   * and an optional tag data.\n   */\n  @Override\n  public List<WordData> lookup(CharSequence word) {\n    final byte separator = dictionaryMetadata.getSeparator();\n    final int prefixBytes = sequenceEncoder.prefixBytes();\n\n    if (!dictionaryMetadata.getInputConversionPairs().isEmpty()) {\n      word = applyReplacements(word, dictionaryMetadata.getInputConversionPairs());\n    }\n\n    // Reset the output list to zero length.\n    formsList.wrap(forms, 0, 0);\n\n    // Encode word characters into bytes in the same encoding as the FSA's.\n    charBuffer = BufferUtils.clearAndEnsureCapacity(charBuffer, word.length());\n    for (int i = 0; i < word.length(); i++) {\n      char chr = word.charAt(i);\n      if (chr == separatorChar) {\n        // No valid input can contain the separator.\n        return formsList;\n      }\n      charBuffer.put(chr);\n    }\n    charBuffer.flip();\n    try {\n      byteBuffer = BufferUtils.charsToBytes(encoder, charBuffer, byteBuffer);\n    } catch (UnmappableInputException e) {\n      // This should be a rare occurrence, but if it happens it means there is no way\n      // the dictionary can contain the input word.\n      return formsList;\n    }\n\n    // Try to find a partial match in the dictionary.\n    final MatchResult match =\n        matcher.match(matchResult, byteBuffer.array(), 0, byteBuffer.remaining(), rootNode);\n\n    if (match.kind == SEQUENCE_IS_A_PREFIX) {\n      /*\n       * The entire sequence exists in the dictionary. A separator should\n       * be the next symbol.\n       */\n      final int arc = fsa.getArc(match.node, separator);\n\n      /*\n       * The situation when the arc points to a final node should NEVER\n       * happen. After all, we want the word to have SOME base form.\n       */\n      if (arc != 0 && !fsa.isArcFinal(arc)) {\n        // There is such a word in the dictionary. 
Return its base forms.\n        int formsCount = 0;\n\n        finalStatesIterator.restartFrom(fsa.getEndNode(arc));\n        while (finalStatesIterator.hasNext()) {\n          final ByteBuffer bb = finalStatesIterator.next();\n          final byte[] ba = bb.array();\n          final int bbSize = bb.remaining();\n\n          if (formsCount >= forms.length) {\n            forms = Arrays.copyOf(forms, forms.length + EXPAND_SIZE);\n            for (int k = 0; k < forms.length; k++) {\n              if (forms[k] == null) forms[k] = new WordData(decoder);\n            }\n          }\n\n          /*\n           * Now, expand the prefix/ suffix 'compression' and store\n           * the base form.\n           */\n          final WordData wordData = forms[formsCount++];\n          if (dictionaryMetadata.getOutputConversionPairs().isEmpty()) {\n            wordData.update(byteBuffer, word);\n          } else {\n            wordData.update(\n                byteBuffer, applyReplacements(word, dictionaryMetadata.getOutputConversionPairs()));\n          }\n\n          /*\n           * Find the separator byte's position splitting the inflection instructions\n           * from the tag.\n           */\n          assert prefixBytes <= bbSize : sequenceEncoder.getClass() + \" >? 
\" + bbSize;\n          int sepPos;\n          for (sepPos = prefixBytes; sepPos < bbSize; sepPos++) {\n            if (ba[sepPos] == separator) {\n              break;\n            }\n          }\n\n          /*\n           * Decode the stem into stem buffer.\n           */\n          wordData.stemBuffer =\n              sequenceEncoder.decode(\n                  wordData.stemBuffer, byteBuffer, ByteBuffer.wrap(ba, 0, sepPos));\n\n          // Skip separator character.\n          sepPos++;\n\n          /*\n           * Decode the tag data.\n           */\n          final int tagSize = bbSize - sepPos;\n          if (tagSize > 0) {\n            wordData.tagBuffer = BufferUtils.clearAndEnsureCapacity(wordData.tagBuffer, tagSize);\n            wordData.tagBuffer.put(ba, sepPos, tagSize);\n            wordData.tagBuffer.flip();\n          }\n        }\n\n        formsList.wrap(forms, 0, formsCount);\n      }\n    } else {\n      /*\n       * this case is somewhat confusing: we should have hit the separator\n       * first... 
I don't really know how to deal with it at the time\n       * being.\n       */\n    }\n    return formsList;\n  }\n\n  /**\n   * Apply partial string replacements from a given map.\n   *\n   * <p>Useful if the word needs to be normalized somehow (e.g., ligatures, apostrophes and such).\n   *\n   * @param word The word to apply replacements to.\n   * @param replacements A map of replacements (from-&gt;to).\n   * @return A new string with all replacements applied.\n   */\n  public static String applyReplacements(\n      CharSequence word, LinkedHashMap<String, String> replacements) {\n    // Quite horrible from a performance point of view; this should really be a transducer.\n    StringBuilder sb = new StringBuilder(word);\n    for (final Map.Entry<String, String> e : replacements.entrySet()) {\n      String key = e.getKey();\n      int index = sb.indexOf(key);\n      while (index != -1) {\n        sb.replace(index, index + key.length(), e.getValue());\n        // Resume the search after the inserted replacement so it is never rescanned\n        // (prevents infinite loops when the replacement contains the key, e.g. s -> ss).\n        index = sb.indexOf(key, index + e.getValue().length());\n      }\n    }\n    return sb.toString();\n  }\n\n  /**\n   * Return an iterator over all {@link WordData} entries available in the embedded {@link\n   * Dictionary}.\n   */\n  @Override\n  public Iterator<WordData> iterator() {\n    return new DictionaryIterator(dictionary, decoder, true);\n  }\n\n  /**\n   * @return Return the {@link Dictionary} used by this object.\n   */\n  public Dictionary getDictionary() {\n    return dictionary;\n  }\n\n  /**\n   * @return Returns the logical separator character splitting inflected form, lemma correction\n   *     token and a tag. Note that this character is a best-effort conversion from a byte in {@link\n   *     DictionaryMetadata#separator} and may not be valid in the target encoding (although this is\n   *     highly unlikely).\n   */\n  public char getSeparatorChar() {\n    return separatorChar;\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/DictionaryMetadata.java",
    "content": "package morfologik.stemming;\n\nimport static morfologik.stemming.DictionaryAttribute.*;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.io.Writer;\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.CharacterCodingException;\nimport java.nio.charset.Charset;\nimport java.nio.charset.CharsetDecoder;\nimport java.nio.charset.CharsetEncoder;\nimport java.nio.charset.CodingErrorAction;\nimport java.nio.charset.UnsupportedCharsetException;\nimport java.nio.file.Path;\nimport java.util.Collections;\nimport java.util.EnumMap;\nimport java.util.EnumSet;\nimport java.util.Enumeration;\nimport java.util.HashMap;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Locale;\nimport java.util.Map;\nimport java.util.Properties;\n\n/** Description of attributes, their types and default values. */\npublic final class DictionaryMetadata {\n  /** Default attribute values. */\n  private static Map<DictionaryAttribute, String> DEFAULT_ATTRIBUTES =\n      new DictionaryMetadataBuilder()\n          .frequencyIncluded(false)\n          .ignorePunctuation()\n          .ignoreNumbers()\n          .ignoreCamelCase()\n          .ignoreAllUppercase()\n          .ignoreDiacritics()\n          .convertCase()\n          .supportRunOnWords()\n          .toMap();\n\n  /** Required attributes. */\n  private static EnumSet<DictionaryAttribute> REQUIRED_ATTRIBUTES =\n      EnumSet.of(SEPARATOR, ENCODER, ENCODING);\n\n  /**\n   * A separator character between fields (stem, lemma, form). The character must be within byte\n   * range (FSA uses bytes internally).\n   */\n  private byte separator;\n\n  private char separatorChar;\n\n  /** Encoding used for converting bytes to characters and vice versa. 
*/\n  private String encoding;\n\n  private Charset charset;\n  private Locale locale = Locale.getDefault();\n\n  /** Replacement pairs for non-obvious candidate search in a speller dictionary. */\n  private LinkedHashMap<String, List<String>> replacementPairs = new LinkedHashMap<>();\n\n  /** Conversion pairs for input conversion, for example to replace ligatures. */\n  private LinkedHashMap<String, String> inputConversion = new LinkedHashMap<>();\n\n  /** Conversion pairs for output conversion, for example to replace ligatures. */\n  private LinkedHashMap<String, String> outputConversion = new LinkedHashMap<>();\n\n  /**\n   * Equivalent characters (treated similarly as equivalent chars with and without diacritics). For\n   * example, Polish <code>ł</code> can be specified as equivalent to <code>l</code>.\n   *\n   * <p>This implements a feature similar to hunspell MAP in the affix file.\n   */\n  private LinkedHashMap<Character, List<Character>> equivalentChars = new LinkedHashMap<>();\n\n  /** All attributes. */\n  private final EnumMap<DictionaryAttribute, String> attributes;\n\n  /** All \"enabled\" boolean attributes. */\n  private final EnumMap<DictionaryAttribute, Boolean> boolAttributes;\n\n  /** Sequence encoder. */\n  private EncoderType encoderType;\n\n  /** Expected metadata file extension. 
*/\n  public static final String METADATA_FILE_EXTENSION = \"info\";\n\n  /**\n   * @return Return all metadata attributes.\n   */\n  public Map<DictionaryAttribute, String> getAttributes() {\n    return Collections.unmodifiableMap(attributes);\n  }\n\n  // Cached attrs.\n  public String getEncoding() {\n    return encoding;\n  }\n\n  public byte getSeparator() {\n    return separator;\n  }\n\n  public Locale getLocale() {\n    return locale;\n  }\n\n  public LinkedHashMap<String, String> getInputConversionPairs() {\n    return inputConversion;\n  }\n\n  public LinkedHashMap<String, String> getOutputConversionPairs() {\n    return outputConversion;\n  }\n\n  public LinkedHashMap<String, List<String>> getReplacementPairs() {\n    return replacementPairs;\n  }\n\n  public LinkedHashMap<Character, List<Character>> getEquivalentChars() {\n    return equivalentChars;\n  }\n\n  // Dynamically fetched.\n  public boolean isFrequencyIncluded() {\n    return boolAttributes.get(FREQUENCY_INCLUDED);\n  }\n\n  public boolean isIgnoringPunctuation() {\n    return boolAttributes.get(IGNORE_PUNCTUATION);\n  }\n\n  public boolean isIgnoringNumbers() {\n    return boolAttributes.get(IGNORE_NUMBERS);\n  }\n\n  public boolean isIgnoringCamelCase() {\n    return boolAttributes.get(IGNORE_CAMEL_CASE);\n  }\n\n  public boolean isIgnoringAllUppercase() {\n    return boolAttributes.get(IGNORE_ALL_UPPERCASE);\n  }\n\n  public boolean isIgnoringDiacritics() {\n    return boolAttributes.get(IGNORE_DIACRITICS);\n  }\n\n  public boolean isConvertingCase() {\n    return boolAttributes.get(CONVERT_CASE);\n  }\n\n  public boolean isSupportingRunOnWords() {\n    return boolAttributes.get(RUN_ON_WORDS);\n  }\n\n  /**\n   * Create an instance from an attribute map.\n   *\n   * @param attrs A set of {@link DictionaryAttribute} keys and their associated values.\n   * @see DictionaryMetadataBuilder\n   */\n  public DictionaryMetadata(Map<DictionaryAttribute, String> attrs) {\n    this.boolAttributes = 
new EnumMap<DictionaryAttribute, Boolean>(DictionaryAttribute.class);\n    this.attributes = new EnumMap<DictionaryAttribute, String>(DictionaryAttribute.class);\n    this.attributes.putAll(attrs);\n\n    EnumMap<DictionaryAttribute, String> attributeMap =\n        new EnumMap<DictionaryAttribute, String>(DEFAULT_ATTRIBUTES);\n    attributeMap.putAll(attrs);\n\n    // Convert some attrs from the map to local fields for performance reasons.\n    EnumSet<DictionaryAttribute> requiredAttributes = EnumSet.copyOf(REQUIRED_ATTRIBUTES);\n\n    for (Map.Entry<DictionaryAttribute, String> e : attributeMap.entrySet()) {\n      requiredAttributes.remove(e.getKey());\n\n      // Run validation and conversion on all of them.\n      Object value = e.getKey().fromString(e.getValue());\n      switch (e.getKey()) {\n        case ENCODING:\n          this.encoding = e.getValue();\n          if (!Charset.isSupported(encoding)) {\n            throw new IllegalArgumentException(\"Encoding not supported on this JVM: \" + encoding);\n          }\n          this.charset = (Charset) value;\n          break;\n\n        case SEPARATOR:\n          this.separatorChar = (Character) value;\n          break;\n\n        case LOCALE:\n          this.locale = (Locale) value;\n          break;\n\n        case ENCODER:\n          this.encoderType = (EncoderType) value;\n          break;\n\n        case INPUT_CONVERSION:\n          {\n            @SuppressWarnings(\"unchecked\")\n            LinkedHashMap<String, String> gvalue = (LinkedHashMap<String, String>) value;\n            this.inputConversion = gvalue;\n          }\n          break;\n\n        case OUTPUT_CONVERSION:\n          {\n            @SuppressWarnings(\"unchecked\")\n            LinkedHashMap<String, String> gvalue = (LinkedHashMap<String, String>) value;\n            this.outputConversion = gvalue;\n          }\n          break;\n\n        case REPLACEMENT_PAIRS:\n          {\n            @SuppressWarnings(\"unchecked\")\n            
LinkedHashMap<String, List<String>> gvalue =\n                (LinkedHashMap<String, List<String>>) value;\n            this.replacementPairs = gvalue;\n          }\n          break;\n\n        case EQUIVALENT_CHARS:\n          {\n            @SuppressWarnings(\"unchecked\")\n            LinkedHashMap<Character, List<Character>> gvalue =\n                (LinkedHashMap<Character, List<Character>>) value;\n            this.equivalentChars = gvalue;\n          }\n          break;\n\n        case IGNORE_PUNCTUATION:\n        case IGNORE_NUMBERS:\n        case IGNORE_CAMEL_CASE:\n        case IGNORE_ALL_UPPERCASE:\n        case IGNORE_DIACRITICS:\n        case CONVERT_CASE:\n        case RUN_ON_WORDS:\n        case FREQUENCY_INCLUDED:\n          this.boolAttributes.put(e.getKey(), (Boolean) value);\n          break;\n\n        case AUTHOR:\n        case LICENSE:\n        case CREATION_DATE:\n          // Just run validation.\n          e.getKey().fromString(e.getValue());\n          break;\n\n        default:\n          throw new RuntimeException(\n              \"Unexpected code path (attribute should be handled but is not): \" + e.getKey());\n      }\n    }\n\n    if (!requiredAttributes.isEmpty()) {\n      throw new IllegalArgumentException(\n          \"At least one of the required attributes was not provided: \"\n              + requiredAttributes.toString());\n    }\n\n    // Sanity check.\n    CharsetEncoder encoder = getEncoder();\n    try {\n      ByteBuffer encoded = encoder.encode(CharBuffer.wrap(new char[] {separatorChar}));\n      if (encoded.remaining() > 1) {\n        throw new IllegalArgumentException(\n            \"Separator character is not a single byte in encoding \"\n                + encoding\n                + \": \"\n                + separatorChar);\n      }\n      this.separator = encoded.get();\n    } catch (CharacterCodingException e) {\n      throw new IllegalArgumentException(\n          \"Separator character cannot be converted to a byte in 
\" + encoding + \": \" + separatorChar,\n          e);\n    }\n  }\n\n  /**\n   * @return Returns a new {@link CharsetDecoder} for the {@link #encoding}.\n   */\n  public CharsetDecoder getDecoder() {\n    try {\n      return charset\n          .newDecoder()\n          .onMalformedInput(CodingErrorAction.REPORT)\n          .onUnmappableCharacter(CodingErrorAction.REPORT);\n    } catch (UnsupportedCharsetException e) {\n      throw new RuntimeException(\"FSA's encoding charset is not supported: \" + encoding);\n    }\n  }\n\n  /**\n   * @return Returns a new {@link CharsetEncoder} for the {@link #encoding}.\n   */\n  public CharsetEncoder getEncoder() {\n    try {\n      return charset\n          .newEncoder()\n          .onMalformedInput(CodingErrorAction.REPORT)\n          .onUnmappableCharacter(CodingErrorAction.REPORT);\n    } catch (UnsupportedCharsetException e) {\n      throw new RuntimeException(\"FSA's encoding charset is not supported: \" + encoding);\n    }\n  }\n\n  /**\n   * @return Return sequence encoder type.\n   */\n  public EncoderType getSequenceEncoderType() {\n    return encoderType;\n  }\n\n  /**\n   * @return Returns the {@link #separator} byte converted to a single <code>char</code>.\n   * @throws RuntimeException if this conversion is for some reason impossible (the byte is a\n   *     surrogate pair, FSA's {@link #encoding} is not available).\n   */\n  public char getSeparatorAsChar() {\n    return separatorChar;\n  }\n\n  /**\n   * @return A shortcut returning {@link DictionaryMetadataBuilder}.\n   */\n  public static DictionaryMetadataBuilder builder() {\n    return new DictionaryMetadataBuilder();\n  }\n\n  /**\n   * Returns the expected name of the metadata file, based on the name of the dictionary file. 
The\n   * expected name is resolved by truncating any file extension of <code>name</code> and appending\n   * {@link DictionaryMetadata#METADATA_FILE_EXTENSION}.\n   *\n   * @param dictionaryFile The name of the dictionary (<code>*.dict</code>) file.\n   * @return Returns the expected name of the metadata file.\n   */\n  public static String getExpectedMetadataFileName(String dictionaryFile) {\n    final int dotIndex = dictionaryFile.lastIndexOf('.');\n    final String featuresName;\n    if (dotIndex >= 0) {\n      featuresName = dictionaryFile.substring(0, dotIndex) + \".\" + METADATA_FILE_EXTENSION;\n    } else {\n      featuresName = dictionaryFile + \".\" + METADATA_FILE_EXTENSION;\n    }\n\n    return featuresName;\n  }\n\n  /**\n   * @param dictionary The location of the dictionary file.\n   * @return Returns the expected location of a metadata file.\n   */\n  public static Path getExpectedMetadataLocation(Path dictionary) {\n    return dictionary.resolveSibling(\n        getExpectedMetadataFileName(dictionary.getFileName().toString()));\n  }\n\n  /**\n   * Read dictionary metadata from a property file (stream).\n   *\n   * @param metadataStream The stream with metadata.\n   * @return Returns {@link DictionaryMetadata} read from the stream (property file).\n   * @throws IOException Thrown if an I/O exception occurs.\n   */\n  public static DictionaryMetadata read(InputStream metadataStream) throws IOException {\n    Map<DictionaryAttribute, String> map = new HashMap<DictionaryAttribute, String>();\n    final Properties properties = new Properties();\n    properties.load(new InputStreamReader(metadataStream, \"UTF-8\"));\n\n    // Handle back-compatibility for encoder specification.\n    if (!properties.containsKey(DictionaryAttribute.ENCODER.propertyName)) {\n      boolean hasDeprecated =\n          properties.containsKey(\"fsa.dict.uses-suffixes\")\n              || properties.containsKey(\"fsa.dict.uses-infixes\")\n              || 
properties.containsKey(\"fsa.dict.uses-prefixes\");\n\n      boolean usesSuffixes =\n          Boolean.valueOf(properties.getProperty(\"fsa.dict.uses-suffixes\", \"true\"));\n      boolean usesPrefixes =\n          Boolean.valueOf(properties.getProperty(\"fsa.dict.uses-prefixes\", \"false\"));\n      boolean usesInfixes =\n          Boolean.valueOf(properties.getProperty(\"fsa.dict.uses-infixes\", \"false\"));\n\n      final EncoderType encoder;\n      if (usesInfixes) {\n        encoder = EncoderType.INFIX;\n      } else if (usesPrefixes) {\n        encoder = EncoderType.PREFIX;\n      } else if (usesSuffixes) {\n        encoder = EncoderType.SUFFIX;\n      } else {\n        encoder = EncoderType.NONE;\n      }\n\n      if (!hasDeprecated) {\n        throw new IOException(\n            \"Use an explicit \"\n                + DictionaryAttribute.ENCODER.propertyName\n                + \"=\"\n                + encoder.name()\n                + \" metadata key: \");\n      }\n\n      throw new IOException(\n          \"Deprecated encoder keys in metadata. Use \"\n              + DictionaryAttribute.ENCODER.propertyName\n              + \"=\"\n              + encoder.name());\n    }\n\n    for (Enumeration<?> e = properties.propertyNames(); e.hasMoreElements(); ) {\n      String key = (String) e.nextElement();\n      map.put(DictionaryAttribute.fromPropertyName(key), properties.getProperty(key));\n    }\n\n    return new DictionaryMetadata(map);\n  }\n\n  /**\n   * Write dictionary attributes (metadata).\n   *\n   * @param writer The writer to write to.\n   * @throws IOException Thrown when an I/O error occurs.\n   */\n  public void write(Writer writer) throws IOException {\n    final Properties properties = new Properties();\n\n    for (Map.Entry<DictionaryAttribute, String> e : getAttributes().entrySet()) {\n      properties.setProperty(e.getKey().propertyName, e.getValue());\n    }\n\n    properties.store(writer, \"# \" + getClass().getName());\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/DictionaryMetadataBuilder.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.charset.Charset;\nimport java.util.EnumMap;\nimport java.util.List;\nimport java.util.Locale;\nimport java.util.Map;\n\n/** Helper class to build {@link DictionaryMetadata} instances. */\npublic final class DictionaryMetadataBuilder {\n  private final EnumMap<DictionaryAttribute, String> attrs =\n      new EnumMap<>(DictionaryAttribute.class);\n\n  public DictionaryMetadataBuilder separator(char c) {\n    this.attrs.put(DictionaryAttribute.SEPARATOR, Character.toString(c));\n    return this;\n  }\n\n  public DictionaryMetadataBuilder encoding(Charset charset) {\n    return encoding(charset.name());\n  }\n\n  public DictionaryMetadataBuilder encoding(String charsetName) {\n    this.attrs.put(DictionaryAttribute.ENCODING, charsetName);\n    return this;\n  }\n\n  public DictionaryMetadataBuilder frequencyIncluded() {\n    return frequencyIncluded(true);\n  }\n\n  public DictionaryMetadataBuilder frequencyIncluded(boolean v) {\n    this.attrs.put(DictionaryAttribute.FREQUENCY_INCLUDED, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder ignorePunctuation() {\n    return ignorePunctuation(true);\n  }\n\n  public DictionaryMetadataBuilder ignorePunctuation(boolean v) {\n    this.attrs.put(DictionaryAttribute.IGNORE_PUNCTUATION, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder ignoreNumbers() {\n    return ignoreNumbers(true);\n  }\n\n  public DictionaryMetadataBuilder ignoreNumbers(boolean v) {\n    this.attrs.put(DictionaryAttribute.IGNORE_NUMBERS, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder ignoreCamelCase() {\n    return ignoreCamelCase(true);\n  }\n\n  public DictionaryMetadataBuilder ignoreCamelCase(boolean v) {\n    this.attrs.put(DictionaryAttribute.IGNORE_CAMEL_CASE, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder 
ignoreAllUppercase() {\n    return ignoreAllUppercase(true);\n  }\n\n  public DictionaryMetadataBuilder ignoreAllUppercase(boolean v) {\n    this.attrs.put(DictionaryAttribute.IGNORE_ALL_UPPERCASE, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder ignoreDiacritics() {\n    return ignoreDiacritics(true);\n  }\n\n  public DictionaryMetadataBuilder ignoreDiacritics(boolean v) {\n    this.attrs.put(DictionaryAttribute.IGNORE_DIACRITICS, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder convertCase() {\n    return convertCase(true);\n  }\n\n  public DictionaryMetadataBuilder convertCase(boolean v) {\n    this.attrs.put(DictionaryAttribute.CONVERT_CASE, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder supportRunOnWords() {\n    return supportRunOnWords(true);\n  }\n\n  public DictionaryMetadataBuilder supportRunOnWords(boolean v) {\n    this.attrs.put(DictionaryAttribute.RUN_ON_WORDS, Boolean.valueOf(v).toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder encoder(EncoderType type) {\n    this.attrs.put(DictionaryAttribute.ENCODER, type.name());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder locale(Locale locale) {\n    return locale(locale.toString());\n  }\n\n  public DictionaryMetadataBuilder locale(String localeName) {\n    this.attrs.put(DictionaryAttribute.LOCALE, localeName);\n    return this;\n  }\n\n  public DictionaryMetadataBuilder withReplacementPairs(\n      Map<String, List<String>> replacementPairs) {\n    StringBuilder builder = new StringBuilder();\n    for (Map.Entry<String, List<String>> e : replacementPairs.entrySet()) {\n      String k = e.getKey();\n      for (String v : e.getValue()) {\n        if (builder.length() > 0) builder.append(\", \");\n        builder.append(k).append(\" \").append(v);\n      }\n    }\n    this.attrs.put(DictionaryAttribute.REPLACEMENT_PAIRS, 
builder.toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder withEquivalentChars(\n      Map<Character, List<Character>> equivalentChars) {\n    StringBuilder builder = new StringBuilder();\n    for (Map.Entry<Character, List<Character>> e : equivalentChars.entrySet()) {\n      Character k = e.getKey();\n      for (Character v : e.getValue()) {\n        if (builder.length() > 0) builder.append(\", \");\n        builder.append(k).append(\" \").append(v);\n      }\n    }\n    this.attrs.put(DictionaryAttribute.EQUIVALENT_CHARS, builder.toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder withInputConversionPairs(Map<String, String> conversionPairs) {\n    StringBuilder builder = new StringBuilder();\n    for (Map.Entry<String, String> e : conversionPairs.entrySet()) {\n      String k = e.getKey();\n      if (builder.length() > 0) builder.append(\", \");\n      builder.append(k).append(\" \").append(conversionPairs.get(k));\n    }\n    this.attrs.put(DictionaryAttribute.INPUT_CONVERSION, builder.toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder withOutputConversionPairs(Map<String, String> conversionPairs) {\n    StringBuilder builder = new StringBuilder();\n    for (Map.Entry<String, String> e : conversionPairs.entrySet()) {\n      String k = e.getKey();\n      if (builder.length() > 0) builder.append(\", \");\n      builder.append(k).append(\" \").append(conversionPairs.get(k));\n    }\n    this.attrs.put(DictionaryAttribute.OUTPUT_CONVERSION, builder.toString());\n    return this;\n  }\n\n  public DictionaryMetadataBuilder author(String author) {\n    this.attrs.put(DictionaryAttribute.AUTHOR, author);\n    return this;\n  }\n\n  public DictionaryMetadataBuilder creationDate(String creationDate) {\n    this.attrs.put(DictionaryAttribute.CREATION_DATE, creationDate);\n    return this;\n  }\n\n  public DictionaryMetadataBuilder license(String license) {\n    this.attrs.put(DictionaryAttribute.LICENSE, 
license);\n    return this;\n  }\n\n  public DictionaryMetadata build() {\n    return new DictionaryMetadata(attrs);\n  }\n\n  public EnumMap<DictionaryAttribute, String> toMap() {\n    return new EnumMap<>(attrs);\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/EncoderType.java",
    "content": "package morfologik.stemming;\n\n/** Known {@link ISequenceEncoder}s. */\npublic enum EncoderType {\n  SUFFIX {\n    @Override\n    public ISequenceEncoder get() {\n      return new TrimSuffixEncoder();\n    }\n  },\n  PREFIX {\n    @Override\n    public ISequenceEncoder get() {\n      return new TrimPrefixAndSuffixEncoder();\n    }\n  },\n  INFIX {\n    @Override\n    public ISequenceEncoder get() {\n      return new TrimInfixAndSuffixEncoder();\n    }\n  },\n  NONE {\n    @Override\n    public ISequenceEncoder get() {\n      return new NoEncoder();\n    }\n  };\n\n  public abstract ISequenceEncoder get();\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/ISequenceEncoder.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\n\n/**\n * The logic of encoding one sequence of bytes relative to another sequence of bytes. The \"base\"\n * form and the \"derived\" form are typically the stem of a word and the inflected form of a word.\n *\n * <p>Derived form encoding helps in making the data for the automaton smaller and more repetitive\n * (which results in higher compression rates).\n *\n * <p>See example implementation for details.\n */\npublic interface ISequenceEncoder {\n  /**\n   * Encodes <code>target</code> relative to <code>source</code>, optionally reusing the provided\n   * {@link ByteBuffer}.\n   *\n   * @param reuse Reuses the provided {@link ByteBuffer} or allocates a new one if there is not\n   *     enough remaining space.\n   * @param source The source byte sequence.\n   * @param target The target byte sequence to encode relative to <code>source</code>\n   * @return Returns the {@link ByteBuffer} with encoded <code>target</code>.\n   */\n  public ByteBuffer encode(ByteBuffer reuse, ByteBuffer source, ByteBuffer target);\n\n  /**\n   * Decodes <code>encoded</code> relative to <code>source</code>, optionally reusing the provided\n   * {@link ByteBuffer}.\n   *\n   * @param reuse Reuses the provided {@link ByteBuffer} or allocates a new one if there is not\n   *     enough remaining space.\n   * @param source The source byte sequence.\n   * @param encoded The {@linkplain #encode previously encoded} byte sequence.\n   * @return Returns the {@link ByteBuffer} with decoded <code>target</code>.\n   */\n  public ByteBuffer decode(ByteBuffer reuse, ByteBuffer source, ByteBuffer encoded);\n\n  /**\n   * The number of encoded form's prefix bytes that should be ignored (needed for separator lookup).\n   * An ugly workaround for GH-85, should be fixed by prior knowledge of whether the dictionary\n   * contains tags; then we can scan for separator right-to-left.\n   *\n   * @see 
\"https://github.com/morfologik/morfologik-stemming/issues/85\"\n   */\n  @Deprecated\n  public int prefixBytes();\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/IStemmer.java",
    "content": "package morfologik.stemming;\n\nimport java.util.List;\n\n/** A generic &quot;stemmer&quot; interface in Morfologik. */\npublic interface IStemmer {\n  /**\n   * Returns a list of {@link WordData} entries for a given word. The returned list is never <code>\n   * null</code>. Depending on the stemmer's implementation the {@link WordData} may carry the stem\n   * and additional information (tag) or just the stem.\n   *\n   * <p>The returned list and any object it contains are not usable after a subsequent call to this\n   * method. Any data that should be stored in between must be copied by the caller.\n   *\n   * @param word The word (typically inflected) to look up base forms for.\n   * @return A list of {@link WordData} entries (possibly empty).\n   */\n  public List<WordData> lookup(CharSequence word);\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/NoEncoder.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\n\n/** No relative encoding at all (full target form is returned). */\npublic class NoEncoder implements ISequenceEncoder {\n  @Override\n  public ByteBuffer encode(ByteBuffer reuse, ByteBuffer source, ByteBuffer target) {\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, target.remaining());\n\n    target.mark();\n    reuse.put(target).flip();\n    target.reset();\n\n    return reuse;\n  }\n\n  @Override\n  public ByteBuffer decode(ByteBuffer reuse, ByteBuffer source, ByteBuffer encoded) {\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, encoded.remaining());\n\n    encoded.mark();\n    reuse.put(encoded).flip();\n    encoded.reset();\n\n    return reuse;\n  }\n\n  @Override\n  public int prefixBytes() {\n    return 0;\n  }\n\n  @Override\n  public String toString() {\n    return getClass().getSimpleName();\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/TrimInfixAndSuffixEncoder.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\n\n/**\n * Encodes <code>dst</code> relative to <code>src</code> by trimming whatever non-equal suffix and\n * infix <code>src</code> and <code>dst</code> have. The output code is (bytes):\n *\n * <pre>\n * {X}{L}{K}{suffix}\n * </pre>\n *\n * where <code>src's</code> infix at position (<code>X</code> - 'A') and of length (<code>L</code> -\n * 'A') should be removed, then (<code>K</code> - 'A') bytes should be trimmed from the end and then\n * the <code>suffix</code> should be appended to the resulting byte sequence.\n *\n * <p>Examples:\n *\n * <pre>\n * src: ayz\n * dst: abc\n * encoded: AACbc\n *\n * src: aillent\n * dst: aller\n * encoded: BBCr\n * </pre>\n */\npublic class TrimInfixAndSuffixEncoder implements ISequenceEncoder {\n  /** Maximum encodable single-byte code. */\n  private static final int REMOVE_EVERYTHING = 255;\n\n  private ByteBuffer scratch = ByteBuffer.allocate(0);\n\n  public ByteBuffer encode(ByteBuffer reuse, ByteBuffer source, ByteBuffer target) {\n    assert source.hasArray() && source.position() == 0 && source.arrayOffset() == 0;\n\n    assert target.hasArray() && target.position() == 0 && target.arrayOffset() == 0;\n\n    // Search for the infix that we can encode and remove from src\n    // to get a maximum-length prefix of dst. This could be done more efficiently\n    // by running a smarter longest-common-subsequence algorithm and some pruning (?).\n    //\n    // For now, a naive loop should do.\n\n    // There can be only two positions for the infix to delete:\n    // 1) we remove leading bytes, even if they are partially matching (but a longer match\n    //    exists somewhere later on).\n    // 2) we leave max. 
matching prefix and remove non-matching bytes that follow.\n    int maxInfixIndex = 0;\n    int maxSubsequenceLength = BufferUtils.sharedPrefixLength(source, target);\n    int maxInfixLength = 0;\n    for (int i : new int[] {0, maxSubsequenceLength}) {\n      for (int j = 1; j <= source.remaining() - i; j++) {\n        // Compute temporary src with the infix removed.\n        // Concatenate in scratch space for simplicity.\n        final int len2 = source.remaining() - (i + j);\n        scratch = BufferUtils.clearAndEnsureCapacity(scratch, i + len2);\n        scratch.put(source.array(), 0, i);\n        scratch.put(source.array(), i + j, len2);\n        scratch.flip();\n\n        int sharedPrefix = BufferUtils.sharedPrefixLength(scratch, target);\n\n        // Only update maxSubsequenceLength if we will be able to encode it.\n        if (sharedPrefix > 0\n            && sharedPrefix > maxSubsequenceLength\n            && i < REMOVE_EVERYTHING\n            && j < REMOVE_EVERYTHING) {\n          maxSubsequenceLength = sharedPrefix;\n          maxInfixIndex = i;\n          maxInfixLength = j;\n        }\n      }\n    }\n\n    int truncateSuffixBytes = source.remaining() - (maxInfixLength + maxSubsequenceLength);\n\n    // Special case: if we're removing the suffix in the infix code, move it\n    // to the suffix code instead.\n    if (truncateSuffixBytes == 0 && maxInfixIndex + maxInfixLength == source.remaining()) {\n      truncateSuffixBytes = maxInfixLength;\n      maxInfixIndex = maxInfixLength = 0;\n    }\n\n    if (maxInfixIndex >= REMOVE_EVERYTHING\n        || maxInfixLength >= REMOVE_EVERYTHING\n        || truncateSuffixBytes >= REMOVE_EVERYTHING) {\n      maxInfixIndex = maxSubsequenceLength = 0;\n      maxInfixLength = truncateSuffixBytes = REMOVE_EVERYTHING;\n    }\n\n    final int len1 = target.remaining() - maxSubsequenceLength;\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, 3 + len1);\n\n    reuse.put((byte) ((maxInfixIndex + 'A') & 0xFF));\n    
reuse.put((byte) ((maxInfixLength + 'A') & 0xFF));\n    reuse.put((byte) ((truncateSuffixBytes + 'A') & 0xFF));\n    reuse.put(target.array(), maxSubsequenceLength, len1);\n    reuse.flip();\n\n    return reuse;\n  }\n\n  @Override\n  public int prefixBytes() {\n    return 3;\n  }\n\n  public ByteBuffer decode(ByteBuffer reuse, ByteBuffer source, ByteBuffer encoded) {\n    assert encoded.remaining() >= 3;\n\n    final int p = encoded.position();\n    int infixIndex = (encoded.get(p) - 'A') & 0xFF;\n    int infixLength = (encoded.get(p + 1) - 'A') & 0xFF;\n    int truncateSuffixBytes = (encoded.get(p + 2) - 'A') & 0xFF;\n\n    if (infixLength == REMOVE_EVERYTHING || truncateSuffixBytes == REMOVE_EVERYTHING) {\n      infixIndex = 0;\n      infixLength = source.remaining();\n      truncateSuffixBytes = 0;\n    }\n\n    final int len1 = source.remaining() - (infixIndex + infixLength + truncateSuffixBytes);\n    final int len2 = encoded.remaining() - 3;\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, infixIndex + len1 + len2);\n\n    assert encoded.hasArray() && encoded.position() == 0 && encoded.arrayOffset() == 0;\n\n    assert source.hasArray() && source.position() == 0 && source.arrayOffset() == 0;\n\n    reuse.put(source.array(), 0, infixIndex);\n    reuse.put(source.array(), infixIndex + infixLength, len1);\n    reuse.put(encoded.array(), 3, len2);\n    reuse.flip();\n\n    return reuse;\n  }\n\n  @Override\n  public String toString() {\n    return getClass().getSimpleName();\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/TrimPrefixAndSuffixEncoder.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\n\n/**\n * Encodes <code>dst</code> relative to <code>src</code> by trimming whatever non-equal suffix and\n * prefix <code>src</code> and <code>dst</code> have. The output code is (bytes):\n *\n * <pre>\n * {P}{K}{suffix}\n * </pre>\n *\n * where (<code>P</code> - 'A') bytes should be trimmed from the start of <code>src</code>, (<code>K\n * </code> - 'A') bytes should be trimmed from the end of <code>src</code> and then the <code>suffix\n * </code> should be appended to the resulting byte sequence.\n *\n * <p>Examples:\n *\n * <pre>\n * src: abc\n * dst: abcd\n * encoded: AAd\n *\n * src: abc\n * dst: xyz\n * encoded: ADxyz\n * </pre>\n */\npublic class TrimPrefixAndSuffixEncoder implements ISequenceEncoder {\n  /** Maximum encodable single-byte code. */\n  private static final int REMOVE_EVERYTHING = 255;\n\n  public ByteBuffer encode(ByteBuffer reuse, ByteBuffer source, ByteBuffer target) {\n    // Search for the maximum matching subsequence that can be encoded.\n    int maxSubsequenceLength = 0;\n    int maxSubsequenceIndex = 0;\n    for (int i = 0; i < source.remaining(); i++) {\n      // prefix at i => shared subsequence (infix)\n      int sharedPrefix = BufferUtils.sharedPrefixLength(source, i, target, 0);\n      // Only update maxSubsequenceLength if we will be able to encode it.\n      if (sharedPrefix > maxSubsequenceLength\n          && i < REMOVE_EVERYTHING\n          && (source.remaining() - (i + sharedPrefix)) < REMOVE_EVERYTHING) {\n        maxSubsequenceLength = sharedPrefix;\n        maxSubsequenceIndex = i;\n      }\n    }\n\n    // Determine how much to remove (and where) from src to get a prefix of dst.\n    int truncatePrefixBytes = maxSubsequenceIndex;\n    int truncateSuffixBytes = (source.remaining() - (maxSubsequenceIndex + maxSubsequenceLength));\n    if (truncatePrefixBytes >= REMOVE_EVERYTHING || truncateSuffixBytes >= REMOVE_EVERYTHING) {\n      
maxSubsequenceIndex = maxSubsequenceLength = 0;\n      truncatePrefixBytes = truncateSuffixBytes = REMOVE_EVERYTHING;\n    }\n\n    final int len1 = target.remaining() - maxSubsequenceLength;\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, 2 + len1);\n\n    assert target.hasArray() && target.position() == 0 && target.arrayOffset() == 0;\n\n    reuse.put((byte) ((truncatePrefixBytes + 'A') & 0xFF));\n    reuse.put((byte) ((truncateSuffixBytes + 'A') & 0xFF));\n    reuse.put(target.array(), maxSubsequenceLength, len1);\n    reuse.flip();\n\n    return reuse;\n  }\n\n  @Override\n  public int prefixBytes() {\n    return 2;\n  }\n\n  public ByteBuffer decode(ByteBuffer reuse, ByteBuffer source, ByteBuffer encoded) {\n    assert encoded.remaining() >= 2;\n\n    final int p = encoded.position();\n    int truncatePrefixBytes = (encoded.get(p) - 'A') & 0xFF;\n    int truncateSuffixBytes = (encoded.get(p + 1) - 'A') & 0xFF;\n\n    if (truncatePrefixBytes == REMOVE_EVERYTHING || truncateSuffixBytes == REMOVE_EVERYTHING) {\n      truncatePrefixBytes = source.remaining();\n      truncateSuffixBytes = 0;\n    }\n\n    assert source.hasArray() && source.position() == 0 && source.arrayOffset() == 0;\n\n    assert encoded.hasArray() && encoded.position() == 0 && encoded.arrayOffset() == 0;\n\n    final int len1 = source.remaining() - (truncateSuffixBytes + truncatePrefixBytes);\n    final int len2 = encoded.remaining() - 2;\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, len1 + len2);\n\n    reuse.put(source.array(), truncatePrefixBytes, len1);\n    reuse.put(encoded.array(), 2, len2);\n    reuse.flip();\n\n    return reuse;\n  }\n\n  @Override\n  public String toString() {\n    return getClass().getSimpleName();\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/TrimSuffixEncoder.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.ByteBuffer;\n\n/**\n * Encodes <code>dst</code> relative to <code>src</code> by trimming whatever non-equal suffix\n * <code>src</code> has. The output code is (bytes):\n *\n * <pre>\n * {K}{suffix}\n * </pre>\n *\n * where (<code>K</code> - 'A') bytes should be trimmed from the end of <code>src</code> and then\n * the <code>suffix</code> should be appended to the resulting byte sequence.\n *\n * <p>Examples:\n *\n * <pre>\n * src: foo\n * dst: foobar\n * encoded: Abar\n *\n * src: foo\n * dst: bar\n * encoded: Dbar\n * </pre>\n */\npublic class TrimSuffixEncoder implements ISequenceEncoder {\n  /** Maximum encodable single-byte code. */\n  private static final int REMOVE_EVERYTHING = 255;\n\n  public ByteBuffer encode(ByteBuffer reuse, ByteBuffer source, ByteBuffer target) {\n    int sharedPrefix = BufferUtils.sharedPrefixLength(source, target);\n    int truncateBytes = source.remaining() - sharedPrefix;\n    if (truncateBytes >= REMOVE_EVERYTHING) {\n      truncateBytes = REMOVE_EVERYTHING;\n      sharedPrefix = 0;\n    }\n\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, 1 + target.remaining() - sharedPrefix);\n\n    assert target.hasArray() && target.position() == 0 && target.arrayOffset() == 0;\n\n    final byte suffixTrimCode = (byte) (truncateBytes + 'A');\n    reuse\n        .put(suffixTrimCode)\n        .put(target.array(), sharedPrefix, target.remaining() - sharedPrefix)\n        .flip();\n\n    return reuse;\n  }\n\n  @Override\n  public int prefixBytes() {\n    return 1;\n  }\n\n  public ByteBuffer decode(ByteBuffer reuse, ByteBuffer source, ByteBuffer encoded) {\n    assert encoded.remaining() >= 1;\n\n    int suffixTrimCode = encoded.get(encoded.position());\n    int truncateBytes = (suffixTrimCode - 'A') & 0xFF;\n    if (truncateBytes == REMOVE_EVERYTHING) {\n      truncateBytes = source.remaining();\n    }\n\n    final int len1 = source.remaining() - truncateBytes;\n    final int 
len2 = encoded.remaining() - 1;\n\n    reuse = BufferUtils.clearAndEnsureCapacity(reuse, len1 + len2);\n\n    assert source.hasArray() && source.position() == 0 && source.arrayOffset() == 0;\n\n    assert encoded.hasArray() && encoded.position() == 0 && encoded.arrayOffset() == 0;\n\n    reuse.put(source.array(), 0, len1).put(encoded.array(), 1, len2).flip();\n\n    return reuse;\n  }\n\n  @Override\n  public String toString() {\n    return getClass().getSimpleName();\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/UnmappableInputException.java",
    "content": "package morfologik.stemming;\n\nimport java.nio.charset.CharacterCodingException;\n\n/**\n * Thrown when some input cannot be mapped using the declared charset (bytes to characters or the\n * other way around).\n */\n@SuppressWarnings(\"serial\")\npublic final class UnmappableInputException extends Exception {\n  UnmappableInputException(String message, CharacterCodingException cause) {\n    super(message, cause);\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/main/java/morfologik/stemming/WordData.java",
    "content": "package morfologik.stemming;\n\nimport java.io.UnsupportedEncodingException;\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.*;\n\n/**\n * Stem and tag data associated with a given word.\n *\n * <p>Instances of this class are reused and mutable (values returned from {@link #getStem()},\n * {@link #getWord()} and other related methods change on subsequent calls to the {@link\n * DictionaryLookup} class that returned a given instance of {@link WordData}).\n *\n * <p>If you need a copy of the stem or tag data for a given word, you have to create a custom\n * buffer yourself and copy the associated data, perform {@link #clone()} or create strings (they\n * are immutable) using {@link #getStem()} and then {@link CharSequence#toString()}.\n *\n * <p>For reasons above it makes no sense to use instances of this class in associative containers\n * or lists. In fact, both {@link #equals(Object)} and {@link #hashCode()} are overridden and throw\n * exceptions to prevent accidental damage.\n */\npublic final class WordData implements Cloneable {\n  /** Error information if somebody puts us in a Java collection. */\n  private static final String COLLECTIONS_ERROR_MESSAGE =\n      \"Not suitable for use\"\n          + \" in Java collections framework (volatile content). Refer to documentation.\";\n\n  /** Character encoding in internal buffers. */\n  private final CharsetDecoder decoder;\n\n  /** Inflected word form data. */\n  private CharSequence wordCharSequence;\n\n  /** Character sequence after converting {@link #stemBuffer} using {@link #decoder}. */\n  private CharBuffer stemCharSequence;\n\n  /** Character sequence after converting {@link #tagBuffer} using {@link #decoder}. */\n  private CharBuffer tagCharSequence;\n\n  /** Byte buffer holding the inflected word form data. */\n  ByteBuffer wordBuffer;\n\n  /** Byte buffer holding stem data. */\n  ByteBuffer stemBuffer;\n\n  /** Byte buffer holding tag data. 
*/\n  ByteBuffer tagBuffer;\n\n  /** Package scope constructor. */\n  WordData(CharsetDecoder decoder) {\n    this.decoder = decoder;\n\n    stemBuffer = ByteBuffer.allocate(0);\n    tagBuffer = ByteBuffer.allocate(0);\n    stemCharSequence = CharBuffer.allocate(0);\n    tagCharSequence = CharBuffer.allocate(0);\n  }\n\n  /** A constructor for tests only. */\n  WordData(String stem, String tag, String encoding) {\n    this(Charset.forName(encoding).newDecoder());\n\n    try {\n      // Wrap rather than put(): the default buffers have zero capacity, and put() would also\n      // leave the position at the end (nothing remaining to read).\n      if (stem != null) stemBuffer = ByteBuffer.wrap(stem.getBytes(encoding));\n      if (tag != null) tagBuffer = ByteBuffer.wrap(tag.getBytes(encoding));\n    } catch (UnsupportedEncodingException e) {\n      throw new RuntimeException(e);\n    }\n  }\n\n  /**\n   * Copy the stem's binary data (no charset decoding) to a custom byte buffer.\n   *\n   * <p>The buffer is cleared prior to copying and flipped for reading upon returning from this\n   * method. If the buffer is null or not large enough to hold the result, a new buffer is\n   * allocated.\n   *\n   * @param target Target byte buffer to copy the stem buffer to or <code>null</code> if a new\n   *     buffer should be allocated.\n   * @return Returns <code>target</code> or the new reallocated buffer.\n   */\n  public ByteBuffer getStemBytes(ByteBuffer target) {\n    target = BufferUtils.clearAndEnsureCapacity(target, stemBuffer.remaining());\n    stemBuffer.mark();\n    target.put(stemBuffer);\n    stemBuffer.reset();\n    target.flip();\n    return target;\n  }\n\n  /**\n   * Copy the tag's binary data (no charset decoding) to a custom byte buffer.\n   *\n   * <p>The buffer is cleared prior to copying and flipped for reading upon returning from this\n   * method. 
If the buffer is null or not large enough to hold the result, a new buffer is\n   * allocated.\n   *\n   * @param target Target byte buffer to copy the tag buffer to or <code>null</code> if a new buffer\n   *     should be allocated.\n   * @return Returns <code>target</code> or the new reallocated buffer.\n   */\n  public ByteBuffer getTagBytes(ByteBuffer target) {\n    target = BufferUtils.clearAndEnsureCapacity(target, tagBuffer.remaining());\n    tagBuffer.mark();\n    target.put(tagBuffer);\n    tagBuffer.reset();\n    target.flip();\n    return target;\n  }\n\n  /**\n   * Copy the inflected word's binary data (no charset decoding) to a custom byte buffer.\n   *\n   * <p>The buffer is cleared prior to copying and flipped for reading upon returning from this\n   * method. If the buffer is null or not large enough to hold the result, a new buffer is\n   * allocated.\n   *\n   * @param target Target byte buffer to copy the word buffer to or <code>null</code> if a new\n   *     buffer should be allocated.\n   * @return Returns <code>target</code> or the new reallocated buffer.\n   */\n  public ByteBuffer getWordBytes(ByteBuffer target) {\n    target = BufferUtils.clearAndEnsureCapacity(target, wordBuffer.remaining());\n    wordBuffer.mark();\n    target.put(wordBuffer);\n    wordBuffer.reset();\n    target.flip();\n    return target;\n  }\n\n  /**\n   * @return Return tag data decoded to a character sequence or <code>null</code> if no associated\n   *     tag data exists.\n   */\n  public CharSequence getTag() {\n    tagCharSequence = BufferUtils.bytesToChars(decoder, tagBuffer, tagCharSequence);\n    return tagCharSequence.remaining() == 0 ? 
null : tagCharSequence;\n  }\n\n  /**\n   * @return Return stem data decoded to a character sequence or <code>null</code> if no associated\n   *     stem data exists.\n   */\n  public CharSequence getStem() {\n    stemCharSequence = BufferUtils.bytesToChars(decoder, stemBuffer, stemCharSequence);\n    return stemCharSequence.remaining() == 0 ? null : stemCharSequence;\n  }\n\n  /**\n   * @return Return inflected word form data. Usually the parameter passed to {@link\n   *     DictionaryLookup#lookup(CharSequence)}.\n   */\n  public CharSequence getWord() {\n    return wordCharSequence;\n  }\n\n  /*\n   *\n   */\n  @Override\n  public boolean equals(Object obj) {\n    throw new UnsupportedOperationException(COLLECTIONS_ERROR_MESSAGE);\n  }\n\n  /*\n   *\n   */\n  @Override\n  public int hashCode() {\n    throw new UnsupportedOperationException(COLLECTIONS_ERROR_MESSAGE);\n  }\n\n  @Override\n  public String toString() {\n    return \"WordData[\" + this.getWord() + \",\" + this.getStem() + \",\" + this.getTag() + \"]\";\n  }\n\n  /**\n   * Declare a covariant of {@link Object#clone()} that returns a deep copy of this object. The\n   * content of all internal buffers is copied.\n   */\n  @Override\n  public WordData clone() {\n    final WordData clone = new WordData(this.decoder);\n    clone.wordCharSequence = cloneCharSequence(wordCharSequence);\n    clone.wordBuffer = getWordBytes(null);\n    clone.stemBuffer = getStemBytes(null);\n    clone.tagBuffer = getTagBytes(null);\n    return clone;\n  }\n\n  /** Clone char sequences only if not immutable. */\n  private CharSequence cloneCharSequence(CharSequence chs) {\n    if (chs instanceof String) return chs;\n    return chs.toString();\n  }\n\n  void update(ByteBuffer wordBuffer, CharSequence word) {\n    this.stemCharSequence.clear();\n    this.tagCharSequence.clear();\n    this.stemBuffer.clear();\n    this.tagBuffer.clear();\n\n    this.wordBuffer = wordBuffer;\n    this.wordCharSequence = word;\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/java/morfologik/stemming/DictionaryLookupTest.java",
    "content": "package morfologik.stemming;\n\nimport static org.assertj.core.api.Assertions.*;\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport java.io.IOException;\nimport java.net.URL;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.LinkedHashMap;\nimport morfologik.fsa.FSA;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\n\npublic class DictionaryLookupTest {\n  @Test\n  public void testApplyReplacements() {\n    LinkedHashMap<String, String> conversion = new LinkedHashMap<>();\n    conversion.put(\"'\", \"`\");\n    conversion.put(\"fi\", \"ﬁ\");\n    conversion.put(\"\\\\a\", \"ą\");\n    conversion.put(\"Barack\", \"George\");\n    conversion.put(\"_\", \"xx\");\n    assertEquals(\"ﬁlut\", DictionaryLookup.applyReplacements(\"filut\", conversion));\n    assertEquals(\"ﬁzdrygałką\", DictionaryLookup.applyReplacements(\"fizdrygałk\\\\a\", conversion));\n    assertEquals(\"George Bush\", DictionaryLookup.applyReplacements(\"Barack Bush\", conversion));\n    assertEquals(\"xxxxxxxx\", DictionaryLookup.applyReplacements(\"____\", conversion));\n  }\n\n  @Test\n  public void testRemovedEncoderProperties() throws IOException {\n    final URL url = this.getClass().getResource(\"test-removed-props.dict\");\n    try {\n      new DictionaryLookup(Dictionary.read(url));\n      Assertions.fail();\n    } catch (IOException e) {\n      assertThat(e).hasMessageContaining(DictionaryAttribute.ENCODER.propertyName);\n    }\n  }\n\n  @Test\n  public void testPrefixDictionaries() throws IOException {\n    final URL url = this.getClass().getResource(\"test-prefix.dict\");\n    final IStemmer s = new DictionaryLookup(Dictionary.read(url));\n\n    assertArrayEquals(new String[] {\"Rzeczpospolita\", \"subst:irreg\"}, stem(s, \"Rzeczypospolitej\"));\n    assertArrayEquals(new String[] {\"Rzeczpospolita\", \"subst:irreg\"}, stem(s, \"Rzecząpospolitą\"));\n\n    // This word is not in the 
dictionary.\n    assertNoStemFor(s, \"martygalski\");\n  }\n\n  @Test\n  public void testInputConversion() throws IOException {\n    final URL url = this.getClass().getResource(\"test-prefix.dict\");\n    final IStemmer s = new DictionaryLookup(Dictionary.read(url));\n\n    assertArrayEquals(\n        new String[] {\"Rzeczpospolita\", \"subst:irreg\"}, stem(s, \"Rzecz\\\\apospolit\\\\a\"));\n\n    assertArrayEquals(\n        new String[] {\"Rzeczpospolita\", \"subst:irreg\"}, stem(s, \"krowa\\\\apospolit\\\\a\"));\n  }\n\n  /* */\n  @Test\n  public void testInfixDictionaries() throws IOException {\n    final URL url = this.getClass().getResource(\"test-infix.dict\");\n    final IStemmer s = new DictionaryLookup(Dictionary.read(url));\n\n    Assertions.assertThat(stem(s, \"Rzeczypospolitej\"))\n        .containsExactly(\"Rzeczpospolita\", \"subst:irreg\");\n\n    Assertions.assertThat(stem(s, \"Rzeczyccy\")).containsExactly(\"Rzeczycki\", \"adj:pl:nom:m\");\n\n    Assertions.assertThat(stem(s, \"Rzecząpospolitą\"))\n        .containsExactly(\"Rzeczpospolita\", \"subst:irreg\");\n\n    // This word is not in the dictionary.\n    assertNoStemFor(s, \"martygalski\");\n\n    // This word uses characters that are outside of the encoding range of the dictionary.\n    assertNoStemFor(s, \"Rzeczyckiõh\");\n  }\n\n  /* */\n  @Test\n  public void testWordDataIterator() throws IOException {\n    final URL url = this.getClass().getResource(\"test-infix.dict\");\n    final DictionaryLookup s = new DictionaryLookup(Dictionary.read(url));\n\n    final HashSet<String> entries = new HashSet<String>();\n    for (WordData wd : s) {\n      entries.add(wd.getWord() + \" \" + wd.getStem() + \" \" + wd.getTag());\n    }\n\n    // Make sure a sample of the entries is present.\n    Assertions.assertThat(entries)\n        .contains(\n            \"Rzekunia Rzekuń subst:sg:gen:m\",\n            \"Rzeczkowskie Rzeczkowski adj:sg:nom.acc.voc:n+adj:pl:acc.nom.voc:f.n\",\n            
\"Rzecząpospolitą Rzeczpospolita subst:irreg\",\n            \"Rzeczypospolita Rzeczpospolita subst:irreg\",\n            \"Rzeczypospolitych Rzeczpospolita subst:irreg\",\n            \"Rzeczyckiej Rzeczycki adj:sg:gen.dat.loc:f\");\n  }\n\n  /* */\n  @Test\n  public void testWordDataCloning() throws IOException {\n    final URL url = this.getClass().getResource(\"test-infix.dict\");\n    final DictionaryLookup s = new DictionaryLookup(Dictionary.read(url));\n\n    ArrayList<WordData> words = new ArrayList<WordData>();\n    for (WordData wd : s) {\n      WordData clone = wd.clone();\n      words.add(clone);\n    }\n\n    // Reiterate and verify that we have the same entries.\n    final DictionaryLookup s2 = new DictionaryLookup(Dictionary.read(url));\n    int i = 0;\n    for (WordData wd : s2) {\n      WordData clone = words.get(i++);\n      assertEqualSequences(clone.getStem(), wd.getStem());\n      assertEqualSequences(clone.getTag(), wd.getTag());\n      assertEqualSequences(clone.getWord(), wd.getWord());\n    }\n\n    // Check collections contract.\n    final HashSet<WordData> entries = new HashSet<WordData>();\n    try {\n      entries.add(words.get(0));\n      Assertions.fail();\n    } catch (RuntimeException e) {\n      // Expected.\n    }\n  }\n\n  private void assertEqualSequences(CharSequence s1, CharSequence s2) {\n    assertEquals(s1.toString(), s2.toString());\n  }\n\n  /* */\n  @Test\n  public void testMultibyteEncodingUTF8() throws IOException {\n    final URL url = this.getClass().getResource(\"test-diacritics-utf8.dict\");\n    Dictionary read = Dictionary.read(url);\n    final IStemmer s = new DictionaryLookup(read);\n\n    assertArrayEquals(new String[] {\"merge\", \"001\"}, stem(s, \"mergeam\"));\n    assertArrayEquals(new String[] {\"merge\", \"002\"}, stem(s, \"merseserăm\"));\n  }\n\n  /* */\n  @Test\n  public void testSynthesis() throws IOException {\n    final URL url = this.getClass().getResource(\"test-synth.dict\");\n    final IStemmer 
s = new DictionaryLookup(Dictionary.read(url));\n\n    assertArrayEquals(new String[] {\"miała\", null}, stem(s, \"mieć|verb:praet:sg:ter:f:?perf\"));\n    assertArrayEquals(new String[] {\"a\", null}, stem(s, \"a|conj\"));\n    assertArrayEquals(new String[] {}, stem(s, \"dziecko|subst:sg:dat:n\"));\n\n    // This word is not in the dictionary.\n    assertNoStemFor(s, \"martygalski\");\n  }\n\n  /* */\n  @Test\n  public void testInputWithSeparators() throws IOException {\n    final URL url = this.getClass().getResource(\"test-separators.dict\");\n    final DictionaryLookup s = new DictionaryLookup(Dictionary.read(url));\n\n    /*\n     * Attempt to reconstruct input sequences using WordData iterator.\n     */\n    ArrayList<String> sequences = new ArrayList<String>();\n    for (WordData wd : s) {\n      sequences.add(\"\" + wd.getWord() + \" \" + wd.getStem() + \" \" + wd.getTag());\n    }\n    Collections.sort(sequences);\n\n    assertEquals(\"token1 null null\", sequences.get(0));\n    assertEquals(\"token2 null null\", sequences.get(1));\n    assertEquals(\"token3 null +\", sequences.get(2));\n    assertEquals(\"token4 token2 null\", sequences.get(3));\n    assertEquals(\"token5 token2 null\", sequences.get(4));\n    assertEquals(\"token6 token2 +\", sequences.get(5));\n    assertEquals(\"token7 token2 token3+\", sequences.get(6));\n    assertEquals(\"token8 token2 token3++\", sequences.get(7));\n  }\n\n  /* */\n  @Test\n  public void testSeparatorInLookupTerm() throws IOException {\n    FSA fsa = FSA.read(getClass().getResourceAsStream(\"test-separator-in-lookup.fsa\"));\n\n    DictionaryMetadata metadata =\n        new DictionaryMetadataBuilder()\n            .separator('+')\n            .encoding(\"iso8859-1\")\n            .encoder(EncoderType.INFIX)\n            .build();\n\n    final DictionaryLookup s = new DictionaryLookup(new Dictionary(fsa, metadata));\n    assertEquals(0, s.lookup(\"l+A\").size());\n  }\n\n  /* */\n  @Test\n  public void
 testGetSeparator() throws IOException {\n    final URL url = this.getClass().getResource(\"test-separators.dict\");\n    final DictionaryLookup s = new DictionaryLookup(Dictionary.read(url));\n    assertEquals('+', s.getSeparatorChar());\n  }\n\n  /* */\n  public static String asString(CharSequence s) {\n    if (s == null) return null;\n    return s.toString();\n  }\n\n  /* */\n  public static String[] stem(IStemmer s, String word) {\n    ArrayList<String> result = new ArrayList<String>();\n    for (WordData wd : s.lookup(word)) {\n      result.add(asString(wd.getStem()));\n      result.add(asString(wd.getTag()));\n    }\n    return result.toArray(new String[result.size()]);\n  }\n\n  /* */\n  public static void assertNoStemFor(IStemmer s, String word) {\n    assertArrayEquals(new String[] {}, stem(s, word));\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/java/morfologik/stemming/DictionaryMetadataBuilderTest.java",
    "content": "package morfologik.stemming;\n\nimport java.io.IOException;\nimport java.nio.charset.Charset;\nimport java.util.Collections;\nimport java.util.EnumSet;\nimport java.util.List;\nimport java.util.Locale;\nimport java.util.Set;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\n\npublic class DictionaryMetadataBuilderTest {\n  @Test\n  public void testAllConstantsHaveBuilderMethods() throws IOException {\n    Set<DictionaryAttribute> keySet =\n        new DictionaryMetadataBuilder()\n            .convertCase()\n            .encoding(Charset.defaultCharset())\n            .encoding(\"UTF-8\")\n            .frequencyIncluded()\n            .ignoreAllUppercase()\n            .ignoreCamelCase()\n            .ignoreDiacritics()\n            .ignoreNumbers()\n            .ignorePunctuation()\n            .separator('+')\n            .supportRunOnWords()\n            .encoder(EncoderType.SUFFIX)\n            .withEquivalentChars(Collections.<Character, List<Character>>emptyMap())\n            .withReplacementPairs(Collections.<String, List<String>>emptyMap())\n            .withInputConversionPairs(Collections.<String, String>emptyMap())\n            .withOutputConversionPairs(Collections.<String, String>emptyMap())\n            .locale(Locale.getDefault())\n            .license(\"\")\n            .author(\"\")\n            .creationDate(\"\")\n            .toMap()\n            .keySet();\n\n    Set<DictionaryAttribute> all = EnumSet.allOf(DictionaryAttribute.class);\n    all.removeAll(keySet);\n\n    Assertions.assertThat(all).isEmpty();\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/java/morfologik/stemming/DictionaryMetadataTest.java",
    "content": "package morfologik.stemming;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomPicks;\nimport java.io.ByteArrayInputStream;\nimport java.io.IOException;\nimport java.io.StringWriter;\nimport java.nio.charset.Charset;\nimport java.nio.charset.StandardCharsets;\nimport java.util.Arrays;\nimport java.util.Random;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\n\n@Randomized\npublic class DictionaryMetadataTest extends RandomizedTest {\n  @Test\n  public void testEscapeSeparator() throws IOException {\n    DictionaryMetadata m =\n        DictionaryMetadata.read(getClass().getResourceAsStream(\"escape-separator.info\"));\n    Assertions.assertThat(m.getSeparator()).isEqualTo((byte) '\\t');\n  }\n\n  @Test\n  public void testUnicodeSeparator() throws IOException {\n    DictionaryMetadata m =\n        DictionaryMetadata.read(getClass().getResourceAsStream(\"unicode-separator.info\"));\n    Assertions.assertThat(m.getSeparator()).isEqualTo((byte) '\\t');\n  }\n\n  @Test\n  public void testWriteMetadata(Random rnd) throws IOException {\n    StringWriter sw = new StringWriter();\n\n    EncoderType encoder = RandomPicks.randomFrom(rnd, EncoderType.values());\n    Charset encoding =\n        RandomPicks.randomFrom(\n            rnd,\n            Arrays.asList(\n                StandardCharsets.UTF_8, StandardCharsets.ISO_8859_1, StandardCharsets.US_ASCII));\n\n    DictionaryMetadata.builder()\n        .encoding(encoding)\n        .encoder(encoder)\n        .separator('|')\n        .build()\n        .write(sw);\n\n    DictionaryMetadata other =\n        DictionaryMetadata.read(\n            new ByteArrayInputStream(sw.toString().getBytes(StandardCharsets.UTF_8)));\n\n    Assertions.assertThat(other.getSeparator()).isEqualTo((byte) '|');\n    
Assertions.assertThat(other.getDecoder().charset()).isEqualTo(encoding);\n    Assertions.assertThat(other.getEncoder().charset()).isEqualTo(encoding);\n    Assertions.assertThat(other.getSequenceEncoderType()).isEqualTo(encoder);\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/java/morfologik/stemming/DictionaryTest.java",
    "content": "package morfologik.stemming;\n\nimport static org.junit.jupiter.api.Assertions.*;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.io.TempDir;\n\n@Randomized\npublic class DictionaryTest extends RandomizedTest {\n  @Test\n  public void testReadFromFile(@TempDir Path tempDir) throws IOException {\n    Path dict = tempDir.resolve(\"odd name.dict\");\n    Path info = dict.resolveSibling(\"odd name.info\");\n    try (InputStream dictInput = this.getClass().getResource(\"test-infix.dict\").openStream();\n        InputStream infoInput = this.getClass().getResource(\"test-infix.info\").openStream()) {\n      Files.copy(dictInput, dict);\n      Files.copy(infoInput, info);\n    }\n\n    assertNotNull(Dictionary.read(dict.toUri().toURL()));\n    assertNotNull(Dictionary.read(dict));\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/java/morfologik/stemming/EncodersTest.java",
    "content": "package morfologik.stemming;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.nio.charset.StandardCharsets;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\n\n@Randomized\npublic class EncodersTest extends RandomizedTest {\n  @Test\n  public void testSharedPrefix() throws IOException {\n    Assertions.assertThat(\n            BufferUtils.sharedPrefixLength(\n                ByteBuffer.wrap(b(\"abcdef\")), ByteBuffer.wrap(b(\"abcd__\"))))\n        .isEqualTo(4);\n\n    Assertions.assertThat(\n            BufferUtils.sharedPrefixLength(ByteBuffer.wrap(b(\"\")), ByteBuffer.wrap(b(\"_\"))))\n        .isEqualTo(0);\n\n    Assertions.assertThat(\n            BufferUtils.sharedPrefixLength(\n                ByteBuffer.wrap(b(\"abcdef\"), 2, 2), ByteBuffer.wrap(b(\"___cd__\"), 3, 2)))\n        .isEqualTo(2);\n  }\n\n  private static byte[] b(String arg) {\n    byte[] bytes = arg.getBytes(StandardCharsets.UTF_8);\n    Assertions.assertThat(bytes).hasSize(arg.length());\n    return bytes;\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/java/morfologik/stemming/SequenceEncodersTest.java",
    "content": "package morfologik.stemming;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomStrings;\nimport java.nio.ByteBuffer;\nimport java.nio.charset.StandardCharsets;\nimport java.util.Random;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.params.ParameterizedClass;\nimport org.junit.jupiter.params.provider.EnumSource;\n\n@Randomized\n@ParameterizedClass\n@EnumSource(EncoderType.class)\npublic class SequenceEncodersTest extends RandomizedTest {\n  private final ISequenceEncoder coder;\n\n  public SequenceEncodersTest(EncoderType coderType) {\n    this.coder = coderType.get();\n  }\n\n  @Test\n  public void testEncodeSuffixOnRandomSequences(Random rnd) {\n    for (int i = 0; i < 10000; i++) {\n      assertRoundtripEncode(\n          rnd,\n          RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 500),\n          RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 500));\n    }\n  }\n\n  @Test\n  public void testEncodeSamples(Random rnd) {\n    assertRoundtripEncode(rnd, \"\", \"\");\n    assertRoundtripEncode(rnd, \"abc\", \"ab\");\n    assertRoundtripEncode(rnd, \"abc\", \"abx\");\n    assertRoundtripEncode(rnd, \"ab\", \"abc\");\n    assertRoundtripEncode(rnd, \"xabc\", \"abc\");\n    assertRoundtripEncode(rnd, \"axbc\", \"abc\");\n    assertRoundtripEncode(rnd, \"axybc\", \"abc\");\n    assertRoundtripEncode(rnd, \"axybc\", \"abc\");\n    assertRoundtripEncode(rnd, \"azbc\", \"abcxy\");\n\n    assertRoundtripEncode(rnd, \"Niemcami\", \"Niemiec\");\n    assertRoundtripEncode(rnd, \"Niemiec\", \"Niemcami\");\n  }\n\n  private void assertRoundtripEncode(Random rnd, String srcString, String dstString) {\n    ByteBuffer source = ByteBuffer.wrap(srcString.getBytes(StandardCharsets.UTF_8));\n    ByteBuffer target = 
ByteBuffer.wrap(dstString.getBytes(StandardCharsets.UTF_8));\n\n    ByteBuffer encoded = coder.encode(ByteBuffer.allocate(rnd.nextInt(30)), source, target);\n    ByteBuffer decoded = coder.decode(ByteBuffer.allocate(rnd.nextInt(30)), source, encoded);\n\n    if (!decoded.equals(target)) {\n      System.out.println(\"src: \" + BufferUtils.toString(source, StandardCharsets.UTF_8));\n      System.out.println(\"dst: \" + BufferUtils.toString(target, StandardCharsets.UTF_8));\n      System.out.println(\"enc: \" + BufferUtils.toString(encoded, StandardCharsets.UTF_8));\n      System.out.println(\"dec: \" + BufferUtils.toString(decoded, StandardCharsets.UTF_8));\n      Assertions.fail(\"Mismatch.\");\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/escape-separator.info",
    "content": "#\r\n# An escape sequence for the separator.\r\n#\r\n\r\nfsa.dict.separator=\\t\r\nfsa.dict.encoding=UTF-8\r\nfsa.dict.encoder=suffix\r\n"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-diacritics-utf8.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=UTF-8\r\n\r\nfsa.dict.encoder=suffix\r\n"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-infix.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso-8859-2\r\n\r\nfsa.dict.encoder=infix"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-prefix.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso-8859-2\r\n\r\nfsa.dict.encoder=prefix\r\n\r\nfsa.dict.input-conversion=\\\\a ą, krowa Rzecz"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-removed-props.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso-8859-2\r\n\r\nfsa.dict.uses-infixes=true"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-separator-in-lookup.in",
    "content": "l+A+LW\nl+A+NN1d"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-separators.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso8859-1\r\n\r\nfsa.dict.encoder=none\r\n"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-separators.txt",
    "content": "token1+\ntoken2++\ntoken3+++\ntoken4+token2\ntoken5+token2+\ntoken6+token2++\ntoken7+token2+token3+\ntoken8+token2+token3++"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/test-synth.info",
    "content": "#\r\n# Dictionary properties.\r\n#\r\n\r\nfsa.dict.separator=+\r\nfsa.dict.encoding=iso-8859-2\r\n\r\nfsa.dict.encoder=suffix"
  },
  {
    "path": "morfologik-stemming/src/test/resources/morfologik/stemming/unicode-separator.info",
    "content": "#\r\n# A Unicode escape sequence for the separator.\r\n#\r\n\r\nfsa.dict.separator=\\u0009\r\nfsa.dict.encoding=UTF-8\r\nfsa.dict.encoder=suffix\r\n"
  },
  {
    "path": "morfologik-tools/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.carrot2</groupId>\n    <artifactId>morfologik-parent</artifactId>\n    <version>2.2.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>morfologik-tools</artifactId>\n  <packaging>jar</packaging>\n\n  <name>Morfologik Command Line Tools</name>\n  <description>Morfologik Command Line Tools</description>\n\n  <properties>\n    <forbiddenapis.signaturefile>../etc/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n    <project.moduleId>org.carrot2.morfologik.tools</project.moduleId>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-fsa</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-fsa-builders</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n\n    <dependency>\n      <groupId>org.carrot2</groupId>\n      <artifactId>morfologik-stemming</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n\n    <dependency>\n      <groupId>com.beust</groupId>\n      <artifactId>jcommander</artifactId>\n      <version>1.78</version>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-jar-plugin</artifactId>\n        <configuration>\n          <archive>\n            <manifest>\n              <mainClass>morfologik.tools.Launcher</mainClass>\n              <addClasspath>true</addClasspath>\n            </manifest>\n          </archive>\n        
</configuration>\n      </plugin>\n\n      <plugin>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>package-zip</id>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n            <configuration>\n              <formats>\n                <format>zip</format>\n              </formats>\n              <descriptors>\n                <descriptor>src/main/assembly/package.xml</descriptor>\n              </descriptors>\n              <attach>false</attach>\n              <appendAssemblyId>true</appendAssemblyId>\n              <finalName>${project.artifactId}-${project.version}</finalName>\n            </configuration>\n          </execution>\n\n          <execution>\n            <id>package-dir</id>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n            <configuration>\n              <formats>\n                <format>dir</format>\n              </formats>\n              <descriptors>\n                <descriptor>src/main/assembly/package.xml</descriptor>\n              </descriptors>\n              <attach>false</attach>\n              <appendAssemblyId>false</appendAssemblyId>\n              <finalName>${project.artifactId}-${project.version}</finalName>\n            </configuration>\n          </execution>          \n        </executions>\n      </plugin>    \n\n      <plugin>\n        <groupId>de.thetaphi</groupId>\n        <artifactId>forbiddenapis</artifactId>\n        <version>${version.forbiddenapis}</version>\n\n        <executions>\n          <execution>\n            <id>forbidden-apis</id>\n            <configuration>\n              <bundledSignatures combine.self=\"override\">\n                <bundledSignature>jdk-unsafe</bundledSignature>\n                <bundledSignature>jdk-deprecated</bundledSignature>\n              </bundledSignatures>\n            
</configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "morfologik-tools/src/main/assembly/package.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0\"\n  xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd\">\n\n  <id>package</id>\n  <includeBaseDirectory>true</includeBaseDirectory>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>/lib</outputDirectory>\n      <useTransitiveDependencies>true</useTransitiveDependencies>\n      <useTransitiveFiltering>true</useTransitiveFiltering>\n      <useProjectArtifact>true</useProjectArtifact>\n    </dependencySet>\n  </dependencySets>\n  \n  <fileSets>\n    <fileSet>\n      <directory>src/main/package</directory>\n      <filtered>false</filtered>\n      <outputDirectory>.</outputDirectory>\n      <excludes>\n        <exclude>**/*.txt</exclude>\n      </excludes>\n    </fileSet>\n    <fileSet>\n      <directory>src/main/package</directory>\n      <filtered>true</filtered>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>**/*.txt</include>\n      </includes>\n      <lineEnding>unix</lineEnding>\n    </fileSet>\n  </fileSets>\n</assembly>\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/BinaryInput.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport java.io.BufferedInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\n\nfinal class BinaryInput {\n  private static final String ARG_ACCEPT_BOM = \"--accept-bom\";\n  private static final String ARG_ACCEPT_CR = \"--accept-cr\";\n  private static final String ARG_IGNORE_EMPTY = \"--ignore-empty\";\n\n  private static interface LineConsumer {\n    byte[] process(byte[] buffer, int length);\n  }\n\n  @Parameter(\n      names = BinaryInput.ARG_ACCEPT_BOM,\n      arity = 0,\n      description = \"Accept leading BOM bytes (UTF-8).\")\n  private boolean acceptBom;\n\n  @Parameter(\n      names = BinaryInput.ARG_ACCEPT_CR,\n      arity = 0,\n      description = \"Accept CR bytes in input sequences (\\\\r).\")\n  private boolean acceptCr;\n\n  @Parameter(\n      names = BinaryInput.ARG_IGNORE_EMPTY,\n      arity = 0,\n      description = \"Ignore empty lines in the input.\")\n  private boolean ignoreEmpty;\n\n  BinaryInput() {}\n\n  public BinaryInput(boolean acceptBom, boolean acceptCr, boolean ignoreEmpty) {\n    this.acceptBom = acceptBom;\n    this.acceptCr = acceptCr;\n    this.ignoreEmpty = ignoreEmpty;\n  }\n\n  List<byte[]> readBinarySequences(Path input, byte separator) throws IOException {\n    final List<byte[]> sequences = new ArrayList<>();\n    try (InputStream is = new BufferedInputStream(Files.newInputStream(input))) {\n      if (!acceptBom) {\n        is.mark(4);\n        if (is.read() == 0xef && is.read() == 0xbb && is.read() == 0xbf) {\n          throw new ExitStatusException(\n              ExitStatus.ERROR_OTHER,\n              \"The input starts with UTF-8 BOM bytes which is most likely not what you want. 
Use\"\n                  + \" header-less UTF-8 file or override with %s.\",\n              ARG_ACCEPT_BOM);\n        }\n        is.reset();\n      }\n\n      forAllLines(\n          is,\n          separator,\n          new LineConsumer() {\n            @Override\n            public byte[] process(byte[] buffer, int length) {\n              if (!acceptCr && hasCr(buffer, length)) {\n                throw new ExitStatusException(\n                    ExitStatus.ERROR_OTHER,\n                    \"The input contains \\\\r byte (CR) which would be encoded as part of the\"\n                        + \" automaton. If this is desired, use %s.\",\n                    ARG_ACCEPT_CR);\n              }\n\n              if (length == 0) {\n                if (!ignoreEmpty) {\n                  throw new ExitStatusException(\n                      ExitStatus.ERROR_OTHER,\n                      \"The input contains empty sequences.\"\n                          + \" If these can be ignored, use --ignore-empty.\");\n                }\n              } else {\n                sequences.add(Arrays.copyOf(buffer, length));\n              }\n\n              return buffer;\n            }\n          });\n    }\n\n    return sequences;\n  }\n\n  private static boolean hasCr(byte[] seq, int length) {\n    for (int o = length; --o >= 0; ) {\n      if (seq[o] == '\\r') {\n        return true;\n      }\n    }\n    return false;\n  }\n\n  /** Read all byte-separated sequences. 
*/\n  private static int forAllLines(InputStream is, byte separator, LineConsumer lineConsumer)\n      throws IOException {\n    int lines = 0;\n    byte[] buffer = new byte[0];\n    int b, pos = 0;\n    while ((b = is.read()) != -1) {\n      if (b == separator) {\n        buffer = lineConsumer.process(buffer, pos);\n        pos = 0;\n        lines++;\n      } else {\n        if (pos >= buffer.length) {\n          buffer =\n              java.util.Arrays.copyOf(buffer, buffer.length + Math.max(10, buffer.length / 10));\n        }\n        buffer[pos++] = (byte) b;\n      }\n    }\n\n    if (pos > 0) {\n      lineConsumer.process(buffer, pos);\n      lines++;\n    }\n    return lines;\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/CliTool.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.JCommander;\nimport com.beust.jcommander.MissingCommandException;\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.ParameterException;\nimport com.beust.jcommander.Parameters;\nimport java.io.PrintStream;\nimport java.util.List;\nimport java.util.Locale;\nimport java.util.concurrent.Callable;\n\n/** Base class for command-line applications. */\npublic abstract class CliTool implements Callable<ExitStatus> {\n  protected static final String ARG_OVERWRITE = \"--overwrite\";\n  protected static final String ARG_VALIDATE = \"--validate\";\n\n  @Parameter(\n      names = {\"--exit\"},\n      hidden = true,\n      arity = 1,\n      description = \"Call System.exit() at the end of command processing.\")\n  private boolean callSystemExit = true;\n\n  @Parameter(\n      names = {\"-h\", \"--help\"},\n      help = true,\n      hidden = true,\n      description = \"Help about options and switches.\")\n  private boolean help;\n\n  public CliTool() {\n    if (!getClass().isAnnotationPresent(Parameters.class)) {\n      throw new RuntimeException();\n    }\n  }\n\n  /**\n   * Call {@link System#exit(int)} at the end of command processing.\n   *\n   * @param flag Call {@link System#exit(int)} if <code>true</code>.\n   */\n  public void setCallSystemExit(boolean flag) {\n    this.callSystemExit = flag;\n  }\n\n  /**\n   * Parse and execute one of the commands.\n   *\n   * @param args Command line arguments (command and options).\n   * @param commands A list of commands.\n   */\n  protected static void main(String[] args, CliTool... 
commands) {\n    if (commands.length == 1) {\n      main(args, commands[0]);\n    } else {\n      JCommander jc = new JCommander();\n      for (CliTool command : commands) {\n        jc.addCommand(command);\n      }\n      jc.addConverterFactory(new CustomParameterConverters());\n      jc.setProgramName(\"\");\n\n      ExitStatus exitStatus = ExitStatus.SUCCESS;\n      try {\n        jc.parse(args);\n\n        final String commandName = jc.getParsedCommand();\n        if (commandName == null) {\n          helpDisplayCommandOptions(System.err, jc);\n        } else {\n          List<Object> objects = jc.getCommands().get(commandName).getObjects();\n          if (objects.size() != 1) {\n            throw new RuntimeException();\n          }\n\n          CliTool command = CliTool.class.cast(objects.get(0));\n          exitStatus = command.call();\n          if (command.callSystemExit) {\n            System.exit(exitStatus.code);\n          }\n        }\n      } catch (ExitStatusException e) {\n        System.err.println(e.getMessage());\n        if (e.getCause() != null) {\n          e.getCause().printStackTrace(System.err);\n        }\n        exitStatus = e.exitStatus;\n      } catch (MissingCommandException e) {\n        System.err.println(\"Invalid argument: \" + e);\n        System.err.println();\n        helpDisplayCommandOptions(System.err, jc);\n        exitStatus = ExitStatus.ERROR_INVALID_ARGUMENTS;\n      } catch (ParameterException e) {\n        System.err.println(\"Invalid argument: \" + e.getMessage());\n        System.err.println();\n\n        if (jc.getParsedCommand() == null) {\n          helpDisplayCommandOptions(System.err, jc);\n        } else {\n          helpDisplayCommandOptions(System.err, jc.getParsedCommand(), jc);\n        }\n        exitStatus = ExitStatus.ERROR_INVALID_ARGUMENTS;\n      } catch (Throwable t) {\n        System.err.println(\"An unhandled exception occurred. 
Stack trace below.\");\n        t.printStackTrace(System.err);\n        exitStatus = ExitStatus.ERROR_OTHER;\n      }\n    }\n  }\n\n  /**\n   * Parse and execute a single command.\n   *\n   * @param args Command line arguments (command and options).\n   * @param command The command to execute.\n   */\n  protected static void main(String[] args, CliTool command) {\n    JCommander jc = new JCommander(command);\n    jc.addConverterFactory(new CustomParameterConverters());\n    jc.setProgramName(command.getClass().getAnnotation(Parameters.class).commandNames()[0]);\n\n    ExitStatus exitStatus = ExitStatus.SUCCESS;\n    try {\n      jc.parse(args);\n      if (command.help) {\n        helpDisplayCommandOptions(System.err, jc);\n      } else {\n        exitStatus = command.call();\n      }\n    } catch (ExitStatusException e) {\n      System.err.println(e.getMessage());\n      if (e.getCause() != null) {\n        e.getCause().printStackTrace(System.err);\n      }\n      exitStatus = e.exitStatus;\n    } catch (MissingCommandException e) {\n      System.err.println(\"Invalid argument: \" + e);\n      System.err.println();\n      helpDisplayCommandOptions(System.err, jc);\n      exitStatus = ExitStatus.ERROR_INVALID_ARGUMENTS;\n    } catch (ParameterException e) {\n      System.err.println(\"Invalid argument: \" + e.getMessage());\n      System.err.println();\n\n      if (jc.getParsedCommand() == null) {\n        helpDisplayCommandOptions(System.err, jc);\n      } else {\n        helpDisplayCommandOptions(System.err, jc.getParsedCommand(), jc);\n      }\n      exitStatus = ExitStatus.ERROR_INVALID_ARGUMENTS;\n    } catch (Throwable t) {\n      System.err.println(\"An unhandled exception occurred. Stack trace below.\");\n      t.printStackTrace(System.err);\n      exitStatus = ExitStatus.ERROR_OTHER;\n    }\n\n    if (command.callSystemExit) {\n      System.exit(exitStatus.code);\n    }\n  }\n\n  protected static void printf(String msg, Object... 
args) {\n    System.out.println(String.format(Locale.ROOT, msg, args));\n  }\n\n  protected static <T> T checkNotNull(T arg) {\n    if (arg == null) {\n      throw new IllegalArgumentException(\"Argument must not be null.\");\n    }\n    return arg;\n  }\n\n  private static void helpDisplayCommandOptions(PrintStream pw, String command, JCommander jc) {\n    StringBuilder sb = new StringBuilder();\n    jc = jc.getCommands().get(command);\n    jc.getUsageFormatter().usage(sb, \"\");\n    pw.print(sb);\n  }\n\n  private static void helpDisplayCommandOptions(PrintStream pw, JCommander jc) {\n    StringBuilder sb = new StringBuilder();\n    jc.getUsageFormatter().usage(sb, \"\");\n    pw.print(sb);\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/CustomParameterConverters.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.IStringConverter;\nimport com.beust.jcommander.IStringConverterFactory;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\n\nclass CustomParameterConverters implements IStringConverterFactory {\n  public static class PathConverter implements IStringConverter<Path> {\n    @Override\n    public Path convert(String value) {\n      return Paths.get(value);\n    }\n  }\n\n  @Override\n  public Class<? extends IStringConverter<?>> getConverter(Class<?> forType) {\n    if (forType.equals(Path.class)) {\n      return PathConverter.class;\n    }\n    return null;\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/DictApply.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.Parameters;\nimport java.io.BufferedInputStream;\nimport java.io.BufferedReader;\nimport java.io.Closeable;\nimport java.io.Console;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.nio.charset.Charset;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.List;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.WordData;\n\n/** Applies a morphological dictionary automaton to the input. */\n@Parameters(\n    commandNames = \"dict_apply\",\n    commandDescription = \"Applies a dictionary to an input. Each line is considered an input term.\")\npublic class DictApply extends CliTool {\n  private static final String ARG_ENCODING = \"--input-charset\";\n\n  @Parameter(\n      names = {\"-i\", \"--input\"},\n      required = false,\n      description = \"The input file, each entry in a single line. 
If not provided, stdin is used.\",\n      validateValueWith = ValidateFileExists.class)\n  private Path input;\n\n  @Parameter(\n      names = {\"-d\", \"--dictionary\"},\n      description = \"The dictionary (*.dict and a sibling *.info metadata) to apply.\",\n      required = true,\n      validateValueWith = ValidateFileExists.class)\n  private Path dictionary;\n\n  @Parameter(\n      names = {ARG_ENCODING},\n      required = false,\n      description = \"Character encoding of the input (platform's default).\")\n  private String inputEncoding;\n\n  @Parameter(\n      names = {\"--skip-tags\"},\n      required = false,\n      description = \"Skip tags in the output, only print base forms if found.\")\n  private boolean skipTags = false;\n\n  private abstract class LineSupplier implements Closeable {\n    public abstract String nextLine() throws IOException;\n\n    @Override\n    public void close() throws IOException {\n      // No-op by default.\n    }\n  }\n\n  private class ReaderLineSupplier extends LineSupplier {\n    private final BufferedReader lineReader;\n\n    public ReaderLineSupplier(BufferedReader reader) {\n      this.lineReader = reader;\n    }\n\n    @Override\n    public String nextLine() throws IOException {\n      return lineReader.readLine();\n    }\n\n    @Override\n    public void close() throws IOException {\n      lineReader.close();\n    }\n  }\n\n  DictApply() {}\n\n  public DictApply(Path dictionary, Path input, String inputEncoding) {\n    this.input = checkNotNull(input);\n    this.dictionary = checkNotNull(dictionary);\n    this.inputEncoding = inputEncoding;\n  }\n\n  @Override\n  public ExitStatus call() throws Exception {\n    ExitStatus exitStatus = validateArguments();\n    if (exitStatus != null) {\n      return exitStatus;\n    }\n\n    final DictionaryLookup lookup = new DictionaryLookup(Dictionary.read(this.dictionary));\n    try (final LineSupplier input = determineInput()) {\n      String line;\n      while ((line = input.nextLine()) != null) {\n        if 
(line.length() == 0) {\n          continue;\n        }\n\n        List<WordData> wordData = lookup.lookup(line);\n        if (wordData.isEmpty()) {\n          System.out.println(line + \" => [not found]\");\n        } else {\n          for (WordData wd : wordData) {\n            CharSequence stem = wd.getStem();\n            CharSequence tag = wd.getTag();\n            System.out.println(\n                line + \" => \" + ((skipTags || tag == null) ? stem : stem + \" \" + tag));\n          }\n        }\n      }\n    }\n\n    return ExitStatus.SUCCESS;\n  }\n\n  private LineSupplier determineInput() throws IOException {\n    if (this.input != null) {\n      return new ReaderLineSupplier(\n          Files.newBufferedReader(this.input, Charset.forName(inputEncoding)));\n    }\n\n    final Console c = System.console();\n    if (c != null) {\n      System.err.println(\n          \"NOTE: Using Console for input, character encoding is unknown but should be all right.\");\n      return new LineSupplier() {\n        @Override\n        public String nextLine() throws IOException {\n          return c.readLine();\n        }\n      };\n    }\n\n    Charset charset =\n        this.inputEncoding != null ? 
Charset.forName(this.inputEncoding) : Charset.defaultCharset();\n    System.err.println(\n        \"NOTE: Using stdin for input, character encoding set to: \"\n            + charset.name()\n            + \" (use \"\n            + ARG_ENCODING\n            + \" to override).\");\n    return new ReaderLineSupplier(\n        new BufferedReader(new InputStreamReader(new BufferedInputStream(System.in), charset)));\n  }\n\n  private ExitStatus validateArguments() {\n    if (this.input != null) {\n      if (this.inputEncoding == null) {\n        System.err.println(\"Input encoding is required if file input is used.\");\n        return ExitStatus.ERROR_INVALID_ARGUMENTS;\n      }\n    } else {\n      if (System.console() != null && this.inputEncoding != null) {\n        System.err.println(\"Input encoding is only valid with file input or stdin redirection.\");\n        return ExitStatus.ERROR_INVALID_ARGUMENTS;\n      }\n    }\n\n    return null;\n  }\n\n  public static void main(String[] args) {\n    main(args, new DictApply());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/DictCompile.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.Parameters;\nimport com.beust.jcommander.ParametersDelegate;\nimport java.io.BufferedInputStream;\nimport java.io.BufferedOutputStream;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.charset.CharsetDecoder;\nimport java.nio.charset.CodingErrorAction;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.Collections;\nimport java.util.Iterator;\nimport java.util.List;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.builders.FSABuilder;\nimport morfologik.fsa.builders.FSASerializer;\nimport morfologik.stemming.BufferUtils;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.DictionaryMetadata;\nimport morfologik.stemming.ISequenceEncoder;\n\n/** Compiles a morphological dictionary automaton. */\n@Parameters(\n    commandNames = \"dict_compile\",\n    commandDescription = \"Compiles a morphological dictionary automaton.\")\npublic class DictCompile extends CliTool {\n  @Parameter(\n      names = {\"-i\", \"--input\"},\n      description =\n          \"The input file (base,inflected,tag). 
An associated metadata (*.info) file must exist.\",\n      required = true,\n      validateValueWith = ValidateFileExists.class)\n  private Path input;\n\n  @Parameter(\n      names = ARG_VALIDATE,\n      arity = 1,\n      description = \"Validate input to make sure it makes sense.\")\n  private boolean validate = true;\n\n  @Parameter(\n      names = {\"-f\", \"--format\"},\n      description = \"Automaton serialization format.\")\n  private SerializationFormat format = SerializationFormat.FSA5;\n\n  @Parameter(names = ARG_OVERWRITE, description = \"Overwrite the output file if it exists.\")\n  private boolean overwrite;\n\n  @ParametersDelegate private final BinaryInput binaryInput;\n\n  DictCompile() {\n    binaryInput = new BinaryInput();\n  }\n\n  public DictCompile(\n      Path input,\n      boolean overwrite,\n      boolean validate,\n      boolean acceptBom,\n      boolean acceptCr,\n      boolean ignoreEmpty) {\n    this.input = checkNotNull(input);\n    this.overwrite = overwrite;\n    this.validate = validate;\n    this.binaryInput = new BinaryInput(acceptBom, acceptCr, ignoreEmpty);\n  }\n\n  @Override\n  public ExitStatus call() throws Exception {\n    final Path metadataPath = DictionaryMetadata.getExpectedMetadataLocation(input);\n\n    if (!Files.isRegularFile(metadataPath)) {\n      System.err.println(\"Dictionary metadata file for the input does not exist: \" + metadataPath);\n      System.err.println(\n          \"The metadata file (with at least the column separator and byte encoding) \"\n              + \"is required. 
Check out the examples.\");\n      return ExitStatus.ERROR_OTHER;\n    }\n\n    final Path output =\n        metadataPath.resolveSibling(\n            metadataPath\n                .getFileName()\n                .toString()\n                .replaceAll(\"\\\\.\" + DictionaryMetadata.METADATA_FILE_EXTENSION + \"$\", \".dict\"));\n\n    if (!overwrite && Files.exists(output)) {\n      throw new ExitStatusException(\n          ExitStatus.ERROR_CONFIRMATION_REQUIRED,\n          \"Output dictionary file already exists: %s, use %s to override.\",\n          output,\n          ARG_OVERWRITE);\n    }\n\n    final DictionaryMetadata metadata;\n    try (InputStream is = new BufferedInputStream(Files.newInputStream(metadataPath))) {\n      metadata = DictionaryMetadata.read(is);\n    }\n\n    final List<byte[]> sequences = binaryInput.readBinarySequences(input, (byte) '\\n');\n\n    final CharsetDecoder charsetDecoder =\n        metadata\n            .getDecoder()\n            .onMalformedInput(CodingErrorAction.REPORT)\n            .onUnmappableCharacter(CodingErrorAction.REPORT);\n\n    final byte separator = metadata.getSeparator();\n    final ISequenceEncoder sequenceEncoder = metadata.getSequenceEncoderType().get();\n\n    if (!sequences.isEmpty()) {\n      Iterator<byte[]> i = sequences.iterator();\n      byte[] row = i.next();\n      final int separatorCount = countOf(separator, row);\n\n      if (separatorCount < 1 || separatorCount > 2) {\n        throw new ExitStatusException(\n            ExitStatus.ERROR_OTHER,\n            \"Invalid input. Each row must consist of [base,inflected,tag?] columns, where ',' is a\"\n                + \" separator character (declared as: %s). This row contains %d separator\"\n                + \" characters: %s\",\n            Character.isJavaIdentifierPart(metadata.getSeparatorAsChar())\n                ? 
\"'\" + Character.toString(metadata.getSeparatorAsChar()) + \"'\"\n                : \"0x\" + Integer.toHexString((int) separator & 0xff),\n            separatorCount,\n            new String(row, charsetDecoder.charset()));\n      }\n\n      while (i.hasNext()) {\n        row = i.next();\n        int count = countOf(separator, row);\n        if (count != separatorCount) {\n          throw new ExitStatusException(\n              ExitStatus.ERROR_OTHER,\n              \"The number of separators (%d) is inconsistent with previous lines: %s\",\n              count,\n              new String(row, charsetDecoder.charset()));\n        }\n      }\n    }\n\n    ByteBuffer encoded = ByteBuffer.allocate(0);\n    ByteBuffer source = ByteBuffer.allocate(0);\n    ByteBuffer target = ByteBuffer.allocate(0);\n    ByteBuffer tag = ByteBuffer.allocate(0);\n    ByteBuffer assembled = ByteBuffer.allocate(0);\n    for (int i = 0, max = sequences.size(); i < max; i++) {\n      byte[] row = sequences.get(i);\n      int sep1 = indexOf(separator, row, 0);\n      int sep2 = indexOf(separator, row, sep1 + 1);\n      if (sep2 < 0) {\n        sep2 = row.length;\n      }\n\n      source = BufferUtils.clearAndEnsureCapacity(source, sep1);\n      source.put(row, 0, sep1);\n      source.flip();\n\n      final int len = sep2 - (sep1 + 1);\n      target = BufferUtils.clearAndEnsureCapacity(target, len);\n      target.put(row, sep1 + 1, len);\n      target.flip();\n\n      final int len2 = row.length - (sep2 + 1);\n      tag = BufferUtils.clearAndEnsureCapacity(tag, len2);\n      if (len2 > 0) {\n        tag.put(row, sep2 + 1, len2);\n      }\n      tag.flip();\n\n      encoded = sequenceEncoder.encode(encoded, target, source);\n\n      assembled =\n          BufferUtils.clearAndEnsureCapacity(\n              assembled, target.remaining() + 1 + encoded.remaining() + 1 + tag.remaining());\n\n      assembled.put(target);\n      assembled.put(separator);\n      assembled.put(encoded);\n      if 
(tag.hasRemaining()) {\n        assembled.put(separator);\n        assembled.put(tag);\n      }\n      assembled.flip();\n\n      sequences.set(i, BufferUtils.toArray(assembled));\n    }\n\n    Collections.sort(sequences, FSABuilder.LEXICAL_ORDERING);\n    FSA fsa = FSABuilder.build(sequences);\n\n    FSASerializer serializer = format.getSerializer();\n    try (OutputStream os = new BufferedOutputStream(Files.newOutputStream(output))) {\n      serializer.serialize(fsa, os);\n    }\n\n    // If validating, try to scan the input\n    if (validate) {\n      DictionaryLookup dictionaryLookup = new DictionaryLookup(new Dictionary(fsa, metadata));\n      for (Iterator<?> i = dictionaryLookup.iterator(); i.hasNext(); i.next()) {\n        // Do nothing, just scan and make sure no exceptions are thrown.\n      }\n    }\n\n    return ExitStatus.SUCCESS;\n  }\n\n  private static int countOf(byte separator, byte[] row) {\n    int cnt = 0;\n    for (int i = row.length; --i >= 0; ) {\n      if (row[i] == separator) {\n        cnt++;\n      }\n    }\n    return cnt;\n  }\n\n  private static int indexOf(byte separator, byte[] row, int fromIndex) {\n    while (fromIndex < row.length) {\n      if (row[fromIndex] == separator) {\n        return fromIndex;\n      }\n      fromIndex++;\n    }\n    return -1;\n  }\n\n  public static void main(String[] args) {\n    main(args, new DictCompile());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/DictDecompile.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.Parameters;\nimport java.io.BufferedOutputStream;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.WordData;\n\n/** Decompiles morphological dictionary automaton back to source state. */\n@Parameters(\n    commandNames = \"dict_decompile\",\n    commandDescription = \"Decompiles morphological dictionary automaton back to source state.\")\npublic class DictDecompile extends CliTool {\n  @Parameter(\n      names = {\"-i\", \"--input\"},\n      description = \"The input dictionary (*.dict and a sibling *.info metadata).\",\n      required = true,\n      validateValueWith = ValidateFileExists.class)\n  private Path input;\n\n  @Parameter(\n      names = {\"-o\", \"--output\"},\n      description = \"The output file for dictionary data.\")\n  private Path output;\n\n  @Parameter(names = ARG_OVERWRITE, description = \"Overwrite the output file if it exists.\")\n  private boolean overwrite;\n\n  @Parameter(\n      names = ARG_VALIDATE,\n      arity = 1,\n      description = \"Validate decoded output to make sure it can be re-encoded.\")\n  private boolean validate = true;\n\n  DictDecompile() {}\n\n  public DictDecompile(Path input, Path output, boolean overwrite, boolean validate) {\n    this.input = checkNotNull(input);\n    this.output = output;\n    this.overwrite = overwrite;\n    this.validate = validate;\n  }\n\n  @Override\n  public ExitStatus call() throws Exception {\n    final Dictionary dictionary = Dictionary.read(input);\n    final DictionaryLookup lookup = new DictionaryLookup(dictionary);\n\n    if (output == null) {\n      output =\n          input.resolveSibling(\n              input.getFileName().toString().replaceAll(\"\\\\.dict$\", 
\"\") + \".input\");\n      if (Files.exists(output) && !overwrite) {\n        System.err.println(\n            \"ERROR: the default output file location already exists. Use --overwrite or remove\"\n                + \" the file manually: \"\n                + output.toString());\n        return ExitStatus.ERROR_CONFIRMATION_REQUIRED;\n      }\n    }\n\n    final byte separator = dictionary.metadata.getSeparator();\n    ByteBuffer stem = ByteBuffer.allocate(0);\n    ByteBuffer word = ByteBuffer.allocate(0);\n    ByteBuffer tag = ByteBuffer.allocate(0);\n    try (OutputStream os = new BufferedOutputStream(Files.newOutputStream(output))) {\n      boolean hasTags = false;\n      for (WordData wd : lookup) {\n        tag = wd.getTagBytes(tag);\n        if (tag.hasRemaining()) {\n          hasTags = true;\n          break;\n        }\n      }\n\n      for (WordData wd : lookup) {\n        stem = wd.getStemBytes(stem);\n        word = wd.getWordBytes(word);\n        tag = wd.getTagBytes(tag);\n\n        write(os, stem);\n        os.write(separator);\n        write(os, word);\n        if (hasTags) {\n          os.write(separator);\n          write(os, tag);\n        }\n        os.write('\\n');\n\n        if (validate\n            && (ensureNoSeparator(stem, separator) || ensureNoSeparator(word, separator))) {\n          System.err.println(\n              \"ERROR: The stem or word of a dictionary entry contains separator \"\n                  + \" byte \"\n                  + FSAInfo.byteAsChar(separator)\n                  + \", this will prevent proper re-encoding.\"\n                  + \" Add '--validate false' to override. 
Offending entry: \"\n                  + wd.getStem()\n                  + \", \"\n                  + wd.getWord());\n          return ExitStatus.ERROR_OTHER;\n        }\n      }\n    }\n\n    return ExitStatus.SUCCESS;\n  }\n\n  private void write(OutputStream os, ByteBuffer bb) throws IOException {\n    os.write(bb.array(), bb.arrayOffset() + bb.position(), bb.remaining());\n  }\n\n  private boolean ensureNoSeparator(ByteBuffer bb, byte marker) {\n    byte[] buf = bb.array();\n    for (int o = bb.arrayOffset() + bb.position(), i = bb.remaining(); i > 0; i--) {\n      if (buf[o++] == marker) {\n        return true;\n      }\n    }\n    return false;\n  }\n\n  public static void main(String[] args) {\n    main(args, new DictDecompile());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/ExitStatus.java",
    "content": "package morfologik.tools;\n\npublic enum ExitStatus {\n  /** The command was successful. */\n  SUCCESS(0),\n\n  /** Unknown error cause. */\n  ERROR_OTHER(1),\n\n  /** Invalid input arguments or their combination. */\n  ERROR_INVALID_ARGUMENTS(2),\n\n  /** A potentially destructive command requires explicit confirmation that was not present. */\n  ERROR_CONFIRMATION_REQUIRED(3);\n\n  public final int code;\n\n  private ExitStatus(int systemExitCode) {\n    this.code = systemExitCode;\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/ExitStatusException.java",
    "content": "package morfologik.tools;\n\nimport java.util.Locale;\n\n@SuppressWarnings(\"serial\")\nclass ExitStatusException extends RuntimeException {\n  final ExitStatus exitStatus;\n\n  public ExitStatusException(ExitStatus status, String message, Object... args) {\n    this(status, null, message, args);\n  }\n\n  public ExitStatusException(ExitStatus status, Throwable t, String message, Object... args) {\n    super(String.format(Locale.ROOT, message, args), t);\n    this.exitStatus = status;\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/FSABuild.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameters;\n\n@Parameters(\n    hidden = true,\n    commandNames = \"fsa_build\",\n    commandDescription = \"Builds finite state automaton from \\\\n-delimited input.\")\n@Deprecated\npublic class FSABuild extends FSACompile {}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/FSACompile.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.Parameters;\nimport com.beust.jcommander.ParametersDelegate;\nimport java.io.BufferedOutputStream;\nimport java.io.OutputStream;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.Collections;\nimport java.util.List;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.builders.FSABuilder;\nimport morfologik.fsa.builders.FSASerializer;\n\n/** Build finite state automaton out of text input. */\n@Parameters(\n    commandNames = {\"fsa_compile\"},\n    commandDescription = \"Builds finite state automaton from \\\\n-delimited input.\")\npublic class FSACompile extends CliTool {\n  @Parameter(\n      names = {\"-i\", \"--input\"},\n      description = \"The input sequences (one sequence per \\\\n-delimited line).\",\n      required = true,\n      validateValueWith = ValidateFileExists.class)\n  private Path input;\n\n  @Parameter(\n      names = {\"-o\", \"--output\"},\n      description = \"The output automaton file.\",\n      required = true,\n      validateValueWith = ValidateParentDirExists.class)\n  private Path output;\n\n  @Parameter(\n      names = {\"-f\", \"--format\"},\n      description = \"Automaton serialization format.\")\n  private SerializationFormat format = SerializationFormat.FSA5;\n\n  @ParametersDelegate private final BinaryInput binaryInput;\n\n  FSACompile() {\n    binaryInput = new BinaryInput();\n  }\n\n  public FSACompile(\n      Path input,\n      Path output,\n      SerializationFormat format,\n      boolean acceptBom,\n      boolean acceptCr,\n      boolean ignoreEmpty) {\n    this.input = checkNotNull(input);\n    this.output = checkNotNull(output);\n    this.format = checkNotNull(format);\n    this.binaryInput = new BinaryInput(acceptBom, acceptCr, ignoreEmpty);\n  }\n\n  @Override\n  public ExitStatus call() throws Exception {\n    final List<byte[]> sequences = binaryInput.readBinarySequences(input, (byte) '\\n');\n\n    
Collections.sort(sequences, FSABuilder.LEXICAL_ORDERING);\n    FSA fsa = FSABuilder.build(sequences);\n\n    FSASerializer serializer = format.getSerializer();\n    try (OutputStream os = new BufferedOutputStream(Files.newOutputStream(output))) {\n      serializer.serialize(fsa, os);\n    }\n\n    return ExitStatus.SUCCESS;\n  }\n\n  public static void main(String[] args) {\n    main(args, new FSACompile());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/FSADecompile.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.Parameters;\nimport java.io.BufferedInputStream;\nimport java.io.BufferedOutputStream;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport morfologik.fsa.FSA;\n\n/** Dump all byte sequences encoded in a finite state automaton. */\n@Parameters(\n    commandNames = \"fsa_decompile\",\n    commandDescription = \"Dumps all sequences encoded in an automaton.\")\npublic class FSADecompile extends CliTool {\n  @Parameter(\n      names = {\"-i\", \"--input\"},\n      description = \"The input automaton.\",\n      required = true,\n      validateValueWith = ValidateFileExists.class)\n  private Path input;\n\n  @Parameter(\n      names = {\"-o\", \"--output\"},\n      description = \"The output file for byte sequences.\",\n      required = true,\n      validateValueWith = ValidateParentDirExists.class)\n  private Path output;\n\n  FSADecompile() {}\n\n  public FSADecompile(Path input, Path output) {\n    this.input = checkNotNull(input);\n    this.output = checkNotNull(output);\n  }\n\n  @Override\n  public ExitStatus call() throws Exception {\n    final FSA fsa;\n    try (InputStream is = new BufferedInputStream(Files.newInputStream(input))) {\n      fsa = FSA.read(is);\n    }\n\n    try (OutputStream os = new BufferedOutputStream(Files.newOutputStream(output))) {\n      for (ByteBuffer bb : fsa) {\n        int o = bb.arrayOffset();\n        os.write(bb.array(), o + bb.position(), bb.remaining());\n        os.write('\\n');\n      }\n    }\n\n    return ExitStatus.SUCCESS;\n  }\n\n  public static void main(String[] args) {\n    main(args, new FSADecompile());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/FSADump.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameters;\n\n@Parameters(\n    hidden = true,\n    commandNames = \"fsa_dump\",\n    commandDescription = \"Dumps all sequences encoded in an automaton.\")\n@Deprecated\npublic class FSADump extends FSADecompile {}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/FSAInfo.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.Parameter;\nimport com.beust.jcommander.Parameters;\nimport java.io.BufferedInputStream;\nimport java.io.InputStream;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.Locale;\nimport morfologik.fsa.CFSA;\nimport morfologik.fsa.CFSA2;\nimport morfologik.fsa.FSA;\nimport morfologik.fsa.FSA5;\n\n/** Print extra information about a compiled automaton file. */\n@Parameters(\n    commandNames = \"fsa_info\",\n    commandDescription = \"Print extra information about a compiled automaton file.\")\npublic class FSAInfo extends CliTool {\n  @Parameter(\n      names = {\"-i\", \"--input\"},\n      description = \"The input automaton.\",\n      required = true,\n      validateValueWith = ValidateFileExists.class)\n  private Path input;\n\n  FSAInfo() {}\n\n  public FSAInfo(Path input) {\n    this.input = checkNotNull(input);\n  }\n\n  @Override\n  public ExitStatus call() throws Exception {\n    final FSA fsa;\n    try (InputStream is = new BufferedInputStream(Files.newInputStream(input))) {\n      fsa = FSA.read(is);\n    }\n\n    printf(\"%-25s : %s\", \"FSA implementation\", fsa.getClass().getName());\n    printf(\"%-25s : %s\", \"Compiled with flags\", fsa.getFlags().toString());\n\n    final morfologik.fsa.builders.FSAInfo info = new morfologik.fsa.builders.FSAInfo(fsa);\n    printf(\"%-25s : %,d\", \"Number of arcs (merged)\", info.arcsCount);\n    printf(\"%-25s : %,d\", \"Number of arcs (total)\", info.arcsCountTotal);\n    printf(\"%-25s : %,d\", \"Number of nodes\", info.nodeCount);\n    printf(\"%-25s : %,d\", \"Number of final states\", info.finalStatesCount);\n    printf(\"\");\n\n    if (fsa instanceof FSA5) {\n      FSA5 fsa5 = (FSA5) fsa;\n      printf(\"%-25s : %d\", \"Goto length (GTL)\", fsa5.gtl);\n      printf(\"%-25s : %d\", \"Node extra data\", fsa5.nodeDataLength);\n      printf(\"%-25s : %s\", \"Annotation separator\", byteAsChar(fsa5.annotation));\n      
printf(\"%-25s : %s\", \"Filler character\", byteAsChar(fsa5.filler));\n    }\n\n    if (fsa instanceof CFSA) {\n      CFSA cfsa = (CFSA) fsa;\n      printf(\"%-25s : %d\", \"Goto length (GTL)\", cfsa.gtl);\n      printf(\"%-25s : %d\", \"Node extra data\", cfsa.nodeDataLength);\n    }\n\n    if (fsa instanceof CFSA2) {\n      CFSA2 cfsa2 = (CFSA2) fsa;\n\n      byte[] labelMapping = cfsa2.labelMapping;\n      if (labelMapping != null && labelMapping.length > 0) {\n        printf(\"%-25s :\", \"Label mapping\");\n        for (int i = 0; i < labelMapping.length; i++) {\n          printf(\"%-25s   %2d -> %s\", \"\", i, byteAsChar(labelMapping[i]));\n        }\n      }\n    }\n\n    return ExitStatus.SUCCESS;\n  }\n\n  /** Convert a byte to an informative string. */\n  static String byteAsChar(byte v) {\n    int chr = v & 0xff;\n    return String.format(\n        Locale.ROOT,\n        \"%s (0x%02x)\",\n        (Character.isWhitespace(chr) || chr > 127)\n            ? \"[non-printable]\"\n            : Character.toString((char) chr),\n        v & 0xFF);\n  }\n\n  public static void main(String[] args) {\n    main(args, new FSAInfo());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/Launcher.java",
    "content": "package morfologik.tools;\n\n/** JAR entry point. */\npublic final class Launcher {\n  private Launcher() {}\n\n  @SuppressWarnings(\"deprecation\")\n  public static void main(String[] args) {\n    CliTool.main(\n        args,\n        new FSACompile(),\n        new FSADump(),\n        new FSADecompile(),\n        new FSABuild(),\n        new FSAInfo(),\n        new DictCompile(),\n        new DictDecompile(),\n        new DictApply());\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/SerializationFormat.java",
    "content": "package morfologik.tools;\n\nimport morfologik.fsa.builders.CFSA2Serializer;\nimport morfologik.fsa.builders.FSA5Serializer;\nimport morfologik.fsa.builders.FSASerializer;\n\n/** The serialization and encoding format to use for compressing the automaton. */\npublic enum SerializationFormat {\n  FSA5 {\n    @Override\n    FSASerializer getSerializer() {\n      return new FSA5Serializer();\n    }\n  },\n\n  CFSA2 {\n    @Override\n    CFSA2Serializer getSerializer() {\n      return new CFSA2Serializer();\n    }\n  };\n\n  abstract FSASerializer getSerializer();\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/ValidateFileExists.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.IValueValidator;\nimport com.beust.jcommander.ParameterException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.Locale;\n\npublic final class ValidateFileExists implements IValueValidator<Path> {\n  @Override\n  public void validate(String name, Path value) throws ParameterException {\n    if (!Files.exists(value)) {\n      throw new ParameterException(\n          String.format(Locale.ROOT, \"%s does not exist: %s\", name, value));\n    }\n\n    if (!Files.isRegularFile(value)) {\n      throw new ParameterException(String.format(Locale.ROOT, \"%s is not a file: %s\", name, value));\n    }\n\n    if (!Files.isReadable(value)) {\n      throw new ParameterException(\n          String.format(Locale.ROOT, \"%s is not readable: %s\", name, value));\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/java/morfologik/tools/ValidateParentDirExists.java",
    "content": "package morfologik.tools;\n\nimport com.beust.jcommander.IValueValidator;\nimport com.beust.jcommander.ParameterException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.Locale;\n\npublic final class ValidateParentDirExists implements IValueValidator<Path> {\n  @Override\n  public void validate(String name, Path value) throws ParameterException {\n    value = value.toAbsolutePath().normalize().getParent();\n\n    if (!Files.exists(value)) {\n      throw new ParameterException(\n          String.format(Locale.ROOT, \"Directory does not exist: %s\", value));\n    }\n\n    if (!Files.isDirectory(value)) {\n      throw new ParameterException(\n          String.format(Locale.ROOT, \"Path is not a directory: %s\", value));\n    }\n\n    if (!Files.isWritable(value)) {\n      throw new ParameterException(String.format(Locale.ROOT, \"Path is not writable: %s\", value));\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/main/package/README.txt",
    "content": "${project.artifactId}, ${project.version}\r\n\r\nTools for morphological dictionary and finite state automata construction.\r\nhttps://github.com/morfologik\r\n\r\nTry the examples (each one comes with a simple description of what it does).\r\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/01-fsa-build.input",
    "content": "black sabbath\nmetallica\njudas priest\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/01-fsa-build.txt",
    "content": "# This example constructs a finite state automaton (FSA) out\r\n# of byte sequences in the input file:\r\n#\r\n# https://en.wikipedia.org/wiki/Finite-state_machine\r\n#\r\n# Each sequence is encoded as one path in the automaton. The input consists \r\n# of LF-separated byte sequences.\r\n#\r\n\r\n# This command produces an automaton serialized as FSA5 (a format compatible with Jan Daciuk's fsa_build).\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar fsa_build --input 01-fsa-build.input --output 01-fsa-build.fsa5  --format fsa5\r\n\r\n# This command uses CFSA2, a custom format that packs slightly better but is slower at runtime.\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar fsa_build --input 01-fsa-build.input --output 01-fsa-build.cfsa2 --format cfsa2\r\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/02-fsa-dump.txt",
    "content": "# This example dumps byte sequences from a finite\r\n# state automaton (created in a previous example), \r\n# separating each sequence with an LF byte.\r\n\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar fsa_dump --input 01-fsa-build.fsa5 --output 02-fsa-dump.output\r\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/03-fsa-info.txt",
    "content": "# This example prints diagnostic information about\r\n# a compiled automaton.\r\n\r\necho \"FSA5:\"\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar fsa_info --input 01-fsa-build.fsa5\r\n\r\necho \"CFSA2:\"\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar fsa_info --input 01-fsa-build.cfsa2\r\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/04-dict-compile.info",
    "content": "#\r\n# Dictionary metadata. A Java property file, read as UTF-8.\r\n#\r\n\r\n#\r\n# REQUIRED PROPERTIES\r\n#\r\n\r\n# Column (lemma, inflected, tag) separator. This must be a single byte in the target encoding.\r\nfsa.dict.separator=;\r\n\r\n# The charset in which the input is encoded. UTF-8 is strongly recommended.\r\nfsa.dict.encoding=UTF-8\r\n\r\n# The type of lemma-inflected form encoding compression that precedes automaton\r\n# construction. Allowed values: [suffix, infix, prefix, none].\r\n# Details are in Daciuk's paper and in the code. \r\n# Leave at 'prefix' if not sure.\r\nfsa.dict.encoder=prefix\r\n\r\n\r\n#\r\n# OPTIONAL PROPERTIES\r\n#\r\n\r\n# Author of the dictionary.\r\nfsa.dict.author=Acme Inc.\r\n\r\n# Date the dictionary data was assembled (not compilation time!).\r\nfsa.dict.created=2013/10/24 18:18:00\r\n\r\n# The license for the dictionary data.\r\nfsa.dict.license=(license here)\r\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/04-dict-compile.input",
    "content": "jawa;jawy;subst:pl:acc:f\njawa;jawy;subst:pl:nom:f\njawa;jawy;subst:pl:voc:f\njawa;jawy;subst:sg:gen:f\njawór;jawór;subst:sg:acc:m3+subst:sg:nom:m3\njaw;jawów;subst:pl:gen:m3\njawa;jawą;subst:sg:inst:f\njawa;jawę;subst:sg:acc:f"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/04-dict-compile.txt",
    "content": "#\r\n# This example compiles a dictionary for use with DictionaryLookup\r\n# (dictionary-driven stemming and morphological tag lookup).\r\n#\r\n# The input file must contain, in each \\n-delimited line, a sequence of:\r\n# \r\n# lemma;inflected;tag\r\n#\r\n# The separator character (byte) is configurable.\r\n# The tag is optional.\r\n# \r\n# Note that, in addition to the input file, the compiler will\r\n# also require an associated dictionary \"metadata\" file, which tells\r\n# it how to compress and interpret the input.\r\n#\r\n# Open and inspect the content of this example's input files:\r\n#   04-dict-compile.input\r\n#   04-dict-compile.info\r\n#\r\n\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar dict_compile --input 04-dict-compile.input\r\n\r\n# The compiled dictionary should be written to 04-dict-compile.dict.\r\n"
  },
  {
    "path": "morfologik-tools/src/main/package/examples/05-dict-decompile.txt",
    "content": "#\r\n# This example decompiles an existing dictionary into\r\n# its source form (columns).\r\n#\r\n# The input must point at the *.dict file (automaton), which\r\n# must have an associated metadata (*.info) file.\r\n#\r\njava -jar ../lib/${project.artifactId}-${project.version}.jar dict_decompile --input 04-dict-compile.dict --output 05-dict-decompile.input\r\n"
  },
  {
    "path": "morfologik-tools/src/test/java/morfologik/tools/DictCompileBug.java",
    "content": "package morfologik.tools;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomNumbers;\nimport java.io.Writer;\nimport java.nio.charset.StandardCharsets;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.LinkedHashSet;\nimport java.util.Random;\nimport java.util.Set;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.DictionaryMetadata;\nimport morfologik.stemming.EncoderType;\nimport morfologik.stemming.WordData;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.io.TempDir;\n\n@Randomized\npublic class DictCompileBug extends RandomizedTest {\n  @Test\n  public void testSeparatorInEncoded(@TempDir Path tempDir, Random rnd) throws Exception {\n    final Path input = tempDir.resolve(\"dictionary.input\");\n    final Path metadata = DictionaryMetadata.getExpectedMetadataLocation(input);\n\n    char separator = '_';\n    try (Writer writer = Files.newBufferedWriter(metadata, StandardCharsets.UTF_8)) {\n      DictionaryMetadata.builder()\n          .separator(separator)\n          .encoder(EncoderType.SUFFIX)\n          .encoding(StandardCharsets.UTF_8)\n          .build()\n          .write(writer);\n    }\n\n    Set<String> sequences = new LinkedHashSet<>();\n    for (int seqs = RandomNumbers.randomIntInRange(rnd, 0, 100); --seqs >= 0; ) {\n      sequences.add(\"anfragen_anfragen|VER:1:PLU:KJ1:SFT:NEB\");\n      sequences.add(\"Anfragen_anfragen|VER:1:PLU:KJ1:SFT:NEB\");\n    }\n\n    try (Writer writer = Files.newBufferedWriter(input, StandardCharsets.UTF_8)) {\n      for (String in : sequences) {\n        writer.write(in);\n        writer.write('\\n');\n      }\n    }\n\n    Assertions.assertThat(new DictCompile(input, false, true, false, false, false).call())\n   
     .isEqualTo(ExitStatus.SUCCESS);\n\n    Path dict = input.resolveSibling(\"dictionary.dict\");\n    Assertions.assertThat(dict).isRegularFile();\n\n    // Verify the dictionary is valid.\n\n    DictionaryLookup dictionaryLookup = new DictionaryLookup(Dictionary.read(dict));\n    for (WordData wd : dictionaryLookup) {\n      System.out.println(wd);\n    }\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/test/java/morfologik/tools/DictCompileTest.java",
    "content": "package morfologik.tools;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomNumbers;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomPicks;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomStrings;\nimport java.io.Writer;\nimport java.nio.charset.StandardCharsets;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.LinkedHashSet;\nimport java.util.List;\nimport java.util.Random;\nimport java.util.Set;\nimport morfologik.stemming.Dictionary;\nimport morfologik.stemming.DictionaryLookup;\nimport morfologik.stemming.DictionaryMetadata;\nimport morfologik.stemming.EncoderType;\nimport morfologik.stemming.WordData;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.RepeatedTest;\nimport org.junit.jupiter.api.io.TempDir;\n\n@Randomized\npublic class DictCompileTest extends RandomizedTest {\n  @RepeatedTest(200)\n  public void testRoundTrip(@TempDir Path tempDir, Random rnd) throws Exception {\n    final Path input = tempDir.resolve(\"dictionary.input\");\n    final Path metadata = DictionaryMetadata.getExpectedMetadataLocation(input);\n\n    char separator =\n        RandomPicks.randomFrom(\n            rnd,\n            new Character[] {\n              '|', ',', '\\t',\n            });\n\n    try (Writer writer = Files.newBufferedWriter(metadata, StandardCharsets.UTF_8)) {\n      DictionaryMetadata.builder()\n          .separator(separator)\n          .encoder(RandomPicks.randomFrom(rnd, EncoderType.values()))\n          .encoding(StandardCharsets.UTF_8)\n          .build()\n          .write(writer);\n    }\n\n    final boolean useTags = rnd.nextBoolean();\n\n    Set<String> sequences = new LinkedHashSet<>();\n    for (int seqs = RandomNumbers.randomIntInRange(rnd, 0, 100); --seqs >= 0; ) {\n      String base;\n      switch 
(RandomNumbers.randomIntInRange(rnd, 0, 5)) {\n        case 0:\n          base = RandomStrings.randomAsciiLettersOfLength(rnd, ('A' - separator) & 0xff);\n          break;\n\n        default:\n          base = RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 1, 100);\n          break;\n      }\n\n      String inflected;\n      switch (RandomNumbers.randomIntInRange(rnd, 0, 5)) {\n        case 0:\n          inflected = base;\n          break;\n\n        case 1:\n          inflected = RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 5) + base;\n          break;\n\n        case 3:\n          inflected = base + RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 5);\n          break;\n\n        case 4:\n          inflected =\n              RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 5)\n                  + base\n                  + RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 5);\n          break;\n\n        default:\n          inflected = RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 200);\n          break;\n      }\n\n      sequences.add(\n          base\n              + separator\n              + inflected\n              + (useTags\n                  ? 
(separator + RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 0, 10))\n                  : \"\"));\n    }\n\n    final boolean ignoreEmpty = rnd.nextBoolean();\n    try (Writer writer = Files.newBufferedWriter(input, StandardCharsets.UTF_8)) {\n      for (String in : sequences) {\n        writer.write(in);\n        writer.write('\\n');\n\n        if (ignoreEmpty && rnd.nextBoolean()) {\n          writer.write('\\n');\n        }\n      }\n    }\n\n    boolean validate = rnd.nextBoolean();\n    Assertions.assertThat(new DictCompile(input, false, validate, false, false, ignoreEmpty).call())\n        .isEqualTo(ExitStatus.SUCCESS);\n\n    Path dict = input.resolveSibling(\"dictionary.dict\");\n    Assertions.assertThat(dict).isRegularFile();\n\n    // Verify the dictionary is valid.\n\n    DictionaryLookup dictionaryLookup = new DictionaryLookup(Dictionary.read(dict));\n    Set<String> reconstructed = new LinkedHashSet<>();\n    for (WordData wd : dictionaryLookup) {\n      reconstructed.add(\n          \"\"\n              + wd.getStem()\n              + separator\n              + wd.getWord()\n              + (useTags ? separator : \"\")\n              + (wd.getTag() == null ? 
\"\" : wd.getTag()));\n    }\n\n    Assertions.assertThat(reconstructed).containsOnlyElementsOf(sequences);\n\n    // Verify decompilation via DictDecompile.\n\n    // GH-79: if there's only one sequence and there is no tag the decompiler will\n    // drop it.\n    if (useTags && sequences.size() == 1) {\n      String onlyOne = sequences.iterator().next();\n      if (onlyOne.endsWith(Character.toString(separator))) {\n        sequences.clear();\n        sequences.add(onlyOne.substring(0, onlyOne.length() - 1));\n      }\n    }\n\n    Files.delete(input);\n    Assertions.assertThat(new DictDecompile(dict, null, true, validate).call())\n        .isEqualTo(ExitStatus.SUCCESS);\n\n    List<String> allLines = Files.readAllLines(input, StandardCharsets.UTF_8);\n    Assertions.assertThat(allLines).containsOnlyElementsOf(sequences);\n  }\n}\n"
  },
  {
    "path": "morfologik-tools/src/test/java/morfologik/tools/FSACompileTest.java",
    "content": "package morfologik.tools;\n\nimport com.carrotsearch.randomizedtesting.jupiter.Randomized;\nimport com.carrotsearch.randomizedtesting.jupiter.RandomizedTest;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomNumbers;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomPicks;\nimport com.carrotsearch.randomizedtesting.jupiter.generators.RandomStrings;\nimport java.io.ByteArrayOutputStream;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.io.PrintStream;\nimport java.nio.ByteBuffer;\nimport java.nio.charset.StandardCharsets;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.HashSet;\nimport java.util.Iterator;\nimport java.util.LinkedHashSet;\nimport java.util.Random;\nimport java.util.Set;\nimport java.util.concurrent.Callable;\nimport morfologik.fsa.FSA;\nimport morfologik.stemming.BufferUtils;\nimport org.assertj.core.api.Assertions;\nimport org.junit.jupiter.api.RepeatedTest;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.io.TempDir;\n\n@Randomized\npublic class FSACompileTest extends RandomizedTest {\n  @RepeatedTest(100)\n  public void testCliInvocation(@TempDir Path tempDir, Random rnd) throws Exception {\n    final Path input = Files.createTempFile(tempDir, \"input\", \"in\");\n    final Path output = Files.createTempFile(tempDir, \"input\", \"out\");\n\n    Set<String> sequences = new LinkedHashSet<>();\n    for (int seqs = RandomNumbers.randomIntInRange(rnd, 0, 100); --seqs >= 0; ) {\n      sequences.add(RandomStrings.randomAsciiLettersOfLengthBetween(rnd, 1, 10));\n    }\n\n    try (OutputStream os = Files.newOutputStream(input)) {\n      Iterator<String> i = sequences.iterator();\n      while (i.hasNext()) {\n        os.write(i.next().getBytes(StandardCharsets.UTF_8));\n\n        // Sometimes don't add trailing '\\n'.\n        if (!i.hasNext() && rnd.nextBoolean()) {\n          break;\n        } else {\n          os.write('\\n');\n          
if (rnd.nextBoolean()) {\n            os.write('\\n');\n          }\n        }\n      }\n    }\n\n    SerializationFormat format = RandomPicks.randomFrom(rnd, SerializationFormat.values());\n\n    Assertions.assertThat(new FSACompile(input, output, format, false, false, true).call())\n        .isEqualTo(ExitStatus.SUCCESS);\n\n    try (InputStream is = Files.newInputStream(output)) {\n      FSA fsa = FSA.read(is);\n      Assertions.assertThat(fsa).isNotNull();\n\n      Set<String> result = new HashSet<>();\n      for (ByteBuffer bb : fsa) {\n        result.add(BufferUtils.toString(bb, StandardCharsets.UTF_8));\n      }\n\n      Assertions.assertThat(result).containsOnlyElementsOf(sequences);\n    }\n  }\n\n  @Test\n  public void testEmptyWarning(@TempDir Path tempDir, Random rnd) throws Exception {\n    final Path input = Files.createTempFile(tempDir, \"input\", \"in\");\n    final Path output = Files.createTempFile(tempDir, \"input\", \"out\");\n\n    Files.write(input, \"abc\\n\\ndef\".getBytes(StandardCharsets.US_ASCII));\n\n    String out =\n        sysouts(\n            new Callable<Void>() {\n              @Override\n              public Void call() throws Exception {\n                FSACompile.main(\n                    new String[] {\n                      \"--exit\", \"false\",\n                      \"--input\", input.toAbsolutePath().toString(),\n                      \"--output\", output.toAbsolutePath().toString()\n                    });\n                return null;\n              }\n            });\n\n    Assertions.assertThat(out).contains(\"--ignore-empty\");\n  }\n\n  @Test\n  public void testCrWarning(@TempDir Path tempDir, Random rnd) throws Exception {\n    final Path input = Files.createTempFile(tempDir, \"input\", \"in\");\n    final Path output = Files.createTempFile(tempDir, \"input\", \"out\");\n\n    Files.write(input, \"abc\\r\\ndef\\r\\n\".getBytes(StandardCharsets.US_ASCII));\n\n    String out =\n        sysouts(\n            new 
Callable<Void>() {\n              @Override\n              public Void call() throws Exception {\n                FSACompile.main(\n                    new String[] {\n                      \"--exit\", \"false\",\n                      \"--input\", input.toAbsolutePath().toString(),\n                      \"--output\", output.toAbsolutePath().toString()\n                    });\n                return null;\n              }\n            });\n\n    Assertions.assertThat(out).contains(\"CR\");\n  }\n\n  @Test\n  public void testBomWarning(@TempDir Path tempDir) throws Exception {\n    final Path input = Files.createTempFile(tempDir, \"input\", \"in\");\n    final Path output = Files.createTempFile(tempDir, \"input\", \"out\");\n\n    // Emit UTF-8 BOM prefixed list of three strings.\n    ByteArrayOutputStream baos = new ByteArrayOutputStream();\n    baos.write(new byte[] {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF});\n    baos.write(\"abc\\ndef\\nxyz\".getBytes(StandardCharsets.UTF_8));\n    Files.write(input, baos.toByteArray());\n\n    String out =\n        sysouts(\n            new Callable<Void>() {\n              @Override\n              public Void call() throws Exception {\n                FSACompile.main(\n                    new String[] {\n                      \"--exit\", \"false\",\n                      \"--input\", input.toAbsolutePath().toString(),\n                      \"--output\", output.toAbsolutePath().toString()\n                    });\n                return null;\n              }\n            });\n\n    Assertions.assertThat(out).contains(\"UTF-8 BOM\");\n  }\n\n  private String sysouts(Callable<Void> callable) throws Exception {\n    PrintStream sout = System.out;\n    PrintStream serr = System.err;\n\n    ByteArrayOutputStream baos = new ByteArrayOutputStream();\n    PrintStream ps = new PrintStream(baos, true, \"UTF-8\");\n    System.setOut(ps);\n    System.setErr(ps);\n    try {\n      callable.call();\n      return new 
String(baos.toByteArray(), StandardCharsets.UTF_8);\n    } finally {\n      System.setOut(sout);\n      System.setErr(serr);\n    }\n  }\n}\n"
  },
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n\n  <groupId>org.carrot2</groupId>\n  <artifactId>morfologik-parent</artifactId>\n  <version>2.2.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n\n  <name>Morfologik (parent POM)</name>\n  <description>Morfologik is a collection of tools for building finite state automata and stemming/ inflection dictionaries built on top of these. </description>\n  <url>http://morfologik.blogspot.com/</url>\n\n  <licenses>\n    <license>\n      <name>BSD</name>\n      <url>http://www.opensource.org/licenses/bsd-license.php</url>\n      <distribution>repo</distribution>\n    </license>\n  </licenses>\n\n  <mailingLists>\n    <mailingList>\n      <name>Announcements, bug reports, developers mailing list</name>\n      <post>morfologik-devel@lists.sourceforge.net</post>\n    </mailingList>\n  </mailingLists>\n\n  <scm>\n    <url>git@github.com:morfologik/morfologik-stemming.git</url>\n    <connection>scm:git:git@github.com:morfologik/morfologik-stemming.git</connection>\n    <developerConnection>scm:git:git@github.com:morfologik/morfologik-stemming.git</developerConnection>\n  </scm>\n\n  <developers>\n    <developer>\n      <id>dawid.weiss</id>\n      <name>Dawid Weiss</name>\n      <email>dawid.weiss@carrotsearch.com</email>\n    </developer>\n\n    <developer>\n      <id>marcin.milkowski</id>\n      <name>Marcin Miłkowski</name>\n    </developer>\n  </developers>\n\n  <properties>\n    <maven.compiler.release>11</maven.compiler.release>\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n\n    <version.maven>3.9.12</version.maven>\n    <version.assertj>3.27.7</version.assertj>\n    <version.hppc>0.7.2</version.hppc>\n    
<version.junit>6.0.3</version.junit>\n    <version.randomizedtesting>0.2.0</version.randomizedtesting>\n\n    <version.maven-compiler-plugin>3.15.0</version.maven-compiler-plugin>\n    <version.maven-enforcer-plugin>3.6.2</version.maven-enforcer-plugin>\n    <version.maven-clean-plugin>3.5.0</version.maven-clean-plugin>\n    <version.maven-jar-plugin>3.5.0</version.maven-jar-plugin>\n\n    <version.forbiddenapis>3.10</version.forbiddenapis>\n    <forbiddenapis.signaturefile>src/forbidden-apis/signatures.txt</forbiddenapis.signaturefile>\n  </properties>\n\n  <modules>\n    <module>morfologik-fsa</module>\n    <module>morfologik-fsa-builders</module>\n    <module>morfologik-stemming</module>\n    <module>morfologik-polish</module>\n    <module>morfologik-speller</module>\n    <module>morfologik-tools</module>\n  </modules>\n\n  <dependencyManagement>\n    <dependencies>\n      <dependency>\n        <groupId>com.carrotsearch</groupId>\n        <artifactId>hppc</artifactId>\n        <version>${version.hppc}</version>\n      </dependency>\n    </dependencies>\n  </dependencyManagement>\n\n  <dependencies>\n    <dependency>\n        <groupId>com.carrotsearch.randomizedtesting</groupId>\n        <artifactId>randomizedtesting-jupiter</artifactId>\n        <version>${version.randomizedtesting}</version>\n        <scope>test</scope>\n      </dependency>\n\n      <dependency>\n        <groupId>org.junit.jupiter</groupId>\n        <artifactId>junit-jupiter</artifactId>\n        <version>${version.junit}</version>\n        <scope>test</scope>\n      </dependency>\n\n      <dependency>\n        <groupId>org.assertj</groupId>\n        <artifactId>assertj-core</artifactId>\n        <version>${version.assertj}</version>\n        <scope>test</scope>\n      </dependency>\n  </dependencies>\n\n  <build>\n    <pluginManagement>\n      <plugins>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-clean-plugin</artifactId>\n          
<version>${version.maven-clean-plugin}</version>\n          <configuration>\n            <failOnError>false</failOnError>\n            <excludeDefaultDirectories>true</excludeDefaultDirectories>\n            <filesets>\n              <fileset>\n                <directory>${project.build.directory}</directory>\n                <excludes>\n                  <exclude>eclipse/**</exclude>\n                  <exclude>idea/**</exclude>\n                </excludes>\n              </fileset>\n            </filesets>\n          </configuration>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-deploy-plugin</artifactId>\n          <version>3.1.4</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-source-plugin</artifactId>\n          <version>3.4.0</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-javadoc-plugin</artifactId>\n          <version>3.12.0</version>\n          <configuration>\n            <sourcepath>src/main/java</sourcepath>\n            <doclint>all,-missing</doclint>\n          </configuration>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-antrun-plugin</artifactId>\n          <version>3.2.0</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-assembly-plugin</artifactId>\n          <version>3.8.0</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-jar-plugin</artifactId>\n          <version>${version.maven-jar-plugin}</version>\n          <configuration>\n            <archive>\n              <addMavenDescriptor>false</addMavenDescriptor>\n              <manifestEntries>\n                
<Project-GroupId>${project.groupId}</Project-GroupId>\n                <Project-ArtifactId>${project.artifactId}</Project-ArtifactId>\n                <Project-Version>${project.version}</Project-Version>\n                <Project-Name>${project.name}</Project-Name>\n\n                <Automatic-Module-Name>${project.moduleId}</Automatic-Module-Name>\n              </manifestEntries>\n            </archive>\n          </configuration>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-install-plugin</artifactId>\n          <version>3.1.4</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-resources-plugin</artifactId>\n          <version>3.5.0</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-dependency-plugin</artifactId>\n          <version>3.10.0</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-compiler-plugin</artifactId>\n          <version>${version.maven-compiler-plugin}</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-gpg-plugin</artifactId>\n          <version>3.2.8</version>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.felix</groupId>\n          <artifactId>maven-bundle-plugin</artifactId>\n          <version>6.0.2</version>\n          <extensions>true</extensions>\n        </plugin>\n\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-surefire-plugin</artifactId>\n          <version>3.5.5</version>\n        </plugin>\n        \n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-enforcer-plugin</artifactId>\n          
<version>${version.maven-enforcer-plugin}</version>\n        </plugin>\n\n        <plugin>\n          <groupId>com.diffplug.spotless</groupId>\n          <artifactId>spotless-maven-plugin</artifactId>\n          <version>3.4.0</version>\n          <configuration>\n            <java>\n              <googleJavaFormat>\n                <version>1.35.0</version>\n                <reflowLongStrings>true</reflowLongStrings>\n                <formatJavadoc>true</formatJavadoc>\n              </googleJavaFormat>\n              <lineEndings>UNIX</lineEndings>\n            </java>\n          </configuration>\n          <executions>\n            <execution>\n              <goals>\n                <goal>check</goal>\n              </goals>\n            </execution>\n          </executions>\n        </plugin>\n      </plugins>\n    </pluginManagement>\n    \n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-jar-plugin</artifactId>\n      </plugin>\n\n      <plugin>\n        <groupId>com.diffplug.spotless</groupId>\n        <artifactId>spotless-maven-plugin</artifactId>\n      </plugin>\n\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-enforcer-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>enforce-java-version</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            <phase>validate</phase>\n            <configuration>\n              <rules>\n                <requireJavaVersion>\n                  <version>[21,)</version>\n                  <message>JDK 21 or newer is required to build this project.</message>\n                </requireJavaVersion>\n              </rules>\n            </configuration>\n          </execution>\n          <execution>\n            <id>enforce-dependency-convergence</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            
<phase>verify</phase>\n            <configuration>\n              <rules>\n                <DependencyConvergence/>\n              </rules>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-enforcer-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>enforce-environment</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            <inherited>true</inherited>\n            <configuration>\n              <rules combine.children=\"append\">\n                <requireMavenVersion>\n                  <version>[${version.maven},)</version>\n                  <message>At least Maven ${version.maven}+ required.</message>\n                </requireMavenVersion>\n              </rules>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n\n      <plugin>\n        <groupId>de.thetaphi</groupId>\n        <artifactId>forbiddenapis</artifactId>\n        <version>${version.forbiddenapis}</version>\n\n        <executions>\n          <execution>\n            <id>forbidden-apis</id>\n            <configuration>\n              <targetVersion>${maven.compiler.release}</targetVersion>\n              <failOnUnsupportedJava>false</failOnUnsupportedJava>\n              <excludes>\n              </excludes>\n              <bundledSignatures>\n                <bundledSignature>jdk-unsafe</bundledSignature>\n                <bundledSignature>jdk-deprecated</bundledSignature>\n                <bundledSignature>jdk-system-out</bundledSignature>\n              </bundledSignatures>\n              <signaturesFiles>\n                <signaturesFile>${forbiddenapis.signaturefile}</signaturesFile>\n              </signaturesFiles>                   \n            </configuration>\n            <phase>process-classes</phase>\n            <goals>\n              
<goal>check</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>      \n    </plugins>\n  </build>\n\n  <profiles>\n    <profile>\n      <id>profile.ide.eclipse-m2e</id>\n      \n      <activation>\n        <property>\n          <name>m2e.version</name>\n        </property>\n      </activation>\n      \n      <build>\n        <directory>target/eclipse</directory>\n\n        <pluginManagement>\n          <plugins>\n            <plugin>\n              <groupId>org.eclipse.m2e</groupId>\n              <artifactId>lifecycle-mapping</artifactId>\n              <version>1.0.0</version>\n              <configuration>\n                <lifecycleMappingMetadata>\n                  <pluginExecutions>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>de.thetaphi</groupId>\n                        <artifactId>forbiddenapis</artifactId>\n                        <versionRange>[1.0.0,)</versionRange>\n                        <goals>\n                          <goal>testCheck</goal>\n                          <goal>check</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <ignore />\n                      </action>\n                    </pluginExecution>\n                  </pluginExecutions>\n                </lifecycleMappingMetadata>\n              </configuration>\n            </plugin>\n          </plugins>\n        </pluginManagement>\n      </build>\n    </profile>\n\n    <profile>\n      <id>eclipse</id>\n      <build>\n        <defaultGoal>compile antrun:run</defaultGoal>\n        <pluginManagement>\n          <plugins>\n            <plugin>\n              <artifactId>maven-antrun-plugin</artifactId>\n              <version>3.2.0</version>\n              <executions>\n                <execution>\n                  <id>default-cli</id>\n                  
<phase>none</phase>\n                  <inherited>false</inherited>\n                  <configuration>\n                    <target>\n                      <presetdef name=\"copy\">\n                        <copy overwrite=\"true\" />\n                      </presetdef>\n                      <condition property=\"onwin\">\n                        <os family=\"windows\" />\n                      </condition>\n\n                      <fileset id=\"id:settings\" dir=\"etc/eclipse/settings\" />\n                      <copy todir=\"morfologik-fsa/.settings\">           <fileset refid=\"id:settings\" /></copy>\n                      <copy todir=\"morfologik-fsa-builders/.settings\">  <fileset refid=\"id:settings\" /></copy>\n                      <copy todir=\"morfologik-polish/.settings\">        <fileset refid=\"id:settings\" /></copy>\n                      <copy todir=\"morfologik-speller/.settings\">       <fileset refid=\"id:settings\" /></copy>\n                      <copy todir=\"morfologik-stemming/.settings\">      <fileset refid=\"id:settings\" /></copy>\n                      <copy todir=\"morfologik-tools/.settings\">         <fileset refid=\"id:settings\" /></copy>\n\n                      <!-- no custom configs.\n                      <copy todir=\".\">\n                        <fileset dir=\"etc/eclipse/configs\" />\n                        <filtermapper>\n                          <replacestring from=\"_\" to=\".\" />\n                        </filtermapper>\n                        <filterchain unless:true=\"${onwin}\" xmlns:unless=\"ant:unless\">\n                          <tokenfilter>\n                            <filetokenizer />\n                            <replacestring from=\".bat\" to=\"\" />\n                          </tokenfilter>\n                        </filterchain>\n                      </copy> -->\n                    </target>\n                  </configuration>\n                  <goals>\n                    <goal>run</goal>\n      
            </goals>\n                </execution>\n              </executions>\n              <dependencies>\n                <dependency>\n                  <groupId>org.apache.ant</groupId>\n                  <artifactId>ant</artifactId>\n                  <version>1.10.15</version>\n                </dependency>\n              </dependencies>\n            </plugin>\n\n            <plugin>\n              <groupId>org.eclipse.m2e</groupId>\n              <artifactId>lifecycle-mapping</artifactId>\n              <version>1.0.0</version>\n              <configuration>\n                <lifecycleMappingMetadata>\n                  <pluginExecutions>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>de.thetaphi</groupId>\n                        <artifactId>forbiddenapis</artifactId>\n                        <versionRange>[0.0.0,)</versionRange>\n                        <goals>\n                          <goal>check</goal>\n                          <goal>testCheck</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <ignore />\n                      </action>\n                    </pluginExecution>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>com.carrotsearch</groupId>\n                        <artifactId>hppc-template-processor</artifactId>\n                        <versionRange>[0.0.0,)</versionRange>\n                        <goals>\n                          <goal>template-processor</goal>\n                          <goal>add-source</goal>\n                          <goal>add-test-source</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <execute>\n                          <runOnIncremental>false</runOnIncremental>\n                   
       <runOnConfiguration>true</runOnConfiguration>\n                        </execute>\n                      </action>\n                    </pluginExecution>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>org.apache.maven.plugins</groupId>\n                        <artifactId>maven-plugin-plugin</artifactId>\n                        <versionRange>[3.4,)</versionRange>\n                        <goals>\n                          <goal>descriptor</goal>\n                          <goal>helpmojo</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <ignore />\n                      </action>\n                    </pluginExecution>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>org.apache.maven.plugins</groupId>\n                        <artifactId>maven-enforcer-plugin</artifactId>\n                        <versionRange>[0.0,)</versionRange>\n                        <goals>\n                          <goal>enforce</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <ignore />\n                      </action>\n                    </pluginExecution>\n                  </pluginExecutions>\n                </lifecycleMappingMetadata>\n              </configuration>\n            </plugin>\n          </plugins>\n        </pluginManagement>\n      </build>\n    </profile>\n\n    <profile>\n      <id>sonatype-oss-release</id>\n\n      <build>\n        <plugins>\n          <plugin>\n            <groupId>org.sonatype.central</groupId>\n            <artifactId>central-publishing-maven-plugin</artifactId>\n            <version>0.10.0</version>\n            <extensions>true</extensions>\n            <configuration>\n              <publishingServerId>central</publishingServerId>\n              
<deploymentName>morfologik-stemming-${project.version}</deploymentName>\n              <autoPublish>true</autoPublish>\n              <waitUntil>published</waitUntil>\n            </configuration>\n          </plugin>\n\n          <plugin>\n            <groupId>org.apache.maven.plugins</groupId>\n            <artifactId>maven-gpg-plugin</artifactId>\n            <configuration>\n              <excludes>\n                <exclude>**/*.gz</exclude>\n                <exclude>**/*.zip</exclude>\n              </excludes>\n            </configuration>\n            <executions>\n              <execution>\n                <goals>\n                  <goal>sign</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n\n          <plugin>\n            <groupId>org.apache.maven.plugins</groupId>\n            <artifactId>maven-javadoc-plugin</artifactId>\n            <configuration>\n              <encoding>${project.build.sourceEncoding}</encoding>\n              <windowtitle>${project.name} v${project.version} API Documentation</windowtitle>\n              <doctitle>${project.name} v${project.version} API Documentation</doctitle>\n              <charset>UTF-8</charset>\n              <detectJavaApiLink>false</detectJavaApiLink>\n            </configuration>\n            <executions>\n              <execution>\n                <id>attach-javadocs</id>\n                <goals>\n                  <goal>jar</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n\n          <plugin>\n            <groupId>org.apache.maven.plugins</groupId>\n            <artifactId>maven-source-plugin</artifactId>\n            <configuration>\n              <excludeResources>true</excludeResources>\n            </configuration>\n            <executions>\n              <execution>\n                <id>attach-sources</id>\n                <goals>\n                  <goal>jar-no-fork</goal>\n    
            </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>    \n  </profiles>\n</project>\n\n"
  }
]