[
  {
    "path": ".gitignore",
    "content": ".DS_Store\nstanford-parser*\nstanford-corenlp*\nbuild*\ndist*\nLango.egg-info*\n_build*\n_templates*\n*.pyc"
  },
  {
    "path": "LICENSE.txt",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 2, June 1991\n\n Copyright (C) 1989, 1991 Free Software Foundation, Inc.,\n 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The licenses for most software are designed to take away your\nfreedom to share and change it.  By contrast, the GNU General Public\nLicense is intended to guarantee your freedom to share and change free\nsoftware--to make sure the software is free for all its users.  This\nGeneral Public License applies to most of the Free Software\nFoundation's software and to any other program whose authors commit to\nusing it.  (Some other Free Software Foundation software is covered by\nthe GNU Lesser General Public License instead.)  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthis service if you wish), that you receive source code or can get it\nif you want it, that you can change the software or use pieces of it\nin new free programs; and that you know you can do these things.\n\n  To protect your rights, we need to make restrictions that forbid\nanyone to deny you these rights or to ask you to surrender the rights.\nThese restrictions translate to certain responsibilities for you if you\ndistribute copies of the software, or if you modify it.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must give the recipients all the rights that\nyou have.  You must make sure that they, too, receive or can get the\nsource code.  
And you must show them these terms so they know their\nrights.\n\n  We protect your rights with two steps: (1) copyright the software, and\n(2) offer you this license which gives you legal permission to copy,\ndistribute and/or modify the software.\n\n  Also, for each author's protection and ours, we want to make certain\nthat everyone understands that there is no warranty for this free\nsoftware.  If the software is modified by someone else and passed on, we\nwant its recipients to know that what they have is not the original, so\nthat any problems introduced by others will not reflect on the original\nauthors' reputations.\n\n  Finally, any free program is threatened constantly by software\npatents.  We wish to avoid the danger that redistributors of a free\nprogram will individually obtain patent licenses, in effect making the\nprogram proprietary.  To prevent this, we have made it clear that any\npatent must be licensed for everyone's free use or not licensed at all.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                    GNU GENERAL PUBLIC LICENSE\n   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n  0. This License applies to any program or other work which contains\na notice placed by the copyright holder saying it may be distributed\nunder the terms of this General Public License.  The \"Program\", below,\nrefers to any such program or work, and a \"work based on the Program\"\nmeans either the Program or any derivative work under copyright law:\nthat is to say, a work containing the Program or a portion of it,\neither verbatim or with modifications and/or translated into another\nlanguage.  (Hereinafter, translation is included without limitation in\nthe term \"modification\".)  Each licensee is addressed as \"you\".\n\nActivities other than copying, distribution and modification are not\ncovered by this License; they are outside its scope.  
The act of\nrunning the Program is not restricted, and the output from the Program\nis covered only if its contents constitute a work based on the\nProgram (independent of having been made by running the Program).\nWhether that is true depends on what the Program does.\n\n  1. You may copy and distribute verbatim copies of the Program's\nsource code as you receive it, in any medium, provided that you\nconspicuously and appropriately publish on each copy an appropriate\ncopyright notice and disclaimer of warranty; keep intact all the\nnotices that refer to this License and to the absence of any warranty;\nand give any other recipients of the Program a copy of this License\nalong with the Program.\n\nYou may charge a fee for the physical act of transferring a copy, and\nyou may at your option offer warranty protection in exchange for a fee.\n\n  2. You may modify your copy or copies of the Program or any portion\nof it, thus forming a work based on the Program, and copy and\ndistribute such modifications or work under the terms of Section 1\nabove, provided that you also meet all of these conditions:\n\n    a) You must cause the modified files to carry prominent notices\n    stating that you changed the files and the date of any change.\n\n    b) You must cause any work that you distribute or publish, that in\n    whole or in part contains or is derived from the Program or any\n    part thereof, to be licensed as a whole at no charge to all third\n    parties under the terms of this License.\n\n    c) If the modified program normally reads commands interactively\n    when run, you must cause it, when started running for such\n    interactive use in the most ordinary way, to print or display an\n    announcement including an appropriate copyright notice and a\n    notice that there is no warranty (or else, saying that you provide\n    a warranty) and that users may redistribute the program under\n    these conditions, and telling the user how to view a copy of this\n  
  License.  (Exception: if the Program itself is interactive but\n    does not normally print such an announcement, your work based on\n    the Program is not required to print an announcement.)\n\nThese requirements apply to the modified work as a whole.  If\nidentifiable sections of that work are not derived from the Program,\nand can be reasonably considered independent and separate works in\nthemselves, then this License, and its terms, do not apply to those\nsections when you distribute them as separate works.  But when you\ndistribute the same sections as part of a whole which is a work based\non the Program, the distribution of the whole must be on the terms of\nthis License, whose permissions for other licensees extend to the\nentire whole, and thus to each and every part regardless of who wrote it.\n\nThus, it is not the intent of this section to claim rights or contest\nyour rights to work written entirely by you; rather, the intent is to\nexercise the right to control the distribution of derivative or\ncollective works based on the Program.\n\nIn addition, mere aggregation of another work not based on the Program\nwith the Program (or with a work based on the Program) on a volume of\na storage or distribution medium does not bring the other work under\nthe scope of this License.\n\n  3. 
You may copy and distribute the Program (or a work based on it,\nunder Section 2) in object code or executable form under the terms of\nSections 1 and 2 above provided that you also do one of the following:\n\n    a) Accompany it with the complete corresponding machine-readable\n    source code, which must be distributed under the terms of Sections\n    1 and 2 above on a medium customarily used for software interchange; or,\n\n    b) Accompany it with a written offer, valid for at least three\n    years, to give any third party, for a charge no more than your\n    cost of physically performing source distribution, a complete\n    machine-readable copy of the corresponding source code, to be\n    distributed under the terms of Sections 1 and 2 above on a medium\n    customarily used for software interchange; or,\n\n    c) Accompany it with the information you received as to the offer\n    to distribute corresponding source code.  (This alternative is\n    allowed only for noncommercial distribution and only if you\n    received the program in object code or executable form with such\n    an offer, in accord with Subsection b above.)\n\nThe source code for a work means the preferred form of the work for\nmaking modifications to it.  For an executable work, complete source\ncode means all the source code for all modules it contains, plus any\nassociated interface definition files, plus the scripts used to\ncontrol compilation and installation of the executable.  
However, as a\nspecial exception, the source code distributed need not include\nanything that is normally distributed (in either source or binary\nform) with the major components (compiler, kernel, and so on) of the\noperating system on which the executable runs, unless that component\nitself accompanies the executable.\n\nIf distribution of executable or object code is made by offering\naccess to copy from a designated place, then offering equivalent\naccess to copy the source code from the same place counts as\ndistribution of the source code, even though third parties are not\ncompelled to copy the source along with the object code.\n\n  4. You may not copy, modify, sublicense, or distribute the Program\nexcept as expressly provided under this License.  Any attempt\notherwise to copy, modify, sublicense or distribute the Program is\nvoid, and will automatically terminate your rights under this License.\nHowever, parties who have received copies, or rights, from you under\nthis License will not have their licenses terminated so long as such\nparties remain in full compliance.\n\n  5. You are not required to accept this License, since you have not\nsigned it.  However, nothing else grants you permission to modify or\ndistribute the Program or its derivative works.  These actions are\nprohibited by law if you do not accept this License.  Therefore, by\nmodifying or distributing the Program (or any work based on the\nProgram), you indicate your acceptance of this License to do so, and\nall its terms and conditions for copying, distributing or modifying\nthe Program or works based on it.\n\n  6. Each time you redistribute the Program (or any work based on the\nProgram), the recipient automatically receives a license from the\noriginal licensor to copy, distribute or modify the Program subject to\nthese terms and conditions.  
You may not impose any further\nrestrictions on the recipients' exercise of the rights granted herein.\nYou are not responsible for enforcing compliance by third parties to\nthis License.\n\n  7. If, as a consequence of a court judgment or allegation of patent\ninfringement or for any other reason (not limited to patent issues),\nconditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot\ndistribute so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you\nmay not distribute the Program at all.  For example, if a patent\nlicense would not permit royalty-free redistribution of the Program by\nall those who receive copies directly or indirectly through you, then\nthe only way you could satisfy both it and this License would be to\nrefrain entirely from distribution of the Program.\n\nIf any portion of this section is held invalid or unenforceable under\nany particular circumstance, the balance of the section is intended to\napply and the section as a whole is intended to apply in other\ncircumstances.\n\nIt is not the purpose of this section to induce you to infringe any\npatents or other property right claims or to contest validity of any\nsuch claims; this section has the sole purpose of protecting the\nintegrity of the free software distribution system, which is\nimplemented by public license practices.  Many people have made\ngenerous contributions to the wide range of software distributed\nthrough that system in reliance on consistent application of that\nsystem; it is up to the author/donor to decide if he or she is willing\nto distribute software through any other system and a licensee cannot\nimpose that choice.\n\nThis section is intended to make thoroughly clear what is believed to\nbe a consequence of the rest of this License.\n\n  8. 
If the distribution and/or use of the Program is restricted in\ncertain countries either by patents or by copyrighted interfaces, the\noriginal copyright holder who places the Program under this License\nmay add an explicit geographical distribution limitation excluding\nthose countries, so that distribution is permitted only in or among\ncountries not thus excluded.  In such case, this License incorporates\nthe limitation as if written in the body of this License.\n\n  9. The Free Software Foundation may publish revised and/or new versions\nof the General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\nEach version is given a distinguishing version number.  If the Program\nspecifies a version number of this License which applies to it and \"any\nlater version\", you have the option of following the terms and conditions\neither of that version or of any later version published by the Free\nSoftware Foundation.  If the Program does not specify a version number of\nthis License, you may choose any version ever published by the Free Software\nFoundation.\n\n  10. If you wish to incorporate parts of the Program into other free\nprograms whose distribution conditions are different, write to the author\nto ask for permission.  For software which is copyrighted by the Free\nSoftware Foundation, write to the Free Software Foundation; we sometimes\nmake exceptions for this.  Our decision will be guided by the two goals\nof preserving the free status of all derivatives of our free software and\nof promoting the sharing and reuse of software generally.\n\n                            NO WARRANTY\n\n  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY\nFOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  
EXCEPT WHEN\nOTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES\nPROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED\nOR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS\nTO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE\nPROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,\nREPAIR OR CORRECTION.\n\n  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR\nREDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,\nINCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING\nOUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED\nTO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY\nYOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER\nPROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGES.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nconvey the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software; you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation; either version 2 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License along\n    with this program; if not, write to the Free Software Foundation, Inc.,\n    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nAlso add information on how to contact you by electronic and paper mail.\n\nIf the program is interactive, make it output a short notice like this\nwhen it starts in an interactive mode:\n\n    Gnomovision version 69, Copyright (C) year name of author\n    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  Of course, the commands you use may\nbe called something other than `show w' and `show c'; they could even be\nmouse-clicks or menu items--whatever suits your program.\n\nYou should also get your employer (if you work as a programmer) or your\nschool, if any, to sign a \"copyright disclaimer\" for the program, if\nnecessary.  
Here is a sample; alter the names:\n\n  Yoyodyne, Inc., hereby disclaims all copyright interest in the program\n  `Gnomovision' (which makes passes at compilers) written by James Hacker.\n\n  <signature of Ty Coon>, 1 April 1989\n  Ty Coon, President of Vice\n\nThis General Public License does not permit incorporating your program into\nproprietary programs.  If your program is a subroutine library, you may\nconsider it more useful to permit linking proprietary applications with the\nlibrary.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.\n"
  },
  {
    "path": "docs/Makefile",
    "content": "# Makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHINXBUILD   = sphinx-build\nPAPER         =\nBUILDDIR      = _build\n\n# User-friendly check for sphinx-build\nifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)\n$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)\nendif\n\n# Internal variables.\nPAPEROPT_a4     = -D latex_paper_size=a4\nPAPEROPT_letter = -D latex_paper_size=letter\nALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .\n# the i18n builder cannot share the environment and doctrees with the others\nI18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .\n\n.PHONY: help\nhelp:\n\t@echo \"Please use \\`make <target>' where <target> is one of\"\n\t@echo \"  html       to make standalone HTML files\"\n\t@echo \"  dirhtml    to make HTML files named index.html in directories\"\n\t@echo \"  singlehtml to make a single large HTML file\"\n\t@echo \"  pickle     to make pickle files\"\n\t@echo \"  json       to make JSON files\"\n\t@echo \"  htmlhelp   to make HTML files and a HTML help project\"\n\t@echo \"  qthelp     to make HTML files and a qthelp project\"\n\t@echo \"  applehelp  to make an Apple Help Book\"\n\t@echo \"  devhelp    to make HTML files and a Devhelp project\"\n\t@echo \"  epub       to make an epub\"\n\t@echo \"  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\"\n\t@echo \"  latexpdf   to make LaTeX files and run them through pdflatex\"\n\t@echo \"  latexpdfja to make LaTeX files and run them through platex/dvipdfmx\"\n\t@echo \"  text       to make text files\"\n\t@echo \"  man        to make manual 
pages\"\n\t@echo \"  texinfo    to make Texinfo files\"\n\t@echo \"  info       to make Texinfo files and run them through makeinfo\"\n\t@echo \"  gettext    to make PO message catalogs\"\n\t@echo \"  changes    to make an overview of all changed/added/deprecated items\"\n\t@echo \"  xml        to make Docutils-native XML files\"\n\t@echo \"  pseudoxml  to make pseudoxml-XML files for display purposes\"\n\t@echo \"  linkcheck  to check all external links for integrity\"\n\t@echo \"  doctest    to run all doctests embedded in the documentation (if enabled)\"\n\t@echo \"  coverage   to run coverage check of the documentation (if enabled)\"\n\n.PHONY: clean\nclean:\n\trm -rf $(BUILDDIR)/*\n\n.PHONY: html\nhtml:\n\t$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/html.\"\n\n.PHONY: dirhtml\ndirhtml:\n\t$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/dirhtml.\"\n\n.PHONY: singlehtml\nsinglehtml:\n\t$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml\n\t@echo\n\t@echo \"Build finished. 
The HTML page is in $(BUILDDIR)/singlehtml.\"\n\n.PHONY: pickle\npickle:\n\t$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle\n\t@echo\n\t@echo \"Build finished; now you can process the pickle files.\"\n\n.PHONY: json\njson:\n\t$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json\n\t@echo\n\t@echo \"Build finished; now you can process the JSON files.\"\n\n.PHONY: htmlhelp\nhtmlhelp:\n\t$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp\n\t@echo\n\t@echo \"Build finished; now you can run HTML Help Workshop with the\" \\\n\t      \".hhp project file in $(BUILDDIR)/htmlhelp.\"\n\n.PHONY: qthelp\nqthelp:\n\t$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp\n\t@echo\n\t@echo \"Build finished; now you can run \"qcollectiongenerator\" with the\" \\\n\t      \".qhcp project file in $(BUILDDIR)/qthelp, like this:\"\n\t@echo \"# qcollectiongenerator $(BUILDDIR)/qthelp/lango.qhcp\"\n\t@echo \"To view the help file:\"\n\t@echo \"# assistant -collectionFile $(BUILDDIR)/qthelp/lango.qhc\"\n\n.PHONY: applehelp\napplehelp:\n\t$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp\n\t@echo\n\t@echo \"Build finished. The help book is in $(BUILDDIR)/applehelp.\"\n\t@echo \"N.B. You won't be able to view it unless you put it in\" \\\n\t      \"~/Library/Documentation/Help or install it in your application\" \\\n\t      \"bundle.\"\n\n.PHONY: devhelp\ndevhelp:\n\t$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp\n\t@echo\n\t@echo \"Build finished.\"\n\t@echo \"To view the help file:\"\n\t@echo \"# mkdir -p $$HOME/.local/share/devhelp/lango\"\n\t@echo \"# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/lango\"\n\t@echo \"# devhelp\"\n\n.PHONY: epub\nepub:\n\t$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub\n\t@echo\n\t@echo \"Build finished. 
The epub file is in $(BUILDDIR)/epub.\"\n\n.PHONY: latex\nlatex:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo\n\t@echo \"Build finished; the LaTeX files are in $(BUILDDIR)/latex.\"\n\t@echo \"Run \\`make' in that directory to run these through (pdf)latex\" \\\n\t      \"(use \\`make latexpdf' here to do that automatically).\"\n\n.PHONY: latexpdf\nlatexpdf:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo \"Running LaTeX files through pdflatex...\"\n\t$(MAKE) -C $(BUILDDIR)/latex all-pdf\n\t@echo \"pdflatex finished; the PDF files are in $(BUILDDIR)/latex.\"\n\n.PHONY: latexpdfja\nlatexpdfja:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo \"Running LaTeX files through platex and dvipdfmx...\"\n\t$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja\n\t@echo \"pdflatex finished; the PDF files are in $(BUILDDIR)/latex.\"\n\n.PHONY: text\ntext:\n\t$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text\n\t@echo\n\t@echo \"Build finished. The text files are in $(BUILDDIR)/text.\"\n\n.PHONY: man\nman:\n\t$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man\n\t@echo\n\t@echo \"Build finished. The manual pages are in $(BUILDDIR)/man.\"\n\n.PHONY: texinfo\ntexinfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo\n\t@echo \"Build finished. The Texinfo files are in $(BUILDDIR)/texinfo.\"\n\t@echo \"Run \\`make' in that directory to run these through makeinfo\" \\\n\t      \"(use \\`make info' here to do that automatically).\"\n\n.PHONY: info\ninfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo \"Running Texinfo files through makeinfo...\"\n\tmake -C $(BUILDDIR)/texinfo info\n\t@echo \"makeinfo finished; the Info files are in $(BUILDDIR)/texinfo.\"\n\n.PHONY: gettext\ngettext:\n\t$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale\n\t@echo\n\t@echo \"Build finished. 
The message catalogs are in $(BUILDDIR)/locale.\"\n\n.PHONY: changes\nchanges:\n\t$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes\n\t@echo\n\t@echo \"The overview file is in $(BUILDDIR)/changes.\"\n\n.PHONY: linkcheck\nlinkcheck:\n\t$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck\n\t@echo\n\t@echo \"Link check complete; look for any errors in the above output \" \\\n\t      \"or in $(BUILDDIR)/linkcheck/output.txt.\"\n\n.PHONY: doctest\ndoctest:\n\t$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest\n\t@echo \"Testing of doctests in the sources finished, look at the \" \\\n\t      \"results in $(BUILDDIR)/doctest/output.txt.\"\n\n.PHONY: coverage\ncoverage:\n\t$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage\n\t@echo \"Testing of coverage in the sources finished, look at the \" \\\n\t      \"results in $(BUILDDIR)/coverage/python.txt.\"\n\n.PHONY: xml\nxml:\n\t$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml\n\t@echo\n\t@echo \"Build finished. The XML files are in $(BUILDDIR)/xml.\"\n\n.PHONY: pseudoxml\npseudoxml:\n\t$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml\n\t@echo\n\t@echo \"Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml.\"\n"
  },
  {
    "path": "docs/conf.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# lango documentation build configuration file, created by\n# sphinx-quickstart on Wed May 25 00:07:47 2016.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport lango\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n    'sphinx.ext.autodoc',\n    'sphinxcontrib.napoleon',\n    'sphinx.ext.todo',\n    'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'lango'\ncopyright = '2016, Michael Young'\nauthor = 'Michael Young'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = lango.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = 'en'\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. 
cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  See the documentation for\n# a list of builtin themes.\nhtml_theme = 'classic'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further.  For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents.  If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar.  Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it.  The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n#   'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n#   'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'langodoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n#  author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n    (master_doc, 'lango.tex', 'lango Documentation',\n     'Michael Young', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n    (master_doc, 'lango', 'lango Documentation',\n     [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n#  dir menu entry, description, category)\ntexinfo_documents = [\n    (master_doc, 'lango', 'lango Documentation',\n     author, 'lango', 'One line description of project.',\n     'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The basename for the epub file. It defaults to the project name.\n#epub_basename = project\n\n# The HTML theme for the epub output. Since the default themes are not\n# optimized for small screen space, using the same theme for HTML and epub\n# output is usually not wise. This defaults to 'epub', a theme designed to save\n# visual space.\n#epub_theme = 'epub'\n\n# The language of the text. It defaults to the language option\n# or 'en' if the language is not set.\n#epub_language = ''\n\n# The scheme of the identifier. Typical schemes are ISBN or URL.\n#epub_scheme = ''\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#epub_identifier = ''\n\n# A unique identification for the text.\n#epub_uid = ''\n\n# A tuple containing the cover image and cover page html template filenames.\n#epub_cover = ()\n\n# A sequence of (type, uri, title) tuples for the guide element of content.opf.\n#epub_guide = ()\n\n# HTML files that should be inserted before the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_pre_files = []\n\n# HTML files that should be inserted after the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_post_files = []\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n# The depth of the table of contents in toc.ncx.\n#epub_tocdepth = 3\n\n# Allow duplicate toc entries.\n#epub_tocdup = True\n\n# Choose between 'default' and 'includehidden'.\n#epub_tocscope = 'default'\n\n# Fix unsupported image types using the Pillow.\n#epub_fix_images = False\n\n# Scale large images.\n#epub_max_image_width = 0\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#epub_show_urls = 'inline'\n\n# If false, no index is generated.\n#epub_use_index = True\n"
  },
  {
    "path": "docs/index.rst",
    "content": ".. lango documentation master file, created by\n   sphinx-quickstart on Wed May 25 00:07:47 2016.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\nWelcome to Lango's documentation!\n=================================\n\n.. toctree::\n\n  installation\n  matching\n\nReference\n==========\n\n.. toctree::\n   :maxdepth: 4\n\n   lango\n\n\nIndices and tables\n==================\n\n* :ref:`genindex`\n* :ref:`modindex`\n* :ref:`search`\n\n"
  },
  {
    "path": "docs/installation.rst",
    "content": "Installation\n=================================\n\nInstall package with pip\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n::\n\n    pip install lango\n\nDownload Stanford CoreNLP\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nMake sure you have Java installed for the Stanford CoreNLP to work.\n\n`Download Stanford CoreNLP`_\n\nExtract to any folder\n\nRun Server\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn extracted folder, run the following command to start the server:\n\n::\n\n    java -mx4g -cp \"*\" edu.stanford.nlp.pipeline.StanfordCoreNLPServer\n\n.. _Download Stanford CoreNLP: http://stanfordnlp.github.io/CoreNLP/#download"
  },
  {
    "path": "docs/lango.matcher.rst",
    "content": "lango.matcher module\n====================\n\n.. automodule:: lango.matcher\n    :members:\n    :undoc-members:\n    :show-inheritance:\n"
  },
  {
    "path": "docs/lango.parser.rst",
    "content": "lango.parser module\n===================\n\n.. automodule:: lango.parser\n    :members:\n    :undoc-members:\n    :show-inheritance:\n"
  },
  {
    "path": "docs/lango.rst",
    "content": "lango package\n=============\n\nSubmodules\n----------\n\n.. toctree::\n\n   lango.matcher\n   lango.parser\n\nModule contents\n---------------\n\n.. automodule:: lango\n    :members:\n    :undoc-members:\n    :show-inheritance:\n"
  },
  {
    "path": "docs/make.bat",
    "content": "@ECHO OFF\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=sphinx-build\r\n)\r\nset BUILDDIR=_build\r\nset ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .\r\nset I18NSPHINXOPTS=%SPHINXOPTS% .\r\nif NOT \"%PAPER%\" == \"\" (\r\n\tset ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%\r\n\tset I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%\r\n)\r\n\r\nif \"%1\" == \"\" goto help\r\n\r\nif \"%1\" == \"help\" (\r\n\t:help\r\n\techo.Please use `make ^<target^>` where ^<target^> is one of\r\n\techo.  html       to make standalone HTML files\r\n\techo.  dirhtml    to make HTML files named index.html in directories\r\n\techo.  singlehtml to make a single large HTML file\r\n\techo.  pickle     to make pickle files\r\n\techo.  json       to make JSON files\r\n\techo.  htmlhelp   to make HTML files and a HTML help project\r\n\techo.  qthelp     to make HTML files and a qthelp project\r\n\techo.  devhelp    to make HTML files and a Devhelp project\r\n\techo.  epub       to make an epub\r\n\techo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\r\n\techo.  text       to make text files\r\n\techo.  man        to make manual pages\r\n\techo.  texinfo    to make Texinfo files\r\n\techo.  gettext    to make PO message catalogs\r\n\techo.  changes    to make an overview over all changed/added/deprecated items\r\n\techo.  xml        to make Docutils-native XML files\r\n\techo.  pseudoxml  to make pseudoxml-XML files for display purposes\r\n\techo.  linkcheck  to check all external links for integrity\r\n\techo.  doctest    to run all doctests embedded in the documentation if enabled\r\n\techo.  
coverage   to run coverage check of the documentation if enabled\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"clean\" (\r\n\tfor /d %%i in (%BUILDDIR%\\*) do rmdir /q /s %%i\r\n\tdel /q /s %BUILDDIR%\\*\r\n\tgoto end\r\n)\r\n\r\n\r\nREM Check if sphinx-build is available and fallback to Python version if any\r\n%SPHINXBUILD% 1>NUL 2>NUL\r\nif errorlevel 9009 goto sphinx_python\r\ngoto sphinx_ok\r\n\r\n:sphinx_python\r\n\r\nset SPHINXBUILD=python -m sphinx.__init__\r\n%SPHINXBUILD% 2> nul\r\nif errorlevel 9009 (\r\n\techo.\r\n\techo.The 'sphinx-build' command was not found. Make sure you have Sphinx\r\n\techo.installed, then set the SPHINXBUILD environment variable to point\r\n\techo.to the full path of the 'sphinx-build' executable. Alternatively you\r\n\techo.may add the Sphinx directory to PATH.\r\n\techo.\r\n\techo.If you don't have Sphinx installed, grab it from\r\n\techo.http://sphinx-doc.org/\r\n\texit /b 1\r\n)\r\n\r\n:sphinx_ok\r\n\r\n\r\nif \"%1\" == \"html\" (\r\n\t%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The HTML pages are in %BUILDDIR%/html.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"dirhtml\" (\r\n\t%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"singlehtml\" (\r\n\t%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. 
The HTML pages are in %BUILDDIR%/singlehtml.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"pickle\" (\r\n\t%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can process the pickle files.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"json\" (\r\n\t%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can process the JSON files.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"htmlhelp\" (\r\n\t%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can run HTML Help Workshop with the ^\r\n.hhp project file in %BUILDDIR%/htmlhelp.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"qthelp\" (\r\n\t%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can run \"qcollectiongenerator\" with the ^\r\n.qhcp project file in %BUILDDIR%/qthelp, like this:\r\n\techo.^> qcollectiongenerator %BUILDDIR%\\qthelp\\lango.qhcp\r\n\techo.To view the help file:\r\n\techo.^> assistant -collectionFile %BUILDDIR%\\qthelp\\lango.ghc\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"devhelp\" (\r\n\t%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"epub\" (\r\n\t%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. 
The epub file is in %BUILDDIR%/epub.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"latex\" (\r\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; the LaTeX files are in %BUILDDIR%/latex.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"latexpdf\" (\r\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\r\n\tcd %BUILDDIR%/latex\r\n\tmake all-pdf\r\n\tcd %~dp0\r\n\techo.\r\n\techo.Build finished; the PDF files are in %BUILDDIR%/latex.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"latexpdfja\" (\r\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\r\n\tcd %BUILDDIR%/latex\r\n\tmake all-pdf-ja\r\n\tcd %~dp0\r\n\techo.\r\n\techo.Build finished; the PDF files are in %BUILDDIR%/latex.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"text\" (\r\n\t%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The text files are in %BUILDDIR%/text.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"man\" (\r\n\t%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The manual pages are in %BUILDDIR%/man.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"texinfo\" (\r\n\t%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"gettext\" (\r\n\t%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. 
The message catalogs are in %BUILDDIR%/locale.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"changes\" (\r\n\t%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.The overview file is in %BUILDDIR%/changes.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"linkcheck\" (\r\n\t%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Link check complete; look for any errors in the above output ^\r\nor in %BUILDDIR%/linkcheck/output.txt.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"doctest\" (\r\n\t%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Testing of doctests in the sources finished, look at the ^\r\nresults in %BUILDDIR%/doctest/output.txt.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"coverage\" (\r\n\t%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Testing of coverage in the sources finished, look at the ^\r\nresults in %BUILDDIR%/coverage/python.txt.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"xml\" (\r\n\t%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The XML files are in %BUILDDIR%/xml.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"pseudoxml\" (\r\n\t%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.\r\n\tgoto end\r\n)\r\n\r\n:end\r\n"
  },
  {
    "path": "docs/matching.rst",
    "content": "Matching\n--------\n\nMatching is done by comparing a set rules and matching it with a parse\ntree. You can see parse trees for sentences from\nexamples/parser\\_input.py.\n\nThe set of rules is recursive and can match multiple parts of the parse\ntree.\n\nRules can be broken down into smaller parts: - Tag - Token - Token Tree\n- Rules\n\nTag\n~~~\n\nA tag is a POS (part of speech) tag to match. A list of POS tags used by\nthe Stanford Parser can be found `here`_.\n\n::\n\n    Format:\n    tag = string\n\n    Example:\n    'NP'\n    'VP'\n    'PP'\n\nToken\n~~~~~\n\nA token is a string comprising of a tag and modifiers/labels for matching. We specify a match_label to match the tag to. We can specify opts for extracting the string from a tree. We can specify eq for matching the tree to a string.\n\n::\n\n    Example string:\n    The red car\n    \n    opts \n    -o Get object by removing \"a\", \"the\", etc. (Ex. red car)\n    -r Get raw string (Ex. The red car)\n::\n\n    Format: (only tag is required)\n    token = tag:match_label-opts=eq\n\n\n    Example: \n    'VP'\n    'NP:subject-o'\n    'NP:np'\n    'VP=run'\n    'VP:action=run'\n\nToken Tree\n~~~~~~~~~~\n\nA token tree is a recursive tree of tokens. The tree matches the\nstructure of a parse tree.\n\n::\n\n    Format:\n    token_tree = ( token token_tree token_tree ... 
)\n\n    Examples:\n    '( NP ( DT ) ( NP:subject-o ) )'\n    '( NP )'\n    '( PP ( TO=to ) ( NP:object-o ) )'\n\nRules\n~~~~~\n\nRules map token trees to dictionaries, which in turn map match labels\nto nested sets of rules.\n\n::\n\n    Format:\n    rules = {token_tree: {match_label: rules}}\n\n    Example:\n    {\n        '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )': {\n            'np': {\n                '( NP:subject-o )': {}\n            },\n            'pp': {\n                '( PP ( TO=to ) ( NP:to_object-o ) )': {},\n                '( PP ( IN=from ) ( NP:from_object-o ) )': {},\n            }\n        },\n    }\n\nWhen matching a rule to a parse tree, the token tree is matched first.\nThen, each labeled subtree is matched against the nested rules for its\nmatch label.\n\nEvery nested match label must have a matching subrule, or the rule as a\nwhole does not match.\n\nThe first rule to match is returned, so the match order follows key\nordering (use an OrderedDict if order matters). Once a rule is matched,\nthe callback function is called with the context as keyword arguments.\n\nExample\n~~~~~~~\n\nSuppose we have the sentence “Sam ran to his house” and we want to\nmatch the subject (“Sam”), the “to” object (“his house”), and the\naction (“ran”).\n\nSample parse tree for “Sam ran to his house” from the Stanford Parser:\n\n::\n\n    (S\n      (NP\n        (NNP Sam)\n        )\n      (VP\n        (VBD ran)\n          (PP\n            (TO to)\n            (NP\n              (PRP$ his)\n              (NN house)\n              )\n            )\n        )\n      )\n\nSimplified image of tree:\n\n.. 
figure:: /_static/img/sent_tree.png\n   :alt: tree\n\n   tree\n\n::\n\n    Matching:\n    Parse Tree:\n    (S (NP (NNP Sam) ) (VP (VBD ran) (PP (TO to) (NP (PRP$ his) (NN house)))))\n\n    Matched token tree: '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )'\n    Matched context:\n      np: (NP (NNP Sam))\n      action-o: 'ran'\n      pp: (PP (TO to) (NP (PRP$ his) (NN house)))\n\nRule for ‘( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )’:\n\n.. figure:: /_static/img/rule_tree_1.png\n   :alt: tree\n\n   tree\n\nMatching ‘NP’ matches the whole NP tree and converts it to a word:\n\n.. _here: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html\n\n::\n\n    Matched token tree for np: '( NP:subject-o )'\n    Matched context:\n      subject-o: 'sam'\n\nMatching ‘PP’ requires matching the nested rules:\n\n::\n\n    Matched token tree for pp: '( PP ( TO=to ) ( NP:to_object-o ) )'\n    Matched context:\n      to_object-o: 'his house'\n\n    Matched token tree for pp: '( PP ( IN=from ) ( NP:from_object-o ) )'\n    No match found\n\nPP of the sample sentence:\n\n.. figure:: /_static/img/sent_tree_pp.png\n   :alt: tree\n\n   tree\n\nNested PP rules:\n\n|tree2| |tree3|\n\nOnly the first rule matches for ‘PP’.\n\nNow that we have a match for all nested rules, we can return the\ncontext:\n\n::\n\n    Returned context:\n      action: 'ran'\n      subject: 'sam'\n      to_object: 'his house'\n\nFull code:\n\n.. 
code:: python\n\n    from lango.parser import StanfordLibParser\n    from lango.matcher import match_rules\n\n    parser = StanfordLibParser()\n\n    rules = {\n      '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )': {\n        'np': {\n            '( NP:subject-o )': {}\n        },\n        'pp': {\n            '( PP ( TO=to ) ( NP:to_object-o ) )': {},\n            '( PP ( IN=from ) ( NP:from_object-o ) )': {}\n        }\n      }\n    }\n\n    def fun(subject, action, to_object=None, from_object=None):\n        print(\"%s,%s,%s,%s\" % (subject, action, to_object, from_object))\n\n    tree = parser.parse('Sam ran to his house')\n    match_rules(tree, rules, fun)\n    # output: sam,ran,his house,None\n\n    tree = parser.parse('Billy walked from his apartment')\n    match_rules(tree, rules, fun)\n    # output: billy,walked,None,his apartment\n\n.. |tree2| image:: /_static/img/rule_tree_2.png\n.. |tree3| image:: /_static/img/rule_tree_3.png\n\n"
  },
  {
    "path": "docs/modules.rst",
    "content": "lango\n=====\n\n.. toctree::\n   :maxdepth: 4\n\n   lango\n"
  },
  {
    "path": "docs.md",
    "content": "# Docs\n\nPip Installs\n```\nsphinx-autobuild==0.6.0\nsphinx-rtd-theme==0.1.9\nsphinxcontrib-napoleon==0.5.0\n```\n\nGenerate docs\n```\nsphinx-apidoc -f -e -o docs lango\ncd docs\nmake html\n```\n\n## Development\n\n```\npython setup.py develop\n```"
  },
  {
    "path": "examples/matching.py",
    "content": "\nfrom collections import OrderedDict\nimport os\nfrom lango.parser import StanfordServerParser\nfrom lango.matcher import match_rules\n\n\n\nparser = StanfordServerParser()\n\nsents = [\n    'Call me an Uber.',\n    'Get my mother some flowers.',\n    'Find me a pizza with extra cheese.',\n    'Give Sam\\'s dog a biscuit from Petshop.'\n]\n\n\"\"\"\nme.call({'item': u'uber'})\nmy.mother.get({'item': u'flowers'})\nme.order({'item': u'pizza', u'with': u'extra cheese'})\nsam.dog.give({'item': u'biscuit', u'from': u'petshop'})\n\"\"\"\n\nsubj_obj_rules = {\n    'subj_t': OrderedDict([\n        # my brother / my mother\n        ('( NP ( PRP$:subject-o=my ) ( NN:relation-o ) )', {}),\n        # Sam's dog\n        ('( NP ( NP ( NNP:subject-o ) ( POS ) ) ( NN:relation-o ) )', {}),\n        # me\n        ('( NP:subject-o )', {}),\n    ]),\n    'obj_t': OrderedDict([\n        # pizza with onions\n        ('( NP ( NP:item-O ) ( PP ( IN:item_in-O ) ( NP:item_addon-O ) ) )', {}),\n        # pizza\n        ('( NP:item-O )', {}),\n    ])\n}\n\nrules = {\n    # Get me a pizza\n    '( S ( VP ( VB:action-o ) ( S ( NP:subj_t ) ( NP:obj_t ) ) ) )': subj_obj_rules,\n    # Get my mother flowers\n    '( S ( VP ( VB:action-o ) ( NP:subj_t ) ( NP:obj_t ) ) )': subj_obj_rules,\n}\n\ndef perform_action(action, item, subject, relation=None,\n    item_addon=None, item_in=None):\n\n    entity = subject\n    if entity == \"my\":\n        entity = \"me\"\n    if relation:\n        entity = '{0}.{1}'.format(entity, relation)\n\n    item_props = {'item': item}\n    if item_in and item_addon:\n        item_props[item_in] = item_addon\n\n    return '{0}.{1}({2})'.format(entity, action, item_props)\n\nfor sent in sents:\n    tree = parser.parse(sent)\n    print(match_rules(tree, rules, perform_action))\n"
  },
  {
    "path": "examples/multimatch.py",
    "content": "\nfrom collections import OrderedDict\nimport os\nfrom lango.parser import StanfordServerParser\nfrom lango.matcher import match_rules\n\nparser = StanfordServerParser()\n\nsents = [\n    'What religion is the President of the United States?'\n]\n\nrules = {\n    '( SBARQ ( WHNP/WHADVP:wh_t ) ( SQ ( VBZ ) ( NP:np_t ) ) )': {\n        'np_t': {\n            '( NP ( NP:subj-o ) ( PP ( IN:subj_in-o ) ( NP:obj-o ) ) )': {},\n            '( NP:subj-o )': {},\n        },\n        'wh_t': {\n            '( WHNP:whnp ( WDT ) ( NN:prop-o ) )': {},\n            '( WHNP/WHADVP:qtype-o )': {},\n        }\n    },\n    '( SBARQ:subj-o )': {},\n}\n\nkeys = ['subj', 'subj_in', 'obj', 'prop', 'qtype']\n\nfor sent in sents:\n    tree = parser.parse(sent)\n    contexts = match_rules(tree, rules, multi=True)\n    for context in contexts:\n        print(\", \".join(['%s:%s' % (k, context.get(k)) for k in keys]))\n\n\"\"\"\n5 possible matches:\nsubj:president of united states, subj_in:None, obj:None, prop:religion, qtype:None\nsubj:president of united states, subj_in:None, obj:None, prop:None, qtype:what religion\nsubj:president, subj_in:of, obj:united states, prop:religion, qtype:None\nsubj:president, subj_in:of, obj:united states, prop:None, qtype:what religion\nsubj:what religion is president of united states ?, subj_in:None, obj:None, prop:None, qtype:None\n\"\"\""
  },
  {
    "path": "examples/parser_input.py",
    "content": "\nimport os\nfrom lango.parser import StanfordServerParser\nfrom lango.matcher import match_rules\n\ndef main():\n    parser = StanfordServerParser()\n    while True:\n        try:\n            line = input(\"Enter line: \")\n            tree = parser.parse(line)\n            tree.pretty_print()\n        except EOFError:\n            print(\"Bye!\")\n            sys.exit(0)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "lango/__init__.py",
    "content": "\"\"\"\nLango is a natural language framework for matching parse trees \nand modeling conversations.\n\"\"\"\n__version__ = '0.21.0'"
  },
  {
    "path": "lango/matcher.py",
    "content": "from nltk import Tree\nimport logging\n\nlogger = logging.getLogger(__name__)\n\ndef match_rules(tree, rules, fun=None, multi=False):\n    \"\"\"Matches a Tree structure with the given query rules.\n\n    Query rules are represented as a dictionary of template to action.\n    Action is either a function, or a dictionary of subtemplate parameter to rules::\n\n        rules = { 'template' : { 'key': rules } }\n              | { 'template' : {} }\n\n    Args:\n        tree (Tree): Parsed tree structure\n        rules (dict): A dictionary of query rules\n        fun (function): Function to call with context (set to None if you want to return context)\n        multi (Bool): If True, returns all matched contexts, else returns first matched context\n    Returns:\n        Contexts from matched rules\n    \"\"\"\n    if multi:\n        context = match_rules_context_multi(tree, rules)\n    else:\n        context = match_rules_context(tree, rules)\n        if not context:\n            return None\n\n    if fun:\n        args = fun.__code__.co_varnames\n        if multi:\n            res = []\n            for c in context:\n                action_context = {}\n                for arg in args:\n                    if arg in c:\n                        action_context[arg] = c[arg]\n                res.append(fun(**action_context))\n            return res\n        else:\n            action_context = {}\n            for arg in args:\n                if arg in context:\n                    action_context[arg] = context[arg]\n            return fun(**action_context)\n    else:\n        return context\n\ndef match_rules_context(tree, rules, parent_context={}):\n    \"\"\"Recursively matches a Tree structure with rules and returns context\n\n    Args:\n        tree (Tree): Parsed tree structure\n        rules (dict): See match_rules\n        parent_context (dict): Context of parent call\n    Returns:\n        dict: Context matched dictionary of matched rules or\n       
 None if no match\n    \"\"\"\n    for template, match_rules in rules.items():\n        context = parent_context.copy()\n        if match_template(tree, template, context):\n            for key, child_rules in match_rules.items():\n                child_context = match_rules_context(context[key], child_rules, context)\n                if child_context:\n                    for k, v in child_context.items():\n                        context[k] = v\n                else:\n                    return None\n            return context\n    return None\n\ndef cross_context(contextss):\n    \"\"\"Cross product of all context lists::\n\n        [[a], [b], [c]] -> [[a] x [b] x [c]]\n\n    Args:\n        contextss (list): List of lists of contexts\n    Returns:\n        list: Merged contexts, one for each combination drawn from the lists\n    \"\"\"\n    if not contextss:\n        return []\n\n    product = [{}]\n\n    for contexts in contextss:\n        tmp_product = []\n        for c in contexts:\n            for ce in product:\n                c_copy = c.copy()\n                c_copy.update(ce)\n                tmp_product.append(c_copy)\n        product = tmp_product\n    return product\n\ndef match_rules_context_multi(tree, rules, parent_context={}):\n    \"\"\"Recursively matches a Tree structure with rules and returns all contexts\n\n    Args:\n        tree (Tree): Parsed tree structure\n        rules (dict): See match_rules\n        parent_context (dict): Context of parent call\n    Returns:\n        list: Contexts for every matching combination of rules\n        (empty list if no match)\n    \"\"\"\n    all_contexts = []\n    for template, match_rules in rules.items():\n        context = parent_context.copy()\n        if match_template(tree, template, context):\n            child_contextss = []\n            if not match_rules:\n                all_contexts += [context]\n            else:\n                for key, child_rules in match_rules.items():\n                    child_contextss.append(match_rules_context_multi(context[key], child_rules, context))\n                all_contexts += cross_context(child_contextss)\n    return 
all_contexts\n\ndef match_template(tree, template, args=None):\n    \"\"\"Check if match string matches Tree structure\n    \n    Args:\n        tree (Tree): Parsed Tree structure of a sentence\n        template (str): String template to match. Example: \"( S ( NP ) )\"\n    Returns:\n        bool: If they match or not\n    \"\"\"\n    tokens = get_tokens(template.split())\n    cur_args = {}\n    if match_tokens(tree, tokens, cur_args):\n        if args is not None:\n            for k, v in cur_args.items():\n                args[k] = v\n        logger.debug('MATCHED: {0}'.format(template))\n        return True\n    else:\n        return False\n\n\ndef match_tokens(tree, tokens, args):\n    \"\"\"Check if stack of tokens matches the Tree structure\n    \n    Special matching rules that can be specified in the template::\n\n        ':label': Label a token, the token will be returned as part of the context with key 'label'.\n        '-@': Additional single letter argument determining return format of labeled token. 
Valid options are:\n            '-r': Return token as raw lowercase string\n            '-R': Return token as raw string\n            '-o': Return token as object lowercase string\n            '-O': Return token as object string\n        '=word1|word2|...|wordn': Force match against raw lowercase string\n        '$': Match end of tree\n\n    Args:\n        tree : Parsed tree structure\n        tokens : Stack of tokens\n    Returns:\n        True if the tokens match the tree, False otherwise\n    \"\"\"\n    arg_type_to_func = {\n        'r': get_raw_lower,\n        'R': get_raw,\n        'o': get_object_lower,\n        'O': get_object,\n    }\n\n    if len(tokens) == 0:\n        return True\n\n    if not isinstance(tree, Tree):\n        return False\n\n    root_token = tokens[0]\n\n    # Equality match (=word1|word2|...)\n    if root_token.find('=') >= 0:\n        eq_tokens = root_token.split('=')[1].lower().split('|')\n        root_token = root_token.split('=')[0]\n        word = get_raw_lower(tree)\n        if word not in eq_tokens:\n            return False\n\n    # Extract argument (:match_label or :match_label-opt)\n    if root_token.find(':') >= 0:\n        arg_tokens = root_token.split(':')[1].split('-')\n        if len(arg_tokens) == 1:\n            arg_name = arg_tokens[0]\n            args[arg_name] = tree\n        else:\n            arg_name = arg_tokens[0]\n            arg_type = arg_tokens[1]\n            args[arg_name] = arg_type_to_func[arg_type](tree)\n        root_token = root_token.split(':')[0]\n\n    # Token is not the wildcard and the label does not match\n    if root_token != '.' and tree.label() not in root_token.split('/'):\n        return False\n\n    # Check end symbol\n    if tokens[-1] == '$':\n        if len(tree) != len(tokens[:-1]) - 1:\n            return False\n        else:\n            tokens = tokens[:-1]\n\n    # Check number of tokens\n    if len(tree) < len(tokens) - 1:\n        return False\n\n    for i in range(len(tokens) - 1):\n        if not match_tokens(tree[i], tokens[i + 1], args):\n            return False\n    return True\n\n\ndef get_tokens(tokens):\n    \"\"\"Recursively gets tokens from a match list\n\n    Args:\n        tokens : List of tokens ['(', 'S', '(', 'NP', ')', ')']\n    Returns:\n        Stack of tokens\n    \"\"\"\n    tokens = tokens[1:-1]\n    ret = []\n    start = 0\n    stack = 0\n    for i in range(len(tokens)):\n        if tokens[i] == '(':\n            if stack == 0:\n                start = i\n            stack += 1\n        elif tokens[i] == ')':\n            stack -= 1\n            if stack < 0:\n                raise Exception('Bracket mismatch: ' + str(tokens))\n            if stack == 0:\n                ret.append(get_tokens(tokens[start:i + 1]))\n        else:\n            if stack == 0:\n                ret.append(tokens[i])\n    if stack != 0:\n        raise Exception('Bracket mismatch: ' + str(tokens))\n    return ret\n\n\ndef get_object(tree):\n    \"\"\"Get the object string from the tree.\n\n    Removes determiners and possessive markers::\n\n        the\n        a/an\n        's\n\n    Args:\n        tree (Tree): Parsed tree structure\n    Returns:\n        Resulting string of tree ``(Ex: \"red car\")``\n    \"\"\"\n    if isinstance(tree, Tree):\n        if tree.label() == 'DT' or tree.label() == 'POS':\n            return ''\n        words = []\n        for child in tree:\n            words.append(get_object(child))\n        return ' '.join([_f for _f in words if _f])\n    else:\n        return tree\n\n\ndef get_object_lower(tree):\n    \"\"\"Get the object string from the tree in lowercase.\"\"\"\n    return get_object(tree).lower()\n\n\ndef get_raw(tree):\n    \"\"\"Get the exact words in the tree object.\n\n    Args:\n        tree (Tree): Parsed tree structure\n    Returns:\n        Resulting string of tree ``(Ex: \"The red car\")``\n    \"\"\"\n    if isinstance(tree, Tree):\n        words = []\n        for child in tree:\n            words.append(get_raw(child))\n        return ' '.join(words)\n    else:\n        return tree\n\n\ndef get_raw_lower(tree):\n    \"\"\"Get the exact words in the tree object in lowercase.\"\"\"\n    return get_raw(tree).lower()"
  },
  {
    "path": "lango/parser.py",
    "content": "from nltk.parse.stanford import StanfordParser, GenericStanfordParser\nfrom nltk.internals import find_jars_within_path\nfrom nltk.tree import Tree\nfrom pycorenlp import StanfordCoreNLP\n\n\nclass Parser:\n    \"\"\"Abstract Parser class\"\"\"\n    def __init__():\n        pass\n\n    def parse(self, sent):\n        pass\n\n\nclass OldStanfordLibParser(Parser):\n    \"\"\"For StanfordParser < 3.6.0\"\"\"\n\n    def __init__(self):\n        self.parser = StanfordParser()\n\n    def parse(self, line):\n        \"\"\"Returns tree objects from a sentence\n\n        Args:\n            line: Sentence to be parsed into a tree\n\n        Returns:\n            Tree object representing parsed sentence\n            None if parse fails\n        \"\"\"\n        tree = list(self.parser.raw_parse(line))[0]\n        tree = tree[0]\n        return tree\n\n\nclass StanfordLibParser(OldStanfordLibParser):\n    \"\"\"For StanfordParser == 3.6.0\"\"\"\n    def __init__(self):\n        self.parser = StanfordParser(\n            model_path='edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz')\n        stanford_dir = self.parser._classpath[0].rpartition('/')[0]\n        self.parser._classpath = tuple(find_jars_within_path(stanford_dir))\n\n\nclass StanfordServerParser(Parser, GenericStanfordParser):\n    \"\"\"Follow the readme to setup the Stanford CoreNLP server\"\"\"\n    def __init__(self, host='localhost', port=9000, properties={}):\n        url = 'http://{0}:{1}'.format(host, port)\n        self.nlp = StanfordCoreNLP(url)\n\n        if not properties:\n            self.properties = {\n                'annotators': 'parse',\n                'outputFormat': 'json',\n            }\n        else:\n            self.properties = properties\n\n    def _make_tree(self, result):\n        return Tree.fromstring(result)\n\n    def parse(self, sent):\n        output = self.nlp.annotate(sent, properties=self.properties)\n\n        # Got random html, return empty tree\n        
if isinstance(output, str):\n            return Tree('', [])\n\n        parse_output = output['sentences'][0]['parse'] + '\\n\\n'\n        tree = next(next(self._parse_trees_output(parse_output)))[0]\n        return tree"
  },
  {
    "path": "readme.md",
    "content": "# Lango\n\n[![Gitter](https://badges.gitter.im/lango-nlp/Lobby.svg)](https://gitter.im/lango-nlp/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n\nLango is a natural language processing library for working with the building blocks of language. It includes tools for:\n\n* matching [constituent parse trees](https://en.wikipedia.org/wiki/Parse_tree#Constituency-based_parse_trees). \n* modeling conversations (TODO)\n\nNeed help? Ask me for help on [Gitter](https://gitter.im/lango-nlp/Lobby)\n\n## Installation\n\n### Install package with pip\n\n```\npip install lango\n```\n\n### Download Stanford CoreNLP\n\nMake sure you have Java installed for the Stanford CoreNLP to work.\n\n[Download Stanford CoreNLP](http://stanfordnlp.github.io/CoreNLP/#download)\n\nExtract to any folder\n\n### Run the Stanford CoreNLP server\n\nRun the following command in the folder where you extracted Stanford CoreNLP\n```\njava -mx4g -cp \"*\" edu.stanford.nlp.pipeline.StanfordCoreNLPServer\n```\n\n## Docs\n\n- [Blog Post](http://blog.ayoungprogrammer.com/2016/07/natural-language-understanding-by.html/)\n- [Read the docs](http://lango.readthedocs.io/en/latest/)\n- [Examples](http://github.com/ayoungprogrammer/lango/tree/master/examples)\n\n## Matching\n\nMatching is done by comparing a set rules and matching it with a parse tree. You\ncan see parse trees for sentences from examples/parser_input.py. \n\nThe set of rules is recursive and can match multiple parts of the parse tree.\n\nRules can be broken down into smaller parts:\n- Tag\n- Token\n- Token Tree\n- Rules\n\n### Tag\n\nA tag is a POS (part of speech) tag to match. A list of POS tags used by the Stanford Parser can be found [here](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html).\n\n```\nFormat:\ntag = string\n\nExample:\n'NP'\n'VP'\n'PP'\n```\n\n### Token\n\nA token is a string comprising of a tag and modifiers/labels for matching. We specify a match_label to match the tag to. 
We can specify opts for extracting the string from a tree. We can specify eq for matching the tree to a string.\n\n```\nExample string:\nThe red car\n\nopts:\n-o Get object by removing \"a\", \"the\", etc. (Ex. red car)\n-r Get raw string (Ex. The red car)\n```\n\n```\nFormat: (only tag is required)\ntoken = tag:match_label-opts=eq\n\nExample: \n'VP'\n'NP:subject-o'\n'NP:np'\n'VP=run'\n'VP:action=run'\n```\n\n### Token Tree\n\nA token tree is a recursive tree of tokens. The tree matches the structure of a parse tree.\n\n```\nFormat:\ntoken_tree = ( token token_tree token_tree ... )\n\nExamples: \n'( NP ( DT ) ( NP:subject-o ) )'\n'( NP )'\n'( PP ( TO=to ) ( NP:object-o ) )'\n```\n\n### Rules\n\nRules are a dictionary of token trees to dictionaries of matching labels to a \nnested set of rules. \n\n```\nFormat:\nrules = {token_tree: {match_label: rules}}\n\nExample: \n{\n    '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )': {\n        'np': {\n            '( NP:subject-o )': {}\n        },\n        'pp': {\n            '( PP ( TO=to ) ( NP:to_object-o ) )': {},\n            '( PP ( IN=from ) ( NP:from_object-o ) )': {},\n        }\n    },\n}\n```\n\nWhen matching a rule to a parse tree, the token tree is first matched. Then, all\nmatching tags are matched to nested rules corresponding to their matching label.\n\nAll nested match labels must have a subrule match or the rules will not match.\n\nThe first rule to match is returned so the order of match is based on key \nordering (use OrderedDict if order matters). Once a rule is matched, it calls\nthe callback function with the context as arguments.\n\n### Example\n\nSuppose we have the sentence \"Sam ran to his house\" and we wanted to match the\nsubject (\"Sam\"), the object (\"his house\") and the action (\"ran\"). \n\nSample parse tree for \"Sam ran to his house\" from the Stanford Parser. 
\n\n```\n(S\n  (NP \n    (NNP Sam)\n    )\n  (VP\n    (VBD ran)\n      (PP \n        (TO to)\n        (NP\n          (PRP$ his)\n          (NN house)\n          )\n        )\n    )\n  )\n```\n\nSimplified image of tree:\n\n![tree](/docs/_static/img/sent_tree.png)\n\n```\nMatching:\nParse Tree:\n(S (NP (NNP Sam)) (VP (VBD ran) (PP (TO to) (NP (PRP$ his) (NN house)))))\n\nMatched token tree: '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )'\nMatched context:\n  np: (NP (NNP Sam))\n  action: 'ran'\n  pp: (PP (TO to) (NP (PRP$ his) (NN house)))\n```\n\nRule for '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )':\n\n![tree](/docs/_static/img/rule_tree_1.png)\n\nMatching 'NP' matches the whole NP tree and converts it to a string:\n\n```\nMatched token tree for np: '( NP:subject-o )'\nMatched context:\n  subject: 'sam'\n```\n\nMatching 'PP' requires matching the nested rules:\n\n```\nMatch token tree for pp: '( PP ( TO=to ) ( NP:to_object-o ) )'\nMatch context:\n  to_object: 'his house'\n\nMatch token tree for pp: '( PP ( IN=from ) ( NP:from_object-o ) )'\nNo match found\n```\n\nPP of the sample sentence:\n\n![tree](/docs/_static/img/sent_tree_pp.png)\n\nNested PP rules:\n\n![tree](/docs/_static/img/rule_tree_2.png)\n![tree](/docs/_static/img/rule_tree_3.png)\n\nOnly the first rule matches for 'PP'.\n\nNow that we have a match for all nested rules, we can return the context:\n\n```\nReturned context:\n  action: 'ran'\n  subject: 'sam'\n  to_object: 'his house'\n```\n\nFull code:\n\n```python\nfrom lango.parser import StanfordServerParser\nfrom lango.matcher import match_rules\n\nparser = StanfordServerParser()\n\nrules = {\n  '( S ( NP:np ) ( VP ( VBD:action-o ) ( PP:pp ) ) )': {\n    'np': {\n        '( NP:subject-o )': {}\n    },\n    'pp': {\n        '( PP ( TO=to ) ( NP:to_object-o ) )': {},\n        '( PP ( IN=from ) ( NP:from_object-o ) )': {}\n    }\n  }\n}\n\ndef fun(subject, action, to_object=None, from_object=None):\n    print(\"%s, %s, %s, %s\" % (subject, action, to_object, from_object))\n\ntree = parser.parse('Sam ran to his house')\nmatch_rules(tree, rules, fun)\n# output should be: sam, ran, his house, None\n\ntree = parser.parse('Billy walked from his apartment')\nmatch_rules(tree, rules, fun)\n# output should be: billy, walked, None, his apartment\n```\n"
  },
  {
    "path": "requirements.txt",
    "content": "nltk==3.1\npycorenlp==0.3.0"
  },
  {
    "path": "setup.py",
    "content": "from setuptools import find_packages, setup\nimport lango\n\nsetup(\n    name='Lango',\n    version=lango.__version__,\n    description='Natural Language Framework for Matching Parse Trees and Modeling Conversation',\n    packages=find_packages(),\n    author='Michael Young',\n    author_email='michaelyoung1995@gmail.com',\n    url='https://github.com/ayoungprogrammer/lango',\n    scripts=[],\n    install_requires=[\n        'nltk',\n        'pycorenlp'\n    ],\n)\n"
  }
]