[
  {
    "path": "AUTHORS",
    "content": "# This is the list of Moonlight OMR authors for copyright purposes.\n#\n# Copyright for contributions made under Google's Corporate CLA belong to the\n# contributor's organization. Contributions made under the Individual CLA belong\n# to the author. Individual contributors are recognized separately in the\n# \"CONTRIBUTORS\" file.\nGoogle Inc.\nNuno Jesus\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# How to Contribute\n\nWe'd love to accept your patches and contributions to this project. There are\njust a few small guidelines you need to follow.\n\n## Contributor License Agreement\n\nContributions to this project must be accompanied by a Contributor License\nAgreement. You (or your employer) retain the copyright to your contribution;\nthis simply gives us permission to use and redistribute your contributions as\npart of the project. Head over to <https://cla.developers.google.com/> to see\nyour current agreements on file or to sign a new one.\n\nYou generally only need to submit a CLA once, so if you've already submitted one\n(even if it was for a different project), you probably don't need to do it\nagain.\n\n## Code reviews\n\nAll submissions, including submissions by project members, require review. We\nuse GitHub pull requests for this purpose. Consult\n[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more\ninformation on using pull requests.\n\n## Community Guidelines\n\nThis project follows [Google's Open Source Community\nGuidelines](https://opensource.google.com/conduct/).\n"
  },
  {
    "path": "CONTRIBUTORS",
    "content": "# This is the list of individual contributors to the Moonlight OMR project.\n#\n# Some contributions belong to the organization of the contributor. Copyright is\n# tracked separately, in the \"AUTHORS\" file.\n#\n# We will make an effort to recognize all contributors here. However, the source\n# of truth for contributors is the commit author in source control history.\nDan Ringwalt\nLarry Li\nNuno Jesus\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "<img align=\"center\" width=\"400\" height=\"94,358\" src=\"https://user-images.githubusercontent.com/34600369/40580500-74088e4a-6137-11e8-9705-ecac1499b1ce.png\">\n\n# Moonlight Optical Music Recognition (OMR) [![Build Status](https://travis-ci.org/tensorflow/moonlight.svg?branch=master)](https://travis-ci.org/tensorflow/moonlight)\n\nAn experimental [optical music\nrecognition](https://en.wikipedia.org/wiki/Optical_music_recognition) engine.\n\nMoonlight reads PNG image(s) containing sheet music and outputs\n[MusicXML](https://www.musicxml.com/) or a\n[NoteSequence message](https://github.com/tensorflow/magenta/blob/master/magenta/protobuf/music.proto).\nMusicXML is a standard sheet music interchange format, and `NoteSequence` is\nused by [Magenta](http://magenta.tensorflow.org) for training generative music\nmodels.\n\nMoonlight is not an officially supported Google product.\n\n### Command-Line Usage\n\n    git clone https://github.com/tensorflow/moonlight\n    cd moonlight\n    # You may want to run this inside a virtualenv.\n    pip install -r requirements.txt\n    # Build the OMR command-line tool.\n    bazel build moonlight:omr\n    # Prints a Score message.\n    bazel-bin/moonlight/omr moonlight/testdata/IMSLP00747-000.png\n    # Scans several pages and prints a NoteSequence message.\n    bazel-bin/moonlight/omr --output_type=NoteSequence IMSLP00001-*.png\n    # Writes MusicXML to ~/mozart.xml.\n    bazel-bin/moonlight/omr --output_type=MusicXML --output=$HOME/mozart.xml \\\n        corpus/56/IMSLP56442-*.png\n\nThe `omr` CLI will print a [`Score`](moonlight/protobuf/musicscore.proto)\nmessage by default, or [MusicXML](https://www.musicxml.com/) or a\n`NoteSequence` message if specified.\n\nMoonlight is intended to be run in bulk, and will not offer a full UI for\ncorrecting the score. 
The main entry point will be an Apache Beam pipeline that\nprocesses an entire corpus of images.\n\nThere is no release yet, and Moonlight is not ready for end users. To run\ninteractively or import the module, you can use the [sandbox\ndirectory](sandbox/README.md). Moonlight will be used offline for digitizing\na scanned corpus (it can run on common cloud compute platforms, so OS\ncompatibility is not a priority).\n\n### Dependencies\n\n* Linux\n  - Note: The pinned dependency versions are fragile; updating them, or\n    building on another OS, may break the build in subtle ways.\n* [Protobuf 3.6.1](https://pypi.org/project/protobuf/3.6.1/)\n* [Bazel 0.20.0](https://github.com/bazelbuild/bazel/releases/tag/0.20.0). We\n  encountered some errors using Bazel 0.21.0 to build Protobuf 3.6.1, which is\n  the latest Protobuf release at the time of writing.\n* Python version supported by TensorFlow (Python 3.5-3.7)\n* Python dependencies specified in [requirements.txt](requirements.txt).\n\n### Resources\n\n[Forum](https://groups.google.com/forum/#!forum/moonlight-omr)\n"
  },
  {
    "path": "WORKSPACE",
    "content": "http_archive(\n    name = \"com_google_protobuf\",\n    sha256 = \"40f009cb0c190816a52fc21d45c26558ee7d63c3bd511b326bd85739b2fd99a6\",\n    strip_prefix = \"protobuf-3.6.1\",\n    url = \"https://github.com/google/protobuf/releases/download/v3.6.1/protobuf-python-3.6.1.tar.gz\",\n)\n\nnew_http_archive(\n    name = \"six_archive\",\n    build_file = \"six.BUILD\",\n    sha256 = \"105f8d68616f8248e24bf0e9372ef04d3cc10104f1980f54d57b2ce73a5ad56a\",\n    strip_prefix = \"six-1.10.0\",\n    url = \"https://pypi.python.org/packages/source/s/six/six-1.10.0.tar.gz#md5=34eed507548117b2ab523ab14b2f8b55\",\n)\n\nbind(\n    name = \"six\",\n    actual = \"@six_archive//:six\",\n)\n\nnew_http_archive(\n    name = \"magenta\",\n    strip_prefix = \"magenta-48a199085e303eeae7c36068f050696209b856bb/magenta\",\n    url = \"https://github.com/tensorflow/magenta/archive/48a199085e303eeae7c36068f050696209b856bb.tar.gz\",\n    sha256 = \"931fb7b57714d667db618b0c31db1444e44baab17865ad66b76fd24d9e20ad6d\",\n    build_file_content = \"\",\n)\n\nnew_http_archive(\n    name = \"omr_regression_test_data\",\n    strip_prefix = \"omr_regression_test_data_20180516\",\n    url = \"https://github.com/tensorflow/moonlight/releases/download/v2018.05.16-data/omr_regression_test_data_20180516.tar.gz\",\n    sha256 = \"b47577ee6b359c2cbbcdb8064c6bd463692c728c55ec5c0ab78139165ba8f35a\",\n    build_file_content = \"\"\"\npackage(default_visibility = [\"//visibility:public\"])\nfilegroup(\n    name = \"omr_regression_test_data\",\n    srcs = glob([\"**/*.png\"]),\n)\n\"\"\",\n)\n"
  },
  {
    "path": "docs/concepts.md",
    "content": "# Moonlight OMR Concepts\n\n## Diagram\n\n<img src=\"concepts_diagram.png\" />\n\n## Glossary\n\n| Term         | Definition                                                    |\n| ------------ | ------------------------------------------------------------- |\n| barline      | A vertical line spanning one or more staves, which is not a   |\n:              : stem.                                                         :\n| beam         | A thick line connecting multiple stems horizontally. Each     |\n:              : beam halves the duration of a filled notehead connected to    :\n:              : the stem, which would otherwise be a quarter note.            :\n| notehead     | An ellipse representing a single note. May be filled (quarter |\n:              : or lesser value) or empty (half or whole note).               :\n| ledger line  | An extra line above or below the 5 staff lines.               |\n| staff        | The object which notes and other glyphs are placed on.        |\n| staff system | One or more staves joined by at least one barline.            |\n| staves       | Plural of staff.                                              |\n| stem         | A vertical line attached to a notehead. All noteheads except  |\n:              : for whole notes should have a stem.                           :\n\n## Staves\n\nStaves have 5 parallel, horizontal lines, and are parameterized by the center\n(3rd) line, and the staffline distance (vertical distance between consecutive\nstaff lines). The staffline distance is constant for reasonable quality printed\nmusic scores, and this representation avoids redundancy and makes it possible to\nfind the coordinates of each staff line.\n\n## Staff Positions\n\nGlyphs are vertically centered either on staff lines (or ledger lines), or on\nthe space halfway between lines. 
We refer to each of these y coordinates as a\n*position*.\n\n**Note**: We still refer to positions as *stafflines* in many places, which is\ncounterintuitive, since positions are also on the space between lines, and can\nbe a potential ledger line which is empty in a particular image. We are in the\nprocess of renaming stafflines in this context to staff positions.\n\n*   Staff position 0 is the third staff line (staff center line), which is also\n    the y-coordinate that is output by staff detection.\n*   Staff positions are half the staffline distance apart. Always calculate the\n    relative y position by <code>tf.floordiv(staffline_distance * y_position,\n    2)</code> instead of dividing by 2 first.\n*   In treble clef, staff position 0 is B4, -6 is C4, -1 is A4, and +1 is C5.\n*   In bass clef, staff position 0 is D3, +3 is G3, and +6 is C4.\n\n## Glyphs\n\nGlyphs are defined in\n[<code>musicscore.proto</code>](../moonlight/protobuf/musicscore.proto). Each\nglyph has an x coordinate on the original image, and a y position (staff\nposition). The staff position determines the pitch of the glyph, if applicable.\nIf the glyph is especially large (e.g. clefs) or is not centered on a particular\nvertical position, we choose a *canonical* staff position (e.g. the G line for\ntreble clef, aka G clef).\n\n### Glyph centers\n\n*   Every glyph needs a canonical center coordinate. The classifier will detect\n    the glyph if the window (e.g. convolutional filter) is centered on this\n    point.\n*   Noteheads are centered on the middle of the ellipse, which should be\n    exactly on a staffline or halfway between stafflines.\n*   Accidentals should be centered on their center of mass. For flats, sharps,\n    and naturals, this is the center of the empty space inside them. 
For double\n    sharps, this is the middle of the crosshairs.\n*   Treble clef (\"G clef\") is centered on the intersection of the thin vertical\n    line with the G staffline (staff position -2).\n*   Bass clef (\"F clef\") is centered on the F staffline (staff position +2). The\n    x coordinate is halfway between the filled circle on the left and the\n    vertical segment on the right.\n*   All rests should normally be centered at staff position 0, unless shifted\n    from their usual position.\n"
  },
  {
    "path": "docs/engine.md",
    "content": "## OMR Engine Detailed Reference\n\nThe OMR engine converts a PNG image to a Magenta\n[NoteSequence message](https://github.com/magenta/note-seq/blob/master/note_seq/protobuf/music.proto),\nwhich is interoperable with MIDI and MusicXML.\n\nOMR uses TensorFlow for glyph (symbol) classification, as well as all other\ncompute-intensive steps. Final processing is done in pure Python. The entry\npoint is [OMREngine.run](../moonlight/engine.py). Glyph classification is\nconfigurable by the `glyph_classifier_fn` argument, and other subtasks of image\nrecognition are part of the [Structure](../moonlight/structure/__init__.py).\n\n### Diagram\n\n<img src=\"engine_diagram.svg\">\n\n### API\n\nThe entry point for running OMR is [`OMREngine.run()`](../moonlight/engine.py).\nIt takes in a list of PNG filenames and outputs a `Score` or `NoteSequence`\nmessage. The `Score` can further be converted to\n[MusicXML](../moonlight/conversions/musicxml.py).\n\n### TensorFlow Graph\n\nFor maximum parallelism, all processing is run in the same TensorFlow graph. The\ngraph is run by `OMREngine._get_page()`. This also evaluates the `Structure` in\nthe same graph. `Structure` is a wrapper for detectors which extract information\nfrom the image, including [staves](../moonlight/staves/hough.py), [vertical\nlines](../moonlight/structure/verticals.py), and [note\nbeams](../moonlight/structure/beams.py).\n\n#### Structure\n\n[Structure](../moonlight/structure/__init__.py) holds the structural elements\nthat need to be evaluated for OMR, but does not do any symbol recognition. The\nstructure encompasses staves, beams, and vertical lines, and may contain more\nelements in the future (e.g. full connected component analysis) which can be\nused to detect more elements (e.g. note dots). 
Structure detection is currently\nsimple computer vision rather than ML, but it can easily be swapped out for a\ndifferent TensorFlow model.\n\n##### Staffline Distance Estimation\n\nWe estimate the [staffline\ndistance(s)](../moonlight/staves/staffline_distance.py) of the entire image.\nA single page may contain staves of different sizes for different parts, but\nthere should be just a few possible staffline distance values.\n\n##### Staff Detection\n\nConcrete subclasses of [BaseStaffDetector](../moonlight/staves/base.py) take in\nthe image and produce:\n\n*   `staves`: Tensor of shape `(num_staves, num_points, 2)`. Coordinates of the\n    staff center line (third line on the staff).\n*   `staffline_distance`: Vector of the estimated staffline distance (distance\n    between consecutive staff lines) for each staff.\n*   `staffline_thickness`: Scalar thickness of staff lines. Assumed to be the\n    same for all staves.\n*   `staves_interpolated_y`: Tensor of shape `(num_staves, width)`. For each\n    staff and column of the image, outputs the interpolated y position of the\n    staff center line.\n\n##### Staff Removal\n\n[StaffRemover](../moonlight/staves/removal.py) takes in the image and staves,\nand outputs `remove_staves`, which is the image with the staff lines erased.\nThis is useful so that [glyphs](concepts.md) look the same whether they are\ncentered on a staff line or the space between lines. It is also used within the\nstructure, by beam detection.\n\n##### Beam Detection\n\n[Beams](../moonlight/structure/beams.py) are currently detected from connected\ncomponents on an\n[eroded](https://en.wikipedia.org/wiki/Mathematical_morphology#Binary_morphology)\nstaves-removed image. These are attached to notes by a `BeamProcessor`.\n\n##### Vertical Line Detection\n\n[ColumnBasedVerticals](../moonlight/structure/verticals.py) detects all vertical\nlines in the image. 
These will later be used as either stems or barlines.\n\n#### Glyph Classification: 1-D Convolutional Model\n\nWe also run a glyph classifier as part of the TensorFlow graph, which outputs\npredictions.\n\n##### Staffline Extraction\n\nGlyphs are considered to lie on a black staff line, or halfway between staff\nlines. For OMR, extracted stafflines are slices of the image that are either\ncentered on a staff line, or halfway between staff lines. The line that the\nextracted staffline lies on may be referred to simply as a staffline, or as a y\nposition of the staff.\n\n[StafflineExtractor](../moonlight/staves/staffline_extractor.py) extracts these\nvertical slices of the image, and scales their height to a constant value\n(currently, 18 pixels tall). `StaffRemover` is used so that all extracted\nstafflines should look similar.\n\n##### Glyph Classification\n\n[Glyphs](concepts.md) are classified on small, horizontal slices (currently, 15\npixels wide) of the extracted staffline, using a 1-D convolutional model.\n\nA [GlyphClassifier](../moonlight/glyphs/base.py) outputs a Tensor\n`staffline_predictions` of shape `(num_staves, num_stafflines, width)`. The\nvalues correspond to the `Glyph.Type` enum. Value 0 (UNKNOWN_TYPE) is not used;\nvalue 1 (NONE) corresponds to no glyph.\n\n### Post-Processing\n\n#### Page Construction\n\nOMR processing operates on [Page\nprotos](../moonlight/protobuf/musicscore.proto). The Page is first constructed\nby `BaseGlyphClassifier.get_page`, which populates the glyphs on each staff.\nStaff location information is then added by `StaffProcessor`.\n\nSingle `Glyph`s are created from consecutive runs in `staffline_predictions`\nthat are classified as the same glyph type.\n\nAdditional processors modify the `Page` in place, usually adding information\nfrom the `Structure`. 
Each page is run through\n[`page_processors.process()`](../moonlight/page_processors.py), and then the\nscore (containing all pages) is run through\n[`score_processors.process()`](../moonlight/score_processors.py).\n\n#### Stem Detection\n\n[Stems](../moonlight/structure/stems.py) finds stem candidates from the vertical\nlines, and adds a `Stem` to notehead `Glyph`s if the closest stem is close\nenough to the expected position. The `ScoreReader` considers multiple noteheads\nwith identical `Stem`s as a single chord. Stems will also be used as a negative\nsignal to avoid detecting barlines in the same area.\n\n#### Beam Processing\n\nBeams from the `Structure` that are close enough to a stem are added to one or\nmore notes by [BeamProcessor](../moonlight/structure/beam_processor.py).\n\n#### Barlines\n\n[Barlines](../moonlight/structure/barlines.py) are detected from the verticals\nif they have not already been used as a stem.\n\n#### Score Reading\n\nThe [ScoreReader](../moonlight/score/reader.py) is the only score processor. It\ncan potentially use state that lasts across multiple pages, such as the current\ntime in the score, which needs to persist for the entire score.\n\nStaves are scanned from left to right for glyphs. The `ScoreReader` manages a\nhierarchy of state, from the global `ScoreState` to the `MeasureState`, holding\nlocal state such as accidentals. Based on the preceding `Glyph`s, each notehead\n`Glyph` gets assigned a\n[`Note`](https://github.com/magenta/note-seq/blob/d7153cdb26758a69c2fa022782c5817970de7066/note_seq/protobuf/music.proto#L104)\nfield holding its musical value.\n\nAfterwards, the `Score` can be converted to a `NoteSequence` (just pulling out\nall of the `Note`s) or [MusicXML](../moonlight/conversions/musicxml.py).\n"
  },
  {
    "path": "moonlight/BUILD",
    "content": "# Description:\n# Optical music recognition using TensorFlow.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\n# The OMR engine. Entry point for running OMR.\npy_library(\n    name = \"engine\",\n    srcs = [\"engine.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":image\",\n        \":page_processors\",\n        \":score_processors\",\n        \"//moonlight/conversions\",\n        \"//moonlight/glyphs:saved_classifier_fn\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/staves:base\",\n        \"//moonlight/structure\",\n        \"//moonlight/structure:beams\",\n        \"//moonlight/structure:components\",\n        \"//moonlight/structure:verticals\",\n        # numpy dep\n        # six dep\n        # tensorflow dep\n    ],\n)\n\n# The omr CLI for running locally on a single score.\npy_binary(\n    name = \"omr\",\n    srcs = [\"omr.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":engine\",\n        # disable_tf2\n        \"@com_google_protobuf//:protobuf_python\",\n        # absl dep\n        \"//moonlight/conversions\",\n        \"//moonlight/glyphs:saved_classifier_fn\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"omr_endtoend_test\",\n    size = \"large\",\n    srcs = [\"omr_endtoend_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    shard_count = 4,\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":engine\",\n        # disable_tf2\n        # pillow dep\n        # absl/testing dep\n        # librosa dep\n        # lxml dep\n        \"//moonlight/conversions\",\n        \"@magenta//protobuf:music_py_pb2\",\n        # numpy dep\n        # tensorflow.python.platform dep\n    ],\n)\n\npy_test(\n    name = \"omr_regression_test\",\n    size = \"large\",\n    srcs = [\"omr_regression_test.py\"],\n    args = [\"--corpus_dir=../omr_regression_test_data\"],\n    data = 
[\"@omr_regression_test_data\"],\n    shard_count = 4,\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":engine\",\n        # disable_tf2\n        # absl/testing dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/score:reader\",\n    ],\n)\n\npy_library(\n    name = \"image\",\n    srcs = [\"image.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [],  # tensorflow dep\n)\n\npy_library(\n    name = \"page_processors\",\n    srcs = [\"page_processors.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/glyphs:glyph_types\",\n        \"//moonlight/glyphs:note_dots\",\n        \"//moonlight/glyphs:repeated\",\n        \"//moonlight/staves:staff_processor\",\n        \"//moonlight/structure:barlines\",\n        \"//moonlight/structure:beam_processor\",\n        \"//moonlight/structure:section_barlines\",\n        \"//moonlight/structure:stems\",\n    ],\n)\n\npy_library(\n    name = \"score_processors\",\n    srcs = [\"score_processors.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\"//moonlight/score:reader\"],\n)\n"
  },
  {
    "path": "moonlight/conversions/BUILD",
    "content": "# Description:\n# Format conversions for OMR.\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"conversions\",\n    srcs = [\"__init__.py\"],\n    deps = [\n        \":musicxml\",\n        \":notesequence\",\n    ],\n)\n\npy_library(\n    name = \"musicxml\",\n    srcs = [\"musicxml.py\"],\n    deps = [\n        # librosa dep\n        # lxml dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/score:measures\",\n    ],\n)\n\npy_test(\n    name = \"musicxml_test\",\n    srcs = [\"musicxml_test.py\"],\n    deps = [\n        \":musicxml\",\n        # absl/testing dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"@magenta//protobuf:music_py_pb2\",\n    ],\n)\n\npy_library(\n    name = \"notesequence\",\n    srcs = [\"notesequence.py\"],\n    deps = [\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"@magenta//protobuf:music_py_pb2\",\n    ],\n)\n"
  },
  {
    "path": "moonlight/conversions/__init__.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"OMR format conversions.\"\"\"\n# TODO(ringw): Score to MusicXML, preserving staves, etc.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.conversions.musicxml import score_to_musicxml\nfrom moonlight.conversions.notesequence import page_to_notesequence\nfrom moonlight.conversions.notesequence import score_to_notesequence\n"
  },
  {
    "path": "moonlight/conversions/musicxml.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Score to MusicXML conversion.\"\"\"\n# TODO(ringw): Key signature\n# TODO(ringw): Chords\n# TODO(ringw): Stems--MusicXML supports \"up\" or \"down\".\n# TODO(ringw): Accurate layout of pages, staves, and measures.\n# TODO(ringw): Barline types.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport copy\nimport re\n\nimport librosa\nfrom lxml import etree\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.score import measures\nfrom six import moves\n\nDOCTYPE = ('<!DOCTYPE score-partwise PUBLIC\\n'\n           '    \"-//Recordare//DTD MusicXML 3.0 Partwise//EN\"\\n'\n           '    \"http://www.musicxml.org/dtds/partwise.dtd\">\\n')\nMUSICXML_VERSION = '3.0'\n\n# Number of divisions (duration units) per quarter note.\nDIVISIONS = 1024\n\n# TODO(ringw): Detect the actual time signature.\nTIME_SIGNATURE = etree.Element('time', symbol='common')\netree.SubElement(TIME_SIGNATURE, 'beats').text = '4'\netree.SubElement(TIME_SIGNATURE, 'beat-type').text = '4'\n\n# Note types.\nHALF = 'half'\nWHOLE = 'whole'\n# Indexed by the number of beams on a filled note. 
Each beam halves the duration\n# of the note.\nFILLED = [\n    'quarter', 'eighth', '16th', '32nd', '64th', '128th', '256th', '512th',\n    '1024th'\n]\n\n# Maps the ASCII accidental names to the actual pitch alteration.\nACCIDENTAL_TO_ALTER = {'': 0, '#': 1, 'b': -1}\n\n\ndef score_to_musicxml(score):\n  \"\"\"Converts a `tensorflow.moonlight.Score` to MusicXML.\n\n  Args:\n    score: The OMR score.\n\n  Returns:\n    XML text.\n  \"\"\"\n  musicxml = MusicXMLScore(score)\n  measure_num = 0\n  previous_note_start_time = 0\n  previous_note_end_time = 0\n\n  for page in score.page:\n    for system in page.system:\n      system_measures = measures.Measures(system)\n      for system_measure_num in moves.xrange(system_measures.size()):\n        for staff_num, staff in enumerate(system.staff):\n          # Produce the measure, even if there are no glyphs.\n          measure = musicxml.get_measure(staff_num, measure_num)\n\n          for glyph in staff.glyph:\n            if system_measures.get_measure(glyph) == system_measure_num:\n              clef = _glyph_to_clef(glyph)\n              if clef is not None:\n                attributes = _get_attributes(measure)\n                if attributes.find('clef') is not None:\n                  attributes.remove(attributes.find('clef'))\n                attributes.append(clef)\n              note = _glyph_to_note(glyph)\n              if note is not None:\n                if (glyph.note.start_time == previous_note_start_time and\n                    glyph.note.end_time == previous_note_end_time):\n                  position = note.index(note.find('pitch'))\n                  chord = etree.Element('chord')\n                  note.insert(position, chord)\n                previous_note_start_time = glyph.note.start_time\n                previous_note_end_time = glyph.note.end_time\n                measure.append(note)\n        measure_num += 1\n  # Add <divisions> and <time> to each part.\n  for part in musicxml.score:\n    # 
XPath indexing is 1-based.\n    measure = part.find('measure[1]')\n    if measure is not None:\n      attributes = _get_attributes(measure, position=0)\n      etree.SubElement(attributes, 'divisions').text = str(DIVISIONS)\n      attributes.append(copy.deepcopy(TIME_SIGNATURE))\n  return musicxml.to_string()\n\n\n_TREBLE_CLEF = etree.Element('clef')\netree.SubElement(_TREBLE_CLEF, 'sign').text = 'G'\netree.SubElement(_TREBLE_CLEF, 'line').text = '2'\n_BASS_CLEF = etree.Element('clef')\netree.SubElement(_BASS_CLEF, 'sign').text = 'F'\netree.SubElement(_BASS_CLEF, 'line').text = '4'\n\n\ndef _get_attributes(measure, position=-1):\n  \"\"\"Gets or creates an `<attributes>` tag in the `<measure>` tag.\n\n  If the child of `measure` at the given `position` is not an `<attributes>`\n  tag, creates a new `<attributes>` tag, appends, and returns it.\n  `position == -1` will get or insert attributes at the end of the measure.\n\n  Args:\n    measure: A `<measure>` etree tag.\n    position: The index where the attributes should be found or inserted.\n\n  Returns:\n    An `<attributes>` etree tag.\n  \"\"\"\n  if len(measure) and measure[position].tag == 'attributes':\n    return measure[position]\n  else:\n    attributes = etree.Element('attributes')\n    measure.insert(position, attributes)\n    return attributes\n\n\ndef _glyph_to_clef(glyph):\n  \"\"\"Converts a `Glyph` to a `<clef>` tag.\n\n  Args:\n    glyph: A `tensorflow.moonlight.Glyph` message.\n\n  Returns:\n    An etree `<clef>` tag, or `None` if the glyph is not a clef.\n  \"\"\"\n  if glyph.type == musicscore_pb2.Glyph.CLEF_TREBLE:\n    return copy.deepcopy(_TREBLE_CLEF)\n  elif glyph.type == musicscore_pb2.Glyph.CLEF_BASS:\n    return copy.deepcopy(_BASS_CLEF)\n  else:\n    return None\n\n\ndef _glyph_to_note(glyph):\n  \"\"\"Converts a `Glyph` message to a `<note>` tag.\n\n  Args:\n    glyph: A `tensorflow.moonlight.Glyph` message. 
The glyph type should be one\n      of `NOTEHEAD_*`.\n\n  Returns:\n    An etree `<note>` tag, or `None` if the glyph is not a notehead.\n\n  Raises:\n    ValueError: If the note duration is not a multiple of `1 / DIVISIONS`.\n  \"\"\"\n  if not glyph.HasField('note'):\n    return None\n  note = etree.Element('note')\n  etree.SubElement(note, 'voice').text = '1'\n  if glyph.type == musicscore_pb2.Glyph.NOTEHEAD_EMPTY:\n    note_type = HALF\n  elif glyph.type == musicscore_pb2.Glyph.NOTEHEAD_WHOLE:\n    note_type = WHOLE\n  else:\n    # Clamp to the shortest note type that FILLED defines.\n    index = min(len(FILLED) - 1, len(glyph.beam))\n    note_type = FILLED[index]\n  etree.SubElement(note, 'type').text = note_type\n  duration = DIVISIONS * (glyph.note.end_time - glyph.note.start_time)\n  if not duration.is_integer():\n    raise ValueError('Duration is not an integer: ' + str(duration))\n  etree.SubElement(note, 'duration').text = str(int(duration))\n  pitch_match = re.match('([A-G])([#b]?)([0-9]+)',\n                         librosa.midi_to_note(glyph.note.pitch))\n  pitch = etree.SubElement(note, 'pitch')\n  etree.SubElement(pitch, 'step').text = pitch_match.group(1)\n  etree.SubElement(pitch, 'alter').text = str(\n      ACCIDENTAL_TO_ALTER[pitch_match.group(2)])\n  etree.SubElement(pitch, 'octave').text = pitch_match.group(3)\n  return note\n\n\nclass MusicXMLScore(object):\n  \"\"\"Manages the parts and measures of the MusicXML score.\n\n  Provides access to parts and measures by index, creating new parts and\n  measures as needed.\n  \"\"\"\n\n  def __init__(self, omr_score):\n    num_parts = max(\n        len(system.staff) for page in omr_score.page for system in page.system)\n    part_list = _create_part_list(num_parts)\n    self.score = etree.Element('score-partwise', version=MUSICXML_VERSION)\n    self.score.append(part_list)\n\n  def get_measure(self, part_ind, measure_ind):\n    while len(self.score.findall('part')) <= part_ind:\n      next_part_ind = len(self.score.findall('part'))\n
      etree.SubElement(self.score, 'part', id=_get_part_id(next_part_ind))\n    # XPath indexing is 1-based.\n    part = self.score.find('part[%d]' % (part_ind + 1))\n    while len(part.findall('measure')) <= measure_ind:\n      next_measure_ind = len(part.findall('measure'))\n      etree.SubElement(part, 'measure', number=str(next_measure_ind + 1))\n    return part.find('measure[%d]' % (measure_ind + 1))\n\n  def to_string(self):\n    return etree.tostring(\n        self.score.getroottree(),\n        pretty_print=True,\n        xml_declaration=True,\n        encoding='UTF-8',\n        doctype=DOCTYPE)\n\n\ndef _create_part_list(num_parts):\n  part_list = etree.Element('part-list')\n  for part_num in moves.xrange(1, num_parts + 1):\n    score_part = etree.SubElement(part_list, 'score-part')\n    score_part.set('id', 'P%d' % part_num)\n    etree.SubElement(score_part, 'part-name').text = 'Part %d' % part_num\n  return part_list\n\n\ndef _get_part_id(part_ind):\n  return 'P%d' % (part_ind + 1)\n"
  },
  {
    "path": "moonlight/conversions/musicxml_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for MusicXML output.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\n\nfrom protobuf import music_pb2\nfrom moonlight.conversions import musicxml\nfrom moonlight.protobuf import musicscore_pb2\n\n\nclass MusicXMLTest(absltest.TestCase):\n\n  def testSmallScore(self):\n    score = musicscore_pb2.Score(page=[\n        musicscore_pb2.Page(system=[\n            musicscore_pb2.StaffSystem(staff=[\n                musicscore_pb2.Staff(glyph=[\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=10,\n                        y_position=0,\n                        note=music_pb2.NoteSequence.Note(\n                            start_time=0, end_time=1, pitch=71)),\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_EMPTY,\n                        x=110,\n                        y_position=-6,\n                        note=music_pb2.NoteSequence.Note(\n                            start_time=1, end_time=2.5, pitch=61)),\n                ]),\n                musicscore_pb2.Staff(glyph=[\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_WHOLE,\n                        x=10,\n   
                     y_position=2,\n                        note=music_pb2.NoteSequence.Note(\n                            start_time=0, end_time=4, pitch=50)),\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        beam=[\n                            musicscore_pb2.LineSegment(),\n                            musicscore_pb2.LineSegment()\n                        ],\n                        x=110,\n                        y_position=-4,\n                        note=music_pb2.NoteSequence.Note(\n                            start_time=4, end_time=4.25, pitch=60)),\n                ]),\n                musicscore_pb2.Staff(glyph=[\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=10,\n                        y_position=2,\n                        note=music_pb2.NoteSequence.Note(\n                            start_time=0, end_time=4, pitch=50)),\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=10,\n                        y_position=-4,\n                        note=music_pb2.NoteSequence.Note(\n                            start_time=0, end_time=4, pitch=60)),\n                ]),\n            ]),\n        ]),\n    ])\n    self.assertEqual(\n        b\"\"\"<?xml version='1.0' encoding='UTF-8'?>\n<!DOCTYPE score-partwise PUBLIC\n    \"-//Recordare//DTD MusicXML 3.0 Partwise//EN\"\n    \"http://www.musicxml.org/dtds/partwise.dtd\">\n\n<score-partwise version=\"3.0\">\n  <part-list>\n    <score-part id=\"P1\">\n      <part-name>Part 1</part-name>\n    </score-part>\n    <score-part id=\"P2\">\n      <part-name>Part 2</part-name>\n    </score-part>\n    <score-part id=\"P3\">\n      <part-name>Part 3</part-name>\n    </score-part>\n  </part-list>\n  <part id=\"P1\">\n    <measure number=\"1\">\n      <attributes>\n     
   <divisions>1024</divisions>\n        <time symbol=\"common\">\n          <beats>4</beats>\n          <beat-type>4</beat-type>\n        </time>\n      </attributes>\n      <note>\n        <voice>1</voice>\n        <type>quarter</type>\n        <duration>1024</duration>\n        <pitch>\n          <step>B</step>\n          <alter>0</alter>\n          <octave>4</octave>\n        </pitch>\n      </note>\n      <note>\n        <voice>1</voice>\n        <type>half</type>\n        <duration>1536</duration>\n        <pitch>\n          <step>C</step>\n          <alter>1</alter>\n          <octave>4</octave>\n        </pitch>\n      </note>\n    </measure>\n  </part>\n  <part id=\"P2\">\n    <measure number=\"1\">\n      <attributes>\n        <divisions>1024</divisions>\n        <time symbol=\"common\">\n          <beats>4</beats>\n          <beat-type>4</beat-type>\n        </time>\n      </attributes>\n      <note>\n        <voice>1</voice>\n        <type>whole</type>\n        <duration>4096</duration>\n        <pitch>\n          <step>D</step>\n          <alter>0</alter>\n          <octave>3</octave>\n        </pitch>\n      </note>\n      <note>\n        <voice>1</voice>\n        <type>16th</type>\n        <duration>256</duration>\n        <pitch>\n          <step>C</step>\n          <alter>0</alter>\n          <octave>4</octave>\n        </pitch>\n      </note>\n    </measure>\n  </part>\n  <part id=\"P3\">\n    <measure number=\"1\">\n      <attributes>\n        <divisions>1024</divisions>\n        <time symbol=\"common\">\n          <beats>4</beats>\n          <beat-type>4</beat-type>\n        </time>\n      </attributes>\n      <note>\n        <voice>1</voice>\n        <type>quarter</type>\n        <duration>4096</duration>\n        <pitch>\n          <step>D</step>\n          <alter>0</alter>\n          <octave>3</octave>\n        </pitch>\n      </note>\n      <note>\n        <voice>1</voice>\n        <type>quarter</type>\n        <duration>4096</duration>\n     
   <chord/>\n        <pitch>\n          <step>C</step>\n          <alter>0</alter>\n          <octave>4</octave>\n        </pitch>\n      </note>\n    </measure>\n  </part>\n</score-partwise>\n\"\"\", musicxml.score_to_musicxml(score))\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/conversions/notesequence.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Converts an OMR `Score` to a `NoteSequence`.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom protobuf import music_pb2\n\nfrom moonlight.protobuf import musicscore_pb2\n\n\ndef score_to_notesequence(score):\n  \"\"\"Score to NoteSequence conversion.\n\n  Args:\n    score: A `tensorflow.moonlight.Score` message.\n\n  Returns:\n    A `tensorflow.magenta.NoteSequence` message containing the notes in the\n    score.\n  \"\"\"\n  return music_pb2.NoteSequence(notes=list(_score_notes(score)))\n\n\ndef page_to_notesequence(page):\n  return score_to_notesequence(musicscore_pb2.Score(page=[page]))\n\n\ndef _score_notes(score):\n  for page in score.page:\n    for system in page.system:\n      for staff in system.staff:\n        for glyph in staff.glyph:\n          if glyph.HasField('note'):\n            yield glyph.note\n"
  },
  {
    "path": "moonlight/data/README.md",
    "content": "Description:\n\nBuilt-in models required for running OMR.\n"
  },
  {
    "path": "moonlight/data/glyphs_nn_model_20180808/BUILD",
    "content": "package(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\nfilegroup(\n    name = \"glyphs_nn_model_20180808\",\n    srcs = glob([\"**/*\"]),\n)\n"
  },
  {
    "path": "moonlight/data/glyphs_nn_model_20180808/saved_model.pbtxt",
    "content": "saved_model_schema_version: 1\nmeta_graphs {\n  meta_info_def {\n    stripped_op_list {\n      op {\n        name: \"Add\"\n        input_arg {\n          name: \"x\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"y\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"z\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_HALF\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_UINT8\n              type: DT_INT8\n              type: DT_INT16\n              type: DT_INT32\n              type: DT_INT64\n              type: DT_COMPLEX64\n              type: DT_COMPLEX128\n              type: DT_STRING\n            }\n          }\n        }\n      }\n      op {\n        name: \"ArgMax\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"dimension\"\n          type_attr: \"Tidx\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"output_type\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_INT32\n              type: DT_UINT8\n              type: DT_INT16\n              type: DT_INT8\n              type: DT_COMPLEX64\n              type: DT_INT64\n              type: DT_QINT8\n              type: DT_QUINT8\n              type: DT_QINT32\n              type: DT_BFLOAT16\n              type: DT_UINT16\n              type: DT_COMPLEX128\n              type: DT_HALF\n              type: DT_UINT32\n              type: DT_UINT64\n            }\n          }\n        }\n        attr {\n          name: \"Tidx\"\n          type: \"type\"\n          
default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n        attr {\n          name: \"output_type\"\n          type: \"type\"\n          default_value {\n            type: DT_INT64\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"AsString\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"output\"\n          type: DT_STRING\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_INT8\n              type: DT_INT16\n              type: DT_INT32\n              type: DT_INT64\n              type: DT_COMPLEX64\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_BOOL\n            }\n          }\n        }\n        attr {\n          name: \"precision\"\n          type: \"int\"\n          default_value {\n            i: -1\n          }\n        }\n        attr {\n          name: \"scientific\"\n          type: \"bool\"\n          default_value {\n            b: false\n          }\n        }\n        attr {\n          name: \"shortest\"\n          type: \"bool\"\n          default_value {\n            b: false\n          }\n        }\n        attr {\n          name: \"width\"\n          type: \"int\"\n          default_value {\n            i: -1\n          }\n        }\n        attr {\n          name: \"fill\"\n          type: \"string\"\n          default_value {\n            s: \"\"\n          }\n        }\n      }\n      op {\n        name: \"Assign\"\n        input_arg {\n          name: \"ref\"\n          type_attr: \"T\"\n          is_ref: true\n        }\n        
input_arg {\n          name: \"value\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"output_ref\"\n          type_attr: \"T\"\n          is_ref: true\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"validate_shape\"\n          type: \"bool\"\n          default_value {\n            b: true\n          }\n        }\n        attr {\n          name: \"use_locking\"\n          type: \"bool\"\n          default_value {\n            b: true\n          }\n        }\n        allows_uninitialized_input: true\n      }\n      op {\n        name: \"BiasAdd\"\n        input_arg {\n          name: \"value\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"bias\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_INT32\n              type: DT_UINT8\n              type: DT_INT16\n              type: DT_INT8\n              type: DT_COMPLEX64\n              type: DT_INT64\n              type: DT_QINT8\n              type: DT_QUINT8\n              type: DT_QINT32\n              type: DT_BFLOAT16\n              type: DT_UINT16\n              type: DT_COMPLEX128\n              type: DT_HALF\n              type: DT_UINT32\n              type: DT_UINT64\n            }\n          }\n        }\n        attr {\n          name: \"data_format\"\n          type: \"string\"\n          default_value {\n            s: \"NHWC\"\n          }\n          allowed_values {\n            list {\n              s: \"NHWC\"\n              s: \"NCHW\"\n            }\n          }\n        }\n      }\n      op {\n        name: \"Cast\"\n        input_arg {\n          name: \"x\"\n          type_attr: 
\"SrcT\"\n        }\n        output_arg {\n          name: \"y\"\n          type_attr: \"DstT\"\n        }\n        attr {\n          name: \"SrcT\"\n          type: \"type\"\n        }\n        attr {\n          name: \"DstT\"\n          type: \"type\"\n        }\n        attr {\n          name: \"Truncate\"\n          type: \"bool\"\n          default_value {\n            b: false\n          }\n        }\n      }\n      op {\n        name: \"Const\"\n        output_arg {\n          name: \"output\"\n          type_attr: \"dtype\"\n        }\n        attr {\n          name: \"value\"\n          type: \"tensor\"\n        }\n        attr {\n          name: \"dtype\"\n          type: \"type\"\n        }\n      }\n      op {\n        name: \"Equal\"\n        input_arg {\n          name: \"x\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"y\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"z\"\n          type: DT_BOOL\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_HALF\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_UINT8\n              type: DT_INT8\n              type: DT_INT16\n              type: DT_INT32\n              type: DT_INT64\n              type: DT_COMPLEX64\n              type: DT_QUINT8\n              type: DT_QINT8\n              type: DT_QINT32\n              type: DT_STRING\n              type: DT_BOOL\n              type: DT_COMPLEX128\n            }\n          }\n        }\n        is_commutative: true\n      }\n      op {\n        name: \"ExpandDims\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"dim\"\n          type_attr: \"Tdim\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n     
   attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"Tdim\"\n          type: \"type\"\n          default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"HistogramSummary\"\n        input_arg {\n          name: \"tag\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"values\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"summary\"\n          type: DT_STRING\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          default_value {\n            type: DT_FLOAT\n          }\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_INT32\n              type: DT_UINT8\n              type: DT_INT16\n              type: DT_INT8\n              type: DT_INT64\n              type: DT_BFLOAT16\n              type: DT_UINT16\n              type: DT_HALF\n              type: DT_UINT32\n              type: DT_UINT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"Identity\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n      }\n      op {\n        name: \"MatMul\"\n        input_arg {\n          name: \"a\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"b\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"product\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"transpose_a\"\n          type: \"bool\"\n          default_value {\n            b: false\n          }\n  
      }\n        attr {\n          name: \"transpose_b\"\n          type: \"bool\"\n          default_value {\n            b: false\n          }\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_HALF\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_INT32\n              type: DT_COMPLEX64\n              type: DT_COMPLEX128\n            }\n          }\n        }\n      }\n      op {\n        name: \"Mean\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"reduction_indices\"\n          type_attr: \"Tidx\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"keep_dims\"\n          type: \"bool\"\n          default_value {\n            b: false\n          }\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_INT32\n              type: DT_UINT8\n              type: DT_INT16\n              type: DT_INT8\n              type: DT_COMPLEX64\n              type: DT_INT64\n              type: DT_QINT8\n              type: DT_QUINT8\n              type: DT_QINT32\n              type: DT_BFLOAT16\n              type: DT_UINT16\n              type: DT_COMPLEX128\n              type: DT_HALF\n              type: DT_UINT32\n              type: DT_UINT64\n            }\n          }\n        }\n        attr {\n          name: \"Tidx\"\n          type: \"type\"\n          default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: 
\"MergeV2Checkpoints\"\n        input_arg {\n          name: \"checkpoint_prefixes\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"destination_prefix\"\n          type: DT_STRING\n        }\n        attr {\n          name: \"delete_old_dirs\"\n          type: \"bool\"\n          default_value {\n            b: true\n          }\n        }\n        is_stateful: true\n      }\n      op {\n        name: \"Mul\"\n        input_arg {\n          name: \"x\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"y\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"z\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_HALF\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_UINT8\n              type: DT_INT8\n              type: DT_UINT16\n              type: DT_INT16\n              type: DT_INT32\n              type: DT_INT64\n              type: DT_COMPLEX64\n              type: DT_COMPLEX128\n            }\n          }\n        }\n        is_commutative: true\n      }\n      op {\n        name: \"NoOp\"\n      }\n      op {\n        name: \"Pack\"\n        input_arg {\n          name: \"values\"\n          type_attr: \"T\"\n          number_attr: \"N\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"N\"\n          type: \"int\"\n          has_minimum: true\n          minimum: 1\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"axis\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n      }\n      op {\n        name: \"ParseExample\"\n        input_arg {\n          name: \"serialized\"\n          
type: DT_STRING\n        }\n        input_arg {\n          name: \"names\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"sparse_keys\"\n          type: DT_STRING\n          number_attr: \"Nsparse\"\n        }\n        input_arg {\n          name: \"dense_keys\"\n          type: DT_STRING\n          number_attr: \"Ndense\"\n        }\n        input_arg {\n          name: \"dense_defaults\"\n          type_list_attr: \"Tdense\"\n        }\n        output_arg {\n          name: \"sparse_indices\"\n          type: DT_INT64\n          number_attr: \"Nsparse\"\n        }\n        output_arg {\n          name: \"sparse_values\"\n          type_list_attr: \"sparse_types\"\n        }\n        output_arg {\n          name: \"sparse_shapes\"\n          type: DT_INT64\n          number_attr: \"Nsparse\"\n        }\n        output_arg {\n          name: \"dense_values\"\n          type_list_attr: \"Tdense\"\n        }\n        attr {\n          name: \"Nsparse\"\n          type: \"int\"\n          has_minimum: true\n        }\n        attr {\n          name: \"Ndense\"\n          type: \"int\"\n          has_minimum: true\n        }\n        attr {\n          name: \"sparse_types\"\n          type: \"list(type)\"\n          has_minimum: true\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_INT64\n              type: DT_STRING\n            }\n          }\n        }\n        attr {\n          name: \"Tdense\"\n          type: \"list(type)\"\n          has_minimum: true\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_INT64\n              type: DT_STRING\n            }\n          }\n        }\n        attr {\n          name: \"dense_shapes\"\n          type: \"list(shape)\"\n          has_minimum: true\n        }\n      }\n      op {\n        name: \"Placeholder\"\n        output_arg {\n          name: \"output\"\n          type_attr: 
\"dtype\"\n        }\n        attr {\n          name: \"dtype\"\n          type: \"type\"\n        }\n        attr {\n          name: \"shape\"\n          type: \"shape\"\n          default_value {\n            shape {\n              unknown_rank: true\n            }\n          }\n        }\n      }\n      op {\n        name: \"RandomUniform\"\n        input_arg {\n          name: \"shape\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"dtype\"\n        }\n        attr {\n          name: \"seed\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n        attr {\n          name: \"seed2\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n        attr {\n          name: \"dtype\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_HALF\n              type: DT_BFLOAT16\n              type: DT_FLOAT\n              type: DT_DOUBLE\n            }\n          }\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n        is_stateful: true\n      }\n      op {\n        name: \"Range\"\n        input_arg {\n          name: \"start\"\n          type_attr: \"Tidx\"\n        }\n        input_arg {\n          name: \"limit\"\n          type_attr: \"Tidx\"\n        }\n        input_arg {\n          name: \"delta\"\n          type_attr: \"Tidx\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"Tidx\"\n        }\n        attr {\n          name: \"Tidx\"\n          type: \"type\"\n          default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_FLOAT\n              type: 
DT_DOUBLE\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"Reshape\"\n        input_arg {\n          name: \"tensor\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"shape\"\n          type_attr: \"Tshape\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"Tshape\"\n          type: \"type\"\n          default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"RestoreV2\"\n        input_arg {\n          name: \"prefix\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"tensor_names\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"shape_and_slices\"\n          type: DT_STRING\n        }\n        output_arg {\n          name: \"tensors\"\n          type_list_attr: \"dtypes\"\n        }\n        attr {\n          name: \"dtypes\"\n          type: \"list(type)\"\n          has_minimum: true\n          minimum: 1\n        }\n        is_stateful: true\n      }\n      op {\n        name: \"SaveV2\"\n        input_arg {\n          name: \"prefix\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"tensor_names\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"shape_and_slices\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"tensors\"\n          type_list_attr: \"dtypes\"\n        }\n        attr {\n          name: \"dtypes\"\n          type: \"list(type)\"\n          has_minimum: true\n          minimum: 1\n        }\n        is_stateful: true\n      }\n      op {\n        
name: \"ScalarSummary\"\n        input_arg {\n          name: \"tags\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"values\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"summary\"\n          type: DT_STRING\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_INT32\n              type: DT_UINT8\n              type: DT_INT16\n              type: DT_INT8\n              type: DT_INT64\n              type: DT_BFLOAT16\n              type: DT_UINT16\n              type: DT_HALF\n              type: DT_UINT32\n              type: DT_UINT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"Shape\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"out_type\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"out_type\"\n          type: \"type\"\n          default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"ShardedFilename\"\n        input_arg {\n          name: \"basename\"\n          type: DT_STRING\n        }\n        input_arg {\n          name: \"shard\"\n          type: DT_INT32\n        }\n        input_arg {\n          name: \"num_shards\"\n          type: DT_INT32\n        }\n        output_arg {\n          name: \"filename\"\n          type: DT_STRING\n        }\n      }\n      op {\n        name: \"Sigmoid\"\n        input_arg {\n          name: \"x\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"y\"\n          
type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_HALF\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_COMPLEX64\n              type: DT_COMPLEX128\n            }\n          }\n        }\n      }\n      op {\n        name: \"Softmax\"\n        input_arg {\n          name: \"logits\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"softmax\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_HALF\n              type: DT_BFLOAT16\n              type: DT_FLOAT\n              type: DT_DOUBLE\n            }\n          }\n        }\n      }\n      op {\n        name: \"StridedSlice\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"begin\"\n          type_attr: \"Index\"\n        }\n        input_arg {\n          name: \"end\"\n          type_attr: \"Index\"\n        }\n        input_arg {\n          name: \"strides\"\n          type_attr: \"Index\"\n        }\n        output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"Index\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n        attr {\n          name: \"begin_mask\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n        attr {\n          name: \"end_mask\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n        attr {\n          name: 
\"ellipsis_mask\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n        attr {\n          name: \"new_axis_mask\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n        attr {\n          name: \"shrink_axis_mask\"\n          type: \"int\"\n          default_value {\n            i: 0\n          }\n        }\n      }\n      op {\n        name: \"StringJoin\"\n        input_arg {\n          name: \"inputs\"\n          type: DT_STRING\n          number_attr: \"N\"\n        }\n        output_arg {\n          name: \"output\"\n          type: DT_STRING\n        }\n        attr {\n          name: \"N\"\n          type: \"int\"\n          has_minimum: true\n          minimum: 1\n        }\n        attr {\n          name: \"separator\"\n          type: \"string\"\n          default_value {\n            s: \"\"\n          }\n        }\n      }\n      op {\n        name: \"Sub\"\n        input_arg {\n          name: \"x\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"y\"\n          type_attr: \"T\"\n        }\n        output_arg {\n          name: \"z\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n          allowed_values {\n            list {\n              type: DT_BFLOAT16\n              type: DT_HALF\n              type: DT_FLOAT\n              type: DT_DOUBLE\n              type: DT_UINT8\n              type: DT_INT8\n              type: DT_UINT16\n              type: DT_INT16\n              type: DT_INT32\n              type: DT_INT64\n              type: DT_COMPLEX64\n              type: DT_COMPLEX128\n            }\n          }\n        }\n      }\n      op {\n        name: \"Tile\"\n        input_arg {\n          name: \"input\"\n          type_attr: \"T\"\n        }\n        input_arg {\n          name: \"multiples\"\n          type_attr: \"Tmultiples\"\n        }\n        
output_arg {\n          name: \"output\"\n          type_attr: \"T\"\n        }\n        attr {\n          name: \"T\"\n          type: \"type\"\n        }\n        attr {\n          name: \"Tmultiples\"\n          type: \"type\"\n          default_value {\n            type: DT_INT32\n          }\n          allowed_values {\n            list {\n              type: DT_INT32\n              type: DT_INT64\n            }\n          }\n        }\n      }\n      op {\n        name: \"VariableV2\"\n        output_arg {\n          name: \"ref\"\n          type_attr: \"dtype\"\n          is_ref: true\n        }\n        attr {\n          name: \"shape\"\n          type: \"shape\"\n        }\n        attr {\n          name: \"dtype\"\n          type: \"type\"\n        }\n        attr {\n          name: \"container\"\n          type: \"string\"\n          default_value {\n            s: \"\"\n          }\n        }\n        attr {\n          name: \"shared_name\"\n          type: \"string\"\n          default_value {\n            s: \"\"\n          }\n        }\n        is_stateful: true\n      }\n    }\n    tags: \"serve\"\n    tensorflow_version: \"1.10.0-rc1\"\n    tensorflow_git_version: \"unknown\"\n    stripped_default_attrs: true\n  }\n  graph_def {\n    node {\n      name: \"global_step/Initializer/zeros\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@global_step\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT64\n            tensor_shape {\n            }\n            int64_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"global_step\"\n   
   op: \"VariableV2\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@global_step\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"shape\"\n        value {\n          shape {\n          }\n        }\n      }\n    }\n    node {\n      name: \"global_step/Assign\"\n      op: \"Assign\"\n      input: \"global_step\"\n      input: \"global_step/Initializer/zeros\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@global_step\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"global_step/read\"\n      op: \"Identity\"\n      input: \"global_step\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@global_step\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"Placeholder\"\n      op: \"Placeholder\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: 
\"shape\"\n        value {\n          shape {\n            dim {\n              size: -1\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"ParseExample/Const\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n              dim {\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"ParseExample/ParseExample/names\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n              dim {\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"ParseExample/ParseExample/dense_keys_0\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"patch\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"ParseExample/ParseExample\"\n      op: \"ParseExample\"\n      input: 
\"Placeholder\"\n      input: \"ParseExample/ParseExample/names\"\n      input: \"ParseExample/ParseExample/dense_keys_0\"\n      input: \"ParseExample/Const\"\n      attr {\n        key: \"Ndense\"\n        value {\n          i: 1\n        }\n      }\n      attr {\n        key: \"Nsparse\"\n        value {\n          i: 0\n        }\n      }\n      attr {\n        key: \"Tdense\"\n        value {\n          list {\n            type: DT_FLOAT\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 15\n              }\n              dim {\n                size: 12\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dense_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 15\n              }\n              dim {\n                size: 12\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"sparse_types\"\n        value {\n          list {\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/l1_regularization_strength\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/augmentation_max_rotation_degrees\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        
}\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/label_weights\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"NONE=0.25,FLAT=5.0,NATURAL=5.0,CLEF_TREBLE=5.0,CLEF_BASS=5.0\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/dropout\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/learning_rate\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0.01\n          }\n        }\n      }\n    }\n   
 node {\n      name: \"params/use_included_label_weight\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_BOOL\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_BOOL\n            tensor_shape {\n            }\n            bool_val: true\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/activation_fn\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"sigmoid\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/augmentation_x_shift_probability\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0.25\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/layer_dims\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          
type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 150\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/model_name\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"glyphs_dnn\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"params/l2_regularization_strength\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/Shape\"\n      op: \"Shape\"\n      input: \"ParseExample/ParseExample\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 3\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice/stack\"\n      op: \"Const\"\n    
  attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice/stack_1\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice/stack_2\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 1\n          }\n        }\n      }\n  
  }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice\"\n      op: \"StridedSlice\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/Shape\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice/stack\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice/stack_1\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice/stack_2\"\n      attr {\n        key: \"Index\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"shrink_axis_mask\"\n        value {\n          i: 1\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/Reshape/shape/1\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 180\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/Reshape/shape\"\n      op: \"Pack\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/strided_slice\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/Reshape/shape/1\"\n      attr {\n        key: \"N\"\n        value {\n          i: 2\n        }\n      }\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: 
\"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/patch/Reshape\"\n      op: \"Reshape\"\n      input: \"ParseExample/ParseExample\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/Reshape/shape\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 180\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/concat/concat_dim\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/input_from_feature_columns/input_layer/concat\"\n      op: \"Identity\"\n      input: \"dnn/input_from_feature_columns/input_layer/patch/Reshape\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 180\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/shape\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 2\n              }\n            }\n            tensor_content: \"\\264\\000\\000\\000\\226\\000\\000\\000\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/min\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: -0.13483997\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/max\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          
list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0.13483997\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/RandomUniform\"\n      op: \"RandomUniform\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/shape\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/sub\"\n      op: \"Sub\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/max\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/min\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/mul\"\n      op: \"Mul\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/RandomUniform\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/sub\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform\"\n      op: \"Add\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/mul\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform/min\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0\"\n      op: \"VariableV2\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim 
{\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"shape\"\n        value {\n          shape {\n            dim {\n              size: 180\n            }\n            dim {\n              size: 150\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/Assign\"\n      op: \"Assign\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/read\"\n      op: \"Identity\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/hiddenlayer_0/bias/part_0/Initializer/zeros\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n              dim {\n                size: 150\n              }\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/bias/part_0\"\n      op: \"VariableV2\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"shape\"\n        value {\n          shape {\n            dim {\n              size: 150\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/bias/part_0/Assign\"\n      op: \"Assign\"\n      input: \"dnn/hiddenlayer_0/bias/part_0\"\n      input: \"dnn/hiddenlayer_0/bias/part_0/Initializer/zeros\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/bias/part_0\"\n    
      }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/bias/part_0/read\"\n      op: \"Identity\"\n      input: \"dnn/hiddenlayer_0/bias/part_0\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel\"\n      op: \"Identity\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/read\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/MatMul\"\n      op: \"MatMul\"\n      input: \"dnn/input_from_feature_columns/input_layer/concat\"\n      input: \"dnn/hiddenlayer_0/kernel\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/hiddenlayer_0/bias\"\n      op: \"Identity\"\n      input: \"dnn/hiddenlayer_0/bias/part_0/read\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/BiasAdd\"\n      op: \"BiasAdd\"\n      input: \"dnn/hiddenlayer_0/MatMul\"\n      input: \"dnn/hiddenlayer_0/bias\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/Sigmoid\"\n      op: \"Sigmoid\"\n      input: \"dnn/hiddenlayer_0/BiasAdd\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction/zero\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n 
     }\n    }\n    node {\n      name: \"dnn/zero_fraction/Equal\"\n      op: \"Equal\"\n      input: \"dnn/hiddenlayer_0/Sigmoid\"\n      input: \"dnn/zero_fraction/zero\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction/Cast\"\n      op: \"Cast\"\n      input: \"dnn/zero_fraction/Equal\"\n      attr {\n        key: \"DstT\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"SrcT\"\n        value {\n          type: DT_BOOL\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction/Const\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 2\n              }\n            }\n            tensor_content: \"\\000\\000\\000\\000\\001\\000\\000\\000\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction/Mean\"\n      op: \"Mean\"\n      input: \"dnn/zero_fraction/Cast\"\n      input: 
\"dnn/zero_fraction/Const\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/hiddenlayer_0/fraction_of_zero_values/tags\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"dnn/dnn/hiddenlayer_0/fraction_of_zero_values\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/hiddenlayer_0/fraction_of_zero_values\"\n      op: \"ScalarSummary\"\n      input: \"dnn/dnn/hiddenlayer_0/fraction_of_zero_values/tags\"\n      input: \"dnn/zero_fraction/Mean\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/hiddenlayer_0/activation/tag\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"dnn/dnn/hiddenlayer_0/activation\"\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/dnn/hiddenlayer_0/activation\"\n      op: \"HistogramSummary\"\n      input: \"dnn/dnn/hiddenlayer_0/activation/tag\"\n      input: \"dnn/hiddenlayer_0/Sigmoid\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Initializer/random_uniform/shape\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 2\n              }\n            }\n            tensor_content: \"\\226\\000\\000\\000\\024\\000\\000\\000\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Initializer/random_uniform/min\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: -0.18786728\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/logits/kernel/part_0/Initializer/random_uniform/max\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0.18786728\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Initializer/random_uniform/RandomUniform\"\n      op: \"RandomUniform\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/shape\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Initializer/random_uniform/sub\"\n      op: \"Sub\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/max\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/min\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: 
\"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Initializer/random_uniform/mul\"\n      op: \"Mul\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/RandomUniform\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/sub\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Initializer/random_uniform\"\n      op: \"Add\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/mul\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform/min\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0\"\n      op: \"VariableV2\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: 
\"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"shape\"\n        value {\n          shape {\n            dim {\n              size: 150\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/Assign\"\n      op: \"Assign\"\n      input: \"dnn/logits/kernel/part_0\"\n      input: \"dnn/logits/kernel/part_0/Initializer/random_uniform\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/read\"\n      op: \"Identity\"\n      input: \"dnn/logits/kernel/part_0\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n              
  size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/bias/part_0/Initializer/zeros\"\n      op: \"Const\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n              dim {\n                size: 20\n              }\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/bias/part_0\"\n      op: \"VariableV2\"\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"shape\"\n        value {\n          shape {\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/bias/part_0/Assign\"\n      op: \"Assign\"\n      input: \"dnn/logits/bias/part_0\"\n      input: \"dnn/logits/bias/part_0/Initializer/zeros\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n    
        s: \"loc:@dnn/logits/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/bias/part_0/read\"\n      op: \"Identity\"\n      input: \"dnn/logits/bias/part_0\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel\"\n      op: \"Identity\"\n      input: \"dnn/logits/kernel/part_0/read\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/MatMul\"\n      op: \"MatMul\"\n      input: \"dnn/hiddenlayer_0/Sigmoid\"\n      input: \"dnn/logits/kernel\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/bias\"\n      op: \"Identity\"\n 
     input: \"dnn/logits/bias/part_0/read\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/BiasAdd\"\n      op: \"BiasAdd\"\n      input: \"dnn/logits/MatMul\"\n      input: \"dnn/logits/bias\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction_1/zero\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction_1/Equal\"\n      op: \"Equal\"\n      input: \"dnn/logits/BiasAdd\"\n      input: \"dnn/zero_fraction_1/zero\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/zero_fraction_1/Cast\"\n      op: \"Cast\"\n      input: \"dnn/zero_fraction_1/Equal\"\n      attr {\n        key: \"DstT\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"SrcT\"\n        value {\n          type: DT_BOOL\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction_1/Const\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 2\n              }\n            }\n            tensor_content: \"\\000\\000\\000\\000\\001\\000\\000\\000\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/zero_fraction_1/Mean\"\n      op: \"Mean\"\n      input: \"dnn/zero_fraction_1/Cast\"\n      input: \"dnn/zero_fraction_1/Const\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/logits/fraction_of_zero_values/tags\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n       
 value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"dnn/dnn/logits/fraction_of_zero_values\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/logits/fraction_of_zero_values\"\n      op: \"ScalarSummary\"\n      input: \"dnn/dnn/logits/fraction_of_zero_values/tags\"\n      input: \"dnn/zero_fraction_1/Mean\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/logits/activation/tag\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"dnn/dnn/logits/activation\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/dnn/logits/activation\"\n      op: \"HistogramSummary\"\n      input: \"dnn/dnn/logits/activation/tag\"\n      input: \"dnn/logits/BiasAdd\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/logits/Shape\"\n      op: \"Shape\"\n      input: \"dnn/logits/BiasAdd\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              
dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/logits/assert_rank_at_least/rank\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 2\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/logits/assert_rank_at_least/assert_type/statically_determined_correct_type\"\n      op: \"NoOp\"\n    }\n    node {\n      name: \"dnn/head/logits/assert_rank_at_least/static_checks_determined_all_ok\"\n      op: \"NoOp\"\n    }\n    node {\n      name: \"dnn/head/predictions/class_ids/dimension\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: -1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/predictions/class_ids\"\n      op: \"ArgMax\"\n      input: \"dnn/logits/BiasAdd\"\n      input: \"dnn/head/predictions/class_ids/dimension\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: 
\"dnn/head/predictions/ExpandDims/dim\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: -1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/predictions/ExpandDims\"\n      op: \"ExpandDims\"\n      input: \"dnn/head/predictions/class_ids\"\n      input: \"dnn/head/predictions/ExpandDims/dim\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/predictions/str_classes\"\n      op: \"AsString\"\n      input: \"dnn/head/predictions/ExpandDims\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/predictions/probabilities\"\n      op: \"Softmax\"\n      input: \"dnn/logits/BiasAdd\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n             
 }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/Shape\"\n      op: \"Shape\"\n      input: \"dnn/head/predictions/probabilities\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/strided_slice/stack\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/strided_slice/stack_1\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/strided_slice/stack_2\"\n      op: \"Const\"\n      attr {\n        
key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 1\n              }\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/strided_slice\"\n      op: \"StridedSlice\"\n      input: \"dnn/head/Shape\"\n      input: \"dnn/head/strided_slice/stack\"\n      input: \"dnn/head/strided_slice/stack_1\"\n      input: \"dnn/head/strided_slice/stack_2\"\n      attr {\n        key: \"Index\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"shrink_axis_mask\"\n        value {\n          i: 1\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/range/start\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/range/limit\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n         
   }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 20\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/range/delta\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/range\"\n      op: \"Range\"\n      input: \"dnn/head/range/start\"\n      input: \"dnn/head/range/limit\"\n      input: \"dnn/head/range/delta\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/AsString\"\n      op: \"AsString\"\n      input: \"dnn/head/range\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/ExpandDims/dim\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n       
   type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/ExpandDims\"\n      op: \"ExpandDims\"\n      input: \"dnn/head/AsString\"\n      input: \"dnn/head/ExpandDims/dim\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/Tile/multiples/1\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/Tile/multiples\"\n      op: \"Pack\"\n      input: \"dnn/head/strided_slice\"\n      input: \"dnn/head/Tile/multiples/1\"\n      attr {\n        key: \"N\"\n        value {\n          i: 2\n        }\n      }\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/head/Tile\"\n      op: \"Tile\"\n      input: \"dnn/head/ExpandDims\"\n      
input: \"dnn/head/Tile/multiples\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: -1\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction/zero\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction/Equal\"\n      op: \"Equal\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/read\"\n      input: \"zero_fraction/zero\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction/Cast\"\n      op: \"Cast\"\n      input: \"zero_fraction/Equal\"\n      attr {\n        key: \"DstT\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"SrcT\"\n        value {\n          type: DT_BOOL\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n     
           size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction/Const\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 2\n              }\n            }\n            tensor_content: \"\\000\\000\\000\\000\\001\\000\\000\\000\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction/Mean\"\n      op: \"Mean\"\n      input: \"zero_fraction/Cast\"\n      input: \"zero_fraction/Const\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/fraction_of_zero_weights/tags\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"dnn/hiddenlayer_0/kernel/part_0/fraction_of_zero_weights\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/hiddenlayer_0/kernel/part_0/fraction_of_zero_weights\"\n      op: \"ScalarSummary\"\n      input: 
\"dnn/hiddenlayer_0/kernel/part_0/fraction_of_zero_weights/tags\"\n      input: \"zero_fraction/Mean\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction_1/zero\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_FLOAT\n            tensor_shape {\n            }\n            float_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction_1/Equal\"\n      op: \"Equal\"\n      input: \"dnn/logits/kernel/part_0/read\"\n      input: \"zero_fraction_1/zero\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction_1/Cast\"\n      op: \"Cast\"\n      input: \"zero_fraction_1/Equal\"\n      attr {\n        key: \"DstT\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"SrcT\"\n        value {\n          type: DT_BOOL\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n      
    }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction_1/Const\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 2\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n              dim {\n                size: 2\n              }\n            }\n            tensor_content: \"\\000\\000\\000\\000\\001\\000\\000\\000\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"zero_fraction_1/Mean\"\n      op: \"Mean\"\n      input: \"zero_fraction_1/Cast\"\n      input: \"zero_fraction_1/Const\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/fraction_of_zero_weights/tags\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"dnn/logits/kernel/part_0/fraction_of_zero_weights\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"dnn/logits/kernel/part_0/fraction_of_zero_weights\"\n      op: \"ScalarSummary\"\n      input: \"dnn/logits/kernel/part_0/fraction_of_zero_weights/tags\"\n      input: \"zero_fraction_1/Mean\"\n      
attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"init\"\n      op: \"NoOp\"\n    }\n    node {\n      name: \"init_all_tables\"\n      op: \"NoOp\"\n    }\n    node {\n      name: \"init_1\"\n      op: \"NoOp\"\n    }\n    node {\n      name: \"group_deps\"\n      op: \"NoOp\"\n      input: \"^init\"\n      input: \"^init_1\"\n      input: \"^init_all_tables\"\n    }\n    node {\n      name: \"save/Const\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"model\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/StringJoin/inputs_1\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n            }\n            string_val: \"_temp_579b5e65d43d45f3bd142ae66888bbab/part\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/StringJoin\"\n      op: \"StringJoin\"\n      input: \"save/Const\"\n      input: \"save/StringJoin/inputs_1\"\n      attr {\n        key: \"N\"\n        value {\n          i: 2\n        }\n      }\n      attr {\n        key: 
\"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/num_shards\"\n      op: \"Const\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 1\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/ShardedFilename/shard\"\n      op: \"Const\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_INT32\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_INT32\n            tensor_shape {\n            }\n            int_val: 0\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/ShardedFilename\"\n      op: \"ShardedFilename\"\n      input: \"save/StringJoin\"\n      input: \"save/ShardedFilename/shard\"\n      input: \"save/num_shards\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/SaveV2/tensor_names\"\n      op: \"Const\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 5\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n     
     type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n              dim {\n                size: 5\n              }\n            }\n            string_val: \"dnn/hiddenlayer_0/bias\"\n            string_val: \"dnn/hiddenlayer_0/kernel\"\n            string_val: \"dnn/logits/bias\"\n            string_val: \"dnn/logits/kernel\"\n            string_val: \"global_step\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/SaveV2/shape_and_slices\"\n      op: \"Const\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 5\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n              dim {\n                size: 5\n              }\n            }\n            string_val: \"150 0,150\"\n            string_val: \"180 150 0,180:0,150\"\n            string_val: \"20 0,20\"\n            string_val: \"150 20 0,150:0,20\"\n            string_val: \"\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/SaveV2\"\n      op: \"SaveV2\"\n      input: \"save/ShardedFilename\"\n      input: \"save/SaveV2/tensor_names\"\n      input: \"save/SaveV2/shape_and_slices\"\n      input: \"dnn/hiddenlayer_0/bias/part_0/read\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0/read\"\n      input: \"dnn/logits/bias/part_0/read\"\n      input: \"dnn/logits/kernel/part_0/read\"\n      input: \"global_step\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"dtypes\"\n        value {\n          list {\n            type: DT_FLOAT\n            type: 
DT_FLOAT\n            type: DT_FLOAT\n            type: DT_FLOAT\n            type: DT_INT64\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/control_dependency\"\n      op: \"Identity\"\n      input: \"save/ShardedFilename\"\n      input: \"^save/SaveV2\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@save/ShardedFilename\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/MergeV2Checkpoints/checkpoint_prefixes\"\n      op: \"Pack\"\n      input: \"save/ShardedFilename\"\n      input: \"^save/control_dependency\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"N\"\n        value {\n          i: 1\n        }\n      }\n      attr {\n        key: \"T\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 1\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/MergeV2Checkpoints\"\n      op: \"MergeV2Checkpoints\"\n      input: \"save/MergeV2Checkpoints/checkpoint_prefixes\"\n      input: \"save/Const\"\n      device: \"/device:CPU:0\"\n    }\n    node {\n      name: \"save/Identity\"\n      op: \"Identity\"\n      input: \"save/Const\"\n      input: \"^save/MergeV2Checkpoints\"\n      input: \"^save/control_dependency\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n 
       }\n      }\n    }\n    node {\n      name: \"save/RestoreV2/tensor_names\"\n      op: \"Const\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 5\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n              dim {\n                size: 5\n              }\n            }\n            string_val: \"dnn/hiddenlayer_0/bias\"\n            string_val: \"dnn/hiddenlayer_0/kernel\"\n            string_val: \"dnn/logits/bias\"\n            string_val: \"dnn/logits/kernel\"\n            string_val: \"global_step\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/RestoreV2/shape_and_slices\"\n      op: \"Const\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 5\n              }\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtype\"\n        value {\n          type: DT_STRING\n        }\n      }\n      attr {\n        key: \"value\"\n        value {\n          tensor {\n            dtype: DT_STRING\n            tensor_shape {\n              dim {\n                size: 5\n              }\n            }\n            string_val: \"150 0,150\"\n            string_val: \"180 150 0,180:0,150\"\n            string_val: \"20 0,20\"\n            string_val: \"150 20 0,150:0,20\"\n            string_val: \"\"\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/RestoreV2\"\n      op: \"RestoreV2\"\n      input: \"save/Const\"\n      input: \"save/RestoreV2/tensor_names\"\n      input: 
\"save/RestoreV2/shape_and_slices\"\n      device: \"/device:CPU:0\"\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n            shape {\n              dim {\n                size: 20\n              }\n            }\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n            shape {\n              unknown_rank: true\n            }\n          }\n        }\n      }\n      attr {\n        key: \"dtypes\"\n        value {\n          list {\n            type: DT_FLOAT\n            type: DT_FLOAT\n            type: DT_FLOAT\n            type: DT_FLOAT\n            type: DT_INT64\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/Assign\"\n      op: \"Assign\"\n      input: \"dnn/hiddenlayer_0/bias/part_0\"\n      input: \"save/RestoreV2\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/hiddenlayer_0/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/Assign_1\"\n      op: \"Assign\"\n      input: \"dnn/hiddenlayer_0/kernel/part_0\"\n      input: \"save/RestoreV2:1\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            
s: \"loc:@dnn/hiddenlayer_0/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 180\n              }\n              dim {\n                size: 150\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/Assign_2\"\n      op: \"Assign\"\n      input: \"dnn/logits/bias/part_0\"\n      input: \"save/RestoreV2:2\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/bias/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/Assign_3\"\n      op: \"Assign\"\n      input: \"dnn/logits/kernel/part_0\"\n      input: \"save/RestoreV2:3\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_FLOAT\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: \"loc:@dnn/logits/kernel/part_0\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n              dim {\n                size: 150\n              }\n              dim {\n                size: 20\n              }\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/Assign_4\"\n      op: \"Assign\"\n      input: \"global_step\"\n      input: \"save/RestoreV2:4\"\n      attr {\n        key: \"T\"\n        value {\n          type: DT_INT64\n        }\n      }\n      attr {\n        key: \"_class\"\n        value {\n          list {\n            s: 
\"loc:@global_step\"\n          }\n        }\n      }\n      attr {\n        key: \"_output_shapes\"\n        value {\n          list {\n            shape {\n            }\n          }\n        }\n      }\n    }\n    node {\n      name: \"save/restore_shard\"\n      op: \"NoOp\"\n      input: \"^save/Assign\"\n      input: \"^save/Assign_1\"\n      input: \"^save/Assign_2\"\n      input: \"^save/Assign_3\"\n      input: \"^save/Assign_4\"\n    }\n    node {\n      name: \"save/restore_all\"\n      op: \"NoOp\"\n      input: \"^save/restore_shard\"\n    }\n    versions {\n      producer: 26\n    }\n  }\n  saver_def {\n    filename_tensor_name: \"save/Const:0\"\n    save_tensor_name: \"save/Identity:0\"\n    restore_op_name: \"save/restore_all\"\n    max_to_keep: 5\n    sharded: true\n    keep_checkpoint_every_n_hours: 10000\n    version: V2\n  }\n  collection_def {\n    key: \"global_step\"\n    value {\n      bytes_list {\n        value: \"\\n\\rglobal_step:0\\022\\022global_step/Assign\\032\\022global_step/read:02\\037global_step/Initializer/zeros:0\"\n      }\n    }\n  }\n  collection_def {\n    key: \"params\"\n    value {\n      node_list {\n        value: \"params/l1_regularization_strength:0\"\n        value: \"params/augmentation_max_rotation_degrees:0\"\n        value: \"params/label_weights:0\"\n        value: \"params/dropout:0\"\n        value: \"params/learning_rate:0\"\n        value: \"params/use_included_label_weight:0\"\n        value: \"params/activation_fn:0\"\n        value: \"params/augmentation_x_shift_probability:0\"\n        value: \"params/layer_dims:0\"\n        value: \"params/model_name:0\"\n        value: \"params/l2_regularization_strength:0\"\n      }\n    }\n  }\n  collection_def {\n    key: \"saved_model_main_op\"\n    value {\n      node_list {\n        value: \"group_deps\"\n      }\n    }\n  }\n  collection_def {\n    key: \"summaries\"\n    value {\n      node_list {\n        value: 
\"dnn/dnn/hiddenlayer_0/fraction_of_zero_values:0\"\n        value: \"dnn/dnn/hiddenlayer_0/activation:0\"\n        value: \"dnn/dnn/logits/fraction_of_zero_values:0\"\n        value: \"dnn/dnn/logits/activation:0\"\n        value: \"dnn/hiddenlayer_0/kernel/part_0/fraction_of_zero_weights:0\"\n        value: \"dnn/logits/kernel/part_0/fraction_of_zero_weights:0\"\n      }\n    }\n  }\n  collection_def {\n    key: \"trainable_variables\"\n    value {\n      bytes_list {\n        value: \"\\n!dnn/hiddenlayer_0/kernel/part_0:0\\022&dnn/hiddenlayer_0/kernel/part_0/Assign\\032&dnn/hiddenlayer_0/kernel/part_0/read:0\\\"*\\n\\030dnn/hiddenlayer_0/kernel\\022\\004\\264\\001\\226\\001\\032\\002\\000\\000\\\"\\004\\264\\001\\226\\0012<dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform:08\\001\"\n        value: \"\\n\\037dnn/hiddenlayer_0/bias/part_0:0\\022$dnn/hiddenlayer_0/bias/part_0/Assign\\032$dnn/hiddenlayer_0/bias/part_0/read:0\\\"#\\n\\026dnn/hiddenlayer_0/bias\\022\\002\\226\\001\\032\\001\\000\\\"\\002\\226\\00121dnn/hiddenlayer_0/bias/part_0/Initializer/zeros:08\\001\"\n        value: \"\\n\\032dnn/logits/kernel/part_0:0\\022\\037dnn/logits/kernel/part_0/Assign\\032\\037dnn/logits/kernel/part_0/read:0\\\"!\\n\\021dnn/logits/kernel\\022\\003\\226\\001\\024\\032\\002\\000\\000\\\"\\003\\226\\001\\02425dnn/logits/kernel/part_0/Initializer/random_uniform:08\\001\"\n        value: \"\\n\\030dnn/logits/bias/part_0:0\\022\\035dnn/logits/bias/part_0/Assign\\032\\035dnn/logits/bias/part_0/read:0\\\"\\032\\n\\017dnn/logits/bias\\022\\001\\024\\032\\001\\000\\\"\\001\\0242*dnn/logits/bias/part_0/Initializer/zeros:08\\001\"\n      }\n    }\n  }\n  collection_def {\n    key: \"variables\"\n    value {\n      bytes_list {\n        value: \"\\n\\rglobal_step:0\\022\\022global_step/Assign\\032\\022global_step/read:02\\037global_step/Initializer/zeros:0\"\n        value: 
\"\\n!dnn/hiddenlayer_0/kernel/part_0:0\\022&dnn/hiddenlayer_0/kernel/part_0/Assign\\032&dnn/hiddenlayer_0/kernel/part_0/read:0\\\"*\\n\\030dnn/hiddenlayer_0/kernel\\022\\004\\264\\001\\226\\001\\032\\002\\000\\000\\\"\\004\\264\\001\\226\\0012<dnn/hiddenlayer_0/kernel/part_0/Initializer/random_uniform:08\\001\"\n        value: \"\\n\\037dnn/hiddenlayer_0/bias/part_0:0\\022$dnn/hiddenlayer_0/bias/part_0/Assign\\032$dnn/hiddenlayer_0/bias/part_0/read:0\\\"#\\n\\026dnn/hiddenlayer_0/bias\\022\\002\\226\\001\\032\\001\\000\\\"\\002\\226\\00121dnn/hiddenlayer_0/bias/part_0/Initializer/zeros:08\\001\"\n        value: \"\\n\\032dnn/logits/kernel/part_0:0\\022\\037dnn/logits/kernel/part_0/Assign\\032\\037dnn/logits/kernel/part_0/read:0\\\"!\\n\\021dnn/logits/kernel\\022\\003\\226\\001\\024\\032\\002\\000\\000\\\"\\003\\226\\001\\02425dnn/logits/kernel/part_0/Initializer/random_uniform:08\\001\"\n        value: \"\\n\\030dnn/logits/bias/part_0:0\\022\\035dnn/logits/bias/part_0/Assign\\032\\035dnn/logits/bias/part_0/read:0\\\"\\032\\n\\017dnn/logits/bias\\022\\001\\024\\032\\001\\000\\\"\\001\\0242*dnn/logits/bias/part_0/Initializer/zeros:08\\001\"\n      }\n    }\n  }\n  signature_def {\n    key: \"example:classification\"\n    value {\n      inputs {\n        key: \"inputs\"\n        value {\n          name: \"Placeholder:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"classes\"\n        value {\n          name: \"dnn/head/Tile:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"scores\"\n        value {\n          name: \"dnn/head/predictions/probabilities:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: 
-1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      method_name: \"tensorflow/serving/classify\"\n    }\n  }\n  signature_def {\n    key: \"example:predict\"\n    value {\n      inputs {\n        key: \"input\"\n        value {\n          name: \"Placeholder:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"class_ids\"\n        value {\n          name: \"dnn/head/predictions/ExpandDims:0\"\n          dtype: DT_INT64\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"classes\"\n        value {\n          name: \"dnn/head/predictions/str_classes:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"logits\"\n        value {\n          name: \"dnn/logits/BiasAdd:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"probabilities\"\n        value {\n          name: \"dnn/head/predictions/probabilities:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      method_name: \"tensorflow/serving/predict\"\n    }\n  }\n  signature_def {\n    key: \"example:serving_default\"\n    value {\n      inputs {\n        key: \"inputs\"\n        value {\n          name: \"Placeholder:0\"\n          dtype: DT_STRING\n        
  tensor_shape {\n            dim {\n              size: -1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"classes\"\n        value {\n          name: \"dnn/head/Tile:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"scores\"\n        value {\n          name: \"dnn/head/predictions/probabilities:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      method_name: \"tensorflow/serving/classify\"\n    }\n  }\n  signature_def {\n    key: \"patch:predict\"\n    value {\n      inputs {\n        key: \"input\"\n        value {\n          name: \"ParseExample/ParseExample:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 15\n            }\n            dim {\n              size: 12\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"class_ids\"\n        value {\n          name: \"dnn/head/predictions/ExpandDims:0\"\n          dtype: DT_INT64\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"classes\"\n        value {\n          name: \"dnn/head/predictions/str_classes:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"logits\"\n        value {\n          name: \"dnn/logits/BiasAdd:0\"\n          dtype: DT_FLOAT\n          
tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"probabilities\"\n        value {\n          name: \"dnn/head/predictions/probabilities:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      method_name: \"tensorflow/serving/predict\"\n    }\n  }\n  signature_def {\n    key: \"predict\"\n    value {\n      inputs {\n        key: \"input\"\n        value {\n          name: \"ParseExample/ParseExample:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 15\n            }\n            dim {\n              size: 12\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"class_ids\"\n        value {\n          name: \"dnn/head/predictions/ExpandDims:0\"\n          dtype: DT_INT64\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"classes\"\n        value {\n          name: \"dnn/head/predictions/str_classes:0\"\n          dtype: DT_STRING\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 1\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"logits\"\n        value {\n          name: \"dnn/logits/BiasAdd:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      outputs {\n        key: \"probabilities\"\n        value {\n          name: 
\"dnn/head/predictions/probabilities:0\"\n          dtype: DT_FLOAT\n          tensor_shape {\n            dim {\n              size: -1\n            }\n            dim {\n              size: 20\n            }\n          }\n        }\n      }\n      method_name: \"tensorflow/serving/predict\"\n    }\n  }\n}\n\n"
  },
  {
    "path": "moonlight/engine.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"The optical music recognition engine.\n\nParses PNG music score images.\n\nThe engine holds a TensorFlow graph containing all structural information and\nclassifier predictions from an image.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport six\nimport tensorflow as tf\n\nfrom moonlight import conversions\nfrom moonlight import image\nfrom moonlight import page_processors\nfrom moonlight import score_processors\nfrom moonlight import structure as structure_module\nfrom moonlight.glyphs import saved_classifier_fn\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.staves import base as staves_base\nfrom moonlight.structure import beams\nfrom moonlight.structure import components\nfrom moonlight.structure import verticals\n\n# TODO(ringw): Get OMR running on GPU. 
It seems to create too many individual\n# ops/allocations and can freeze the machine.\nCONFIG = tf.ConfigProto(device_count={'GPU': 0})\n\n\nclass OMREngine(object):\n  \"\"\"The OMR engine.\n\n  The engine reads one music score page image at a time, and extracts musical\n  elements from it.\n\n  The glyph classifier can be chosen by a custom glyph_classifier_fn.\n\n  Attributes:\n    graph: The TensorFlow graph used by OMR.\n    png_path: A scalar tensor representing the PNG image filename to load. It\n      should be used as a key in the `feed_dict` to `get_page()`. The image is\n      expected to contain an entire music score page.\n    image: A 2D uint8 tensor representing the music score page image. The\n      background is white (255) and the foreground is black (0). It may be fed\n      in to `get_page()` instead of specifying a PNG filename with `png_path`.\n    structure: The `Structure` holds the tensors that represent structural\n      information in the music score.\n    glyph_classifier: An instance of `BaseGlyphClassifier`, which holds the\n      tensor representing the detected glyphs for the entire page. Defaults to\n      nearest-neighbor classification using the pre-packaged labeled clusters.\n  \"\"\"\n\n  def __init__(self, glyph_classifier_fn=None):\n    \"\"\"Creates the engine and TF graph for running OMR.\n\n    Args:\n      glyph_classifier_fn: Callable that loads the glyph classifier into the\n        graph. Accepts a `Structure` as the single argument, and returns an\n        instance of `BaseGlyphClassifier`. The function typically loads a TF\n        saved model or other external data, and wraps the classification in a\n        concrete glyph classifier subclass. If the classifier uses a\n        `StafflineExtractor` for classification, it must set the\n        `staffline_extractor` attribute of the `Structure`. 
Otherwise, glyph x\n        coordinates will not be scaled back to image coordinates.\n    \"\"\"\n    glyph_classifier_fn = (\n        glyph_classifier_fn or saved_classifier_fn.build_classifier_fn())\n    self.graph = tf.Graph()\n    self.session = tf.Session(graph=self.graph)\n    with self.graph.as_default():\n      with self.session.as_default():\n        with tf.name_scope('OMREngine'):\n          self.png_path = tf.placeholder(tf.string, name='png_path', shape=())\n          self.image = image.decode_music_score_png(\n              tf.read_file(self.png_path, name='page_image'))\n          self.structure = structure_module.create_structure(self.image)\n        # Loading saved models happens outside of the name scope, because scopes\n        # can rename tensors from the model and cause dangling references.\n        # TODO(ringw): TF should be able to load models gracefully within a\n        # name scope.\n        self.glyph_classifier = glyph_classifier_fn(self.structure)\n\n  def run(self, input_pngs, output_notesequence=False):\n    \"\"\"Converts input PNGs into a `Score` message.\n\n    Args:\n      input_pngs: A list of PNG filenames to process.\n      output_notesequence: Whether to return a NoteSequence, as opposed to a\n        Score containing Pages with Glyphs.\n\n    Returns:\n      A NoteSequence message, or a Score message holding Pages for each input\n          image (with their detected Glyphs).\n    \"\"\"\n    if isinstance(input_pngs, six.string_types):\n      input_pngs = [input_pngs]\n    score = musicscore_pb2.Score()\n    score.page.extend(\n        self._get_page(feed_dict={self.png_path: png}) for png in input_pngs)\n    score = score_processors.process(score)\n    return (conversions.score_to_notesequence(score)\n            if output_notesequence else score)\n\n  def process_image(self, image_arr, process_structure=True):\n    \"\"\"Processes a uint8 image array.\n\n    Args:\n      image_arr: A 2D (H, W) uint8 NumPy array. 
Must have a white (255)\n        background and black (0) foreground.\n      process_structure: Whether to add structural information to the page.\n\n    Returns:\n      A `Page` message constructed from the contents of the image.\n    \"\"\"\n    with tf.Session(graph=self.graph, config=CONFIG):\n      return self._get_page(\n          feed_dict={self.image: image_arr},\n          process_structure=process_structure)\n\n  def _get_page(self, feed_dict=None, process_structure=True):\n    \"\"\"Returns the Page holding Glyphs for the page.\n\n    Args:\n      feed_dict: The feed dict to use for the TensorFlow graph. The image must\n        be fed in.\n      process_structure: If True, run the page_processors, which add staff\n        locations and other structural information. If False, return a Page\n        containing only Glyphs for each staff.\n\n    Returns:\n      A Page message holding Staff protos which have location information and\n          detected Glyphs.\n    \"\"\"\n    # If structure is given, output all structural information in addition to\n    # the Page message.\n    structure_data = (\n        _nested_ndarrays_to_tensors(self.structure.data)\n        if self.structure else [])\n    structure_data, glyphs = self.session.run(\n        [structure_data,\n         self.glyph_classifier.get_detected_glyphs()],\n        feed_dict=feed_dict)\n    computed_staves, computed_beams, computed_verticals, computed_components = (\n        structure_data)\n\n    # Construct and return a computed Structure.\n    computed_structure = structure_module.Structure(\n        staves_base.ComputedStaves(*computed_staves),\n        beams.ComputedBeams(*computed_beams),\n        verticals.ComputedVerticals(*computed_verticals),\n        components.ComputedComponents(*computed_components))\n\n    # The Page without staff location information.\n    labels_page = self.glyph_classifier.glyph_predictions_to_page(glyphs)\n\n    # Process the Page using the computed structure.\n  
  if process_structure:\n      processed_page = page_processors.process(\n          labels_page, computed_structure,\n          self.glyph_classifier.staffline_extractor)\n    else:\n      processed_page = labels_page\n    return processed_page\n\n\ndef _nested_ndarrays_to_tensors(data):\n  \"\"\"Converts possibly nested lists of np.ndarrays and Tensors to Tensors.\n\n  This is necessary in case some data in the Structure is already computed. We\n  just pass everything to tf.Session.run, and some data may be a tf.constant\n  which is just spit back out.\n\n  Args:\n    data: An np.ndarray, Tensor, or list recursively containing the same types.\n\n  Returns:\n    data with all np.ndarrays converted to Tensors.\n\n  Raises:\n    ValueError: If unexpected data was given.\n  \"\"\"\n  if isinstance(data, list):\n    return [_nested_ndarrays_to_tensors(element) for element in data]\n  elif isinstance(data, np.ndarray):\n    return tf.constant(data)\n  elif isinstance(data, tf.Tensor):\n    return data\n  else:\n    raise ValueError('Unexpected data: %s' % data)\n"
  },
  {
    "path": "moonlight/evaluation/BUILD",
    "content": "# Description:\n# OMR evaluation and music score similarity metrics.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"evaluation\",\n    deps = [\n        \":evaluator_lib\",\n        \":musicxml\",\n    ],\n)\n\npy_binary(\n    name = \"evaluator\",\n    srcs = [\"evaluator.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":evaluator_lib\",\n        # disable_tf2\n    ],\n)\n\npy_library(\n    name = \"evaluator_lib\",\n    srcs = [\"evaluator.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":musicxml\",\n        \"@com_google_protobuf//:protobuf_python\",\n        \"//moonlight:engine\",\n        \"//moonlight/conversions\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"evaluator_endtoend_test\",\n    size = \"large\",\n    srcs = [\"evaluator_endtoend_test.py\"],\n    data = [\n        \"//moonlight/testdata:images\",\n        \"//moonlight/testdata:musicxml\",\n    ],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":evaluator_lib\",\n        # disable_tf2\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"musicxml\",\n    srcs = [\"musicxml.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # lxml dep\n        \"//moonlight/music:constants\",\n        # pandas dep\n        # six dep\n    ],\n)\n\npy_test(\n    name = \"musicxml_test\",\n    srcs = [\"musicxml_test.py\"],\n    data = [\"//moonlight/testdata:musicxml\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":musicxml\",\n        # disable_tf2\n        # absl/testing dep\n        # lxml dep\n        # pandas dep\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/evaluation/evaluator.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Evaluates OMR using ground truth images and MusicXML.\"\"\"\n# TODO(ringw): Maybe add a flag for exporting CSV. We might also want to join\n# all DataFrames, with a new index for the ground truth title, if that's not too\n# unwieldy.\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom google.protobuf import text_format\nfrom tensorflow.python.lib.io import file_io\n\nfrom moonlight import conversions\nfrom moonlight import engine\nfrom moonlight.evaluation import musicxml\nfrom moonlight.protobuf import groundtruth_pb2\n\n\nclass Evaluator(object):\n\n  def __init__(self, **kwargs):\n    self.omr = engine.OMREngine(**kwargs)\n\n  def evaluate(self, ground_truth):\n    expected = file_io.read_file_to_string(ground_truth.ground_truth_filename)\n    score = self.omr.run(\n        page_spec.filename for page_spec in ground_truth.page_spec)\n    actual = conversions.score_to_musicxml(score)\n    return musicxml.musicxml_similarity(actual, expected)\n\n\ndef main(argv):\n  if len(argv) <= 1:\n    raise ValueError('Ground truth filenames are required')\n  evaluator = Evaluator()\n  for ground_truth_file in argv[1:]:\n    truth = groundtruth_pb2.GroundTruth()\n    text_format.Parse(file_io.read_file_to_string(ground_truth_file), truth)\n    print(truth.title)\n    
print(evaluator.evaluate(truth))\n\n\nif __name__ == '__main__':\n  tf.app.run()\n"
  },
  {
    "path": "moonlight/evaluation/evaluator_endtoend_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Runs OMR evaluation end to end and asserts on the evaluation metric.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport tensorflow as tf\n\nfrom moonlight.evaluation import evaluator\nfrom moonlight.protobuf import groundtruth_pb2\n\n\nclass EvaluatorEndToEndTest(tf.test.TestCase):\n\n  def testIncludedGroundTruth(self):\n    ground_truth = groundtruth_pb2.GroundTruth(\n        ground_truth_filename=os.path.join(\n            tf.resource_loader.get_data_files_path(),\n            '../testdata/IMSLP00747.golden.xml'),\n        page_spec=[\n            groundtruth_pb2.PageSpec(\n                filename=os.path.join(tf.resource_loader.get_data_files_path(),\n                                      '../testdata/IMSLP00747-000.png')),\n        ])\n    results = evaluator.Evaluator().evaluate(ground_truth)\n    # Evaluation score is known to be greater than 0.65.\n    self.assertGreater(results['overall_score']['total', ''], 0.65)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/evaluation/musicxml.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Simple MusicXML high-level diffing and evaluation.\n\nThis is a high-level evaluation for OMR because it compares musical elements,\ninstead of the individual glyphs and lines that are detected. It is also easier\nto find MusicXML ground truth that corresponds exactly to a scanned score.\n\nThe evaluation result is a pandas DataFrame. This can eventually have multiple\ncolumns with different evaluation metrics/subscores for different musical\nelements for each measure (and the average for the whole score), which we can\nchoose between as needed.\n\nThis is considered high-level evaluation [1] because MusicXML does not have\ninformation on every visual element (e.g. it has the stem direction and number\nof beams, but not the start and end coordinates of the stem and beams). We also\nwant to evaluate emergent properties of OMR, such as note durations and voices,\ninstead of just counting the raw elements that are detected. This has the\ndrawback that one mistake in low-level symbol detection can have drastic effects\non the evaluation, but has the benefit that we can obtain thousands of MusicXML\nscores. Low-level evaluation would require labeling the coordinates of all\nelements on a given music score image, and there is not a public dataset for\nprinted music scores (MUSCIMA++ has handwritten music scores, which are\ncurrently outside of our scope.)\n\n[1] D. 
Byrd and J. G. Simonsen. Towards a standard testbed for optical music\n    recognition: definitions, metrics, and page images, 2015.\n    http://www.informatics.indiana.edu/donbyrd/OMRTestbed/\n    OMRStandardTestbed1Mar2013.pdf\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport copy\nimport fractions\n\nfrom lxml import etree\nimport pandas as pd\nimport six\nfrom six import moves\n\nfrom moonlight.music import constants\n\nOVERALL_SCORE = 'overall_score'\n\n\ndef musicxml_similarity(a, b):\n  \"\"\"Determines the similarity of two scores represented as musicXML strings.\n\n  We currently assume the following:\n  - The durations of each representation represent the same tempo.\n    - i.e. the unit of measure tempo divisions is the same for both pieces.\n  - Scores have an equal number of measures. They are dissimilar otherwise.\n  - Corresponding parts have an equal number of staves.\n\n  This currently accounts for:\n  - Intra-measure note-to-note pitch and tempo distance, as an edit distance.\n    - This can easily be generalized to weight differences based on distance,\n      such as penalizing larger gaps in pitch or tempo.\n\n  Args:\n    a: a musicXML string\n    b: a musicXML string\n\n  Returns:\n    A pd.DataFrame with scores on similarity between two scores. 
1.0 means\n    exactly the same, 0.0 means not similar at all.\n    Example:\n                        overall_score\n      staff  measure\n      0      0          0.750\n             1          1.000\n      total             0.875\n  \"\"\"\n  if isinstance(a, six.text_type):\n    a = a.encode('utf-8')\n  if isinstance(a, bytes):\n    a = etree.fromstring(a)\n  if isinstance(b, six.text_type):\n    b = b.encode('utf-8')\n  if isinstance(b, bytes):\n    b = etree.fromstring(b)\n\n  a = PartStaves(a)\n  b = PartStaves(b)\n\n  # TODO(larryruili): Implement dissimilar measure count edit distance.\n  if a.all_measure_counts() != b.all_measure_counts():\n    return _not_similar()\n\n  measure_similarities = []\n  index = []\n  for part_staff in moves.xrange(a.num_partstaves()):\n    for measure_num in moves.xrange(a.num_measures(part_staff)):\n      measure_similarities.append(\n          measure_similarity(\n              a.get_measure(part_staff, measure_num),\n              b.get_measure(part_staff, measure_num)))\n      index.append((part_staff, measure_num))\n  df = pd.DataFrame(\n      measure_similarities,\n      columns=[OVERALL_SCORE],\n      index=pd.MultiIndex.from_tuples(index, names=['staff', 'measure']))\n  return df.append(\n      pd.DataFrame([df[OVERALL_SCORE].mean()],\n                   columns=[OVERALL_SCORE],\n                   index=pd.MultiIndex.from_tuples([('total', '')],\n                                                   names=['staff', 'measure'])))\n\n\ndef _not_similar():\n  \"\"\"Returns a pd.DataFrame representing 0 similarity between scores.\"\"\"\n  return pd.DataFrame(\n      [[0]],\n      columns=[OVERALL_SCORE],\n      index=pd.MultiIndex.from_product([('total',), ('',)],\n                                       names=['staff', 'measure']),\n  )\n\n\ndef _index(num_staves, num_measures):\n  return pd.MultiIndex.from_product(\n      [range(num_staves), range(num_measures)], names=['staff', 'measure'])\n\n\nclass PartStaves(object):\n  
\"\"\"Accessor for parts and staves in a MusicXML score.\n\n  self.score: An etree rooted at <score>\n  self.part_staves: A list of tuples (part number, staff number)\n  \"\"\"\n\n  def __init__(self, score):\n    self.score = score\n    num_parts = len(score.findall('part'))\n    part_staves = []\n    for i in moves.xrange(num_parts):\n      # XPath is 1-indexed.\n      part = score.find('part[%d]' % (i + 1))\n\n      def num_staves(part):\n        yield 1\n        for staves_tag in part.findall('measure/attributes/staves'):\n          yield int(staves_tag.text)\n        # Find all tags underneath any <attributes> tag. They may also include\n        # the staff number as a \"number\" XML attribute.\n        for attribute in part.findall('.//attributes//'):\n          if 'number' in attribute.attrib:\n            yield int(attribute.attrib['number'])\n\n      num_staves = max(num_staves(part))\n      for j in moves.xrange(num_staves):\n        part_staves.append((i, j))\n    self.part_staves = part_staves\n\n  def get_measure(self, part_staff, measure_num):\n    \"\"\"Returns a single staff measure given a part and measure index.\n\n    Args:\n      part_staff: Index of the staff across all parts.\n      measure_num: Index of the measure.\n\n    Returns:\n      A <measure> etree element.\n    \"\"\"\n    part_num, staff_num = self.part_staves[part_staff]\n    part = self.score.find('part[%d]' % (part_num + 1))\n    measure = part.find('measure[%d]' % (measure_num + 1))\n    staff_measure = etree.Element('measure')\n    staff_measure.append(self._build_attributes(staff_num, measure))\n    for elem in measure:\n      if elem.tag != 'attributes':\n        new_elem = _filter_for_staff(staff_num, elem)\n        if new_elem is not None:\n          staff_measure.append(new_elem)\n    return staff_measure\n\n  def _build_attributes(self, staff_num, orig_measure):\n    \"\"\"Builds a new <attributes> tag for the given measure.\n\n    If the measure already contains an 
<attributes> tag, all children are copied\n    as long as they do not have a <staff> tag that's incompatible with the given\n    staff number.\n\n    If the existing <attributes> don't contain a <divisions> tag, we take the\n    most recent <divisions> from any measure on any part. The <divisions> tag is\n    required at the start of the score.\n\n    Args:\n      staff_num: Index of the staff across all parts.\n      orig_measure: The original <measure> tag, which may or may not contain its\n        own <attributes>.\n\n    Returns:\n      A new <attributes> tag, keeping any applicable attributes from\n      `orig_measure`, and the most recent <divisions>.\n    \"\"\"\n    new_attributes = etree.Element('attributes')\n\n    orig_attributes = orig_measure.find('attributes')\n    if orig_attributes is not None:\n      for attribute in orig_attributes:\n        new_attribute = _filter_for_staff(staff_num, attribute)\n        if new_attribute is not None:\n          new_attributes.append(new_attribute)\n    # Look for the most recent \"divisions\" tag.\n    def find_divisions():\n      divisions = None\n      for part in orig_measure.getparent().getparent().findall('part'):\n        for prev_measure in part.findall('measure'):\n          prev_attrs = prev_measure.find('attributes')\n          if (prev_attrs is not None and\n              prev_attrs.find('divisions') is not None):\n            divisions = prev_attrs.find('divisions')\n          if prev_measure is orig_measure:\n            # Done.\n            if divisions is None:\n              raise ValueError('<divisions> is required')\n            # Copy the tag, or else when appending the tag to a new element, it\n            # will be deleted from the old element.\n            return copy.deepcopy(divisions)\n\n    if new_attributes.find('divisions') is None:\n      new_attributes.append(find_divisions())\n    return new_attributes\n\n  def num_partstaves(self):\n    \"\"\"Returns the total count of 
parts/staves.\"\"\"\n    return len(self.part_staves)\n\n  def num_measures(self, part_staff):\n    part_num, _ = self.part_staves[part_staff]\n    return len(self._get_part(part_num).findall('measure'))\n\n  def _get_part(self, part_num):\n    return self.score.find('part[%d]' % (part_num + 1))\n\n  def all_measure_counts(self):\n    return [\n        self.num_measures(part_staff)\n        for part_staff in moves.xrange(self.num_partstaves())\n    ]\n\n\ndef _filter_for_staff(staff_num, elem):\n  \"\"\"Determines whether the element belongs to the given staff.\n\n  Elements normally have a <staff> child, with a 1-indexed staff number.\n  However, elements in <attributes> may use a \"number\" XML attribute instead\n  (also 1-indexed).\n\n  Args:\n    staff_num: The 0-indexed staff index.\n    elem: The XML element (a descendant of <measure>).\n\n  Returns:\n    A copied element (with the \"number\" staff attribute removed, if present),\n    or None if the element does not belong to the given staff.\n  \"\"\"\n  staff = elem.find('staff')\n  new_elem = copy.deepcopy(elem)\n  if staff is not None:\n    for subelem in new_elem:\n      if subelem.tag == 'staff':\n        if subelem.text == str(staff_num + 1):\n          # Got the correct \"staff\" tag.\n          return new_elem\n        else:\n          # Incorrect \"staff\" value.\n          return None\n  # The \"number\" XML attribute can refer to the staff within <attributes>.\n  if 'number' in new_elem.attrib:\n    if new_elem.attrib['number'] == str(staff_num + 1):\n      del new_elem.attrib['number']\n      return new_elem\n    else:\n      # Incorrect \"number\" attribute.\n      return None\n  # No staff information; the element should be copied to all staves. new_elem\n  # is already a deep copy of elem, so it can be returned directly.\n  return new_elem\n\n\ndef measure_similarity(a, b):\n  \"\"\"Similarity metric between <measure> tags.\n\n  This currently converts <note> tags into tuples of the form\n  (pitch, duration), where the pitch is an integer encoding of the step,\n  octave, and alteration (see pitch_to_int), and the duration is a Fraction\n  (see duration_to_fraction). It then determines the edit distance between the\n  two sequences of notes,\n  normalized by the maximum edit distance possible.\n\n  The similarity is simply 1 - (normalized edit distance).\n\n  Args:\n    a: A <measure> etree tag.\n    b: A <measure> etree tag.\n\n  Returns:\n    A float between 0.0 (no similarity) and 1.0 (measures are equivalent).\n  \"\"\"\n  a = measure_to_note_list(a)\n  b = measure_to_note_list(b)\n  scale = max(len(a), len(b))\n  if scale == 0:\n    return 1\n  else:\n    return 1 - (levenshtein(a, b) / scale)\n\n\ndef measure_to_note_list(measure):\n  notes = measure.findall('note')\n  note_list = []\n  for note in notes:\n    duration = duration_to_fraction(note.find('duration'), measure)\n    pitch = pitch_to_int(note.find('pitch'))\n    note_list.append((pitch, duration))\n  return note_list\n\n\ndef pitch_to_int(pitch):\n  \"\"\"Converts a <pitch> tag to an integer representation.\n\n  Note is a character [A-G].\n  Octave is an integer [0-9].\n  Alter is an integer in [-2, 2].\n\n  The formula is: pitch = 12 * octave + note_map[note] + alter, where note_map\n  maps each note letter to its semitone offset within the octave (C = 0, D = 2,\n  ..., B = 11). We only represent white-key tones in this map because the alter\n  is a separate dimension.\n\n  This is a heuristic that treats A#4 the same as Bb4, which may not\n  be the semantics we want for an optical recognition pipeline. However, it is\n  a simple way of gauging 'how far' a note's pitch is from the other.\n\n  Args:\n    pitch: A <pitch> etree tag.\n\n  Returns:\n    An integer representation of the pitch: the number of semitones the pitch\n    is above C0 (e.g. C4 maps to 48).\n  \"\"\"\n  note_map = dict(zip('CDEFGAB', constants.MAJOR_SCALE))\n  if pitch is None:\n    return None\n  note = pitch.find('step').text\n  octave = int(pitch.find('octave').text)\n  alter_tag = pitch.find('alter')\n  alter = int(alter_tag.text) if alter_tag is not None else 0\n  return 12 * octave + note_map[note] + alter\n\n\ndef duration_to_fraction(duration, measure):\n  if duration is None:\n    return None\n  else:\n    return fractions.Fraction(\n        int(duration.text), int(measure.find('attributes/divisions').text))\n\n\ndef _note_distance(n1, n2):\n  \"\"\"Returns the distance between two notes.\n\n  Pitch distance is a scaled absolute difference in pitch values between the two\n  notes. If the difference is at most 4 half-steps, the distance is\n  difference / 4; otherwise it is 1. This represents 'close-enough' semantics,\n  where we reward pitches that are at least reasonably close to the target,\n  since pitches can easily be very far off.\n\n  Duration distance is the absolute difference between the two durations,\n  divided by the larger of the two durations (with a floor of 1 on the\n  divisor). This represents 'percent-correct' semantics, because all durations\n  will be at least somewhat correct.\n\n  The note distance is defined as the average between pitch distance and\n  duration distance. Currently each distance has the same weight.\n\n  Args:\n    n1: A tuple of (integer pitch representation, Fraction duration).\n    n2: A tuple of (integer pitch representation, Fraction duration).\n\n  Returns:\n    A float from 0 to 1 representing the distance between the two notes.\n  \"\"\"\n  # If either pitch or duration is missing from either note, use exact equality.\n  if not (n1[0] and n2[0]) or not (n1[1] and n2[1]):\n    return n1 != n2\n\n  max_pitch_diff = 4.\n  pitch_diff = float(abs(n1[0] - n2[0]))\n  if pitch_diff > max_pitch_diff:\n    pitch_distance = 1\n  else:\n    pitch_distance = pitch_diff / max_pitch_diff\n\n  duration1 = float(n1[1])\n  duration2 = float(n2[1])\n  duration_distance = abs(duration1 - duration2) / max(duration1, duration2, 1)\n  return (pitch_distance + duration_distance) / 2\n\n\ndef levenshtein(s1, s2, distance=_note_distance):\n  \"\"\"Determines the edit distance between two sequences.\n\n  Args:\n    s1: A sequence.\n    s2: A sequence.\n    distance: A callable that accepts an element from each of s1 and s2, and\n      returns a float distance metric between 0 and 1.\n\n  Returns:\n    The Levenshtein distance between the two sequences.\n  \"\"\"\n  if len(s1) < len(s2):\n    return levenshtein(s2, s1, distance)\n\n  if not s2:\n    return len(s1)\n\n  previous_row = range(len(s2) + 1)\n  for i, c1 in enumerate(s1):\n    current_row = [i + 1]\n    for j, c2 in enumerate(s2):\n      insertions = previous_row[j + 1] + 1\n      deletions = current_row[j] + 1\n      substitutions = previous_row[j] + distance(c1, c2)\n      current_row.append(min(insertions, deletions, substitutions))\n    previous_row = current_row\n\n  return previous_row[-1]\n\n\ndef exact_match_distance(n1, n2):\n  \"\"\"Metric for `levenshtein` that requires an exact match.\"\"\"\n  return n1 != n2\n"
  },
  {
    "path": "moonlight/evaluation/musicxml_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for MusicXML evaluation.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport copy\nimport os.path\n\nfrom absl.testing import absltest\nfrom lxml import etree\nfrom moonlight.evaluation import musicxml\nimport pandas as pd\nimport pandas.util.testing as pd_testing\nimport six\nfrom six import moves\nfrom tensorflow.python.platform import resource_loader\n\n\nclass MusicXMLTest(absltest.TestCase):\n\n  def testNotSimilar(self):\n    self.assertEqual(musicxml._not_similar()['overall_score']['total'][0], 0)\n\n  def testEmptyMeasures(self):\n    a = etree.Element('score')\n    a_part = etree.SubElement(a, 'part')\n    measure = etree.SubElement(a_part, 'measure')\n    attributes = etree.SubElement(measure, 'attributes')\n    etree.SubElement(attributes, 'divisions').text = '24'\n    etree.SubElement(a_part, 'measure')\n    etree.SubElement(a_part, 'measure')\n    b = etree.Element('score')\n    b_part = etree.SubElement(b, 'part')\n    measure = etree.SubElement(b_part, 'measure')\n    attributes = etree.SubElement(measure, 'attributes')\n    etree.SubElement(attributes, 'divisions').text = '24'\n    etree.SubElement(b_part, 'measure')\n    etree.SubElement(b_part, 'measure')\n    similarity = musicxml.musicxml_similarity(a, b)\n    self.assertEqual(len(similarity), 4)\n    
self.assertEqual(similarity['overall_score']['total'][0], 1.0)\n\n  def testSomeMeasuresNotEqual(self):\n    a = etree.Element('score')\n    a_part = etree.SubElement(a, 'part')\n    measure = etree.SubElement(a_part, 'measure')\n    attributes = etree.SubElement(measure, 'attributes')\n    etree.SubElement(attributes, 'divisions').text = '24'\n    etree.SubElement(etree.SubElement(a_part, 'measure'), 'note')\n    a_part = etree.SubElement(a, 'part')\n    etree.SubElement(etree.SubElement(a_part, 'measure'), 'note')\n    etree.SubElement(a_part, 'measure')\n\n    b = etree.Element('score')\n    b_part = etree.SubElement(b, 'part')\n    measure = etree.SubElement(b_part, 'measure')\n    attributes = etree.SubElement(measure, 'attributes')\n    etree.SubElement(attributes, 'divisions').text = '24'\n    etree.SubElement(measure, 'note')\n    etree.SubElement(etree.SubElement(b_part, 'measure'), 'note')\n    b_part = etree.SubElement(b, 'part')\n    etree.SubElement(b_part, 'measure')\n    etree.SubElement(etree.SubElement(b_part, 'measure'), 'note')\n\n    pd_testing.assert_frame_equal(\n        musicxml.musicxml_similarity(a, b),\n        pd.DataFrame([0.0, 1.0, 0.0, 0.0, 0.25],\n                     columns=[musicxml.OVERALL_SCORE],\n                     index=pd.MultiIndex.from_tuples([(0, 0), (0, 1), (1, 0),\n                                                      (1, 1), ('total', '')],\n                                                     names=['staff',\n                                                            'measure'])))\n\n  def testIdentical(self):\n    filename = six.text_type(\n        os.path.join(resource_loader.get_data_files_path(),\n                     '../testdata/IMSLP00747.golden.xml'))\n    score = etree.fromstring(open(filename, 'rb').read())\n    similarity = musicxml.musicxml_similarity(score, score)\n    self.assertGreater(len(similarity), 1)\n    self.assertEqual(similarity['overall_score']['total'][0], 1.0)\n\n  def 
testMoreChangesLessSimilar(self):\n    filename = six.text_type(\n        os.path.join(resource_loader.get_data_files_path(),\n                     '../testdata/TWO_MEASURE_SAMPLE.xml'))\n    score = etree.fromstring(open(filename, 'rb').read())\n    score2 = copy.deepcopy(score)\n    durations2 = score2.findall('part/measure/note/duration')\n    # Change the duration of one note by +1\n    durations2[0].text = str(int(durations2[0].text) + 1)\n    score3 = copy.deepcopy(score2)\n    octaves3 = score3.findall('part/measure/note/pitch/octave')\n    # Change the octave of another note by +1\n    octaves3[1].text = str(int(octaves3[1].text) + 1)\n    similarity11 = musicxml.musicxml_similarity(score, score)\n    similarity12 = musicxml.musicxml_similarity(score, score2)\n    similarity13 = musicxml.musicxml_similarity(score, score3)\n\n    # Score 2 should be less similar than 1 to itself\n    self.assertLess(similarity12['overall_score']['total'][0],\n                    similarity11['overall_score']['total'][0])\n    # Score 3 should be less similar than 2 to 1\n    self.assertLess(similarity13['overall_score']['total'][0],\n                    similarity12['overall_score']['total'][0])\n\n  def testLevenshteinDistance(self):\n\n    def levenshtein(a, b):\n      return musicxml.levenshtein(a, b, musicxml.exact_match_distance)\n\n    self.assertEqual(levenshtein('sky', 'sky'), 0)\n    self.assertEqual(levenshtein('sky', 'soy'), 1)\n    self.assertEqual(levenshtein('sky', 'skyll'), 2)\n    self.assertEqual(levenshtein('sky', 's'), 2)\n    self.assertEqual(levenshtein('sky', 'dig'), 3)\n    self.assertEqual(levenshtein([(1, 2), (3, 4)], [(1, 2), (4, 3)]), 1)\n    self.assertEqual(\n        levenshtein([(1), (2), (3, (1, 2))], [(1, 2), (2), (3, (1, 3))]), 2)\n\n  def testOptionalAlter(self):\n    pitch = etree.Element('pitch')\n    etree.SubElement(pitch, 'step').text = 'C'\n    etree.SubElement(pitch, 'octave').text = '4'\n\n    pitch_with_alter = 
copy.deepcopy(pitch)\n    etree.SubElement(pitch_with_alter, 'alter').text = '0'\n    self.assertEqual(\n        musicxml.pitch_to_int(pitch), musicxml.pitch_to_int(pitch_with_alter))\n\n    pitch_with_alter.find('alter').text = '-1'\n    self.assertNotEqual(\n        musicxml.pitch_to_int(pitch), musicxml.pitch_to_int(pitch_with_alter))\n\n  def testPitchToInt(self):\n    pitch_c4 = etree.Element('pitch')\n    etree.SubElement(pitch_c4, 'step').text = 'C'\n    etree.SubElement(pitch_c4, 'octave').text = '4'\n    self.assertEqual(musicxml.pitch_to_int(pitch_c4), 48)\n\n    pitch_bsharp3 = etree.Element('pitch')\n    etree.SubElement(pitch_bsharp3, 'step').text = 'B'\n    etree.SubElement(pitch_bsharp3, 'octave').text = '3'\n    etree.SubElement(pitch_bsharp3, 'alter').text = '1'\n    self.assertEqual(musicxml.pitch_to_int(pitch_bsharp3), 48)\n\n    pitch_dflat4 = etree.Element('pitch')\n    etree.SubElement(pitch_dflat4, 'step').text = 'D'\n    etree.SubElement(pitch_dflat4, 'octave').text = '4'\n    etree.SubElement(pitch_dflat4, 'alter').text = '-1'\n    self.assertEqual(musicxml.pitch_to_int(pitch_dflat4), 49)\n\n  def testNoteDistance_samePitch_sameDuration_isEqual(self):\n    self.assertEqual(musicxml._note_distance((48, 3), (48, 3)), 0.0)\n\n  def testNoteDistance_samePitch_differentDuration_fuzzyDuration(self):\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (48, 4)), 0.125)\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (48, 6)), 0.25)\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (48, 8)), 0.3125)\n\n  def testNoteDistance_differentPitch_sameDuration_fuzzyPitch(self):\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (49, 3)), 0.125)\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (46, 3)), 0.25)\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (55, 3)), 0.5)\n\n  def testNoteDistance_differentPitchAndDuration_fuzzyDistance(self):\n    
self.assertAlmostEqual(musicxml._note_distance((48, 3), (47, 4)), 0.25)\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (50, 5)), 0.45)\n    self.assertAlmostEqual(musicxml._note_distance((48, 3), (52, 6)), 0.75)\n    self.assertAlmostEqual(musicxml._note_distance((48, 1), (53, 4)), 0.875)\n\n\nclass PartStavesTest(absltest.TestCase):\n\n  def testDivisions_golden(self):\n    \"\"\"Test that <divisions> is propagated across all measures.\"\"\"\n    filename = six.text_type(\n        os.path.join(resource_loader.get_data_files_path(),\n                     '../testdata/IMSLP00747.golden.xml'))\n    score = etree.fromstring(open(filename, 'rb').read())\n    part_staves = musicxml.PartStaves(score)\n    self.assertEqual(part_staves.num_partstaves(), 2)\n    self.assertEqual(part_staves.num_measures(0), 22)\n    self.assertEqual(part_staves.num_measures(1), 22)\n    for i in moves.xrange(2):\n      for j in moves.xrange(22):\n        measure = part_staves.get_measure(i, j)\n        self.assertEqual(measure.find('attributes/divisions').text, '8')\n\n  def testAttributes_perStaff(self):\n    score = etree.Element('score-partwise')\n    part = etree.SubElement(score, 'part')\n    measure = etree.SubElement(part, 'measure')\n    attributes = etree.SubElement(measure, 'attributes')\n\n    # Include the required <divisions>.\n    etree.SubElement(attributes, 'divisions').text = '8'\n\n    # Key signature for the first staff.\n    key = etree.SubElement(attributes, 'key')\n    key.attrib['number'] = '1'\n    # Dummy value\n    key.text = 'C major'\n\n    # Key signature for the second staff.\n    key = etree.SubElement(attributes, 'key')\n    key.attrib['number'] = '2'\n    # Dummy value\n    key.text = 'G major'\n\n    part_staves = musicxml.PartStaves(score)\n    self.assertEqual(part_staves.num_partstaves(), 2)\n\n    staff_1_measure = part_staves.get_measure(0, 0)\n    self.assertEqual(len(staff_1_measure), 1)\n    attributes = 
staff_1_measure.find('attributes')\n    self.assertEqual(len(attributes), 2)\n    self.assertEqual(etree.tostring(attributes[0]), b'<divisions>8</divisions>')\n    self.assertEqual(etree.tostring(attributes[1]), b'<key>C major</key>')\n\n    staff_2_measure = part_staves.get_measure(1, 0)\n    self.assertEqual(len(staff_2_measure), 1)\n    attributes = staff_2_measure.find('attributes')\n    self.assertEqual(len(attributes), 2)\n    self.assertEqual(etree.tostring(attributes[0]), b'<divisions>8</divisions>')\n    self.assertEqual(etree.tostring(attributes[1]), b'<key>G major</key>')\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/glyphs/BUILD",
    "content": "# Description:\n# Glyph classification for OMR.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"glyphs\",\n    deps = [\n        \":base\",\n        \":corpus\",\n        \":geometry\",\n        \":glyph_types\",\n        \":knn\",\n        \":knn_model\",\n        \":neural\",\n        \":note_dots\",\n        \":repeated\",\n        \":saved_classifier\",\n    ],\n)\n\npy_library(\n    name = \"corpus\",\n    srcs = [\"corpus.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"base\",\n    srcs = [\"base.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # enum34 dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"convolutional\",\n    srcs = [\"convolutional.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":base\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"convolutional_test\",\n    srcs = [\"convolutional_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":base\",\n        \":convolutional\",\n        \":testing\",\n        # disable_tf2\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # pandas dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"knn\",\n    srcs = [\"knn.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":convolutional\",\n        \":corpus\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/util:patches\",\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"knn_model\",\n    srcs = [\"knn_model.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # tensorflow 
dep\n    ],\n)\n\npy_test(\n    name = \"knn_test\",\n    srcs = [\"knn_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":base\",\n        \":knn\",\n        # disable_tf2\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # pandas dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"geometry\",\n    srcs = [\"geometry.py\"],\n    srcs_version = \"PY2AND3\",\n)\n\npy_library(\n    name = \"glyph_types\",\n    srcs = [\"glyph_types.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\"//moonlight/protobuf:protobuf_py_pb2\"],\n)\n\npy_library(\n    name = \"neural\",\n    srcs = [\"neural.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"neural_test\",\n    size = \"small\",\n    srcs = [\"neural_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":neural\",\n        # disable_tf2\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"saved_classifier\",\n    srcs = [\"saved_classifier.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":convolutional\",\n        \"//moonlight/staves:staffline_extractor\",\n        \"//moonlight/util:patches\",\n        # tensorflow dep\n        # tensorflow.contrib.graph_editor py dep\n        # tensorflow.contrib.util py dep\n    ],\n)\n\npy_test(\n    name = \"saved_classifier_test\",\n    srcs = [\"saved_classifier_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":saved_classifier\",\n        # disable_tf2\n        \"//moonlight:image\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/structure\",\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"saved_classifier_fn\",\n    srcs = [\"saved_classifier_fn.py\"],\n    data = [\"//moonlight/data/glyphs_nn_model_20180808\"],\n    srcs_version = 
\"PY2AND3\",\n    deps = [\":saved_classifier\"],\n)\n\npy_library(\n    name = \"note_dots\",\n    srcs = [\"note_dots.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":geometry\",\n        \":glyph_types\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/structure:components\",\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"repeated\",\n    srcs = [\"repeated.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\"//moonlight/glyphs:glyph_types\"],\n)\n\npy_library(\n    name = \"testing\",\n    testonly = True,\n    srcs = [\"testing.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":convolutional\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/glyphs/base.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Base glyph classifier model.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport abc\n\nimport enum\nimport numpy as np\n\nfrom moonlight.protobuf import musicscore_pb2\n\n\nclass GlyphsTensorColumns(enum.IntEnum):\n  \"\"\"The columns of the glyphs tensors.\n\n  Glyphs should be held in a 2D tensor where the columns are the staff of the\n  glyph, the vertical position on the staff, x coordinate, and glyph type.\n  \"\"\"\n\n  STAFF_INDEX = 0\n  Y_POSITION = 1\n  X = 2\n  TYPE = 3\n\n\nclass BaseGlyphClassifier(object):\n  \"\"\"The base glyph classifier model.\"\"\"\n  __metaclass__ = abc.ABCMeta\n\n  def __init__(self):\n    \"\"\"Base constructor for a glyph classifier.\n\n    Attributes:\n      staffline_extractor: Optional staffline extractor, if used for\n        classification. If present, classification uses the scaled stafflines,\n        and glyph x positions will be scaled back to page coordinates when\n        constructing the Page. 
If None, no scaling is done.\n    \"\"\"\n    self.staffline_extractor = None\n\n  @abc.abstractmethod\n  def get_detected_glyphs(self):\n    \"\"\"Detects glyphs in the image.\n\n    Each glyph belongs to a staff, and has a y position numbered from 0 for the\n    center staff line.\n\n    Returns:\n      A Tensor of glyphs, with shape (num_glyphs, 4). The columns are indexed by\n        `GlyphsTensorColumns`. The glyphs will be sorted later, so they may be\n        in any order.\n    \"\"\"\n    pass\n\n  def glyph_predictions_to_page(self, predictions):\n    \"\"\"Converts the glyph predictions to a Page message.\n\n    Args:\n      predictions: NumPy array which is equal to\n        `self.get_detected_glyphs().eval()` (but multiple tensors are evaluated\n        in a single run for efficiency). Shape `(num_glyphs, 4)`, with columns\n        indexed by `GlyphsTensorColumns`.\n\n    Returns:\n      A `Page` message holding a single `StaffSystem`, with `Staff` messages\n        that only hold `Glyph`s. Structural information is added to the page by\n        `OMREngine`.\n    \"\"\"\n    num_staves = (\n        predictions[:, int(GlyphsTensorColumns.STAFF_INDEX)].max() +\n        1 if predictions.size else 0)\n\n    def create_glyph(glyph):\n      return musicscore_pb2.Glyph(\n          x=glyph[GlyphsTensorColumns.X],\n          y_position=glyph[GlyphsTensorColumns.Y_POSITION],\n          type=glyph[GlyphsTensorColumns.TYPE])\n\n    def generate_staff(staff_num):\n      glyphs = predictions[predictions[:,\n                                       int(GlyphsTensorColumns.STAFF_INDEX)] ==\n                           staff_num]\n      # For determinism, sort glyphs by x, breaking ties by position (low to\n      # high).\n      glyph_order = np.lexsort(\n          glyphs[:, [GlyphsTensorColumns.Y_POSITION, GlyphsTensorColumns.X]].T)\n      glyphs = glyphs[glyph_order]\n      return musicscore_pb2.Staff(glyph=map(create_glyph, glyphs))\n\n    return musicscore_pb2.Page(system=[\n        musicscore_pb2.StaffSystem(\n            staff=map(generate_staff, range(num_staves)))\n    ])\n"
  },
  {
    "path": "moonlight/glyphs/convolutional.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Base glyph classifier model.\"\"\"\n# TODO(ringw): Replace subclasses with a saved TF model. Hardcode the\n# stafflines and predictions tensor names, so that we define the classifier\n# separately. It can either be defined in the same graph before constructing the\n# Convolutional1DGlyphClassifier, or loaded from a saved model after training\n# externally.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport abc\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.glyphs import base\nfrom moonlight.protobuf import musicscore_pb2\n\nDEFAULT_RUN_MIN_LENGTH = 3\n\n\nclass Convolutional1DGlyphClassifier(base.BaseGlyphClassifier):\n  \"\"\"The base 1D convolutional glyph classifier model.\"\"\"\n\n  def __init__(self, run_min_length=DEFAULT_RUN_MIN_LENGTH):\n    \"\"\"Base classifier model.\n\n    Args:\n      run_min_length: Must have this many consecutive pixels with the same\n        non-NONE predicted glyph to emit the glyph.\n    \"\"\"\n    super(Convolutional1DGlyphClassifier, self).__init__()\n    self.run_min_length = run_min_length\n\n  @property\n  @abc.abstractmethod\n  def staffline_predictions(self):\n    \"\"\"The staffline predictions tensor.\n\n    Convolutional1DGlyphClassifier wraps this output, which would be the output\n    of a 1D convolutional model, and extracts 
individual glyphs to be added to\n    the Page message.\n\n    Shape (num_staves, num_stafflines, width).\n    \"\"\"\n    pass\n\n  def _build_detected_glyphs(self, predictions_arr):\n    \"\"\"Takes the convolutional output ndarray and builds the individual glyphs.\n\n    At each staff and y position, looks for short runs of the same detected\n    glyph, and then outputs a single glyph at the x coordinate of the center of\n    the run.\n\n    Args:\n      predictions_arr: A NumPy array with the result of `staffline_predictions`.\n        Shape (num_staves, num_stafflines, width).\n\n    Returns:\n      A 2D array of the glyph coordinates. Shape (num_glyphs, 4) with columns\n          corresponding to base.GlyphsTensorColumns.\n    \"\"\"\n    glyphs = []\n    num_staves, num_stafflines, width = predictions_arr.shape\n    for staff in range(num_staves):\n      for staffline in range(num_stafflines):\n        y_position = num_stafflines // 2 - staffline\n        run_start = -1\n        run_value = musicscore_pb2.Glyph.NONE\n        for x in range(width + 1):\n          if x < width:\n            value = predictions_arr[staff, staffline, x]\n          if x == width or value != run_value:\n            if run_value > musicscore_pb2.Glyph.NONE:\n              # Process the current run if it is at least run_min_length pixels.\n              if x - run_start >= self.run_min_length:\n                glyph_center_x = (run_start + x) // 2\n                glyphs.append(\n                    self._create_glyph_arr(staff, y_position, glyph_center_x,\n                                           run_value))\n            run_value = value\n            run_start = x\n    # Convert to a 2D array.\n    glyphs = np.asarray(glyphs, np.int32)\n    return np.reshape(glyphs, (-1, 4))\n\n  def _create_glyph_arr(self, staff_index, y_position, x, type_value):\n    glyph = np.empty(len(base.GlyphsTensorColumns), np.int32)\n    glyph[base.GlyphsTensorColumns.STAFF_INDEX] = staff_index\n    
glyph[base.GlyphsTensorColumns.Y_POSITION] = y_position\n    glyph[base.GlyphsTensorColumns.X] = x\n    glyph[base.GlyphsTensorColumns.TYPE] = type_value\n    return glyph\n\n  def get_detected_glyphs(self):\n    \"\"\"Extracts the individual glyphs as a Tensor.\n\n    This is run in the TensorFlow graph, so we have to wrap the Python glyph\n    logic in a `py_func`.\n\n    Returns:\n      A Tensor of glyphs, with shape (num_glyphs, 4). The columns are indexed by\n        `base.GlyphsTensorColumns`.\n    \"\"\"\n    return tf.py_func(self._build_detected_glyphs, [self.staffline_predictions],\n                      tf.int32)\n"
  },
  {
    "path": "moonlight/glyphs/convolutional_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for Convolutional1DGlyphClassifier.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom moonlight.glyphs import base\nfrom moonlight.glyphs import convolutional\nfrom moonlight.glyphs import testing\nfrom moonlight.protobuf import musicscore_pb2\n\nSTAFF_INDEX = base.GlyphsTensorColumns.STAFF_INDEX\nY_POSITION = base.GlyphsTensorColumns.Y_POSITION\nX = base.GlyphsTensorColumns.X\nTYPE = base.GlyphsTensorColumns.TYPE\n\n\nclass ConvolutionalTest(tf.test.TestCase):\n\n  def testGetGlyphsPage(self):\n    # Refer to testing.py for the glyphs array.\n    # pyformat: disable\n    glyphs = pd.DataFrame(\n        [\n            {STAFF_INDEX: 0, Y_POSITION: 0, X: 0, TYPE: 3},\n            {STAFF_INDEX: 0, Y_POSITION: -1, X: 1, TYPE: 4},\n            {STAFF_INDEX: 0, Y_POSITION: 0, X: 2, TYPE: 5},\n            {STAFF_INDEX: 0, Y_POSITION: 1, X: 4, TYPE: 2},\n            {STAFF_INDEX: 1, Y_POSITION: 1, X: 2, TYPE: 3},\n            {STAFF_INDEX: 1, Y_POSITION: 0, X: 2, TYPE: 5},\n            {STAFF_INDEX: 1, Y_POSITION: -1, X: 4, TYPE: 3},\n            {STAFF_INDEX: 1, Y_POSITION: -1, X: 5, TYPE: 5},\n        ],\n        columns=[STAFF_INDEX, Y_POSITION, X, TYPE])\n    # Compare glyphs (rows in the glyphs array) regardless of 
their position in\n    # the array (they are not required to be sorted).\n    self.assertEqual(\n        set(\n            map(tuple,\n                convolutional.Convolutional1DGlyphClassifier(\n                    run_min_length=1)._build_detected_glyphs(\n                        testing.PREDICTIONS))),\n        set(map(tuple, glyphs.values)))\n\n  def testNoGlyphs_dummyClassifier(self):\n\n    class DummyClassifier(convolutional.Convolutional1DGlyphClassifier):\n      \"\"\"Outputs the classifications for no glyphs on multiple staves.\"\"\"\n\n      @property\n      def staffline_predictions(self):\n        return tf.fill([5, 9, 100], musicscore_pb2.Glyph.NONE)\n\n    with self.test_session():\n      self.assertAllEqual(\n          DummyClassifier().get_detected_glyphs().eval(),\n          np.zeros((0, 4), np.int32))\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/glyphs/corpus.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Labeled glyph corpus.\n\nReads Examples holding image patches and glyph records from TFRecords.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\n\ndef get_patch_shape(corpus_file):\n  \"\"\"Gets the patch shape (height, width) from the corpus file.\n\n  Args:\n    corpus_file: Path to a TFRecords file.\n\n  Returns:\n    A tuple (height, width), extracted from the first record.\n\n  Raises:\n    ValueError: if the corpus_file is empty.\n  \"\"\"\n  example = tf.train.Example()\n  try:\n    example.ParseFromString(next(tf.python_io.tf_record_iterator(corpus_file)))\n  except StopIteration as e:\n    raise ValueError('corpus_file cannot be empty: %s' % e)\n  return (example.features.feature['height'].int64_list.value[0],\n          example.features.feature['width'].int64_list.value[0])\n\n\ndef parse_corpus(corpus_file, height, width):\n  \"\"\"Returns tensors holding the parsed result of the corpus file.\n\n  Uses the default TensorFlow session to read examples.\n\n  Args:\n    corpus_file: Path to a TFRecords file.\n    height: Patch height, as returned from `get_patch_shape`.\n    width: Patch width, as returned from `get_patch_width`.\n\n  Returns:\n    patches: float32 tensor with shape (num_patches, height, width).\n    labels: int64 
tensor with shape (num_patches,).\n  \"\"\"\n  sess = tf.get_default_session()\n  producer = tf.train.string_input_producer([corpus_file], num_epochs=1)\n  unused_keys, examples = tf.TFRecordReader().read_up_to(producer, 10000)\n  parsed_examples = tf.parse_example(\n      examples, {\n          'patch': tf.FixedLenFeature((height, width), tf.float32),\n          'label': tf.FixedLenFeature((), tf.int64)\n      })\n  sess.run(tf.local_variables_initializer())  # initialize num_epochs\n  coord = tf.train.Coordinator()\n  queue_runners = tf.train.start_queue_runners(start=True, coord=coord)\n  assert queue_runners, 'queue runners should have started'\n  all_patches = []\n  all_labels = []\n  while True:\n    try:\n      patch, label = sess.run(\n          [parsed_examples['patch'], parsed_examples['label']])\n    except tf.errors.OutOfRangeError:\n      break  # done\n    all_patches.append(patch)\n    all_labels.append(label)\n  coord.request_stop()\n  coord.join()\n  return np.concatenate(all_patches), np.concatenate(all_labels)\n"
  },
  {
    "path": "moonlight/glyphs/geometry.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Glyph y coordinate calculation.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n\ndef glyph_y(staff, glyph):\n  \"\"\"Calculates the glyph y coordinate.\n\n  Args:\n    staff: A Staff, used for interpolating the staff center y coordinate.\n    glyph: A Glyph on the Staff.\n\n  Returns:\n    The y coordinate of the glyph on the page.\n\n  Raises:\n    ValueError: If the glyph is not contained by the interval spanned by the\n        staff on the x axis.\n  \"\"\"\n  for point_a, point_b in zip(staff.center_line[:-1], staff.center_line[1:]):\n    if point_a.x <= glyph.x < point_b.x:\n      staff_center_y = point_a.y + ((point_b.y - point_a.y) *\n                                    (glyph.x - point_a.x) //\n                                    (point_b.x - point_a.x))\n      # y positions count up (in the negative y direction).\n      return staff_center_y - staff.staffline_distance * glyph.y_position // 2\n  raise ValueError('Glyph (%s) is not contained by staff (%s)' %\n                   (glyph, staff.center_line))\n"
  },
  {
    "path": "moonlight/glyphs/glyph_types.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility for glyph types.\n\nDetermines which modifiers may be attached to which glyphs (currently just\nnoteheads).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.protobuf import musicscore_pb2\n\n\ndef is_notehead(glyph):\n  return glyph.type in [\n      musicscore_pb2.Glyph.NOTEHEAD_EMPTY, musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n      musicscore_pb2.Glyph.NOTEHEAD_WHOLE\n  ]\n\n\ndef is_stemmed_notehead(glyph):\n  return glyph.type in [\n      musicscore_pb2.Glyph.NOTEHEAD_EMPTY, musicscore_pb2.Glyph.NOTEHEAD_FILLED\n  ]\n\n\ndef is_beamed_notehead(glyph):\n  return glyph.type == musicscore_pb2.Glyph.NOTEHEAD_FILLED\n\n\ndef is_dotted_notehead(glyph):\n  # Any notehead can be dotted.\n  return is_notehead(glyph)\n\n\ndef is_clef(glyph):\n  return glyph.type in [\n      musicscore_pb2.Glyph.CLEF_TREBLE, musicscore_pb2.Glyph.CLEF_BASS\n  ]\n\n\ndef is_rest(glyph):\n  return glyph.type in [\n      musicscore_pb2.Glyph.REST_QUARTER,\n      musicscore_pb2.Glyph.REST_EIGHTH,\n      musicscore_pb2.Glyph.REST_SIXTEENTH,\n  ]\n"
  },
  {
    "path": "moonlight/glyphs/knn.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"K-Nearest-Neighbors glyph classification.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.glyphs import convolutional\nfrom moonlight.glyphs import corpus\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.util import patches\n\n# k = 3 has the best performance for noteheads, clefs, and sharps. 
k = 5 seems\n# to increase false negatives, so we probably don't want to increase k further\n# with our current data.\nK_NEAREST_VALUE = 3\n\nNUM_GLYPHS = len(musicscore_pb2.Glyph.Type.keys())\n\n\nclass NearestNeighborGlyphClassifier(\n    convolutional.Convolutional1DGlyphClassifier):\n  \"\"\"Classifies staffline patches using k-nearest-neighbors.\"\"\"\n\n  def __init__(self, corpus_file, staffline_extractor, **kwargs):\n    \"\"\"Builds a k-nearest-neighbor classifier with labeled patches.\n\n    Args:\n      corpus_file: Path to the TFRecords of Examples with patch (cluster) values\n        in the \"patch\" feature, and the glyph label in the \"label\" feature.\n      staffline_extractor: The staffline extractor.\n      **kwargs: Passed through to `Convolutional1DGlyphClassifier`.\n    \"\"\"\n    super(NearestNeighborGlyphClassifier, self).__init__(**kwargs)\n\n    patch_height, patch_width = corpus.get_patch_shape(corpus_file)\n    centroids, labels = corpus.parse_corpus(corpus_file, patch_height,\n                                            patch_width)\n    centroids_shape = tf.shape(centroids)\n    flattened_centroids = tf.reshape(\n        centroids,\n        [centroids_shape[0], centroids_shape[1] * centroids_shape[2]])\n    self.staffline_extractor = staffline_extractor\n    stafflines = staffline_extractor.extract_staves()\n    # The full staffline width, used later to pad the predictions.\n    width = tf.shape(stafflines)[-1]\n    # Shape (num_staves, num_stafflines, num_patches, height, patch_width).\n    staffline_patches = patches.patches_1d(stafflines, patch_width)\n    staffline_patches_shape = tf.shape(staffline_patches)\n    flattened_patches = tf.reshape(staffline_patches, [\n        staffline_patches_shape[0] * staffline_patches_shape[1] *\n        staffline_patches_shape[2],\n        staffline_patches_shape[3] * staffline_patches_shape[4]\n    ])\n    distance_matrix = _squared_euclidean_distance_matrix(\n        flattened_patches, flattened_centroids)\n\n    # Take 
the k centroids with the lowest distance to each patch. Wrap the k\n    # constant in a tf.identity, which tests can use to feed in another value.\n    k_value = tf.identity(tf.constant(K_NEAREST_VALUE), name='k_nearest_value')\n    nearest_centroid_inds = tf.nn.top_k(-distance_matrix, k=k_value)[1]\n    # Get the label corresponding to each nearby centroid, and reshape the\n    # labels back to the original shape.\n    nearest_labels = tf.reshape(\n        tf.gather(labels, tf.reshape(nearest_centroid_inds, [-1])),\n        tf.shape(nearest_centroid_inds))\n    # Make a histogram of counts for each glyph type in the nearest centroids,\n    # for each row (patch).\n    bins = tf.map_fn(lambda row: tf.bincount(row, minlength=NUM_GLYPHS),\n                     tf.to_int32(nearest_labels))\n    # Take the argmax of the histogram to get the top prediction. Discard glyph\n    # type 1 (NONE) for now.\n    mode_out_of_k = tf.argmax(\n        bins[:, musicscore_pb2.Glyph.NONE + 1:], axis=1) + 2\n    # Force predictions to NONE only if all k nearby centroids were NONE.\n    # Otherwise, the non-NONE nearby centroids will contribute to the\n    # prediction.\n    mode_out_of_k = tf.where(\n        tf.equal(bins[:, musicscore_pb2.Glyph.NONE], k_value),\n        tf.fill(\n            tf.shape(mode_out_of_k), tf.to_int64(musicscore_pb2.Glyph.NONE)),\n        mode_out_of_k)\n    predictions = tf.reshape(mode_out_of_k, staffline_patches_shape[:3])\n\n    # Pad the output.\n    predictions_width = tf.shape(predictions)[-1]\n    pad_before = (width - predictions_width) // 2\n    pad_shape_before = tf.concat([staffline_patches_shape[:2], [pad_before]],\n                                 axis=0)\n    pad_shape_after = tf.concat(\n        [staffline_patches_shape[:2], [width - predictions_width - pad_before]],\n        axis=0)\n    self.output = tf.concat(\n        [\n            # NONE has value 1.\n            tf.ones(pad_shape_before, tf.int64),\n            predictions,\n         
   tf.ones(pad_shape_after, tf.int64),\n        ],\n        axis=-1)\n\n  @property\n  def staffline_predictions(self):\n    return self.output\n\n\ndef _squared_euclidean_distance_matrix(a, b):\n  # Trick for computing the squared Euclidean distance matrix.\n  # Entry (i, j) = a[i].sum() + b[j].sum() - 2 * (a[i] * b[j]).sum()\n  #              = sum_k (a[i, k] + b[j, k] - 2 * a[i, k] * b[j, k])\n  #              = sum_k (a[i, k] - b[j, k]) ** 2\n  a_sum = tf.reshape(tf.reduce_sum(a, axis=1), [-1, 1])  # column vector\n  b_sum = tf.reshape(tf.reduce_sum(b, axis=1), [1, -1])  # row vector\n\n  return a_sum + b_sum - 2 * tf.matmul(a, b, transpose_b=True)\n"
  },
  {
    "path": "moonlight/glyphs/knn_model.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"K-Nearest-Neighbors glyph classification.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow_estimator.python.estimator.canned import prediction_keys\n\nfrom moonlight.protobuf import musicscore_pb2\n\n# k = 3 has the best performance for noteheads, clefs, and sharps. k = 5 seems\n# to increase false negatives, so we probably don't want to increase k further\n# with our current data.\nK_NEAREST_VALUE = 3\n\nNUM_GLYPHS = len(musicscore_pb2.Glyph.Type.keys())\n\n\ndef knn_kmeans_model(centroids, labels, patches=None):\n  \"\"\"The KNN k-means classifier model.\n\n  Args:\n    centroids: The k-means centroids NumPy array. Shape `(num_centroids,\n      patch_height, patch_width)`.\n    labels: The centroid labels NumPy array. Vector with length `num_centroids`.\n    patches: Optional input tensor for the patches. If None, a placeholder will\n      be used.\n\n  Returns:\n    The predictions (class ids) tensor determined from the input patches. 
Vector\n    with the same length as `patches`.\n  \"\"\"\n  with tf.name_scope('knn_model'):\n    centroids = tf.identity(\n        _to_float(tf.constant(_to_uint8(centroids))), name='centroids')\n    labels = tf.constant(labels, name='labels')\n    centroids_shape = tf.shape(centroids)\n    num_centroids = centroids_shape[0]\n    patch_height = centroids_shape[1]\n    patch_width = centroids_shape[2]\n    flattened_centroids = tf.reshape(\n        centroids, [num_centroids, patch_height * patch_width],\n        name='flattened_centroids')\n    if patches is None:\n      patches = tf.placeholder(\n          tf.float32, (None, centroids.shape[1], centroids.shape[2]),\n          name='patches')\n    patches_shape = tf.shape(patches)\n    flattened_patches = tf.reshape(\n        patches, [patches_shape[0], patches_shape[1] * patches_shape[2]],\n        name='flattened_patches')\n    with tf.name_scope('distance_matrix'):\n      distance_matrix = _squared_euclidean_distance_matrix(\n          flattened_patches, flattened_centroids)\n\n    # Take the k centroids with the lowest distance to each patch. 
Wrap the k\n    # constant in a tf.identity, which tests can use to feed in another value.\n    k_value = tf.identity(tf.constant(K_NEAREST_VALUE), name='k_nearest_value')\n    nearest_centroid_inds = tf.nn.top_k(-distance_matrix, k=k_value)[1]\n    # Get the label corresponding to each nearby centroid, and reshape the\n    # labels back to the original shape.\n    nearest_labels = tf.reshape(\n        tf.gather(labels, tf.reshape(nearest_centroid_inds, [-1])),\n        tf.shape(nearest_centroid_inds),\n        name='nearest_labels')\n    # Make a histogram of counts for each glyph type in the nearest centroids,\n    # for each row (patch).\n    length = NUM_GLYPHS\n    bins = tf.map_fn(\n        lambda row: tf.bincount(row, minlength=length, maxlength=length),\n        tf.to_int32(nearest_labels),\n        name='bins')\n    with tf.name_scope('mode_out_of_k'):\n      # Take the argmax of the histogram to get the top prediction. Discard\n      # glyph type 1 (NONE) for now.\n      mode_out_of_k = tf.argmax(\n          bins[:, musicscore_pb2.Glyph.NONE + 1:], axis=1) + 2\n      # Force predictions to NONE only if all k nearby centroids were NONE.\n      # Otherwise, the non-NONE nearby centroids will contribute to the\n      # prediction.\n      mode_out_of_k = tf.where(\n          tf.equal(bins[:, musicscore_pb2.Glyph.NONE], k_value),\n          tf.fill(\n              tf.shape(mode_out_of_k), tf.to_int64(musicscore_pb2.Glyph.NONE)),\n          mode_out_of_k)\n    return tf.identity(mode_out_of_k, name='predictions')\n\n\ndef _to_uint8(values):\n  return np.rint(values * 255).astype(np.uint8)\n\n\ndef _to_float(values_t):\n  return tf.to_float(values_t) / tf.constant(255.)\n\n\ndef export_knn_model(centroids, labels, export_path):\n  \"\"\"Writes the KNN saved model.\n\n  Args:\n    centroids: The k-means centroids NumPy array.\n    labels: The labels of the k-means centroids.\n    export_path: The output saved model directory.\n  \"\"\"\n  g = tf.Graph()\n  with 
g.as_default():\n    predictions = knn_kmeans_model(centroids, labels)\n    patches = g.get_tensor_by_name('knn_model/patches:0')\n    predictions_info = tf.saved_model.utils.build_tensor_info(predictions)\n    patches_info = tf.saved_model.utils.build_tensor_info(patches)\n    with tf.Session(graph=g) as sess:\n      builder = tf.saved_model.builder.SavedModelBuilder(export_path)\n      builder.add_meta_graph_and_variables(\n          sess, ['serve'],\n          signature_def_map={\n              tf.saved_model.signature_constants\n              .DEFAULT_SERVING_SIGNATURE_DEF_KEY:\n                  tf.saved_model.signature_def_utils.build_signature_def(\n                      inputs={'input': patches_info},\n                      outputs={\n                          prediction_keys.PredictionKeys.CLASS_IDS:\n                              predictions_info\n                      }),\n          })\n      builder.save()\n\n\ndef _squared_euclidean_distance_matrix(a, b):\n  # Trick for computing the squared Euclidean distance matrix.\n  # Entry (i, j) = a[i].sum() + b[j].sum() - 2 * (a[i] * b[j]).sum()\n  #              = sum_k (a[i, k] + b[j, k] - 2 * a[i, k] * b[j, k])\n  #              = sum_k (a[i, k] - b[j, k]) ** 2\n  a_sum = tf.reshape(tf.reduce_sum(a, axis=1), [-1, 1])  # column vector\n  b_sum = tf.reshape(tf.reduce_sum(b, axis=1), [1, -1])  # row vector\n\n  return a_sum + b_sum - 2 * tf.matmul(a, b, transpose_b=True)\n"
  },
  {
    "path": "moonlight/glyphs/knn_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the KNN glyph classifier.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tempfile\n\nimport pandas as pd\nimport tensorflow as tf\n\nfrom moonlight.glyphs import base\nfrom moonlight.glyphs import knn\nfrom moonlight.protobuf import musicscore_pb2\n\nSTAFF_INDEX = base.GlyphsTensorColumns.STAFF_INDEX\nY_POSITION = base.GlyphsTensorColumns.Y_POSITION\nX = base.GlyphsTensorColumns.X\nTYPE = base.GlyphsTensorColumns.TYPE\nGlyph = musicscore_pb2.Glyph  # pylint: disable=invalid-name\n\n\nclass KnnTest(tf.test.TestCase):\n\n  def testFakeStaffline(self):\n    # Staffline containing fake glyphs.\n    staffline = tf.constant(\n        [[1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1],\n         [0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1],\n         [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1]])\n    staffline = tf.cast(staffline, tf.float32)\n    # Mapping of glyphs (row-major order) to glyph types.\n    patterns = {\n        (1, 0, 1, 0, 1, 0, 1, 0, 1): musicscore_pb2.Glyph.CLEF_TREBLE,\n        (0, 0, 0, 0, 1, 0, 0, 0, 0): musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n        (1, 1, 1, 1, 0, 1, 1, 1, 1): musicscore_pb2.Glyph.NOTEHEAD_EMPTY,\n        (0, 0, 0, 0, 0, 0, 0, 0, 0): musicscore_pb2.Glyph.NONE,\n        
(1, 0, 0, 0, 0, 0, 0, 0, 0): musicscore_pb2.Glyph.NONE,\n        (0, 1, 0, 0, 0, 0, 0, 0, 0): musicscore_pb2.Glyph.NONE,\n        (0, 0, 1, 0, 0, 0, 0, 0, 0): musicscore_pb2.Glyph.NONE,\n        (0, 0, 0, 1, 0, 0, 0, 0, 0): musicscore_pb2.Glyph.NONE,\n        (0, 0, 0, 0, 0, 1, 0, 0, 0): musicscore_pb2.Glyph.NONE,\n        (0, 0, 0, 0, 0, 0, 1, 0, 0): musicscore_pb2.Glyph.NONE,\n        (0, 0, 0, 0, 0, 0, 0, 1, 0): musicscore_pb2.Glyph.NONE,\n        (0, 0, 0, 0, 0, 0, 0, 0, 1): musicscore_pb2.Glyph.NONE,\n    }\n    with tf.Session():\n      with tempfile.NamedTemporaryFile(mode='r') as examples_file:\n        with tf.python_io.TFRecordWriter(examples_file.name) as writer:\n          # Sort the keys for determinism.\n          for pattern in sorted(patterns):\n            example = tf.train.Example()\n            example.features.feature['patch'].float_list.value.extend(pattern)\n            example.features.feature['height'].int64_list.value.append(3)\n            example.features.feature['width'].int64_list.value.append(3)\n            example.features.feature['label'].int64_list.value.append(\n                patterns[pattern])\n            writer.write(example.SerializeToString())\n\n        class FakeStafflineExtractor(object):\n\n          def extract_staves(self):\n            return staffline[None, None, :, :]\n\n        # stafflines are 4D (num_staves, num_stafflines, height, width).\n        classifier = knn.NearestNeighborGlyphClassifier(\n            examples_file.name, FakeStafflineExtractor(), run_min_length=1)\n        k_nearest_value = tf.get_default_graph().get_tensor_by_name(\n            'k_nearest_value:0')\n        glyphs = classifier.get_detected_glyphs().eval(\n            feed_dict={k_nearest_value: 1})\n    # The patches of the staffline that match non-NONE patterns (in row-major\n    # order) should appear here (x is their center coordinate).\n    # pyformat: disable\n    expected_glyphs = pd.DataFrame(\n        [\n            
{STAFF_INDEX: 0, Y_POSITION: 0, X: 1, TYPE: Glyph.CLEF_TREBLE},\n            {STAFF_INDEX: 0, Y_POSITION: 0, X: 9, TYPE: Glyph.NOTEHEAD_EMPTY},\n            {STAFF_INDEX: 0, Y_POSITION: 0, X: 14, TYPE: Glyph.CLEF_TREBLE},\n            {STAFF_INDEX: 0, Y_POSITION: 0, X: 19, TYPE: Glyph.NOTEHEAD_FILLED},\n        ],\n        columns=[STAFF_INDEX, Y_POSITION, X, TYPE])\n    self.assertAllEqual(glyphs, expected_glyphs)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/glyphs/neural.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"1-D convolutional neural network glyph classifier model.\n\nConvolves a filter horizontally along a staffline, to classify glyphs at each\nx position.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.protobuf import musicscore_pb2\n\n# Count every glyph type except for UNKNOWN_TYPE.\nNUM_GLYPHS = len(musicscore_pb2.Glyph.Type.values()) - 1\n\n\n# TODO(ringw): Make this extend BaseGlyphClassifier.\nclass NeuralNetworkGlyphClassifier(object):\n  \"\"\"Holds a TensorFlow NN model used for classifying glyphs on staff lines.\"\"\"\n\n  def __init__(self,\n               input_placeholder,\n               hidden_layer,\n               reconstruction_layer=None,\n               autoencoder_vars=None,\n               labels_placeholder=None,\n               prediction_layer=None,\n               prediction_vars=None):\n    \"\"\"Builds the NeuralNetworkGlyphClassifier that holds the TensorFlow model.\n\n    Args:\n      input_placeholder: A tf.placeholder representing the input staffline\n        image. Dtype float32 and shape (batch_size, target_height, None).\n      hidden_layer: An inner layer in the model. 
Should be the last layer in the\n        autoencoder model before reconstructing the input, and/or an\n        intermediate layer in the prediction network. It is intended to be the\n        last common ancestor of the reconstruction_layer output and the\n        prediction_layer output, if both are present.\n      reconstruction_layer: The reconstruction of the input, for an autoencoder\n        model. If non-None, should have the same shape as input_placeholder.\n      autoencoder_vars: The variables for the autoencoder model (parameters\n        affecting hidden_layer and reconstruction_layer), or None. If non-None,\n        a dict mapping variable name to tf.Variable object.\n      labels_placeholder: The labels tensor. A placeholder will be created if\n        None is given. Dtype int32 and shape (batch_size, width). Values are\n        between 0 and NUM_GLYPHS - 1 (where each value is the Glyph.Type enum\n        value minus one, to skip UNKNOWN_TYPE).\n      prediction_layer: The logit probability of each glyph for each column.\n        Must be able to be passed to tf.nn.softmax to produce the probability of\n        each glyph. 2D (width, NUM_GLYPHS). May be None if the model is not\n        being used for classification.\n      prediction_vars: The variables for the classification model (parameters\n        affecting hidden_layer and prediction_layer), or None. 
If non-None, a\n        dict mapping variable name to tf.Variable object.\n    \"\"\"\n    self.input_placeholder = input_placeholder\n    self.hidden_layer = hidden_layer\n    self.reconstruction_layer = reconstruction_layer\n    self.autoencoder_vars = autoencoder_vars or {}\n    # Calculate the loss that will be minimized for the autoencoder model.\n    self.autoencoder_loss = None\n    if self.reconstruction_layer is not None:\n      self.autoencoder_loss = (\n          tf.reduce_mean(\n              tf.squared_difference(self.input_placeholder,\n                                    self.reconstruction_layer)))\n    self.prediction_layer = prediction_layer\n    self.prediction_vars = prediction_vars or {}\n    self.labels_placeholder = (\n        labels_placeholder if labels_placeholder is not None else\n        tf.placeholder(tf.int32, (None, None)))\n    # Calculate the loss that will be minimized for the prediction model.\n    self.prediction_loss = None\n    if self.prediction_layer is not None:\n      self.prediction_loss = (\n          tf.reduce_mean(\n              tf.nn.softmax_cross_entropy_with_logits(\n                  logits=self.prediction_layer,\n                  labels=tf.one_hot(self.labels_placeholder, NUM_GLYPHS))))\n    # The probabilities of each glyph for each column (None if the model is\n    # not being used for classification).\n    self.prediction = None\n    if self.prediction_layer is not None:\n      self.prediction = tf.nn.softmax(self.prediction_layer)\n\n  def get_autoencoder_initializers(self):\n    \"\"\"Gets the autoencoder initializer ops.\n\n    Returns:\n      The list of TensorFlow ops which initialize the autoencoder model.\n    \"\"\"\n    return [var.initializer for var in self.autoencoder_vars.values()]\n\n  def get_classifier_initializers(self):\n    \"\"\"Gets the classifier initializer ops.\n\n    Returns:\n      The list of TensorFlow ops which initialize the classifier model.\n    \"\"\"\n    return [var.initializer for var in self.prediction_vars.values()]\n\n  @staticmethod\n  def semi_supervised_model(batch_size,\n                            
target_height,\n                            input_placeholder=None,\n                            labels_placeholder=None):\n    \"\"\"Constructs the semi-supervised model.\n\n    Consists of an autoencoder and classifier, sharing a hidden layer.\n\n    Args:\n      batch_size: The number of staffline images in a batch, which must be known\n        at model definition time. int.\n      target_height: The height of each scaled staffline image. int.\n      input_placeholder: The input layer. A placeholder will be created if None\n        is given. Dtype float32 and shape (batch_size, target_height,\n        any_width).\n      labels_placeholder: The labels tensor. A placeholder will be created if\n        None is given. Dtype int32 and shape (batch_size, width).\n\n    Returns:\n      A NeuralNetworkGlyphClassifier instance holding the model.\n    \"\"\"\n    if input_placeholder is None:\n      input_placeholder = tf.placeholder(tf.float32,\n                                         (batch_size, target_height, None))\n    autoencoder_vars = {}\n    prediction_vars = {}\n\n    hidden, layer_vars = InputConvLayer(input_placeholder, 10).get()\n    autoencoder_vars.update(layer_vars)\n    prediction_vars.update(layer_vars)\n\n    hidden, layer_vars = HiddenLayer(hidden, 10, 10).get()\n    autoencoder_vars.update(layer_vars)\n    prediction_vars.update(layer_vars)\n\n    reconstruction, layer_vars = ReconstructionLayer(hidden, target_height,\n                                                     target_height).get()\n    autoencoder_vars.update(layer_vars)\n\n    hidden, layer_vars = HiddenLayer(hidden, 10, 10, name=\"hidden_2\").get()\n    prediction_vars.update(layer_vars)\n\n    prediction, layer_vars = PredictionLayer(hidden).get()\n    prediction_vars.update(layer_vars)\n\n    return NeuralNetworkGlyphClassifier(\n        input_placeholder,\n        hidden,\n        reconstruction_layer=reconstruction,\n        autoencoder_vars=autoencoder_vars,\n        
labels_placeholder=labels_placeholder,\n        prediction_layer=prediction,\n        prediction_vars=prediction_vars)\n\n\nclass BaseLayer(object):\n  \"\"\"Base class for 1D conv layers, holding the weights and bias.\"\"\"\n\n  def __init__(self, filter_size, n_in, n_out, name):\n    self.weights = tf.Variable(\n        tf.truncated_normal((filter_size, n_in, n_out)), name=name + \"_W\")\n    self.bias = tf.Variable(tf.zeros(n_out), name=name + \"_bias\")\n    self.vars = {self.weights.name: self.weights, self.bias.name: self.bias}\n\n  def get(self):\n    \"\"\"Gets the layer output and variables.\n\n    Returns:\n      The output tensor of the layer.\n      The dict of variables (parameters) for the layer.\n    \"\"\"\n    return self.output, self.vars\n\n\nclass InputConvLayer(BaseLayer):\n  \"\"\"Convolves the input image strip, producing multiple outputs per column.\"\"\"\n\n  def __init__(self, image, n_hidden, activation=tf.nn.sigmoid, name=\"input\"):\n    \"\"\"Creates the InputConvLayer.\n\n    Args:\n      image: The input image (height, width). Should be wider than it is tall.\n      n_hidden: The number of output nodes of the layer.\n      activation: Callable applied to the convolved image. Applied to the 1D\n        convolution result to produce the activation of the layer.\n      name: The prefix for variable names for the layer.  
Produces self.output\n        with shape (width, n_hidden).\n    \"\"\"\n    height = int(image.get_shape()[1])\n    super(InputConvLayer, self).__init__(\n        filter_size=height, n_in=height, n_out=n_hidden, name=name)\n    self.input = image\n    # Transpose the image, so that the rows are \"channels\" in a 1D input.\n    self.output = activation(\n        tf.nn.conv1d(\n            tf.transpose(image, [0, 2, 1]),\n            self.weights,\n            stride=1,\n            padding=\"SAME\") + self.bias[None, None, :])\n\n\nclass HiddenLayer(BaseLayer):\n  \"\"\"Performs a 1D convolution between hidden layers in the model.\"\"\"\n\n  def __init__(self,\n               layer_in,\n               filter_size,\n               n_out,\n               activation=tf.nn.sigmoid,\n               name=\"hidden\"):\n    \"\"\"Performs a 1D convolution between hidden layers in the model.\n\n    Args:\n      layer_in: The input layer (width, num_channels).\n      filter_size: The width of the convolution filter.\n      n_out: The number of output channels.\n      activation: Callable applied to the convolved image. Applied to the 1D\n        convolution result to produce the activation of the layer.\n      name: The prefix for variable names for the layer.  
Produces self.output\n        with shape (width, n_out).\n    \"\"\"\n    n_in = int(layer_in.get_shape()[2])\n    super(HiddenLayer, self).__init__(filter_size, n_in, n_out, name)\n    self.output = activation(\n        tf.nn.conv1d(layer_in, self.weights, stride=1, padding=\"SAME\") +\n        self.bias[None, None, :])\n\n\nclass ReconstructionLayer(BaseLayer):\n  \"\"\"Outputs a reconstructed image from a hidden layer.\"\"\"\n\n  def __init__(self,\n               layer_in,\n               filter_size,\n               out_height,\n               activation=tf.nn.sigmoid,\n               name=\"reconstruction\"):\n    \"\"\"Outputs a reconstructed image of shape (out_height, width).\n\n    Args:\n      layer_in: The input layer (width, num_channels).\n      filter_size: The width of the convolution filter.\n      out_height: The height of the output image.\n      activation: Callable applied to the convolved image. Applied to the 1D\n        convolution result to produce the activation of the output.\n      name: The prefix for variable names for the layer.  Produces self.output\n        with shape (out_height, width).\n    \"\"\"\n    n_in = int(layer_in.get_shape()[2])\n    super(ReconstructionLayer, self).__init__(filter_size, n_in, out_height,\n                                              name)\n    output = activation(\n        tf.nn.conv1d(layer_in, self.weights, stride=1, padding=\"SAME\") +\n        self.bias[None, None, :])\n    self.output = tf.transpose(output, [0, 2, 1])\n\n\nclass PredictionLayer(BaseLayer):\n  \"\"\"Classifies each column from a hidden layer.\"\"\"\n\n  def __init__(self, layer_in, name=\"prediction\"):\n    \"\"\"Outputs logit predictions for each column from a hidden layer.\n\n    Args:\n      layer_in: The input layer (width, num_channels).\n      name: The prefix for variable names for the layer.  Produces the logits\n        for each class in self.output. 
Shape (width, NUM_GLYPHS)\n    \"\"\"\n    n_in = int(layer_in.get_shape()[2])\n    n_out = NUM_GLYPHS\n    super(PredictionLayer, self).__init__(1, n_in, n_out, name)\n\n    input_shape = tf.shape(layer_in)\n    input_columns = tf.reshape(\n        layer_in, [input_shape[0] * input_shape[1], input_shape[2]])\n    # Ignore the 0th axis of the weights (convolutional filter, which is 1 here)\n    weights = self.weights[0, :, :]\n    output = tf.matmul(input_columns, weights) + self.bias\n    self.output = tf.reshape(output,\n                             [input_shape[0], input_shape[1], NUM_GLYPHS])\n"
  },
  {
    "path": "moonlight/glyphs/neural_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the neural network glyph classifier.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.glyphs import neural\n\nNUM_STAFFLINES = 5\nTARGET_HEIGHT = 15\nIMAGE_WIDTH = 100\n\n\nclass NeuralNetworkGlyphClassifierTest(tf.test.TestCase):\n\n  def testSemiSupervisedClassifier(self):\n    # Ensure that the losses can be evaluated without error.\n    stafflines = tf.random_uniform((NUM_STAFFLINES, TARGET_HEIGHT, IMAGE_WIDTH))\n    # Use every glyph type once, padding the rest of the width with NONE (0).\n    labels_single_batch = np.concatenate([\n        np.arange(neural.NUM_GLYPHS),\n        np.zeros(IMAGE_WIDTH - neural.NUM_GLYPHS)\n    ]).astype(np.int32)\n    labels = np.repeat(labels_single_batch[None, :], NUM_STAFFLINES, axis=0)\n    classifier = neural.NeuralNetworkGlyphClassifier.semi_supervised_model(\n        batch_size=NUM_STAFFLINES,\n        target_height=TARGET_HEIGHT,\n        input_placeholder=stafflines,\n        labels_placeholder=tf.constant(labels))\n    with self.test_session() as sess:\n      # The autoencoder must run successfully with only its vars initialized.\n      # The loss must always be positive.\n      sess.run(classifier.get_autoencoder_initializers())\n      self.assertGreater(sess.run(classifier.autoencoder_loss), 
0)\n\n      # The classifier must run successfully with its vars initialized too.\n      sess.run(classifier.get_classifier_initializers())\n      self.assertGreater(sess.run(classifier.prediction_loss), 0)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/glyphs/note_dots.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Detects dots which are attached to noteheads.\n\nDots are round, solid, smaller than other glyphs, and are typically spaced so\nthat they don't intersect with staff lines. Therefore, we detect them from the\nconnected components. We determine that the component is round-ish and solid if\nthe area (black pixel count) is at least half the area of the bounds of the\ncomponent.\n\nCandidate note dots are components that are round-ish and follow the expected\nsize. 
For each notehead, we look for candidate dots slightly to the right of the\nnote to assign to it.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n\nfrom moonlight.glyphs import geometry\nfrom moonlight.glyphs import glyph_types\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.structure import components\n\nCOMPONENTS = components.ConnectedComponentsColumns\n\n\nclass NoteDots(object):\n\n  def __init__(self, structure):\n    self.dots = _extract_dots(structure)\n\n  def apply(self, page):\n    \"\"\"Detects note dots in the page.\n\n    Dots must be to the right of a notehead.\n\n    Args:\n      page: A `Page` message.\n\n    Returns:\n      The same `Page`, with note dots added in place.\n    \"\"\"\n    for system in page.system:\n      for staff in system.staff:\n        for glyph in staff.glyph:\n          if glyph_types.is_dotted_notehead(glyph):\n            x_min = glyph.x\n            x_max = glyph.x + staff.staffline_distance * 3.\n            y = geometry.glyph_y(staff, glyph)\n            y_min = y - staff.staffline_distance / 2.\n            y_max = y + staff.staffline_distance / 2.\n            dots = self.dots[_is_in_range(x_min, self.dots[:, 0], x_max)\n                             & _is_in_range(y_min, self.dots[:, 1], y_max)]\n            glyph.dot.extend(\n                musicscore_pb2.Point(x=dot[0], y=dot[1]) for dot in dots)\n\n    return page\n\n\ndef _is_in_range(min_value, values, max_value):\n  return np.logical_and(min_value <= values, values <= max_value)\n\n\ndef _extract_dots(structure):\n  \"\"\"Returns candidate note dots.\n\n  Note dots must be connected components which are roundish (the area of the\n  component's bounds is at least half full), and are the expected size.\n\n  Args:\n    structure: A computed `Structure`.\n\n  Returns:\n    A numpy array of shape `(N, 2)`. 
Each entry holds the center `(x, y)` of a\n    candidate note dot.\n  \"\"\"\n  min_height_width = structure.staff_detector.staffline_thickness + 1\n  # TODO(ringw): Are note dots typically smaller in ossia parts?\n  max_height_width = np.median(\n      structure.staff_detector.staffline_distance) * 2 / 3\n  connected_components = structure.connected_components.components\n  width = connected_components[:, COMPONENTS\n                               .X1] - connected_components[:, COMPONENTS.X0]\n  height = connected_components[:, COMPONENTS\n                                .Y1] - connected_components[:, COMPONENTS.Y0]\n  is_full = np.greater_equal(connected_components[:, COMPONENTS.SIZE] * 2,\n                             width * height)\n  candidates = connected_components[\n      is_full\n      & _is_in_range(min_height_width, width, max_height_width)\n      & _is_in_range(min_height_width, height, max_height_width)]\n  # pyformat would make this completely unreadable\n  # pyformat: disable\n  candidate_centers = (\n      np.c_[\n          (candidates[:, COMPONENTS.X0] + candidates[:, COMPONENTS.X1]) / 2,\n          (candidates[:, COMPONENTS.Y0] + candidates[:, COMPONENTS.Y1]) / 2]\n      .astype(int))\n  return candidate_centers\n"
  },
  {
    "path": "moonlight/glyphs/repeated.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Fixes duplicate rests.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.glyphs import glyph_types\n\n\nclass FixRepeatedRests(object):\n\n  def apply(self, page):\n    \"\"\"Remove duplicate rests of the same type.\"\"\"\n    for system in page.system:\n      for staff in system.staff:\n        to_remove = []\n        last_glyph = None\n        for glyph in staff.glyph:\n          # A rest is a duplicate if the immediately preceding glyph is a rest\n          # of the same type, less than one staffline distance away.\n          if (last_glyph and glyph_types.is_rest(glyph) and\n              last_glyph.type == glyph.type and\n              glyph.x - last_glyph.x < staff.staffline_distance):\n            to_remove.append(glyph)\n          last_glyph = glyph\n\n        for glyph in to_remove:\n          staff.glyph.remove(glyph)\n\n    return page\n"
  },
  {
    "path": "moonlight/glyphs/saved_classifier.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Saved patch classifier models for OMR.\n\nThe saved model should accept a 3D tensor of patches\n`(num_patches, patch_height, patch_width)`, and return a vector of class ids of\nlength `num_patches`. Horizontal patches are extracted from each vertical\nposition on each staff where we expect to find glyphs, and any arbitrary model\ncan be loaded here.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow.contrib import graph_editor as contrib_graph_editor\nfrom tensorflow.contrib import util as contrib_util\nfrom tensorflow_estimator.python.estimator.canned import prediction_keys\n\nfrom moonlight.glyphs import convolutional\nfrom moonlight.staves import staffline_extractor\nfrom moonlight.util import patches\n\n_SIGNATURE_KEYS = [\n    # Created if the ServingInputReceiver has 'patch' in\n    # receiver_tensors_alternatives.\n    'patch:predict',\n    # Seems to only be created if receiver_tensors_alternatives is not set.\n    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY,\n]\n\n_RUN_MIN_LENGTH_CONSTANT_NAME = 'runtime_params/run_min_length:0'\n\n\nclass SavedConvolutional1DClassifier(\n    convolutional.Convolutional1DGlyphClassifier):\n  \"\"\"Holds a saved glyph classifier model.\n\n  To use a saved glyph classifier with 
`OMREngine`, see the\n  `saved_classifier_fn` wrapper.\n  \"\"\"\n\n  def __init__(self,\n               structure,\n               saved_model_dir,\n               num_sections=19,\n               *args,\n               **kwargs):\n    \"\"\"Loads a saved classifier model for the OMR engine.\n\n    Args:\n      structure: A `structure.Structure`.\n      saved_model_dir: Path to the TF saved_model directory to load.\n      num_sections: Number of vertical positions of patches to extract, centered\n        on the middle staff line.\n      *args: Passed through to `Convolutional1DGlyphClassifier`.\n      **kwargs: Passed through to `Convolutional1DGlyphClassifier`.\n\n    Raises:\n      ValueError: If the saved model input could not be interpreted as a 3D\n        array with the patch size.\n    \"\"\"\n    super(SavedConvolutional1DClassifier, self).__init__(*args, **kwargs)\n    sess = tf.get_default_session()\n    graph_def = tf.saved_model.loader.load(\n        sess, [tf.saved_model.tag_constants.SERVING], saved_model_dir)\n\n    signature = None\n    for key in _SIGNATURE_KEYS:\n      if key in graph_def.signature_def:\n        signature = graph_def.signature_def[key]\n        break\n    else:\n      # for/else is only executed if the loop completes without breaking.\n      raise ValueError('One of the following signatures must be present: %s' %\n                       _SIGNATURE_KEYS)\n\n    input_info = signature.inputs['input']\n    if not (len(input_info.tensor_shape.dim) == 3 and\n            input_info.tensor_shape.dim[1].size > 0 and\n            input_info.tensor_shape.dim[2].size > 0):\n      raise ValueError('Invalid patches input: ' + str(input_info))\n    patch_height = input_info.tensor_shape.dim[1].size\n    patch_width = input_info.tensor_shape.dim[2].size\n\n    with tf.name_scope('saved_classifier'):\n      self.staffline_extractor = staffline_extractor.StafflineExtractor(\n          structure.staff_remover.remove_staves,\n          
structure.staff_detector,\n          num_sections=num_sections,\n          target_height=patch_height)\n      stafflines = self.staffline_extractor.extract_staves()\n      num_staves = tf.shape(stafflines)[0]\n      num_sections = tf.shape(stafflines)[1]\n      staffline_patches = patches.patches_1d(stafflines, patch_width)\n      staffline_patches_shape = tf.shape(staffline_patches)\n      patches_per_position = staffline_patches_shape[2]\n      flat_patches = tf.reshape(staffline_patches, [\n          num_staves * num_sections * patches_per_position, patch_height,\n          patch_width\n      ])\n\n      # Feed in the flat extracted patches as the classifier input.\n      predictions_name = signature.outputs[\n          prediction_keys.PredictionKeys.CLASS_IDS].name\n      predictions = contrib_graph_editor.graph_replace(\n          sess.graph.get_tensor_by_name(predictions_name), {\n              sess.graph.get_tensor_by_name(signature.inputs['input'].name):\n                  flat_patches\n          })\n      # Reshape to the original patches shape.\n      predictions = tf.reshape(predictions, staffline_patches_shape[:3])\n\n      # Pad the output. We take only the valid patches, but we want to shift all\n      # of the predictions so that a patch at index i on the x-axis is centered\n      # on column i. 
This determines the x coordinates of the glyphs.\n      width = tf.shape(stafflines)[-1]\n      predictions_width = tf.shape(predictions)[-1]\n      pad_before = (width - predictions_width) // 2\n      pad_shape_before = tf.concat([staffline_patches_shape[:2], [pad_before]],\n                                   axis=0)\n      pad_shape_after = tf.concat([\n          staffline_patches_shape[:2], [width - predictions_width - pad_before]\n      ],\n                                  axis=0)\n      self.output = tf.concat(\n          [\n              # NONE has value 1.\n              tf.ones(pad_shape_before, tf.int64),\n              tf.to_int64(predictions),\n              tf.ones(pad_shape_after, tf.int64),\n          ],\n          axis=-1)\n\n    # run_min_length can be set on the saved model to tweak its behavior, but\n    # should be overridden by the keyword argument.\n    if 'run_min_length' not in kwargs:\n      try:\n        # Try to read the run min length from the saved model. This is tweaked\n        # on a per-model basis.\n        run_min_length_t = sess.graph.get_tensor_by_name(\n            _RUN_MIN_LENGTH_CONSTANT_NAME)\n        run_min_length = contrib_util.constant_value(run_min_length_t)\n        # Implicit comparison is invalid on a NumPy array.\n        # pylint: disable=g-explicit-bool-comparison\n        if run_min_length is None or run_min_length.shape != ():\n          raise ValueError('Bad run_min_length: {}'.format(run_min_length))\n        # Overwrite the property after the Convolutional1DGlyphClassifier\n        # constructor completes.\n        self.run_min_length = int(run_min_length)\n      except KeyError:\n        pass  # No run_min_length tensor in the saved model.\n\n  @property\n  def staffline_predictions(self):\n    return self.output\n"
  },
  {
    "path": "moonlight/glyphs/saved_classifier_fn.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Provides the default included OMR classifier.\n\nThis assumes a `saved_model.pb` file with no variables in the model (which would\nbe separate files).\n\nThis is an internal version that can be run in a .PAR file. This module will not\nbe shared between Piper and Git, which will have a simpler open source version.\nThe open source version is just a wrapper around the\n`SavedConvolutional1DClassifier` ctor, which assumes that the saved model is a\nreal directory.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport tensorflow as tf\n\nfrom moonlight.glyphs import saved_classifier\n\n_SAVED_MODEL_PATH = '../data/glyphs_nn_model_20180808'\n\n\ndef build_classifier_fn(saved_model=None):\n  \"\"\"Returns a glyph classifier fn for a saved model.\n\n  The result can be given to `OMREngine` to configure the saved model to use.\n\n  Args:\n    saved_model: Saved model directory. 
If None, uses the default neural network\n        saved model included with Magenta.\n\n  Returns:\n    A callable that accepts a `Structure` and returns a `BaseGlyphClassifier`.\n  \"\"\"\n  saved_model = (\n      saved_model or\n      os.path.join(tf.resource_loader.get_data_files_path(), _SAVED_MODEL_PATH))\n  ctor = saved_classifier.SavedConvolutional1DClassifier\n  return lambda structure: ctor(structure, saved_model)\n"
  },
  {
    "path": "moonlight/glyphs/saved_classifier_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests running OMR with a dummy saved model.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport tempfile\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight import image\nfrom moonlight import structure\nfrom moonlight.glyphs import convolutional\nfrom moonlight.glyphs import saved_classifier\nfrom moonlight.protobuf import musicscore_pb2\n\n\nclass SavedClassifierTest(tf.test.TestCase):\n\n  def testSaveAndLoadDummyClassifier(self):\n    with tempfile.TemporaryDirectory() as base_dir:\n      export_dir = os.path.join(base_dir, 'export')\n      with self.test_session() as sess:\n        patches = tf.placeholder(tf.float32, shape=(None, 18, 15))\n        num_patches = tf.shape(patches)[0]\n        # Glyph.NONE is number 1.\n        class_ids = tf.ones([num_patches], tf.int32)\n        signature = tf.saved_model.signature_def_utils.build_signature_def(\n            # pyformat: disable\n            {'input': tf.saved_model.utils.build_tensor_info(patches)},\n            {'class_ids': tf.saved_model.utils.build_tensor_info(class_ids)},\n            'serve')\n        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)\n        builder.add_meta_graph_and_variables(\n            sess, ['serve'],\n            signature_def_map={\n                
tf.saved_model.signature_constants\n                .DEFAULT_SERVING_SIGNATURE_DEF_KEY:\n                    signature\n            })\n        builder.save()\n      tf.reset_default_graph()\n      # Load the saved model.\n      with self.test_session() as sess:\n        filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                                '../testdata/IMSLP00747-000.png')\n        page = image.decode_music_score_png(tf.read_file(filename))\n        clazz = saved_classifier.SavedConvolutional1DClassifier(\n            structure.create_structure(page), export_dir)\n        # Run min length should be the default.\n        self.assertEqual(clazz.run_min_length,\n                         convolutional.DEFAULT_RUN_MIN_LENGTH)\n        predictions = clazz.staffline_predictions.eval()\n        self.assertEqual(predictions.ndim, 3)  # Staff, staff position, x\n        self.assertGreater(predictions.size, 0)\n        # Predictions are all musicscore_pb2.Glyph.NONE.\n        self.assertAllEqual(\n            predictions,\n            np.full(predictions.shape, musicscore_pb2.Glyph.NONE, np.int32))\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/glyphs/testing.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Testing utilities for glyph classification.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.glyphs import convolutional\nfrom moonlight.protobuf import musicscore_pb2\n\n# Sample glyph predictions.\n# Shape (num_staves, num_stafflines, width).\nPREDICTIONS = np.asarray(\n    [[[1, 1, 1, 1, 2, 1],\n      [3, 1, 5, 1, 1, 1],\n      [1, 4, 1, 1, 1, 1]],\n     [[1, 1, 3, 1, 1, 1],\n      [1, 1, 5, 1, 1, 1],\n      [1, 1, 1, 1, 3, 5]]])  # pyformat: disable\n# Page corresponding to the glyphs in PREDICTIONS.\nGLYPHS_PAGE = musicscore_pb2.Page(system=[\n    musicscore_pb2.StaffSystem(staff=[\n        musicscore_pb2.Staff(glyph=[\n            musicscore_pb2.Glyph(x=0, y_position=0, type=3),\n            musicscore_pb2.Glyph(x=1, y_position=-1, type=4),\n            musicscore_pb2.Glyph(x=2, y_position=0, type=5),\n            musicscore_pb2.Glyph(x=4, y_position=1, type=2),\n        ]),\n        musicscore_pb2.Staff(glyph=[\n            musicscore_pb2.Glyph(x=2, y_position=1, type=3),\n            musicscore_pb2.Glyph(x=2, y_position=0, type=5),\n            musicscore_pb2.Glyph(x=4, y_position=-1, type=3),\n            musicscore_pb2.Glyph(x=5, y_position=-1, type=5),\n        ]),\n    ]),\n])\n\n\nclass 
DummyGlyphClassifier(convolutional.Convolutional1DGlyphClassifier):\n  \"\"\"A 1D convolutional glyph classifier with constant predictions.\n\n  The predictions have shape (num_staves, num_stafflines, width).\n  \"\"\"\n\n  def __init__(self, predictions):\n    super(DummyGlyphClassifier, self).__init__(run_min_length=1)\n    self.predictions = predictions\n\n  @property\n  def staffline_predictions(self):\n    return tf.constant(self.predictions)\n"
  },
  {
    "path": "moonlight/image.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility for reading music score images.\n\nReads grayscale images, and reverses the values if the image is detected to be\ninverted.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\n\ndef decode_music_score_png(contents):\n  \"\"\"Reads a music score image.\n\n  This reads a binary or grayscale image and takes the only channel. If the\n  image is detected to be inverted, the values will be flipped so that the\n  white background has value 255 and the black content has value 0.\n\n  Args:\n    contents: PNG data in a scalar string tensor.\n\n  Returns:\n    The music score image. 
A two-dimensional tensor (HW) of type uint8.\n  \"\"\"\n  with tf.name_scope(\"decode_music_score_png\"):\n    contents = tf.convert_to_tensor(contents, name=\"contents\")\n    image_t = tf.image.decode_png(contents, channels=1, dtype=tf.uint8)[:, :, 0]\n\n    def inverted_image():\n      # Sub op is not defined for uint8.\n      int32_image = tf.cast(image_t, tf.int32)\n      return tf.cast(255 - int32_image, tf.uint8)\n\n    threshold = 127\n    num_pixels = tf.shape(image_t)[0] * tf.shape(image_t)[1]\n    majority_dark = tf.greater(\n        tf.reduce_sum(tf.cast(image_t < threshold, tf.int32)), num_pixels // 2)\n    return tf.cond(majority_dark, inverted_image, lambda: image_t)\n"
  },
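The majority-dark inversion check in `decode_music_score_png` can be sketched without TensorFlow. This NumPy version (`normalize_polarity` is a hypothetical name, not part of the repo) applies the same rule: count pixels below the 127 threshold and flip the image when they outnumber half the pixels, so the output always has a white (255) background.

```python
import numpy as np

def normalize_polarity(image):
    """Flip a grayscale page if most pixels are dark (white-on-black).

    Mirrors the majority-dark condition in decode_music_score_png, using
    NumPy in place of the tf.cond graph logic.
    """
    image = np.asarray(image, dtype=np.uint8)
    threshold = 127
    majority_dark = np.sum(image < threshold) > image.size // 2
    # Subtract in a wider type, as the TF code does, then cast back.
    return (255 - image.astype(np.int32)).astype(np.uint8) if majority_dark else image

# A mostly-black page with one white pixel is flipped back to black-on-white.
page = np.zeros((4, 4), dtype=np.uint8)
page[0, 0] = 255
print(normalize_polarity(page)[0, 0])  # 0
```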
  {
    "path": "moonlight/models/base/BUILD",
    "content": "# Description:\n# Common logic for all model training.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"batches\",\n    srcs = [\"batches.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # absl dep\n    ],\n)\n\npy_test(\n    name = \"batches_test\",\n    srcs = [\"batches_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":batches\",\n        # disable_tf2\n        # absl dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"glyph_patches\",\n    srcs = [\"glyph_patches.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":batches\",\n        \":label_weights\",\n        # absl dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/util:memoize\",\n        # tensorflow dep\n        # tensorflow.contrib.estimator py dep\n        # tensorflow.contrib.image py dep\n    ],\n)\n\npy_test(\n    name = \"glyph_patches_test\",\n    srcs = [\"glyph_patches_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":glyph_patches\",\n        # disable_tf2\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"hyperparameters\",\n    srcs = [\"hyperparameters.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # six dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"hyperparameters_test\",\n    srcs = [\"hyperparameters_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":hyperparameters\",\n        # disable_tf2\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"label_weights\",\n    srcs = [\"label_weights.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # absl dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # six dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"label_weights_test\",\n    srcs = [\"label_weights_test.py\"],\n    
srcs_version = \"PY2AND3\",\n    deps = [\n        \":label_weights\",\n        # disable_tf2\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/models/base/batches.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility for batching and limiting the dataset size according to flags.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import flags\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_integer(\n    'dataset_shuffle_buffer_size', 10000,\n    'Shuffles this many entries in the dataset. 0 indicates no'\n    ' shuffling.')\nflags.DEFINE_integer(\n    'dataset_limit_size', None,\n    'Only take this many entries in the dataset (which may be repeated for some'\n    ' number of training steps).')\nflags.DEFINE_integer('dataset_batch_size', 32, 'Resulting batch size.')\n\n\ndef get_batched_tensor(dataset):\n  \"\"\"Gets the tensor representing a single batch from a `tf.data.Dataset`.\n\n  Batch and epoch options are passed on the command line.\n\n  Args:\n    dataset: A `tf.data.Dataset` containing single examples.\n\n  Returns:\n    A dict of tensors, which contains the concatenated features from each\n        example in a single batch. 
Each time the tensor is evaluated, it will\n        produce the next batch.\n  \"\"\"\n  if FLAGS.dataset_shuffle_buffer_size:\n    dataset = dataset.shuffle(buffer_size=FLAGS.dataset_shuffle_buffer_size)\n  if FLAGS.dataset_limit_size:\n    dataset = dataset.take(FLAGS.dataset_limit_size)\n  # Run through batches for multiple epochs, until the max num of train steps is\n  # exhausted.\n  dataset = dataset.batch(FLAGS.dataset_batch_size).repeat()\n  iterator = dataset.make_one_shot_iterator()\n  return iterator.get_next()\n"
  },
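The shuffle/take/batch/repeat pipeline that `get_batched_tensor` builds can be illustrated in plain Python. This generator (`batched` is a hypothetical name) stands in for the `--dataset_shuffle_buffer_size`, `--dataset_limit_size`, and `--dataset_batch_size` flags, and simplifies `tf.data`'s buffered shuffle to a one-shot shuffle; it is a sketch of the semantics, not the TF implementation.

```python
import random
from itertools import islice

def batched(examples, batch_size=32, shuffle_buffer=0, limit=None, seed=0):
    """Yields batches forever, mimicking shuffle().take().batch().repeat()."""
    data = list(examples)
    if shuffle_buffer:
        # Simplification: a full shuffle instead of a bounded shuffle buffer.
        random.Random(seed).shuffle(data)
    if limit:
        data = data[:limit]
    while True:  # repeat() until the training loop stops pulling batches
        for i in range(0, len(data), batch_size):
            yield data[i:i + batch_size]

# limit=8 takes the first 8 examples; after two batches the dataset repeats.
print(list(islice(batched(range(10), batch_size=4, limit=8), 3)))
# [[0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 2, 3]]
```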
  {
    "path": "moonlight/models/base/batches_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for batches.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import flags\nfrom moonlight.models.base import batches\nimport numpy as np\nimport tensorflow as tf\n\n\nclass BatchesTest(tf.test.TestCase):\n\n  def testBatching(self):\n    all_as = np.random.rand(1000, 2, 3)\n    all_bs = np.random.randint(0, 100, [1000], np.int32)\n    all_labels = np.random.randint(0, 5, [1000], np.int32)\n    random_dataset = tf.data.Dataset.from_tensor_slices(({\n        'a': tf.constant(all_as),\n        'b': tf.constant(all_bs)\n    }, tf.constant(all_labels)))\n\n    flags.FLAGS.dataset_shuffle_buffer_size = 0\n    batch_tensors = batches.get_batched_tensor(random_dataset)\n    with self.test_session() as sess:\n      batch = sess.run(batch_tensors)\n\n      # First batch.\n      self.assertEqual(len(batch), 2)\n      self.assertEqual(sorted(batch[0].keys()), ['a', 'b'])\n      batch_size = flags.FLAGS.dataset_batch_size\n      self.assertAllEqual(batch[0]['a'], all_as[:batch_size])\n      self.assertAllEqual(batch[0]['b'], all_bs[:batch_size])\n      self.assertAllEqual(batch[1], all_labels[:batch_size])\n\n      batch = sess.run(batch_tensors)\n\n      # Second batch.\n      self.assertEqual(len(batch), 2)\n      self.assertEqual(sorted(batch[0].keys()), ['a', 'b'])\n      batch_size 
= flags.FLAGS.dataset_batch_size\n      self.assertAllEqual(batch[0]['a'], all_as[batch_size:batch_size * 2])\n      self.assertAllEqual(batch[0]['b'], all_bs[batch_size:batch_size * 2])\n      self.assertAllEqual(batch[1], all_labels[batch_size:batch_size * 2])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/models/base/glyph_patches.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Base patch-based glyph model.\n\nFor example, this accepts the staff patch k-means centroids emitted by\nstaffline_patches_kmeans_pipeline and labeled by kmeans_labeler.\n\nThis defines the input and signature of the model, and allows any type of\nmulti-class classifier using the normalized patches as input.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport math\n\nfrom absl import flags\nfrom moonlight.models.base import batches\nfrom moonlight.models.base import label_weights\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.util import memoize\nimport tensorflow as tf\nfrom tensorflow.contrib import estimator as contrib_estimator\nfrom tensorflow.contrib import image as contrib_image\nfrom tensorflow.python.lib.io import file_io\nfrom tensorflow.python.lib.io import tf_record\n\nWEIGHT_COLUMN_NAME = 'weight'\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('train_input_patches', None,\n                    'Glob of labeled patch TFRecords for training')\nflags.DEFINE_string('eval_input_patches', None,\n                    'Glob of labeled patch TFRecords for eval')\nflags.DEFINE_string('model_dir', None, 'Output trained model directory')\nflags.DEFINE_boolean(\n    'use_included_label_weight', False,\n    'Whether to multiply a \"label_weight\" feature included in the 
example by'\n    ' the weight determined by the \"label\" value.')\nflags.DEFINE_float(\n    'augmentation_x_shift_probability', 0.5,\n    'Probability of shifting the patch left or right by one pixel. The edge is'\n    ' filled using the adjacent column. It is equally likely that the patch is'\n    ' shifted left or right.')\nflags.DEFINE_float(\n    'augmentation_max_rotation_degrees', 2.,\n    'Max rotation of the patch, in degrees. The rotation is selected uniformly'\n    ' at random from the range +- this value. A value of 0 implies no rotation.')\nflags.DEFINE_integer('eval_throttle_secs', 60,\n                     'Evaluate at most once per this interval, in seconds.')\nflags.DEFINE_integer(\n    'train_max_steps', 100000,\n    'Max steps for training. If 0, will train until the process is'\n    ' interrupted.')\nflags.DEFINE_integer('eval_steps', 500, 'Num steps to evaluate the model.')\nflags.DEFINE_integer(\n    'exports_to_keep', 10,\n    'Keep the last N saved models (exported on each eval) before deleting'\n    ' previous exports.')\n\nflags.DEFINE_multi_string('classes_for_metrics', [\n    'NONE',\n    'CLEF_TREBLE',\n    'CLEF_BASS',\n    'NOTEHEAD_FILLED',\n    'NOTEHEAD_EMPTY',\n    'NOTEHEAD_WHOLE',\n    'SHARP',\n    'FLAT',\n    'NATURAL',\n], 'Generate accuracy metrics for these class names.')\n\n\n@memoize.MemoizedFunction\ndef read_patch_dimensions():\n  \"\"\"Reads the dimensions of the input patches from disk.\n\n  Parses the first example in the training set, which must have \"height\" and\n  \"width\" features.\n\n  Returns:\n    Tuple of (height, width) read from disk, using the glob passed to\n    --train_input_patches.\n  \"\"\"\n  for filename in file_io.get_matching_files(FLAGS.train_input_patches):\n    # If one matching file is empty, go on to the next file.\n    for record in tf_record.tf_record_iterator(filename):\n      example = tf.train.Example.FromString(record)\n      # Convert long (int64) to int, necessary for use in feature 
columns in\n      # Python 2.\n      patch_height = int(example.features.feature['height'].int64_list.value[0])\n      patch_width = int(example.features.feature['width'].int64_list.value[0])\n      return patch_height, patch_width\n\n\ndef input_fn(input_patches):\n  \"\"\"Defines the estimator input function.\n\n  Args:\n    input_patches: The input patches TFRecords pattern.\n\n  Returns:\n    A callable. Each invocation returns a tuple containing:\n    * A dict with a single key 'patch', and the patch tensor as a value.\n    * A scalar tensor with the patch label, as an integer.\n  \"\"\"\n  patch_height, patch_width = read_patch_dimensions()\n  dataset = tf.data.TFRecordDataset(file_io.get_matching_files(input_patches))\n\n  def parser(record):\n    \"\"\"Dataset parser function.\n\n    Args:\n      record: A single serialized Example proto tensor.\n\n    Returns:\n      A tuple of:\n      * A dict of features ('patch' and 'weight')\n      * A label tensor (int64 scalar).\n    \"\"\"\n    feature_types = {\n        'patch': tf.FixedLenFeature((patch_height, patch_width), tf.float32),\n        'label': tf.FixedLenFeature((), tf.int64),\n    }\n    if FLAGS.use_included_label_weight:\n      feature_types['label_weight'] = tf.FixedLenFeature((), tf.float32)\n    features = tf.parse_single_example(record, feature_types)\n\n    label = features['label']\n    weight = label_weights.weights_from_labels(label)\n    if FLAGS.use_included_label_weight:\n      # Both operands must be the same type (float32).\n      weight = tf.to_float(weight) * tf.to_float(features['label_weight'])\n    patch = _augment(features['patch'])\n    return {'patch': patch, WEIGHT_COLUMN_NAME: weight}, label\n\n  return batches.get_batched_tensor(dataset.map(parser))\n\n\ndef _augment(patch):\n  \"\"\"Performs multiple augmentations on the patch, helping to generalize.\"\"\"\n  return _augment_rotation(_augment_shift(patch))\n\n\ndef _augment_shift(patch):\n  \"\"\"Augments the patch by 
possibly shifting it 1 pixel horizontally.\"\"\"\n  with tf.name_scope('augment_shift'):\n    rand = tf.random_uniform(())\n\n    def shift_left():\n      return _shift_left(patch)\n\n    def shift_right():\n      return _shift_right(patch)\n\n    def identity():\n      return patch\n\n    shift_prob = min(1., FLAGS.augmentation_x_shift_probability)\n    return tf.cond(rand < shift_prob / 2, shift_left,\n                   lambda: tf.cond(rand < shift_prob, shift_right, identity))\n\n\ndef _shift_left(patch):\n  patch = tf.convert_to_tensor(patch)\n  return tf.concat([patch[:, 1:], patch[:, -1:]], axis=1)\n\n\ndef _shift_right(patch):\n  patch = tf.convert_to_tensor(patch)\n  return tf.concat([patch[:, :1], patch[:, :-1]], axis=1)\n\n\ndef _augment_rotation(patch):\n  \"\"\"Augments the patch by rotating it by a small amount.\"\"\"\n  max_rotation_radians = math.radians(FLAGS.augmentation_max_rotation_degrees)\n  rotation = tf.random_uniform((),\n                               minval=-max_rotation_radians,\n                               maxval=max_rotation_radians)\n  # Background is white (1.0) but tf.contrib.image.rotate currently always fills\n  # the edges with black (0). Invert the patch before rotating.\n  return 1. - contrib_image.rotate(\n      1. - patch, rotation, interpolation='BILINEAR')\n\n\ndef serving_fn():\n  \"\"\"Returns the ServingInputReceiver for the exported model.\n\n  Returns:\n    A ServingInputReceiver object which may be passed to\n    `Estimator.export_savedmodel`. 
A model saved using this receiver may be used\n    for running OMR.\n  \"\"\"\n  examples = tf.placeholder(tf.string, shape=[None])\n  patch_height, patch_width = read_patch_dimensions()\n  parsed = tf.parse_example(examples, {\n      'patch': tf.FixedLenFeature((patch_height, patch_width), tf.float32),\n  })\n  return tf.estimator.export.ServingInputReceiver(\n      features={'patch': parsed['patch']},\n      receiver_tensors=parsed['patch'],\n      receiver_tensors_alternatives={\n          'example': examples,\n          'patch': parsed['patch']\n      })\n\n\ndef create_patch_feature_column():\n  return tf.feature_column.numeric_column(\n      'patch', shape=read_patch_dimensions())\n\n\ndef multiclass_binary_metric(class_number, binary_metric, labels, predictions):\n  \"\"\"Wraps a binary metric for detecting a certain class (glyph type).\"\"\"\n  # Convert multiclass (integer) labels and predictions to booleans (for\n  # whether or not they are equal to the given class).\n  label_positive = tf.equal(labels, class_number)\n  predicted_positive = tf.equal(predictions['class_ids'], class_number)\n  return binary_metric(labels=label_positive, predictions=predicted_positive)\n\n\ndef metrics_fn(features, labels, predictions):\n  \"\"\"Metrics to be computed on every evaluation run, viewable in TensorBoard.\n\n  This function has the expected signature of a callable to be passed to\n  `tf.contrib.estimator.add_metrics`.\n\n  Args:\n    features: Dict of feature tensors.\n    labels: A tensor of example labels (ints).\n    predictions: Dict of prediction types. 
Has an entry \"class_ids\" which is\n      comparable to the ground truth in `labels`.\n\n  Returns:\n    A dict from metric name to TF metric.\n  \"\"\"\n  del features  # Unused.\n  metrics = {\n      'mean_per_class_accuracy':\n          tf.metrics.mean_per_class_accuracy(\n              labels=labels,\n              predictions=predictions['class_ids'],\n              num_classes=len(musicscore_pb2.Glyph.Type.keys()),\n          ),\n  }\n  for class_name in FLAGS.classes_for_metrics:\n    class_number = musicscore_pb2.Glyph.Type.Value(class_name)\n    metrics['class/{}_precision'.format(class_name)] = multiclass_binary_metric(\n        class_number, tf.metrics.precision, labels, predictions)\n    metrics['class/{}_recall'.format(class_name)] = multiclass_binary_metric(\n        class_number, tf.metrics.recall, labels, predictions)\n  return metrics\n\n\ndef train_and_evaluate(estimator):\n  tf.estimator.train_and_evaluate(\n      contrib_estimator.add_metrics(estimator, metrics_fn),\n      tf.estimator.TrainSpec(\n          input_fn=lambda: input_fn(FLAGS.train_input_patches),\n          max_steps=FLAGS.train_max_steps),\n      tf.estimator.EvalSpec(\n          input_fn=lambda: input_fn(FLAGS.eval_input_patches),\n          start_delay_secs=0,\n          throttle_secs=FLAGS.eval_throttle_secs,\n          steps=FLAGS.eval_steps,\n          exporters=[\n              tf.estimator.LatestExporter(\n                  'exporter', serving_fn,\n                  exports_to_keep=FLAGS.exports_to_keep),\n          ]))\n"
  },
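The one-pixel shift augmentation in `glyph_patches` is easiest to see in NumPy. These sketches reproduce the column arithmetic of `_shift_left` and `_shift_right` (drop one edge column, duplicate the opposite edge) outside the TF graph; the outputs match the expectations in `glyph_patches_test.py`.

```python
import numpy as np

def shift_left(patch):
    # Drop the leftmost column, repeat the rightmost one
    # (as glyph_patches._shift_left does with tf.concat).
    patch = np.asarray(patch)
    return np.concatenate([patch[:, 1:], patch[:, -1:]], axis=1)

def shift_right(patch):
    # Repeat the leftmost column, drop the rightmost one
    # (as glyph_patches._shift_right does with tf.concat).
    patch = np.asarray(patch)
    return np.concatenate([patch[:, :1], patch[:, :-1]], axis=1)

patch = np.array([[0, 1, 2, 3],
                  [4, 5, 6, 7]])
print(shift_left(patch).tolist())   # [[1, 2, 3, 3], [5, 6, 7, 7]]
print(shift_right(patch).tolist())  # [[0, 0, 1, 2], [4, 4, 5, 6]]
```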
  {
    "path": "moonlight/models/base/glyph_patches_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for glyph_patches.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tempfile\n\nfrom absl import flags\nfrom moonlight.models.base import glyph_patches\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.python.lib.io import tf_record\n\n\nclass GlyphPatchesTest(tf.test.TestCase):\n\n  def testInputFn(self):\n    with tempfile.NamedTemporaryFile() as records_file:\n      with tf_record.TFRecordWriter(records_file.name) as records_writer:\n        flags.FLAGS.augmentation_x_shift_probability = 0\n        flags.FLAGS.augmentation_max_rotation_degrees = 0\n        example = tf.train.Example()\n        height = 5\n        width = 3\n        example.features.feature['height'].int64_list.value.append(height)\n        example.features.feature['width'].int64_list.value.append(width)\n        example.features.feature['patch'].float_list.value.extend(\n            range(height * width))\n        label = 1\n        example.features.feature['label'].int64_list.value.append(label)\n        for _ in range(3):\n          records_writer.write(example.SerializeToString())\n\n      flags.FLAGS.train_input_patches = records_file.name\n      batch_tensors = glyph_patches.input_fn(records_file.name)\n\n      with self.test_session() as sess:\n        batch = sess.run(batch_tensors)\n\n    
    self.assertAllEqual(\n            batch[0]['patch'],\n            np.arange(height * width).reshape(\n                (1, height, width)).repeat(3, axis=0))\n        self.assertAllEqual(batch[1], [label, label, label])\n\n  def testShiftLeft(self):\n    with self.test_session():\n      self.assertAllEqual(\n          # pyformat: disable\n          glyph_patches._shift_left([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10,\n                                                                  11]]).eval(),\n          [[1, 2, 3, 3], [5, 6, 7, 7], [9, 10, 11, 11]])\n\n  def testShiftRight(self):\n    with self.test_session():\n      self.assertAllEqual(\n          # pyformat: disable\n          glyph_patches._shift_right([[0, 1, 2, 3], [4, 5, 6, 7],\n                                      [8, 9, 10, 11]]).eval(),\n          [[0, 0, 1, 2], [4, 4, 5, 6], [8, 8, 9, 10]])\n\n  def testMulticlassBinaryMetric(self):\n    # pyformat: disable\n    # pylint: disable=bad-whitespace\n    labels =                     tf.constant([1, 1, 3, 2, 2, 2, 2])\n    predictions = dict(class_ids=tf.constant([1, 3, 2, 2, 4, 3, 2]))\n    _, precision_1 = glyph_patches.multiclass_binary_metric(\n        1, tf.metrics.precision, labels, predictions)\n    _, recall_1 = glyph_patches.multiclass_binary_metric(\n        1, tf.metrics.recall, labels, predictions)\n    _, precision_2 = glyph_patches.multiclass_binary_metric(\n        2, tf.metrics.precision, labels, predictions)\n    _, recall_2 = glyph_patches.multiclass_binary_metric(\n        2, tf.metrics.recall, labels, predictions)\n    _, precision_3 = glyph_patches.multiclass_binary_metric(\n        3, tf.metrics.precision, labels, predictions)\n    _, recall_3 = glyph_patches.multiclass_binary_metric(\n        3, tf.metrics.recall, labels, predictions)\n\n    with self.test_session() as sess:\n      sess.run(tf.local_variables_initializer())\n      # For class 1: 1 true positive and no false positives\n      self.assertEqual(1.0, precision_1.eval())\n  
    # For class 1: 1 true positive and 1 false negative\n      self.assertEqual(0.5, recall_1.eval())\n      # For class 2: 2 true positives and 1 false positive\n      self.assertAlmostEqual(2 / 3, precision_2.eval(), places=5)\n      # For class 2: 2 true positives and 2 false negatives\n      self.assertEqual(0.5, recall_2.eval())\n      # For class 3: No true positives\n      self.assertEqual(0, precision_3.eval())\n      self.assertEqual(0, recall_3.eval())\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
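The one-vs-rest reduction that `multiclass_binary_metric` performs (compare labels and predicted class ids against a single class number, then feed the booleans to a binary metric) can be computed by hand. This sketch uses plain Python counts in place of `tf.metrics.precision`/`tf.metrics.recall`, on the same label/prediction vectors as `testMulticlassBinaryMetric`.

```python
def per_class_precision_recall(labels, predictions, class_id):
    """One-vs-rest precision and recall for a single glyph class."""
    tp = sum(1 for l, p in zip(labels, predictions) if l == class_id and p == class_id)
    fp = sum(1 for l, p in zip(labels, predictions) if l != class_id and p == class_id)
    fn = sum(1 for l, p in zip(labels, predictions) if l == class_id and p != class_id)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels      = [1, 1, 3, 2, 2, 2, 2]
predictions = [1, 3, 2, 2, 4, 3, 2]
# Class 2: 2 true positives, 1 false positive, 2 false negatives.
print(per_class_precision_recall(labels, predictions, 2))  # precision 2/3, recall 1/2
```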
  {
    "path": "moonlight/models/base/hyperparameters.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Wrapper which saves hyperparameters to a model collection.\n\nThe hyperparameters will be carefully tuned, and should be included in the\nexported saved model to ensure reproducibility.\n\"\"\"\n# TODO(ringw): Try to get a standardized mechanism for saving params into\n# TensorFlow.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport six\nimport tensorflow as tf\n\n\ndef estimator_with_saved_params(estimator, params):\n  \"\"\"Wraps an estimator with hyperparameters to be stored in the saved model.\n\n  Args:\n    estimator: A `tf.estimator.Estimator` instance.\n    params: A dict of string to constant value (string, number, or NumPy array).\n\n  Returns:\n    A wrapped `tf.estimator.Estimator`. The model of the new estimator extends\n    the wrapped model, and also includes the hyperparameters in a collection\n    where they can be inspected later.\n\n  Raises:\n    ValueError: If a hyperparameter value is None.\n  \"\"\"\n  # Validate parameters immediately, not in the model_fn.\n  for name, value in six.iteritems(params):\n    if value is None:\n      raise ValueError('Hyperparameter cannot be None: {}'.format(name))\n\n  # Estimator is mostly just a wrapper around the model_fn callable. 
Our wrapper\n  # just needs a callable that adds all the params to a collection, and then\n  # invokes the original callable.\n  def model_fn(features, labels, mode, params, config):\n    \"\"\"Wraps the delegate estimator model_fn.\n\n    Args:\n      features: A dict of string to Tensor. Features to classify.\n      labels: A Tensor with example labels, or None for prediction.\n      mode: The mode string for the estimator.\n      params: Passed through the newly constructed Estimator. These should be\n        identical to the outer function's params.\n      config: A TensorFlow estimator config object.\n\n    Returns:\n      An object holding the predictions, optimizer, etc.\n    \"\"\"\n    with tf.name_scope('params'):\n      for name, value in six.iteritems(params):\n        tf.add_to_collection('params', tf.constant(name=name, value=value))\n\n    # The Estimator model_fn property always returns a wrapped \"public\"\n    # model_fn. The public wrapper doesn't take \"params\", and passes the params\n    # from the Estimator constructor into the internal model_fn. Therefore, it\n    # only matters that we pass the params to the Estimator below.\n    return estimator.model_fn(features, labels, mode, config)\n\n  return tf.estimator.Estimator(\n      model_fn, model_dir=estimator.model_dir, params=params)\n"
  },
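The wrapper pattern in `estimator_with_saved_params` (validate hyperparameters eagerly, then delegate to the wrapped model while recording the params) can be shown TF-free. In this sketch, `with_saved_params` is a hypothetical stand-in: the `recorded` dict plays the role of the `'params'` graph collection, and any callable plays the role of the estimator's `model_fn`.

```python
def with_saved_params(model_fn, params):
    """Validates params up front, then returns a wrapper that records them.

    Mirrors estimator_with_saved_params: None values fail immediately
    (not inside the model_fn), and params are stored as a side effect of
    building/calling the model.
    """
    for name, value in params.items():
        if value is None:
            raise ValueError('Hyperparameter cannot be None: {}'.format(name))
    recorded = {}

    def wrapped(*args, **kwargs):
        recorded.update(params)  # in the real code: tf.constant per param,
                                 # added to the 'params' collection
        return model_fn(*args, **kwargs)

    return wrapped, recorded

wrapped, recorded = with_saved_params(lambda x: x + 1, {'learning_rate': 0.1})
print(wrapped(41))   # 42
print(recorded)      # {'learning_rate': 0.1}
```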
  {
    "path": "moonlight/models/base/hyperparameters_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.models.base import hyperparameters\nimport numpy as np\nimport tensorflow as tf\n\n\nclass HyperparametersTest(tf.test.TestCase):\n\n  def testSimpleModel(self):\n    learning_rate = np.float32(0.123)\n    params = {'learning_rate': learning_rate}\n    estimator = hyperparameters.estimator_with_saved_params(\n        tf.estimator.DNNClassifier(\n            hidden_units=[10],\n            feature_columns=[tf.feature_column.numeric_column('feature')]),\n        params)\n    with self.test_session():\n      # Build the estimator model.\n      estimator.model_fn(\n          features={'feature': tf.placeholder(tf.float32)},\n          labels=tf.placeholder(tf.float32),\n          mode='TRAIN',\n          config=None)\n      # We should be able to pull hyperparameters out of the TensorFlow graph.\n      # The entire graph will also be written to the saved model in training.\n      self.assertEqual(\n          learning_rate,\n          tf.get_default_graph().get_tensor_by_name(\n              'params/learning_rate:0').eval())\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/models/base/label_weights.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Configures the weight (importance) of examples with a given label.\n\nAny glyph type may be over- or under-represented in the examples, which would\nhurt the precision and/or recall for that glyph type. When training, the\ngradient for each example is multiplied by the weight, which scales the\nparameter update for that example.\n\nFor an example custom weight, if naturals are often misclassified as sharps, and\nnot vice versa, we may want to increase the weight for NATURAL.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import flags\nfrom moonlight.protobuf import musicscore_pb2\nimport numpy as np\nimport six\nimport tensorflow as tf\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n    'label_weights', 'NONE=0.5',\n    'Example weights for patches of each label type. 
For example,'\n    ' \"NONE=0.01,FLAT=2.0\" would weight \"NONE\" examples\\' influence as 0.01,'\n    ' \"FLAT\" examples as 2.0, and all other examples as 1.0.')\n\n# The glyph types array must be large enough to hold the highest enum value.\nGLYPH_TYPES_ARRAY_SIZE = 1 + max(\n    number for name, number in musicscore_pb2.Glyph.Type.items())\n\n\ndef parse_label_weights_array(weights_str=None):\n  \"\"\"Creates an array with all of the label weights.\n\n  Args:\n    weights_str: String of label name-weight pairs, separated by commas.\n      Defaults to the command-line flag.\n\n  Returns:\n    A NumPy array large enough to hold all of the glyph enum types. At the index\n    for a glyph enum value, we store the example weight, defaulting to 1.0.\n\n  Raises:\n    ValueError: If a glyph type is listed multiple times.\n  \"\"\"\n  weights_str = weights_str or FLAGS.label_weights\n  weights_array = np.ones(GLYPH_TYPES_ARRAY_SIZE)\n  if not weights_str:\n    return weights_array\n\n  weights = {}\n  for pair in weights_str.split(','):\n    name, glyph_weight_str = pair.split('=')\n    if name in weights:\n      raise ValueError('Duplicate weight: {}'.format(name))\n    weights[name] = float(glyph_weight_str)\n\n  for name, weight in six.iteritems(weights):\n    weights_array[musicscore_pb2.Glyph.Type.Value(name)] = weight\n  return weights_array\n\n\ndef weights_from_labels(labels, weights_str=None):\n  \"\"\"Determines the example weights from a tensor of example labels.\"\"\"\n  with tf.name_scope('weights_from_labels'):\n    weights = tf.constant(\n        parse_label_weights_array(weights_str), name='label_weights')\n    return tf.gather(weights, labels, name='label_weights_lookup')\n"
  },
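The `--label_weights` parsing in `parse_label_weights_array` can be sketched without the protobuf dependency. Here `GLYPH_TYPES` is a hypothetical enum mapping standing in for `musicscore_pb2.Glyph.Type`; as in the real code, unlisted classes default to weight 1.0 and a duplicated name raises `ValueError`.

```python
import numpy as np

# Hypothetical subset of the Glyph.Type enum, for illustration only.
GLYPH_TYPES = {'NONE': 0, 'NOTEHEAD_FILLED': 1, 'SHARP': 2, 'FLAT': 3, 'NATURAL': 4}

def parse_label_weights(weights_str, glyph_types=GLYPH_TYPES):
    """Parses 'NAME=WEIGHT,...' into an array indexed by glyph enum value."""
    # Array large enough for the highest enum value; default weight is 1.0.
    weights = np.ones(1 + max(glyph_types.values()))
    if not weights_str:
        return weights
    seen = set()
    for pair in weights_str.split(','):
        name, value = pair.split('=')
        if name in seen:
            raise ValueError('Duplicate weight: {}'.format(name))
        seen.add(name)
        weights[glyph_types[name]] = float(value)
    return weights

print(parse_label_weights('NONE=0.01,FLAT=2.0').tolist())
# [0.01, 1.0, 1.0, 2.0, 1.0]
```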
  {
    "path": "moonlight/models/base/label_weights_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for label_weights.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.models.base import label_weights\nfrom moonlight.protobuf import musicscore_pb2\nimport tensorflow as tf\n\n\nclass LabelWeightsTest(tf.test.TestCase):\n\n  def testWeightsFromLabels(self):\n    g = musicscore_pb2.Glyph\n    labels = tf.constant(\n        [g.NONE, g.NONE, g.NOTEHEAD_FILLED, g.SHARP, g.FLAT, g.NATURAL])\n    weights = 'NONE=0.1,NATURAL=2.0,SHARP=0.5,NOTEHEAD_FILLED=0.8'\n    weights_tensor = label_weights.weights_from_labels(labels, weights)\n    with self.test_session():\n      self.assertAllEqual([0.1, 0.1, 0.8, 0.5, 1.0, 2.0], weights_tensor.eval())\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/models/glyphs_dnn/BUILD",
    "content": "# Description:\n# Glyph patches DNN classifier.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"model\",\n    srcs = [\"model.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # absl dep\n        \"//moonlight/models/base:glyph_patches\",\n        \"//moonlight/models/base:hyperparameters\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n    ],\n)\n\npy_binary(\n    name = \"train\",\n    srcs = [\"train.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":model\",\n        # disable_tf2\n        # absl dep\n        \"//moonlight/models/base:glyph_patches\",\n    ],\n)\n"
  },
  {
    "path": "moonlight/models/glyphs_dnn/model.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Defines the glyph patches DNN classifier.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import flags\nfrom moonlight.models.base import glyph_patches\nfrom moonlight.models.base import hyperparameters\nfrom moonlight.protobuf import musicscore_pb2\nimport tensorflow as tf\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_multi_integer(\n    'layer_dims', [20, 20],\n    'Dimensions of each hidden layer. 
--layer_dims=0 indicates logistic'\n    ' regression (predictions directly connected to inputs through a sigmoid'\n    ' layer).')\nflags.DEFINE_string(\n    'activation_fn', 'sigmoid',\n    'The name of the function (under tf.nn) to apply after each layer.')\nflags.DEFINE_float('learning_rate', 0.1, 'FTRL learning rate')\nflags.DEFINE_float('l1_regularization_strength', 0.01, 'L1 penalty')\nflags.DEFINE_float('l2_regularization_strength', 0, 'L2 penalty')\nflags.DEFINE_float('dropout', 0, 'Dropout to apply to all hidden nodes.')\n\n\ndef get_flag_params():\n  \"\"\"Returns the hyperparameters specified by flags.\n\n  Returns:\n    A dict of hyperparameter names and values.\n  \"\"\"\n  layer_dims = FLAGS.layer_dims\n  if not any(layer_dims):\n    # Must pass a single layer of size 0 on the command line to indicate\n    # logistic regression (no hidden dims).\n    layer_dims = []\n  return {\n      'model_name':\n          'glyphs_dnn',\n      'layer_dims':\n          layer_dims,\n      'activation_fn':\n          FLAGS.activation_fn,\n      'learning_rate':\n          FLAGS.learning_rate,\n      'l1_regularization_strength':\n          FLAGS.l1_regularization_strength,\n      'l2_regularization_strength':\n          FLAGS.l2_regularization_strength,\n      'dropout':\n          FLAGS.dropout,\n\n      # Declared in glyph_patches.py.\n      'augmentation_x_shift_probability':\n          FLAGS.augmentation_x_shift_probability,\n      'augmentation_max_rotation_degrees':\n          FLAGS.augmentation_max_rotation_degrees,\n      'use_included_label_weight':\n          FLAGS.use_included_label_weight,\n\n      # Declared in label_weights.py.\n      'label_weights':\n          FLAGS.label_weights,\n  }\n\n\ndef create_estimator(params=None):\n  \"\"\"Returns the glyphs DNNClassifier estimator.\n\n  Args:\n    params: Optional hyperparameters, defaulting to command-line values.\n\n  Returns:\n    A DNNClassifier instance.\n  \"\"\"\n  params = params or 
get_flag_params()\n  if not params['layer_dims'] and params['activation_fn'] != 'sigmoid':\n    tf.logging.warning(\n        'activation_fn should be sigmoid for logistic regression. Got: %s',\n        params['activation_fn'])\n\n  activation_fn = getattr(tf.nn, params['activation_fn'])\n  estimator = tf.estimator.DNNClassifier(\n      params['layer_dims'],\n      feature_columns=[glyph_patches.create_patch_feature_column()],\n      weight_column=glyph_patches.WEIGHT_COLUMN_NAME,\n      n_classes=len(musicscore_pb2.Glyph.Type.keys()),\n      optimizer=tf.train.FtrlOptimizer(\n          learning_rate=params['learning_rate'],\n          l1_regularization_strength=params['l1_regularization_strength'],\n          l2_regularization_strength=params['l2_regularization_strength'],\n      ),\n      activation_fn=activation_fn,\n      dropout=params['dropout'],\n      model_dir=glyph_patches.FLAGS.model_dir,\n  )\n  return hyperparameters.estimator_with_saved_params(estimator, params)\n"
  },
  {
    "path": "moonlight/models/glyphs_dnn/train.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Script for training the glyphs DNN classifier.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import app\nfrom absl import flags\nfrom moonlight.models.base import glyph_patches\nfrom moonlight.models.glyphs_dnn import model\nimport tensorflow as tf\n\nFLAGS = flags.FLAGS\n\n\ndef main(_):\n  tf.logging.set_verbosity(tf.logging.INFO)\n  glyph_patches.train_and_evaluate(model.create_estimator())\n\n\nif __name__ == '__main__':\n  app.run(main)\n"
  },
  {
    "path": "moonlight/music/BUILD",
    "content": "# Description:\n# General music theory for OMR.\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"constants\",\n    srcs = [\"constants.py\"],\n    srcs_version = \"PY2AND3\",\n)\n"
  },
  {
    "path": "moonlight/music/constants.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Constants for music theory in OMR.\"\"\"\n\n# The indices of the pitch classes in a major scale.\nMAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]\n\nNUM_SEMITONES_PER_OCTAVE = 12\n\n# These constants are coincidentally equal.\n# The size of the perfect fifth interval.\nNUM_SEMITONES_IN_PERFECT_FIFTH = 7\n# The number of pitch classes present in a diatonic scale (e.g. the major scale)\nNUM_NOTES_IN_DIATONIC_SCALE = 7\n\n# The consecutive base notes of a key signature are each separated by a fifth,\n# or 7 semitones.\nCIRCLE_OF_FIFTHS = [0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5]\n# CIRCLE_OF_FIFTHS is declared as a constant for clarity, but can be generated\n# from:\n# CIRCLE_OF_FIFTHS = [\n#     (i * NUM_SEMITONES_IN_PERFECT_FIFTH) % NUM_SEMITONES_PER_OCTAVE\n#     for i in range(NUM_SEMITONES_PER_OCTAVE)\n# ]\n"
  },
  {
    "path": "moonlight/omr.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Runs OMR and outputs a Score or NoteSequence message.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport sys\nimport time\n\nfrom absl import flags\nimport tensorflow as tf\n\nfrom google.protobuf import text_format\n\nfrom tensorflow.python.lib.io import file_io\n\nfrom moonlight import conversions\nfrom moonlight import engine\nfrom moonlight.glyphs import saved_classifier_fn\n\nFLAGS = flags.FLAGS\n\nVALID_OUTPUT_TYPES = ['MusicXML', 'NoteSequence', 'Score']\n\n# The name of the png file contents tensor.\nPNG_CONTENTS_TENSOR = 'png_contents'\n\nflags.DEFINE_string(\n    'glyphs_saved_model', None,\n    'Path to the patch-based glyph classifier saved model dir. 
Defaults to the'\n    ' included KNN classifier.')\nflags.DEFINE_string('output', '/dev/stdout',\n                    'Path to write the output text-format or binary proto.')\nflags.DEFINE_string('output_type', 'Score',\n                    'Which output type to produce (MusicXML, NoteSequence, or'\n                    ' Score).')\nflags.DEFINE_boolean('text_format', True, 'Whether the output is text format.')\n\n\ndef run(input_pngs, glyphs_saved_model=None, output_notesequence=False):\n  \"\"\"Runs OMR over a list of input images.\n\n  Args:\n    input_pngs: A list of PNG filenames to process.\n    glyphs_saved_model: Optional saved model dir to override the included model.\n    output_notesequence: Whether to return a NoteSequence, as opposed to a Score\n      containing Pages with Glyphs.\n\n  Returns:\n    A NoteSequence message, or a Score message holding Pages for each input\n        image (with their detected Glyphs).\n  \"\"\"\n  return engine.OMREngine(\n      saved_classifier_fn.build_classifier_fn(glyphs_saved_model)).run(\n          input_pngs, output_notesequence=output_notesequence)\n\n\ndef main(argv):\n  if FLAGS.output_type not in VALID_OUTPUT_TYPES:\n    raise ValueError('output_type \"%s\" not in allowed types: %s' %\n                     (FLAGS.output_type, VALID_OUTPUT_TYPES))\n\n  # Exclude argv[0], which is the current binary.\n  patterns = argv[1:]\n  if not patterns:\n    raise ValueError('PNG file glob(s) must be specified')\n  input_paths = []\n  for pattern in patterns:\n    pattern_paths = file_io.get_matching_files(pattern)\n    if not pattern_paths:\n      raise ValueError('Pattern \"%s\" failed to match any files' % pattern)\n    input_paths.extend(pattern_paths)\n\n  start = time.time()\n  output = run(\n      input_paths,\n      FLAGS.glyphs_saved_model,\n      output_notesequence=FLAGS.output_type == 'NoteSequence')\n  end = time.time()\n  sys.stderr.write('OMR elapsed time: %.2f\\n' % (end - start))\n\n  if FLAGS.output_type == 'MusicXML':\n    output_bytes = conversions.score_to_musicxml(output)\n  else:\n    if FLAGS.text_format:\n      output_bytes = text_format.MessageToString(output).encode('utf-8')\n    else:\n      output_bytes = output.SerializeToString()\n  file_io.write_string_to_file(FLAGS.output, output_bytes)\n\n\nif __name__ == '__main__':\n  tf.app.run()\n"
  },
  {
    "path": "moonlight/omr_endtoend_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Simple end-to-end test for OMR on the sample image.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nfrom absl.testing import absltest\nimport librosa\nfrom lxml import etree\nimport numpy as np\nfrom PIL import Image\n\nfrom protobuf import music_pb2\nfrom tensorflow.python.platform import resource_loader\nfrom moonlight import conversions\nfrom moonlight import engine\n\n\nclass OmrEndToEndTest(absltest.TestCase):\n\n  def setUp(self):\n    self.engine = engine.OMREngine()\n\n  def testNoteSequence(self):\n    filename = os.path.join(resource_loader.get_data_files_path(),\n                            'testdata/IMSLP00747-000.png')\n    notes = self.engine.run(filename, output_notesequence=True)\n    # TODO(ringw): Fix the extra note that is detected before the actual\n    # first eighth note.\n    self.assertEqual(librosa.note_to_midi('C4'), notes.notes[1].pitch)\n    self.assertEqual(librosa.note_to_midi('D4'), notes.notes[2].pitch)\n    self.assertEqual(librosa.note_to_midi('E4'), notes.notes[3].pitch)\n    self.assertEqual(librosa.note_to_midi('F4'), notes.notes[4].pitch)\n    self.assertEqual(librosa.note_to_midi('D4'), notes.notes[5].pitch)\n    self.assertEqual(librosa.note_to_midi('E4'), notes.notes[6].pitch)\n    self.assertEqual(librosa.note_to_midi('C4'), 
notes.notes[7].pitch)\n\n  def testBeams_sixteenthNotes(self):\n    filename = os.path.join(resource_loader.get_data_files_path(),\n                            'testdata/IMSLP00747-000.png')\n    notes = self.engine.run([filename], output_notesequence=True)\n\n    def _sixteenth_note(pitch, start_time):\n      return music_pb2.NoteSequence.Note(\n          pitch=librosa.note_to_midi(pitch),\n          start_time=start_time,\n          end_time=start_time + 0.25)\n\n    # TODO(ringw): Fix the phantom quarter note detected before the treble\n    # clef, and the eighth rest before the first note (should be sixteenth).\n    self.assertIn(_sixteenth_note('C4', 1.5), notes.notes)\n    self.assertIn(_sixteenth_note('D4', 1.75), notes.notes)\n    self.assertIn(_sixteenth_note('E4', 2), notes.notes)\n    self.assertIn(_sixteenth_note('F4', 2.25), notes.notes)\n    # TODO(ringw): The second D and E are detected with only one beam, even\n    # though they are connected to the same beams as the F before them and the\n    # C after them. 
Fix.\n\n  def testIMSLP00747_000_structure_barlines(self):\n    page = self.engine.run(\n        os.path.join(resource_loader.get_data_files_path(),\n                     'testdata/IMSLP00747-000.png')).page[0]\n    self.assertEqual(len(page.system), 6)\n\n    self.assertEqual(len(page.system[0].staff), 2)\n    self.assertEqual(len(page.system[0].bar), 4)\n\n    self.assertEqual(len(page.system[1].staff), 2)\n    self.assertEqual(len(page.system[1].bar), 5)\n\n    self.assertEqual(len(page.system[2].staff), 2)\n    self.assertEqual(len(page.system[2].bar), 5)\n\n    self.assertEqual(len(page.system[3].staff), 2)\n    self.assertEqual(len(page.system[3].bar), 4)\n\n    self.assertEqual(len(page.system[4].staff), 2)\n    self.assertEqual(len(page.system[4].bar), 5)\n\n    self.assertEqual(len(page.system[5].staff), 2)\n    self.assertEqual(len(page.system[5].bar), 5)\n\n    for system in page.system:\n      for staff in system.staff:\n        self.assertEqual(staff.staffline_distance, 16)\n\n  def testMusicXML(self):\n    filename = os.path.join(resource_loader.get_data_files_path(),\n                            'testdata/IMSLP00747-000.png')\n    score = self.engine.run([filename])\n    num_measures = sum(\n        len(system.bar) - 1 for page in score.page for system in page.system)\n    musicxml = etree.fromstring(conversions.score_to_musicxml(score))\n    self.assertEqual(2, len(musicxml.findall('part')))\n    self.assertEqual(num_measures,\n                     len(musicxml.find('part[1]').findall('measure')))\n\n  def testProcessImage(self):\n    pil_image = Image.open(\n        os.path.join(resource_loader.get_data_files_path(),\n                     'testdata/IMSLP00747-000.png')).convert('L')\n    arr = np.array(pil_image.getdata(), np.uint8).reshape(\n        # Size is (width, height).\n        pil_image.size[1],\n        pil_image.size[0])\n    page = self.engine.process_image(arr)\n    self.assertEqual(6, len(page.system))\n\n\nif __name__ == 
'__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/omr_regression_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests OMR with corpus scores from Placer.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport re\n\nfrom absl.app import flags\nfrom absl.testing import absltest\n\nfrom moonlight import engine\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.score import measures\nfrom moonlight.score import reader\n\nNOTEHEAD_FILLED = musicscore_pb2.Glyph.NOTEHEAD_FILLED\n\nIMSLP_FILENAME = re.compile('IMSLP([0-9]{5,})-[0-9]{3}.png')\n\nflags.DEFINE_string('corpus_dir', 'corpus', 'Path to the extracted IMSLP pngs.')\n\nFLAGS = flags.FLAGS\n\n\nclass OmrRegressionTest(absltest.TestCase):\n\n  def testIMSLP01963_106_multipleStaffSizes(self):\n    page = engine.OMREngine().run(_get_imslp_path('IMSLP01963-106.png')).page[0]\n    self.assertEqual(len(page.system), 3)\n\n    for system in page.system:\n      self.assertEqual(len(system.staff), 4)\n      self.assertEqual(system.staff[0].staffline_distance, 14)\n      self.assertEqual(system.staff[1].staffline_distance, 14)\n      self.assertEqual(system.staff[2].staffline_distance, 22)\n      self.assertEqual(system.staff[3].staffline_distance, 22)\n\n  def testIMSLP00823_000_structure(self):\n    page = engine.OMREngine().run(_get_imslp_path('IMSLP00823-000.png')).page[0]\n    self.assertEqual(len(page.system), 6)\n\n    
self.assertEqual(len(page.system[0].staff), 2)\n    self.assertEqual(len(page.system[0].bar), 7)\n\n    self.assertEqual(len(page.system[1].staff), 2)\n    self.assertEqual(len(page.system[1].bar), 7)\n\n    self.assertEqual(len(page.system[2].staff), 2)\n    self.assertEqual(len(page.system[2].bar), 6)\n\n    self.assertEqual(len(page.system[3].staff), 2)\n    self.assertEqual(len(page.system[3].bar), 6)\n\n    self.assertEqual(len(page.system[4].staff), 2)\n    # TODO(ringw): Fix barline detection here.\n    # self.assertEqual(len(page.system[4].bar), 6)\n\n    self.assertEqual(len(page.system[5].staff), 2)\n    self.assertEqual(len(page.system[5].bar), 6)\n\n  def testIMSLP00823_008_mergeStandardAndBeginRepeatBars(self):\n    page = engine.OMREngine().run(_get_imslp_path('IMSLP00823-008.png')).page[0]\n    self.assertEqual(len(page.system), 6)\n\n    self.assertEqual(len(page.system[0].staff), 2)\n    # TODO(ringw): Fix barline detection here.\n    # self.assertEqual(len(page.system[0].bar), 6)\n\n    self.assertEqual(len(page.system[1].staff), 2)\n    # self.assertEqual(len(page.system[1].bar), 6)\n\n    self.assertEqual(len(page.system[2].staff), 2)\n    self.assertEqual(len(page.system[2].bar), 7)\n\n    self.assertEqual(len(page.system[3].staff), 2)\n    self.assertEqual(len(page.system[3].bar), 6)\n\n    self.assertEqual(len(page.system[4].staff), 2)\n    self.assertEqual(len(page.system[4].bar), 6)\n    # TODO(ringw): Detect BEGIN_REPEAT_BAR here.\n    self.assertEqual(page.system[4].bar[0].type,\n                     musicscore_pb2.StaffSystem.Bar.END_BAR)\n    self.assertEqual(page.system[4].bar[1].type,\n                     musicscore_pb2.StaffSystem.Bar.STANDARD_BAR)\n\n    self.assertEqual(len(page.system[5].staff), 2)\n    self.assertEqual(len(page.system[5].bar), 7)\n\n  def testIMSLP39661_keySignature_CSharpMinor(self):\n    page = engine.OMREngine().run(_get_imslp_path('IMSLP39661-000.png')).page[0]\n    score_reader = reader.ScoreReader()\n    # 
One of the sharps in the first system is heavily obscured.\n    score_reader.read_system(page.system[1])\n    treble_sig = score_reader.score_state.staves[0].get_key_signature()\n    self.assertEqual(treble_sig.get_type(), musicscore_pb2.Glyph.SHARP)\n    self.assertEqual(len(treble_sig), 4)\n    bass_sig = score_reader.score_state.staves[1].get_key_signature()\n    self.assertEqual(bass_sig.get_type(), musicscore_pb2.Glyph.SHARP)\n    # TODO(ringw): Get glyphs detected correctly in the bass signature.\n    # self.assertEqual(len(bass_sig), 4)\n\n  def testIMSLP00023_015_doubleNoteDots(self):\n    \"\"\"Tests note dots in system[1].staff[1] of the image.\"\"\"\n    page = engine.OMREngine().run(_get_imslp_path('IMSLP00023-015.png')).page[0]\n    self.assertEqual(len(page.system), 6)\n\n    system = page.system[1]\n    system_measures = measures.Measures(system)\n    staff = system.staff[1]\n\n    # All dotted notes in the first measure belong to one chord, and are\n    # double-dotted.\n    double_dotted_notes = [\n        glyph for glyph in staff.glyph\n        if system_measures.get_measure(glyph) == 0 and len(glyph.dot) == 2\n    ]\n    for note in double_dotted_notes:\n      self.assertEqual(len(note.beam), 1)\n      # Double-dotted eighth note duration.\n      self.assertEqual(note.note.end_time - note.note.start_time,\n                       .5 + .25 + .125)\n    double_dotted_note_ys = [glyph.y_position for glyph in double_dotted_notes]\n    self.assertIn(-6, double_dotted_note_ys)\n    self.assertIn(-3, double_dotted_note_ys)\n    self.assertIn(-1, double_dotted_note_ys)\n    self.assertTrue(\n        set(double_dotted_note_ys).issubset([-6, -3, -1, +3, +4]),\n        'No unexpected double-dotted noteheads')\n\n    # TODO(ringw): Notehead at +4 picks up extra dots (4 total). 
The dots\n    # should be in a horizontal line, and we should discard other dots.\n    # There should only be one notehead at +4 with 2 or more dots.\n    self.assertEqual(\n        len([\n            glyph for glyph in staff.glyph\n            if system_measures.get_measure(glyph) == 0 and glyph.type ==\n            NOTEHEAD_FILLED and glyph.y_position == +4 and len(glyph.dot) >= 2\n        ]), 1)\n\n    # All dotted notes in the second measure belong to one chord, and are\n    # single-dotted.\n    single_dotted_notes = [\n        glyph for glyph in staff.glyph\n        if system_measures.get_measure(glyph) == 1 and len(glyph.dot) == 1\n    ]\n    for note in single_dotted_notes:\n      if note.y_position == +2:\n        # TODO(ringw): Detect the beam for this notehead. Its stem is too\n        # short.\n        continue\n      self.assertEqual(len(note.beam), 1)\n      # Single-dotted eighth note duration.\n      self.assertEqual(note.note.end_time - note.note.start_time, .75)\n    single_dotted_note_ys = [glyph.y_position for glyph in single_dotted_notes]\n    self.assertIn(-5, single_dotted_note_ys)\n    self.assertIn(-3, single_dotted_note_ys)\n    self.assertIn(0, single_dotted_note_ys)\n    self.assertIn(+2, single_dotted_note_ys)\n    # TODO(ringw): Detect the dot for the note at y position +4.\n    self.assertTrue(set(single_dotted_note_ys).issubset([-5, -3, 0, +2, +4]))\n\n\ndef _get_imslp_path(filename):\n  m = re.match(IMSLP_FILENAME, filename)\n  bucket = int(m.group(1)) // 1000\n  return os.path.join(FLAGS.corpus_dir, str(bucket), filename)\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/page_processors.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Callables that process Page messages.\n\nEach processor takes a Page and returns a possibly modified Page. It may modify\nthe Page message in place, and return the same message.\n\nThe purpose of a processor is to perform simple inference on elements already in\nthe Page and in the Structure. Processing should not be CPU-intensive, or the\nheavy lifting needs to be implemented in TensorFlow for efficiency.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.glyphs import glyph_types\nfrom moonlight.glyphs import note_dots\nfrom moonlight.glyphs import repeated\nfrom moonlight.staves import staff_processor\nfrom moonlight.structure import barlines\nfrom moonlight.structure import beam_processor\nfrom moonlight.structure import section_barlines\nfrom moonlight.structure import stems\n\n\ndef create_processors(structure, staffline_extractor=None):\n  \"\"\"Generator for the processors to be applied to the Page in order.\n\n  Args:\n    structure: The computed `Structure`.\n    staffline_extractor: The staffline extractor to use for scaling glyph x\n      coordinates. 
Optional.\n\n  Yields:\n    Callables which accept a single `Page` as an argument, and return it\n      (either modifying in place or returning a modified copy).\n  \"\"\"\n  yield staff_processor.StaffProcessor(structure, staffline_extractor)\n  yield stems.Stems(structure)\n  yield beam_processor.BeamProcessor(structure)\n  yield note_dots.NoteDots(structure)\n\n  yield CenteredRests()\n  yield repeated.FixRepeatedRests()\n\n  yield barlines.Barlines(structure)\n  yield section_barlines.SectionBarlines(structure)\n  yield section_barlines.MergeStandardAndBeginRepeatBars(structure)\n\n\ndef process(page, structure, staffline_extractor=None):\n  for processor in create_processors(structure, staffline_extractor):\n    page = processor.apply(page)\n  return page\n\n\n# TODO(ringw): Add a helper for processors that filter the glyphs like this.\nclass CenteredRests(object):\n\n  def apply(self, page):\n    \"\"\"Rests should be centered on the staff, assuming a single voice.\"\"\"\n    for system in page.system:\n      for staff in system.staff:\n        to_remove = []\n        for glyph in staff.glyph:\n          if glyph_types.is_rest(glyph) and abs(glyph.y_position) > 2:\n            to_remove.append(glyph)\n\n        for glyph in to_remove:\n          staff.glyph.remove(glyph)\n\n    return page\n"
  },
  {
    "path": "moonlight/pipeline/BUILD",
    "content": "# Description:\n# OMR pipelines and Apache Beam utilities.\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"pipeline_flags\",\n    srcs = [\"pipeline_flags.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # absl dep\n        # apache-beam dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/pipeline/pipeline_flags.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Configures the Apache Beam runner from the command line in pipelines.\n\nCommand-line flags for particular runners can be added here later, if necessary.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import flags\nimport apache_beam\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n    'runner', 'DirectRunner',\n    'The class name of the Apache Beam runner to use in the pipeline.')\n\n\ndef create_pipeline(**kwargs):\n  return apache_beam.Pipeline(FLAGS.runner, **kwargs)\n"
  },
  {
    "path": "moonlight/protobuf/BUILD",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\nload(\"@com_google_protobuf//:protobuf.bzl\", \"py_proto_library\")\n\npy_proto_library(\n    name = \"protobuf_py_pb2\",\n    srcs = glob([\"*.proto\"]),\n    deps = [\"@magenta//protobuf:music_py_pb2\"],\n    default_runtime = \"@com_google_protobuf//:protobuf_python\",\n    protoc = \"@com_google_protobuf//:protoc\",\n    srcs_version = \"PY2AND3\",\n)\n"
  },
  {
    "path": "moonlight/protobuf/groundtruth.proto",
    "content": "// Copyright 2018 Google LLC\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     https://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\nsyntax = \"proto2\";\n\npackage tensorflow.moonlight;\n\n// MusicXML ground truth corpus. Each MusicXML file corresponds to one or more\n// music score page images. The output of OMR on the pages is compared against\n// the MusicXML ground truth to evaluate the OMR engine.\n\nmessage GroundTruth {\n  optional string title = 1;\n\n  optional string ground_truth_filename = 2;\n\n  // Has one or more page images corresponding to the ground truth.\n  repeated PageSpec page_spec = 3;\n\n  enum Tag {\n    // A parsed tag that was unrecognized.\n    UNKNOWN_TAG = 0;\n    // One voice per staff, with chords allowed in each voice. 
If not present,\n    // all staves are assumed to be polyphonic, which is not supported, and the\n    // score may be excluded from evaluation.\n    MONOPHONIC = 1;\n    // One staff per staff system.\n    SINGLE_STAFF = 2;\n    // Piano/grand staff.\n    PIANO = 3;\n    // One voice (usually connected by a beam) has notes in multiple staves.\n    // This is not supported by evaluation, and the score may be excluded from\n    // evaluation.\n    VOICES_CROSS_STAVES = 4;\n    // The score contains chords in more than one measure (chords are often\n    // present in the last measure, and we want to select scores with more than\n    // one measure).\n    CHORDS = 5;\n  }\n\n  repeated Tag tag = 4;\n}\n\nmessage PageSpec {\n  optional string filename = 1;\n\n  // TODO(ringw): Allow choosing a range of staff systems within one page.\n  // The score may have one movement end and the next movement start on the same\n  // page. The MusicXML ground truth will likely have one file per movement.\n}\n"
  },
  {
    "path": "moonlight/protobuf/musicscore.proto",
    "content": "// Copyright 2018 Google LLC\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     https://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\nsyntax = \"proto2\";\n\npackage tensorflow.moonlight;\n\n// TODO(ringw): Fix python import issue, then load magenta with a prefix of\n// \"magenta\" instead of \"magenta/magenta\". The import here should be\n// \"magenta/music/protobuf/music.proto\".\nimport \"protobuf/music.proto\";\n\n// Alpha-stage OMR page model. Most protos (except Glyph.Type) are subject to\n// backwards-incompatible changes (e.g. renumbering or replacing fields).\n\n// A score, containing multiple pages.\nmessage Score {\n  repeated Page page = 1;\n}\n\n// A page, holding all detected objects.\nmessage Page {\n  repeated StaffSystem system = 1;\n}\n\n// A staff system, holding one or more staves connected by barline(s).\n// Contains zero or more bars, x coordinates where there is a vertical barline.\n// If there are no bars, the staff contains a single measure spanning the width\n// of the page. If there is a single bar, the staff contains a single measure\n// spanning [bar[0], width). 
Otherwise, for n bars, there are n - 1 measures\n// bounded by each adjacent pair of bars.\nmessage StaffSystem {\n  repeated Staff staff = 1;\n  repeated Bar bar = 2;\n\n  message Bar {\n    optional int32 x = 1;\n    optional Type type = 2;\n\n    enum Type {\n      UNKNOWN_BAR = 0;\n      STANDARD_BAR = 1;\n      DOUBLE_BAR = 2;\n      END_BAR = 3;\n      BEGIN_REPEAT_BAR = 4;\n      END_REPEAT_BAR = 5;\n      BEGIN_AND_END_REPEAT_BAR = 6;\n    }\n  }\n}\n\n// A staff is represented by the distance between consecutive horizontal lines,\n// and line segments that make up the center line (3rd out of 5 lines).\nmessage Staff {\n  // The vertical distance between the horizontal lines that make up the staff.\n  // This is assumed to be constant for printed music scores.\n  optional int64 staffline_distance = 1;\n  // The approximately horizontal staff center line (third line of the staff).\n  // The other lines must be a multiple of staffline_distance above and below.\n  // Currently, the line always starts at x = 0 and ends at x = width - 1.\n  repeated Point center_line = 2;\n  // Glyphs which belong to the staff, sorted by x coordinate.\n  repeated Glyph glyph = 3;\n}\n\n// A musical element which is attached to a staff. The y_position represents the\n// staffline or staff space that a glyph is placed on. The center line\n// (third staff line) has position 0, and the y_position increases with\n// decreasing y coordinates (towards musically higher notes).\n// For example, in treble clef, position 0 is B4, position -6 is C4, and\n// position 1 is C5. For more details, see:\n// g3doc/learning/brain/research/magenta/models/omr/g3doc/glyphs.md\nmessage Glyph {\n  optional Type type = 1;\n  optional int64 x = 2;\n  optional int64 y_position = 3;\n  // Noteheads may be attached to a Stem. 
The reader joins noteheads with the\n  // same Stem as chords, which makes them all have the same timing.\n  optional LineSegment stem = 4;\n  // The Stem may intersect with Beam(s). A NOTEHEAD_FILLED has a duration of 1\n  // which is halved for each incident Beam.\n  repeated LineSegment beam = 5;\n  // Dots are attached to noteheads and alter their duration. Each dot adds half\n  // of the previous dot's value (or half the original duration, for the first\n  // dot) to the final duration.\n  repeated Point dot = 6;\n\n  // The Glyph may be detected as a Note by ScoreReader. The Note is stored as\n  // part of the Glyph.\n  optional magenta.NoteSequence.Note note = 7;\n\n  // Glyph Types are used as labels for the Examples in our corpus. Don't change\n  // or reuse the numbers!\n  enum Type {\n    // Default value.\n    UNKNOWN_TYPE = 0;\n    // No glyph at the given position. This is used for classification, where\n    // we will evaluate every pixel as a possible center point for a glyph.\n    NONE = 1;\n    // G clef. Typically centered on the second line from the bottom of the\n    // staff (G4 in treble clef). May be shifted in other, uncommon clefs.\n    CLEF_TREBLE = 2;\n    // F clef. Typically centered on the fourth line from the bottom of the\n    // staff (F3 in bass clef). May be shifted in other, uncommon clefs.\n    CLEF_BASS = 3;\n    // C clef is the clef symbol used for alto, tenor, and other clefs. In each\n    // such clef, the line that the C clef is centered on represents C4.\n    CLEF_C = 13;\n    // \"Common time\" is a stylized \"c\" which is equivalent to a 4/4 time\n    // signature. Note that numeral time signatures are not detected here. 
They\n    // will need to be detected in a post-processing step using OCR.\n    TIME_COMMON = 18;\n    // \"Cut time\" has a vertical line through the \"common time\" symbol, and is\n    // equivalent to 2/2 time.\n    TIME_CUT = 19;\n    NOTEHEAD_FILLED = 4;\n    NOTEHEAD_EMPTY = 5;\n    NOTEHEAD_WHOLE = 6;\n    // Note that whole and half rests will likely not be labeled for the\n    // classifier. They can easily be detected as rectangular connected\n    // components after staff removal, and are distinguished by their absolute\n    // position on the staff. However, the two currently look identical on the\n    // extracted patches because we use staff removal.\n    REST_WHOLE = 14;\n    REST_HALF = 15;\n    REST_QUARTER = 7;\n    REST_EIGHTH = 8;\n    REST_SIXTEENTH = 9;\n    // TODO(ringw): Sixty-fourth rests don't fit within our typical patch\n    // size (3 times the staffline distance), so they can't be detected under\n    // the current model.\n    REST_THIRTYSECOND = 17;\n    FLAT = 10;\n    SHARP = 11;\n    DOUBLE_SHARP = 16;\n    NATURAL = 12;\n    // Next tag: 20\n  }\n}\n\nmessage LineSegment {\n  optional Point start = 1;\n  optional Point end = 2;\n}\n\nmessage Rect {\n  optional Point top_left = 1;\n  optional Point bottom_right = 2;\n}\n\nmessage Point {\n  optional int64 x = 1;\n  optional int64 y = 2;\n}\n"
  },
  {
    "path": "moonlight/score/BUILD",
    "content": "# Description:\n# Score reading for OMR.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"score\",\n    deps = [\n        \":measures\",\n        \":reader\",\n    ],\n)\n\npy_library(\n    name = \"measures\",\n    srcs = [\"measures.py\"],\n    deps = [\"//moonlight/protobuf:protobuf_py_pb2\"],\n)\n\npy_library(\n    name = \"reader\",\n    srcs = [\"reader.py\"],\n    deps = [\n        \":measures\",\n        # absl dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/score/elements:clef\",\n        \"//moonlight/score/state\",\n    ],\n)\n\npy_test(\n    name = \"reader_test\",\n    srcs = [\"reader_test.py\"],\n    deps = [\n        \":reader\",\n        # absl/testing dep\n        # librosa dep\n        \"//moonlight/conversions\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"@magenta//protobuf:music_py_pb2\",\n    ],\n)\n"
  },
  {
    "path": "moonlight/score/elements/BUILD",
    "content": "# Description:\n# Score elements which encapsulate custom logic for score reading.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"clef\",\n    srcs = [\"clef.py\"],\n    deps = [\n        # librosa dep\n        \"//moonlight/music:constants\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n    ],\n)\n\npy_test(\n    name = \"clef_test\",\n    srcs = [\"clef_test.py\"],\n    deps = [\n        \":clef\",\n        # absl/testing dep\n        # librosa dep\n    ],\n)\n\npy_library(\n    name = \"key_signature\",\n    srcs = [\"key_signature.py\"],\n    deps = [\n        \":clef\",\n        # librosa dep\n        \"//moonlight/music:constants\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n    ],\n)\n\npy_test(\n    name = \"key_signature_test\",\n    srcs = [\"key_signature_test.py\"],\n    deps = [\n        \":clef\",\n        \":key_signature\",\n        # absl/testing dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n    ],\n)\n"
  },
  {
    "path": "moonlight/score/elements/clef.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Clef logic for OMR.\n\nA Clef object maps y positions on the staff to the MIDI pitch of the natural\nnote at the y position.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport librosa\n\nfrom moonlight.music import constants\nfrom moonlight.protobuf import musicscore_pb2\n\n\nclass Clef(object):\n  \"\"\"Represents a clef which maps y positions to MIDI notes.\n\n  Attributes:\n    center_line_pitch: A _ScalePitch representing the center line (3rd line of\n      the staff).\n  \"\"\"\n  center_line_pitch = None\n\n  def y_position_to_midi(self, y_position):\n    return (self.center_line_pitch + y_position).midi\n\n\nclass TrebleClef(Clef):\n  \"\"\"Represents a treble clef.\"\"\"\n\n  def __init__(self):\n    self.center_line_pitch = _ScalePitch(constants.MAJOR_SCALE,\n                                         librosa.note_to_midi('B4'))\n    self.glyph = musicscore_pb2.Glyph.CLEF_TREBLE\n\n\nclass BassClef(Clef):\n  \"\"\"Represents a bass clef.\"\"\"\n\n  def __init__(self):\n    self.center_line_pitch = _ScalePitch(constants.MAJOR_SCALE,\n                                         librosa.note_to_midi('D3'))\n    self.glyph = musicscore_pb2.Glyph.CLEF_BASS\n\n\nclass _ScalePitch(object):\n  \"\"\"A natural note which can be offset to get another note.\n\n  Attributes:\n    scale: The 
scale which this pitch is based on. A list of MIDI pitch values\n      spanning one octave.\n    index: The index of the pitch's pitch class within the scale.\n    octave: The index of the octave that the pitch is in, relative to the octave\n      spanning the scale notes.\n  \"\"\"\n\n  def __init__(self, scale, midi):\n    self.scale = scale\n    self.index = scale.index(midi % constants.NUM_SEMITONES_PER_OCTAVE)\n    self.octave = (midi - scale[0]) // constants.NUM_SEMITONES_PER_OCTAVE\n\n  @property\n  def midi(self):\n    \"\"\"The MIDI value for the pitch.\"\"\"\n    notes_per_octave = constants.NUM_SEMITONES_PER_OCTAVE\n    return self.scale[self.index] + notes_per_octave * self.octave\n\n  @property\n  def pitch_index(self):\n    \"\"\"The index of the pitch in the C major scale.\"\"\"\n    return self.index + len(self.scale) * self.octave\n\n  def __add__(self, interval):\n    \"\"\"Returns the natural note `interval` away on `self.scale`.\"\"\"\n    pitch = _ScalePitch(self.scale, self.midi)\n    pitch_index = self.pitch_index + interval\n    pitch.index = pitch_index % len(self.scale)\n    pitch.octave = pitch_index // len(self.scale)\n    return pitch\n"
  },
  {
    "path": "moonlight/score/elements/clef_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the clefs.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\nimport librosa\n\nfrom moonlight.score.elements import clef\n\n\nclass ClefTest(absltest.TestCase):\n\n  def testTrebleClef(self):\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(-8),\n                     librosa.note_to_midi('A3'))\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(-6),\n                     librosa.note_to_midi('C4'))\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(0),\n                     librosa.note_to_midi('B4'))\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(1),\n                     librosa.note_to_midi('C5'))\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(3),\n                     librosa.note_to_midi('E5'))\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(4),\n                     librosa.note_to_midi('F5'))\n    self.assertEqual(clef.TrebleClef().y_position_to_midi(14),\n                     librosa.note_to_midi('B6'))\n\n  def testBassClef(self):\n    self.assertEqual(clef.BassClef().y_position_to_midi(-10),\n                     librosa.note_to_midi('A1'))\n    self.assertEqual(clef.BassClef().y_position_to_midi(-7),\n                     librosa.note_to_midi('D2'))\n    
self.assertEqual(clef.BassClef().y_position_to_midi(-5),\n                     librosa.note_to_midi('F2'))\n    self.assertEqual(clef.BassClef().y_position_to_midi(-1),\n                     librosa.note_to_midi('C3'))\n    self.assertEqual(clef.BassClef().y_position_to_midi(0),\n                     librosa.note_to_midi('D3'))\n    self.assertEqual(clef.BassClef().y_position_to_midi(6),\n                     librosa.note_to_midi('C4'))\n    self.assertEqual(clef.BassClef().y_position_to_midi(8),\n                     librosa.note_to_midi('E4'))\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/score/elements/key_signature.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Music key signature inference.\n\nThe accidentals classes are Accidentals, which are reset for each new measure,\nand KeySignature, which is persisted for a staff at a time (because it is\nexpected to be repeated on each new staff). The key signature must follow the\nexpected pattern, or subsequent accidentals will fail to be added to it, and\nshould be added to the Accidentals instead.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport itertools\n\nimport librosa\nfrom moonlight.music import constants\nfrom moonlight.protobuf import musicscore_pb2\nimport six\n\nGlyph = musicscore_pb2.Glyph  # pylint: disable=invalid-name\n\n\nclass _BaseAccidentals(object):\n  \"\"\"Holds accidentals which are not part of the key signature.\"\"\"\n\n  def __init__(self, clef, accidentals=None):\n    self.clef = clef\n    self._accidentals = dict(accidentals or {})\n\n  def _normalize_position(self, position):\n    \"\"\"No octave normalization.\n\n    Accidentals only apply to the current octave, whereas `KeySignature`\n    overrides this to make its accidentals octave-invariant.\n\n    Args:\n      position: The vertical staff y position.\n\n    Returns:\n      The normalized position.\n    \"\"\"\n    return position\n\n  def get_accidental_for_position(self, position):\n    return 
self._accidentals.get(self._normalize_position(position), Glyph.NONE)\n\n\nclass Accidentals(_BaseAccidentals):\n  \"\"\"Simple map of staff y position to accidental value.\"\"\"\n\n  def __init__(self, clef):\n    super(Accidentals, self).__init__(clef)\n\n  def put(self, position, accidental):\n    self._accidentals[position] = accidental\n\n\nclass KeySignature(_BaseAccidentals):\n  \"\"\"Music key signature.\n\n  Tracks the expected order of accidentals in a key signature. If we detect that\n  an accidental does not match the next expected accidental, it will be treated\n  as a normal accidental and not part of the key signature.\n  \"\"\"\n\n  def _normalize_position(self, position):\n    \"\"\"Normalize base notes by octave for the key signature.\n\n    The key signature contains one accidental per base note, which applies\n    to the same pitch class in all octaves.\n\n    Args:\n      position: The staff y position of the glyph.\n\n    Returns:\n      The base note normalized by octave. This causes an accidental in the key\n          signature to apply to the same note in all octaves.\n    \"\"\"\n    return position % len(constants.MAJOR_SCALE)\n\n  def try_put(self, position, accidental):\n    \"\"\"Adds an accidental to the key signature if applicable.\n\n    Args:\n      position: The accidental glyph y position.\n      accidental: The accidental glyph type.\n\n    Returns:\n      True if the accidental was successfully added to the key signature. 
False\n        if the key signature would be invalid when adding the new accidental.\n    \"\"\"\n    can_put = self._can_put(position, accidental)\n    if can_put:\n      self._accidentals[self._normalize_position(position)] = accidental\n    return can_put\n\n  def _can_put(self, position, accidental):\n    if not self._accidentals:\n      pitch_class = self.clef.y_position_to_midi(position) % 12\n      return (accidental in _KEY_SIGNATURE_PITCH_CLASS_LIST and\n              pitch_class == _KEY_SIGNATURE_PITCH_CLASS_LIST[accidental][0])\n    return (position, accidental) == self.get_next_accidental()\n\n  def get_next_accidental(self):\n    \"\"\"Predicts the next accidental which would be present in the key signature.\n\n    Cannot predict the next accidental if the key signature is currently empty\n    (C major), because the key could contain either sharps or flats.\n\n    Returns:\n      The expected y position of the next accidental if possible, or None.\n      The expected accidental glyph type, or None.\n    \"\"\"\n    # There must already be some accidentals, which are all sharps or all flats.\n    # Get the base pitch class for each note that has an accidental.\n    pitch_classes = [\n        self.clef.y_position_to_midi(position) %\n        constants.NUM_SEMITONES_PER_OCTAVE\n        for position in self._accidentals.keys()\n    ]\n    # Determine the order of pitch classes (for either all sharps or all flats).\n    values = set(self._accidentals.values())\n    if len(values) == 1:\n      full_key_sig = _KEY_SIGNATURE_PITCH_CLASS_LIST[six.next(iter(values))]\n    else:\n      # Key signature is empty. 
Don't know whether to predict a sharp or a flat.\n      return None, None\n\n    if len(pitch_classes) == len(full_key_sig):\n      # No more accidentals to add.\n      return None, None\n    elif set(pitch_classes) == set(full_key_sig[:len(pitch_classes)]):\n      # Use the next pitch class in the list.\n      next_pitch_class = full_key_sig[len(pitch_classes)]\n      accidental = six.next(iter(values))\n      # The pitch class must match exactly one of the 7 y positions that are\n      # allowed for this key signature.\n      for y_position in _KEY_SIGNATURE_Y_POSITION_RANGES[self.clef.glyph,\n                                                         accidental]:\n        if (self.clef.y_position_to_midi(y_position) %\n            constants.NUM_SEMITONES_PER_OCTAVE) == next_pitch_class:\n          return y_position, accidental\n      raise AssertionError('Failed to find the next accidental y position')\n    else:\n      # The current key signature is unrecognized.\n      return None, None\n\n  def get_type(self):\n    \"\"\"Returns whether this is a sharp, flat, or None (C major) signature.\"\"\"\n    return (six.next(iter(self._accidentals.values()))\n            if self._accidentals else None)\n\n  def __len__(self):\n    \"\"\"Returns the number of accidentals in the key signature.\"\"\"\n    return len(self._accidentals)\n\n\ndef _key_sig_pitch_classes(note_name, ascending_fifths):\n  first_pitch_class = (\n      librosa.note_to_midi(note_name + '0') %\n      constants.NUM_SEMITONES_PER_OCTAVE)\n  # Go through the circle of fifths in ascending or descending order.\n  step = 1 if ascending_fifths else -1\n  order = constants.CIRCLE_OF_FIFTHS[::step]\n  # Get the start index for the key signature.\n  first_pitch_class_ind = order.index(first_pitch_class)\n  return list(\n      itertools.islice(\n          # Create a cycle of the order. We may loop around, e.g. 
from F back to\n          # C.\n          itertools.cycle(order),\n          # Take the 7 pitch classes from the cycle.\n          first_pitch_class_ind,\n          first_pitch_class_ind + constants.NUM_NOTES_IN_DIATONIC_SCALE))\n\n\n_KEY_SIGNATURE_PITCH_CLASS_LIST = {\n    # The sharp key signature starts with F#, and each subsequent note ascends\n    # by a fifth.\n    Glyph.SHARP:\n        _key_sig_pitch_classes('F', ascending_fifths=True),\n    # The flat key signature starts with Bb, and each subsequent note descends\n    # by a fifth.\n    Glyph.FLAT:\n        _key_sig_pitch_classes('B', ascending_fifths=False),\n}\n\n# Maps the clef and type of accidentals in the key signature to the range of y\n# positions where the key signature is shown.\n_KEY_SIGNATURE_Y_POSITION_RANGES = {\n    (Glyph.CLEF_TREBLE, Glyph.SHARP): range(-1, 6),  # A#4 to G#5\n    (Glyph.CLEF_TREBLE, Glyph.FLAT): range(-3, 4),  # Fb4 to Eb5\n    (Glyph.CLEF_BASS, Glyph.SHARP): range(-3, 4),  # A#2 to G#3\n    (Glyph.CLEF_BASS, Glyph.FLAT): range(-5, 2),  # Fb2 to Eb3\n}\n"
  },
  {
    "path": "moonlight/score/elements/key_signature_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for key signature inference.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\n\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.score.elements import clef\nfrom moonlight.score.elements import key_signature\n\n\nclass KeySignatureTest(absltest.TestCase):\n\n  def testEmpty_noNextAccidental(self):\n    self.assertEqual(\n        key_signature.KeySignature(clef.TrebleClef()).get_next_accidental(),\n        (None, None))\n\n  def testGMajor(self):\n    sig = key_signature.KeySignature(clef.TrebleClef())\n    self.assertTrue(sig.try_put(+4, musicscore_pb2.Glyph.SHARP))  # F#\n    self.assertEqual(sig.get_next_accidental(),\n                     (+1, musicscore_pb2.Glyph.SHARP))  # C#\n\n  def testGMajor_bassClef(self):\n    sig = key_signature.KeySignature(clef.BassClef())\n    self.assertTrue(sig.try_put(+2, musicscore_pb2.Glyph.SHARP))  # F#\n    self.assertEqual(sig.get_next_accidental(),\n                     (-1, musicscore_pb2.Glyph.SHARP))  # C#\n\n  def testBMajor(self):\n    sig = key_signature.KeySignature(clef.TrebleClef())\n    self.assertTrue(sig.try_put(+4, musicscore_pb2.Glyph.SHARP))  # F#\n    self.assertTrue(sig.try_put(+1, musicscore_pb2.Glyph.SHARP))  # C#\n    self.assertTrue(sig.try_put(+5, musicscore_pb2.Glyph.SHARP)) 
 # G#\n    self.assertEqual(sig.get_next_accidental(),\n                     (+2, musicscore_pb2.Glyph.SHARP))  # D#\n\n  def testEFlatMajor(self):\n    sig = key_signature.KeySignature(clef.TrebleClef())\n    self.assertTrue(sig.try_put(0, musicscore_pb2.Glyph.FLAT))  # Bb\n    self.assertTrue(sig.try_put(+3, musicscore_pb2.Glyph.FLAT))  # Eb\n    self.assertTrue(sig.try_put(-1, musicscore_pb2.Glyph.FLAT))  # Ab\n    self.assertEqual(sig.get_next_accidental(),\n                     (+2, musicscore_pb2.Glyph.FLAT))  # Db\n\n  def testEFlatMajor_bassClef(self):\n    sig = key_signature.KeySignature(clef.BassClef())\n    self.assertTrue(sig.try_put(-2, musicscore_pb2.Glyph.FLAT))  # Bb\n    self.assertTrue(sig.try_put(+1, musicscore_pb2.Glyph.FLAT))  # Eb\n    self.assertTrue(sig.try_put(-3, musicscore_pb2.Glyph.FLAT))  # Ab\n    self.assertEqual(sig.get_next_accidental(),\n                     (0, musicscore_pb2.Glyph.FLAT))  # Db\n\n  def testCFlatMajor_noMoreAccidentals(self):\n    sig = key_signature.KeySignature(clef.TrebleClef())\n    self.assertTrue(sig.try_put(0, musicscore_pb2.Glyph.FLAT))  # Bb\n    self.assertNotEqual(sig.get_next_accidental(), (None, None))\n    self.assertTrue(sig.try_put(+3, musicscore_pb2.Glyph.FLAT))  # Eb\n    self.assertNotEqual(sig.get_next_accidental(), (None, None))\n    self.assertTrue(sig.try_put(-1, musicscore_pb2.Glyph.FLAT))  # Ab\n    self.assertNotEqual(sig.get_next_accidental(), (None, None))\n    self.assertTrue(sig.try_put(+2, musicscore_pb2.Glyph.FLAT))  # Db\n    self.assertNotEqual(sig.get_next_accidental(), (None, None))\n    self.assertTrue(sig.try_put(-2, musicscore_pb2.Glyph.FLAT))  # Gb\n    self.assertNotEqual(sig.get_next_accidental(), (None, None))\n    self.assertTrue(sig.try_put(+1, musicscore_pb2.Glyph.FLAT))  # Cb\n    self.assertNotEqual(sig.get_next_accidental(), (None, None))\n    self.assertTrue(sig.try_put(-3, musicscore_pb2.Glyph.FLAT))  # Fb\n    # Already at Cb major, no more accidentals to add.\n 
   self.assertEqual(sig.get_next_accidental(), (None, None))\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/score/measures.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Represents the measures of a staff system.\n\nConverts bar x coordinates to a series of measures, with the x interval covered\nby each measure.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.protobuf import musicscore_pb2\n\n# sys.maxint overflows the int32 proto field.\nMEASURE_MAX_X = 2**31 - 1\n\n\nclass Measures(object):\n  \"\"\"Represents the measures of a staff system.\"\"\"\n\n  def __init__(self, staff_system):\n    self.bars = list(_get_bar_intervals(staff_system))\n\n  def size(self):\n    \"\"\"Returns the number of measures in the staff system.\n\n    Returns:\n      The number of measures.\n    \"\"\"\n    return len(self.bars)\n\n  def get_measure(self, glyph):\n    \"\"\"Gets the measure number of a `tensorflow.moonlight.Glyph`.\n\n    Args:\n      glyph: A `Glyph` message.\n\n    Returns:\n      The measure index, or -1 if it lies outside of the measures.\n    \"\"\"\n    for i, (start_bar, end_bar) in enumerate(self.bars):\n      if start_bar.x <= glyph.x < end_bar.x:\n        return i\n    return -1\n\n\ndef _get_bar_intervals(staff_system):\n  if not staff_system.bar:\n    # TODO(ringw): Store the image dimensions in the Page message, so that we\n    # can use the actual width as the end of the measure.\n    yield 
(musicscore_pb2.StaffSystem.Bar(x=0),\n           musicscore_pb2.StaffSystem.Bar(x=MEASURE_MAX_X))\n  elif len(staff_system.bar) == 1:\n    # Single barline is at the beginning of the staff.\n    yield staff_system.bar[0], musicscore_pb2.StaffSystem.Bar(x=MEASURE_MAX_X)\n  else:\n    for start, end in zip(staff_system.bar[:-1], staff_system.bar[1:]):\n      yield start, end\n"
  },
  {
    "path": "moonlight/score/reader.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Reads Pages of glyphs and outputs a NoteSequence.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import logging\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.score import measures\nfrom moonlight.score import state\nfrom moonlight.score.elements import clef\nfrom six import moves\n\n# The expected y position for clefs.\nTREBLE_CLEF_EXPECTED_Y = -2\nBASS_CLEF_EXPECTED_Y = 2\n\n# 4 beats to a whole note.\nREST_DURATIONS_ = {\n    musicscore_pb2.Glyph.REST_QUARTER: 4 / 4,\n    musicscore_pb2.Glyph.REST_EIGHTH: 4 / 8,\n    musicscore_pb2.Glyph.REST_SIXTEENTH: 4 / 16,\n}\n\n\nclass ScoreReader(object):\n  \"\"\"Reads a Score proto and interprets musical elements from the glyphs.\n\n  Given a Page containing glyphs, holds global state for the entire score, and\n  per-measure state (accidentals). Each glyph is added to the NoteSequence based\n  on the current state.\n\n  OMR is a work in progress. 
Voice detection is not yet implemented; the score\n  is assumed to be monophonic.\n  \"\"\"\n\n  def __init__(self):\n    self.time = 0.0\n    self.score_state = state.ScoreState()\n\n  def __call__(self, score):\n    \"\"\"Reads a `tensorflow.moonlight.Score` message.\n\n    Modifies the message in place to add detected musical elements.\n\n    Args:\n      score: A `tensorflow.moonlight.Score` message.\n\n    Returns:\n      The same Score object.\n    \"\"\"\n    for page in score.page:\n      self.read_page(page)\n    # Modifies the score in place.\n    return score\n\n  def read_page(self, page):\n    \"\"\"Reads a `tensorflow.moonlight.Page` message.\n\n    Modifies the page in place to add detected musical elements.\n\n    Args:\n      page: A `tensorflow.moonlight.Page` message.\n\n    Returns:\n      The same Page object.\n    \"\"\"\n    for system in page.system:\n      self.read_system(system)\n    return page\n\n  def read_system(self, system):\n    self.score_state.num_staves(len(system.staff))\n    system_measures = measures.Measures(system)\n    for measure_num in moves.xrange(system_measures.size()):\n      for staff, staff_state in zip(system.staff, self.score_state.staves):\n        for glyph in staff.glyph:\n          if system_measures.get_measure(glyph) == measure_num:\n            self._read_glyph(glyph, staff_state)\n      self.score_state.add_measure()\n\n  def _read_glyph(self, glyph, staff_state):\n    if glyph.type not in ScoreReader.GLYPH_HANDLERS_:\n      logging.warning('Handler not implemented: %s',\n                      musicscore_pb2.Glyph.Type.Name(glyph.type))\n      return\n\n    ScoreReader.GLYPH_HANDLERS_[glyph.type](self, staff_state, glyph)\n\n  def _read_clef(self, staff_state, glyph):\n    \"\"\"Reads a clef glyph.\n\n    If the clef is at the expected y position, set the current clef.\n\n    Args:\n      staff_state: The state of the staff that the glyph is on.\n      glyph: A glyph of type CLEF_TREBLE or CLEF_BASS.\n\n  
  Raises:\n      ValueError: If glyph is an unexpected type.\n    \"\"\"\n    if glyph.type == musicscore_pb2.Glyph.CLEF_TREBLE:\n      if glyph.y_position == TREBLE_CLEF_EXPECTED_Y:\n        staff_state.set_clef(clef.TrebleClef())\n    elif glyph.type == musicscore_pb2.Glyph.CLEF_BASS:\n      if glyph.y_position == BASS_CLEF_EXPECTED_Y:\n        staff_state.set_clef(clef.BassClef())\n    else:\n      raise ValueError('Unknown clef of type: ' +\n                       musicscore_pb2.Glyph.Type.Name(glyph.type))\n\n  def _read_note(self, staff_state, glyph):\n    staff_state.measure_state.on_read_notehead()\n    glyph.note.CopyFrom(staff_state.measure_state.get_note(glyph))\n\n  def _read_rest(self, staff_state, glyph):\n    staff_state.set_time(staff_state.get_time() + REST_DURATIONS_[glyph.type])\n\n  def _read_accidental(self, staff_state, glyph):\n    staff_state.measure_state.set_accidental(glyph.y_position, glyph.type)\n\n  def _no_op_handler(self, staff_state, glyph):\n    # Every handler is invoked as handler(self, staff_state, glyph), so the\n    # no-op handler must accept (and ignore) the same arguments.\n    del staff_state, glyph  # Unused.\n\n  GLYPH_HANDLERS_ = {\n      musicscore_pb2.Glyph.NONE: _no_op_handler,\n      musicscore_pb2.Glyph.CLEF_TREBLE: _read_clef,\n      musicscore_pb2.Glyph.CLEF_BASS: _read_clef,\n      musicscore_pb2.Glyph.NOTEHEAD_FILLED: _read_note,\n      musicscore_pb2.Glyph.NOTEHEAD_EMPTY: _read_note,\n      musicscore_pb2.Glyph.NOTEHEAD_WHOLE: _read_note,\n      musicscore_pb2.Glyph.REST_QUARTER: _read_rest,\n      musicscore_pb2.Glyph.REST_EIGHTH: _read_rest,\n      musicscore_pb2.Glyph.REST_SIXTEENTH: _read_rest,\n      musicscore_pb2.Glyph.FLAT: _read_accidental,\n      musicscore_pb2.Glyph.NATURAL: _read_accidental,\n      musicscore_pb2.Glyph.SHARP: _read_accidental,\n      musicscore_pb2.Glyph.DOUBLE_SHARP: _read_accidental,\n  }\n"
  },
  {
    "path": "moonlight/score/reader_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the OMR score reader.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\nimport librosa\n\nfrom protobuf import music_pb2\nfrom moonlight import conversions\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.score import reader\n\n# pylint: disable=invalid-name\nGlyph = musicscore_pb2.Glyph\nNote = music_pb2.NoteSequence.Note\nPoint = musicscore_pb2.Point\n\n\nclass ReaderTest(absltest.TestCase):\n\n  def testTreble_simple(self):\n    staff = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=0),\n        ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[musicscore_pb2.StaffSystem(\n            staff=[staff])])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            Note(pitch=librosa.note_to_midi('B4'), start_time=0, end_time=1)\n        ]))\n\n  def testBass_simple(self):\n    staff = musicscore_pb2.Staff(\n        staffline_distance=10,\n        
center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_BASS,\n                x=1,\n                y_position=reader.BASS_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=0),\n        ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[musicscore_pb2.StaffSystem(\n            staff=[staff])])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            Note(pitch=librosa.note_to_midi('D3'), start_time=0, end_time=1)\n        ]))\n\n  def testTreble_accidentals(self):\n    staff_1 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-6),\n            Glyph(type=Glyph.FLAT, x=16, y_position=-4),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=20, y_position=-4),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=30, y_position=-2),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=40, y_position=-4),\n        ])\n    staff_2 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=150), Point(x=100, y=150)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-6),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=20, y_position=-4),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=30, y_position=-2),\n            Glyph(type=Glyph.SHARP, x=35, y_position=-2),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=40, y_position=-2),\n            Glyph(type=Glyph.NATURAL, x=45, y_position=-2),\n   
         Glyph(type=Glyph.NOTEHEAD_FILLED, x=50, y_position=-2),\n        ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[\n            musicscore_pb2.StaffSystem(staff=[staff_1]),\n            musicscore_pb2.StaffSystem(staff=[staff_2])\n        ])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            # First staff.\n            Note(pitch=librosa.note_to_midi('C4'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('Eb4'), start_time=1, end_time=2),\n            Note(pitch=librosa.note_to_midi('G4'), start_time=2, end_time=3),\n            Note(pitch=librosa.note_to_midi('Eb4'), start_time=3, end_time=4),\n            # Second staff.\n            Note(pitch=librosa.note_to_midi('C4'), start_time=4, end_time=5),\n            Note(pitch=librosa.note_to_midi('E4'), start_time=5, end_time=6),\n            Note(pitch=librosa.note_to_midi('G4'), start_time=6, end_time=7),\n            Note(pitch=librosa.note_to_midi('G#4'), start_time=7, end_time=8),\n            Note(pitch=librosa.note_to_midi('G4'), start_time=8, end_time=9),\n        ]))\n\n  def testChords(self):\n    stem_1 = musicscore_pb2.LineSegment(\n        start=Point(x=20, y=10), end=Point(x=20, y=70))\n    stem_2 = musicscore_pb2.LineSegment(\n        start=Point(x=50, y=10), end=Point(x=50, y=70))\n    staff = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            # Chord of 2 notes.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-4, stem=stem_1),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-1, stem=stem_1),\n\n            # Note not attached to a stem.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=30, 
y_position=3),\n\n            # Chord of 3 notes.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=40, y_position=0, stem=stem_2),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=60, y_position=2, stem=stem_2),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=60, y_position=4, stem=stem_2),\n        ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[musicscore_pb2.StaffSystem(\n            staff=[staff])])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            # First chord.\n            Note(pitch=librosa.note_to_midi('E4'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('A4'), start_time=0, end_time=1),\n\n            # Note without a stem.\n            Note(pitch=librosa.note_to_midi('E5'), start_time=1, end_time=2),\n\n            # Second chord.\n            Note(pitch=librosa.note_to_midi('B4'), start_time=2, end_time=3),\n            Note(pitch=librosa.note_to_midi('D5'), start_time=2, end_time=3),\n            Note(pitch=librosa.note_to_midi('F5'), start_time=2, end_time=3),\n        ]))\n\n  def testBeams(self):\n    beam_1 = musicscore_pb2.LineSegment(\n        start=Point(x=10, y=20), end=Point(x=40, y=20))\n    beam_2 = musicscore_pb2.LineSegment(\n        start=Point(x=70, y=40), end=Point(x=90, y=40))\n    beam_3 = musicscore_pb2.LineSegment(\n        start=Point(x=70, y=60), end=Point(x=90, y=60))\n    staff = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            # 2 eighth notes.\n            Glyph(\n                type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-4, beam=[beam_1]),\n            Glyph(\n                type=Glyph.NOTEHEAD_FILLED, x=40, y_position=-1, beam=[beam_1]),\n          
  # 1 quarter note.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=50, y_position=0),\n            # 2 sixteenth notes.\n            Glyph(\n                type=Glyph.NOTEHEAD_FILLED,\n                x=60,\n                y_position=-2,\n                beam=[beam_2, beam_3]),\n            Glyph(\n                type=Glyph.NOTEHEAD_FILLED,\n                x=90,\n                y_position=2,\n                beam=[beam_2, beam_3]),\n        ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[musicscore_pb2.StaffSystem(\n            staff=[staff])])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            Note(pitch=librosa.note_to_midi('E4'), start_time=0, end_time=0.5),\n            Note(pitch=librosa.note_to_midi('A4'), start_time=0.5, end_time=1),\n            Note(pitch=librosa.note_to_midi('B4'), start_time=1, end_time=2),\n            Note(pitch=librosa.note_to_midi('G4'), start_time=2, end_time=2.25),\n            Note(\n                pitch=librosa.note_to_midi('D5'), start_time=2.25,\n                end_time=2.5),\n        ]))\n\n  def testAllNoteheadTypes(self):\n    staff = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-6),\n            Glyph(type=Glyph.NOTEHEAD_EMPTY, x=10, y_position=-6),\n            Glyph(type=Glyph.NOTEHEAD_WHOLE, x=10, y_position=-6),\n        ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[musicscore_pb2.StaffSystem(\n            staff=[staff])])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            
Note(pitch=librosa.note_to_midi('C4'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('C4'), start_time=1, end_time=3),\n            Note(pitch=librosa.note_to_midi('C4'), start_time=3, end_time=7),\n        ]))\n\n  def testStaffSystems(self):\n    # 2 staff systems on separate pages, each with 2 staves, and no bars.\n    system_1_staff_1 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=100, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=-6),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=50, y_position=-2),\n        ])\n    system_1_staff_2 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=150), Point(x=100, y=150)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_BASS,\n                x=2,\n                y_position=reader.BASS_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=0),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=40, y_position=2),\n            # Played after the second note in the first staff, although it is to\n            # the left of it.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=45, y_position=4),\n        ])\n    system_2_staff_1 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=250), Point(x=100, y=250)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.REST_QUARTER, x=20, y_position=0),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=50, y_position=-2),\n        ])\n    system_2_staff_2 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=250), Point(x=100, 
y=250)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_BASS,\n                x=2,\n                y_position=reader.BASS_CLEF_EXPECTED_Y),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=10, y_position=0),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=40, y_position=2),\n        ])\n    notes = conversions.score_to_notesequence(reader.ScoreReader()(\n        musicscore_pb2.Score(page=[\n            musicscore_pb2.Page(system=[\n                musicscore_pb2.StaffSystem(\n                    staff=[system_1_staff_1, system_1_staff_2]),\n            ]),\n            musicscore_pb2.Page(system=[\n                musicscore_pb2.StaffSystem(\n                    staff=[system_2_staff_1, system_2_staff_2]),\n            ]),\n        ]),))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            # System 1, staff 1.\n            Note(pitch=librosa.note_to_midi('C4'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('G4'), start_time=1, end_time=2),\n            # System 1, staff 2.\n            Note(pitch=librosa.note_to_midi('D3'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('F3'), start_time=1, end_time=2),\n            Note(pitch=librosa.note_to_midi('A3'), start_time=2, end_time=3),\n            # System 2, staff 1.\n            # Quarter rest.\n            Note(pitch=librosa.note_to_midi('G4'), start_time=4, end_time=5),\n            # System 2, staff 2.\n            Note(pitch=librosa.note_to_midi('D3'), start_time=3, end_time=4),\n            Note(pitch=librosa.note_to_midi('F3'), start_time=4, end_time=5),\n        ]))\n\n  def testMeasures(self):\n    # 2 staves in the same staff system with multiple bars.\n    staff_1 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=50), Point(x=300, y=50)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_TREBLE,\n                x=1,\n                
y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n            # Key signature.\n            Glyph(type=Glyph.SHARP, x=10, y_position=+4),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=20, y_position=-2),\n\n            # Accidental.\n            Glyph(type=Glyph.FLAT, x=40, y_position=-1),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=50, y_position=-1),\n\n            # Second bar.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=120, y_position=0),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=180, y_position=+4),\n\n            # Third bar.\n            # Accidental not propagated to this note.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=220, y_position=-1),\n        ])\n    staff_2 = musicscore_pb2.Staff(\n        staffline_distance=10,\n        center_line=[Point(x=0, y=150), Point(x=300, y=150)],\n        glyph=[\n            Glyph(\n                type=Glyph.CLEF_BASS,\n                x=1,\n                y_position=reader.BASS_CLEF_EXPECTED_Y),\n            # Key signature.\n            Glyph(type=Glyph.FLAT, x=15, y_position=-2),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=20, y_position=-2),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=50, y_position=+2),\n\n            # Second bar.\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=150, y_position=-2),\n\n            # Third bar.\n            Glyph(type=Glyph.REST_QUARTER, x=220, y_position=0),\n            Glyph(type=Glyph.NOTEHEAD_FILLED, x=280, y_position=-2),\n        ])\n    staff_system = musicscore_pb2.StaffSystem(\n        staff=[staff_1, staff_2],\n        bar=[_bar(0), _bar(100), _bar(200),\n             _bar(300)])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[staff_system])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            # Staff 1, bar 1.\n            Note(pitch=librosa.note_to_midi('G4'), start_time=0, end_time=1),\n            
Note(pitch=librosa.note_to_midi('Ab4'), start_time=1, end_time=2),\n            # Staff 1, bar 2.\n            Note(pitch=librosa.note_to_midi('B4'), start_time=2, end_time=3),\n            Note(pitch=librosa.note_to_midi('F#5'), start_time=3, end_time=4),\n            # Staff 1, bar 3.\n            Note(pitch=librosa.note_to_midi('A4'), start_time=4, end_time=5),\n            # Staff 2, bar 1.\n            Note(pitch=librosa.note_to_midi('Bb2'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('F3'), start_time=1, end_time=2),\n            # Staff 2, bar 2.\n            Note(pitch=librosa.note_to_midi('Bb2'), start_time=2, end_time=3),\n            # Staff 2, bar 3.\n            Note(pitch=librosa.note_to_midi('Bb2'), start_time=5, end_time=6),\n        ]))\n\n  def testKeySignatures(self):\n    # One staff per system, two systems.\n    staff_1 = musicscore_pb2.Staff(glyph=[\n        Glyph(\n            type=Glyph.CLEF_TREBLE,\n            x=5,\n            y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n        # D major key signature.\n        Glyph(type=Glyph.SHARP, x=15, y_position=+4),\n        Glyph(type=Glyph.SHARP, x=25, y_position=+1),\n\n        # Accidental which cannot be interpreted as part of the key\n        # signature.\n        Glyph(type=Glyph.SHARP, x=35, y_position=+2),\n        Glyph(type=Glyph.NOTEHEAD_FILLED, x=45, y_position=+2),  # D#5\n        Glyph(type=Glyph.NOTEHEAD_EMPTY, x=55, y_position=+1),  # C#5\n        Glyph(type=Glyph.NOTEHEAD_FILLED, x=65, y_position=-3),  # F#4\n\n        # New measure. 
The key signature should be retained.\n        Glyph(type=Glyph.NOTEHEAD_EMPTY, x=105, y_position=-3),  # F#4\n        Glyph(type=Glyph.NOTEHEAD_FILLED, x=125, y_position=+1),  # C#5\n        # Accidental is not retained.\n        Glyph(type=Glyph.NOTEHEAD_FILLED, x=145, y_position=+2),  # D5\n    ])\n    staff_2 = musicscore_pb2.Staff(glyph=[\n        Glyph(\n            type=Glyph.CLEF_TREBLE,\n            x=5,\n            y_position=reader.TREBLE_CLEF_EXPECTED_Y),\n        # No key signature on this line. No accidentals.\n        Glyph(type=Glyph.NOTEHEAD_EMPTY, x=25, y_position=-3),  # F4\n        Glyph(type=Glyph.NOTEHEAD_EMPTY, x=45, y_position=+1),  # C5\n    ])\n    notes = conversions.page_to_notesequence(reader.ScoreReader().read_page(\n        musicscore_pb2.Page(system=[\n            musicscore_pb2.StaffSystem(\n                staff=[staff_1], bar=[_bar(0), _bar(100),\n                                      _bar(200)]),\n            musicscore_pb2.StaffSystem(staff=[staff_2]),\n        ])))\n    self.assertEqual(\n        notes,\n        music_pb2.NoteSequence(notes=[\n            # First measure.\n            Note(pitch=librosa.note_to_midi('D#5'), start_time=0, end_time=1),\n            Note(pitch=librosa.note_to_midi('C#5'), start_time=1, end_time=3),\n            Note(pitch=librosa.note_to_midi('F#4'), start_time=3, end_time=4),\n            # Second measure.\n            Note(pitch=librosa.note_to_midi('F#4'), start_time=4, end_time=6),\n            Note(pitch=librosa.note_to_midi('C#5'), start_time=6, end_time=7),\n            Note(pitch=librosa.note_to_midi('D5'), start_time=7, end_time=8),\n            # Third measure on a new line, with no key signature.\n            Note(pitch=librosa.note_to_midi('F4'), start_time=8, end_time=10),\n            Note(pitch=librosa.note_to_midi('C5'), start_time=10, end_time=12),\n        ]))\n\n\ndef _bar(x):\n  return musicscore_pb2.StaffSystem.Bar(\n      x=x, 
type=musicscore_pb2.StaffSystem.Bar.STANDARD_BAR)\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/score/state/BUILD",
    "content": "# Description:\n# The music score state. Holds information such as the key signature and current time in the score,\n# which is used for reading notes from the score.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"state\",\n    srcs = [\"__init__.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":measure\",\n        \":staff\",\n        # six dep\n    ],\n)\n\npy_library(\n    name = \"measure\",\n    srcs = [\"measure.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # enum34 dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/score/elements:key_signature\",\n        \"@magenta//protobuf:music_py_pb2\",\n    ],\n)\n\npy_library(\n    name = \"staff\",\n    srcs = [\"staff.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":measure\",\n        \"//moonlight/score/elements:clef\",\n    ],\n)\n"
  },
  {
    "path": "moonlight/score/state/__init__.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"The global state for the entire score.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.score.state import staff as staff_state\nfrom six import moves\n\n\nclass ScoreState(object):\n  \"\"\"The global state for the entire score.\n\n  Represents the state of the score across multiple staff systems. The staff\n  system state does not change on a new line, unless the number of staves\n  changes. `num_staves` should be called for every new staff system, to reset\n  the staff state properly.\n\n  Attributes:\n    staves: A list of StaffState objects, representing each staff in the current\n      staff system.\n  \"\"\"\n\n  def __init__(self):\n    self.staves = []\n\n  def num_staves(self, num_staves):\n    \"\"\"Updates the score to have the given number of staves.\n\n    If `num_staves` matches the current `len(self.staves)`, copies the persisted\n    state from the previous staves to the new staff system. 
Otherwise,\n    discards any current staves and constructs `num_staves` new staves.\n\n    Args:\n      num_staves: The number of staves for the current staff system.\n    \"\"\"\n    time = self.add_measure()\n    if len(self.staves) != num_staves:\n      # The staff count changed; discard the old state and start fresh staves.\n      self.staves = [\n          staff_state.StaffState(time) for _ in moves.xrange(num_staves)\n      ]\n    else:\n      self.staves = [staff.new_staff(time) for staff in self.staves]\n\n  def add_measure(self):\n    \"\"\"Adds a new measure for the current staff system.\n\n    Called on every bar. Updates each staff.\n\n    Returns:\n      The start time of the new measure, which is the max of the current time of\n      each current staff.\n    \"\"\"\n    time = (\n        max([staff.get_time() for staff in self.staves]) if self.staves else 0)\n    for staff in self.staves:\n      staff.add_measure(time)\n    return time\n"
  },
  {
    "path": "moonlight/score/state/measure.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"The score state which is not persisted between measures.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport copy\nimport enum\n\nfrom protobuf import music_pb2\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.score.elements import key_signature as key_signature_module\n\nACCIDENTAL_PITCH_SHIFT_ = {\n    # TODO(ringw): Detect 2 adjacent flats as a double flat.\n    musicscore_pb2.Glyph.FLAT:\n        -1,\n    musicscore_pb2.Glyph.NATURAL:\n        0,\n    musicscore_pb2.Glyph.NONE:\n        0,\n    musicscore_pb2.Glyph.SHARP:\n        +1,\n    musicscore_pb2.Glyph.DOUBLE_SHARP:\n        +2,\n}\n\n\nclass _KeySignatureState(enum.Enum):\n\n  KEY_SIGNATURE = 1\n  ACCIDENTALS = 2\n\n\nclass MeasureState(object):\n  \"\"\"State of a single measure of a staff.\n\n  Attributes:\n    clef: The current clef.\n    key_signature: The current `KeySignature`.\n    chords: A map from stem (tuple `((x0, y0), (x1, y1))`) to the first note\n      that was read and is attached to the stem. Subsequent notes attached to\n      the same stem will read their start and end time from the first note.\n    time: The current time in the measure. Absolute time relative to the start\n      of the score. 
A float.\n  \"\"\"\n\n  def __init__(self, start_time, clef, key_signature=None):\n    \"\"\"Initializes a new measure.\n\n    Args:\n      start_time: The start time (in quarter notes) of the measure.\n      clef: A `Clef`.\n      key_signature: The previously detected key signature (optional). If\n        present, do not detect a key signature in this measure. This should be\n        taken from the previous measure on this staff if this is not the first\n        measure. It should not be propagated from one staff to the next, because\n        we expect the key signature to be repeated on each staff and we will\n        re-detect it.\n    \"\"\"\n    self.time = start_time\n    self.clef = clef\n    self.key_signature = (\n        key_signature or key_signature_module.KeySignature(clef))\n    self._accidentals = key_signature_module.Accidentals(clef)\n    self._key_signature_state = (\n        _KeySignatureState.ACCIDENTALS\n        if key_signature else _KeySignatureState.KEY_SIGNATURE)\n    self.chords = {}\n\n  def new_measure(self, start_time):\n    \"\"\"Constructs a new MeasureState for the next measure.\n\n    Args:\n      start_time: The start time of the new measure.\n\n    Returns:\n      A new MeasureState object.\n    \"\"\"\n    return MeasureState(\n        start_time,\n        clef=self.clef,\n        key_signature=copy.deepcopy(self.key_signature))\n\n  def set_accidental(self, y_position, accidental):\n    \"\"\"Adds a glyph to the key signature or accidentals.\n\n    Args:\n      y_position: The position of the accidental.\n      accidental: The accidental value.\n    \"\"\"\n    if self._key_signature_state == _KeySignatureState.KEY_SIGNATURE:\n      if self.key_signature.try_put(y_position, accidental):\n        return\n      self._key_signature_state = _KeySignatureState.ACCIDENTALS\n    self._accidentals.put(y_position, accidental)\n\n  def get_note(self, glyph):\n    \"\"\"Converts a Glyph to a Note.\n\n    Gets the note timing from an 
existing chord if available, or increments the\n    current measure time otherwise.\n\n    Args:\n      glyph: A Glyph message. Type must be one of NOTEHEAD_*.\n\n    Returns:\n      A Note message.\n    \"\"\"\n    accidental = self._accidentals.get_accidental_for_position(glyph.y_position)\n    if accidental == musicscore_pb2.Glyph.NONE:\n      accidental = self.key_signature.get_accidental_for_position(\n          glyph.y_position)\n    pitch = (\n        self.clef.y_position_to_midi(glyph.y_position) +\n        ACCIDENTAL_PITCH_SHIFT_[accidental])\n    first_note_in_chord = None\n    if glyph.HasField('stem'):\n      # Try to get the timing from another note in the same chord.\n      stem = ((glyph.stem.start.x, glyph.stem.start.y), (glyph.stem.end.x,\n                                                         glyph.stem.end.y))\n      if stem in self.chords:\n        first_note_in_chord = self.chords[stem]\n    else:\n      stem = None\n\n    if first_note_in_chord:\n      start_time, end_time = (first_note_in_chord.start_time,\n                              first_note_in_chord.end_time)\n    else:\n      # TODO(ringw): Check all note durations, not just the first seen in a\n      # chord, and use the median detected duration.\n      duration = _get_note_duration(glyph)\n      start_time, end_time = self.time, self.time + duration\n      self.time += duration\n    note = music_pb2.NoteSequence.Note(\n        pitch=pitch, start_time=start_time, end_time=end_time)\n    if stem:\n      self.chords[stem] = note\n    return note\n\n  def set_clef(self, clef):\n    \"\"\"Sets the clef, and resets the key signature if necessary.\"\"\"\n    if clef != self.clef:\n      self._key_signature_state = _KeySignatureState.KEY_SIGNATURE\n      self.key_signature = key_signature_module.KeySignature(clef)\n      self._accidentals = key_signature_module.Accidentals(clef)\n    self.clef = clef\n\n  def on_read_notehead(self):\n    \"\"\"Called after a notehead has been read.\n\n    
The key signature should occur before any noteheads in the measure. This\n    causes subsequent accidental glyphs to be read as accidentals, and not part\n    of the key signature.\n    \"\"\"\n    self._key_signature_state = _KeySignatureState.ACCIDENTALS\n\n\ndef _get_note_duration(note):\n  \"\"\"Determines the duration of a notehead glyph.\n\n  This depends on the glyph type, beams (which each halve the duration), and\n  dots (which each add a fractional duration). In the future, notes may be\n  recognized as a tuplet, which will result in a Fraction duration. For now, the\n  duration is a float, because the denominator is always a sum of powers of two.\n\n  Args:\n    note: A `Glyph` of a notehead type.\n\n  Returns:\n    The float duration of the note, in quarter notes.\n\n  Raises:\n    ValueError: If `note` is not a notehead type.\n  \"\"\"\n  if note.type == musicscore_pb2.Glyph.NOTEHEAD_FILLED:\n    # Quarter note: 2.0 ** 0 == 1\n    # Each beam halves the note duration.\n    duration = 2.0**-len(note.beam)\n  elif note.type == musicscore_pb2.Glyph.NOTEHEAD_EMPTY:\n    duration = 2.0\n  elif note.type == musicscore_pb2.Glyph.NOTEHEAD_WHOLE:\n    duration = 4.0\n  else:\n    raise ValueError('Expected a notehead, got: %s' % note)\n  # The first dot adds half the original duration, and further dots add half the\n  # value added by the previous dot.\n  dot_value = duration / 2.\n  for _ in note.dot:\n    duration += dot_value\n    dot_value /= 2.\n  return duration\n"
  },
  {
    "path": "moonlight/score/state/staff.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"The state for a single staff of a staff system.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.score.elements import clef as clef_module\nfrom moonlight.score.state import measure\n\n\nclass StaffState(object):\n  \"\"\"The state for a single staff of a staff system.\n\n  Holds the current measure of the staff, which has per-measure state (e.g.\n  accidentals). Other state is copied to a new measure at each barline, and\n  copied to a new StaffState representing a new line when `new_staff` is called.\n  \"\"\"\n\n  def __init__(self, start_time, clef=None):\n    clef = clef or clef_module.TrebleClef()\n    self.measure_state = measure.MeasureState(start_time, clef=clef)\n\n  def add_measure(self, start_time):\n    \"\"\"Updates `measure_state` for a new measure.\n\n    Args:\n      start_time: The start time of the new measure.  
State persisted between\n        measures is copied, and other state is initialized to the defaults for\n        the new measure.\n    \"\"\"\n    self.measure_state = self.measure_state.new_measure(start_time)\n\n  def new_staff(self, start_time):\n    \"\"\"Copies the StaffState to a new staff on a new line.\n\n    Args:\n      start_time: Start time of the first measure of the new staff.\n\n    Returns:\n      A new StaffState instance.\n    \"\"\"\n    # Don't persist the key signature between staves, since we expect it to be\n    # at the start of each line.\n    return StaffState(start_time, clef=self.measure_state.clef)\n\n  def get_key_signature(self):\n    \"\"\"Returns the key signature at the current point in time.\"\"\"\n    return self.measure_state.key_signature\n\n  def get_time(self):\n    \"\"\"Returns the current time.\"\"\"\n    return self.measure_state.time\n\n  def set_time(self, time):\n    \"\"\"Updates the current time of the current measure.\n\n    Args:\n      time: A floating-point time to set on `self.measure_state`.\n    \"\"\"\n    self.measure_state.time = time\n\n  def set_clef(self, clef):\n    \"\"\"Updates the clef.\n\n    Args:\n      clef: A TrebleClef or BassClef to set on `self.measure_state`.\n    \"\"\"\n    self.measure_state.set_clef(clef)\n"
  },
  {
    "path": "moonlight/score_processors.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Processors that need to visit each page of the score in one pass.\n\nThese are intended for detecting musical elements, where musical context may\nspan staff systems and pages (e.g. the time signature). Musical elements (e.g.\nnotes) are added to the `Score` message directly.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.score import reader\n\n\ndef create_processors():\n  yield reader.ScoreReader()\n\n\ndef process(score):\n  \"\"\"Processes a Score.\n\n  Detects notes in the Score, and returns the Score in place.\n\n  Args:\n    score: A `Score` message.\n\n  Returns:\n    A `Score` message with `Note`s added to the `Glyph`s where applicable.\n  \"\"\"\n  for processor in create_processors():\n    score = processor(score)\n  return score\n"
  },
  {
    "path": "moonlight/scripts/imslp_pdfs_to_pngs.sh",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#!/bin/bash\n# Extracts PNGs from the IMSLP backup. The output PNGs are suitable for training\n# and running OMR. See IMSLP for the latest details on the backup:\n# http://imslp.org/wiki/IMSLP:Backups\n\nINPUT_DIR=\"$1\"\nOUTPUT_DIR=\"$2\"\n\nif ! [[ -d \"$INPUT_DIR\" ]]; then\n  echo \"First argument must be a directory\" > /dev/stderr\n  exit -1\nfi\n\nif ! [[ -d \"$OUTPUT_DIR\" ]]; then\n  mkdir -v \"$OUTPUT_DIR\"\nfi\n\nif ! [[ -x \"$(which pdfimages)\" ]]; then\n  echo \"pdfimages is required. Please install poppler-utils.\" > /dev/stderr\n  exit -1\nfi\n\nif ! [[ -x \"$(which parallel)\" ]]; then\n  echo \"GNU parallel is required. Please install parallel.\" > /dev/stderr\n  exit -1\nfi\n\nif ! [[ -x \"$(which convert)\" ]]; then\n  echo \"'convert' is required. Please install imagemagick.\" > /dev/stderr\n  exit -1\nfi\n\n# For each pdf...\nfind \"$INPUT_DIR\" -name \"IMSLP*.pdf\" | \\\n    # Convert to \"IMSLPnnnnn-nnn.ppm\" or \".pgm\" images in $OUTPUT_DIR.\n    perl -ne 'chomp; /(IMSLP[0-9]+)/; print qq(pdfimages \"$_\" \"'\"$OUTPUT_DIR\"'\"/$1\\n)' | \\\n    # Run all emitted commands in parallel.\n    parallel -v\n\n# Convert extracted \"pbm\", \"pgm\", and \"ppm\" images to PNG.\n(for file in \"$OUTPUT_DIR\"/*.p[bgp]m; do\n  echo \"convert '$file' '${file%.*}.png' && rm -v '$file'\"\ndone) | parallel -v\n"
  },
  {
    "path": "moonlight/staves/BUILD",
    "content": "# Description:\n# Staff detection and removal routines.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"staves\",\n    srcs = [\"__init__.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":hough\",\n        \":projection\",\n    ],\n)\n\npy_library(\n    name = \"base\",\n    srcs = [\"base.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/util:memoize\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"filter\",\n    srcs = [\"filter.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/util:segments\",\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"hough\",\n    srcs = [\"hough.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":base\",\n        \":filter\",\n        \":staffline_distance\",\n        \"//moonlight/util:memoize\",\n        \"//moonlight/vision:hough\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"projection\",\n    srcs = [\"projection.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":base\",\n        \"//moonlight/util:memoize\",\n        # numpy dep\n        # scipy dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"detectors_test\",\n    srcs = [\"detectors_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_distance\",\n        \":staves\",\n        \":testing\",\n        # disable_tf2\n        \"//moonlight:image\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"staff_processor\",\n    srcs = [\"staff_processor.py\"],\n    srcs_version = \"PY2AND3\",\n)\n\npy_test(\n    name = \"staff_processor_test\",\n    srcs = [\"staff_processor_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        
\":staff_processor\",\n        \":testing\",\n        # disable_tf2\n        # absl/testing dep\n        \"//moonlight:engine\",\n        \"//moonlight/glyphs:testing\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/structure\",\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"staffline_extractor\",\n    srcs = [\"staffline_extractor.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":removal\",\n        \":staves\",\n        # enum34 dep\n        \"//moonlight:image\",\n        # six dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"staffline_extractor_test\",\n    size = \"small\",\n    timeout = \"moderate\",\n    srcs = [\"staffline_extractor_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_extractor\",\n        \":staves\",\n        # disable_tf2\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"staffline_distance\",\n    srcs = [\"staffline_distance.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/util:run_length\",\n        \"//moonlight/util:segments\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"staffline_distance_test\",\n    srcs = [\"staffline_distance_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_distance\",\n        # disable_tf2\n        \"//moonlight:image\",\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"removal\",\n    srcs = [\"removal.py\"],\n    deps = [\n        \"//moonlight/util:memoize\",\n        \"//moonlight/util:segments\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"removal_test\",\n    srcs = [\"removal_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    deps = [\n        \":removal\",\n        \":staffline_distance\",\n        \":staves\",\n        # disable_tf2\n        
\"//moonlight:image\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"testing\",\n    testonly = 1,\n    srcs = [\"testing.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\":base\"],\n)\n"
  },
  {
    "path": "moonlight/staves/__init__.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Staff detection.\n\nHolds the staff detector classes that can be used as part of an OMR pipeline.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.staves import hough\nfrom moonlight.staves import projection\n\n# Alias the staff detectors to access them directly from the staves module.\n# pylint: disable=invalid-name\nFilteredHoughStaffDetector = hough.FilteredHoughStaffDetector\nProjectionStaffDetector = projection.ProjectionStaffDetector\n\n# The default staff detector that should be used in production.\nStaffDetector = FilteredHoughStaffDetector\n"
  },
  {
    "path": "moonlight/staves/base.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Defines the base class for all staff detectors.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport abc\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.util import memoize\n\n\nclass BaseStaffDetector(object):\n  \"\"\"Base for a routine that returns staves in a music score.\n\n  Attributes of concrete subclasses:\n    staves\n    staffline_distance\n    staffline_thickness\n  \"\"\"\n  __metaclass__ = abc.ABCMeta\n\n  def __init__(self, image=None):\n    \"\"\"Creates a staff detector for the given music score image.\n\n    Args:\n      image: The music score image. 
If none, sets self.image to a placeholder.\n    \"\"\"\n    if image is None:\n      self.image = tf.placeholder(tf.uint8, shape=(None, None))\n    else:\n      self.image = tf.convert_to_tensor(image, tf.uint8)\n\n  @property\n  def data(self):\n    \"\"\"Returns the list of staff detection tensors to be computed.\n\n    Returns:\n      A list of Tensors.\n    \"\"\"\n    return [\n        self.staves, self.staffline_distance, self.staffline_thickness,\n        self.staves_interpolated_y\n    ]\n\n  @property\n  @memoize.MemoizedFunction\n  def staves_interpolated_y(self):\n    \"\"\"Interpolates the center line y coordinate for each staff.\n\n    Calculates the staff center y for each x coordinate from 0 to `width - 1`.\n\n    Returns:\n      A tensor of shape (num_staves, width).\n    \"\"\"\n    image_shape = tf.shape(self.image)\n\n    def _get_staff_center_line_y(staff):\n      \"\"\"Interpolates the y position for the staff.\n\n      For x values in the interval [0, image_shape[1]), calculate the y\n      position. 
The y position past either end of the staff line is assumed to\n      be the same as at the endpoint.\n\n      Args:\n        staff: The sequence of (x, y) coordinates for the staff center line.\n          int32 tensor of shape (num_points, 2).\n\n      Returns:\n        The array of y position values.\n      \"\"\"\n      staff = tf.convert_to_tensor(staff, dtype=tf.int32)\n      input_validation = [\n          tf.Assert(\n              tf.greater_equal(tf.shape(staff)[0], 2),\n              [staff, tf.shape(staff)],\n              name=\"at_least_2_points\"),\n          tf.Assert(\n              tf.equal(tf.shape(staff)[1], 2), [staff, tf.shape(staff)],\n              name=\"x_and_y\"),\n          tf.Assert(\n              tf.greater_equal(staff[0, 0], 0), [image_shape, staff],\n              name=\"staff_x_positive\"),\n          tf.Assert(\n              tf.less(staff[-1, 0], image_shape[1]), [image_shape, staff],\n              name=\"staff_x_ends_before_end_of_image\"),\n      ]\n      # Validate the input before the main body.\n      with tf.control_dependencies(input_validation):\n        num_points = tf.shape(staff)[0]\n\n      # The segments cover left of the staff, each consecutive pair of points,\n      # and right of the staff.\n      num_segments = num_points + 1\n\n      def loop_body(i, ys_array):\n        \"\"\"Executes on each iteration of the TF while loop.\"\"\"\n        # Interpolate the y coordinates of the line between staff points i - 1\n        # and i (i >= 1). The y coordinates correspond to x in the interval\n        # [staff[i - 1, 0], staff[i, 0]).\n        x0 = staff[i - 1, 0]\n        y0 = staff[i - 1, 1]\n        x1 = staff[i, 0]\n        y1 = staff[i, 1]\n        segment_ys = (\n            tf.cast(\n                tf.round(\n                    tf.cast(y1 - y0, tf.float32) *\n                    tf.linspace(0., 1., x1 - x0 + 1)[:-1]), tf.int32) + y0)\n        # Update the loop variables. 
Increment i, and write the current segment\n        # ys to the array.\n        return i + 1, ys_array.write(i, segment_ys)\n\n      # Run a while loop to generate line segments between consecutive staff\n      # points.\n      all_ys_array = tf.TensorArray(\n          tf.int32, infer_shape=False, size=num_segments)\n      # The first segment covers [0, staff[0, 0]) (may be empty).\n      all_ys_array = all_ys_array.write(0, tf.tile([staff[0, 1]],\n                                                   [staff[0, 0]]))\n      # Write the segments in the interval [1, num_segments - 2].\n      unused_i, all_ys_array = tf.while_loop(\n          lambda i, unused_ys: i < num_segments - 1, loop_body,\n          [1, all_ys_array])\n      # The last segment covers [staff[-1, 0], width) (may be empty).\n      all_ys_array = all_ys_array.write(\n          num_segments - 1,\n          tf.tile([staff[-1, 1]], [image_shape[1] - staff[-1, 0]]))\n      all_ys = all_ys_array.concat()\n      output_validation = [\n          tf.Assert(\n              tf.equal(tf.shape(all_ys)[0], image_shape[1]),\n              [tf.shape(all_ys), image_shape]),\n      ]\n      # Validate the output before returning. We need an actual op inside the\n      # with statement (tf.identity).\n      with tf.control_dependencies(output_validation):\n        return tf.identity(all_ys)\n\n    # The map_fn will fail if there are no staves. 
In that case, return an empty\n    # array with the correct width.\n    return tf.cond(\n        tf.shape(self.staves)[0] > 0,\n        lambda: tf.map_fn(_get_staff_center_line_y, self.staves),\n        lambda: tf.zeros([0, image_shape[1]], tf.int32))\n\n  def compute(self, session=None, feed_dict=None):\n    \"\"\"Runs staff detection.\n\n    Args:\n      session: The session to use instead of the default session.\n      feed_dict: The feed dict for the TensorFlow graph.\n\n    Returns:\n      A `ComputedStaves` holding NumPy arrays for the staves.\n    \"\"\"\n    if session is None:\n      session = tf.get_default_session()\n    return ComputedStaves(*session.run(self.data, feed_dict=feed_dict))\n\n\nclass ComputedStaves(BaseStaffDetector):\n  \"\"\"Computed staves holder.\n\n  The result of `BaseStaffDetector.compute()`. Holds NumPy arrays with the\n      result of staff detection.\n  \"\"\"\n\n  def __init__(self, staves, staffline_distance, staffline_thickness,\n               staves_interpolated_y):\n    super(ComputedStaves, self).__init__()\n    # TODO(ringw): Add a way to ensure the inputs are array-like and not\n    # Tensor objects.\n    self.staves = np.asarray(staves)\n    self.staffline_distance = np.asarray(staffline_distance)\n    self.staffline_thickness = np.asarray(staffline_thickness)\n    self.staves_interpolated_y_arr = np.asarray(staves_interpolated_y)\n\n  @property\n  def staves_interpolated_y(self):\n    return self.staves_interpolated_y_arr\n\n  def compute(self, session=None, feed_dict=None):\n    \"\"\"Returns the already computed staves.\n\n    Args:\n      session: TensorFlow session; ignored.\n      feed_dict: TensorFlow feed dict; ignored.\n\n    Returns:\n      self.\n    \"\"\"\n    return self\n"
  },
  {
    "path": "moonlight/staves/detectors_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the staff detectors.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight import image as omr_image\nfrom moonlight import staves\nfrom moonlight.staves import staffline_distance\nfrom moonlight.staves import testing\n\n\nclass StaffDetectorsTest(tf.test.TestCase):\n\n  def setUp(self):\n    # The normal _MIN_STAFFLINE_DISTANCE_SCORE is too large for the small images\n    # used in unit tests.\n    self.old_min_staffline_distance_score = (\n        staffline_distance._MIN_STAFFLINE_DISTANCE_SCORE)\n    staffline_distance._MIN_STAFFLINE_DISTANCE_SCORE = 10\n\n  def tearDown(self):\n    staffline_distance._MIN_STAFFLINE_DISTANCE_SCORE = (\n        self.old_min_staffline_distance_score)\n\n  def test_single_staff(self):\n    blank_row = [255] * 50\n    staff_row = [255] * 4 + [0] * 42 + [255] * 4\n    # Create an image with 5 staff lines, with a slightly noisy staffline\n    # thickness and distance.\n    image = np.asarray([blank_row] * 25 + [staff_row] * 2 + [blank_row] * 8 +\n                       [staff_row] * 3 + [blank_row] * 8 + [staff_row] * 3 +\n                       [blank_row] * 9 + [staff_row] * 2 + [blank_row] * 8 +\n                       [staff_row] * 2 + [blank_row] * 25, np.uint8)\n    for 
detector in self.generate_staff_detectors(image):\n      with self.test_session() as sess:\n        staves_arr, staffline_distances, staffline_thickness = sess.run(\n            (detector.staves, detector.staffline_distance,\n             detector.staffline_thickness))\n      expected_y = 25 + 2 + 8 + 3 + 8 + 1  # y coordinate of the center line\n      self.assertEqual(\n          staves_arr.shape[0], 1,\n          'Expected single staff from detector %s. Got: %d' %\n          (detector, staves_arr.shape[0]))\n      self.assertAlmostEqual(\n          np.mean(staves_arr[0, :, 1]),  # average y position\n          expected_y,\n          delta=2.0)\n      self.assertAlmostEqual(staffline_distances[0], 11, delta=1.0)\n      self.assertLessEqual(staffline_thickness, 3)\n\n  def test_corpus_image(self):\n    # Test only the default staff detector (because projection won't detect all\n    # staves).\n    filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                            '../testdata/IMSLP00747-000.png')\n    image_t = omr_image.decode_music_score_png(tf.read_file(filename))\n    detector = staves.StaffDetector(image_t)\n    with self.test_session() as sess:\n      staves_arr, staffline_distances = sess.run(\n          [detector.staves, detector.staffline_distance])\n    self.assertAllClose(\n        np.mean(staves_arr[:, :, 1], axis=1),  # average y position\n        [413, 603, 848, 1040, 1286, 1476, 1724, 1915, 2162, 2354, 2604, 2795],\n        atol=5)\n    self.assertAllEqual(staffline_distances, [16] * 12)\n\n  def test_staves_interpolated_y(self):\n    # Test staff center line interpolation.\n    # The sequence of (x, y) points always starts at x = 0 and ends at\n    # x = width - 1.\n    staff = tf.constant(\n        np.array([[[0, 10], [5, 5], [11, 0], [15, 10], [20, 20], [23, 49]]],\n                 np.int32))\n\n    with self.test_session():\n      line_y = testing.FakeStaves(tf.zeros([50, 24]),\n                                  
staff).staves_interpolated_y[0].eval()\n    self.assertEqual(\n        list(line_y), [\n            10, 9, 8, 7, 6, 5, 4, 3, 3, 2, 1, 0, 2, 5, 8, 10, 12, 14, 16, 18,\n            20, 30, 39, 49\n        ])\n\n  def test_staves_interpolated_y_empty(self):\n    with self.test_session():\n      self.assertAllEqual(\n          testing.FakeStaves(tf.zeros([50, 25]), tf.zeros(\n              [0, 2, 2], np.int32)).staves_interpolated_y.eval().shape, [0, 25])\n\n  def test_staves_interpolated_y_staves_dont_extend_to_edge(self):\n    staff = tf.constant(np.array([[[5, 10], [12, 8]]], np.int32))\n    with self.test_session():\n      # The y values should extend past the endpoints to the edge of the image,\n      # and should be equal to the y value at the nearest endpoint.\n      self.assertAllEqual(\n          testing.FakeStaves(tf.zeros([50, 15]),\n                             staff).staves_interpolated_y[0].eval(),\n          [10, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 8, 8, 8, 8])\n\n  def generate_staff_detectors(self, image):\n    yield staves.ProjectionStaffDetector(image)\n    yield staves.FilteredHoughStaffDetector(image)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/staves/filter.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Staff center line filter.\n\nIdentifies candidates for the center (third line) of a staff in each column of\nan image. Using the estimated staffline distance, there must be black pixels\nin the expected positions of the five staff lines. Some stafflines will be\ncovered by a black glyph, so the black run will be thicker than the expected\nstaffline thickness. To account for this, at least three lines must have white\npixels both above and below them.\n\nThe filtered image is used for a Hough transform (`hough.py`) to robustly\nidentify staves.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import segments\n\n# The minimum number of columns that must have a candidate staff center line\n# in a given row, to detect the row as a staff center line.\nMIN_STAFF_SLICES = 0.25\n\n\ndef staff_center_filter(image,\n                        staffline_distance,\n                        staffline_thickness,\n                        threshold=127):\n  \"\"\"Filters the image for candidate staff center lines.\n\n  Args:\n    image: The 2D tensor image.\n    staffline_distance: The estimated staffline distance. Scalar tensor.\n    staffline_thickness: The estimated staffline thickness. Scalar tensor.\n    threshold: Scalar tensor. 
Pixels below the threshold are black (possible\n      stafflines).\n\n  Returns:\n    A boolean tensor of the same shape as image. The candidate center staff\n        lines.\n  \"\"\"\n  image = image < threshold\n\n  # Add is not supported for unsigned ints, so use int8 instead of uint8.\n  # Dark: the image is dark where we expect a staffline.\n  dark_staffline_count = tf.zeros_like(image, tf.int8)\n  # Space: the image is light above and below where we expect a staffline,\n  # indicating a horizontal line.\n  space_staffline_count = tf.zeros_like(image, tf.int8)\n  for staffline_pos in range(-2, 3):\n    expected_y_line = staffline_pos * staffline_distance\n    # Allow each staffline to differ slightly from the expected position.\n    # The second and fourth lines can differ by 1 pixel, and the first and fifth\n    # lines can differ by 2 pixels.\n    # At each possible location, look for a dark pixel and light space above and\n    # below.\n    found_dark = tf.zeros_like(image, tf.bool)\n    found_space = tf.zeros_like(image, tf.bool)\n    y_adjustments = range(-abs(staffline_pos), abs(staffline_pos) + 1)\n    for y_adjustment in y_adjustments:\n      y_line = expected_y_line + y_adjustment\n      y_above = y_line - 2 * staffline_thickness\n      y_below = y_line + 2 * staffline_thickness\n      found_dark |= _shift_y(image, y_line)\n      found_space |= tf.logical_not(\n          tf.logical_or(_shift_y(image, y_above), _shift_y(image, y_below)))\n    dark_staffline_count += tf.cast(found_dark, tf.int8)\n    space_staffline_count += tf.cast(found_space, tf.int8)\n  return tf.logical_and(\n      tf.equal(dark_staffline_count, 5),\n      tf.greater_equal(space_staffline_count, 3))\n\n\ndef _shift_y(image, y_offset):\n  \"\"\"Shift the image vertically.\n\n  Args:\n    image: The 2D tensor image.\n    y_offset: The vertical offset for the image.\n\n  Returns:\n    The shifted image. Each pixel is shifted up or down by y_offset. 
Blank space\n    is filled with zeros.\n  \"\"\"\n  height = tf.shape(image)[0]\n  width = tf.shape(image)[1]\n\n  def invalid():\n    return image\n\n  def shift_up():\n    # y_offset is positive\n    sliced = image[y_offset:]\n    return tf.concat(\n        [sliced, tf.zeros([y_offset, width], dtype=image.dtype)], axis=0)\n\n  def shift_down():\n    # y_offset is negative\n    sliced = image[:y_offset]\n    return tf.concat([tf.zeros([-y_offset, width], dtype=image.dtype), sliced],\n                     axis=0)\n\n  return tf.cond(height <= tf.abs(y_offset), invalid,\n                 lambda: tf.cond(y_offset >= 0, shift_up, shift_down))\n\n\ndef _get_staff_ys(is_staff, staffline_thickness):\n  # Return the detected staves--segments in is_staff that are roughly not much\n  # bigger than staffline_thickness.\n  segment_ys, segment_sizes = segments.true_segments_1d(is_staff)\n  return tf.boolean_mask(segment_ys, segment_sizes <= staffline_thickness * 2)\n"
  },
  {
    "path": "moonlight/staves/hough.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Filtered hough staff detector.\n\nRuns the staff center filter (see `filter.py`), and then uses the Hough\ntransform of the filtered image to detect nearly-horizontal lines (theta is\nclose to `pi / 2`).\n\nThe process is repeated for each unique detected staffline distance.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport math\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.staves import base\nfrom moonlight.staves import filter as staves_filter\nfrom moonlight.staves import staffline_distance as distance\nfrom moonlight.util import memoize\nfrom moonlight.vision import hough\n\n# The minimum number of columns that must have a candidate staff center line\n# in a given row, to detect the row as a staff center line.\nMIN_STAFF_SLICES = 0.25\nDEFAULT_MAX_ABS_THETA = math.pi / 50\nDEFAULT_NUM_THETA = 51\n\n\nclass FilteredHoughStaffDetector(base.BaseStaffDetector):\n  \"\"\"Filtered hough staff detector.\n\n  Runs the staff center filter (see `filter.py`), and then uses the Hough\n  transform of the filtered image to detect nearly-horizontal lines (theta is\n  close to `pi / 2`).\n  \"\"\"\n\n  def __init__(self,\n               image=None,\n               max_abs_theta=DEFAULT_MAX_ABS_THETA,\n               num_theta=DEFAULT_NUM_THETA):\n    \"\"\"Filtered hough 
staff detector.\n\n    Args:\n      image: The image. If None, a placeholder will be created.\n      max_abs_theta: The maximum deviation of the angle for the staff from the\n        horizontal, in radians.\n      num_theta: The number of thetas to be detected, between `pi/2 -\n        max_abs_theta` and `pi/2 + max_abs_theta`.\n    \"\"\"\n    super(FilteredHoughStaffDetector, self).__init__(image)\n    staffline_distance, staffline_thickness = (\n        distance.estimate_staffline_distance_and_thickness(self.image))\n    self.estimated_staffline_distance = staffline_distance\n    self.estimated_staffline_thickness = staffline_thickness\n    self.max_abs_theta = float(max_abs_theta)\n    self.num_theta = int(num_theta)\n\n  @property\n  def staves(self):\n    staves, _ = self._data\n    return staves\n\n  @property\n  def staffline_distance(self):\n    _, staffline_distance = self._data\n    return staffline_distance\n\n  @property\n  def staffline_thickness(self):\n    return self.estimated_staffline_thickness\n\n  @property\n  @memoize.MemoizedFunction\n  def _data(self):\n\n    def detection_loop_body(i, staves, staffline_distances):\n      \"\"\"Per-staffline-distance staff detection loop.\n\n      Args:\n        i: The index of the current staffline distance to use.\n        staves: The current staves tensor of shape (N, 2, 2).\n        staffline_distances: The current staffline distance tensor. 
1D with\n          length N.\n\n      Returns:\n        i + 1.\n        staves concatd with any newly detected staves.\n        staffline_distance with the current staffline distance appended for each\n            new staff.\n      \"\"\"\n      current_staffline_distance = self.estimated_staffline_distance[i]\n      current_staves = _SingleSizeFilteredHoughStaffDetector(\n          self.image, current_staffline_distance,\n          self.estimated_staffline_thickness, self.max_abs_theta,\n          self.num_theta).staves\n      staves = tf.concat([staves, current_staves], axis=0)\n      staffline_distances = tf.concat([\n          staffline_distances,\n          tf.tile([current_staffline_distance],\n                  tf.shape(staves)[0:1]),\n      ],\n                                      axis=0)\n      return i + 1, staves, staffline_distances\n\n    num_staffline_distances = tf.shape(self.estimated_staffline_distance)[0]\n    _, staves, staffline_distances = tf.while_loop(\n        lambda i, _, __: tf.less(i, num_staffline_distances),\n        detection_loop_body, [\n            tf.constant(0),\n            tf.zeros([0, 2, 2], tf.int32),\n            tf.zeros([0], tf.int32)\n        ],\n        shape_invariants=[\n            tf.TensorShape(()),\n            tf.TensorShape([None, 2, 2]),\n            tf.TensorShape([None])\n        ],\n        parallel_iterations=1)\n\n    # Sort by y0.\n    order, = _argsort(staves[:, 0, 1])\n    staves = tf.gather(staves, order)\n    staffline_distances = tf.gather(staffline_distances, order)\n\n    return staves, staffline_distances\n\n\nclass _SingleSizeFilteredHoughStaffDetector(object):\n  \"\"\"Filtered hough staff detector for a single staffline distance size.\n\n  This is run in a loop by `FilteredHoughStaffDetector` in order to cover all of\n  the detected staffline distances.\n\n  Runs the staff center filter (see `filter.py`), and then uses the Hough\n  transform of the filtered image to detect nearly-horizontal 
lines (theta is\n  close to `pi / 2`).\n  \"\"\"\n\n  def __init__(self, image, staffline_distance, staffline_thickness,\n               max_abs_theta, num_theta):\n    \"\"\"Filtered hough staff detector.\n\n    Args:\n      image: The image. If None, a placeholder will be created.\n      staffline_distance: The single staffline distance scalar to use.\n      staffline_thickness: The staffline thickness.\n      max_abs_theta: The maximum deviation of the angle for the staff from the\n        horizontal, in radians.\n      num_theta: The number of thetas to be detected, between `pi/2 -\n        max_abs_theta` and `pi/2 + max_abs_theta`.\n    \"\"\"\n    self.image = image\n    self.estimated_staffline_distance = staffline_distance\n    self.estimated_staffline_thickness = staffline_thickness\n    self.max_abs_theta = float(max_abs_theta)\n    self.num_theta = int(num_theta)\n\n  # Memoize this to not re-compute \"staves\" as a different tensor each time this\n  # property is referenced. 
TF's common subexpression elimination doesn't seem\n  # to handle this case, maybe because we have too many ops.\n  @property\n  def staves(self):\n    \"\"\"The staves detected for a single staffline distance.\n\n    Returns:\n      A staves tensor of shape (N, 2, 2).\n    \"\"\"\n    height = tf.shape(self.image)[0]\n    width = tf.shape(self.image)[1]\n    staff_center = staves_filter.staff_center_filter(\n        self.image, self.estimated_staffline_distance,\n        self.estimated_staffline_thickness)\n    all_thetas = tf.linspace(math.pi / 2 - self.max_abs_theta,\n                             math.pi / 2 + self.max_abs_theta, self.num_theta)\n    hough_bins = hough.hough_lines(staff_center, all_thetas)\n    staff_rhos, staff_thetas = hough.hough_peaks(\n        hough_bins,\n        all_thetas,\n        minval=MIN_STAFF_SLICES * tf.cast(width, tf.float32),\n        invalidate_distance=self.estimated_staffline_distance * 4)\n    num_staves = tf.shape(staff_rhos)[0]\n    # Interpolate the start and end points for the staff center line.\n    x0 = tf.zeros([num_staves], tf.int32)\n    y0 = tf.cast(\n        tf.cast(staff_rhos, tf.float32) / tf.sin(staff_thetas), tf.int32)\n    x1 = tf.fill([num_staves], width - 1)\n    y1 = tf.cast((tf.cast(staff_rhos, tf.float32) -\n                  tf.cast(width - 1, tf.float32) * tf.cos(staff_thetas)) /\n                 tf.sin(staff_thetas), tf.int32)\n    # Cut out staves which have a start or end y outside of the image.\n    is_valid = tf.logical_and(\n        tf.logical_and(0 <= y0, y0 < height),\n        tf.logical_and(0 <= y1, y1 < height))\n    staves = tf.reshape(tf.stack([x0, y0, x1, y1], axis=1), [-1, 2, 2])\n    return tf.boolean_mask(staves, is_valid)\n\n\n# TODO(ringw): Add tf.argsort.\ndef _argsort(values):\n  return tf.py_func(np.argsort, [values], [tf.int64])\n"
  },
  {
    "path": "moonlight/staves/projection.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A naive horizontal projection-based staff detector.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport scipy.ndimage\nimport tensorflow as tf\n\nfrom moonlight.staves import base\nfrom moonlight.util import memoize\n\n\nclass ProjectionStaffDetector(base.BaseStaffDetector):\n  \"\"\"A naive staff detector that uses horizontal projections.\n\n  Detects peaks in the number of black pixels in each row, which should\n  correspond to staff lines.\n  \"\"\"\n  staves_tensor = None\n  staffline_distance_tensor = None\n\n  def __init__(self, image=None):\n    super(ProjectionStaffDetector, self).__init__(image)\n    projection = tf.reduce_sum(tf.cast(self.image <= 127, tf.int32), 1)\n    width = tf.shape(self.image)[1]\n    min_num_dark_pixels = width // 2\n    staff_lines = projection > min_num_dark_pixels\n    staves, staffline_distance, staffline_thickness = tf.py_func(\n        _projection_to_staves, [staff_lines, width],\n        [tf.int32, tf.int32, tf.int32])\n    self.staves_tensor = staves\n    self.staffline_distance_tensor = staffline_distance\n    self.staffline_thickness_tensor = staffline_thickness\n\n  @property\n  @memoize.MemoizedFunction\n  def staves(self):\n    return self.staves_tensor\n\n  @property\n  @memoize.MemoizedFunction\n  def 
staffline_distance(self):\n    return self.staffline_distance_tensor\n\n  @property\n  @memoize.MemoizedFunction\n  def staffline_thickness(self):\n    return self.staffline_thickness_tensor\n\n\ndef _projection_to_staves(projection, width):\n  \"\"\"Pure python implementation of projection-based staff detection.\"\"\"\n  labels, num_labels = scipy.ndimage.measurements.label(projection)\n  current_staff = []\n  staff_center_lines = []\n  staffline_distance = []\n  staffline_thicknesses = []\n  for line in range(1, num_labels + 1):\n    line_start = np.where(labels == line)[0].min()\n    line_end = np.where(labels == line)[0].max()\n    staffline_thickness = line_end - line_start + 1\n    line_center = np.int32(round((line_start + line_end) / 2.0))\n    current_staff.append(line_center)\n    if len(current_staff) > 5:\n      del current_staff[0]\n    if len(current_staff) == 5:\n      dists = np.array([\n          current_staff[1] - current_staff[0],\n          current_staff[2] - current_staff[1],\n          current_staff[3] - current_staff[2],\n          current_staff[4] - current_staff[3]\n      ])\n      if np.max(dists) - np.min(dists) < 3:\n        staff_center = round(np.mean(current_staff))\n        staff_center_lines.append([[0, staff_center], [width - 1,\n                                                       staff_center]])\n        staffline_distance.append(round(np.mean(dists)))\n        staffline_thicknesses.append(staffline_thickness)\n  staffline_thickness = (\n      np.median(staffline_thicknesses).astype(np.int32)\n      if staffline_thicknesses else np.int32(1))\n  return (np.array(staff_center_lines,\n                   np.int32), np.array(staffline_distance,\n                                       np.int32), staffline_thickness)\n"
  },
  {
    "path": "moonlight/staves/removal.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Staffline removal for glyph classification.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import memoize\nfrom moonlight.util import segments\n\n# The number of lines to remove above and below the staff center line. This\n# removes the 5 staff lines, and 4 ledger lines (if present) above and below.\nLINES_TO_REMOVE_ABOVE_AND_BELOW = 6\n\n\nclass StaffRemover(object):\n  \"\"\"Removes staff lines for glyph classification.\n\n  Identifies and removes short vertical runs where we expect the staff lines.\n  This means that the extracted staffline images for classification are more\n  consistent, whether they are centered on the line or halfway between lines.\n  \"\"\"\n\n  def __init__(self, staff_detector, threshold=127):\n    self.staff_detector = staff_detector\n    self.threshold = threshold\n\n  @property\n  @memoize.MemoizedFunction\n  def remove_staves(self):\n    \"\"\"Returns the page with staff lines removed.\n\n    Returns:\n      An image of the same size as `self.staff_detector.image`, with staff lines\n      erased (set to white, 255).\n    \"\"\"\n    image = tf.convert_to_tensor(self.staff_detector.image)\n    height = tf.shape(image)[0]\n    width = tf.shape(image)[1]\n    # Max height of a run length that can be removed. 
Runs should have height\n    # around staffline_thickness.\n    max_runlength = self.staff_detector.staffline_thickness * 2\n\n    # Calculate the expected y position of each staff line for each staff and\n    # each column of the image.\n    staff_center_ys = self.staff_detector.staves_interpolated_y\n    all_staffline_center_ys = (\n        staff_center_ys[:, None, :] +\n        self.staff_detector.staffline_distance[:, None, None] *\n        tf.range(-LINES_TO_REMOVE_ABOVE_AND_BELOW,\n                 LINES_TO_REMOVE_ABOVE_AND_BELOW + 1)[None, :, None])\n    ys = tf.range(height)\n\n    def _process_column(i):\n      \"\"\"Removes staves from a single column of the image.\n\n      Args:\n        i: The index of the column to remove.\n\n      Returns:\n        The single column of the image with staff lines erased.\n      \"\"\"\n      column = image[:, i]\n      # Identify runs in the column that correspond to staff lines and can be\n      # erased.\n      runs, run_lengths = segments.true_segments_1d(column < self.threshold)\n\n      column_staffline_ys = all_staffline_center_ys[:, :, i]\n      # The run center has to be within staffline_thickness of a staff line.\n      run_matches_staffline = tf.less_equal(\n          tf.reduce_min(\n              tf.abs(runs[:, None, None] - column_staffline_ys[None, :, :]),\n              axis=[1, 2]), self.staff_detector.staffline_thickness)\n\n      keep_run = tf.logical_and(run_lengths < max_runlength,\n                                run_matches_staffline)\n      keep_run.set_shape([None])\n      runs = tf.boolean_mask(runs, keep_run)\n      run_lengths = tf.boolean_mask(run_lengths, keep_run)\n\n      def do_process_column(runs, run_lengths):\n        \"\"\"Process the column if there are any runs matching staff lines.\n\n        Args:\n          runs: The center of each vertical run.\n          run_lengths: The length of each vertical run.\n\n        Returns:\n          The column of the image with staff lines 
erased.\n        \"\"\"\n\n        # Erase ys that belong to a run corresponding to a staff line.\n        y_run_pair_distance = tf.abs(ys[:, None] - runs[None, :])\n        y_runs = tf.argmin(y_run_pair_distance, axis=1)\n        y_run_distance = tf.reduce_min(y_run_pair_distance, axis=1)\n        y_run_lengths = tf.gather(run_lengths, y_runs)\n        erase_y = tf.less_equal(y_run_distance, tf.floordiv(y_run_lengths, 2))\n        white_column = tf.fill(tf.shape(column), tf.constant(255, tf.uint8))\n        return tf.where(erase_y, white_column, column)\n\n      return tf.cond(\n          tf.shape(runs)[0] > 0, lambda: do_process_column(runs, run_lengths),\n          lambda: column)\n\n    return tf.transpose(\n        tf.map_fn(\n            _process_column,\n            tf.range(width),\n            name=\"staff_remover\",\n            dtype=tf.uint8))\n"
  },
  {
    "path": "moonlight/staves/removal_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for staff removal.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight import image as omr_image\nfrom moonlight import staves\nfrom moonlight.staves import removal\nfrom moonlight.staves import staffline_distance\n\n\nclass RemovalTest(tf.test.TestCase):\n\n  def test_corpus_image(self):\n    filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                            '../testdata/IMSLP00747-000.png')\n    image_t = omr_image.decode_music_score_png(tf.read_file(filename))\n    remover = removal.StaffRemover(staves.StaffDetector(image_t))\n    with self.test_session() as sess:\n      removed, image = sess.run([remover.remove_staves, image_t])\n      self.assertFalse(np.allclose(removed, image))\n      # If staff removal runs successfully, we should be unable to estimate the\n      # staffline distance from the staves-removed image.\n      est_staffline_distance, est_staffline_thickness = sess.run(\n          staffline_distance.estimate_staffline_distance_and_thickness(removed))\n      print(est_staffline_distance)\n      self.assertAllEqual([], est_staffline_distance)\n      self.assertEqual(-1, est_staffline_thickness)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/staves/staff_processor.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Adds staff location information to the Page.\n\nThe Page initially contains a single staff system with only glyphs, and this\nadds the location of each staff.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom six import moves\n\n\nclass StaffProcessor(object):\n\n  def __init__(self, structure, staffline_extractor):\n    self.staff_detector = structure.staff_detector\n    self.staffline_extractor = staffline_extractor\n\n  def apply(self, page):\n    \"\"\"Adds staff location information to the Page message.\"\"\"\n    assert len(page.system) == 1, ('Page must initially have a single staff '\n                                   'system')\n    assert len(page.system[0].staff) == len(self.staff_detector.staves), (\n        'Glyphs page must have the same number of staves as the staff detector')\n    staves_arr = self.staff_detector.staves\n    for i, staff in enumerate(page.system[0].staff):\n      staff.staffline_distance = self.staff_detector.staffline_distance[i]\n      for j in moves.xrange(staves_arr.shape[1]):\n        if (0 < j and j + 1 < staves_arr.shape[1] and\n            staves_arr[i, j - 1, 0] == staves_arr[i, j, 0] and\n            staves_arr[i, j, 0] == staves_arr[i, j + 1, 0]):\n          continue\n        point = staff.center_line.add()\n        point.x = staves_arr[i, 
j, 0]\n        point.y = staves_arr[i, j, 1]\n\n      # Scale the glyph x coordinates back for the original image.\n      if self.staffline_extractor:\n        # The height of an extracted slice of the image before scaling.\n        staffline_orig_height = (\n            staff.staffline_distance *\n            self.staffline_extractor.staffline_distance_multiple)\n        # The scale factor from the scaled staffline images to the original.\n        staffline_scale = (\n            staffline_orig_height / self.staffline_extractor.target_height)\n        for glyph in staff.glyph:\n          glyph.x = int(round(staffline_scale * glyph.x))\n    return page\n"
  },
  {
    "path": "moonlight/staves/staff_processor_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the staff page processor.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport copy\n\nfrom absl.testing import absltest\nimport numpy as np\n\nfrom moonlight import engine\nfrom moonlight import structure as structure_module\nfrom moonlight.glyphs import testing as glyphs_testing\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.staves import staff_processor\nfrom moonlight.staves import testing as staves_testing\n\n\nclass StaffProcessorTest(absltest.TestCase):\n\n  def testGetPage_x_scale(self):\n    # Random staffline images matching the dimensions of PREDICTIONS.\n    dummy_stafflines = np.random.random((2, 3, 5, 6))\n    classifier = glyphs_testing.DummyGlyphClassifier(glyphs_testing.PREDICTIONS)\n    image = np.random.randint(0, 255, (30, 20), dtype=np.uint8)\n    staves = staves_testing.FakeStaves(\n        image_t=image,\n        staves_t=np.asarray([[[0, 10], [19, 10]], [[0, 20], [19, 20]]],\n                            np.int32),\n        staffline_distance_t=np.asarray([5, 20], np.int32),\n        staffline_thickness_t=np.asarray(1, np.int32))\n    structure = structure_module.create_structure(image,\n                                                  lambda unused_image: staves)\n\n    class DummyStafflineExtractor(object):\n      \"\"\"A placeholder 
for StafflineExtractor.\n\n      It only contains the constants necessary to scale the x coordinates.\n      \"\"\"\n      staffline_distance_multiple = 2\n      target_height = 10\n\n    omr = engine.OMREngine(lambda _: classifier)\n    page = omr.process_image(\n        # Feed in a dummy image. It doesn't matter because FakeStaves has\n        # hard-coded staff values.\n        np.random.randint(0, 255, (100, 100)),\n        process_structure=False)\n    page = staff_processor.StaffProcessor(structure,\n                                          DummyStafflineExtractor()).apply(page)\n    self.assertEqual(len(page.system[0].staff), 2)\n    # The first staff has a staffline distance of 5.\n    # The extracted staffline slices have an original height of\n    # staffline_distance * staffline_distance_multiple (10), which equals\n    # target_height here, so there is no scaling.\n    self.assertEqual(\n        musicscore_pb2.Staff(glyph=page.system[0].staff[0].glyph),\n        glyphs_testing.GLYPHS_PAGE.system[0].staff[0])\n    # Glyphs in the second staff have a scaled x coordinate.\n    self.assertEqual(\n        len(page.system[0].staff[1].glyph),\n        len(glyphs_testing.GLYPHS_PAGE.system[0].staff[1].glyph))\n    for glyph in glyphs_testing.GLYPHS_PAGE.system[0].staff[1].glyph:\n      expected_glyph = copy.deepcopy(glyph)\n      # The second staff has a staffline distance of 20. The extracted staffline\n      # slice would be 4 times the size of the scaled staffline, so x\n      # coordinates are scaled by 4. Also, the glyphs may be in a different\n      # order.\n      expected_glyph.x *= 4\n      self.assertIn(expected_glyph, page.system[0].staff[1].glyph)\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/staves/staffline_distance.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Implements staffline distance estimation.\n\nThe staffline distance is the vertical distance between consecutive lines in a\nstaff, which is assumed to be uniform for a single staff on a scanned music\nscore. The staffline thickness is the vertical height of each staff line, which\nis assumed to be uniform for the entire page.\n\nUses the algorithm described in [1], which creates a histogram of possible\nstaffline distance and thickness values for the entire image, based on the\nvertical run-length encoding [2]. Each consecutive pair of black and white runs\ncontributes to the staffline distance histogram (because they may be the\nstaffline followed by an unobstructed space, or vice versa). We then take the\nargmax of the histogram, and find candidate staff line runs. These runs must be\nbefore or after another run, such that the sum of the run lengths is the\ndetected staffline distance. Then the black run is considered to be an actual\nstaff line, and its length contributes to the staffline thickness histogram.\n\nAlthough we use a single staffline distance value for staffline thickness\ndetection, we may detect multiple distinct peaks in the histogram. We then run\nstaff detection using each distinct peak value, to detect smaller staves with an\nunusual size, e.g. ossia parts [3].\n\n[1] Cardoso, Jaime S., and Ana Rebelo. 
\"Robust staffline thickness and distance\n    estimation in binary and gray-level music scores.\" 20th International\n    Conference on Pattern Recognition (ICPR). IEEE, 2010.\n[2] https://en.wikipedia.org/wiki/Run-length_encoding\n[3] https://en.wikipedia.org/wiki/Ossia\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import run_length\nfrom moonlight.util import segments\n\n# The size of the histograms. Normal values for the peak are around 20 for\n# staffline distance, and 2-3 for staffline thickness.\n_MAX_STAFFLINE_DISTANCE_THICKNESS_VALUE = 256\n\n# The minimum number of votes for a staffline distance bin. We expect images to\n# be a reasonable size (> 100x100), and want to ensure we exclude images that\n# don't contain any staves.\n_MIN_STAFFLINE_DISTANCE_SCORE = 10000\n\n# The maximum allowed number of unique staffline distances. If more staffline\n# distances are detected, return an empty list instead.\n_MAX_ALLOWED_UNIQUE_STAFFLINE_DISTANCES = 3\n\n_STAFFLINE_DISTANCE_INVALIDATE_DISTANCE = 1\n_STAFFLINE_THICKNESS_INVALIDATE_DISTANCE = 1\n_PEAK_CUTOFF = 0.5\n\n\ndef _single_peak(values, relative_cutoff, minval, invalidate_distance):\n  \"\"\"Takes a single peak if it is high enough compared to all other peaks.\n\n  Args:\n    values: 1D tensor of values to take the peaks on.\n    relative_cutoff: The fraction of the highest peak which all other peaks\n      should be below.\n    minval: The peak should have at least this value.\n    invalidate_distance: Exclude values that are up to invalidate_distance away\n      from the peak.\n\n  Returns:\n    The index of the single peak in `values`, or -1 if there is not a single\n        peak that satisfies `relative_cutoff`.\n  \"\"\"\n  relative_cutoff = tf.convert_to_tensor(relative_cutoff, tf.float32)\n\n  # argmax is safe because the histogram is always non-empty.\n  peak = 
tf.to_int32(tf.argmax(values))\n  # Gather the values more than invalidate_distance away from the peak.\n  other_values = tf.boolean_mask(\n      values,\n      tf.greater(\n          tf.abs(tf.range(tf.shape(values)[0]) - peak), invalidate_distance))\n  should_take_peak = tf.logical_and(\n      tf.greater_equal(values[peak], minval),\n      # values[peak] * relative_cutoff must be >= other_values.\n      tf.reduce_all(\n          tf.greater_equal(\n              tf.to_float(values[peak]) * relative_cutoff,\n              tf.to_float(other_values))))\n  return tf.cond(should_take_peak, lambda: peak, lambda: -1)\n\n\ndef _estimate_staffline_distance(columns, lengths):\n  \"\"\"Estimates the staffline distances of a music score.\n\n  Args:\n    columns: 1D array. The column indices of each vertical run.\n    lengths: 1D array. The length of each consecutive vertical run.\n\n  Returns:\n    A 1D tensor of possible staffline distances in the image.\n  \"\"\"\n  with tf.name_scope('estimate_staffline_distance'):\n    run_pair_lengths = lengths[:-1] + lengths[1:]\n    keep_pair = tf.equal(columns[:-1], columns[1:])\n    staffline_distance_histogram = tf.bincount(\n        tf.boolean_mask(run_pair_lengths, keep_pair),\n        # minlength required to avoid errors on a fully white image.\n        minlength=_MAX_STAFFLINE_DISTANCE_THICKNESS_VALUE,\n        maxlength=_MAX_STAFFLINE_DISTANCE_THICKNESS_VALUE)\n    peaks = segments.peaks(\n        staffline_distance_histogram,\n        minval=_MIN_STAFFLINE_DISTANCE_SCORE,\n        invalidate_distance=_STAFFLINE_DISTANCE_INVALIDATE_DISTANCE)\n\n    def do_filter_peaks():\n      \"\"\"Process the peaks if they are non-empty.\n\n      Returns:\n        The filtered peaks. Peaks below the cutoff when compared to the highest\n            peak are removed. 
If the peaks are invalid, then an empty list is\n            returned.\n      \"\"\"\n      histogram_size = tf.shape(staffline_distance_histogram)[0]\n      peak_values = tf.to_float(tf.gather(staffline_distance_histogram, peaks))\n      max_value = tf.reduce_max(peak_values)\n      allowed_peaks = tf.greater_equal(peak_values,\n                                       max_value * tf.constant(_PEAK_CUTOFF))\n\n      # Check if there are too many detected staffline distances, and we should\n      # return an empty list.\n      allowed_peaks &= tf.less_equal(\n          tf.reduce_sum(tf.to_int32(allowed_peaks)),\n          _MAX_ALLOWED_UNIQUE_STAFFLINE_DISTANCES)\n\n      # Check if any values sufficiently far away from the peaks are too high.\n      # This means the peaks are not sharp enough and we should return an empty\n      # list.\n      far_from_peak = tf.greater(\n          tf.reduce_min(\n              tf.abs(tf.range(histogram_size)[None, :] - peaks[:, None]),\n              axis=0), _STAFFLINE_DISTANCE_INVALIDATE_DISTANCE)\n      allowed_peaks &= tf.less(\n          tf.to_float(\n              tf.reduce_max(\n                  tf.boolean_mask(staffline_distance_histogram,\n                                  far_from_peak))),\n          max_value * tf.constant(_PEAK_CUTOFF))\n\n      return tf.boolean_mask(peaks, allowed_peaks)\n\n    return tf.cond(\n        tf.greater(tf.shape(peaks)[0], 0), do_filter_peaks,\n        lambda: tf.identity(peaks))\n\n\ndef _estimate_staffline_thickness(columns, values, lengths, staffline_distance):\n  \"\"\"Estimates the staffline thickness of a music score.\n\n  Args:\n    columns: 1D array. The column indices of each consecutive vertical run.\n    values: 1D array. The value (0 or 1) of each vertical run.\n    lengths: 1D array. The length of each vertical run.\n    staffline_distance: A 1D tensor of the possible staffline distances in the\n      image. 
One of the distances may be chosen arbitrarily.\n\n  Returns:\n    A scalar tensor with the staffline thickness for the entire page, or -1 if\n      it could not be estimated (staffline_distance is empty, or there are not\n      enough runs to estimate the staffline thickness).\n  \"\"\"\n\n  with tf.name_scope('estimate_staffline_thickness'):\n\n    def do_estimate():\n      \"\"\"Compute the thickness if distance detection was successful.\"\"\"\n      run_pair_lengths = lengths[:-1] + lengths[1:]\n      # Use the smallest staffline distance to estimate the staffline thickness.\n      keep_pair = tf.logical_and(\n          tf.equal(columns[:-1], columns[1:]),\n          tf.equal(run_pair_lengths, staffline_distance[0]))\n\n      run_pair_lengths = tf.boolean_mask(run_pair_lengths, keep_pair)\n      start_values = tf.boolean_mask(values[:-1], keep_pair)\n      start_lengths = tf.boolean_mask(lengths[:-1], keep_pair)\n      end_lengths = tf.boolean_mask(lengths[1:], keep_pair)\n\n      staffline_thickness_values = tf.where(\n          tf.not_equal(start_values, 0), start_lengths, end_lengths)\n      staffline_thickness_histogram = tf.bincount(\n          staffline_thickness_values,\n          minlength=_MAX_STAFFLINE_DISTANCE_THICKNESS_VALUE,\n          maxlength=_MAX_STAFFLINE_DISTANCE_THICKNESS_VALUE)\n\n      return _single_peak(\n          staffline_thickness_histogram,\n          _PEAK_CUTOFF,\n          minval=1,\n          invalidate_distance=_STAFFLINE_THICKNESS_INVALIDATE_DISTANCE)\n\n    return tf.cond(\n        tf.greater(tf.shape(staffline_distance)[0], 0), do_estimate,\n        lambda: tf.constant(-1, tf.int32))\n\n\ndef estimate_staffline_distance_and_thickness(image, threshold=127):\n  \"\"\"Estimates the staffline distance and thickness of a music score.\n\n  Args:\n    image: A 2D tensor (HW) and type uint8.\n    threshold: The global threshold for the image.\n\n  Returns:\n    The estimated vertical distance(s) from the center of one staffline to 
the\n        next in the music score. 1D tensor containing all unique values of the\n        estimated staffline distance for each staff.\n    The estimated staffline thickness of the music score.\n\n  Raises:\n    TypeError: If `image` is an invalid type.\n  \"\"\"\n  image = tf.convert_to_tensor(image, name='image', dtype=tf.uint8)\n  threshold = tf.convert_to_tensor(threshold, name='threshold', dtype=tf.uint8)\n  if image.dtype.base_dtype != tf.uint8:\n    raise TypeError('Invalid dtype %s.' % image.dtype)\n\n  columns, values, lengths = run_length.vertical_run_length_encoding(\n      tf.less(image, threshold))\n\n  staffline_distance = _estimate_staffline_distance(columns, lengths)\n  staffline_thickness = _estimate_staffline_thickness(columns, values, lengths,\n                                                      staffline_distance)\n  # staffline_thickness may be -1 even if staffline_distance > 0. Fix it so\n  # that we can check either one to determine whether there are staves.\n  staffline_distance = tf.cond(\n      tf.equal(staffline_thickness, -1), lambda: tf.zeros([0], tf.int32),\n      lambda: tf.identity(staffline_distance))\n  return staffline_distance, staffline_thickness\n"
  },
  {
    "path": "moonlight/staves/staffline_distance_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for staffline distance estimation.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport tensorflow as tf\n\nfrom moonlight.image import decode_music_score_png\nfrom moonlight.staves import staffline_distance\n\n\nclass StafflineDistanceTest(tf.test.TestCase):\n\n  def testCorpusImage(self):\n    filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                            '../testdata/IMSLP00747-000.png')\n    image_contents = open(filename, 'rb').read()\n    image_t = decode_music_score_png(tf.constant(image_contents))\n    staffdist_t, staffthick_t = (\n        staffline_distance.estimate_staffline_distance_and_thickness(image_t,))\n    with self.test_session() as sess:\n      staffdist, staffthick = sess.run((staffdist_t, staffthick_t))\n    # Manually determined values for the image.\n    self.assertAllEqual(staffdist, [16])\n    self.assertEquals(staffthick, 2)\n\n  def testZeros(self):\n    # All white (0) shouldn't be picked up as a music score.\n    image_t = tf.zeros((512, 512), dtype=tf.uint8)\n    staffdist_t, staffthick_t = (\n        staffline_distance.estimate_staffline_distance_and_thickness(image_t))\n    with self.test_session() as sess:\n      staffdist, staffthick = sess.run((staffdist_t, staffthick_t))\n    
self.assertAllEqual(staffdist, [])\n    self.assertEqual(staffthick, -1)\n\n  def testSpeckles(self):\n    # Random speckles shouldn't be picked up as a music score.\n    tf.set_random_seed(1234)\n    image_t = tf.where(\n        tf.random_uniform((512, 512)) < 0.1,\n        tf.fill((512, 512), tf.constant(255, tf.uint8)),\n        tf.fill((512, 512), tf.constant(0, tf.uint8)))\n    staffdist_t, staffthick_t = (\n        staffline_distance.estimate_staffline_distance_and_thickness(image_t))\n    with self.test_session() as sess:\n      staffdist, staffthick = sess.run((staffdist_t, staffthick_t))\n    self.assertAllEqual(staffdist, [])\n    self.assertEqual(staffthick, -1)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/staves/staffline_extractor.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Extracts horizontal slices from a staff for glyph classification.\"\"\"\n# TODO(ringw): Rename StafflineExtractor to PositionExtractor. Stafflines in\n# this context should be renamed \"extracted positions\" to avoid confusion.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport enum\nfrom moonlight import image as image_module\nfrom moonlight import staves as staves_module\nfrom moonlight.staves import removal\nfrom six import moves\nimport tensorflow as tf\n\nDEFAULT_TARGET_HEIGHT = 18\nDEFAULT_NUM_SECTIONS = 19\nDEFAULT_STAFFLINE_DISTANCE_MULTIPLE = 3\n\n\nclass Axes(enum.IntEnum):\n\n  STAFF = 0\n  POSITION = 1\n  Y = 2\n  X = 3\n\n\ndef get_staffline(y_position, extracted_staff_arr):\n  \"\"\"Gets the staffline of the extracted staff.\n\n  Args:\n    y_position: The staffline position--the relative number of notes from the\n      3rd line on the staff.\n    extracted_staff_arr: An extracted staff NumPy array, e.g.\n      `StafflineExtractor.extract_staves()[0].eval()` (`StafflineExtractor`\n      returns multiple staves).\n\n  Returns:\n    The correct staffline from `extracted_staff_arr`, with shape\n        `(target_height, image_width)`.\n\n  Raises:\n    ValueError: If the `y_position` is out of bounds in either direction.\n  \"\"\"\n  return 
extracted_staff_arr[y_position_to_index(y_position,\n                                                 len(extracted_staff_arr))]\n\n\ndef y_position_to_index(y_position, num_stafflines):\n  index = num_stafflines // 2 - y_position\n  if not 0 <= index < num_stafflines:\n    raise ValueError('y_position %d too large for %d stafflines' %\n                     (y_position, num_stafflines))\n  return index\n\n\nclass StafflineExtractor(object):\n  \"\"\"Extracts horizontal slices from a staff for glyph classification.\n\n  Glyphs must be centered on either a staff line or a staff space (halfway\n  between staff lines). For classification, a window is extracted with height\n  2*staffline_distance around a staffline or staff space. If num_sections is 9,\n  extracts the five staff lines and the staff spaces between them.\n\n  The slice is scaled proportionally to the staffline distance, making the\n  output height equal to target_height, so that the glyph classifier is\n  scale-invariant.\n\n  This class is used in inference as part of a larger TF graph. See\n  StafflinePatchExtractor for training.\n  \"\"\"\n\n  def __init__(self,\n               image,\n               staves,\n               target_height=DEFAULT_TARGET_HEIGHT,\n               num_sections=DEFAULT_NUM_SECTIONS,\n               staffline_distance_multiple=DEFAULT_STAFFLINE_DISTANCE_MULTIPLE):\n    \"\"\"Create the staffline extractor.\n\n    Args:\n      image: A uint8 tensor of shape (height, width). The background (usually\n        white) must have a value of 0.\n      staves: An instance of base.BaseStaffDetector.\n      target_height: The height of the scaled output windows.\n      num_sections: The number of stafflines to extract.\n      staffline_distance_multiple: The height of the extracted staffline, in\n        multiples of the staffline distance. For example, a notehead should fit\n        in a staffline distance multiple of 1, because it starts and ends\n        vertically on a staff line. 
However, other glyphs may need more space\n        above and below to classify accurately.\n    \"\"\"\n    self.float_image = tf.cast(image, tf.float32) / 255.\n    self.staves = staves\n    self.target_height = target_height\n    self.num_sections = num_sections\n    self.staffline_distance_multiple = staffline_distance_multiple\n\n    # Calculate the maximum width needed.\n    min_staffline_distance = tf.reduce_min(staves.staffline_distance)\n    self.target_width = self._get_resized_width(min_staffline_distance)\n\n  def extract_staves(self):\n    \"\"\"Extracts stafflines from all staves in the image.\n\n    Returns:\n      A float32 Tensor of shape\n      (num_staves, num_sections, target_height, slice_width). If the staffline\n      distance is inconsistent between staves, smaller staves will be padded\n      on the right with zeros.\n    \"\"\"\n\n    # Only map if we have any staves, otherwise return an empty array with the\n    # correct dimensionality.\n    def do_extract_staves():\n      \"\"\"Actually performs staffline extraction if we have any staves.\n\n      Returns:\n        The stafflines tensor. 
See outer function doc.\n      \"\"\"\n      staff_ys = self.staves.staves_interpolated_y\n\n      def extract_staff(i):\n\n        def extract_staffline_by_index(j):\n          return self._extract_staffline(staff_ys[i],\n                                         self.staves.staffline_distance[i], j)\n\n        return tf.map_fn(\n            extract_staffline_by_index,\n            tf.range(-(self.num_sections // 2), self.num_sections // 2 + 1),\n            dtype=tf.float32)\n\n      return tf.map_fn(\n          extract_staff,\n          tf.range(tf.shape(self.staves.staves)[0]),\n          dtype=tf.float32)\n\n    # Shape of the empty stafflines tensor, if no staves are present.\n    empty_shape = (0, self.num_sections, self.target_height, 0)\n    stafflines = tf.cond(\n        tf.shape(self.staves.staves)[0] > 0,\n        do_extract_staves,\n        # Otherwise, return an empty stafflines array.\n        lambda: tf.zeros(empty_shape, tf.float32))\n    # We need target_height to be statically known for e.g. `util/patches.py`.\n    stafflines.set_shape((None, self.num_sections, self.target_height, None))\n    return stafflines\n\n  def _extract_staffline(self, staff_y, staffline_distance, staffline_num):\n    \"\"\"Extracts a single staffline from a single staff.\"\"\"\n    # Use a float image on a 0.0-1.0 scale for classification.\n    image_shape = tf.shape(self.float_image)\n    height = image_shape[0]  # Can't unpack a tensor object.\n    width = image_shape[1]\n\n    # Calculate the height of the extracted staffline in the unscaled image.\n    staff_window = self._get_staffline_window_size(staffline_distance)\n\n    # Calculate the coordinates to extract for the window.\n    # Note: tf.meshgrid uses xs before ys by default, but y is the 0th axis\n    # for indexing.\n    xs, ys = tf.meshgrid(\n        tf.range(width),\n        tf.range(staff_window) - (staff_window // 2))\n    # ys are centered around 0. 
Add the staff_y, repeating along the\n    # 0th axis.\n    ys += tf.tile(staff_y[None, :], [staff_window, 1])\n    # Add the offset for the staff line within the staff.\n    # Round up in case the y position is not whole (in between staff lines with\n    # an odd staffline distance). This puts the center of the staff space closer\n    # to the center of the window.\n    ys += tf.cast(\n        tf.ceil(tf.truediv(staffline_num * staffline_distance, 2)), tf.int32)\n\n    invalid = tf.logical_not((0 <= ys) & (ys < height) & (0 <= xs)\n                             & (xs < width))\n    # Use a coordinate of (0, 0) for pixels outside of the original image.\n    # We will then fill in those pixels with zeros.\n    ys = tf.where(invalid, tf.zeros_like(ys), ys)\n    xs = tf.where(invalid, tf.zeros_like(xs), xs)\n    inds = tf.stack([ys, xs], axis=2)\n    staffline_image = tf.gather_nd(self.float_image, inds)\n    # Fill the pixels outside of the original image with zeros.\n    staffline_image = tf.where(invalid, tf.zeros_like(staffline_image),\n                               staffline_image)\n\n    # Calculate the proportional width after scaling the height to\n    # self.target_height.\n    resized_width = self._get_resized_width(staffline_distance)\n    # Use area resizing because we expect the output to be smaller.\n    # Add extra axes, because we only have 1 image and 1 channel.\n    staffline_image = tf.image.resize_area(\n        staffline_image[None, :, :,\n                        None], [self.target_height, resized_width])[0, :, :, 0]\n    # Pad to make the width consistent with target_width.\n    staffline_image = tf.pad(staffline_image,\n                             [[0, 0], [0, self.target_width - resized_width]])\n    return staffline_image\n\n  def _get_resized_width(self, staffline_distance):\n    image_width = tf.shape(self.float_image)[1]\n    window_height = self._get_staffline_window_size(staffline_distance)\n    return tf.cast(\n        
tf.round(tf.truediv(image_width * self.target_height, window_height)),\n        tf.int32)\n\n  def _get_staffline_window_size(self, staffline_distance):\n    return tf.to_int32(\n        tf.round(\n            tf.to_float(staffline_distance) *\n            tf.to_float(self.staffline_distance_multiple)))\n\n\nclass StafflinePatchExtractor(object):\n  \"\"\"Wraps the OMR TensorFlow graph and performs staff patch extraction.\n\n  While inference uses StafflineExtractor/Convolutional1DGlyphClassifier to\n  efficiently extract patches within the TF graph, StafflinePatchExtractor\n  encapsulates the TF graph necessary for extraction. Therefore, it is to be\n  used for training example extraction, where only staff detection and staffline\n  extraction are run in TF.\n  \"\"\"\n\n  def __init__(self,\n               num_sections=DEFAULT_NUM_SECTIONS,\n               patch_height=15,\n               patch_width=12,\n               run_options=None):\n    self.num_sections = num_sections\n    self.patch_height = patch_height\n    self.patch_width = patch_width\n    self.run_options = run_options\n\n    self.graph = tf.Graph()\n    with self.graph.as_default():\n      # Identifying information for the patch.\n      self.filename = tf.placeholder(tf.string, name='filename')\n      self.staff_index = tf.placeholder(tf.int64, name='staff_index')\n      self.y_position = tf.placeholder(tf.int64, name='y_position')\n\n      image = image_module.decode_music_score_png(tf.read_file(self.filename))\n      staff_detector = staves_module.StaffDetector(image)\n      staff_remover = removal.StaffRemover(staff_detector)\n      extractor = StafflineExtractor(\n          staff_remover.remove_staves,\n          staff_detector,\n          num_sections=num_sections,\n          target_height=patch_height)\n      # Index into the staff strips array, where a y position of 0 is the center\n      # element. 
Positive positions count up (towards higher notes, towards the\n      # top of the image, and smaller indices into the array).\n      position_index = num_sections // 2 - self.y_position\n      self.all_stafflines = extractor.extract_staves()\n      # The entire extracted horizontal strip of the image.\n      self.staffline = self.all_stafflines[self.staff_index, position_index]\n\n      # Determine the scale for converting image x coordinates to the scaled\n      # staff strip from which the patch is extracted.\n      extracted_staff_strip_height = tf.shape(self.all_stafflines)[2]\n      unscaled_staff_strip_heights = tf.multiply(\n          DEFAULT_STAFFLINE_DISTANCE_MULTIPLE,\n          staff_detector.staffline_distance)\n      self.all_staffline_scales = tf.divide(\n          tf.to_float(extracted_staff_strip_height),\n          tf.to_float(unscaled_staff_strip_heights))\n      self.staffline_scale = self.all_staffline_scales[self.staff_index]\n\n  def extract_staff_strip(self, filename, staff_index, y_position):\n    \"\"\"Extracts an entire horizontal strip from the image.\n\n    Args:\n      filename: The absolute filename of the image.\n      staff_index: Index of the staff out of all staves on the page.\n      y_position: Note y position on the staff, on which to extract the strip.\n        The position starts out 0 for the staff center line, and grows more\n        positive for higher notes.\n\n    Returns:\n      A tuple of:\n        A wide strip of the image as a NumPy array.\n        The scale factor from the original image scale to the normalized staff\n            strip scale.\n    \"\"\"\n    return tf.get_default_session().run(\n        [self.staffline, self.staffline_scale],\n        feed_dict={\n            self.filename: filename,\n            self.staff_index: staff_index,\n            self.y_position: y_position,\n        },\n        options=self.run_options)\n\n  def extract_staff_patch(self, filename, staff_index, y_position, image_x):\n    
\"\"\"Extracts a rectangular patch to be labeled.\n\n    Args:\n      filename: The absolute filename of the image.\n      staff_index: Index of the staff out of all staves on the page.\n      y_position: Note y position on the staff, on which to extract the strip.\n      image_x: The coordinate of the patch center x, in image coordinates.\n\n    Returns:\n      The rectangular NumPy array for the patch.\n\n    Raises:\n      ValueError: If the x coordinate is too close to the left or right edge of\n          the image to extract a full patch.\n    \"\"\"\n    staffline, scale = self.extract_staff_strip(filename, staff_index,\n                                                y_position)\n    staffline_x = int(round(image_x * scale))\n    patch_x_start = staffline_x + (-self.patch_width // 2)\n    patch_x_stop = staffline_x + self.patch_width // 2\n    if not (self.patch_width // 2 <= staffline_x <\n            staffline.shape[1] - self.patch_width // 2):\n      raise ValueError('image_x too close to bounds of image')\n    return staffline[:, patch_x_start:patch_x_stop]\n\n  def page_patch_iterator(self, filename):\n    \"\"\"Iterates over every patch on every staff.\n\n    Args:\n      filename: Path to a PNG file.\n\n    Returns:\n      A generator yielding pairs of patch id (with coordinates) and 2D\n      `np.ndarray` with the patch contents.\n    \"\"\"\n    all_stafflines, scales = tf.get_default_session().run(\n        [self.all_stafflines, self.all_staffline_scales],\n        feed_dict={self.filename: filename},\n        options=self.run_options)\n\n    def generator():\n      \"\"\"The patch generator.\n\n      Yields:\n        Every extracted patch, as a logical id and patch ndarray.\n      \"\"\"\n      for staff_index, staff in enumerate(all_stafflines):\n        scale = scales[staff_index]\n        for staffline_index, staffline in enumerate(staff):\n          y_position = self.num_sections // 2 - staffline_index\n          prev_image_x = None\n\n        
  for staffline_x_start in moves.xrange(staffline.shape[1] -\n                                                self.patch_width):\n            image_x = int(\n                round((staffline_x_start + self.patch_width // 2) / scale))\n            if image_x != prev_image_x:\n              patch_id = StafflinePatchExtractor.make_patch_id(\n                  filename, staff_index, y_position, image_x)\n              patch = staffline[:, staffline_x_start:staffline_x_start +\n                                self.patch_width]\n              yield patch_id, patch\n              prev_image_x = image_x\n\n    return generator()\n\n  @staticmethod\n  def make_patch_id(filename, staff_index, y_position, image_x):\n    \"\"\"Formats the short id uniquely identifying a patch in the training set.\"\"\"\n    short_filename, _ = os.path.splitext(os.path.basename(filename))\n    # Format y_position as e.g. +2, +0, or -3.\n    return '{},{},{:+d},{}'.format(short_filename, staff_index, y_position,\n                                   image_x)\n"
  },
  {
    "path": "moonlight/staves/staffline_extractor_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for StafflineExtractor.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight import staves\nfrom moonlight.staves import staffline_extractor\n\n\nclass StafflineExtractorTest(tf.test.TestCase):\n\n  def setUp(self):\n    # Small image with a single staff.\n    # pyformat: disable\n    self.single_staff_image = np.asarray(\n        [[1, 1, 1, 1, 1, 1, 1],\n         [1, 0, 0, 0, 0, 0, 1],\n         [1, 1, 0, 1, 1, 1, 1],\n         [1, 1, 1, 1, 0, 0, 1],\n         [1, 0, 0, 0, 0, 0, 1],\n         [1, 1, 1, 1, 1, 1, 1],\n         [1, 1, 1, 1, 1, 1, 1],\n         [1, 0, 0, 0, 0, 0, 1],\n         [1, 0, 1, 1, 1, 1, 1],\n         [1, 1, 1, 1, 1, 1, 1],\n         [1, 0, 0, 0, 0, 0, 1],\n         [1, 1, 1, 1, 1, 1, 1],\n         [1, 1, 1, 1, 1, 1, 1],\n         [1, 0, 0, 0, 0, 0, 1]], np.uint8) * 255\n\n  def testExtractStaff(self):\n    image_t = tf.constant(self.single_staff_image, name='image')\n    detector = staves.ProjectionStaffDetector(image_t)\n    # The staffline distance is 3, so use a target height of 6 to avoid scaling\n    # the image.\n    extractor = staffline_extractor.StafflineExtractor(\n        image_t,\n        detector,\n        target_height=6,\n        num_sections=9,\n        
staffline_distance_multiple=2)\n    with self.test_session():\n      stafflines = extractor.extract_staves().eval()\n    assert stafflines.shape == (1, 9, 6, 7)\n    # The top staff line is at a y-value of 2 because of rounding.\n    assert np.array_equal(\n        stafflines[0, 0],\n        np.concatenate((np.zeros((2, 7)), self.single_staff_image[:4] / 255.0)))\n    # The staff space is centered in the window.\n    assert np.array_equal(stafflines[0, 3],\n                          self.single_staff_image[3:9] / 255.0)\n    # Staffline height is 3 and extracted strips have 2 staff line distances\n    # with a total height of 6, so the strip is not actually scaled.\n    self.assertTrue(\n        np.logical_or(np.isclose(stafflines, 1.), np.isclose(stafflines,\n                                                             0.)).all())\n\n  def testFloatMultiple(self):\n    image_t = tf.constant(self.single_staff_image, name='image')\n    detector = staves.ProjectionStaffDetector(image_t)\n    extractor = staffline_extractor.StafflineExtractor(\n        image_t,\n        detector,\n        target_height=6,\n        num_sections=9,\n        staffline_distance_multiple=1.5)\n    with self.test_session():\n      stafflines = extractor.extract_staves().eval()\n    # Staff strip is scaled up because the input height is less. 
Some output\n    # pixels have aliasing.\n    self.assertAllEqual(stafflines.shape, (1, 9, 6, 10))\n    self.assertFalse(\n        np.logical_or(np.isclose(stafflines, 1.), np.isclose(stafflines,\n                                                             0.)).all())\n\n\nclass StafflinePatchExtractorTest(tf.test.TestCase):\n\n  def testCompareIteratorWithSinglePatch(self):\n    # Patches from the iterator should be exactly equal to the patch when the\n    # coordinates from the id are given.\n    filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                            '../testdata/IMSLP00747-000.png')\n\n    extractor = staffline_extractor.StafflinePatchExtractor()\n    with self.test_session(graph=extractor.graph):\n      single_patch = extractor.extract_staff_patch(filename, 3, 4, 800)\n      # Unwrap the single patch with the matching id in the iterator.\n      patch_from_iterator, = (\n          patch\n          for patch_id, patch in extractor.page_patch_iterator(filename)\n          if patch_id == 'IMSLP00747-000,3,+4,800'\n      )\n    self.assertAllClose(single_patch, patch_from_iterator)\n    # Patch contains some content.\n    self.assertAlmostEqual(single_patch.min(), 0, places=5)\n    self.assertAlmostEqual(single_patch.max(), 1, places=5)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/staves/testing.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Staff detection test utilities.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom moonlight.staves import base\n\n\nclass FakeStaves(base.BaseStaffDetector):\n  \"\"\"Fake staff detector holding an arbitrary staves tensor.\n\n  Attributes:\n    image: The image.\n    staves_t: The staves given to the constructor. None may be given if the\n      staves are never checked (only the staffline distance).\n    staffline_distance_t: The estimated staffline distance. 1D tensor (values\n      for each staff) or None.\n    staffline_thickness_t: The estimated staffline thickness. Scalar tensor or\n      None.\n  \"\"\"\n\n  def __init__(self,\n               image_t,\n               staves_t,\n               staffline_distance_t=None,\n               staffline_thickness_t=None):\n    self.image = image_t\n    self.staves_t = staves_t\n    self.staffline_distance_t = staffline_distance_t\n    self.staffline_thickness_t = staffline_thickness_t\n\n  @property\n  def staves(self):\n    return self.staves_t\n\n  @property\n  def staffline_distance(self):\n    return self.staffline_distance_t\n\n  @property\n  def staffline_thickness(self):\n    return self.staffline_thickness_t\n"
  },
  {
    "path": "moonlight/structure/BUILD",
    "content": "# Description:\n# Music score structure detection above the staff level.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"structure\",\n    srcs = [\"__init__.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":beams\",\n        \":components\",\n        \":verticals\",\n        \"//moonlight/staves\",\n        \"//moonlight/staves:base\",\n        \"//moonlight/staves:removal\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"structure_test\",\n    srcs = [\"structure_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":structure\",\n        # disable_tf2\n        \"//moonlight:image\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"barlines\",\n    srcs = [\"barlines.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n    ],\n)\n\npy_test(\n    name = \"barlines_test\",\n    srcs = [\"barlines_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":barlines\",\n        \":beams\",\n        \":components\",\n        \":structure\",\n        \":verticals\",\n        # disable_tf2\n        # absl/testing dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/staves:base\",\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"beams\",\n    srcs = [\"beams.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":components\",\n        \"//moonlight/vision:morphology\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"beam_processor\",\n    srcs = [\"beam_processor.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":components\",\n        \"//moonlight/glyphs:glyph_types\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n    
],\n)\n\npy_library(\n    name = \"components\",\n    srcs = [\"components.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # enum34 dep\n        # tensorflow dep\n        # tensorflow.contrib.image py dep\n    ],\n)\n\npy_test(\n    name = \"components_test\",\n    srcs = [\"components_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":components\",\n        # disable_tf2\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"section_barlines\",\n    srcs = [\"section_barlines.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":barlines\",\n        \":components\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"stems\",\n    srcs = [\"stems.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/glyphs:glyph_types\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n    ],\n)\n\npy_test(\n    name = \"stems_test\",\n    srcs = [\"stems_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":beams\",\n        \":components\",\n        \":stems\",\n        \":structure\",\n        \":verticals\",\n        # disable_tf2\n        # absl/testing dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/staves:base\",\n        # numpy dep\n    ],\n)\n\npy_library(\n    name = \"verticals\",\n    srcs = [\"verticals.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/util:functional_ops\",\n        \"//moonlight/util:memoize\",\n        \"//moonlight/util:segments\",\n        \"//moonlight/vision:images\",\n        \"//moonlight/vision:morphology\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"verticals_test\",\n    srcs = [\"verticals_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":verticals\",\n        # disable_tf2\n        \"//moonlight/staves:testing\",\n        # numpy dep\n       
 # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/structure/__init__.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Holder for the page structure detectors.\n\n`create_structure()` constructs the staff and verticals detectors with the\ngiven callables. `Structure.compute()` is run to compute all structure in a\nsingle TensorFlow graph, to increase parallelism.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight import staves\nfrom moonlight.staves import base as staves_base\nfrom moonlight.staves import removal\nfrom moonlight.structure import beams as beams_module\nfrom moonlight.structure import components as components_module\nfrom moonlight.structure import verticals as verticals_module\n\n\ndef create_structure(image,\n                     staff_detector=staves.StaffDetector,\n                     beams=beams_module.Beams,\n                     verticals=verticals_module.ColumnBasedVerticals,\n                     components=components_module.from_staff_remover):\n  \"\"\"Constructs a Structure instance.\n\n  Constructs a staff detector and verticals with the given callables.\n\n  Args:\n    image: The image tensor.\n    staff_detector: A callable that accepts the image and returns a\n      StaffDetector.\n    beams: A callable that accept a StaffRemover and returns a Beams.\n    verticals: A callable that accepts the staff detector and returns a\n      
verticals impl (e.g. ColumnBasedVerticals).\n    components: A callable that accepts a StaffRemover and returns a\n      ConnectedComponents.\n\n  Returns:\n    The Structure instance.\n  \"\"\"\n  with tf.name_scope('staff_detector'):\n    staff_detector = staff_detector(image)\n  with tf.name_scope('staff_remover'):\n    staff_remover = removal.StaffRemover(staff_detector)\n  with tf.name_scope('beams'):\n    beams = beams(staff_remover)\n  with tf.name_scope('verticals'):\n    verticals = verticals(staff_detector)\n  with tf.name_scope('components'):\n    components = components(staff_remover)\n  structure = Structure(\n      staff_detector,\n      beams,\n      verticals,\n      components,\n      image=image,\n      staff_remover=staff_remover)\n  return structure\n\n\nclass Structure(object):\n  \"\"\"Holds page structure detectors.\"\"\"\n\n  def __init__(self,\n               staff_detector,\n               beams,\n               verticals,\n               connected_components,\n               image=None,\n               staff_remover=None):\n    self.image = image\n    self.staff_detector = staff_detector\n    self.beams = beams\n    self.verticals = verticals\n    self.connected_components = connected_components\n    self.staff_remover = staff_remover\n\n  def compute(self, session=None, image=None):\n    \"\"\"Computes the structure.\n\n    If the staves are already `ComputedStaves` and the verticals are already\n    `ComputedVerticals`, returns `self`. Otherwise, runs staff detection and/or\n    verticals detection in the TensorFlow `session`.\n\n    Args:\n      session: The TensorFlow session to use instead of the default session.\n      image: If non-None, fed as the value of `self.staff_detector.image`.\n\n    Returns:\n      A computed `Structure` object. 
`staff_detector`, `beams`, `verticals`, and `connected_components` hold\n          NumPy arrays with the results of the TensorFlow graph.\n    \"\"\"\n\n    if isinstance(self.staff_detector, staves_base.ComputedStaves):\n      staff_detector_data = []\n    else:\n      staff_detector_data = self.staff_detector.data\n    if isinstance(self.beams, beams_module.ComputedBeams):\n      beams_data = []\n    else:\n      beams_data = self.beams.data\n    if isinstance(self.verticals, verticals_module.ComputedVerticals):\n      verticals_data = []\n    else:\n      verticals_data = self.verticals.data\n    if isinstance(self.connected_components,\n                  components_module.ComputedComponents):\n      components_data = []\n    else:\n      components_data = self.connected_components.data\n    if not (staff_detector_data or beams_data or verticals_data or\n            components_data):\n      return self\n\n    if not session:\n      session = tf.get_default_session()\n    if image is not None:\n      feed_dict = {self.staff_detector.image: image}\n    else:\n      feed_dict = {}\n    staff_detector_data, beams_data, verticals_data, components_data = (\n        session.run(\n            [staff_detector_data, beams_data, verticals_data, components_data],\n            feed_dict=feed_dict))\n    staff_detector_data = staff_detector_data or self.staff_detector.data\n    staff_detector = staves_base.ComputedStaves(*staff_detector_data)\n    beams_data = beams_data or self.beams.data\n    beams = beams_module.ComputedBeams(*beams_data)\n    verticals_data = verticals_data or self.verticals.data\n    verticals = verticals_module.ComputedVerticals(*verticals_data)\n    components_data = components_data or self.connected_components.data\n    connected_components = components_module.ComputedComponents(\n        *components_data)\n    return Structure(\n        staff_detector, beams, verticals, connected_components, image=image)\n\n  def is_computed(self):\n    return (isinstance(self.staff_detector, staves_base.ComputedStaves) and\n            isinstance(self.beams, 
beams_module.ComputedBeams) and\n            isinstance(self.verticals, verticals_module.ComputedVerticals) and\n            isinstance(self.connected_components,\n                       components_module.ComputedComponents))\n\n  @property\n  def data(self):\n    return [\n        self.staff_detector.data, self.beams.data, self.verticals.data,\n        self.connected_components.data\n    ]\n"
  },
  {
    "path": "moonlight/structure/barlines.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Splits the single StaffSystem into multiple StaffSystems with bars.\"\"\"\n# TODO(ringw): Detect double barlines (with the expected distance between\n# them) as one DOUBLE_BAR.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom moonlight.protobuf import musicscore_pb2\nfrom six import moves\n\n\nclass Barlines(object):\n  \"\"\"Staff system and barline detector.\"\"\"\n\n  def __init__(self, structure, close_barline_threshold=None):\n    barline_valid, self.barline_staff_start, self.barline_staff_end = (\n        assign_barlines_to_staves(\n            barline_x=structure.verticals.lines[:, :,\n                                                0].mean(axis=1).astype(int),\n            barline_y0=structure.verticals.lines[:, 0, 1],\n            barline_y1=structure.verticals.lines[:, 1, 1],\n            staff_detector=structure.staff_detector))\n    self.barlines = structure.verticals.lines[barline_valid]\n    self.close_barline_threshold = (\n        close_barline_threshold or\n        np.median(structure.staff_detector.staffline_distance) * 4)\n\n  def apply(self, page):\n    \"\"\"Splits the staves in the page into systems with barlines.\"\"\"\n    assert len(page.system) == 1\n    systems_map = dict(\n        (i, (i, i)) for i in moves.xrange(len(page.system[0].staff)))\n  
  for start, end in zip(self.barline_staff_start, self.barline_staff_end):\n      for staff in moves.xrange(start, end + 1):\n        start = min(start, systems_map[staff][0])\n        end = max(end, systems_map[staff][1])\n      for staff in moves.xrange(start, end + 1):\n        systems_map[staff] = (start, end)\n    system_inds = sorted(set(systems_map.values()), key=lambda x: x[0])\n    staves = page.system[0].staff\n    systems = [\n        musicscore_pb2.StaffSystem(staff=staves[start:end + 1])\n        for (start, end) in system_inds\n    ]\n    self._assign_barlines(systems)\n    return musicscore_pb2.Page(system=systems)\n\n  def _assign_barlines(self, systems):\n    \"\"\"Assigns each barline to a system.\n\n    Args:\n      systems: The list of StaffSystem messages.\n    \"\"\"\n    system_start = 0\n    for system in systems:\n      system_end = system_start + len(system.staff) - 1\n      selected_barlines = set()\n      blacklist_x = self._get_blacklist_x(system)\n      for i in moves.xrange(len(self.barlines)):\n        barline_x = self.barlines[i, 0, 0]\n        start = self.barline_staff_start[i]\n        end = self.barline_staff_end[i]\n        if (not blacklist_x[barline_x] and\n            system_start <= start <= end <= system_end):\n          # Get the selected barlines which are close enough to the current\n          # barline that they are probably a duplicate.\n          close_barlines = [\n              other_barline for other_barline in selected_barlines\n              if abs(self.barlines[other_barline, 0, 0] -\n                     barline_x) < self.close_barline_threshold\n          ]\n\n          def get_span(barline):\n            return (self.barline_staff_end[barline] -\n                    self.barline_staff_start[barline])\n\n          # Assumes all barlines span the entire staff system.\n          # Don't add a barline if we've already seen a duplicate unless it\n          # spans more staves than the currently selected one.\n    
      # TODO(ringw): This works for piano scores, but not multi-part\n          # scores, which have one barline spanning the entire staff system at\n          # the beginning and then one barline per staff for the following\n          # measures. Make this more robust.\n          if (all(end - start >= get_span(other_barline)\n                  for other_barline in selected_barlines) and\n              all(end - start > get_span(other_barline)\n                  for other_barline in close_barlines)):\n            selected_barlines.difference_update(close_barlines)\n            selected_barlines.add(i)\n      barline_xs = sorted(\n          self.barlines[barline, 0, 0] for barline in selected_barlines)\n      system.bar.extend(\n          musicscore_pb2.StaffSystem.Bar(\n              x=x, type=musicscore_pb2.StaffSystem.Bar.STANDARD_BAR)\n          for x in barline_xs)\n\n      system_start = system_end + 1\n\n  def _get_blacklist_x(self, system):\n    \"\"\"Computes the x coordinates that are blacklisted for barlines.\n\n    Barlines cannot be too close to a detected stem, because stems at a certain\n    vertical position could be confused with barlines spanning a single staff.\n\n    Args:\n      system: The StaffSystem message.\n\n    Returns:\n      A boolean NumPy array. 1D and long enough to contain all of the barlines\n      on the x axis. 
True for x coordinates where barlines are disallowed.\n    \"\"\"\n    staffline_distance = np.median(\n        [staff.staffline_distance for staff in system.staff]).astype(int)\n    # Width needed to contain all of the barlines.\n    barlines_width = (0 if self.barlines.size == 0 else\n                      np.max(self.barlines[:, :, 0]) + 1)\n    blacklist_x = np.zeros(barlines_width, bool)\n    for staff in system.staff:\n      for glyph in staff.glyph:\n        if glyph.HasField('stem'):\n          stem = glyph.stem\n          blacklist_start = max(\n              0,\n              min(stem.start.x, stem.end.x) - staffline_distance)\n          blacklist_end = min(\n              barlines_width,\n              max(stem.start.x, stem.end.x) + staffline_distance)\n          blacklist_x[blacklist_start:blacklist_end] = True\n    return blacklist_x\n\n\ndef assign_barlines_to_staves(barline_x, barline_y0, barline_y1,\n                              staff_detector):\n  \"\"\"Chooses valid barlines for each staff.\n\n  Args:\n    barline_x: 1D array of length N. The barline x coordinates.\n    barline_y0: 1D array of length N. The barline top y coordinates.\n    barline_y1: 1D array of length N. The barline bottom y coordinates.\n    staff_detector: A BaseStaffDetector, for reading the staffline distance.\n\n  Returns:\n    A tuple of:\n    barline_valid: Boolean array of length N. Whether each of the input barlines\n        was selected as a valid barline.\n    barline_staff_start: Integer array of length `K = barline_valid.sum()`. The\n        staff index for the top of each valid barline.\n    barline_staff_end: Integer array of length K. The staff index for the bottom\n        of each valid barline. 
Each entry is >= barline_staff_start.\n  \"\"\"\n  # To be a barline, the start and end of the line have to be this close to\n  # the start or end of the staff.\n  max_distance_to_start_and_end_of_staff = (staff_detector.staffline_distance)\n  # Compute the closest start and end staves for each vertical line.\n  staff_starts = (\n      staff_detector.staves_interpolated_y -\n      2 * staff_detector.staffline_distance[:, None])\n  barline_staff_start_distance = np.abs(barline_y0[None, :] -\n                                        staff_starts[:, barline_x])\n  barline_staff_start = np.argmin(barline_staff_start_distance, axis=0)\n\n  # Barlines must be at most a single staffline distance away from the\n  # expected start and end, which are the top line of the start staff and the\n  # bottom line of the end staff.\n  barline_valid = np.less_equal(\n      np.min(barline_staff_start_distance, axis=0),\n      max_distance_to_start_and_end_of_staff[barline_staff_start])\n\n  # Check the closest end staff.\n  staff_ends = (\n      staff_detector.staves_interpolated_y +\n      2 * staff_detector.staffline_distance[:, None])\n  barline_staff_end_distance = np.abs(barline_y1 - staff_ends[:, barline_x])\n  barline_staff_end = np.argmin(barline_staff_end_distance, axis=0)\n  barline_valid &= np.less_equal(\n      np.min(barline_staff_end_distance, axis=0),\n      max_distance_to_start_and_end_of_staff[barline_staff_end])\n  return (barline_valid, barline_staff_start[barline_valid],\n          barline_staff_end[barline_valid])\n"
  },
  {
    "path": "moonlight/structure/barlines_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for barline detection.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\nimport numpy as np\n\nfrom moonlight import structure\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.staves import base as staves_base\nfrom moonlight.structure import barlines as barlines_module\nfrom moonlight.structure import beams\nfrom moonlight.structure import components\nfrom moonlight.structure import verticals\n\nPoint = musicscore_pb2.Point  # pylint: disable=invalid-name\n\n\nclass BarlinesTest(absltest.TestCase):\n\n  def testDummy(self):\n    # Create four staves and a set of verticals covering the barline\n    # detection cases described in the inline comments below.\n
    struct = structure.Structure(\n        staff_detector=staves_base.ComputedStaves(\n            staves=[[[10, 50], [90, 50]], [[11, 150], [91, 150]],\n                    [[10, 250], [90, 250]], [[10, 350], [90, 350]]],\n            staffline_distance=[12] * 4,\n            staffline_thickness=2,\n            staves_interpolated_y=[[50] * 100, [150] * 100, [250] * 100,\n                                   [350] * 100]),\n        beams=beams.ComputedBeams(np.zeros((0, 2, 2))),\n        connected_components=components.ComputedComponents(np.zeros((0, 5))),\n        verticals=verticals.ComputedVerticals(lines=[\n            # Joins the first 2 staves.\n            [[10, 50 - 12 * 2], [10, 150 + 12 * 2]],\n            # Another barline, too close to the first one.\n            [[12, 50 - 12 * 2], [12, 150 + 12 * 2]],\n            # This barline is far enough, because the second barline was\n            # skipped.\n            [[13, 50 - 12 * 2], [13, 150 + 12 * 2]],\n            # Single staff barlines are skipped.\n            [[30, 50 - 12 * 2], [30, 50 + 12 * 2]],\n            [[31, 150 - 12 * 2], [31, 150 + 12 * 2]],\n            # Too close to a stem.\n            [[70, 50 - 12 * 2], [70, 50 + 12 * 2]],\n            # Too short.\n            [[90, 50 - 12 * 2], [90, 50 + 12 * 2]],\n            # Another barline which is kept.\n            [[90, 50 - 12 * 2], [90, 150 + 12 * 2]],\n            # The staff at y=250 has no barlines.\n            # The staff at y=350 has 2 barlines.\n            [[11, 350 - 12 * 2], [11, 350 + 12 * 2]],\n            [[90, 350 - 12 * 2], [90, 350 + 12 * 2]],\n        ]))\n    barlines = barlines_module.Barlines(struct, close_barline_threshold=3)\n    # Create a Page with Glyphs.\n    input_page = musicscore_pb2.Page(system=[\n        musicscore_pb2.StaffSystem(staff=[\n            musicscore_pb2.Staff(\n                staffline_distance=12,\n                center_line=[\n                    musicscore_pb2.Point(x=10, 
y=50),\n                    musicscore_pb2.Point(x=90, y=50)\n                ],\n                glyph=[\n                    # Stem is close to the last vertical on the first staff, so\n                    # a barline will not be detected there.\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=60,\n                        y_position=2,\n                        stem=musicscore_pb2.LineSegment(\n                            start=musicscore_pb2.Point(x=72, y=40),\n                            end=musicscore_pb2.Point(x=72, y=80))),\n                ]),\n            musicscore_pb2.Staff(\n                staffline_distance=12,\n                center_line=[\n                    musicscore_pb2.Point(x=10, y=150),\n                    musicscore_pb2.Point(x=90, y=150)\n                ]),\n            musicscore_pb2.Staff(\n                staffline_distance=12,\n                center_line=[\n                    musicscore_pb2.Point(x=10, y=250),\n                    musicscore_pb2.Point(x=90, y=250)\n                ]),\n            musicscore_pb2.Staff(\n                staffline_distance=12,\n                center_line=[\n                    musicscore_pb2.Point(x=10, y=350),\n                    musicscore_pb2.Point(x=90, y=350)\n                ]),\n        ])\n    ])\n    page = barlines.apply(input_page)\n    self.assertEqual(3, len(page.system))\n\n    self.assertEqual(2, len(page.system[0].staff))\n    self.assertItemsEqual([10, 13, 90], (bar.x for bar in page.system[0].bar))\n\n    self.assertEqual(1, len(page.system[1].staff))\n    self.assertEqual(0, len(page.system[1].bar))\n\n    self.assertEqual(1, len(page.system[2].staff))\n    self.assertEqual(2, len(page.system[2].bar))\n    self.assertItemsEqual([11, 90], (bar.x for bar in page.system[2].bar))\n\n\nif __name__ == \"__main__\":\n  absltest.main()\n"
  },
  {
    "path": "moonlight/structure/beam_processor.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Adds Beams to notes with intersecting Stems.\n\nFirst, detects beams that have enough area (black pixel count) proportionate to\ntheir width to count as multiple beams. The beam coordinates are just repeated\nin this case for each detected beam, because we don't know specifically where\neach individual beam is.\n\nNext, for each stem already attached to a note, we assign any intersecting beam\ncandidates to the note.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n\nfrom moonlight.glyphs import glyph_types\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.structure import components\n\nCOLUMNS = components.ConnectedComponentsColumns\n\n\nclass BeamProcessor(object):\n\n  def __init__(self, structure):\n    self.beams = _maybe_duplicate_beams(\n        structure.beams.beams,\n        np.median(structure.staff_detector.staffline_distance))\n\n  def apply(self, page):\n    \"\"\"Adds beams that intersect with note stems to the page.\n\n    Beams should intersect with two or more stems. 
Beams are currently\n    implemented as a bounding box, so we just see whether that box intersects\n    with each stem.\n\n    Args:\n      page: A Page message.\n\n    Returns:\n      The same page, with `beam`s added to the `Glyph`s.\n    \"\"\"\n    for system in page.system:\n      for staff in system.staff:\n        # Extend the beams by the staffline distance on either side. Beams may\n        # end immediately at a stem, so give an extra allowance for that stem.\n        extended_beams = self.beams.copy()\n        extended_beams[:, COLUMNS.X0] -= staff.staffline_distance\n        extended_beams[:, COLUMNS.X1] += staff.staffline_distance\n        for glyph in staff.glyph:\n          if glyph_types.is_beamed_notehead(glyph) and glyph.HasField('stem'):\n            xs = [glyph.stem.start.x, glyph.stem.end.x]\n            ys = [glyph.stem.start.y, glyph.stem.end.y]\n            stem_bounding_box = np.asarray([[min(*xs), min(*ys)],\n                                            [max(*xs), max(*ys)]])\n            overlapping_beams = _get_overlapping_beams(stem_bounding_box,\n                                                       extended_beams)\n            glyph.beam.extend(\n                musicscore_pb2.LineSegment(\n                    start=musicscore_pb2.Point(\n                        x=beam[COLUMNS.X0], y=beam[COLUMNS.Y0]),\n                    end=musicscore_pb2.Point(\n                        x=beam[COLUMNS.X1], y=beam[COLUMNS.Y1]))\n                for beam in overlapping_beams)\n    return page\n\n\ndef _get_overlapping_beams(stem, beams):\n  \"\"\"Filters beams that overlap with the stem.\n\n  Args:\n    stem: NumPy array `((x0, y0), (x1, y1))` representing the stem line.\n    beams: NumPy array of shape `(num_beams, 2, 2)`. The line segment for every\n      candidate beam.\n\n  Returns:\n    Filtered beams of shape `(num_filtered_beams, 2, 2)`. 
All of the beams which\n        intersect with the given stem.\n  \"\"\"\n  # The horizontal and vertical intervals of the stem line must match the\n  # intervals that the beam covers. Broadcast the single stem against all of\n  # the beams.\n  x_overlaps = _do_intervals_overlap(stem[None, :, 0],\n                                     beams[:, [COLUMNS.X0, COLUMNS.X1]])\n  y_overlaps = _do_intervals_overlap(stem[None, :, 1],\n                                     beams[:, [COLUMNS.Y0, COLUMNS.Y1]])\n  return beams[np.logical_and(x_overlaps, y_overlaps)]\n\n\ndef _maybe_duplicate_beams(beams, staffline_distance):\n  \"\"\"Determines whether each candidate beam actually contains multiple beams.\n\n  Beams are normally separated by a narrow space, but sometimes they can blur\n  together. Example: https://imgur.com/2ompQAz.png\n\n  The number of black pixels in a single beam is proportional to its width. If\n  the total area of the component is a multiple of the expected area, repeat the\n  beam to count as multiple beams.\n\n  Args:\n    beams: The connected component array with shape (N, 5). 
The values in the\n      columns are determined in `components.ConnectedComponentsColumns`.\n    staffline_distance: The scalar staffline distance (median from all staves).\n\n  Returns:\n    The beams array, possibly with some beam candidates repeated along the 0th\n    axis.\n  \"\"\"\n\n  def _estimate_num_beams(beam):\n    width = beam[COLUMNS.X1] - beam[COLUMNS.X0]\n    # Beams typically appear to be slightly shorter than the staffline distance.\n    estimated_area_per_beam = width * staffline_distance * 0.75\n    # `np.repeat` below requires integer repeat counts.\n    return max(1, int(np.round(beam[COLUMNS.SIZE] / estimated_area_per_beam)))\n\n  estimated_num_beams = list(map(_estimate_num_beams, beams))\n  return np.repeat(beams, estimated_num_beams, axis=0)\n\n\ndef _do_intervals_overlap(intervals_a, intervals_b):\n  \"\"\"Whether the intervals overlap, pairwise.\n\n  intervals_a and intervals_b should both have the same shape\n  `(num_intervals, 2)`. For each interval from each argument, returns a boolean\n  of whether the numeric intervals overlap.\n\n  Args:\n    intervals_a: Numeric NumPy array of shape `(num_intervals, 2)`.\n    intervals_b: Numeric NumPy array of shape `(num_intervals, 2)`.\n\n  Returns:\n    Boolean NumPy array of length `num_intervals`.\n  \"\"\"\n\n  def contained(points, intervals):\n    return np.logical_and(\n        np.less_equal(intervals[:, 0], points),\n        np.less_equal(points, intervals[:, 1]))\n\n  return np.logical_or(\n      np.logical_or(\n          contained(intervals_a[:, 0], intervals_b),\n          contained(intervals_a[:, 1], intervals_b)),\n      np.logical_or(\n          contained(intervals_b[:, 0], intervals_a),\n          contained(intervals_b[:, 1], intervals_a)))\n"
  },
  {
    "path": "moonlight/structure/beams.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Detects note beams.\n\nBeams are long, very thick, horizontal or diagonal lines that may intersect with\nthe staves. To detect them, we use staff removal followed by extra binary\nerosion, in case the staves are not completely removed and still have extra\nblack pixels around a beam. We then find all of the connected components,\nbecause each beam should now be detached from the stem, staff, and (typically)\nother beams. We filter beams by minimum width. 
Further processing and assignment\nof stems to beams is done in `beam_processor.py`.\n\nEach beam halves the duration of each note it is attached to by a stem.\n\"\"\"\n# TODO(ringw): Make Hough line segments more robust, and then use them here\n# instead of connected components.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.structure import components\nfrom moonlight.vision import morphology\n\nCOLUMNS = components.ConnectedComponentsColumns\n\n\nclass Beams(object):\n  \"\"\"Note beam detector.\"\"\"\n\n  def __init__(self, staff_remover, threshold=127):\n    staff_detector = staff_remover.staff_detector\n    image = morphology.binary_erosion(\n        tf.less(staff_remover.remove_staves, threshold),\n        staff_detector.staffline_thickness)\n    beams = components.get_component_bounds(image)\n    staffline_distance = tf.cond(\n        # pyformat: disable\n        tf.greater(tf.shape(staff_detector.staves)[0], 0),\n        lambda: tf.reduce_mean(staff_detector.staffline_distance),\n        lambda: tf.constant(0, tf.int32))\n    min_length = 2 * staffline_distance\n    keep_beam = tf.greater_equal(beams[:, COLUMNS.X1] - beams[:, COLUMNS.X0],\n                                 min_length)\n    keep_beam.set_shape([None])\n    self.beams = tf.boolean_mask(beams, keep_beam)\n    self.data = [self.beams]\n\n\nclass ComputedBeams(object):\n  \"\"\"Holder for the computed beams NumPy array.\"\"\"\n\n  def __init__(self, beams):\n    self.beams = np.asarray(beams, np.int32)\n"
  },
  {
    "path": "moonlight/structure/components.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Connected component analysis.\n\nConnected components will be used to detect solid-ish elements or blobs on the\nscore (e.g. beams, dots, and whole/half rests).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport enum\nimport tensorflow as tf\nfrom tensorflow.contrib import image as contrib_image\n\n\nclass ConnectedComponentsColumns(enum.IntEnum):\n  \"\"\"The field names of the connected components 2D array columns.\"\"\"\n\n  X0 = 0\n  Y0 = 1\n  X1 = 2\n  Y1 = 3\n  SIZE = 4\n\n\ndef from_staff_remover(staff_remover, threshold=127):\n  with tf.name_scope('from_staff_remover'):\n    image = tf.less(staff_remover.remove_staves, threshold)\n    return ConnectedComponents(get_component_bounds(image))\n\n\nclass ConnectedComponents(object):\n  \"\"\"Holds the connected components on the staves removed image.\"\"\"\n\n  def __init__(self, components):\n    with tf.name_scope('ConnectedComponents'):\n      self.components = components\n      self.data = [self.components]\n\n\nclass ComputedComponents(object):\n  \"\"\"Holds the computed NumPy array of connected components.\"\"\"\n\n  def __init__(self, components):\n    self.components = components\n    self.data = [components]\n\n\ndef get_component_bounds(image):\n  \"\"\"Returns the bounding box of each connected component in 
`image`.\n\n  Connected components are segments of adjacent True pixels in the image.\n\n  Args:\n    image: A 2D boolean image tensor.\n\n  Returns:\n    A tensor of shape (num_components, 5), where each row represents a connected\n        component of the image as `(x0, y0, x1, y1, size)`. `size` is the count\n        of True pixels in the component, and the coordinates are the top left\n        and bottom right corners of the bounding box.\n  \"\"\"\n  with tf.name_scope('get_component_bounds'):\n    components = contrib_image.connected_components(image)\n    num_components = tf.reduce_max(components) + 1\n    width = tf.shape(image)[1]\n    height = tf.shape(image)[0]\n    xs, ys = tf.meshgrid(tf.range(width), tf.range(height))\n    component_x0 = _unsorted_segment_min(xs, components, num_components)[1:]\n    component_x1 = tf.unsorted_segment_max(xs, components, num_components)[1:]\n    component_y0 = _unsorted_segment_min(ys, components, num_components)[1:]\n    component_y1 = tf.unsorted_segment_max(ys, components, num_components)[1:]\n    component_size = tf.bincount(components)[1:]\n    return tf.stack([\n        component_x0, component_y0, component_x1, component_y1, component_size\n    ],\n                    axis=1)\n\n\ndef _unsorted_segment_min(data, segment_ids, num_segments):\n  return -tf.unsorted_segment_max(-data, segment_ids, num_segments)\n"
  },
  {
    "path": "moonlight/structure/components_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for connected component analysis.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.structure import components\n\n\nclass ComponentsTest(tf.test.TestCase):\n\n  def testComponents(self):\n    arr = [[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],\n           [1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1],\n           [1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1],\n           [1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0],\n           [1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],\n           [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]  # pyformat: disable\n    component_bounds_t = components.get_component_bounds(tf.cast(arr, tf.bool))\n    with self.test_session():\n      component_bounds = component_bounds_t.eval()\n    self.assertAllEqual(\n        component_bounds,\n        # x0, y0, x1, y1, size\n        [[5, 0, 5, 0, 1], [0, 1, 4, 5, 16], [8, 1, 10, 3, 5], [12, 1, 12, 2, 2],\n         [2, 3, 2, 3, 1], [6, 4, 6, 4, 1]])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/structure/section_barlines.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Detects section barlines, which are much thicker than normal barlines.\n\nSection barlines appear as connected components which span the height of the\nsystem, and are not too thick. They may have 2 repeat dots on one or both sides\nof each staff (at y positions -1 and 1), which affect the barline type.\n\"\"\"\n# TODO(ringw): Get repeat dots from the components and adjust the barline\n# type accordingly. Currently, assume all thick barlines are END_BAR.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.structure import barlines\nfrom moonlight.structure import components as components_module\n\nBar = musicscore_pb2.StaffSystem.Bar  # pylint: disable=invalid-name\nCOLUMNS = components_module.ConnectedComponentsColumns\n\n\nclass SectionBarlines(object):\n  \"\"\"Reads the connected components, and adds thick barlines to the page.\"\"\"\n\n  def __init__(self, structure):\n    self.components = structure.connected_components.components\n    self.staff_detector = structure.staff_detector\n\n  def apply(self, page):\n    \"\"\"Detects thick section barlines from the connected components.\n\n    These should be tall components that start and end near the start and end\n    of two (possibly different) staves. 
We use the standard barlines logic to\n    assign components to the nearest start and end staff. We filter for\n    candidate barlines, whose start and end are sufficiently close to the\n    expected values. We then filter again by whether the component width is\n    within the expected values for section barlines.\n\n    For each staff system, we take the section barlines that match exactly that\n    system's staves. Any standard barlines that are too close to a new section\n    barline are removed, and we merge the existing standard barlines with the\n    new section barlines.\n\n    Args:\n      page: A Page message.\n\n    Returns:\n      The same Page message, with new section barlines added.\n    \"\"\"\n    component_center_x = np.mean(\n        self.components[:, [COLUMNS.X0, COLUMNS.X1]], axis=1).astype(int)\n    # Take section barline candidates, whose start and end y values are close\n    # enough to the staff start and end ys.\n    component_is_candidate, candidate_start_staff, candidate_end_staff = (\n        barlines.assign_barlines_to_staves(\n            barline_x=component_center_x,\n            barline_y0=self.components[:, COLUMNS.Y0],\n            barline_y1=self.components[:, COLUMNS.Y1],\n            staff_detector=self.staff_detector))\n    candidates = self.components[component_is_candidate]\n    candidate_center_x = component_center_x[component_is_candidate]\n    del component_center_x\n\n    # Filter again by the expected section barline width.\n    component_width = candidates[:, COLUMNS.X1] - candidates[:, COLUMNS.X0]\n    component_width_ok = np.logical_and(\n        self._section_min_width() <= component_width,\n        component_width <= self._section_max_width(candidate_start_staff))\n    candidates = candidates[component_width_ok]\n    candidate_center_x = candidate_center_x[component_width_ok]\n    candidate_start_staff = candidate_start_staff[component_width_ok]\n    candidate_end_staff = 
candidate_end_staff[component_width_ok]\n\n    # For each existing staff system, consider only the candidates that match\n    # exactly the system's start and end staves.\n    start_staff = 0\n    for system in page.system:\n      staffline_distance = np.median(\n          [staff.staffline_distance for staff in system.staff]).astype(int)\n      candidate_covers_staff_system = np.logical_and(\n          candidate_start_staff == start_staff,\n          candidate_end_staff + 1 == start_staff + len(system.staff))\n      # Calculate the x coordinates of all section barlines to keep.\n      section_bar_x = candidate_center_x[candidate_covers_staff_system]\n      # Extract the existing bar x coordinates and types for merging.\n      existing_bar_type = {bar.x: bar.type for bar in system.bar}\n      existing_bars = np.asarray([bar.x for bar in system.bar])\n      # Merge the existing barlines and section barlines.\n      if existing_bars.size and section_bar_x.size:\n        # Filter the existing bars by whether they are far enough from a new\n        # section barline. 
Section barlines override the existing standard\n        # barlines.\n        existing_bars_ok = np.greater(\n            np.min(\n                np.abs(existing_bars[:, None] - section_bar_x[None, :]),\n                axis=1), staffline_distance * 4)\n        existing_bars = existing_bars[existing_bars_ok]\n\n      # Merge the existing barlines which we kept, and the new section barlines\n      # (which are assumed to be type END_BAR), in sorted order.\n      bars = sorted(\n          [Bar(x=x, type=existing_bar_type[x]) for x in existing_bars] +\n          [Bar(x=x, type=Bar.END_BAR) for x in section_bar_x],\n          key=lambda bar: bar.x)\n      # Update the staff system.\n      system.ClearField('bar')\n      system.bar.extend(bars)\n\n      start_staff += len(system.staff)\n    return page\n\n  def _section_min_width(self):\n    return self.staff_detector.staffline_thickness * 3\n\n  def _section_max_width(self, staff_index):\n    return self.staff_detector.staffline_distance[staff_index] * 2\n\n\nclass MergeStandardAndBeginRepeatBars(object):\n  \"\"\"Detects a begin repeat at the beginning of the staff system.\n\n  Typically, a begin repeat bar on a new line will be preceded by a standard\n  barline, clef, and key signature. 
SectionBarlines can replace a standard bar\n  with a nearby section bar, but its distance threshold is typically smaller\n  than the distance between these two bars, so both bars survive.\n\n  We want the two bars to be replaced by a single begin repeat bar where we\n  actually found the first bar, so that the clef, key signature, and notes all\n  belong to a single measure.\n\n  Because we don't yet detect repeat dots, and all non-STANDARD barlines are\n  detected as END_BAR, we accept any non-STANDARD barline for the second bar.\n  \"\"\"\n\n  def __init__(self, structure):\n    self.staff_detector = structure.staff_detector\n\n  def apply(self, page):\n    for system in page.system:\n      if (len(system.bar) > 1 and system.bar[0].type == Bar.STANDARD_BAR and\n          system.bar[1].type != Bar.STANDARD_BAR):\n        staffline_distance = np.median(\n            [staff.staffline_distance for staff in system.staff])\n        if system.bar[1].x - system.bar[0].x < staffline_distance * 12:\n          system.bar[0].type = system.bar[1].type\n          del system.bar[1]\n    return page\n"
  },
  {
    "path": "moonlight/structure/stems.py",
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Stem detection.\n\nA stem detector takes stem candidates from `ColumnBasedVerticals`, which are\nvertical lines with a height close to the expected height of a stem.\n\nThe distance is computed from each notehead to each stem, and the notehead is\nassigned to the closest stem if the distance is below a threshold. The distance\nis based on the coordinate where the notehead would ideally lie if it belonged\nto the stem. First, the glyph y is clamped to the range of the stem, because the\ncenter of the notehead should not be above or below the stem. Next, if the glyph\nis left of the stem, the ideal x is a constant offset to the left of the stem,\nand similarly if it is to the right of the stem.\n
This is because the left or right side of the notehead\nshould touch the stem, so they are ideally a fixed distance away.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n\nfrom moonlight.glyphs import glyph_types\nfrom moonlight.protobuf import musicscore_pb2\n\n# The minimum height of a stem, as a multiple of the staffline distance.\n_MIN_STEM_HEIGHT_STAFFLINE_DISTANCE = 2.5\n# The expected horizontal distance from the notehead to the stem.\n_STEM_NOTEHEAD_HORIZONTAL_STAFFLINE_DISTANCE = 0.5\n# The maximum Euclidean distance from a notehead to its ideal position for a\n# given stem (see module docstring).\n_STEM_NOTEHEAD_DISTANCE_STAFFLINE_DISTANCE = 0.5\n\n\nclass Stems(object):\n  \"\"\"Stem detector.\"\"\"\n\n  def __init__(self, structure):\n    \"\"\"Constructs the stem detector.\n\n    Args:\n      structure: A computed structure.\n\n    Raises:\n      ValueError: If structure.is_computed() is false.\n    \"\"\"\n    if not structure.is_computed():\n      raise ValueError(\"Run Structure.compute() before passing it here\")\n    self.staff_detector = structure.staff_detector\n    staffline_distance = np.mean(self.staff_detector.staffline_distance)\n    verticals = structure.verticals\n    self.stem_candidates = _get_stem_candidates(staffline_distance, verticals)\n\n  def apply(self, page):\n    \"\"\"Detects stems on the page.\n\n    Using `self.stem_candidates`, finds verticals that align with a notehead\n    glyph, and adds the stems.\n\n    Args:\n      page: The Page message.\n\n    Returns:\n      The same page, updated with stems.\n    \"\"\"\n    for system in page.system:\n      for staff, staff_ys in zip(system.staff,\n                                 self.staff_detector.staves_interpolated_y):\n        allowed_distance = np.multiply(\n            _STEM_NOTEHEAD_DISTANCE_STAFFLINE_DISTANCE,\n            staff.staffline_distance)\n        
expected_horizontal_distance = np.multiply(\n            _STEM_NOTEHEAD_HORIZONTAL_STAFFLINE_DISTANCE,\n            staff.staffline_distance)\n        for glyph in staff.glyph:\n          if glyph_types.is_stemmed_notehead(glyph):\n            glyph_y = (\n                staff_ys[glyph.x] -\n                glyph.y_position * staff.staffline_distance / 2.0)\n            # Compute the ideal coordinates for the glyph to be assigned to each\n            # stem.\n\n            # Clip the glyph_y to the stem start and end y to get the ideal y.\n            ideal_y = np.clip(glyph_y, self.stem_candidates[:, 0, 1],\n                              self.stem_candidates[:, 1, 1])\n            # If the glyph is left of the stem, subtract the expected distance\n            # from the stem x; otherwise, add it.\n            ideal_x = self.stem_candidates[:, 0, 0] + np.where(\n                glyph.x < self.stem_candidates[:, 0, 0],\n                -expected_horizontal_distance, expected_horizontal_distance)\n            stem_distance = np.linalg.norm(\n                np.c_[ideal_x - glyph.x, ideal_y - glyph_y], axis=1)\n            stem = np.argmin(stem_distance)\n            if stem_distance[stem] <= allowed_distance:\n              stem_coords = self.stem_candidates[stem]\n              glyph.stem.CopyFrom(\n                  musicscore_pb2.LineSegment(\n                      start=musicscore_pb2.Point(\n                          x=stem_coords[0, 0], y=stem_coords[0, 1]),\n                      end=musicscore_pb2.Point(\n                          x=stem_coords[1, 0], y=stem_coords[1, 1])))\n    return page\n\n\ndef _get_stem_candidates(staffline_distance, verticals):\n  heights = verticals.lines[:, 1, 1] - verticals.lines[:, 0, 1]\n  return verticals.lines[np.greater_equal(\n      heights, staffline_distance * _MIN_STEM_HEIGHT_STAFFLINE_DISTANCE)]\n"
  },
  {
    "path": "moonlight/structure/stems_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for stem detection.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\nimport numpy as np\n\nfrom moonlight import structure\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.staves import base as staves_base\nfrom moonlight.structure import beams\nfrom moonlight.structure import components\nfrom moonlight.structure import stems as stems_module\nfrom moonlight.structure import verticals\n\nPoint = musicscore_pb2.Point  # pylint: disable=invalid-name\n\n\nclass StemsTest(absltest.TestCase):\n\n  def testDummy(self):\n    # Create a single staff, and a single vertical which is the correct height\n    # of a stem. 
The vertical has x = 20 and goes from y = 38 to\n    # y = 38 + 12 * 4 (one staff height).\n    struct = structure.Structure(\n        staff_detector=staves_base.ComputedStaves(\n            staves=[[[10, 50], [90, 50]]],\n            staffline_distance=[12],\n            staffline_thickness=2,\n            staves_interpolated_y=[[50] * 100]),\n        beams=beams.ComputedBeams(np.zeros((0, 2, 2))),\n        verticals=verticals.ComputedVerticals(\n            lines=[[[20, 38], [20, 38 + 12 * 4]]]),\n        connected_components=components.ComputedComponents([]))\n    stems = stems_module.Stems(struct)\n    # Create a Page with Glyphs.\n    input_page = musicscore_pb2.Page(system=[\n        musicscore_pb2.StaffSystem(staff=[\n            musicscore_pb2.Staff(\n                staffline_distance=12,\n                center_line=[\n                    musicscore_pb2.Point(x=10, y=50),\n                    musicscore_pb2.Point(x=90, y=50)\n                ],\n                glyph=[\n                    # Cannot have a stem because it's a flat.\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.FLAT, x=15, y_position=-1),\n                    # On the right side of the stem, the correct distance away.\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=25,\n                        y_position=-1),\n                    # Too high for the stem.\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=25,\n                        y_position=4),\n                    # Too far right from the stem.\n                    musicscore_pb2.Glyph(\n                        type=musicscore_pb2.Glyph.NOTEHEAD_FILLED,\n                        x=35,\n                        y_position=-1),\n                ])\n        ])\n    ])\n    page = stems.apply(input_page)\n    self.assertFalse(page.system[0].staff[0].glyph[0].HasField(\"stem\"))\n    self.assertTrue(page.system[0].staff[0].glyph[1].HasField(\"stem\"))\n    self.assertEqual(\n        page.system[0].staff[0].glyph[1].stem,\n        musicscore_pb2.LineSegment(\n            start=Point(x=20, y=38), end=Point(x=20, y=38 + 12 * 4)))\n    self.assertFalse(page.system[0].staff[0].glyph[2].HasField(\"stem\"))\n    self.assertFalse(page.system[0].staff[0].glyph[3].HasField(\"stem\"))\n\n\nif __name__ == \"__main__\":\n  absltest.main()\n"
  },
  {
    "path": "moonlight/structure/structure_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for structure computation.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight import image as image_module\nfrom moonlight import structure\n\n\nclass StructureTest(tf.test.TestCase):\n\n  def testCompute(self):\n    filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                            '../testdata/IMSLP00747-000.png')\n    image = image_module.decode_music_score_png(tf.read_file(filename))\n    struct = structure.create_structure(image)\n    with self.test_session():\n      struct = struct.compute()\n    self.assertEqual(np.int32, struct.staff_detector.staves.dtype)\n    # Expected number of staves for the corpus image.\n    self.assertEqual((12, 2, 2), struct.staff_detector.staves.shape)\n\n    self.assertEqual(np.int32, struct.verticals.lines.dtype)\n    self.assertEqual(3, struct.verticals.lines.ndim)\n    self.assertEqual((2, 2), struct.verticals.lines.shape[1:])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/structure/verticals.py",
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Detects vertical lines using the runs in each column of the image.\n\nAfter the TensorFlow graph is run, vertical lines are classified as stems or\nbarlines.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.util import functional_ops\nfrom moonlight.util import memoize\nfrom moonlight.util import segments\nfrom moonlight.vision import images\nfrom moonlight.vision import morphology\n\n# Maximum gaps to join in vertical lines, as fractions of the staffline\n# distance.\n_DEFAULT_MAX_GAP_STAFFLINE_DISTANCE = frozenset((0.1, 0.2, 0.5))\n# Beams and barlines should be at least the height of a staff (4 staffline\n# distances). Use a slightly smaller minimum of 2.5 * staffline distance.\n_DEFAULT_MIN_LENGTH_STAFFLINE_DISTANCE = 2.5\n\n\nclass ColumnBasedVerticals(object):\n  \"\"\"Vertical line segment detector.\n\n  Does not resolve duplicates across multiple columns, or the same line that is\n  detected multiple times with different `max_gap` settings. The user should\n  expect to find multiple detected lines that correspond to the same physical\n  line, and choose the line that seems the most likely for a given purpose.\n\n  Attributes:\n    staff_detector: An instance of `staves.base.BaseStaffDetector`.\n    image: The uint8 image tensor.\n    threshold: The image threshold. int.\n    filtered_image: Whether each pixel of the image is black, after horizontal\n      filtering and dilation.\n    max_gap: Multiple values for the maximum gap allowed in a line segment, in\n      pixels. Tensor (1D) of ints.\n    min_length: The minimum length of a line segment, in pixels. int.\n  \"\"\"\n\n  def __init__(\n      self,\n      staff_detector,\n      threshold=127,\n      max_gap_staffline_distance=_DEFAULT_MAX_GAP_STAFFLINE_DISTANCE,\n      min_length_staffline_distance=_DEFAULT_MIN_LENGTH_STAFFLINE_DISTANCE):\n    self.staff_detector = staff_detector\n    self.image = staff_detector.image\n    self.threshold = threshold\n    thresholded_image = tf.less(self.image, threshold)\n    self.filtered_image = morphology.binary_dilation(\n        _horizontal_filter(thresholded_image,\n                           staff_detector.staffline_thickness), 1)\n    staffline_distance = tf.reduce_mean(staff_detector.staffline_distance)\n    # Deterministically convert max_gap_staffline_distance to a list.\n    # We use a frozenset so there is no risk of mutating the default argument.\n    self.max_gap = tf.to_int32(\n        tf.round(\n            tf.to_float(staffline_distance) *\n            sorted(max_gap_staffline_distance)))\n    self.min_length = tf.to_int32(\n        tf.round(\n            tf.to_float(staffline_distance) * min_length_staffline_distance))\n\n  @property\n  @memoize.MemoizedFunction\n  def lines(self):\n    \"\"\"The vertical lines.\n\n    Returns:\n      int32 tensor of shape (num_lines, 2, 2), storing lines as\n          ((start_x, start_y), (end_x, end_y)).\n    \"\"\"\n    columns = tf.range(tf.shape(self.image)[1])\n\n    def map_max_gap(max_gap):\n      \"\"\"Process all columns with the given value for max_gap.\"\"\"\n      return functional_ops.flat_map_fn(\n          lambda column: self._verticals_in_column(max_gap, column), columns)\n\n    return functional_ops.flat_map_fn(map_max_gap, self.max_gap)\n\n  def _verticals_in_column(self, max_gap, column):\n    \"\"\"Gets the verticals from a single column.\n\n    Args:\n      max_gap: The scalar max_gap value to use. int tensor.\n      column: The scalar column index. int tensor.\n\n    Returns:\n      int32 tensor of shape (num_lines_in_column, 2, 2). All start_x and end_x\n          values are equal to column.\n    \"\"\"\n    image_column = self.filtered_image[:, column]\n    run_starts, run_lengths = segments.true_segments_1d(\n        image_column,\n        mode=segments.SegmentsMode.STARTS,\n        max_gap=max_gap,\n        min_length=self.min_length)\n    num_runs = tf.shape(run_starts)[0]\n    # x is the same for all runs in the column.\n    x = tf.fill([num_runs], column)\n    y0 = run_starts\n    y1 = run_starts + run_lengths - 1\n    return tf.stack([\n        tf.stack([x, y0], axis=1),\n        tf.stack([x, y1], axis=1),\n    ],\n                    axis=1)\n\n  @property\n  def data(self):\n    \"\"\"Returns the list of verticals tensors to be computed.\n\n    Returns:\n      A list of Tensors.\n    \"\"\"\n    return [self.lines]\n\n\ndef _horizontal_filter(image, staffline_thickness):\n  \"\"\"The vertical lines horizontal filter.\n\n  A black pixel in a vertical line must have a white pixel\n  `2 * staffline_thickness` pixels away, on the left and/or right.\n\n  Args:\n    image: 2D thresholded boolean image with black pixels as True.\n    staffline_thickness: The estimated staffline thickness, in pixels.\n\n  Returns:\n    The filtered image.\n  \"\"\"\n  # images.translate() requires a float image. Unlike the convention (255 or 1.0\n  # for white), the image is already thresholded here, so 1.0 is black and 0.0\n  # is white.\n  float_image = tf.cast(image, tf.float32)\n  gap = staffline_thickness * 2\n  return tf.logical_and(\n      image,\n      tf.logical_or(\n          tf.equal(images.translate(float_image, -gap, 0), 0),\n          tf.equal(images.translate(float_image, gap, 0), 0)))\n\n\nclass ComputedVerticals(object):\n  \"\"\"Computed vertical lines holder.\n\n  The result of `ColumnBasedVerticals.compute()`. Holds a NumPy array with the\n  vertical lines.\n  \"\"\"\n\n  def __init__(self, lines):\n    self.lines = np.array(lines)\n\n  @property\n  def data(self):\n    return [self.lines]\n"
  },
  {
    "path": "moonlight/structure/verticals_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for vertical line detection.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.staves import testing\nfrom moonlight.structure import verticals\n\n\nclass ColumnBasedVerticalsTest(tf.test.TestCase):\n\n  def testVerticalLines_singleColumn(self):\n    image = np.zeros((20, 4), bool)\n    image[5:10, 0] = True\n    image[11:15, 1] = True\n    image[:5, 3] = True\n    staff_detector = testing.FakeStaves(\n        tf.constant(np.where(image, 0, 255), tf.uint8),\n        staves_t=None,\n        staffline_distance_t=[1],\n        staffline_thickness_t=0.5)\n    verticals_detector = verticals.ColumnBasedVerticals(\n        staff_detector,\n        max_gap_staffline_distance=[1],\n        min_length_staffline_distance=3)\n    lines_t = verticals_detector.lines\n    with self.test_session():\n      lines = [line.tolist() for line in lines_t.eval()]\n    # Start is dilated by 1 pixel since the start is actually in this column.\n    self.assertIn([[0, 4], [0, 14]], lines)\n    # Only the end is contained in this column, so it is dilated but the start\n    # is not.\n    self.assertIn([[1, 5], [1, 15]], lines)\n    self.assertNotIn([[2, 4], [2, 14]], lines)\n    self.assertNotIn([[2, 5], [2, 15]], lines)\n    self.assertIn([[3, 0], [3, 
5]], lines)\n    # Out of bounds.\n    self.assertNotIn([[4, 0], [4, 5]], lines)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/testdata/BUILD",
    "content": "# Description:\n# Music scores used for inference and testing the OMR pipeline.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\nfilegroup(\n    name = \"images\",\n    srcs = [\"IMSLP00747-000.png\"],\n)\n\nfilegroup(\n    name = \"musicxml\",\n    srcs = glob([\"*.xml\"]),\n)\n"
  },
  {
    "path": "moonlight/testdata/IMSLP00747-000.LICENSE.md",
    "content": "The image was obtained from IMSLP:\n\nhttps://imslp.org/wiki/Special:ReverseLookup/747\n\nIt was published in 1853 and is in the public domain worldwide.\n"
  },
  {
    "path": "moonlight/testdata/IMSLP00747.golden.LICENSE.md",
    "content": "The score was transcribed by the Moonlight authors. The MusicXML transcription\nis released under the Apache License, version 2.0, as with all code.\n"
  },
  {
    "path": "moonlight/testdata/IMSLP00747.golden.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- Copyright 2018 Google LLC\n   -\n   - Licensed under the Apache License, Version 2.0 (the \"License\");\n   - you may not use this file except in compliance with the License.\n   - You may obtain a copy of the License at\n   -\n   -     https://www.apache.org/licenses/LICENSE-2.0\n   -\n   - Unless required by applicable law or agreed to in writing, software\n   - distributed under the License is distributed on an \"AS IS\" BASIS,\n   - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   - See the License for the specific language governing permissions and\n   - limitations under the License.\n   -->\n<!DOCTYPE score-partwise PUBLIC \"-//Recordare//DTD MusicXML 3.0 Partwise//EN\" \"http://www.musicxml.org/dtds/partwise.dtd\">\n<score-partwise>\n  <work>\n    <work-title>Invention No. 1</work-title>\n    </work>\n  <identification>\n    <creator type=\"composer\">J. S. Bach</creator>\n    <encoding>\n      <software>MuseScore 2.1.0</software>\n      <encoding-date>2017-09-18</encoding-date>\n      <encoder>Dan Ringwalt (ringwalt@google.com)</encoder>\n      <supports element=\"accidental\" type=\"yes\"/>\n      <supports element=\"beam\" type=\"yes\"/>\n      <supports element=\"print\" attribute=\"new-page\" type=\"yes\" value=\"yes\"/>\n      <supports element=\"print\" attribute=\"new-system\" type=\"yes\" value=\"yes\"/>\n      <supports element=\"stem\" type=\"yes\"/>\n      </encoding>\n    </identification>\n  <defaults>\n    <scaling>\n      <millimeters>7.05556</millimeters>\n      <tenths>40</tenths>\n      </scaling>\n    <page-layout>\n      <page-height>1683.36</page-height>\n      <page-width>1190.88</page-width>\n      <page-margins type=\"even\">\n        <left-margin>56.6929</left-margin>\n        <right-margin>56.6929</right-margin>\n        <top-margin>56.6929</top-margin>\n        <bottom-margin>113.386</bottom-margin>\n        </page-margins>\n      
<page-margins type=\"odd\">\n        <left-margin>56.6929</left-margin>\n        <right-margin>56.6929</right-margin>\n        <top-margin>56.6929</top-margin>\n        <bottom-margin>113.386</bottom-margin>\n        </page-margins>\n      </page-layout>\n    <word-font font-family=\"FreeSerif\" font-size=\"10\"/>\n    <lyric-font font-family=\"FreeSerif\" font-size=\"11\"/>\n    </defaults>\n  <credit page=\"1\">\n    <credit-words default-x=\"595.44\" default-y=\"1626.67\" justify=\"center\" valign=\"top\" font-size=\"24\">Invention No. 1</credit-words>\n    </credit>\n  <credit page=\"1\">\n    <credit-words default-x=\"1134.19\" default-y=\"1526.67\" justify=\"right\" valign=\"bottom\" font-size=\"12\">J. S. Bach</credit-words>\n    </credit>\n  <part-list>\n    <score-part id=\"P1\">\n      <part-name>Piano</part-name>\n      <part-abbreviation>Pno.</part-abbreviation>\n      <score-instrument id=\"P1-I1\">\n        <instrument-name>Piano</instrument-name>\n        </score-instrument>\n      <midi-device id=\"P1-I1\" port=\"1\"></midi-device>\n      <midi-instrument id=\"P1-I1\">\n        <midi-channel>1</midi-channel>\n        <midi-program>1</midi-program>\n        <volume>78.7402</volume>\n        <pan>0</pan>\n        </midi-instrument>\n      </score-part>\n    </part-list>\n  <part id=\"P1\">\n    <measure number=\"1\" width=\"556.47\">\n      <print>\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <top-system-distance>170.00</top-system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <attributes>\n        <divisions>8</divisions>\n        <key>\n          <fifths>0</fifths>\n          </key>\n        <time symbol=\"common\">\n          <beats>4</beats>\n          
<beat-type>4</beat-type>\n          </time>\n        <staves>2</staves>\n        <clef number=\"1\">\n          <sign>G</sign>\n          <line>2</line>\n          </clef>\n        <clef number=\"2\">\n          <sign>F</sign>\n          <line>4</line>\n          </clef>\n        </attributes>\n      <direction placement=\"above\">\n        <direction-type>\n          <metronome parentheses=\"no\">\n            <beat-unit>quarter</beat-unit>\n            <per-minute>80</per-minute>\n            </metronome>\n          </direction-type>\n        <staff>1</staff>\n        <sound tempo=\"80\"/>\n        </direction>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <staff>1</staff>\n        </note>\n      <note default-x=\"109.27\" default-y=\"-50.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"138.97\" default-y=\"-45.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"168.68\" default-y=\"-40.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"198.39\" default-y=\"-35.00\">\n        
<pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"228.09\" default-y=\"-45.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"257.80\" default-y=\"-40.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"287.51\" default-y=\"-50.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"317.21\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"376.63\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        
<duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"436.04\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <notations>\n          <ornaments>\n            <inverted-mordent/>\n            </ornaments>\n          </notations>\n        </note>\n      <note default-x=\"495.45\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note>\n        <rest/>\n        <duration>16</duration>\n        <voice>5</voice>\n        <type>half</type>\n        <staff>2</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <staff>2</staff>\n        </note>\n      <note default-x=\"346.92\" default-y=\"-130.00\">\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"376.63\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        
<type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"406.33\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"436.04\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"465.75\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"495.45\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"525.16\" default-y=\"-130.00\">\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n     
   <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"2\" width=\"500.02\">\n      <note default-x=\"12.00\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"42.40\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"72.80\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"103.20\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"133.61\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        
<staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"164.01\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"194.41\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"224.81\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"255.21\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"316.02\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note 
default-x=\"376.82\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <notations>\n          <ornaments>\n            <inverted-mordent/>\n            </ornaments>\n          </notations>\n        </note>\n      <note default-x=\"437.62\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"72.80\" default-y=\"-145.00\">\n        <pitch>\n          <step>G</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note>\n        <rest/>\n        <duration>8</duration>\n        <voice>5</voice>\n        <type>quarter</type>\n        <staff>2</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <staff>2</staff>\n        </note>\n      <note default-x=\"285.61\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          
<octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"316.02\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"346.42\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"376.82\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"407.22\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"437.62\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        
<duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"468.02\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"3\" width=\"534.04\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>137.07</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"51.25\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"81.33\" default-y=\"10.00\">\n        <pitch>\n          <step>A</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"111.40\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          
<octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"141.48\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"171.55\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"201.62\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"231.70\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"261.77\" default-y=\"10.00\">\n        <pitch>\n          <step>A</step>\n          <octave>5</octave>\n          </pitch>\n        
<duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"291.85\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"321.92\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"352.00\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"382.07\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"412.14\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        
<type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"442.22\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"472.29\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"502.37\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"51.25\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"111.40\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        
<stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"171.55\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"231.70\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"291.85\" default-y=\"-85.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"352.00\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"412.14\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"472.29\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          
</pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"4\" width=\"522.45\">\n      <note default-x=\"12.00\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"43.80\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"75.61\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"107.41\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"139.21\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n  
      <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"171.02\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"202.82\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"234.62\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"266.43\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"298.23\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        
<stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"330.03\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"361.84\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"393.64\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <accidental>sharp</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"425.44\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"457.25\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n      
  <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"489.05\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"75.61\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"139.21\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"202.82\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        
<staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"266.43\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"330.03\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"393.64\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>8</duration>\n        <tie type=\"start\"/>\n        <voice>5</voice>\n        <type>quarter</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <notations>\n          <tied type=\"start\"/>\n          </notations>\n        </note>\n      </measure>\n    <measure number=\"5\" width=\"557.16\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>137.07</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"53.75\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam 
number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"116.48\" default-y=\"-45.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"179.21\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>6</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <dot/>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <notations>\n          <ornaments>\n            <inverted-mordent/>\n            </ornaments>\n          </notations>\n        </note>\n      <note default-x=\"273.30\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">backward hook</beam>\n        </note>\n      <note default-x=\"304.66\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"336.02\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam 
number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"367.38\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"398.75\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <accidental>sharp</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"430.11\" default-y=\"-40.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"461.47\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"492.84\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam 
number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"524.20\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"53.75\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <tie type=\"stop\"/>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        <notations>\n          <tied type=\"stop\"/>\n          </notations>\n        </note>\n      <note default-x=\"85.12\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"116.48\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"147.84\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n     
   <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"179.21\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"210.57\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"241.93\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"273.30\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"304.66\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          
</pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"367.38\" default-y=\"-135.00\">\n        <pitch>\n          <step>B</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"430.11\" default-y=\"-130.00\">\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"492.84\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"6\" width=\"499.33\">\n      <note default-x=\"12.00\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"43.09\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam 
number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"74.17\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"105.26\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"136.35\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"167.43\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"198.52\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam 
number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"229.61\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"260.69\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"291.78\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>32nd</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        <beam number=\"3\">begin</beam>\n        </note>\n      <note default-x=\"311.21\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>32nd</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        <beam number=\"3\">end</beam>\n        </note>\n      <note default-x=\"330.64\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam 
number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"361.73\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"392.81\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <notations>\n          <ornaments>\n            <inverted-mordent/>\n            </ornaments>\n          </notations>\n        </note>\n      <note default-x=\"435.56\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"466.64\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        
<type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"74.17\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"136.35\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"198.52\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"260.69\" default-y=\"-135.00\">\n        <pitch>\n          <step>B</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>6</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <dot/>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"361.73\" default-y=\"-130.00\">\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">backward hook</beam>\n        
</note>\n      <note default-x=\"392.81\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"435.56\" default-y=\"-160.00\">\n        <pitch>\n          <step>D</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"7\" width=\"536.23\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>137.07</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"52.25\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>8</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <staff>1</staff>\n        </note>\n      <note 
default-x=\"323.59\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"353.74\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"383.89\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"414.04\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"444.19\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"474.34\" default-y=\"-20.00\">\n        <pitch>\n          
<step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"504.48\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <staff>2</staff>\n        </note>\n      <note default-x=\"82.40\" default-y=\"-145.00\">\n        <pitch>\n          <step>G</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"112.55\" default-y=\"-140.00\">\n        <pitch>\n          <step>A</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"142.70\" default-y=\"-135.00\">\n        <pitch>\n          <step>B</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam 
number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"172.85\" default-y=\"-130.00\">\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"203.00\" default-y=\"-140.00\">\n        <pitch>\n          <step>A</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"233.15\" default-y=\"-135.00\">\n        <pitch>\n          <step>B</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"263.29\" default-y=\"-145.00\">\n        <pitch>\n          <step>G</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"293.44\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note 
default-x=\"353.74\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"414.04\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"474.34\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"8\" width=\"520.26\">\n      <note default-x=\"18.42\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <notations>\n          <ornaments>\n            <inverted-mordent/>\n            </ornaments>\n          </notations>\n        </note>\n      <note>\n        <rest/>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>8</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <staff>1</staff>\n        </note>\n      
<note>\n        <rest/>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <staff>1</staff>\n        </note>\n      <note default-x=\"299.80\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"331.07\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"362.33\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"393.60\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"424.86\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam 
number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"456.13\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"487.40\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"18.42\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"49.68\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"80.95\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n  
      <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"112.21\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"143.48\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"174.74\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"206.01\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"237.27\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        
<stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"268.54\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"331.07\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"393.60\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"456.13\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"9\" width=\"550.24\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>137.07</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          
</staff-layout>\n        </print>\n      <note default-x=\"51.25\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>8</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <staff>1</staff>\n        </note>\n      <note default-x=\"336.73\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"367.00\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"397.28\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note 
default-x=\"427.55\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"457.82\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"488.09\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"518.36\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"51.25\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      
<attributes>\n        <clef number=\"2\">\n          <sign>G</sign>\n          <line>2</line>\n          </clef>\n        </attributes>\n      <note default-x=\"94.56\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"124.83\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"155.10\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"185.37\" default-y=\"-150.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"215.65\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam 
number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"245.92\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"276.19\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"306.46\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"367.00\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"427.55\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"488.09\" default-y=\"-150.00\">\n        <pitch>\n          
<step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"10\" width=\"506.26\">\n      <note default-x=\"12.00\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>8</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <staff>1</staff>\n        </note>\n      <note>\n        <rest/>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <staff>1</staff>\n        </note>\n      <note default-x=\"288.95\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"319.72\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"350.49\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          
<octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"381.26\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"412.03\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"443.11\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <alter>1</alter>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"473.89\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" 
default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"42.77\" default-y=\"-130.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"73.54\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"104.32\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"135.09\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"165.86\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n  
        <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"196.63\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"227.40\" default-y=\"-130.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"258.17\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"319.72\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"381.26\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n       
 <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"443.11\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"11\" width=\"505.80\">\n      <print new-page=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <top-system-distance>70.00</top-system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"50.67\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"107.92\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <alter>1</alter>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"164.53\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n    
    <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"221.14\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"277.75\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"334.36\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"390.98\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <accidental>natural</accidental>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"447.59\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <alter>1</alter>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"50.67\" default-y=\"-140.00\">\n        
<pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"79.61\" default-y=\"-125.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <accidental>flat</accidental>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"107.92\" default-y=\"-130.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"136.22\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"164.53\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"192.83\" default-y=\"-130.00\">\n        
<pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"221.14\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"249.45\" default-y=\"-125.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"277.75\" default-y=\"-130.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"306.06\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"334.36\" default-y=\"-140.00\">\n        <pitch>\n          
<step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"362.67\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"390.98\" default-y=\"-150.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"419.28\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"447.59\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"475.90\" default-y=\"-135.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          
</pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"12\" width=\"550.69\">\n      <note default-x=\"12.00\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"77.38\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"142.75\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <accidental>sharp</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"208.13\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"273.50\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        
<duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"338.88\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"404.65\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>8</duration>\n        <tie type=\"start\"/>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <notations>\n          <tied type=\"start\"/>\n          </notations>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" default-y=\"-140.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"44.69\" default-y=\"-145.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"77.38\" default-y=\"-150.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        
<voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="110.06" default-y="-155.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="142.75" default-y="-160.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="175.44" default-y="-150.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="208.13" default-y="-155.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="240.81" default-y="-145.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="273.50" default-y="-150.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="306.19" default-y="-155.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="338.88" default-y="-160.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="371.56" default-y="-165.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="404.65" default-y="-170.00">
        <pitch>
          <step>G</step>
          <alter>1</alter>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="437.33" default-y="-160.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="470.02" default-y="-165.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="502.71" default-y="-155.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      </measure>
    <measure number="13" width="559.80">
      <print new-system="yes">
        <system-layout>
          <system-margins>
            <left-margin>21.00</left-margin>
            <right-margin>-0.00</right-margin>
            </system-margins>
          <system-distance>100.66</system-distance>
          </system-layout>
        <staff-layout number="2">
          <staff-distance>65.00</staff-distance>
          </staff-layout>
        </print>
      <attributes>
        <clef number="2">
          <sign>F</sign>
          <line>4</line>
          </clef>
        </attributes>
      <note default-x="53.17" default-y="-10.00">
        <pitch>
          <step>D</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <tie type="stop"/>
        <voice>1</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        <notations>
          <tied type="stop"/>
          </notations>
        </note>
      <note default-x="84.74" default-y="-40.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="116.30" default-y="-35.00">
        <pitch>
          <step>F</step>
          <alter>1</alter>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="147.87" default-y="-30.00">
        <pitch>
          <step>G</step>
          <alter>1</alter>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="179.43" default-y="-25.00">
        <pitch>
          <step>A</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="211.00" default-y="-35.00">
        <pitch>
          <step>F</step>
          <alter>1</alter>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="242.56" default-y="-30.00">
        <pitch>
          <step>G</step>
          <alter>1</alter>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="274.12" default-y="-40.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>up</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="305.69" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="337.25" default-y="-10.00">
        <pitch>
          <step>D</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="368.82" default-y="-15.00">
        <pitch>
          <step>C</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="400.38" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="431.94" default-y="-10.00">
        <pitch>
          <step>D</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="463.51" default-y="-15.00">
        <pitch>
          <step>C</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="495.07" default-y="-20.00">
        <pitch>
          <step>B</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="526.64" default-y="-10.00">
        <pitch>
          <step>D</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <backup>
        <duration>32</duration>
        </backup>
      <note default-x="53.17" default-y="-100.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        </note>
      <note default-x="116.30" default-y="-120.00">
        <pitch>
          <step>E</step>
          <octave>3</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        </note>
      <note default-x="179.43" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>6</duration>
        <voice>5</voice>
        <type>eighth</type>
        <dot/>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <notations>
          <ornaments>
            <mordent/>
            </ornaments>
          </notations>
        </note>
      <note default-x="274.12" default-y="-85.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">backward hook</beam>
        </note>
      <note default-x="305.69" default-y="-95.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="337.25" default-y="-100.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="368.82" default-y="-105.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="400.38" default-y="-110.00">
        <pitch>
          <step>G</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <accidental>natural</accidental>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="431.94" default-y="-115.00">
        <pitch>
          <step>F</step>
          <alter>1</alter>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="463.51" default-y="-105.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="495.07" default-y="-110.00">
        <pitch>
          <step>G</step>
          <alter>1</alter>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="526.64" default-y="-100.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      </measure>
    <measure number="14" width="496.69">
      <note default-x="12.00" default-y="-15.00">
        <pitch>
          <step>C</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="42.95" default-y="10.00">
        <pitch>
          <step>A</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="74.03" default-y="5.00">
        <pitch>
          <step>G</step>
          <alter>1</alter>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="104.98" default-y="15.00">
        <pitch>
          <step>B</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="135.93" default-y="10.00">
        <pitch>
          <step>A</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="166.88" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="197.84" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="228.79" default-y="-10.00">
        <pitch>
          <step>D</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="259.87" default-y="-30.00">
        <pitch>
          <step>G</step>
          <alter>1</alter>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="290.82" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="321.77" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="352.72" default-y="-10.00">
        <pitch>
          <step>D</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="383.67" default-y="-15.00">
        <pitch>
          <step>C</step>
          <octave>5</octave>
          </pitch>
        <duration>4</duration>
        <voice>1</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        </note>
      <note default-x="433.19" default-y="-20.00">
        <pitch>
          <step>B</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="464.14" default-y="-25.00">
        <pitch>
          <step>A</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <backup>
        <duration>32</duration>
        </backup>
      <note default-x="12.00" default-y="-105.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="42.95" default-y="-95.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="74.03" default-y="-100.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="104.98" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="135.93" default-y="-95.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="166.88" default-y="-85.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="197.84" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="228.79" default-y="-80.00">
        <pitch>
          <step>F</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="259.87" default-y="-85.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        </note>
      <note default-x="321.77" default-y="-105.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        </note>
      <note default-x="383.67" default-y="-85.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        </note>
      <note default-x="433.19" default-y="-120.00">
        <pitch>
          <step>E</step>
          <octave>3</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        </note>
      </measure>
    <measure number="15" width="559.02">
      <print new-system="yes">
        <system-layout>
          <system-margins>
            <left-margin>21.00</left-margin>
            <right-margin>-0.00</right-margin>
            </system-margins>
          <system-distance>100.66</system-distance>
          </system-layout>
        <staff-layout number="2">
          <staff-distance>65.00</staff-distance>
          </staff-layout>
        </print>
      <note default-x="51.25" default-y="-25.00">
        <pitch>
          <step>A</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="82.79" default-y="10.00">
        <pitch>
          <step>A</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="114.33" default-y="5.00">
        <pitch>
          <step>G</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="145.87" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="177.41" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="208.95" default-y="5.00">
        <pitch>
          <step>G</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="240.49" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="272.03" default-y="10.00">
        <pitch>
          <step>A</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="303.21" default-y="5.00">
        <pitch>
          <step>G</step>
          <octave>5</octave>
          </pitch>
        <duration>16</duration>
        <tie type="start"/>
        <voice>1</voice>
        <type>half</type>
        <stem>down</stem>
        <staff>1</staff>
        <notations>
          <tied type="start"/>
          </notations>
        </note>
      <backup>
        <duration>32</duration>
        </backup>
      <note default-x="51.25" default-y="-105.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        </note>
      <note default-x="114.33" default-y="-140.00">
        <pitch>
          <step>A</step>
          <octave>2</octave>
          </pitch>
        <duration>4</duration>
        <voice>5</voice>
        <type>eighth</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        </note>
      <note>
        <rest/>
        <duration>8</duration>
        <voice>5</voice>
        <type>quarter</type>
        <staff>2</staff>
        </note>
      <note>
        <rest/>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <staff>2</staff>
        </note>
      <note default-x="335.11" default-y="-85.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="366.65" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="398.19" default-y="-95.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="429.72" default-y="-100.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="461.26" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="494.34" default-y="-95.00">
        <pitch>
          <step>C</step>
          <alter>1</alter>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <accidental>sharp</accidental>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="525.88" default-y="-85.00">
        <pitch>
          <step>E</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      </measure>
    <measure number="16" width="497.47">
      <note default-x="12.36" default-y="5.00">
        <pitch>
          <step>G</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <tie type="stop"/>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        <notations>
          <tied type="stop"/>
          </notations>
        </note>
      <note default-x="42.58" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="72.80" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="103.02" default-y="5.00">
        <pitch>
          <step>G</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="133.24" default-y="10.00">
        <pitch>
          <step>A</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        </note>
      <note default-x="163.46" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="193.68" default-y="5.00">
        <pitch>
          <step>G</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="223.90" default-y="-5.00">
        <pitch>
          <step>E</step>
          <octave>5</octave>
          </pitch>
        <duration>2</duration>
        <voice>1</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>1</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="253.75" default-y="0.00">
        <pitch>
          <step>F</step>
          <octave>5</octave>
          </pitch>
        <duration>16</duration>
        <tie type="start"/>
        <voice>1</voice>
        <type>half</type>
        <stem>down</stem>
        <staff>1</staff>
        <notations>
          <tied type="start"/>
          </notations>
        </note>
      <backup>
        <duration>32</duration>
        </backup>
      <note default-x="12.00" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>16</duration>
        <tie type="start"/>
        <voice>5</voice>
        <type>half</type>
        <stem>down</stem>
        <staff>2</staff>
        <notations>
          <tied type="start"/>
          </notations>
        </note>
      <note default-x="254.12" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <tie type="stop"/>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">begin</beam>
        <beam number="2">begin</beam>
        <notations>
          <tied type="stop"/>
          </notations>
        </note>
      <note default-x="284.33" default-y="-105.00">
        <pitch>
          <step>A</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="314.55" default-y="-100.00">
        <pitch>
          <step>B</step>
          <octave>3</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">continue</beam>
        <beam number="2">continue</beam>
        </note>
      <note default-x="344.77" default-y="-95.00">
        <pitch>
          <step>C</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam number="1">end</beam>
        <beam number="2">end</beam>
        </note>
      <note default-x="374.99" default-y="-90.00">
        <pitch>
          <step>D</step>
          <octave>4</octave>
          </pitch>
        <duration>2</duration>
        <voice>5</voice>
        <type>16th</type>
        <stem>down</stem>
        <staff>2</staff>
        <beam 
number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"405.21\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"435.43\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"465.65\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"17\" width=\"540.42\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>100.66</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"53.17\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <tie type=\"stop\"/>\n        <voice>1</voice>\n        
<type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        <notations>\n          <tied type=\"stop\"/>\n          </notations>\n        </note>\n      <note default-x=\"83.53\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"113.88\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"144.23\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"174.59\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"204.94\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        
<voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"235.29\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"265.64\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"295.64\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>16</duration>\n        <tie type=\"start\"/>\n        <voice>1</voice>\n        <type>half</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <notations>\n          <tied type=\"start\"/>\n          </notations>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"52.81\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>16</duration>\n        <tie type=\"start\"/>\n        <voice>5</voice>\n        <type>half</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <notations>\n          <tied type=\"start\"/>\n          </notations>\n        </note>\n      <note default-x=\"296.00\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          
<octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <tie type=\"stop\"/>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        <notations>\n          <tied type=\"stop\"/>\n          </notations>\n        </note>\n      <note default-x=\"326.35\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"356.70\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"387.06\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"417.41\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"447.76\" default-y=\"-95.00\">\n     
   <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"478.11\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"508.47\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"18\" width=\"516.07\">\n      <note default-x=\"12.36\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <tie type=\"stop\"/>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        <notations>\n          <tied type=\"stop\"/>\n          </notations>\n        </note>\n      <note default-x=\"43.74\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        
<beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"75.13\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"106.51\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"137.89\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"169.27\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"200.65\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note 
default-x=\"232.04\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"263.06\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>16</duration>\n        <tie type=\"start\"/>\n        <voice>1</voice>\n        <type>half</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <notations>\n          <tied type=\"start\"/>\n          </notations>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>16</duration>\n        <tie type=\"start\"/>\n        <voice>5</voice>\n        <type>half</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <notations>\n          <tied type=\"start\"/>\n          </notations>\n        </note>\n      <note default-x=\"263.42\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <tie type=\"stop\"/>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        <notations>\n          <tied type=\"stop\"/>\n          </notations>\n        </note>\n      <note default-x=\"294.80\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n     
   <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"326.18\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"357.56\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <accidental>flat</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"388.95\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"420.33\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"451.71\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        
<voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"483.09\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"19\" width=\"526.92\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>100.66</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"53.17\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <tie type=\"stop\"/>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        <notations>\n          <tied type=\"stop\"/>\n          </notations>\n        </note>\n      <note default-x=\"82.68\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"112.19\" 
default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"141.70\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"171.21\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"200.72\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"230.23\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"259.74\" default-y=\"-15.00\">\n        <pitch>\n          
<step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"289.25\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"318.76\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"348.27\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"377.78\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"407.29\" default-y=\"10.00\">\n        <pitch>\n          <step>A</step>\n          <octave>5</octave>\n          </pitch>\n      
  <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"436.80\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"466.31\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"495.82\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"53.17\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"112.19\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>3</octave>\n          </pitch>\n        
<duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <accidental>flat</accidental>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"171.21\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"230.23\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"289.25\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"348.27\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"407.29\" default-y=\"-95.00\">\n        <pitch>\n          <step>C</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note 
default-x=\"466.31\" default-y=\"-100.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"20\" width=\"529.57\">\n      <note default-x=\"12.00\" default-y=\"0.00\">\n        <pitch>\n          <step>F</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"44.25\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"76.50\" default-y=\"10.00\">\n        <pitch>\n          <step>A</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"108.74\" default-y=\"15.00\">\n        <pitch>\n          <step>B</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note 
default-x=\"140.99\" default-y=\"20.00\">\n        <pitch>\n          <step>C</step>\n          <octave>6</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"173.24\" default-y=\"10.00\">\n        <pitch>\n          <step>A</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"205.49\" default-y=\"15.00\">\n        <pitch>\n          <step>B</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"237.74\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"269.98\" default-y=\"20.00\">\n        <pitch>\n          <step>C</step>\n          <octave>6</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"334.48\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          
<octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"398.98\" default-y=\"-5.00\">\n        <pitch>\n          <step>E</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>1</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"463.47\" default-y=\"-10.00\">\n        <pitch>\n          <step>D</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"495.72\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"12.00\" default-y=\"-105.00\">\n        <pitch>\n          <step>A</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"76.50\" default-y=\"-80.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        
<type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"140.99\" default-y=\"-85.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"205.49\" default-y=\"-90.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"269.98\" default-y=\"-85.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"302.23\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"334.48\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        
</note>\n      <note default-x=\"366.73\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"398.98\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"431.22\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"463.47\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"495.72\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"21\" 
width=\"660.85\">\n      <print new-system=\"yes\">\n        <system-layout>\n          <system-margins>\n            <left-margin>21.00</left-margin>\n            <right-margin>-0.00</right-margin>\n            </system-margins>\n          <system-distance>100.66</system-distance>\n          </system-layout>\n        <staff-layout number=\"2\">\n          <staff-distance>65.00</staff-distance>\n          </staff-layout>\n        </print>\n      <note default-x=\"51.25\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"89.25\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <accidental>flat</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"127.25\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"165.25\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n  
      <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"203.25\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"241.25\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"279.25\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"317.25\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <alter>-1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"355.25\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n      
  </note>\n      <note default-x=\"393.25\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <accidental>natural</accidental>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"431.25\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"469.25\" default-y=\"-40.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"507.25\" default-y=\"-45.00\">\n        <pitch>\n          <step>D</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"545.25\" default-y=\"-15.00\">\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note 
default-x=\"583.25\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"621.25\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>1</voice>\n        <type>16th</type>\n        <stem>up</stem>\n        <staff>1</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"51.25\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"127.25\" default-y=\"-130.00\">\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"203.25\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        </note>\n      <note default-x=\"279.25\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          
<octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      <note default-x=\"355.25\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        <beam number=\"2\">begin</beam>\n        </note>\n      <note default-x=\"393.25\" default-y=\"-125.00\">\n        <pitch>\n          <step>D</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"431.25\" default-y=\"-120.00\">\n        <pitch>\n          <step>E</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">continue</beam>\n        <beam number=\"2\">continue</beam>\n        </note>\n      <note default-x=\"469.25\" default-y=\"-115.00\">\n        <pitch>\n          <step>F</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>2</duration>\n        <voice>5</voice>\n        <type>16th</type>\n        <stem>down</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        <beam number=\"2\">end</beam>\n        </note>\n      <note default-x=\"507.25\" default-y=\"-110.00\">\n        <pitch>\n          <step>G</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>4</duration>\n        
<voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">begin</beam>\n        </note>\n      <note default-x=\"583.25\" default-y=\"-145.00\">\n        <pitch>\n          <step>G</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>4</duration>\n        <voice>5</voice>\n        <type>eighth</type>\n        <stem>up</stem>\n        <staff>2</staff>\n        <beam number=\"1\">end</beam>\n        </note>\n      </measure>\n    <measure number=\"22\" width=\"395.64\">\n      <note default-x=\"28.41\" default-y=\"-40.00\">\n        <pitch>\n          <step>E</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>32</duration>\n        <voice>1</voice>\n        <type>whole</type>\n        <staff>1</staff>\n        <notations>\n          <arpeggiate/>\n          </notations>\n        </note>\n      <note default-x=\"28.41\" default-y=\"-30.00\">\n        <chord/>\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>32</duration>\n        <voice>1</voice>\n        <type>whole</type>\n        <staff>1</staff>\n        <notations>\n          <arpeggiate/>\n          </notations>\n        </note>\n      <note default-x=\"28.41\" default-y=\"-15.00\">\n        <chord/>\n        <pitch>\n          <step>C</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>32</duration>\n        <voice>1</voice>\n        <type>whole</type>\n        <staff>1</staff>\n        <notations>\n          <arpeggiate/>\n          </notations>\n        </note>\n      <backup>\n        <duration>32</duration>\n        </backup>\n      <note default-x=\"28.41\" default-y=\"-165.00\">\n        <pitch>\n          <step>C</step>\n          <octave>2</octave>\n          </pitch>\n        <duration>32</duration>\n        <voice>5</voice>\n        <type>whole</type>\n        <staff>2</staff>\n        <notations>\n     
     <arpeggiate/>\n          </notations>\n        </note>\n      <note default-x=\"28.41\" default-y=\"-130.00\">\n        <chord/>\n        <pitch>\n          <step>C</step>\n          <octave>3</octave>\n          </pitch>\n        <duration>32</duration>\n        <voice>5</voice>\n        <type>whole</type>\n        <staff>2</staff>\n        <notations>\n          <arpeggiate/>\n          </notations>\n        </note>\n      <barline location=\"right\">\n        <bar-style>light-heavy</bar-style>\n        </barline>\n      </measure>\n    </part>\n  </score-partwise>\n"
  },
  {
    "path": "moonlight/testdata/README.md",
    "content": "# OMR Corpus\n\nContains test data for unit testing the OMR pipeline. Currently contains a\nsingle page of a music score, to be used in all unit tests, and simple generated\nscores containing a few measures.\n\nThe score was obtained from [IMSLP](http://imslp.org) and is in the public\ndomain in the United States.\n"
  },
  {
    "path": "moonlight/testdata/TWO_MEASURE_SAMPLE.LICENSE.md",
    "content": "The score was composed by the Moonlight authors. The MusicXML transcription is\nreleased under the Apache License, version 2.0, as with all code.\n"
  },
  {
    "path": "moonlight/testdata/TWO_MEASURE_SAMPLE.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- Copyright 2018 Google LLC\n   -\n   - Licensed under the Apache License, Version 2.0 (the \"License\");\n   - you may not use this file except in compliance with the License.\n   - You may obtain a copy of the License at\n   -\n   -     https://www.apache.org/licenses/LICENSE-2.0\n   -\n   - Unless required by applicable law or agreed to in writing, software\n   - distributed under the License is distributed on an \"AS IS\" BASIS,\n   - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   - See the License for the specific language governing permissions and\n   - limitations under the License.\n   -->\n<!DOCTYPE score-partwise PUBLIC \"-//Recordare//DTD MusicXML 2.0 Partwise//EN\" \"http://www.musicxml.org/dtds/partwise.dtd\">\n<score-partwise>\n  <identification>\n    <encoding>\n      <software>MuseScore 1.3</software>\n      <encoding-date>2017-10-13</encoding-date>\n      </encoding>\n    </identification>\n  <defaults>\n    <scaling>\n      <millimeters>7.05556</millimeters>\n      <tenths>40</tenths>\n      </scaling>\n    <page-layout>\n      <page-height>1683.78</page-height>\n      <page-width>1190.55</page-width>\n      <page-margins type=\"even\">\n        <left-margin>56.6929</left-margin>\n        <right-margin>56.6929</right-margin>\n        <top-margin>56.6929</top-margin>\n        <bottom-margin>113.386</bottom-margin>\n        </page-margins>\n      <page-margins type=\"odd\">\n        <left-margin>56.6929</left-margin>\n        <right-margin>56.6929</right-margin>\n        <top-margin>56.6929</top-margin>\n        <bottom-margin>113.386</bottom-margin>\n        </page-margins>\n      </page-layout>\n    </defaults>\n  <credit page=\"1\">\n    <credit-words default-x=\"595.276\" default-y=\"1627.09\" font-size=\"24\" justify=\"center\" valign=\"top\">Easy Test</credit-words>\n    </credit>\n  <credit page=\"1\">\n    <credit-words default-x=\"595.276\" 
default-y=\"1570.39\" font-size=\"14\" justify=\"center\" valign=\"top\">Just the basics</credit-words>\n    </credit>\n  <part-list>\n    <part-group type=\"start\" number=\"1\">\n      <group-symbol>brace</group-symbol>\n      </part-group>\n    <score-part id=\"P1\">\n      <part-name>Piano</part-name>\n      <part-abbreviation>Pno.</part-abbreviation>\n      <score-instrument id=\"P1-I3\">\n        <instrument-name>Piano</instrument-name>\n        </score-instrument>\n      <midi-instrument id=\"P1-I3\">\n        <midi-channel>1</midi-channel>\n        <midi-program>1</midi-program>\n        <volume>78.7402</volume>\n        <pan>0</pan>\n        </midi-instrument>\n      </score-part>\n    </part-list>\n  <part id=\"P1\">\n    <measure number=\"1\" width=\"530.84\">\n      <print>\n        <system-layout>\n          <system-margins>\n            <left-margin>87.63</left-margin>\n            <right-margin>0.00</right-margin>\n            </system-margins>\n          <top-system-distance>180.00</top-system-distance>\n          </system-layout>\n        </print>\n      <attributes>\n        <divisions>1</divisions>\n        <key>\n          <fifths>1</fifths>\n          <mode>major</mode>\n          </key>\n        <time>\n          <beats>4</beats>\n          <beat-type>4</beat-type>\n          </time>\n        <clef>\n          <sign>G</sign>\n          <line>2</line>\n          </clef>\n        </attributes>\n      <note default-x=\"90.15\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>up</stem>\n        </note>\n      <note default-x=\"199.92\" default-y=\"-25.00\">\n        <pitch>\n          <step>A</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>up</stem>\n        </note>\n     
 <note default-x=\"309.70\" default-y=\"-20.00\">\n        <pitch>\n          <step>B</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>down</stem>\n        </note>\n      <note default-x=\"419.47\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>up</stem>\n        </note>\n      </measure>\n    <measure number=\"2\" width=\"458.69\">\n      <note default-x=\"12.00\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>up</stem>\n        </note>\n      <note default-x=\"121.77\" default-y=\"-35.00\">\n        <pitch>\n          <step>F</step>\n          <alter>1</alter>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>up</stem>\n        </note>\n      <note default-x=\"231.55\" default-y=\"-30.00\">\n        <pitch>\n          <step>G</step>\n          <octave>4</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>up</stem>\n        </note>\n      <note default-x=\"341.32\" default-y=\"5.00\">\n        <pitch>\n          <step>G</step>\n          <octave>5</octave>\n          </pitch>\n        <duration>1</duration>\n        <voice>1</voice>\n        <type>quarter</type>\n        <stem>down</stem>\n        </note>\n      <barline location=\"right\">\n        <bar-style>light-heavy</bar-style>\n        </barline>\n      </measure>\n    </part>\n  </score-partwise>\n"
  },
  {
    "path": "moonlight/tools/BUILD",
    "content": "package(default_visibility = [\"//moonlight:__subpackages__\"])\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_binary(\n    name = \"gen_structure_test_case\",\n    srcs = [\"gen_structure_test_case.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # disable_tf2\n        # absl dep\n        \"//moonlight:engine\",\n    ],\n)\n\npy_binary(\n    name = \"export_kmeans_centroids\",\n    srcs = [\"export_kmeans_centroids.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":export_kmeans_centroids_lib\",\n        # disable_tf2\n    ],\n)\n\npy_library(\n    name = \"export_kmeans_centroids_lib\",\n    srcs = [\"export_kmeans_centroids.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # absl dep\n        \"//moonlight/glyphs:corpus\",\n        \"//moonlight/glyphs:knn_model\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"export_kmeans_centroids_test\",\n    srcs = [\"export_kmeans_centroids_test.py\"],\n    srcs_version = \"PY2AND3\",\n    # Original centroids were deleted from Piper, this test can no longer be run at head.\n    # Will delete soon assuming we don't need it again.\n    tags = [\n        \"manual\",\n        \"notap\",\n    ],\n    deps = [\n        \":export_kmeans_centroids_lib\",\n        # disable_tf2\n        \"//moonlight:engine\",\n        \"//moonlight/glyphs:saved_classifier\",\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/tools/export_kmeans_centroids.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tool to convert existing KNN tfrecords to a saved model.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import app\nimport tensorflow as tf\n\nfrom moonlight.glyphs import corpus\nfrom moonlight.glyphs import knn_model\n\n\ndef run(tfrecords_filename, export_dir):\n  with tf.Session():\n    height, width = corpus.get_patch_shape(tfrecords_filename)\n    patches, labels = corpus.parse_corpus(tfrecords_filename, height, width)\n\n  knn_model.export_knn_model(patches, labels, export_dir)\n\n\ndef main(argv):\n  _, infile, outdir = argv\n  run(infile, outdir)\n\n\nif __name__ == '__main__':\n  app.run(main)\n"
  },
  {
    "path": "moonlight/tools/export_kmeans_centroids_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"End to end test for exporting the KNN model.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport tempfile\n\nimport librosa\nimport tensorflow as tf\n\nfrom moonlight import engine\nfrom moonlight.glyphs import saved_classifier\nfrom moonlight.tools import export_kmeans_centroids\n\n\nclass ExportKmeansCentroidsTest(tf.test.TestCase):\n\n  def testEndToEnd(self):\n    with tempfile.TemporaryDirectory() as tmpdir:\n      with engine.get_included_labels_file() as centroids:\n        export_dir = os.path.join(tmpdir, 'export')\n        export_kmeans_centroids.run(centroids.name, export_dir)\n\n      # Now load the saved model.\n      omr = engine.OMREngine(\n          glyph_classifier_fn=saved_classifier.SavedConvolutional1DClassifier\n          .glyph_classifier_fn(export_dir))\n      filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                              '../testdata/IMSLP00747-000.png')\n      notes = omr.run(filename, output_notesequence=True)\n      # TODO(ringw): Fix the extra note that is detected before the actual\n      # first eighth note.\n      self.assertEqual(librosa.note_to_midi('C4'), notes.notes[1].pitch)\n      self.assertEqual(librosa.note_to_midi('D4'), notes.notes[2].pitch)\n      self.assertEqual(librosa.note_to_midi('E4'), 
notes.notes[3].pitch)\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/tools/gen_structure_test_case.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Prints assert statements about the number of systems, staves, and barlines.\n\nAfter running the tool, please verify that the output is completely correct\nbefore copying and pasting it into omr_regression_test.py.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport re\n\nfrom absl import app\n\nfrom moonlight import engine\n\n\ndef main(argv):\n  pages = argv[1:]\n  assert pages, 'Pass one or more PNG files'\n  omr = engine.OMREngine()\n  for i, filename in enumerate(pages):\n    escaped_filename = re.sub(r'([\\'\\\\])', r'\\\\\\0', filename)\n    page = omr.run(filename).page[0]\n    # TODO(ringw): Use a real templating system (e.g. 
jinja or mako).\n    if i > 0:\n      print('')\n    print('  def test%s_structure(self):' % _sanitized_basename(filename))\n    print('    page = engine.OMREngine().run(')\n    print('        \\'%s\\').page[0]' % escaped_filename)\n    print('    self.assertEqual(len(page.system), %d)' % len(page.system))\n    for i, system in enumerate(page.system):\n      print('')\n      print('    self.assertEqual(len(page.system[%d].staff), %d)' %\n            (i, len(system.staff)))\n      print('    self.assertEqual(len(page.system[%d].bar), %d)' %\n            (i, len(system.bar)))\n\n\ndef _sanitized_basename(filename):\n  filename, unused_ext = os.path.splitext(os.path.basename(filename))\n  return re.sub('[^A-z0-9]+', '_', filename)\n\n\nif __name__ == '__main__':\n  app.run(main)\n"
  },
  {
    "path": "moonlight/training/clustering/BUILD",
    "content": "# Description:\n# Unsupervised learning pipeline for OMR glyph classification.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"staffline_patches_dofn\",\n    srcs = [\"staffline_patches_dofn.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # apache-beam dep\n        \"//moonlight/staves:staffline_extractor\",\n        \"//moonlight/util:more_iter_tools\",\n        # numpy dep\n        # six dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"staffline_patches_dofn_test\",\n    size = \"medium\",\n    srcs = [\"staffline_patches_dofn_test.py\"],\n    data = [\"//moonlight/testdata:images\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_patches_dofn\",\n        # disable_tf2\n        # absl/testing dep\n        # apache-beam dep\n        # tensorflow dep\n    ],\n)\n\npy_binary(\n    name = \"staffline_patches_kmeans_pipeline\",\n    srcs = [\"staffline_patches_kmeans_pipeline.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_patches_kmeans_pipeline_lib\",\n        # disable_tf2\n    ],\n)\n\npy_library(\n    name = \"staffline_patches_kmeans_pipeline_lib\",\n    srcs = [\"staffline_patches_kmeans_pipeline.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_patches_dofn\",\n        # absl dep\n        # apache-beam dep\n        \"//moonlight/pipeline:pipeline_flags\",\n        # tensorflow dep\n        # tensorflow.contrib.learn dep\n    ],\n)\n\npy_test(\n    name = \"staffline_patches_kmeans_pipeline_test\",\n    srcs = [\"staffline_patches_kmeans_pipeline_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":staffline_patches_kmeans_pipeline_lib\",\n        # disable_tf2\n        # absl dep\n        # absl/testing dep\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = 
\"kmeans_labeler_request_handler\",\n    srcs = [\"kmeans_labeler_request_handler.py\"],\n    data = [\"kmeans_labeler_template.html\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # pillow dep\n        # mako dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # six dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"kmeans_labeler_request_handler_test\",\n    srcs = [\"kmeans_labeler_request_handler_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":kmeans_labeler_request_handler\",\n        # disable_tf2\n        # absl/testing dep\n        # mako dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_binary(\n    name = \"kmeans_labeler\",\n    srcs = [\"kmeans_labeler.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":kmeans_labeler_request_handler\",\n        # disable_tf2\n        # pillow dep\n        # absl dep\n        # mako dep\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        # numpy dep\n        # six dep\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/training/clustering/kmeans_labeler.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Serves an HTML page for labeling the k-means clusters.\n\nTakes cluster TFRecords generated by `staffline_patches_kmeans_pipeline.py` and\nserves an HTML page containing cluster images. When the HTML form containing\nlabels is submitted, it (by default) overwrites the clusters file, adding the\nlabels.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import flags\nimport numpy as np\nfrom six.moves import BaseHTTPServer\nimport tensorflow as tf\nfrom tensorflow.python.lib.io import tf_record\n\nfrom moonlight.training.clustering import kmeans_labeler_request_handler as handler\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('clusters_path', None, 'Path to the input patch TFRecords')\nflags.DEFINE_string(\n    'output_path', None, 'Path to the output patch TFRecords. 
Defaults to '\n    'overwriting clusters_path.')\nflags.DEFINE_integer('port', 8000, 'Port for serving the labeler.')\n\n\ndef load_clusters(input_path):\n  \"\"\"Loads TFRecords of Examples representing the k-means clusters.\n\n  Examples are typically the output of `staffline_patches_kmeans_pipeline.py`.\n\n  Args:\n    input_path: Path to the TFRecords of Examples.\n\n  Returns:\n    A NumPy array of shape (num_clusters, patch_height, patch_width).\n  \"\"\"\n\n  def parse_example(example_str):\n    example = tf.train.Example()\n    example.ParseFromString(example_str)\n    height = example.features.feature['height'].int64_list.value[0]\n    width = example.features.feature['width'].int64_list.value[0]\n    return np.asarray(\n        example.features.feature['features'].float_list.value).reshape(\n            (height, width))\n\n  return np.asarray([\n      parse_example(example)\n      for example in tf_record.tf_record_iterator(input_path)\n  ])\n\n\ndef main(_):\n  server_address = ('localhost', FLAGS.port)\n  clusters = load_clusters(FLAGS.clusters_path)\n  # Default to overwriting the input.\n  output_path = FLAGS.output_path or FLAGS.clusters_path\n\n  def create_request_handler(*args):\n    return handler.LabelerHTTPRequestHandler(clusters, output_path, *args)\n\n  server = BaseHTTPServer.HTTPServer(server_address, create_request_handler)\n  print('Listening on port %d...' % FLAGS.port)\n  server.serve_forever()\n\n\nif __name__ == '__main__':\n  tf.app.run()\n"
  },
  {
    "path": "moonlight/training/clustering/kmeans_labeler_request_handler.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"The k-means labeler HTTP request handler.\n\nDisplays the clusters on an HTML page with dropdowns for each cluster's label.\nOn POST, updates the output to include the cluster labels.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport base64\nimport cgi\nfrom mako import template as mako_template\nimport numpy as np\nfrom PIL import Image\nimport six\nfrom six import moves\nfrom six.moves import BaseHTTPServer\nfrom six.moves import http_client\nfrom tensorflow.core.example import example_pb2\nfrom tensorflow.python.lib.io import tf_record\nfrom tensorflow.python.platform import resource_loader\n\nfrom moonlight.protobuf import musicscore_pb2\n\n\nclass LabelerHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):\n  \"\"\"The HTTP request handler for the k-means labeler server.\n\n  Attributes:\n    clusters: NumPy array of clusters. 
Shape (num_clusters, patch_height,\n      patch_width).\n    output_path: Path to write the TFRecords.\n  \"\"\"\n\n  def __init__(self, clusters, output_path, *args):\n    self.clusters = clusters\n    self.output_path = output_path\n    BaseHTTPServer.BaseHTTPRequestHandler.__init__(self, *args)\n\n  def do_GET(self):\n    template_path = resource_loader.get_path_to_datafile(\n        'kmeans_labeler_template.html')\n    template = mako_template.Template(open(template_path).read())\n    page = create_page(self.clusters, template)\n    self.send_response(http_client.OK)\n    self.send_header('Content-Type', 'text/html; charset=utf-8')\n    self.end_headers()\n    self.wfile.write(page)\n\n  def do_POST(self):\n    # `headers.get` works on both Python 2 (rfc822.Message) and Python 3;\n    # `getheader` is Python 2 only.\n    post_vars = cgi.parse_qs(\n        self.rfile.read(int(self.headers.get('content-length'))))\n    labels = [\n        post_vars['cluster%d' % i][0]\n        for i in moves.xrange(self.clusters.shape[0])\n    ]\n    examples = create_examples(self.clusters, labels)\n\n    with tf_record.TFRecordWriter(self.output_path) as writer:\n      for example in examples:\n        writer.write(example.SerializeToString())\n    self.send_response(http_client.OK)\n    self.end_headers()\n    self.wfile.write('Success')  # printed in the labeler alert\n\n\ndef create_page(clusters, template):\n  \"\"\"Renders the labeler HTML.\n\n  Args:\n    clusters: NumPy array of clusters.\n    template: Mako template for the page.\n\n  Returns:\n    The labeler HTML string.\n  \"\"\"\n\n  # Tuples (index, preview, is_content).\n  cluster_info = [\n      (i,) + _process_cluster(cluster) for i, cluster in enumerate(clusters)\n  ]\n  content_clusters = [cluster for cluster in cluster_info if cluster[2]]\n  non_content_clusters = [cluster for cluster in cluster_info if not cluster[2]]\n  return template.render(\n      content_clusters=content_clusters,\n      empty_clusters=non_content_clusters,\n      # Skip the unknown glyph type.\n      
glyph_types=musicscore_pb2.Glyph.Type.keys()[1:])\n\n\ndef _process_cluster(cluster):\n  \"\"\"Processes a cluster image.\n\n  Args:\n    cluster: 2D NumPy array.\n\n  Returns:\n    preview: The preview image (PNG encoded in an HTML data URL).\n    is_content: Boolean; whether the cluster is considered content.\n  \"\"\"\n  image_arr = create_highlighted_image(cluster)\n  image = Image.fromarray(image_arr)\n  buf = six.BytesIO()\n  image.save(buf, 'PNG', optimize=True)\n  buf.seek(0)\n  # Decode the base64 bytes so that the data URL is a plain string on both\n  # Python 2 and 3 (str() on bytes would yield \"b'...'\" on Python 3).\n  preview = 'data:image/png;base64,' + base64.b64encode(buf.read()).decode(\n      'ascii')\n  # The cluster is likely non-content (does not contain a glyph) if the\n  # max standard deviation across all rows or all columns is low. Show those\n  # patches at the bottom of the page so that they can still be double checked\n  # by hand.\n  is_content = min(cluster.std(axis=0).max(), cluster.std(axis=1).max()) > 0.1\n  return preview, is_content\n\n\ndef create_highlighted_image(cluster, enlarge_ratio=10):\n  \"\"\"Enlarges the \"cluster\" image and draws a crosshairs highlight.\n\n  Args:\n    cluster: 2D NumPy array.\n    enlarge_ratio: Scale of the output image.\n\n  Returns:\n    An enlarged and highlighted image as a NumPy array. 
3D (height, width, 3).\n        A \"crosshairs\" pattern is drawn which highlights the center pixel of the\n        image.\n  \"\"\"\n  # Enlarge the image, and repeat on the 3rd axis to get RGB channels.\n  image_arr = np.repeat(\n      np.repeat(\n          np.repeat(\n              (cluster * 255).astype(np.uint8)[:, :, None],\n              enlarge_ratio,\n              axis=0),\n          enlarge_ratio,\n          axis=1),\n      3,\n      axis=2)\n  # Calculate the vertical and horizontal slice of the image to be highlighted.\n  vertical_start = (image_arr.shape[0] - enlarge_ratio) // 2\n  vertical_stop = (image_arr.shape[0] + enlarge_ratio) // 2\n  vertical_slice = slice(vertical_start, vertical_stop)\n  horiz_slice = slice(image_arr.shape[1] // 2 - 5, image_arr.shape[1] // 2 + 6)\n  # Highlight the vertical slice of the image to be partially red.\n  image_arr[vertical_slice] = (\n      image_arr[vertical_slice] / 2 + np.array([127., 0., 0.]))\n  # Highlight the horizontal slice of the image, avoiding the section that was\n  # already highlighted.\n  image_arr[:vertical_start, horiz_slice] = (\n      image_arr[:vertical_start, horiz_slice] / 2 + np.array([127., 0., 0.]))\n  image_arr[vertical_stop:, horiz_slice] = (\n      image_arr[vertical_stop:, horiz_slice] / 2 + np.array([127., 0., 0.]))\n  return image_arr\n\n\ndef create_examples(clusters, labels):\n  \"\"\"Creates Examples from the clusters and label strings.\n\n  Args:\n    clusters: NumPy array of shape (num_clusters, patch_height, patch_width).\n    labels: List of string labels, which are names in musicscore_pb2.Glyph.Type.\n      Length `num_clusters`.\n\n  Returns:\n    A list of Example protos of length `num_clusters`.\n  \"\"\"\n  examples = []\n  for cluster, label in zip(clusters, labels):\n    example = example_pb2.Example()\n    features = example.features\n    features.feature['patch'].float_list.value.extend(cluster.ravel())\n    
features.feature['height'].int64_list.value.append(cluster.shape[0])\n    features.feature['width'].int64_list.value.append(cluster.shape[1])\n    label_num = musicscore_pb2.Glyph.Type.Value(label)\n    example.features.feature['label'].int64_list.value.append(label_num)\n    examples.append(example)\n  return examples\n"
  },
  {
    "path": "moonlight/training/clustering/kmeans_labeler_request_handler_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the k-means labeler request handler.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport re\n\nfrom absl.testing import absltest\nfrom mako import template as mako_template\nimport numpy as np\nfrom tensorflow.core.example import example_pb2\nfrom tensorflow.core.example import feature_pb2\nfrom tensorflow.python.platform import resource_loader\n\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.training.clustering import kmeans_labeler_request_handler\n\n\nclass KmeansLabelerRequestHandlerTest(absltest.TestCase):\n\n  def testCreatePage(self):\n    num_clusters = 10\n    clusters = np.random.random((num_clusters, 12, 14))\n    template_src = resource_loader.get_path_to_datafile(\n        'kmeans_labeler_template.html')\n    template = mako_template.Template(open(template_src).read())\n    html = kmeans_labeler_request_handler.create_page(clusters, template)\n    self.assertEqual(len(re.findall('<img', html)), num_clusters)\n    self.assertEqual(len(re.findall('<select', html)), num_clusters)\n\n  def testCreateExamples(self):\n    clusters = np.random.random((5, 12, 14))\n    labels = ['NONE', 'CLEF_TREBLE', 'NONE', 'NOTEHEAD_FILLED', 'CLEF_BASS']\n    examples = kmeans_labeler_request_handler.create_examples(clusters, labels)\n    self.assertEqual(len(examples), 
5)\n    for i in range(5):\n      glyph_type = musicscore_pb2.Glyph.Type.Value(labels[i])\n      features = feature_pb2.Features()\n      features.feature['patch'].float_list.value.extend(clusters[i].ravel())\n      features.feature['height'].int64_list.value.append(12)\n      features.feature['width'].int64_list.value.append(14)\n      features.feature['label'].int64_list.value.append(glyph_type)\n      self.assertEqual(examples[i], example_pb2.Example(features=features))\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/training/clustering/kmeans_labeler_template.html",
    "content": "<%def name=\"cluster_preview(cluster)\">\n  ## cluster is a tuple (index, preview, is_content).\n  <div class=\"cluster_preview\">\n    <img src=\"${ cluster[1] }\">\n    <select name=\"cluster${ cluster[0] }\">\n      % for glyph_type in glyph_types:\n      <option value=\"${ glyph_type }\">${ glyph_type }</option>\n      % endfor\n    </select>\n    <br>\n  </div>\n</%def>\n<html>\n  <head>\n    <title>Magenta OMR Patch Labeler</title>\n    <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js\"></script>\n    <script>\n      $(document).ready(function(){\n        $(\"#form\").submit(function(e){\n          e.preventDefault();\n          $.ajax({\n            url: \"/\",\n            type: \"post\",\n            data: $(\"#form\").serialize(),\n            timeout: 2000\n          }).always(function(data){\n            alert(data);\n          });\n        });\n      });\n    </script>\n    <style>\n      .cluster_preview {\n        width: 200px;\n        margin-bottom: 10px;\n        float: left;\n      }\n    </style>\n  </head>\n  <body>\n    <form id=\"form\" action=\"javascript:false\">\n      <h2>Clusters (likely content)</h2>\n      <div>\n        % for cluster in content_clusters:\n        ${ cluster_preview(cluster) }\n        % endfor\n        <br clear=\"all\"/>\n      </div>\n      <h2>Clusters (likely empty)</h2>\n      <div>\n        % for cluster in empty_clusters:\n        ${ cluster_preview(cluster) }\n        % endfor\n      </div>\n      <br clear=\"all\"/>\n      <center>\n        <input type=\"submit\" name=\"Save\"/>\n      </center>\n    </form>\n  </body>\n</html>\n"
  },
  {
    "path": "moonlight/training/clustering/staffline_patches_dofn.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Extracts non-empty patches of extracted stafflines.\n\nExtracts vertical slices of the image where glyphs are expected\n(see `staffline_extractor.py`), and takes horizontal windows of the slice which\nwill be clustered. Some patches will have a glyph roughly in their center, and\nthe corresponding cluster centroids will be labeled as such.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport logging\n\nimport apache_beam as beam\nfrom apache_beam import metrics\nfrom moonlight.staves import staffline_extractor\nfrom moonlight.util import more_iter_tools\nimport numpy as np\nfrom six.moves import filter\nimport tensorflow as tf\n\n\ndef _filter_patch(patch, min_num_dark_pixels=10):\n  unused_patch_name, patch = patch\n  return np.greater_equal(np.sum(np.less(patch, 0.5)), min_num_dark_pixels)\n\n\nclass StafflinePatchesDoFn(beam.DoFn):\n  \"\"\"Runs the staffline patches graph.\"\"\"\n\n  def __init__(self, patch_height, patch_width, num_stafflines, timeout_ms,\n               max_patches_per_page):\n    self.patch_height = patch_height\n    self.patch_width = patch_width\n    self.num_stafflines = num_stafflines\n    self.timeout_ms = timeout_ms\n    self.max_patches_per_page = max_patches_per_page\n    self.total_pages_counter = metrics.Metrics.counter(self.__class__,\n                
                                       'total_pages')\n    self.failed_pages_counter = metrics.Metrics.counter(self.__class__,\n                                                        'failed_pages')\n    self.successful_pages_counter = metrics.Metrics.counter(\n        self.__class__, 'successful_pages')\n    self.empty_pages_counter = metrics.Metrics.counter(self.__class__,\n                                                       'empty_pages')\n    self.total_patches_counter = metrics.Metrics.counter(\n        self.__class__, 'total_patches')\n    self.emitted_patches_counter = metrics.Metrics.counter(\n        self.__class__, 'emitted_patches')\n\n  def start_bundle(self):\n    self.extractor = staffline_extractor.StafflinePatchExtractor(\n        patch_height=self.patch_height,\n        patch_width=self.patch_width,\n        run_options=tf.RunOptions(timeout_in_ms=self.timeout_ms))\n    self.session = tf.Session(graph=self.extractor.graph)\n\n  def process(self, png_path):\n    self.total_pages_counter.inc()\n    try:\n      with self.session.as_default():\n        patches_iter = self.extractor.page_patch_iterator(png_path)\n    # pylint: disable=broad-except\n    except Exception:\n      logging.exception('Skipping failed music score (%s)', png_path)\n      self.failed_pages_counter.inc()\n      return\n    patches_iter = filter(_filter_patch, patches_iter)\n\n    if 0 < self.max_patches_per_page:\n      # Subsample patches.\n      patches = more_iter_tools.iter_sample(patches_iter,\n                                            self.max_patches_per_page)\n    else:\n      patches = list(patches_iter)\n\n    if not patches:\n      self.empty_pages_counter.inc()\n    self.total_patches_counter.inc(len(patches))\n\n    # Serialize each patch as an Example.\n    for patch_name, patch in patches:\n      example = tf.train.Example()\n      example.features.feature['name'].bytes_list.value.append(\n          patch_name.encode('utf-8'))\n      
example.features.feature['features'].float_list.value.extend(\n          patch.ravel())\n      example.features.feature['height'].int64_list.value.append(patch.shape[0])\n      example.features.feature['width'].int64_list.value.append(patch.shape[1])\n      yield example\n\n    self.successful_pages_counter.inc()\n    # Patches are sub-sampled by this point.\n    self.emitted_patches_counter.inc(len(patches))\n\n  def finish_bundle(self):\n    self.session.close()\n    del self.extractor\n    del self.session\n"
  },
  {
    "path": "moonlight/training/clustering/staffline_patches_dofn_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the staffline patches DoFn graph.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport tempfile\n\nfrom absl.testing import absltest\nimport apache_beam as beam\nimport tensorflow as tf\nfrom tensorflow.python.lib.io import tf_record\n\nfrom moonlight.staves import staffline_extractor\nfrom moonlight.training.clustering import staffline_patches_dofn\n\nPATCH_HEIGHT = 9\nPATCH_WIDTH = 7\nNUM_STAFFLINES = 9\nTIMEOUT_MS = 60000\nMAX_PATCHES_PER_PAGE = 10\n\n\nclass StafflinePatchesDoFnTest(absltest.TestCase):\n\n  def testPipeline_corpusImage(self):\n    filename = os.path.join(tf.resource_loader.get_data_files_path(),\n                            '../../testdata/IMSLP00747-000.png')\n    with tempfile.NamedTemporaryFile() as output_examples:\n      # Run the pipeline to get the staffline patches.\n      with beam.Pipeline() as pipeline:\n        dofn = staffline_patches_dofn.StafflinePatchesDoFn(\n            PATCH_HEIGHT, PATCH_WIDTH, NUM_STAFFLINES, TIMEOUT_MS,\n            MAX_PATCHES_PER_PAGE)\n        # pylint: disable=expression-not-assigned\n        (pipeline | beam.transforms.Create([filename])\n         | beam.transforms.ParDo(dofn) | beam.io.WriteToTFRecord(\n             output_examples.name,\n             beam.coders.ProtoCoder(tf.train.Example),\n     
        shard_name_template=''))\n      # Get the staffline images from a local TensorFlow session.\n      extractor = staffline_extractor.StafflinePatchExtractor(\n          staffline_extractor.DEFAULT_NUM_SECTIONS, PATCH_HEIGHT, PATCH_WIDTH)\n      with tf.Session(graph=extractor.graph):\n        expected_patches = [\n            tuple(patch.ravel())\n            for unused_key, patch in extractor.page_patch_iterator(filename)\n        ]\n      for example_bytes in tf_record.tf_record_iterator(output_examples.name):\n        example = tf.train.Example()\n        example.ParseFromString(example_bytes)\n        patch_pixels = tuple(\n            example.features.feature['features'].float_list.value)\n        if patch_pixels not in expected_patches:\n          self.fail('Missing patch {}'.format(patch_pixels))\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/training/clustering/staffline_patches_kmeans_pipeline.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Glyph classification unsupervised pipeline.\n\nExtracts patches from stafflines, and runs k-means on the patches. Each patch\nwill be labeled and used for k-nearest-neighbors classification.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport random\nimport shutil\nimport tempfile\n\nfrom absl import flags\nimport apache_beam as beam\nfrom apache_beam.transforms import combiners\nfrom moonlight.pipeline import pipeline_flags\nfrom moonlight.training.clustering import staffline_patches_dofn\nimport tensorflow as tf\nfrom tensorflow.contrib import learn as contrib_learn\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom tensorflow.python.lib.io import file_io\nfrom tensorflow.python.lib.io import tf_record\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_multi_string('music_pattern', [],\n                          'Pattern for the input music score PNGs.')\nflags.DEFINE_string('output_path', None, 'Path to the output TFRecords.')\nflags.DEFINE_integer('patch_height', 18,\n                     'The normalized height of a staffline.')\nflags.DEFINE_integer('patch_width', 15,\n                     'The width of a horizontal patch of a staffline.')\nflags.DEFINE_integer('num_stafflines', 19,\n                     'The number of stafflines to 
extract.')\nflags.DEFINE_integer('num_pages', 0, 'Subsample the pages to run on.')\nflags.DEFINE_integer('num_outputs', 0, 'Number of output patches.')\nflags.DEFINE_integer('max_patches_per_page', 10,\n                     'Sample patches per page if above this amount.')\nflags.DEFINE_integer('timeout_ms', 600000, 'Timeout for processing a page.')\nflags.DEFINE_integer('kmeans_num_clusters', 1000, 'Number of k-means clusters.')\nflags.DEFINE_integer('kmeans_batch_size', 10000,\n                     'Batch size for mini-batch k-means.')\nflags.DEFINE_integer('kmeans_num_steps', 100,\n                     'Number of k-means training steps.')\n\n\ndef train_kmeans(patch_file_pattern,\n                 num_clusters,\n                 batch_size,\n                 train_steps,\n                 min_eval_frequency=None):\n  \"\"\"Runs TensorFlow K-Means over TFRecords.\n\n  Args:\n    patch_file_pattern: Pattern that matches TFRecord file(s) holding Examples\n      with image patches.\n    num_clusters: Number of output clusters.\n    batch_size: Size of a k-means minibatch.\n    train_steps: Number of steps for k-means training.\n    min_eval_frequency: The minimum number of steps between evaluations. Of\n      course, evaluation does not occur if no new snapshot is available, hence,\n      this is the minimum.  If 0, the evaluation will only happen after\n      training.  If None, defaults to 1. To avoid checking for new checkpoints\n      too frequently, the interval is further limited to be at least\n      check_interval_secs between checks. See\n      third_party/tensorflow/contrib/learn/python/learn/experiment.py for\n      details.\n\n  Returns:\n    A NumPy array of shape (num_clusters, patch_height * patch_width). 
The\n        cluster centers.\n  \"\"\"\n\n  def input_fn():\n    \"\"\"The tf.learn input_fn.\n\n    Returns:\n      features, a float32 tensor of shape\n          (batch_size, patch_height * patch_width).\n      None for labels (not applicable to k-means).\n    \"\"\"\n    examples = contrib_learn.read_batch_examples(\n        patch_file_pattern,\n        batch_size,\n        tf.TFRecordReader,\n        queue_capacity=batch_size * 2)\n    features = tf.parse_example(\n        examples, {\n            'features':\n                tf.FixedLenFeature(FLAGS.patch_height * FLAGS.patch_width,\n                                   tf.float32)\n        })['features']\n    return features, None  # no labels\n\n  def experiment_fn(run_config, unused_hparams):\n    \"\"\"The tf.learn experiment_fn.\n\n    Args:\n      run_config: The run config to be passed to the KMeansClustering.\n      unused_hparams: Hyperparameters; not applicable.\n\n    Returns:\n      A tf.contrib.learn.Experiment.\n    \"\"\"\n    kmeans = contrib_learn.KMeansClustering(\n        num_clusters=num_clusters, config=run_config)\n    return contrib_learn.Experiment(\n        estimator=kmeans,\n        train_steps=train_steps,\n        train_input_fn=input_fn,\n        eval_steps=1,\n        eval_input_fn=input_fn,\n        min_eval_frequency=min_eval_frequency)\n\n  output_dir = tempfile.mkdtemp(prefix='staffline_patches_kmeans')\n  try:\n    learn_runner.run(\n        experiment_fn, run_config=contrib_learn.RunConfig(model_dir=output_dir))\n    num_features = FLAGS.patch_height * FLAGS.patch_width\n    clusters_t = tf.Variable(\n        tf.zeros((num_clusters, num_features)),  # Dummy init op\n        name='clusters')\n    with tf.Session() as sess:\n      tf.train.Saver(var_list=[clusters_t]).restore(\n          sess, os.path.join(output_dir, 'model.ckpt-%d' % train_steps))\n      return clusters_t.eval()\n  finally:\n    shutil.rmtree(output_dir)\n\n\ndef main(_):\n  tf.logging.info('Building the 
pipeline...')\n  records_dir = tempfile.mkdtemp(prefix='staffline_kmeans')\n  try:\n    patch_file_prefix = os.path.join(records_dir, 'patches')\n    with pipeline_flags.create_pipeline() as pipeline:\n      filenames = file_io.get_matching_files(FLAGS.music_pattern)\n      assert filenames, 'Must have matched some filenames'\n      if 0 < FLAGS.num_pages < len(filenames):\n        filenames = random.sample(filenames, FLAGS.num_pages)\n      filenames = pipeline | beam.transforms.Create(filenames)\n      patches = filenames | beam.ParDo(\n          staffline_patches_dofn.StafflinePatchesDoFn(\n              patch_height=FLAGS.patch_height,\n              patch_width=FLAGS.patch_width,\n              num_stafflines=FLAGS.num_stafflines,\n              timeout_ms=FLAGS.timeout_ms,\n              max_patches_per_page=FLAGS.max_patches_per_page))\n      if FLAGS.num_outputs:\n        patches |= combiners.Sample.FixedSizeGlobally(FLAGS.num_outputs)\n      patches |= beam.io.WriteToTFRecord(\n          patch_file_prefix, beam.coders.ProtoCoder(tf.train.Example))\n      tf.logging.info('Running the pipeline...')\n    tf.logging.info('Running k-means...')\n    patch_files = file_io.get_matching_files(patch_file_prefix + '*')\n    clusters = train_kmeans(patch_files, FLAGS.kmeans_num_clusters,\n                            FLAGS.kmeans_batch_size, FLAGS.kmeans_num_steps)\n    tf.logging.info('Writing the centroids...')\n    with tf_record.TFRecordWriter(FLAGS.output_path) as writer:\n      for cluster in clusters:\n        example = tf.train.Example()\n        example.features.feature['features'].float_list.value.extend(cluster)\n        example.features.feature['height'].int64_list.value.append(\n            FLAGS.patch_height)\n        example.features.feature['width'].int64_list.value.append(\n            FLAGS.patch_width)\n        writer.write(example.SerializeToString())\n    tf.logging.info('Done!')\n  finally:\n    shutil.rmtree(records_dir)\n\n\nif __name__ == 
'__main__':\n  tf.app.run(main)\n"
  },
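The `train_kmeans` function above clusters flattened staffline patches (shape `(num_patches, patch_height * patch_width)`) into `num_clusters` centroids via the now-removed `tf.contrib.learn.KMeansClustering`. A minimal NumPy sketch of the same clustering step, independent of the deprecated API (the helper name `kmeans_patches` is hypothetical, not part of the pipeline):

```python
import numpy as np


def kmeans_patches(patches, num_clusters, num_steps, seed=0):
  """Lloyd's k-means over flattened patch vectors.

  Args:
    patches: float array of shape (num_patches, num_features).
    num_clusters: Number of centroids.
    num_steps: Number of assignment/update iterations.
    seed: RNG seed for centroid initialization.

  Returns:
    Cluster centers of shape (num_clusters, num_features).
  """
  rng = np.random.RandomState(seed)
  # Initialize centers from distinct random patches.
  centers = patches[rng.choice(len(patches), num_clusters,
                               replace=False)].copy()
  for _ in range(num_steps):
    # Assign each patch to its nearest center (squared Euclidean distance).
    dists = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assignment = dists.argmin(axis=1)
    # Move each center to the mean of its assigned patches.
    for k in range(num_clusters):
      members = patches[assignment == k]
      if len(members):
        centers[k] = members.mean(axis=0)
  return centers
```

The pipeline's mini-batch variant trades exactness for scalability, but the returned array has the same `(num_clusters, num_features)` shape that the `clusters_t` variable is restored into.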
  {
    "path": "moonlight/training/clustering/staffline_patches_kmeans_pipeline_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for staffline patches k-means.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tempfile\n\nfrom absl import flags\nfrom absl.testing import absltest\nimport numpy as np\n\nfrom tensorflow.core.example import example_pb2\nfrom tensorflow.python.lib.io import tf_record\n\nfrom moonlight.training.clustering import staffline_patches_kmeans_pipeline\n\nFLAGS = flags.FLAGS\n\nNUM_CLUSTERS = 20\nBATCH_SIZE = 100\nTRAIN_STEPS = 5\n\n\nclass StafflinePatchesKmeansPipelineTest(absltest.TestCase):\n\n  def testKmeans(self):\n    num_features = FLAGS.patch_height * FLAGS.patch_width\n    dummy_data = np.random.random((500, num_features))\n    with tempfile.NamedTemporaryFile(mode='r') as patches_file:\n      with tf_record.TFRecordWriter(patches_file.name) as patches_writer:\n        for patch in dummy_data:\n          example = example_pb2.Example()\n          example.features.feature['features'].float_list.value.extend(patch)\n          patches_writer.write(example.SerializeToString())\n      clusters = staffline_patches_kmeans_pipeline.train_kmeans(\n          patches_file.name,\n          NUM_CLUSTERS,\n          BATCH_SIZE,\n          TRAIN_STEPS,\n          min_eval_frequency=0)\n      self.assertEqual(clusters.shape, (NUM_CLUSTERS, num_features))\n\n\nif __name__ == '__main__':\n  
absltest.main()\n"
  },
  {
    "path": "moonlight/training/generation/BUILD",
    "content": "# Description:\n# Generated training data for OMR.\npackage(default_visibility = [\"//moonlight:__subpackages__\"])\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"generation\",\n    srcs = [\"generation.py\"],\n    data = [\"vexflow_generator.js\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"@com_google_protobuf//:protobuf_python\",\n        # apache-beam dep\n        \"//moonlight:engine\",\n        \"//moonlight/protobuf:protobuf_py_pb2\",\n        \"//moonlight/staves:staffline_distance\",\n        \"//moonlight/staves:staffline_extractor\",\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"generation_test\",\n    srcs = [\"generation_test.py\"],\n    srcs_version = \"PY2AND3\",\n    tags = [\"manual\"],\n    deps = [\n        \":generation\",\n        # disable_tf2\n        \"//moonlight/staves:staffline_extractor\",\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"image_noise\",\n    srcs = [\"image_noise.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # tensorflow dep\n        # tensorflow.contrib.image py dep\n    ],\n)\n\npy_binary(\n    name = \"vexflow_generator_pipeline\",\n    srcs = [\"vexflow_generator_pipeline.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":generation\",\n        \":image_noise\",\n        # disable_tf2\n        # absl dep\n        # apache-beam dep\n        \"//moonlight/pipeline:pipeline_flags\",\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/training/generation/generation.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"VexFlow labeled data generation.\n\nWraps the node.js generator, which generates a random measure of music as SVG,\nand the ground truth glyphs present in the image as a `Page` message.\n\nEach invocation generates a batch of images. There is a tradeoff between the\nstartup time of node.js for each invocation, and keeping the output size small\nenough to pipe into Python.\n\nThe final outputs are positive and negative example patches. Positive examples\nare centered on an outputted glyph, and have that glyph's type. Negative\nexamples are at least a few pixels away from any glyph, and have type NONE.\nSince negative examples could be a few pixels away from a glyph, we get negative\nexamples that overlap with partial glyph(s), but are centered too far away from\na glyph to be considered a positive example. Currently, every single glyph\nresults in a single positive example, and negative examples are randomly\nsampled.\n\nAll glyphs are emitted to RecordIO, where they are outputted in a single\ncollection for training. We currently do not store the entire generated image\nanywhere. 
This could be added later in order to try other classification\napproaches.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport json\nimport os.path\nimport random\nimport subprocess\nimport sys\n\nimport apache_beam as beam\nfrom apache_beam.metrics import Metrics\nimport numpy as np\nimport tensorflow as tf\n\nfrom google.protobuf import text_format\nfrom moonlight import engine\nfrom moonlight.protobuf import musicscore_pb2\nfrom moonlight.staves import staffline_distance\nfrom moonlight.staves import staffline_extractor\n\n# Every image is expected to contain at least 3 glyphs.\nPOSITIVE_EXAMPLES_PER_IMAGE = 3\n\n\ndef _normalize_path(filename):\n  \"\"\"Normalizes a relative path to a command to spawn.\n\n  Args:\n    filename: String; relative or absolute path.\n\n  Returns:\n    The normalized path. This is necessary because in our use case,\n    vexflow_generator_pipeline will live in a different directory from\n    vexflow_generator, and there are symlinks to both directories in the same\n    parent directory. Without normalization, `..` would reference the parent of\n    the actual directory that was symlinked. 
With normalization, it references\n    the directory that contains the symlink to the working directory.\n  \"\"\"\n  if filename.startswith('/'):\n    return filename\n  else:\n    return os.path.normpath(\n        os.path.join(os.path.dirname(sys.argv[0]), filename))\n\n\nclass PageGenerationDoFn(beam.DoFn):\n  \"\"\"Generates the PNG images and ground truth for each batch.\n\n  Takes in a batch number, and outputs a tuple of PNG contents (bytes) and the\n  labeled staff (Staff message).\n  \"\"\"\n\n  def __init__(self, num_pages_per_batch, vexflow_generator_command,\n               svg_to_png_command):\n    self.num_pages_per_batch = num_pages_per_batch\n    self.vexflow_generator_command = vexflow_generator_command\n    self.svg_to_png_command = svg_to_png_command\n\n  def process(self, batch_num):\n    for page in self.get_pages_for_batch(batch_num, self.num_pages_per_batch):\n      staff = musicscore_pb2.Staff()\n      text_format.Parse(page['page'], staff)\n      # TODO(ringw): Fix the internal proto pickling issue so that we don't\n      # have to serialize the staff here.\n      yield self._svg_to_png(page['svg']), staff.SerializeToString()\n\n  def get_pages_for_batch(self, batch_num, num_pages_per_batch):\n    \"\"\"Generates the music score pages in a single batch.\n\n    The generator takes in a seed for the RNG for each page, and outputs all\n    pages at once. 
The seeds for all batches are consecutive for determinism,\n    starting from 0, but each seed to the Mersenne Twister RNG should result in\n    completely different output.\n\n    Args:\n      batch_num: The index of the batch to output.\n      num_pages_per_batch: The number of pages to generate in each batch.\n\n    Returns:\n      A list of dicts holding `svg` (XML text) and `page` (text-format\n          `tensorflow.moonlight.Staff` proto).\n    \"\"\"\n    return self.get_pages(\n        range(batch_num * num_pages_per_batch,\n              (batch_num + 1) * num_pages_per_batch))\n\n  def get_pages(self, seeds):\n    vexflow_generator_command = list(self.vexflow_generator_command)\n    # If vexflow_generator_command is relative, it is relative to the pipeline\n    # binary.\n    vexflow_generator_command[0] = _normalize_path(vexflow_generator_command[0])\n\n    seeds = ','.join(map(str, seeds))\n    return json.loads(\n        subprocess.check_output(vexflow_generator_command +\n                                ['--random_seeds=' + seeds]))\n\n  def _svg_to_png(self, svg):\n    svg_to_png_command = list(self.svg_to_png_command)\n    svg_to_png_command[0] = _normalize_path(svg_to_png_command[0])\n    popen = subprocess.Popen(\n        svg_to_png_command,\n        stdin=subprocess.PIPE,\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE)\n    stdout, stderr = popen.communicate(input=svg)\n    if popen.returncode != 0:\n      raise ValueError('convert failed with status %d\\nstderr:\\n%s' %\n                       (popen.returncode, stderr))\n    return stdout\n\n\nclass PatchExampleDoFn(beam.DoFn):\n  \"\"\"Extracts labeled patches from generated VexFlow music scores.\"\"\"\n\n  def __init__(self,\n               negative_example_distance,\n               patch_width,\n               negative_to_positive_example_ratio,\n               noise_fn=lambda x: x):\n    self.negative_example_distance = negative_example_distance\n    self.patch_width = 
patch_width\n    self.negative_to_positive_example_ratio = negative_to_positive_example_ratio\n    self.noise_fn = noise_fn\n    self.patch_counter = Metrics.counter(self.__class__, 'num_patches')\n\n  def start_bundle(self):\n    # TODO(ringw): Expose a cleaner way to set this value.\n    # The image is too small for the default min staffline distance score.\n    # pylint: disable=protected-access\n    staffline_distance._MIN_STAFFLINE_DISTANCE_SCORE = 100\n\n    self.omr = engine.OMREngine()\n\n  def process(self, item):\n    png_contents, staff_message = item\n    staff_message = musicscore_pb2.Staff.FromString(staff_message)\n    with tf.Session(graph=self.omr.graph) as sess:\n      # Load the image, then feed it in to apply noise.\n      # Randomly rotate the image and apply noise, then dump it back out as a\n      # PNG.\n      # TODO(ringw): Expose a way to pass in the image contents to the main\n      # OMR TF graph.\n      img = tf.to_float(tf.image.decode_png(png_contents))\n      # Collapse the RGB channels, if any. No-op for a monochrome PNG.\n      img = tf.reduce_mean(img[:, :, :3], axis=2)[:, :, None]\n      # Fix the stafflines being #999.\n      img = tf.clip_by_value(img * 2. - 255., 0., 255.)\n      img = self.noise_fn(img)\n      # Get a 2D uint8 image array for OMR.\n      noisy_image = sess.run(\n          tf.cast(tf.clip_by_value(img, 0, 255)[:, :, 0], tf.uint8))\n      # Run OMR staffline extraction and staffline distance estimation. 
The\n      # stafflines are used to get patches from the generated image.\n      stafflines, image_staffline_distance = sess.run(\n          [\n              self.omr.glyph_classifier.staffline_extractor.extract_staves(),\n              self.omr.structure.staff_detector.staffline_distance[0]\n          ],\n          feed_dict={self.omr.image: noisy_image})\n    if stafflines.shape[0] != 1:\n      raise ValueError('Image should have one detected staff, got shape: ' +\n                       str(stafflines.shape))\n    positive_example_count = 0\n    negative_example_whitelist = np.ones(\n        (stafflines.shape[staffline_extractor.Axes.POSITION],\n         stafflines.shape[staffline_extractor.Axes.X]), np.bool)\n    # Blacklist xs where the patch would overlap with either end.\n    negative_example_overlap_from_end = max(self.negative_example_distance,\n                                            self.patch_width // 2)\n    negative_example_whitelist[:, :negative_example_overlap_from_end] = False\n    negative_example_whitelist[:,\n                               -negative_example_overlap_from_end - 1:] = False\n    all_positive_examples = []\n    for glyph in staff_message.glyph:\n      staffline = staffline_extractor.get_staffline(glyph.y_position,\n                                                    stafflines[0])\n      glyph_x = int(\n          round(glyph.x *\n                self.omr.glyph_classifier.staffline_extractor.target_height /\n                (image_staffline_distance * self.omr.glyph_classifier\n                 .staffline_extractor.staffline_distance_multiple)))\n      example = self._create_example(staffline, glyph_x, glyph.type)\n      if example:\n        staffline_index = staffline_extractor.y_position_to_index(\n            glyph.y_position,\n            stafflines.shape[staffline_extractor.Axes.POSITION])\n        # Blacklist the area adjacent to the glyph, even if it is not selected\n        # as a positive example below.\n        
negative_example_whitelist[staffline_index, glyph_x -\n                                   self.negative_example_distance + 1:glyph_x +\n                                   self.negative_example_distance] = False\n        all_positive_examples.append(example)\n        positive_example_count += 1\n\n    for example in random.sample(all_positive_examples,\n                                 POSITIVE_EXAMPLES_PER_IMAGE):\n      yield example\n      self.patch_counter.inc()\n\n    negative_example_staffline, negative_example_x = np.where(\n        negative_example_whitelist)\n    negative_example_inds = np.random.choice(\n        len(negative_example_staffline),\n        int(positive_example_count * self.negative_to_positive_example_ratio))\n    negative_example_staffline = negative_example_staffline[\n        negative_example_inds]\n    negative_example_x = negative_example_x[negative_example_inds]\n    for staffline, x in zip(negative_example_staffline, negative_example_x):\n      example = self._create_example(stafflines[0, staffline], x,\n                                     musicscore_pb2.Glyph.NONE)\n      assert example, 'Negative example xs should always be in range'\n      yield example\n      self.patch_counter.inc()\n\n  def _create_example(self, staffline, x, label):\n    start_x = x - self.patch_width // 2\n    limit_x = x + self.patch_width // 2 + 1\n    assert limit_x - start_x == self.patch_width\n    # x is the last axis of staffline\n    if 0 <= start_x <= limit_x < staffline.shape[-1]:\n      patch = staffline[:, start_x:limit_x]\n      example = tf.train.Example()\n      example.features.feature['patch'].float_list.value.extend(patch.ravel())\n      example.features.feature['label'].int64_list.value.append(label)\n      example.features.feature['height'].int64_list.value.append(patch.shape[0])\n      example.features.feature['width'].int64_list.value.append(patch.shape[1])\n      return example\n    else:\n      return None\n"
  },
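The negative-sampling logic in `PatchExampleDoFn.process` above builds a boolean whitelist over `(staffline, x)` coordinates, masking out patch-overhang margins at both ends and a window around every glyph before sampling negatives from what remains. A standalone sketch of that masking, with illustrative sizes and glyph positions (the real pipeline takes these from flags and the `Staff` message):

```python
import numpy as np

# Illustrative values; the pipeline reads these from flags.
patch_width = 15
negative_example_distance = 5
num_stafflines, width = 9, 100

whitelist = np.ones((num_stafflines, width), bool)
# Blacklist xs where the patch would overlap with either end.
margin = max(negative_example_distance, patch_width // 2)
whitelist[:, :margin] = False
whitelist[:, -margin - 1:] = False
# Blacklist the area adjacent to each glyph (hypothetical positions here).
for staffline_index, glyph_x in [(2, 30), (4, 60)]:
  whitelist[staffline_index,
            glyph_x - negative_example_distance + 1:
            glyph_x + negative_example_distance] = False
# Negative examples are sampled from the surviving coordinates.
ys, xs = np.where(whitelist)
```

Because the margin is at least `patch_width // 2`, every surviving `x` yields a full-width patch, which is why the DoFn can assert that negative example xs are always in range.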
  {
    "path": "moonlight/training/generation/generation_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for labeled data generation.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\n\nimport tensorflow as tf\n\nfrom moonlight.staves import staffline_extractor\nfrom moonlight.training.generation import generation\n\nPATCH_WIDTH = 15\n\n\nclass GenerationTest(tf.test.TestCase):\n\n  def testDoFn(self):\n    page_gen = generation.PageGenerationDoFn(\n        num_pages_per_batch=1,\n        vexflow_generator_command=[\n            os.path.join(tf.resource_loader.get_data_files_path(),\n                         'vexflow_generator')\n        ],\n        svg_to_png_command=['/usr/bin/env', 'convert', 'svg:-', 'png:-'])\n    page_gen.start_bundle()\n    patch_examples = generation.PatchExampleDoFn(\n        negative_example_distance=5,\n        patch_width=PATCH_WIDTH,\n        negative_to_positive_example_ratio=1.0)\n    patch_examples.start_bundle()\n    examples = [\n        example for page in page_gen.process(0)\n        for example in patch_examples.process(page)\n    ]\n    page_gen.finish_bundle()\n    patch_examples.finish_bundle()\n    self.assertGreater(len(examples), 4)\n    for example in examples:\n      self.assertEqual(\n          len(example.features.feature['patch'].float_list.value),\n          PATCH_WIDTH * staffline_extractor.DEFAULT_TARGET_HEIGHT)\n\n\nif 
__name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/training/generation/image_noise.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Applies noise to an image for generating training data.\n\nAll noise assumes a monochrome image with white (255) as background.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport math\n\nimport tensorflow as tf\nfrom tensorflow.contrib import image as contrib_image\n\n\ndef placeholder_image():\n  return tf.placeholder(tf.uint8, shape=(None, None), name='placeholder_image')\n\n\ndef random_rotation(image, angle=math.pi / 180):\n  return 255. - contrib_image.rotate(\n      255. - tf.to_float(image),\n      tf.random_uniform((), -angle, angle),\n      interpolation='BILINEAR')\n\n\ndef gaussian_noise(image, stddev=5):\n  return image + tf.random_normal(tf.shape(image), stddev=stddev)\n"
  },
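`image_noise.gaussian_noise` above simply adds zero-mean Gaussian noise to the float image via `tf.random_normal`. An equivalent NumPy sketch for reference (the name `gaussian_noise_np` is hypothetical; the TF op additionally runs inside the session graph):

```python
import numpy as np


def gaussian_noise_np(image, stddev=5, seed=0):
  """Adds zero-mean Gaussian noise with the given stddev to a float image."""
  rng = np.random.RandomState(seed)
  return image.astype(float) + rng.normal(0.0, stddev, image.shape)
```

Downstream, `PatchExampleDoFn` clips the noisy image back into `[0, 255]` before casting to uint8, so noise never pushes pixels out of range.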
  {
    "path": "moonlight/training/generation/vexflow_generator.js",
    "content": "// Copyright 2018 Google LLC\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     https://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n/**\n * @fileoverview VexFlow random labeled data generator.\n *\n * Outputs a JSON list of dicts with \"svg\" (XML text SVG image) and \"page\" (text\n * format Staff proto holding the glyph coordinates).\n */\n\nconst ArgumentParser = require('argparse').ArgumentParser;\nconst jsdom = require('jsdom');\nconst Random = require('random-js');\nconst Vex = require('vexflow');\nconst VF = Vex.Flow;\n\nconst parser = new ArgumentParser();\nparser.addArgument(\n    ['--random_seeds'],\n    {help: 'Generate a labeled image for each comma-separated random seed.'});\nconst args = parser.parseArgs();\n\n\n/**\n * @param {?object} value Any value\n * @param {string=} opt_message The message, if value is null or undefined\n * @return {!object} The non-null value\n * @throws {ValueError} if value is null or undefined\n */\nfunction checkNotNull(value, opt_message) {\n  // undefined == null too.\n  if (value == null) {\n    throw ValueError(opt_message);\n  }\n  return value;\n}\n\n\n/**\n * VexFlow line numbers start at the first ledger line below the staff, and\n * increment by a half for each note. 
OMR y positions start at the third staff\n * line, and increment by 1 for each note.\n * @param {number} line The VexFlow line number.\n * @return {number} An integer y position.\n */\nfunction vexflowLineToOMR(line) {\n  return (line - 3) * 2;\n}\n\n\n/**\n * Converts an absolute Y coordinate on a staff to an OMR y position.\n * @param {!Vex.Flow.Stave} stave The VexFlow stave.\n * @param {number} y The y coordinate, in pixels.\n * @return {number} An OMR y position.\n */\nfunction absoluteYToOMR(stave, y) {\n  const staff_center_y = stave.getYForLine(2);\n  return Math.round((staff_center_y - y) * 2 / stave.space(1));\n}\n\n\nlet allGlyphs, staffCenterLine, stafflineDistance;\n/** Resets the dumped VexFlow information before starting a new page. */\nfunction resetPageState() {\n  allGlyphs = [];\n  staffCenterLine = null;\n  stafflineDistance = null;\n}\nresetPageState();\n\n\nconst CLEF_LINE_FOR_OMR = {\n  // Treble or \"G\" clef is centered 2 spaces below the center line (on G).\n  'treble': -2,\n  // Bass or \"F\" clef is centered 2 spaces above the center line (on F).\n  'bass': +2\n};\nconst drawClef = checkNotNull(VF.Clef.prototype.draw, 'Clef.draw');\n/** Draws the clef and dumps its position to allGlyphs. */\nVF.Clef.prototype.draw = function() {\n  if (this.type == 'treble' || this.type == 'bass') {\n    const x = Math.round(this.getX() + this.getWidth() / 2);\n    allGlyphs.push(`glyph {\ntype: CLEF_${this.type.toUpperCase()}\nx: ${x}\ny_position: ${CLEF_LINE_FOR_OMR[this.type]}\n}`);\n  }\n  drawClef.apply(this, arguments);\n};\n\n\nconst drawStave = checkNotNull(VF.Stave.prototype.draw, 'Stave.draw');\n/** Dumps the staff information. 
*/\nVF.Stave.prototype.draw = function() {\n  stafflineDistance = this.space(1);\n  const y = this.getYForLine(2);\n  const x0 = this.getX();\n  const x1 = this.getX() + this.getWidth();\n  staffCenterLine = `center_line {\n  x: ${x0}\n  y: ${y}\n}\ncenter_line {\n  x: ${x1}\n  y: ${y}\n}\n`;\n  drawStave.apply(this, arguments);\n};\n\nconst drawNotehead = checkNotNull(VF.NoteHead.prototype.draw, 'Notehead.draw');\n/** Draws the notehead and dumps its position to allGlyphs. */\nVF.NoteHead.prototype.draw = function() {\n  // The notehead x seems to be the left end.\n  const x = Math.round(this.getAbsoluteX() + this.getWidth() / 2);\n  const y_position = vexflowLineToOMR(this.getLine());\n  allGlyphs.push(`glyph {\n# TODO(ringw): NOTEHEAD_FILLED vs NOTEHEAD_EMPTY.\ntype: NOTEHEAD_FILLED\nx: ${x}\ny_position: ${y_position}\n}`);\n  drawNotehead.apply(this, arguments);\n};\n\nconst ACCIDENTAL_TYPES = {\n  'b': 'FLAT',\n  '#': 'SHARP',\n  'n': 'NATURAL'\n};\nconst drawAccidental =\n    checkNotNull(VF.Accidental.prototype.draw, 'Accidental.draw');\n/** Draws the accidental and dumps its position to allGlyphs. */\nVF.Accidental.prototype.draw = function() {\n  if (this.type in ACCIDENTAL_TYPES) {\n    const note_start = this.note.getModifierStartXY(this.position, this.index);\n    // The modifier x (note_start.x + this.x_shift) seems to be the right end of\n    // the glyph.\n    const x = Math.round(note_start.x + this.x_shift - this.getWidth() / 2);\n    const y = note_start.y + this.y_shift;\n    const y_position = absoluteYToOMR(this.note.getStave(), y);\n    allGlyphs.push(`glyph {\ntype: ${ACCIDENTAL_TYPES[this.type]}\nx: ${x}\ny_position: ${y_position}\n}`);\n  }\n  drawAccidental.apply(this, arguments);\n};\n\n\n/**\n * @param {!Random} random The random generator\n * @param {!object<number>} probs Map from key to probability. 
The probability\n *     values must sum to 1.\n * @return {!object} The sampled key from probs.\n */\nfunction discreteSample(random, probs) {\n  let cumulativeProb = 0;\n  const randomUniform = random.real(0, 1);\n  for (let key of Object.keys(probs)) {\n    if (randomUniform < cumulativeProb + probs[key]) {\n      return key;\n    }\n    cumulativeProb += probs[key];\n  }\n  throw new Error('Probabilities sum to ' + cumulativeProb);\n}\n\n\nconst PROB_LEDGER_NOTE = 0.1;\nconst PROB_MODIFIERS = {\n  '#': 0.25,\n  'b': 0.25,\n  '##': 0.02,\n  'bb': 0.02,\n  'n': 0.15,\n  '': 0.31,\n};\nclass Clef {\n  /**\n   * Samples a random note to display for the clef.\n   * @param {!Random} random The random generator\n   * @return {string} The note name, with accidental.\n   */\n  genNote(random) {\n    const modifier = discreteSample(random, PROB_MODIFIERS);\n    // The lookahead must be a RegExp; as a plain string it would never match.\n    return random.pick(this.baseNotes_()).replace(/(?=[0-9])/, modifier);\n  }\n\n  // TODO(ringw): Why does the bass clef render notes in the same positions\n  // as a treble clef? Fix this and add a different range of notes for bass.\n  /**\n   * @return {!array<string>} The base note names (without accidentals) for\n   * notes that lie on the staff, or are within 2 ledger lines.\n   * @private\n   */\n  baseNotes_() {\n    return [\n      'A3', 'B3', 'C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5', 'D5', 'E5',\n      'F5', 'G5', 'A5', 'B5', 'C6'\n    ];\n  }\n}\n\n\nclass TrebleClef extends Clef {\n  /** @return {string} the name used by VexFlow for the clef. */\n  name() {\n    return 'treble';\n  }\n}\n\n\nclass BassClef extends Clef {\n  /** @return {string} the name used by VexFlow for the clef. 
*/\n  name() {\n    return 'bass';\n  }\n}\n\n\nconst CLEFS = [new TrebleClef(), new BassClef()];\n\n\nif (!args.random_seeds) {\n  throw Error('--random_seeds is required');\n}\nconst seedStrings = args.random_seeds.split(',');\nconst seeds = [];\nseedStrings.forEach(function(seedString) {\n  const seed = parseInt(seedString, 10);\n  if (isNaN(seed)) {\n    throw Error('Seed is not an integer: ' + seedString);\n  }\n  seeds.push(seed);\n});\n\n\nglobal.document = new jsdom.JSDOM('<div id=\"vexflow-div\" />').window.document;\n\n\nconst vf = new Vex.Flow.Factory(\n    {renderer: {elementId: 'vexflow-div', width: 500, height: 200}});\nconst staveConstructor = vf.Stave;\n// TODO(ringw): Support passing Vex.Flow.Stave options through addStave(), to\n// avoid overriding \"Stave\" here.\n/**\n * @param {Object<string, ?>} params Staff params map\n * @return {!vf.Stave} New Stave instance\n */\nvf.Stave = function(params) {\n  const paramsCopy = {};\n  Object.assign(paramsCopy, params);\n  const options = {fill_style: '#000000'};\n  Object.assign(options, paramsCopy.options);\n  paramsCopy.options = options;\n  return staveConstructor.apply(this, [paramsCopy]);\n};\n\npages = [];\nseeds.forEach(function(seed) {\n  const random = new Random(Random.engines.mt19937().seed(seed));\n  const clef = random.pick(CLEFS);\n\n  const score = vf.EasyScore();\n  const system = vf.System();\n\n  const notes = [];\n  for (let i = 0; i < 4; i++) {\n    notes.push(clef.genNote(random));\n  }\n  // TODO(ringw): Random durations.\n  notes[0] = notes[0] + '/q';\n\n  system\n      .addStave({\n        voices: [score.voice(score.notes(notes.join(', ')))],\n      })\n      .addClef(clef.name())\n      .addTimeSignature('4/4');\n\n  vf.draw();\n\n  let page_message = `staffline_distance: ${stafflineDistance}\n${staffCenterLine}\n`;\n  allGlyphs.forEach(function(glyph) {\n    page_message = page_message + glyph;\n  });\n  pages.push({\n    'svg': 
document.getElementById('vexflow-div').innerHTML,\n    'page': page_message\n  });\n\n  resetPageState();\n});\n\nprocess.stdout.write(JSON.stringify(pages));\n"
  },
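The `discreteSample` helper in `vexflow_generator.js` above draws a key from a probability map by walking a cumulative sum against one uniform draw. The same idea in Python (a sketch mirroring the JS, including the error raised if the probabilities do not cover the draw):

```python
import random


def discrete_sample(probs, rng=random):
  """Samples a key from a dict of probabilities that sum to 1."""
  u = rng.random()
  cumulative = 0.0
  for key, prob in probs.items():
    cumulative += prob
    if u < cumulative:
      return key
  raise ValueError('Probabilities sum to %f' % cumulative)


# The accidental distribution used by the generator ('' means no accidental).
PROB_MODIFIERS = {'#': 0.25, 'b': 0.25, '##': 0.02,
                  'bb': 0.02, 'n': 0.15, '': 0.31}
```

Over many draws the empirical frequencies converge to the map's values, e.g. sharps appear on roughly a quarter of generated notes.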
  {
    "path": "moonlight/training/generation/vexflow_generator_pipeline.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A pipeline for generating labeled patch data from VexFlow.\n\nSee `generation.py` for details on the output data.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl import app\nfrom absl import flags\nimport apache_beam as beam\nfrom moonlight.pipeline import pipeline_flags\nfrom moonlight.training.generation import generation\nfrom moonlight.training.generation import image_noise\nimport tensorflow as tf\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_integer('num_positive_examples', 1000000,\n                     'The number of positive examples to generate.')\nflags.DEFINE_string('examples_path', '', 'The path of the output examples.')\nflags.DEFINE_integer('num_shards', None, 'Fixed number of shards (optional)')\nflags.DEFINE_multi_string('vexflow_generator_command', [\n    '/usr/bin/env',\n    'node',\n    'vexflow_generator.js',\n], 'Command line to run the node.js vexflow generator.')\nflags.DEFINE_float('negative_to_positive_example_ratio', 10,\n                   'Ratio of negative to positive examples.')\nflags.DEFINE_integer('num_pages_per_batch', 100,\n                     'The number of pages to emit in every node.js run.')\nflags.DEFINE_multi_string(\n    'svg_to_png_command', [\n        '/usr/bin/env',\n        'convert',\n        'svg:-',\n        'png:-',\n    ], 'Command 
line to convert a SVG (stdin) to PNG (stdout).')\nflags.DEFINE_integer('patch_width', 15, 'Width of a staffline patch.')\nflags.DEFINE_integer(\n    'negative_example_distance', 3,\n    'The minimum distance of a negative example from any glyph.')\n\n\ndef main(_):\n  with pipeline_flags.create_pipeline() as pipeline:\n    num_pages = (FLAGS.num_positive_examples +\n                 generation.POSITIVE_EXAMPLES_PER_IMAGE -\n                 1) // generation.POSITIVE_EXAMPLES_PER_IMAGE\n    num_batches = (num_pages + FLAGS.num_pages_per_batch -\n                   1) // FLAGS.num_pages_per_batch\n    batch_nums = pipeline | beam.transforms.Create(list(range(num_batches)))\n    pages = batch_nums | beam.ParDo(\n        generation.PageGenerationDoFn(\n            num_pages_per_batch=FLAGS.num_pages_per_batch,\n            vexflow_generator_command=FLAGS.vexflow_generator_command,\n            svg_to_png_command=FLAGS.svg_to_png_command))\n\n    def noise_fn(image):\n      # TODO(ringw): Add better noise, maybe using generative adversarial\n      # networks trained on real scores from IMSLP.\n      return image_noise.gaussian_noise(image_noise.random_rotation(image))\n\n    examples = pages | beam.ParDo(\n        generation.PatchExampleDoFn(\n            negative_example_distance=FLAGS.negative_example_distance,\n            patch_width=FLAGS.patch_width,\n            negative_to_positive_example_ratio=FLAGS\n            .negative_to_positive_example_ratio,\n            noise_fn=noise_fn))\n    examples |= beam.io.WriteToTFRecord(\n        FLAGS.examples_path,\n        beam.coders.ProtoCoder(tf.train.Example),\n        num_shards=FLAGS.num_shards)\n\n\nif __name__ == '__main__':\n  app.run(main)\n"
  },
  {
    "path": "moonlight/util/BUILD",
    "content": "# Description:\n# General utilities for OMR.\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"util\",\n    deps = [\n        \":functional_ops\",\n        \":memoize\",\n        \":more_iter_tools\",\n        \":patches\",\n        \":run_length\",\n        \":segments\",\n    ],\n)\n\npy_library(\n    name = \"functional_ops\",\n    srcs = [\"functional_ops.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [],  # tensorflow dep\n)\n\npy_test(\n    name = \"functional_ops_test\",\n    srcs = [\"functional_ops_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":functional_ops\",\n        # disable_tf2\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"memoize\",\n    srcs = [\"memoize.py\"],\n    srcs_version = \"PY2AND3\",\n)\n\npy_library(\n    name = \"more_iter_tools\",\n    srcs = [\"more_iter_tools.py\"],\n    srcs_version = \"PY2AND3\",\n)\n\npy_test(\n    name = \"more_iter_tools_test\",\n    srcs = [\"more_iter_tools_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":more_iter_tools\",\n        # absl dep\n        # absl/testing dep\n        # numpy dep\n        # six dep\n    ],\n)\n\npy_library(\n    name = \"patches\",\n    srcs = [\"patches.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [],  # tensorflow dep\n)\n\npy_test(\n    name = \"patches_test\",\n    srcs = [\"patches_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":patches\",\n        # disable_tf2\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"run_length\",\n    srcs = [\"run_length.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # tensorflow dep\n        # tensorflow.contrib.image py dep\n    ],\n)\n\npy_test(\n    name = \"run_length_test\",\n    srcs = [\"run_length_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":run_length\",\n        # 
disable_tf2\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"segments\",\n    srcs = [\"segments.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # enum34 dep\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"segments_test\",\n    size = \"small\",\n    srcs = [\"segments_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":segments\",\n        # disable_tf2\n        # absl/testing dep\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/util/functional_ops.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Functional op helpers.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\n\ndef flat_map_fn(fn, elems, dtype=None):\n  \"\"\"Flat maps `fn` on items unpacked from `elems` on dimension 0.\n\n  Analogous to `tf.map_fn`, but concatenates the result(s) along dimension 0.\n\n  Args:\n    fn: The callable to be performed.\n    elems: The tensor of elements to apply `fn` to.\n    dtype: The dtype of the output of `fn`.\n\n  Returns:\n    A tensor with the same rank as the input, and same dimensions except for\n        dimension 0. The function results for each element, concatenated along\n        dimension 0.\n  \"\"\"\n  elems = tf.convert_to_tensor(elems)\n  n = tf.shape(elems)[0]\n\n  zero_elem = tf.zeros(tf.shape(elems)[1:], elems.dtype)\n  dummy_output = fn(zero_elem)\n  output_elem_shape = tf.shape(dummy_output)[1:]\n\n  initial_results = tf.zeros(\n      tf.concat([[0], output_elem_shape], axis=0), dtype=dtype or elems.dtype)\n\n  def compute(i, results):\n    elem_results = fn(elems[i])\n    return i + 1, tf.concat([results, elem_results], axis=0)\n\n  return tf.while_loop(\n      lambda i, _: i < n,\n      compute, [0, initial_results],\n      shape_invariants=[tf.TensorShape(()),\n                        tf.TensorShape(None)],\n      parallel_iterations=1)[1]\n"
  },
  {
    "path": "moonlight/util/functional_ops_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the functional ops helpers.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import functional_ops\n\n\nclass FunctionalOpsTest(tf.test.TestCase):\n\n  def testFlatMap(self):\n    with self.test_session():\n      items = functional_ops.flat_map_fn(tf.range, [1, 3, 0, 5])\n      self.assertAllEqual(items.eval(), [0, 0, 1, 2, 0, 1, 2, 3, 4])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/util/memoize.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Simple memoizer for a function/method/property.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n\nclass MemoizedFunction(object):\n  \"\"\"Decorates a function to be memoized.\n\n  Caches all invocations of the function with unique arguments. The arguments\n  must be hashable.\n\n  Decorated functions are not threadsafe. This decorator is currently used for\n  TensorFlow graph construction, which happens in a single thread.\n  \"\"\"\n\n  def __init__(self, function):\n    self._function = function\n    self._results = {}\n\n  def __call__(self, *args):\n    \"\"\"Calls the function to be memoized.\n\n    Args:\n      *args: The args to pass through to the function. Keyword arguments are not\n        supported.\n\n    Raises:\n      TypeError if an argument is unhashable.\n\n    Returns:\n      The memoized return value of the wrapped function. The return value will\n      be computed exactly once for each unique argument tuple.\n    \"\"\"\n    if args in self._results:\n      return self._results[args]\n    self._results[args] = self._function(*args)\n    return self._results[args]\n"
  },
  {
    "path": "moonlight/util/more_iter_tools.py",
    "content": "\"\"\"More iterator utilities.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport random\n\n\ndef iter_sample(iterator, count, rand=None):\n  \"\"\"Performs reservoir sampling on an iterator.\n\n  The output is a list. The entire iterator must be read in one shot to\n  determine any output element, and `count` elements need to be stored in\n  memory.\n\n  Args:\n    iterator: An iterator/generator.\n    count: The number of elements to sample.\n    rand: Optional random object which is already seeded.\n\n  Returns:\n    A list with length `count`, or the contents of `iterator` if smaller.\n  \"\"\"\n\n  rand = rand or random.Random()\n  result = []\n  for index, elem in enumerate(iterator):\n    # Fill the result with count elements.\n    if index < count:\n      result.append(elem)\n\n    # Replace an existing element uniformly randomly, but the probability of\n    # replacing any element is steadily decreasing.\n    random_index = rand.randint(0, index)\n    if random_index < count:\n      result[random_index] = elem\n\n  return result\n"
  },
  {
    "path": "moonlight/util/more_iter_tools_test.py",
    "content": "\"\"\"Tests for more_iter_tools.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport random\n\nfrom absl.testing import absltest\nfrom moonlight.util import more_iter_tools\nimport numpy as np\nfrom six import moves\n\n\nclass MoreIterToolsTest(absltest.TestCase):\n\n  def testSample_count_0(self):\n    self.assertEqual([], more_iter_tools.iter_sample(moves.range(100), 0))\n\n  def testSample_iter_empty(self):\n    self.assertEqual([], more_iter_tools.iter_sample(moves.range(0), 10))\n\n  def testSample_distribution(self):\n    sample = more_iter_tools.iter_sample(\n        moves.range(0, 100000), 9999, rand=random.Random(12345))\n    self.assertEqual(9999, len(sample))\n\n    # Create a histogram with 10 bins.\n    bins = np.bincount([elem // 10000 for elem in sample])\n    self.assertEqual(10, len(bins))\n\n    # Samples should be distributed roughly uniformly into bins.\n    expected_bin_count = 9999 // 10\n    for bin_count in bins:\n      self.assertTrue(\n          np.allclose(bin_count, expected_bin_count, rtol=0.1),\n          '{} within 10% of {}'.format(bin_count, expected_bin_count))\n\n\nif __name__ == '__main__':\n  absltest.main()\n"
  },
  {
    "path": "moonlight/util/patches.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Extracts patches from image(s).\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport six\nimport tensorflow as tf\n\n\ndef patches_1d(images, patch_width):\n  \"\"\"Extract patches along the last dimension of `images`.\n\n  Thin wrapper around `tf.extract_image_patches` that only takes horizontal\n  slices of the input, and reshapes the output to N-D images.\n\n  Args:\n    images: The image(s) to extract patches from. Shape at least 2D, with shape\n      images_shape + (height, width). The image height and width must be\n      statically known (set on `images.get_shape()`).\n    patch_width: Width of a patch. int or long (must be statically known).\n\n  Returns:\n    Patches extracted from each image in `images`. 
Shape\n        images_shape + (width - patch_width + 1, height, width).\n\n  Raises:\n    ValueError: If patch_width is not an int or long.\n  \"\"\"\n  if not isinstance(patch_width, six.integer_types):\n    raise ValueError(\"patch_width must be an integer\")\n  # The shape of the input, excluding image height and width.\n  images_shape = tf.shape(images)[:-2]\n  # num_images is not necessary to know at graph creation time.\n  num_images = tf.reduce_prod(images_shape)\n  # image_height must be statically known for tf.extract_image_patches.\n  image_height = int(images.get_shape()[-2])\n  image_width = tf.shape(images)[-1]\n\n  def do_extract_patches():\n    \"\"\"Returns the image patches, assuming images.shape[0] > 0.\"\"\"\n    # patch_width must be an int, not a Tensor.\n    # Reshape to (num_images, height, width, channels).\n    images_nhwc = tf.reshape(\n        images, tf.stack([num_images, image_height, image_width, 1]))\n    patches = tf.extract_image_patches(\n        images_nhwc, [1, image_height, patch_width, 1],\n        strides=[1, 1, 1, 1],\n        rates=[1, 1, 1, 1],\n        padding=\"VALID\")\n    patches_shape = tf.concat(\n        [images_shape, [tf.shape(patches)[2], image_height, patch_width]],\n        axis=0)\n    return tf.reshape(patches, patches_shape)\n\n  def empty_patches():\n    return tf.zeros(\n        tf.concat([images_shape, [0, image_height, patch_width]], axis=0),\n        images.dtype)\n\n  return tf.cond(tf.greater(num_images, 0), do_extract_patches, empty_patches)\n"
  },
  {
    "path": "moonlight/util/patches_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the patches utility.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import patches\nfrom six import moves\n\n\nclass PatchesTest(tf.test.TestCase):\n\n  def test2D(self):\n    image_t = tf.random_uniform((100, 200))\n    image_t.set_shape((100, 200))\n    patch_width = 10\n    patches_t = patches.patches_1d(image_t, patch_width)\n    with self.test_session() as sess:\n      image_arr, patches_arr = sess.run((image_t, patches_t))\n      self.assertEqual(patches_arr.shape,\n                       (200 - patch_width + 1, 100, patch_width))\n      for i in moves.xrange(patches_arr.shape[0]):\n        self.assertAllEqual(patches_arr[i], image_arr[:, i:i + patch_width])\n\n  def test4D(self):\n    height = 15\n    width = 20\n    image_t = tf.random_uniform((4, 8, height, width))\n    image_t.set_shape((None, None, height, width))\n    patch_width = 10\n    patches_t = patches.patches_1d(image_t, patch_width)\n    with self.test_session() as sess:\n      image_arr, patches_arr = sess.run((image_t, patches_t))\n      self.assertEqual(patches_arr.shape,\n                       (4, 8, width - patch_width + 1, height, patch_width))\n      for i in moves.xrange(patches_arr.shape[0]):\n        for j in moves.xrange(patches_arr.shape[1]):\n      
    for k in moves.xrange(patches_arr.shape[2]):\n            self.assertAllEqual(patches_arr[i, j, k],\n                                image_arr[i, j, :, k:k + patch_width])\n\n  def testEmpty(self):\n    height = 15\n    width = 20\n    image_t = tf.zeros((1, 2, 0, 3, 4, height, width))\n    patch_width = 10\n    patches_t = patches.patches_1d(image_t, patch_width)\n    with self.test_session():\n      self.assertEqual(patches_t.eval().shape,\n                       (1, 2, 0, 3, 4, 0, height, patch_width))\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/util/run_length.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Image run length encoding.\n\nEach run is a subsequence of consecutive pixels with the same value. The\nrun-length encoding is the list of all runs in order, with their lengths and\nvalues.\n\nSee: https://en.wikipedia.org/wiki/Run-length_encoding\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow.contrib import image as contrib_image\n\n\ndef vertical_run_length_encoding(image):\n  \"\"\"Returns the runs in each column of the image.\n\n  A run is a subsequence of consecutive pixels that all have the same value.\n  Internally, we treat the image as batches of single-column images in order to\n  use connected component analysis.\n\n  Args:\n    image: A 2D image.\n\n  Returns:\n    The column index of each vertical run.\n    The value in the image for each vertical run.\n    The length of each vertical run.\n  \"\"\"\n  with tf.name_scope('run_length_encoding'):\n    image = tf.convert_to_tensor(image, name='image', dtype=tf.bool)\n    # Set arbitrary, distinct, nonzero values for True and False pixels.\n    # True pixels map to 2, and False pixels map to 1.\n    # Transpose the image, and insert an extra dimension. 
This creates a batch\n    # of \"images\" for connected component analysis, where each image is a single\n    # column of the original image. Therefore, the connected components are\n    # actually runs from a single column.\n    components = contrib_image.connected_components(\n        tf.to_int32(tf.expand_dims(tf.transpose(image), axis=1)) + 1)\n    # Flatten in order to use with unsorted segment ops.\n    flat_components = tf.reshape(components, [-1])\n\n    num_components = tf.maximum(0, tf.reduce_max(components) + 1)\n    # Get the column index corresponding to each pixel present in\n    # flat_components.\n    column_indices = tf.reshape(\n        tf.tile(\n            # Count 0 through `width - 1` on axis 0, then repeat each element\n            # `height` times.\n            tf.expand_dims(tf.range(tf.shape(image)[1]), axis=1),\n            multiples=[1, tf.shape(image)[0]]),\n        # pyformat: disable\n        [-1])\n    # Take the column index for each component. For each component index k,\n    # we want any entry of column_indices where the corresponding entry in\n    # flat_components is k. column_indices should be the same for all pixels in\n    # the same component, so we can just take the max of all of them. Disregard\n    # component 0, which just represents all of the zero pixels across the\n    # entire array (should be empty, because we pass in a nonzero image).\n    component_columns = tf.unsorted_segment_max(column_indices, flat_components,\n                                                num_components)[1:]\n    # Take the original value of each component. 
Again, the value should be the\n    # same for all pixels in a single component, so we can just take the max.\n    component_values = tf.unsorted_segment_max(\n        tf.to_int32(tf.reshape(tf.transpose(image), [-1])), flat_components,\n        num_components)[1:]\n    # Take the length of each component (run), by counting the number of pixels\n    # in the component.\n    component_lengths = tf.to_int32(tf.bincount(flat_components)[1:])\n    return component_columns, component_values, component_lengths\n"
  },
  {
    "path": "moonlight/util/run_length_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for run length encoding.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import run_length\n\n\nclass RunLengthTest(tf.test.TestCase):\n\n  def testEmpty(self):\n    with self.test_session() as sess:\n      columns, values, lengths = sess.run(\n          run_length.vertical_run_length_encoding(tf.zeros((0, 0), tf.bool)))\n    self.assertAllEqual(columns, [])\n    self.assertAllEqual(values, [])\n    self.assertAllEqual(lengths, [])\n\n  def testBooleanImage(self):\n    img = tf.cast(\n        [\n            [0, 0, 1, 0, 0, 1],\n            # pyformat: disable\n            [1, 1, 1, 1, 1, 1],\n            [0, 0, 1, 1, 1, 1],\n            [0, 1, 1, 0, 1, 0]\n        ],\n        tf.bool)\n    with self.test_session() as sess:\n      columns, values, lengths = sess.run(\n          run_length.vertical_run_length_encoding(img))\n    self.assertAllEqual(columns,\n                        [0] * 3 + [1] * 4 + [2] + [3] * 3 + [4] * 2 + [5] * 2)\n    self.assertAllEqual(values, [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0])\n    self.assertAllEqual(lengths, [1, 1, 2, 1, 1, 1, 1, 4, 1, 2, 1, 1, 3, 3, 1])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/util/segments.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Segment/run length utilities.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport enum\nimport tensorflow as tf\n\n\nclass SegmentsMode(enum.Enum):\n  \"\"\"The valid modes for segmentation.\"\"\"\n  # Return the start position of each segment\n  STARTS = 1\n  # Return the floored center position of each segment\n  CENTERS = 2\n\n\ndef true_segments_1d(segments,\n                     mode=SegmentsMode.CENTERS,\n                     max_gap=0,\n                     min_length=0,\n                     name=None):\n  \"\"\"Labels contiguous True runs in segments.\n\n  Args:\n    segments: 1D boolean tensor.\n    mode: The SegmentsMode. Returns the start of each segment (STARTS), or the\n      rounded center of each segment (CENTERS).\n    max_gap: Fill gaps of length at most `max_gap` between true segments. int.\n    min_length: Minimum length of a returned segment. int.\n    name: Optional name for the op.\n\n  Returns:\n    run_centers: int32 tensor. 
Depending on `mode`, either the start of each\n        True run, or the (rounded) center of each True run.\n    run_lengths: int32; the lengths of each True run.\n  \"\"\"\n  with tf.name_scope(name, \"true_segments\", [segments]):\n    segments = tf.convert_to_tensor(segments, tf.bool)\n    run_starts, run_lengths = _segments_1d(segments, mode=SegmentsMode.STARTS)\n    # Take only the True runs. After whichever run is True first, the True runs\n    # are every other run.\n    first_run = tf.cond(\n        # First value is False, or all values are False. Handles empty segments\n        # correctly.\n        tf.logical_or(tf.reduce_any(segments[0:1]), ~tf.reduce_any(segments)),\n        lambda: tf.constant(0),\n        lambda: tf.constant(1))\n\n    num_runs = tf.shape(run_starts)[0]\n    run_nums = tf.range(num_runs)\n    is_true_run = tf.equal(run_nums % 2, first_run % 2)\n    # Find gaps between True runs that can be merged.\n    is_gap = tf.logical_and(\n        tf.not_equal(run_nums % 2, first_run % 2),\n        tf.logical_and(\n            tf.greater(run_nums, first_run), tf.less(run_nums, num_runs - 1)))\n    fill_gap = tf.logical_and(is_gap, tf.less_equal(run_lengths, max_gap))\n\n    # Segment the consecutive runs of True or False values based on whether they\n    # are True, or are a gap of False values that can be bridged. Then, flatten\n    # the runs of runs.\n    runs_to_merge = tf.logical_or(is_true_run, fill_gap)\n    run_of_run_starts, _ = _segments_1d(runs_to_merge, mode=SegmentsMode.STARTS)\n\n    # Get the start of every new run from the original run starts.\n    merged_run_starts = tf.gather(run_starts, run_of_run_starts)\n    # Make an array mapping the original runs to their run of runs. 
Increment\n    # the number for every run of run start except for the first one, so that\n    # the array has values from 0 to num_run_of_runs.\n    merged_run_inds = tf.cumsum(\n        tf.sparse_to_dense(\n            sparse_indices=tf.cast(run_of_run_starts[1:, None], tf.int64),\n            output_shape=tf.cast(num_runs[None], tf.int64),\n            sparse_values=tf.ones_like(run_of_run_starts[1:])))\n    # Sum the lengths of the original runs that were merged.\n    merged_run_lengths = tf.segment_sum(run_lengths, merged_run_inds)\n\n    if mode is SegmentsMode.CENTERS:\n      merged_starts_or_centers = (\n          merged_run_starts + tf.floordiv(merged_run_lengths - 1, 2))\n    else:\n      merged_starts_or_centers = merged_run_starts\n\n    # If there are no true values, increment first_run to 1, so we will skip\n    # the single (false) run.\n    first_run += tf.to_int32(tf.logical_not(tf.reduce_any(segments)))\n\n    merged_starts_or_centers = merged_starts_or_centers[first_run::2]\n    merged_run_lengths = merged_run_lengths[first_run::2]\n\n    # Only take segments at least min_length long.\n    is_long_enough = tf.greater_equal(merged_run_lengths, min_length)\n    is_long_enough.set_shape([None])\n    merged_starts_or_centers = tf.boolean_mask(merged_starts_or_centers,\n                                               is_long_enough)\n    merged_run_lengths = tf.boolean_mask(merged_run_lengths, is_long_enough)\n\n    return merged_starts_or_centers, merged_run_lengths\n\n\ndef _segments_1d(values, mode, name=None):\n  \"\"\"Labels consecutive runs of the same value.\n\n  Args:\n    values: 1D tensor of any type.\n    mode: The SegmentsMode. 
Returns the start of each segment (STARTS), or the\n      rounded center of each segment (CENTERS).\n    name: Optional name for the op.\n\n  Returns:\n    run_centers: int32 tensor; the centers of each run with the same consecutive\n        values.\n    run_lengths: int32 tensor; the lengths of each run.\n\n  Raises:\n    ValueError: if mode is not recognized.\n  \"\"\"\n  with tf.name_scope(name, \"segments\", [values]):\n\n    def do_segments(values):\n      \"\"\"Actually does segmentation.\n\n      Args:\n        values: 1D tensor of any type. Non-empty.\n\n      Returns:\n        run_centers: int32 tensor\n        run_lengths: int32 tensor\n\n      Raises:\n        ValueError: if mode is not recognized.\n      \"\"\"\n      length = tf.shape(values)[0]\n      values = tf.convert_to_tensor(values)\n      # The first run has id 0, so we don't increment the id.\n      # Otherwise, the id is incremented when the value changes.\n      run_start_bool = tf.concat(\n          [[False], tf.not_equal(values[1:], values[:-1])], axis=0)\n      # Cumulative sum the run starts to get the run ids.\n      segment_ids = tf.cumsum(tf.cast(run_start_bool, tf.int32))\n      if mode is SegmentsMode.STARTS:\n        run_centers = tf.segment_min(tf.range(length), segment_ids)\n      elif mode is SegmentsMode.CENTERS:\n        run_centers = tf.segment_mean(\n            tf.cast(tf.range(length), tf.float32), segment_ids)\n        run_centers = tf.cast(tf.floor(run_centers), tf.int32)\n      else:\n        raise ValueError(\"Unexpected mode: %s\" % mode)\n      run_lengths = tf.segment_sum(tf.ones([length], tf.int32), segment_ids)\n      return run_centers, run_lengths\n\n    def empty_segments():\n      return (tf.zeros([0], tf.int32), tf.zeros([0], tf.int32))\n\n    return tf.cond(\n        tf.greater(tf.shape(values)[0], 0), lambda: do_segments(values),\n        empty_segments)\n\n\ndef peaks(values, minval=None, invalidate_distance=0, name=None):\n  \"\"\"Labels peaks in 
values.\n\n  Args:\n    values: 1D tensor of a numeric type.\n    minval: Minimum value which is considered a peak.\n    invalidate_distance: Invalidates nearby potential peaks. The peaks are\n      searched sequentially by descending value, and from left to right for\n      equal values. Once a peak is found in this order, it invalidates any peaks\n      yet to be seen that are <= invalidate_distance away. A distance of 0\n      effectively produces no invalidation.\n    name: Optional name for the op.\n\n  Returns:\n    peak_centers: The (rounded) centers of each peak, which are locations where\n        the value is higher than the value before and after. If there is a run\n        of equal values at the peak, the rounded center of the run is returned.\n        int32 1D tensor.\n  \"\"\"\n  with tf.name_scope(name, \"peaks\", [values]):\n    values = tf.convert_to_tensor(values, name=\"values\")\n    invalidate_distance = tf.convert_to_tensor(\n        invalidate_distance, name=\"invalidate_distance\", dtype=tf.int32)\n    # Segment the values and find local maxima.\n    # Take the center of each run of consecutive equal values.\n    segment_centers, _ = _segments_1d(values, mode=SegmentsMode.CENTERS)\n    segment_values = tf.gather(values, segment_centers)\n    # If we have zero or one segments, there are no peaks. 
Just use zeros as the\n    # edge values in that case.\n    first_val, second_val, penultimate_val, last_val = tf.cond(\n        # pyformat: disable\n        tf.greater_equal(tf.shape(segment_values)[0], 2),\n        lambda: tuple(segment_values[i] for i in (0, 1, -2, -1)),\n        lambda: tuple(tf.constant(0, values.dtype) for i in range(4)))\n    # Each segment must be greater than the segment before and after it.\n    segment_is_peak = tf.concat(\n        [[first_val > second_val],\n         tf.greater(segment_values[1:-1],\n                    tf.maximum(segment_values[:-2], segment_values[2:])),\n         [last_val > penultimate_val]],\n        axis=0)\n    if minval is not None:\n      # Filter the peaks by minval.\n      segment_is_peak = tf.logical_and(segment_is_peak,\n                                       tf.greater_equal(segment_values, minval))\n\n    # Get the center coordinates of each peak, and sort by descending value.\n    all_peaks = tf.boolean_mask(segment_centers, segment_is_peak)\n    num_peaks = tf.shape(all_peaks)[0]\n    peak_values = tf.boolean_mask(segment_values, segment_is_peak)\n    _, peak_order = tf.nn.top_k(peak_values, k=num_peaks, sorted=True)\n    all_peaks = tf.gather(all_peaks, peak_order)\n    all_peaks.set_shape([None])\n\n    # Loop over the peaks, accepting one at a time and possibly invalidating\n    # other ones.\n    def loop_condition(_, current_peaks):\n      return tf.shape(current_peaks)[0] > 0\n\n    def loop_body(accepted_peaks, current_peaks):\n      peak = current_peaks[0]\n      remaining_peaks = current_peaks[1:]\n\n      keep_peaks = tf.greater(\n          tf.abs(remaining_peaks - peak), invalidate_distance)\n      remaining_peaks = tf.boolean_mask(remaining_peaks, keep_peaks)\n\n      return tf.concat([accepted_peaks, [peak]], axis=0), remaining_peaks\n\n    accepted_peaks = tf.while_loop(\n        loop_condition,\n        loop_body, [tf.zeros([0], all_peaks.dtype), all_peaks],\n        
shape_invariants=[tf.TensorShape([None]),\n                          tf.TensorShape([None])])[0]\n    # Sort the peaks by index.\n    # TODO(ringw): Add a tf.sort op that sorts in ascending order.\n    sorted_negative_peaks, _ = tf.nn.top_k(\n        -accepted_peaks, k=tf.shape(accepted_peaks)[0], sorted=True)\n    return -sorted_negative_peaks\n"
  },
  {
    "path": "moonlight/util/segments_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the segmentation utilities.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom absl.testing import absltest\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.util import segments\n\n\n# TODO(ringw): Remove the multiple inheritance once tf.test.TestCase extends\n# absltest.\nclass SegmentsTest(tf.test.TestCase, absltest.TestCase):\n\n  def test_true_segments_1d(self):\n    # Arbitrary boolean array to get True and False runs from.\n    values = tf.constant([True, True, True, False, True, False, True, True])\n    centers, lengths = segments.true_segments_1d(\n        values, mode=segments.SegmentsMode.CENTERS)\n    with self.test_session():\n      self.assertAllEqual(centers.eval(), [1, 4, 6])\n      self.assertAllEqual(lengths.eval(), [3, 1, 2])\n    starts, lengths = segments.true_segments_1d(\n        values, mode=segments.SegmentsMode.STARTS)\n    with self.test_session():\n      self.assertAllEqual(starts.eval(), [0, 4, 6])\n      self.assertAllEqual(lengths.eval(), [3, 1, 2])\n\n  def test_true_segments_1d_large(self):\n    # Arbitrary boolean array to get True and False runs from.\n    run_values = [False, True, False, True, False, True, False, True]\n    run_lengths = [3, 5, 2, 6, 4, 8, 7, 1]\n    values = tf.constant(np.repeat(run_values, run_lengths))\n    
centers, lengths = segments.true_segments_1d(\n        values, mode=segments.SegmentsMode.CENTERS)\n    with self.test_session():\n      self.assertAllEqual(centers.eval(), [\n          sum(run_lengths[:1]) + (run_lengths[1] - 1) // 2,\n          sum(run_lengths[:3]) + (run_lengths[3] - 1) // 2,\n          sum(run_lengths[:5]) + (run_lengths[5] - 1) // 2,\n          sum(run_lengths[:7]) + (run_lengths[7] - 1) // 2\n      ])\n      self.assertAllEqual(lengths.eval(), run_lengths[1::2])\n    starts, lengths = segments.true_segments_1d(\n        values, mode=segments.SegmentsMode.STARTS)\n    with self.test_session():\n      self.assertAllEqual(starts.eval(), [\n          sum(run_lengths[:1]),\n          sum(run_lengths[:3]),\n          sum(run_lengths[:5]),\n          sum(run_lengths[:7])\n      ])\n      self.assertAllEqual(lengths.eval(), run_lengths[1::2])\n\n  def test_true_segments_1d_empty(self):\n    for mode in list(segments.SegmentsMode):\n      for max_gap in [0, 1]:\n        centers, lengths = segments.true_segments_1d([],\n                                                     mode=mode,\n                                                     max_gap=max_gap)\n        with self.test_session():\n          self.assertAllEqual(centers.eval(), [])\n          self.assertAllEqual(lengths.eval(), [])\n\n  def test_true_segments_1d_max_gap(self):\n    # Arbitrary boolean array to get True and False runs from.\n    values = tf.constant([\n        False, False,\n        True, True, True,\n        False, False,\n        True,\n        False, False, False, False, False, False,\n        True, True, True, True,\n        False,\n        True, True,\n        False, False,\n        True,\n    ])  # pyformat: disable\n    centers, lengths = segments.true_segments_1d(values, max_gap=0)\n    with self.test_session():\n      self.assertAllEqual(centers.eval(), [3, 7, 15, 19, 23])\n      self.assertAllEqual(lengths.eval(), [3, 1, 4, 2, 1])\n    centers, lengths = 
segments.true_segments_1d(values, max_gap=1)\n    with self.test_session():\n      self.assertAllEqual(centers.eval(), [3, 7, 17, 23])\n      self.assertAllEqual(lengths.eval(), [3, 1, 7, 1])\n    for max_gap in range(2, 6):\n      centers, lengths = segments.true_segments_1d(values, max_gap=max_gap)\n      with self.test_session():\n        self.assertAllEqual(centers.eval(), [4, 18])\n        self.assertAllEqual(lengths.eval(), [6, 10])\n    centers, lengths = segments.true_segments_1d(values, max_gap=6)\n    with self.test_session():\n      self.assertAllEqual(centers.eval(), [12])\n      self.assertAllEqual(lengths.eval(), [22])\n\n  # TODO(ringw): Make these tests parameterized when absl is released.\n  def test_true_segments_1d_all_false_length_1(self):\n    self._test_true_segments_1d_all_false(1)\n\n  def test_true_segments_1d_all_false_length_2(self):\n    self._test_true_segments_1d_all_false(2)\n\n  def test_true_segments_1d_all_false_length_8(self):\n    self._test_true_segments_1d_all_false(8)\n\n  def test_true_segments_1d_all_false_length_11(self):\n    self._test_true_segments_1d_all_false(11)\n\n  def _test_true_segments_1d_all_false(self, length):\n    centers, lengths = segments.true_segments_1d(tf.zeros(length, tf.bool))\n    with self.test_session():\n      self.assertAllEqual(centers.eval(), [])\n      self.assertAllEqual(lengths.eval(), [])\n\n  def test_true_segments_1d_min_length_0(self):\n    self._test_true_segments_1d_min_length(0)\n\n  def test_true_segments_1d_min_length_1(self):\n    self._test_true_segments_1d_min_length(1)\n\n  def test_true_segments_1d_min_length_2(self):\n    self._test_true_segments_1d_min_length(2)\n\n  def test_true_segments_1d_min_length_3(self):\n    self._test_true_segments_1d_min_length(3)\n\n  def test_true_segments_1d_min_length_4(self):\n    self._test_true_segments_1d_min_length(4)\n\n  def test_true_segments_1d_min_length_5(self):\n    self._test_true_segments_1d_min_length(5)\n\n  def 
test_true_segments_1d_min_length_6(self):\n    self._test_true_segments_1d_min_length(6)\n\n  def _test_true_segments_1d_min_length(self, min_length):\n    # Arbitrary boolean array to get True and False runs from.\n    values = tf.constant([\n        False, False, False,\n        True,\n        False,\n        True, True,\n        False,\n        True,\n        False,\n        True, True, True, True,\n        False,\n        True, True,\n    ])  # pyformat: disable\n    all_centers = np.asarray([3, 5, 8, 11, 15])\n    all_lengths = np.asarray([1, 2, 1, 4, 2])\n    expected_centers = all_centers[all_lengths >= min_length]\n    expected_lengths = all_lengths[all_lengths >= min_length]\n    centers, lengths = segments.true_segments_1d(values, min_length=min_length)\n    with self.test_session():\n      self.assertAllEqual(expected_centers, centers.eval())\n      self.assertAllEqual(expected_lengths, lengths.eval())\n\n  def test_peaks(self):\n    values = tf.constant([5, 3, 1, 1, 0, 1, 2, 3, 3, 3, 2, 3, 4, 1, 2])\n    with self.test_session():\n      self.assertAllEqual(segments.peaks(values).eval(), [0, 8, 12, 14])\n      self.assertAllEqual(segments.peaks(values, minval=3).eval(), [0, 8, 12])\n\n  def test_peaks_empty(self):\n    with self.test_session():\n      self.assertAllEqual(segments.peaks([]).eval(), [])\n\n  def test_peaks_invalidate_distance(self):\n    values = tf.constant([0, 0, 10, 0, 5, 3, 2, 1, 2, 3, 8, 8, 7, 8])\n    with self.test_session():\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=0).eval(), [2, 4, 10, 13])\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=1).eval(), [2, 4, 10, 13])\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=2).eval(), [2, 10, 13])\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=3).eval(), [2, 10])\n      self.assertAllEqual(\n          segments.peaks(values, 
invalidate_distance=4).eval(), [2, 10])\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=7).eval(), [2, 10])\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=8).eval(), [2, 13])\n      self.assertAllEqual(\n          segments.peaks(values, invalidate_distance=99).eval(), [2])\n\n  def test_peaks_array_filled_with_same_value(self):\n    for value in (0, 42, 4.2):\n      arr = tf.fill([100], value)\n      with self.test_session():\n        self.assertEmpty(segments.peaks(arr).eval())\n\n  def test_peaks_one_segment(self):\n    values = tf.constant([0, 0, 0, 0, 3, 0, 0, 0, 0])\n    with self.test_session():\n      self.assertAllEqual(segments.peaks(values).eval(), [4])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/vision/BUILD",
    "content": "# Description:\n# General computer vision routines for OMR.\n\npackage(\n    default_visibility = [\"//moonlight:__subpackages__\"],\n)\n\nlicenses([\"notice\"])  # Apache 2.0\n\npy_library(\n    name = \"vision\",\n    deps = [\n        \":hough\",\n        \":images\",\n        \":morphology\",\n    ],\n)\n\npy_library(\n    name = \"images\",\n    srcs = [\"images.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        # tensorflow dep\n        # tensorflow.contrib.image py dep\n    ],\n)\n\npy_test(\n    name = \"images_test\",\n    srcs = [\"images_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":images\",\n        # disable_tf2\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"hough\",\n    srcs = [\"hough.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \"//moonlight/util:segments\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"hough_test\",\n    srcs = [\"hough_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":hough\",\n        # disable_tf2\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n\npy_library(\n    name = \"morphology\",\n    srcs = [\"morphology.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":images\",\n        # tensorflow dep\n    ],\n)\n\npy_test(\n    name = \"morphology_test\",\n    srcs = [\"morphology_test.py\"],\n    srcs_version = \"PY2AND3\",\n    deps = [\n        \":morphology\",\n        # disable_tf2\n        # numpy dep\n        # tensorflow dep\n    ],\n)\n"
  },
  {
    "path": "moonlight/vision/hough.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Hough transform for line detection.\n\nTransforms a boolean image to the Hough space, where each entry corresponds to a\nline parameterized by the angle `theta` clockwise from vertical (in radians),\nand the distance `rho` (in pixels; the distance from coordinate `(0, 0)` in the\nimage to the closest point in the line).\n\nFor performance, the image should be sparse, containing mostly False elements,\nbecause `tf.where(image)` will be called.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.util import segments\n\n\ndef hough_lines(image, thetas):\n  \"\"\"Hough transform of a boolean image.\n\n  Args:\n    image: The image. 2D boolean tensor. Should be sparse (mostly Falses).\n    thetas: 1D float32 tensor of possible angles from the vertical for the line.\n\n  Returns:\n    The Hough space for the image. 
Shape `(num_theta, num_rho)`, where `num_rho`\n        is `ceil(sqrt(height**2 + width**2))`.\n  \"\"\"\n  coords = tf.cast(tf.where(image), thetas.dtype)\n  rho = tf.cast(\n      # x cos theta + y sin theta\n      tf.expand_dims(coords[:, 1], 0) * tf.cos(thetas)[:, None] +\n      tf.expand_dims(coords[:, 0], 0) * tf.sin(thetas)[:, None],\n      tf.int32)\n\n  height = tf.cast(tf.shape(image)[0], tf.float64)\n  width = tf.cast(tf.shape(image)[1], tf.float64)\n  num_rho = tf.cast(tf.ceil(tf.sqrt(height * height + width * width)), tf.int32)\n  hough_bins = _bincount_2d(rho, num_rho)\n  return hough_bins\n\n\ndef hough_peaks(hough_bins, thetas, minval=0, invalidate_distance=0):\n  \"\"\"Finds the peak lines in Hough space.\n\n  Args:\n    hough_bins: Hough bins returned by `hough_lines`.\n    thetas: Angles; argument given to `hough_lines`.\n    minval: Minimum vote count for a Hough bin to be considered. int or float.\n    invalidate_distance: When selecting a line `(rho, theta)`, invalidate all\n      lines with the same theta and `+- invalidate_distance` from `rho`.\n        int32. 
Caveat: this should only be used if all theta values are similar.\n          If thetas cover a wide range, this will invalidate lines that might\n          not even intersect.\n\n  Returns:\n    Tensor of peak rho indices (int32).\n    Tensor of peak theta values (float32).\n  \"\"\"\n  thetas = tf.convert_to_tensor(thetas)\n  bin_score_dtype = thetas.dtype  # floating point score derived from hough_bins\n  minval = tf.convert_to_tensor(minval)\n  if minval.dtype.is_floating:\n    minval = tf.ceil(minval)\n  invalidate_distance = tf.convert_to_tensor(\n      invalidate_distance, dtype=tf.int32)\n  # Choose the theta with the highest bin value for each rho.\n  selected_theta_ind = tf.argmax(hough_bins, axis=0)\n  # Take the Hough bin value for each rho and the selected theta.\n  hough_bins = tf.gather_nd(\n      hough_bins,\n      tf.stack([\n          tf.cast(selected_theta_ind, tf.int32),\n          tf.range(tf.shape(hough_bins)[1])\n      ],\n               axis=1))\n  # hough_bins are integers. 
Subtract a penalty (< 1) for lines that are not\n  # horizontal or vertical, so that we break ties in favor of the more\n  # horizontal or vertical line.\n  infinitesimal = tf.constant(1e-10, bin_score_dtype)\n  # Decrease minval so we don't discard bins that are penalized, if they\n  # originally equalled minval.\n  minval = tf.cast(minval, bin_score_dtype) - infinitesimal\n  selected_thetas = tf.gather(thetas, selected_theta_ind)\n  # min(|sin(t)|, |cos(t)|) is 0 for horizontal and vertical angles, and between\n  # 0 and 1 otherwise.\n  penalty = tf.multiply(\n      tf.minimum(\n          tf.abs(tf.sin(selected_thetas)), tf.abs(tf.cos(selected_thetas))),\n      infinitesimal)\n  bin_score = tf.cast(hough_bins, bin_score_dtype) - penalty\n  # Find the peaks in the 1D hough_bins array.\n  peak_rhos = segments.peaks(\n      bin_score, minval=minval, invalidate_distance=invalidate_distance)\n  # Get the actual angles for each selected peak.\n  peak_thetas = tf.gather(thetas, tf.gather(selected_theta_ind, peak_rhos))\n  return peak_rhos, peak_thetas\n\n\ndef _bincount_2d(values, num_values):\n  \"\"\"Bincounts each row of values.\n\n  Args:\n    values: The values to bincount. 2D integer tensor.\n    num_values: The number of columns of the output. Entries in `values` that\n      are `>= num_values` will be ignored.\n\n  Returns:\n    The bin counts. Shape `(values.shape[0], num_values)`. 
The `i`th row\n        contains the result of\n        `tf.bincount(values[i, :], maxlength=num_values)`.\n  \"\"\"\n  num_rows = tf.shape(values)[0]\n  # Convert the values in each row to a consecutive range of ids that will not\n  # overlap with the other rows.\n  row_values = values + tf.range(num_rows)[:, None] * num_values\n  # Remove entries that would collide with other rows.\n  values_flat = tf.boolean_mask(row_values,\n                                (0 <= values) & (values < num_values))\n  bins_length = num_rows * num_values\n  bins = tf.bincount(values_flat, minlength=bins_length, maxlength=bins_length)\n  return tf.reshape(bins, [num_rows, num_values])\n"
  },
  {
    "path": "moonlight/vision/hough_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for the Hough transform.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.vision import hough\n\n\nclass HoughTest(tf.test.TestCase):\n\n  def testHorizontalLines(self):\n    image = np.asarray(\n        [[0, 0, 0, 0, 0, 0, 0],\n         [0, 1, 1, 1, 1, 1, 0],\n         [0, 0, 0, 0, 0, 0, 0],\n         [0, 0, 0, 0, 0, 0, 0],\n         [1, 1, 1, 1, 1, 1, 1],\n         [0, 0, 0, 0, 0, 0, 0],\n         [1, 1, 1, 1, 0, 0, 0],\n         [0, 0, 0, 0, 0, 0, 0]])  # pyformat: disable\n    thetas = np.asarray([np.pi / 2, np.pi / 4, 0, -np.pi / 4])\n    with self.test_session() as sess:\n      hough_bins = sess.run(hough.hough_lines(image, thetas))\n    self.assertAllEqual(\n        hough_bins,\n        # theta pi/2 gives the horizontal projection (sum each row).\n        [[0, 5, 0, 0, 7, 0, 4, 0, 0, 0, 0],\n         # theta pi/4 rotates the lines counter-clockwise from horizontal, and\n         # higher rho values go down and right into the image.\n         [0, 1, 3, 2, 5, 2, 2, 1, 0, 0, 0],\n         # theta 0 gives the vertical projection (sum each column).\n         [2, 3, 3, 3, 2, 2, 1, 0, 0, 0, 0],\n         # theta -pi/4 rotates the lines counter-clockwise from vertical, and\n         # higher rho values go up and right away 
from the image.\n         [5, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0]])  # pyformat: disable\n\n  def testHoughPeaks_verticalLines(self):\n    image = np.asarray(\n        [[0, 0, 0, 0, 0, 0, 0, 0],\n         [0, 1, 0, 0, 1, 0, 0, 0],\n         [0, 1, 0, 0, 1, 0, 0, 0],\n         [0, 1, 0, 0, 0, 1, 0, 0],\n         [0, 1, 0, 0, 0, 1, 0, 0],\n         [0, 1, 0, 0, 0, 0, 1, 0],\n         [0, 1, 0, 0, 0, 0, 1, 0]])  # pyformat: disable\n    # Test the full range of angles.\n    thetas = np.linspace(-np.pi, np.pi, 101)\n    hough_bins = hough.hough_lines(image, thetas)\n    peak_rho_t, peak_theta_t = hough.hough_peaks(hough_bins, thetas)\n    with self.test_session() as sess:\n      peak_rho, peak_theta = sess.run((peak_rho_t, peak_theta_t))\n    # Vertical line\n    self.assertEqual(peak_rho[0], 1)\n    self.assertAlmostEqual(peak_theta[0], 0)\n    # Rotated line\n    self.assertEqual(peak_rho[1], 3)\n    self.assertAlmostEqual(peak_theta[1], -np.pi / 8, places=1)\n\n  def testHoughPeaks_minval(self):\n    image = np.asarray(\n        [[0, 0, 0, 0, 0],\n         [0, 1, 1, 1, 0],\n         [0, 0, 0, 0, 0]])  # pyformat: disable\n    thetas = np.linspace(0, np.pi / 2, 17)\n    hough_bins = hough.hough_lines(image, thetas)\n    peak_rho_t, peak_theta_t = hough.hough_peaks(hough_bins, thetas, minval=2)\n    with self.test_session() as sess:\n      peak_rho, peak_theta = sess.run((peak_rho_t, peak_theta_t))\n      self.assertEqual(peak_rho.shape, (1,))\n      self.assertEqual(peak_theta.shape, (1,))\n\n  def testHoughPeaks_minvalTooLarge(self):\n    image = np.asarray(\n        [[0, 0, 0, 0, 0],\n         [0, 1, 1, 1, 0],\n         [0, 0, 0, 0, 0]])  # pyformat: disable\n    thetas = np.linspace(0, np.pi / 2, 17)\n    hough_bins = hough.hough_lines(image, thetas)\n    peak_rho_t, peak_theta_t = hough.hough_peaks(hough_bins, thetas, minval=3.1)\n    with self.test_session() as sess:\n      peak_rho, peak_theta = sess.run((peak_rho_t, peak_theta_t))\n      
self.assertEqual(peak_rho.shape, (0,))\n      self.assertEqual(peak_theta.shape, (0,))\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/vision/images.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Image utilities.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow.contrib import image as contrib_image\n\n\n# TODO(ringw): Replace once github.com/tensorflow/tensorflow/pull/10748\n# is in.\ndef translate(image, x, y):\n  \"\"\"Translates the image.\n\n  Args:\n    image: A 2D float32 tensor.\n    x: The x shift of the output, in pixels.\n    y: The y shift of the output, in pixels.\n\n  Returns:\n    The translated image tensor.\n  \"\"\"\n  # TODO(ringw): Fix mixing scalar constants and scalar tensors here.\n  one = tf.constant(1, tf.float32)\n  zero = tf.constant(0, tf.float32)\n  # The inverted transformation matrix expected by tf.contrib.image.transform.\n  # The last entry of the 3x3 matrix is left out and is always 1.\n  translation_matrix = tf.convert_to_tensor(\n      [one, zero, tf.to_float(-x),\n       zero, one, tf.to_float(-y),\n       zero, zero], tf.float32)  # pyformat: disable\n  return contrib_image.transform(image, translation_matrix)\n"
  },
  {
    "path": "moonlight/vision/images_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for image utilities.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.vision import images\n\n\nclass ImagesTest(tf.test.TestCase):\n\n  def testTranslate(self):\n    with self.test_session():\n      arr = tf.reshape(tf.range(9), (3, 3))\n      self.assertAllEqual(\n          images.translate(arr, 0, -1).eval(),\n          [[3, 4, 5], [6, 7, 8], [0, 0, 0]])\n      self.assertAllEqual(\n          images.translate(arr, 0, 1).eval(), [[0, 0, 0], [0, 1, 2], [3, 4, 5]])\n      self.assertAllEqual(\n          images.translate(arr, -1, 0).eval(),\n          [[1, 2, 0], [4, 5, 0], [7, 8, 0]])\n      self.assertAllEqual(\n          images.translate(arr, 1, 0).eval(), [[0, 0, 1], [0, 3, 4], [0, 6, 7]])\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "moonlight/vision/morphology.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Binary morphology ops.\n\nSee: https://en.wikipedia.org/wiki/Mathematical_morphology#Binary_morphology\n\nFrom the link above, these functions use a cross-shaped structuring element in\n`Z^2`: the neighbors of a pixel are the pixels above, below, left, and right,\nwhich are assumed to be False if they lie outside the image.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\nfrom moonlight.vision import images\n\n\ndef binary_erosion(image, n):\n  \"\"\"The binary erosion of a boolean image.\n\n  True pixels that border a False pixel will be set to False.\n\n  Args:\n    image: 2D boolean tensor.\n    n: Integer scalar tensor. Repeat the erosion `n` times.\n\n  Returns:\n    The eroded image.\n  \"\"\"\n  with tf.name_scope(\"binary_erosion\"):\n    image = tf.convert_to_tensor(image, tf.bool, \"image\")\n    result = _repeated_morphological_op(tf.to_float(image), tf.logical_and, n)\n    return tf.cast(result, tf.bool)\n\n\ndef binary_dilation(image, n):\n  \"\"\"The binary dilation of a boolean image.\n\n  False pixels that border a True pixel will be set to True.\n\n  Args:\n    image: 2D boolean tensor.\n    n: Integer scalar tensor. 
Repeat the dilation `n` times.\n\n  Returns:\n    The dilated image.\n  \"\"\"\n  with tf.name_scope(\"binary_dilation\"):\n    image = tf.convert_to_tensor(image, tf.bool, \"image\")\n    result = _repeated_morphological_op(tf.to_float(image), tf.logical_or, n)\n    return tf.cast(result, tf.bool)\n\n\ndef _repeated_morphological_op(float_image, binary_op, n):\n\n  def body(i, image):\n    return i + 1, _single_morphological_op(image, binary_op)\n\n  return tf.while_loop(lambda i, _: tf.less(i, n), body,\n                       [tf.constant(0), float_image])[1]\n\n\ndef _single_morphological_op(float_image, binary_op):\n  with tf.name_scope(\"_single_morphological_op\"):\n    input_image = float_image\n    for x, y in [(-1, 0), (0, -1), (1, 0), (0, 1)]:\n      float_image = tf.to_float(\n          binary_op(\n              tf.cast(float_image, tf.bool),\n              tf.cast(images.translate(input_image, x, y), tf.bool)))\n    return float_image\n"
  },
  {
    "path": "moonlight/vision/morphology_test.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Tests for binary morphology.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom moonlight.vision import morphology\n\n\nclass MorphologyTest(tf.test.TestCase):\n\n  def testMorphology_false(self):\n    for op in [morphology.binary_erosion, morphology.binary_dilation]:\n      with self.test_session():\n        self.assertAllEqual(\n            op(tf.zeros((5, 3), tf.bool), n=1).eval(), np.zeros((5, 3),\n                                                                np.bool))\n\n  def testErosion_small(self):\n    with self.test_session():\n      self.assertAllEqual(\n          morphology.binary_erosion(\n              tf.cast([[0, 1, 0], [1, 1, 1], [0, 1, 0]], tf.bool), n=1).eval(),\n          [[0, 0, 0], [0, 1, 0], [0, 0, 0]])\n\n  def testErosion(self):\n    with self.test_session():\n      self.assertAllEqual(\n          morphology.binary_erosion(\n              tf.cast(\n                  [[1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0],\n                   [0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1],\n                   [1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],\n                   [0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0]],\n                  tf.bool),\n              n=1).eval(),\n          np.asarray(\n              [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n               [0, 0, 
0, 0, 0, 0, 0, 0, 0, 1, 0],\n               [0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0],\n               [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],\n              np.bool))  # pyformat: disable\n\n  def testDilation(self):\n    with self.test_session():\n      self.assertAllEqual(\n          morphology.binary_dilation(\n              tf.cast(\n                  [[1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0],\n                   [0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1],\n                   [1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],\n                   [0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0]],\n                  tf.bool),\n              n=1).eval(),\n          np.asarray(\n              [[1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1],\n               [1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n               [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n               [1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1]],\n              np.bool))  # pyformat: disable\n\n\nif __name__ == '__main__':\n  tf.test.main()\n"
  },
  {
    "path": "requirements.txt",
    "content": "# List of all packages used by Moonlight.\nabsl-py\n# TODO(ringw): Get the latest apache beam version working in py2 tests.\napache_beam==2.5.0; python_version < '3.0'\napache_beam==2.11.0; python_version >= '3.0'\nenum34\nlibrosa==0.4.0\nlxml\njoblib==0.11.0\nMako\nnumpy>=1.14.2\npandas\nPillow\nprotobuf==3.6.1\nscipy\ntensorflow==1.15.4\ntensorflow-estimator==1.15.1\n"
  },
  {
    "path": "sandbox/README.md",
    "content": "# Moonlight Sandbox\n\nThis directory has the symlinks necessary to import Moonlight after building.\nYou can either add the directory to your PYTHONPATH, or run Python from this\ndirectory.\n\n    git clone https://github.com/tensorflow/moonlight\n    cd moonlight\n    # You may want to run this inside a virtualenv.\n    pip install -r requirements.txt\n    # Builds dependencies and sets up the symlinks that we point to.\n    bazel build moonlight:omr\n\n    cd sandbox\n    python\n    >>> from moonlight import engine\n"
  },
  {
    "path": "six.BUILD",
    "content": "py_library(\n    name = \"six\",\n    srcs = [\"six.py\"],\n    visibility = [\"//visibility:public\"],\n    srcs_version = \"PY2AND3\",\n)\n"
  },
  {
    "path": "tools/bazel_0.20.0-linux-x86_64.deb.sha256",
    "content": "ea47050fe839a7f5fb6c3ac1cc876a70993e614bab091aefc387715c3cb48a86  bazel_0.20.0-linux-x86_64.deb\n"
  },
  {
    "path": "tools/travis_tests.sh",
    "content": "#!/bin/bash\n# Print commands before running them.\nset -x\n\n# Apache Beam only supports Python 2 :(\n# Filter tests with tags = [\"py2only\"] for Python 3 (the TRAVIS_PYTHON_VERSION\n# environment variable starts with a \"3\").\nif [ \"${TRAVIS_PYTHON_VERSION:0:1}\" = 3 ]; then\n  PYTHON_VERSION_FILTERS=--test_tag_filters=-py2only\nfi\n\n# Test that we can build and import the \"engine\" module in the sandbox.\nbazel build --incompatible_remove_native_http_archive=false //moonlight:omr\nPYTHONPATH=sandbox python -m moonlight.engine\n\nbazel test --incompatible_remove_native_http_archive=false \\\n    --test_output=errors --local_test_jobs=1 $PYTHON_VERSION_FILTERS \\\n    //moonlight/...\n"
  }
]