[
  {
    "path": ".github/build.sh",
    "content": "#!/bin/sh\ncurl -fsLO https://raw.githubusercontent.com/scijava/scijava-scripts/main/ci-build.sh\nsh ci-build.sh\n"
  },
  {
    "path": ".github/setup.sh",
    "content": "#!/bin/sh\ncurl -fsLO https://raw.githubusercontent.com/scijava/scijava-scripts/main/ci-setup-github-actions.sh\nsh ci-setup-github-actions.sh\n\n# Let the Linux build handle artifact deployment.\nif [ \"$(uname)\" != Linux ]\nthen\n  echo \"No deploy -- non-Linux build\"\n  echo \"NO_DEPLOY=1\" >> $GITHUB_ENV\nfi\n"
  },
  {
    "path": ".github/workflows/build.yml",
    "content": "name: build\n\non:\n  push:\n    branches:\n      - master\n      - development\n    tags:\n      - \"*-[0-9]+.*\"\n  pull_request:\n    branches:\n      - master\n      - development\n\njobs:\n  build:\n    strategy:\n      matrix:\n        os: [ubuntu-latest, windows-latest]\n    runs-on: ${{ matrix.os }}\n\n    steps:\n      - uses: actions/checkout@v4\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: '3.10'\n      - name: Set up Java\n        uses: actions/setup-java@v4\n        with:\n          java-version: '8'\n          distribution: 'zulu'\n          cache: 'maven'\n      - name: Set up CI environment\n        run: .github/setup.sh\n        shell: bash\n      - name: Execute the build\n        run: .github/build.sh\n        shell: bash\n        env:\n          GPG_KEY_NAME: ${{ secrets.GPG_KEY_NAME }}\n          GPG_PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}\n          MAVEN_USER: ${{ secrets.MAVEN_USER }}\n          MAVEN_PASS: ${{ secrets.MAVEN_PASS }}\n          OSSRH_PASS: ${{ secrets.OSSRH_PASS }}\n          SIGNING_ASC: ${{ secrets.SIGNING_ASC }}\n"
  },
  {
    "path": ".gitignore",
    "content": "*.class\n\n# Mobile Tools for Java (J2ME)\n.mtj.tmp/\n\n# Package Files #\n*.jar\n*.war\n*.ear\n\n# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml\nhs_err_pid*\n\n/target/\n\n.classpath\n.project\n.settings\n\n.idea\n*.iml\n"
  },
  {
    "path": "LICENSE.md",
    "content": "## BSD 2-Clause License\n\nCopyright (c) 2017-2026, Stephan Saalfeld\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n  this list of conditions and the following disclaimer in the documentation\n  and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "README.md",
    "content": "# N5 [![Build Status](https://github.com/saalfeldlab/n5/actions/workflows/build.yml/badge.svg)](https://github.com/saalfeldlab/n5/actions/workflows/build.yml)\n\n\nThe N5 API specifies the primitive operations needed to store large chunked n-dimensional tensors, and arbitrary meta-data in a hierarchy of groups similar to HDF5.\n\nOther than HDF5, N5 is not bound to a specific backend.  This repository includes a simple [file-system backend](#file-system-specification).  There are also an [HDF5 backend](https://github.com/saalfeldlab/n5-hdf5), a [Zarr backend](https://github.com/saalfeldlab/n5-zarr), a [Google Cloud backend](https://github.com/saalfeldlab/n5-google-cloud), and an [AWS-S3 backend](https://github.com/saalfeldlab/n5-aws-s3).\n\nAt this time, N5 supports:\n\n* arbitrary group hierarchies\n* arbitrary meta-data (stored as JSON or HDF5 attributes)\n* chunked n-dimensional tensor datasets\n* value-datatypes: [u]int8, [u]int16, [u]int32, [u]int64, float32, float64\n* compression: raw, gzip, zlib, bzip2, xz, and lz4 are included in this repository, custom compression schemes can be added\n\nChunked datasets can be sparse, i.e. empty chunks do not need to be stored.\n\n## File-system specification\n\n*version 4.0.0*\n\nN5 group is not a single file but simply a directory on the file system.  Meta-data is stored as a JSON file per each group/ directory.  Tensor datasets can be chunked and chunks are stored as individual files.  This enables parallel reading and writing on a cluster.\n\n1. All directories of the file system are N5 groups.\n2. A JSON file `attributes.json` in a directory contains arbitrary attributes.  A group without attributes may not have an `attributes.json` file.\n3. The version of this specification is 4.0.0 and is stored in the \"n5\" attribute of the root group \"/\".\n4. A dataset is a group with the mandatory attributes:\n   * dimensions (e.g. [100, 200, 300]),\n   * blockSize (e.g. 
[64, 64, 64]),\n   * dataType (one of {uint8, uint16, uint32, uint64, int8, int16, int32, int64, float32, float64, object})\n   * compression as a struct with the mandatory attribute `type` that specifies the compression scheme; currently available are:\n     * raw (no parameters),\n     * bzip2 with parameters\n       * blockSize ([1-9], default 9)\n     * gzip with parameters\n       * level (integer, default -1)\n     * lz4 with parameters\n       * blockSize (integer, default 65536)\n     * xz with parameters\n       * preset (integer, default 6).\n\n   Custom compression schemes with arbitrary parameters can be added using [compression annotations](#extensible-compression-schemes), e.g. [N5 Blosc](https://github.com/saalfeldlab/n5-blosc) and [N5 ZStandard](https://github.com/JaneliaSciComp/n5-zstandard/).\n5. Chunks are stored in a directory hierarchy that enumerates their non-negative integer position in the chunk grid (e.g. `0/4/1/7` for chunk grid position p=(0, 4, 1, 7)).\n6. Datasets are sparse, i.e. there is no guarantee that all chunks of a dataset exist.\n7. Chunks cannot be larger than 2 GB (2<sup>31</sup> bytes).\n8. All chunks of a chunked dataset have the same size except for end-chunks, which may be smaller; each chunk therefore stores its own dimensions in its header.\n9. Chunks are stored in the following binary format:\n    * mode (uint16 big endian, default = 0x0000, varlength = 0x0001, object = 0x0002)\n    * number of dimensions (uint16 big endian)\n    * dimension 1[,...,n] (uint32 big endian)\n    * [ mode == varlength ? number of elements (uint32 big endian) ]\n    * compressed data (big endian)\n\n    Example:\n\n    A 3-dimensional `uint16` datablock of 1&times;2&times;3 pixels with raw compression storing the values (1,2,3,4,5,6) starts with:\n\n    ```hexdump\n    00000000: 00 00        ..      # 0 (default mode)\n    00000002: 00 03        ..      # 3 (number of dimensions)\n    00000004: 00 00 00 01  ....    # 1 (dimensions)\n    00000008: 00 00 00 02  ....    
# 2\n    0000000c: 00 00 00 03  ....    # 3\n    ```\n    \n    followed by data stored as raw or compressed big endian values.  For raw:\n    \n    ```hexdump\n    00000010: 00 01        ..      # 1\n    00000012: 00 02        ..      # 2\n    00000014: 00 03        ..      # 3\n    00000016: 00 04        ..      # 4\n    00000018: 00 05        ..      # 5\n    0000001a: 00 06        ..      # 6\n    ```\n    \n    for bzip2 compression:\n    \n    ```hexdump\n    00000010: 42 5a 68 39  BZh9\n    00000014: 31 41 59 26  1AY&\n    00000018: 53 59 02 3e  SY.>\n    0000001c: 0d d2 00 00  ....\n    00000020: 00 40 00 7f  .@..\n    00000024: 00 20 00 31  . .1\n    00000028: 0c 01 0d 31  ...1\n    0000002c: a8 73 94 33  .s.3\n    00000030: 7c 5d c9 14  |]..\n    00000034: e1 42 40 08  .B@.\n    00000038: f8 37 48     .7H\n\n    ```\n    \n    for gzip compression:\n    \n    ```hexdump\n    00000010: 1f 8b 08 00  ....\n    00000014: 00 00 00 00  ....\n    00000018: 00 00 63 60  ..c`\n    0000001c: 64 60 62 60  d`b`\n    00000020: 66 60 61 60  f`a`\n    00000024: 65 60 03 00  e`..\n    00000028: aa ea 6d bf  ..m.\n    0000002c: 0c 00 00 00  ....\n    ```\n    \n    for xz compression:\n    \n    ```hexdump\n    00000010: fd 37 7a 58  .7zX\n    00000014: 5a 00 00 04  Z...\n    00000018: e6 d6 b4 46  ...F\n    0000001c: 02 00 21 01  ..!.\n    00000020: 16 00 00 00  ....\n    00000024: 74 2f e5 a3  t/..\n    00000028: 01 00 0b 00  ....\n    0000002c: 01 00 02 00  ....\n    00000030: 03 00 04 00  ....\n    00000034: 05 00 06 00  ....\n    00000038: 0d 03 09 ca  ....\n    0000003c: 34 ec 15 a7  4...\n    00000040: 00 01 24 0c  ..$.\n    00000044: a6 18 d8 d8  ....\n    00000048: 1f b6 f3 7d  ...}\n    0000004c: 01 00 00 00  ....\n    00000050: 00 04 59 5a  ..YZ\n    ```\n    \n## Extensible compression schemes\n\nCustom compression schemes can be implemented using the annotation discovery mechanism of SciJava.  
Implement the [`BlockReader`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/BlockReader.java) and [`BlockWriter`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/BlockWriter.java) interfaces for the compression scheme, and create a parameter class implementing the [`Compression`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/Compression.java) interface, annotated with the [`CompressionType`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/Compression.java#L51) and [`CompressionParameter`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/Compression.java#L63) annotations.  Typically, all of this can happen in a single class, such as [`GzipCompression`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/GzipCompression.java).\n\n## Disclaimer\n\nHDF5 is a great format that provides a wealth of conveniences that I do not want to miss.  Its inefficiency for parallel writing, however, limits its applicability for handling very large n-dimensional data.\n\nN5 uses the native file system of the target platform and JSON files to specify basic and custom meta-data as attributes.  It aims at preserving the convenience of HDF5 where possible but doesn't try too hard to be a full replacement.\n"
  },
  {
    "path": "doc/LICENSE.md",
    "content": "## BSD 2-Clause License\n\nCopyright (c) @project.inceptionYear@-@current.year@, Stephan Saalfeld\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n  this list of conditions and the following disclaimer in the documentation\n  and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "doc/README.md",
    "content": "# N5 [![Build Status](https://github.com/saalfeldlab/n5/actions/workflows/build.yml/badge.svg)](https://github.com/saalfeldlab/n5/actions/workflows/build.yml)\n\n\nThe N5 API specifies the primitive operations needed to store large chunked n-dimensional tensors, and arbitrary meta-data in a hierarchy of groups similar to HDF5.\n\nOther than HDF5, N5 is not bound to a specific backend.  This repository includes a simple [file-system backend](#file-system-specification).  There are also an [HDF5 backend](https://github.com/saalfeldlab/n5-hdf5), a [Zarr backend](https://github.com/saalfeldlab/n5-zarr), a [Google Cloud backend](https://github.com/saalfeldlab/n5-google-cloud), and an [AWS-S3 backend](https://github.com/saalfeldlab/n5-aws-s3).\n\nAt this time, N5 supports:\n\n* arbitrary group hierarchies\n* arbitrary meta-data (stored as JSON or HDF5 attributes)\n* chunked n-dimensional tensor datasets\n* value-datatypes: [u]int8, [u]int16, [u]int32, [u]int64, float32, float64\n* compression: raw, gzip, zlib, bzip2, xz, and lz4 are included in this repository, custom compression schemes can be added\n\nChunked datasets can be sparse, i.e. empty chunks do not need to be stored.\n\n## File-system specification\n\n*version @n5-spec.version@*\n\nN5 group is not a single file but simply a directory on the file system.  Meta-data is stored as a JSON file per each group/ directory.  Tensor datasets can be chunked and chunks are stored as individual files.  This enables parallel reading and writing on a cluster.\n\n1. All directories of the file system are N5 groups.\n2. A JSON file `attributes.json` in a directory contains arbitrary attributes.  A group without attributes may not have an `attributes.json` file.\n3. The version of this specification is @n5-spec.version@ and is stored in the \"n5\" attribute of the root group \"/\".\n4. A dataset is a group with the mandatory attributes:\n   * dimensions (e.g. [100, 200, 300]),\n   * blockSize (e.g. 
[64, 64, 64]),\n   * dataType (one of {uint8, uint16, uint32, uint64, int8, int16, int32, int64, float32, float64, object})\n   * compression as a struct with the mandatory attribute `type` that specifies the compression scheme; currently available are:\n     * raw (no parameters),\n     * bzip2 with parameters\n       * blockSize ([1-9], default 9)\n     * gzip with parameters\n       * level (integer, default -1)\n     * lz4 with parameters\n       * blockSize (integer, default 65536)\n     * xz with parameters\n       * preset (integer, default 6).\n\n   Custom compression schemes with arbitrary parameters can be added using [compression annotations](#extensible-compression-schemes), e.g. [N5 Blosc](https://github.com/saalfeldlab/n5-blosc) and [N5 ZStandard](https://github.com/JaneliaSciComp/n5-zstandard/).\n5. Chunks are stored in a directory hierarchy that enumerates their non-negative integer position in the chunk grid (e.g. `0/4/1/7` for chunk grid position p=(0, 4, 1, 7)).\n6. Datasets are sparse, i.e. there is no guarantee that all chunks of a dataset exist.\n7. Chunks cannot be larger than 2 GB (2<sup>31</sup> bytes).\n8. All chunks of a chunked dataset have the same size except for end-chunks, which may be smaller; each chunk therefore stores its own dimensions in its header.\n9. Chunks are stored in the following binary format:\n    * mode (uint16 big endian, default = 0x0000, varlength = 0x0001, object = 0x0002)\n    * number of dimensions (uint16 big endian)\n    * dimension 1[,...,n] (uint32 big endian)\n    * [ mode == varlength ? number of elements (uint32 big endian) ]\n    * compressed data (big endian)\n\n    Example:\n\n    A 3-dimensional `uint16` datablock of 1&times;2&times;3 pixels with raw compression storing the values (1,2,3,4,5,6) starts with:\n\n    ```hexdump\n    00000000: 00 00        ..      # 0 (default mode)\n    00000002: 00 03        ..      # 3 (number of dimensions)\n    00000004: 00 00 00 01  ....    # 1 (dimensions)\n    00000008: 00 00 00 02  ....    
# 2\n    0000000c: 00 00 00 03  ....    # 3\n    ```\n    \n    followed by data stored as raw or compressed big endian values.  For raw:\n    \n    ```hexdump\n    00000010: 00 01        ..      # 1\n    00000012: 00 02        ..      # 2\n    00000014: 00 03        ..      # 3\n    00000016: 00 04        ..      # 4\n    00000018: 00 05        ..      # 5\n    0000001a: 00 06        ..      # 6\n    ```\n    \n    for bzip2 compression:\n    \n    ```hexdump\n    00000010: 42 5a 68 39  BZh9\n    00000014: 31 41 59 26  1AY&\n    00000018: 53 59 02 3e  SY.>\n    0000001c: 0d d2 00 00  ....\n    00000020: 00 40 00 7f  .@..\n    00000024: 00 20 00 31  . .1\n    00000028: 0c 01 0d 31  ...1\n    0000002c: a8 73 94 33  .s.3\n    00000030: 7c 5d c9 14  |]..\n    00000034: e1 42 40 08  .B@.\n    00000038: f8 37 48     .7H\n\n    ```\n    \n    for gzip compression:\n    \n    ```hexdump\n    00000010: 1f 8b 08 00  ....\n    00000014: 00 00 00 00  ....\n    00000018: 00 00 63 60  ..c`\n    0000001c: 64 60 62 60  d`b`\n    00000020: 66 60 61 60  f`a`\n    00000024: 65 60 03 00  e`..\n    00000028: aa ea 6d bf  ..m.\n    0000002c: 0c 00 00 00  ....\n    ```\n    \n    for xz compression:\n    \n    ```hexdump\n    00000010: fd 37 7a 58  .7zX\n    00000014: 5a 00 00 04  Z...\n    00000018: e6 d6 b4 46  ...F\n    0000001c: 02 00 21 01  ..!.\n    00000020: 16 00 00 00  ....\n    00000024: 74 2f e5 a3  t/..\n    00000028: 01 00 0b 00  ....\n    0000002c: 01 00 02 00  ....\n    00000030: 03 00 04 00  ....\n    00000034: 05 00 06 00  ....\n    00000038: 0d 03 09 ca  ....\n    0000003c: 34 ec 15 a7  4...\n    00000040: 00 01 24 0c  ..$.\n    00000044: a6 18 d8 d8  ....\n    00000048: 1f b6 f3 7d  ...}\n    0000004c: 01 00 00 00  ....\n    00000050: 00 04 59 5a  ..YZ\n    ```\n    \n## Extensible compression schemes\n\nCustom compression schemes can be implemented using the annotation discovery mechanism of SciJava.  
Implement the [`BlockReader`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/BlockReader.java) and [`BlockWriter`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/BlockWriter.java) interfaces for the compression scheme, and create a parameter class implementing the [`Compression`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/Compression.java) interface, annotated with the [`CompressionType`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/Compression.java#L51) and [`CompressionParameter`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/Compression.java#L63) annotations.  Typically, all of this can happen in a single class, such as [`GzipCompression`](https://github.com/saalfeldlab/n5/blob/master/src/main/java/org/janelia/saalfeldlab/n5/GzipCompression.java).\n\n## Disclaimer\n\nHDF5 is a great format that provides a wealth of conveniences that I do not want to miss.  Its inefficiency for parallel writing, however, limits its applicability for handling very large n-dimensional data.\n\nN5 uses the native file system of the target platform and JSON files to specify basic and custom meta-data as attributes.  It aims at preserving the convenience of HDF5 where possible but doesn't try too hard to be a full replacement.\n"
  },
  {
    "path": "doc/n5-eclipse-style.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n<profiles version=\"13\">\n<profile kind=\"CodeFormatterProfile\" name=\"n5-eclipse-style\" version=\"13\">\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_ellipsis\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_declarations\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_annotation_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_allocation_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_at_in_annotation_type_declaration\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_for_statment\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.new_lines_at_block_boundaries\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_parameters\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.insert_new_line_for_parameter\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_package\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_method_invocation\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_enum_constant\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_after_imports\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_while\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.insert_new_line_before_root_tags\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_annotation_type_member_declaration\" 
value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_throws\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_switch_statement\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.format_javadoc_comments\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indentation.size\" value=\"2\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_postfix_operator\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_enum_constant_declaration\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_increments\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_arguments\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_inits\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_anonymous_type_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_for\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.disabling_tag\" value=\"@formatter:off\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.continuation_indentation\" value=\"2\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_enum_constants\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_before_imports\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_after_package\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_binary_operator\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_local_declarations\" value=\"insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_if_while_statement\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_arguments_in_enum_constant\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_parameterized_type_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.indent_root_tags\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.wrap_before_or_operator_multicatch\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.enabling_tag\" value=\"@formatter:on\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_closing_brace_in_block\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.count_line_length_from_starting_position\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_return\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_method_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_parameter\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.keep_then_statement_on_same_line\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_field\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_explicitconstructorcall_arguments\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_block\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_prefix_operator\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_between_type_declarations\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_brace_in_array_initializer\" value=\"do not 
insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_for\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_catch\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_method\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_switch\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_parameterized_type_references\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_anonymous_type_declaration\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_parenthesized_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_enum_constant\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.never_indent_line_comments_on_first_column\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.compiler.problem.enumIdentifier\" value=\"error\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_and_in_type_parameter\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_inits\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_statements_compare_to_block\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_anonymous_type_declaration\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_question_in_wildcard\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation\" value=\"do not insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_invocation_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_switch\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.line_length\" value=\"80\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.use_on_off_tags\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_brackets_in_array_allocation_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_constant\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_invocation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_assignment_operator\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_type_declaration\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_for\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.preserve_white_space_between_code_and_line_comments\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_local_variable\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_method_declaration\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_invocation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_union_type_in_multicatch\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_colon_in_for\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.number_of_blank_lines_at_beginning_of_method_body\" value=\"1\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_arguments\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.keep_else_statement_on_same_line\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_binary_expression\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_catch_clause\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_parameterized_type_reference\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_array_initializer\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_field_declarations\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_annotation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_arguments_in_explicit_constructor_call\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_annotation_declaration_header\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_superinterfaces\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_colon_in_default\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_question_in_conditional\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_block\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_constructor_declaration\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_lambda_body\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.compact_else_if\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_parameters\" value=\"do not 
insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_catch\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_invocation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.put_empty_statement_on_new_line\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_parameters_in_constructor_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_type_parameters\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_invocation_arguments\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_arguments_in_method_invocation\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_constructor_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.compiler.problem.assertIdentifier\" value=\"error\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_block_comment\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_before_catch_in_try_statement\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_try\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_at_end_of_file_if_missing\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_javadoc_comment\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_array_initializer\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_binary_operator\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_unary_operator\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_expressions_in_array_initializer\" 
value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.format_line_comment_starting_on_first_column\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.number_of_empty_lines_to_preserve\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_annotation\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_colon_in_case\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_ellipsis\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_try_resources\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_colon_in_assert\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_if\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_and_in_type_parameter\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_type_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_parenthesized_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.format_line_comments\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_colon_in_labeled_statement\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.align_type_members_on_columns\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_assignment\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_module_statements\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_method_body\" value=\"do not insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_type_header\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_enum_constant\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_type_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_before_first_class_body_declaration\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_conditional_expression\" value=\"80\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_before_closing_brace_in_array_initializer\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_parameters\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.format_guardian_clause_on_one_line\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_if\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_type\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_block\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_enum_declaration\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_block_in_case\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_constructor_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.format_header\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_arguments_in_allocation_expression\" value=\"16\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_invocation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_while\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode\" value=\"enabled\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_switch\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_method_declaration\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.join_wrapped_lines\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_constructor_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.wrap_before_conditional_operator\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_cases\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_allocation_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_synchronized\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.align_fields_grouping_blank_lines\" value=\"2147483647\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.new_lines_at_javadoc_boundaries\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_annotation_type_declaration\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_colon_in_for\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_resources_in_try\" value=\"80\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.use_tabs_only_for_leading_indentations\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_try_clause\" value=\"common_lines\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.alignment_for_selector_in_method_invocation\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.never_indent_block_comments_on_first_column\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.compiler.source\" value=\"9\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_synchronized\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_throws\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.tabulation.size\" value=\"4\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_constant\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_allocation_expression\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_colon_in_conditional\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.format_source_code\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_array_initializer\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_try\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_try_resources\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_before_field\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.continuation_indentation_for_array_initializer\" value=\"2\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_question_in_wildcard\" value=\"do not insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.blank_lines_before_method\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_superclass_in_type_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_enum_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_throw\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.wrap_before_assignment_operator\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_colon_in_labeled_statement\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.compiler.codegen.targetPlatform\" value=\"9\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_switch\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_superinterfaces\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_parameters\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_type_annotation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_brace_in_array_initializer\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_parenthesized_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.format_html\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation_type_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_parameters\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_method_delcaration\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_compact_if\" value=\"16\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.indent_empty_lines\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_type_arguments\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_parameterized_type_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_unary_operator\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_enum_constant\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_arguments_in_annotation\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_declarations\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.keep_empty_array_initializer_on_one_line\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_switch\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_before_else_in_if_statement\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_assignment_operator\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_constructor_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_before_new_chunk\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_label\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_declaration_header\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_allocation_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_constructor_declaration\" value=\"do not insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.insert_space_before_colon_in_conditional\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_parameterized_type_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_parameters\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_cast\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_colon_in_assert\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_before_member_type\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_before_while_in_do_statement\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_type_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_parameterized_type_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_arguments_in_qualified_allocation_expression\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_after_opening_brace_in_array_initializer\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_breaks_compare_to_cases\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_if\" value=\"do not insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.insert_space_before_semicolon\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_postfix_operator\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_try\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_cast\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.format_block_comments\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_lambda_arrow\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.keep_imple_if_on_one_line\" value=\"false\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_declaration\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_parameters_in_method_declaration\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_brackets_in_array_type_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_parameters\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_for\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_throws\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_allocation_expression\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_statements_compare_to_body\" value=\"true\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.alignment_for_multiple_fields\" value=\"16\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_constant_arguments\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_prefix_operator\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_array_initializer\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.wrap_before_binary_operator\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_method_declaration\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_parameters\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_catch\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.compiler.compliance\" value=\"9\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_comma_in_annotation\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_constant_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.parentheses_positions_in_lambda_declaration\" value=\"common_lines\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_between_empty_braces_in_array_initializer\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_colon_in_case\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_local_declarations\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_annotation_type_declaration\" value=\"insert\"/>\n<setting 
id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_reference\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.wrap_outer_expressions_when_nested\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_closing_paren_in_cast\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_enum_constant\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.brace_position_for_type_declaration\" value=\"end_of_line\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_before_package\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_for\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_synchronized\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_increments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation_type_member_declaration\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.alignment_for_expressions_in_for_loop_header\" value=\"0\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_while\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_enum_constant\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_explicitconstructorcall_arguments\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_annotation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_parameters\" value=\"do not 
insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_constant_header\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_lambda_arrow\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_constructor_declaration\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_throws\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.join_lines_in_comments\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_parameters\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_question_in_conditional\" value=\"insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.comment.indent_parameter_description\" value=\"true\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_new_line_before_finally_in_try_statement\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.tabulation.char\" value=\"tab\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_field_declarations\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.blank_lines_between_import_groups\" value=\"1\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.lineSplit\" value=\"160\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_annotation\" value=\"do not insert\"/>\n<setting id=\"org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_switch\" value=\"insert\"/>\n</profile>\n</profiles>\n"
  },
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n\t<modelVersion>4.0.0</modelVersion>\n\n\t<parent>\n\t\t<groupId>org.scijava</groupId>\n\t\t<artifactId>pom-scijava</artifactId>\n\t\t<version>43.0.0</version>\n\t\t<relativePath />\n\t</parent>\n\n\t<groupId>org.janelia.saalfeldlab</groupId>\n\t<artifactId>n5</artifactId>\n\t<version>4.0.1-SNAPSHOT</version>\n\n\t<name>N5</name>\n\t<description>Not HDF5</description>\n\t<url>https://github.com/saalfeldlab/n5</url>\n\t<inceptionYear>2017</inceptionYear>\n\t<organization>\n\t\t<name>Saalfeld Lab</name>\n\t\t<url>http://saalfeldlab.janelia.org/</url>\n\t</organization>\n\t<licenses>\n\t\t<license>\n\t\t\t<name>Simplified BSD License</name>\n\t\t\t<distribution>repo</distribution>\n\t\t</license>\n\t</licenses>\n\n\t<developers>\n\t\t<developer>\n\t\t\t<id>axtimwalde</id>\n\t\t\t<name>Stephan Saalfeld</name>\n\t\t\t<url>http://imagej.net/User:Saalfeld</url>\n\t\t\t<roles>\n\t\t\t\t<role>founder</role>\n\t\t\t\t<role>lead</role>\n\t\t\t\t<role>developer</role>\n\t\t\t\t<role>debugger</role>\n\t\t\t\t<role>reviewer</role>\n\t\t\t\t<role>support</role>\n\t\t\t\t<role>maintainer</role>\n\t\t\t</roles>\n\t\t</developer>\n\t\t<developer>\n\t\t\t<id>bogovicj</id>\n\t\t\t<name>John Bogovic</name>\n\t\t\t<url>http://imagej.net/User:Bogovic</url>\n\t\t\t<roles>\n\t\t\t\t<role>lead</role>\n\t\t\t\t<role>developer</role>\n\t\t\t\t<role>debugger</role>\n\t\t\t\t<role>reviewer</role>\n\t\t\t\t<role>support</role>\n\t\t\t\t<role>maintainer</role>\n\t\t\t</roles>\n\t\t</developer>\n\t\t<developer>\n\t\t\t<name>Caleb 
Hulbert</name>\n\t\t\t<roles>\n\t\t\t\t<role>lead</role>\n\t\t\t\t<role>developer</role>\n\t\t\t\t<role>debugger</role>\n\t\t\t\t<role>reviewer</role>\n\t\t\t\t<role>support</role>\n\t\t\t\t<role>maintainer</role>\n\t\t\t</roles>\n\t\t</developer>\n\t</developers>\n\t<contributors>\n\t\t<contributor>\n\t\t\t<name>Stephan Saalfeld</name>\n\t\t\t<properties>\n\t\t\t\t<id>axtimwalde</id>\n\t\t\t</properties>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>John Bogovic</name>\n\t\t\t<properties>\n\t\t\t\t<id>bogovicj</id>\n\t\t\t</properties>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Igor Pisarev</name>\n\t\t\t<properties>\n\t\t\t\t<id>igorpisarev</id>\n\t\t\t</properties>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Neil Thistlethwaite</name>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Andrew Champion</name>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Philipp Hanslovsky</name>\n\t\t\t<properties>\n\t\t\t\t<id>hanslovsky</id>\n\t\t\t</properties>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Caleb Hulbert</name>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Tobias Pietzsch</name>\n\t\t\t<properties>\n\t\t\t\t<id>tpietzsch</id>\n\t\t\t</properties>\n\t\t</contributor>\n\t\t<contributor>\n\t\t\t<name>Mark Kittisopikul</name>\n\t\t</contributor>\n\t</contributors>\n\n\t<mailingLists>\n\t\t<mailingList>\n\t\t\t<name>Image.sc Forum</name>\n\t\t\t<archive>https://forum.image.sc/tag/n5</archive>\n\t\t</mailingList>\n\t</mailingLists>\n\n\t<scm>\n\t\t<connection>scm:git:git://github.com/saalfeldlab/n5</connection>\n\t\t<developerConnection>scm:git:git@github.com:saalfeldlab/n5</developerConnection>\n\t\t<tag>HEAD</tag>\n\t\t<url>https://github.com/saalfeldlab/n5</url>\n\t</scm>\n\t<issueManagement>\n\t\t<system>GitHub</system>\n\t\t<url>https://github.com/saalfeldlab/n5/issues</url>\n\t</issueManagement>\n\t<ciManagement>\n\t\t<system>GitHub 
Actions</system>\n\t\t<url>https://github.com/saalfeldlab/n5/actions</url>\n\t</ciManagement>\n\n\t<properties>\n\t\t<package-name>org.janelia.saalfeldlab.n5</package-name>\n\t\t\n\t\t<license.licenseName>bsd_2</license.licenseName>\n\t\t<license.projectName>Not HDF5</license.projectName>\n\t\t<license.organizationName>Saalfeld Lab</license.organizationName>\n\t\t<license.copyrightOwners>Stephan Saalfeld</license.copyrightOwners>\n\t\t<license.skipUpdateLicense>true</license.skipUpdateLicense>\n\t\t<license.skipUpdateProjectLicense>true</license.skipUpdateProjectLicense>\n\n\t\t<!-- NB: Deploy releases to the SciJava Maven repository. -->\n\t\t<releaseProfiles>sign,deploy-to-scijava</releaseProfiles>\n\n\t\t<n5-spec.version>4.0.0</n5-spec.version>\n\n\t\t<!-- Only used for test scope -->\n\t\t<n5-universe.version>2.4.0-alpha-9</n5-universe.version>\n\t\t<n5-aws-s3.version>4.4.0-alpha-9</n5-aws-s3.version>\n\t\t<n5-blosc.version>2.0.0-alpha-4</n5-blosc.version>\n\t\t<n5-google-cloud.version>5.2.0-alpha-7</n5-google-cloud.version>\n\t\t<n5-hdf5.version>2.3.0-alpha-7</n5-hdf5.version>\n\t\t<n5-imglib2.version>7.1.0-alpha-8</n5-imglib2.version>\n\t\t<n5-zarr.version>2.0.0-alpha-8</n5-zarr.version>\n\t\t<n5-zstandard.version>2.0.0-alpha-4</n5-zstandard.version>\n\t</properties>\n\n\t<dependencies>\n\t\t<dependency>\n\t\t\t<groupId>org.tukaani</groupId>\n\t\t\t<artifactId>xz</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.lz4</groupId>\n\t\t\t<artifactId>lz4-java</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>com.google.code.gson</groupId>\n\t\t\t<artifactId>gson</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.scijava</groupId>\n\t\t\t<artifactId>scijava-common</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.apache.commons</groupId>\n\t\t\t<artifactId>commons-compress</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>commons-io</groupId>\n\t\t\t<artifactId>commons-io</arti
factId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>commons-codec</groupId>\n\t\t\t<artifactId>commons-codec</artifactId>\n\t\t</dependency>\n\n\t\t<!-- Test dependencies -->\n\t\t<dependency>\n\t\t\t<groupId>junit</groupId>\n\t\t\t<artifactId>junit</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.janelia.saalfeldlab</groupId>\n\t\t\t<artifactId>n5-universe</artifactId>\n\t\t\t<exclusions>\n\t\t\t\t<exclusion>\n\t\t\t\t\t<groupId>org.janelia.saalfeldlab</groupId>\n\t\t\t\t\t<artifactId>n5</artifactId>\n\t\t\t\t</exclusion>\n\t\t\t</exclusions>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>net.imagej</groupId>\n\t\t\t<artifactId>ij</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>net.imglib2</groupId>\n\t\t\t<artifactId>imglib2</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>net.imglib2</groupId>\n\t\t\t<artifactId>imglib2-ij</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>cisd</groupId>\n\t\t\t<artifactId>jhdf5</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.apache.commons</groupId>\n\t\t\t<artifactId>commons-collections4</artifactId>\n\t\t\t<version>${commons-collections4.version}</version>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\n\t\t<dependency>\n\t\t\t<groupId>info.picocli</groupId>\n\t\t\t<artifactId>picocli</artifactId>\n\t\t\t<version>4.3.2</version>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\n\t\t<!-- JMH 
-->\n\t\t<dependency>\n\t\t\t<groupId>org.openjdk.jmh</groupId>\n\t\t\t<artifactId>jmh-core</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.openjdk.jmh</groupId>\n\t\t\t<artifactId>jmh-generator-annprocess</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\t</dependencies>\n\n\t<repositories>\n\t\t<repository>\n\t\t\t<id>scijava.public</id>\n\t\t\t<url>https://maven.scijava.org/content/groups/public</url>\n\t\t</repository>\n\t</repositories>\n\n\t<build>\n\t\t<pluginManagement>\n\t\t\t<plugins>\n\t\t\t\t<plugin>\n\t\t\t\t\t<groupId>org.codehaus.mojo</groupId>\n\t\t\t\t\t<artifactId>build-helper-maven-plugin</artifactId>\n\t\t\t\t\t<executions>\n\t\t\t\t\t\t<execution>\n\t\t\t\t\t\t\t<id>timestamp-property</id>\n\t\t\t\t\t\t\t<goals>\n\t\t\t\t\t\t\t\t<goal>timestamp-property</goal>\n\t\t\t\t\t\t\t</goals>\n\t\t\t\t\t\t\t<phase>validate</phase>\n\t\t\t\t\t\t\t<configuration>\n\t\t\t\t\t\t\t\t<name>current.year</name>\n\t\t\t\t\t\t\t\t<pattern>yyyy</pattern>\n\t\t\t\t\t\t\t</configuration>\n\t\t\t\t\t\t</execution>\n\t\t\t\t\t</executions>\n\t\t\t\t</plugin>\n\t\t\t\t<plugin>\n\t\t\t\t\t<artifactId>maven-resources-plugin</artifactId>\n\t\t\t\t\t<executions>\n\t\t\t\t\t\t<execution>\n\t\t\t\t\t\t\t<id>copy</id>\n\t\t\t\t\t\t\t<phase>compile</phase>\n\t\t\t\t\t\t\t<goals>\n\t\t\t\t\t\t\t\t<goal>copy-resources</goal>\n\t\t\t\t\t\t\t</goals>\n\t\t\t\t\t\t\t<configuration>\n\t\t\t\t\t\t\t\t<outputDirectory>${basedir}</outputDirectory>\n\t\t\t\t\t\t\t\t<resources>\n\t\t\t\t\t\t\t\t\t<resource>\n\t\t\t\t\t\t\t\t\t\t<directory>doc</directory>\n\t\t\t\t\t\t\t\t\t\t<includes>\n\t\t\t\t\t\t\t\t\t\t\t<include>*.md</include>\n\t\t\t\t\t\t\t\t\t\t</includes>\n\t\t\t\t\t\t\t\t\t\t<filtering>true</filtering>\n\t\t\t\t\t\t\t\t\t</resource>\n\t\t\t\t\t\t\t\t</resources>\n\t\t\t\t\t\t\t</configuration>\n\t\t\t\t\t\t</execution>\n\t\t\t\t\t</executions>\n\t\t\t\t</plugin>\n\t\t\t</plugins>\n\t\t</pluginManagement>\n\t</build>\n</
project>\n"
  },
  {
    "path": "scripts/fsLockValidation",
    "content": "#!/bin/bash\n\n#\n# Run FsLockingValidation N times in parallel.\n#\n# Usage:\n#   [N=<processes>] ./fsLockValidation [FsLockingValidation options...]\n#\n# Example:\n#   N=4 ./fsLockValidation --file /tmp/shard-test --num-repeats 5000\n\nN=${N:-2}\n\nOWN_DIR=\"$(readlink -f \"$(dirname \"${BASH_SOURCE[0]}\")\")\"\nPOM=\"$OWN_DIR/../pom.xml\"\nTARGET=\"$OWN_DIR/../target\"\n\nif [ ! -d \"$TARGET/test-classes\" ]; then\n    echo \"test-classes not found -- run 'mvn test-compile' in n5/ first\" >&2\n    exit 1\nfi\n\necho \"Building classpath...\"\nCP=$(mvn -q -f \"$POM\" dependency:build-classpath \\\n        -DincludeScope=test \\\n        -Dmdep.outputFile=/dev/stdout \\\n        2>/dev/null)\nCP=\"$TARGET/test-classes:$TARGET/classes:$CP\"\n\ncleanup() {\n    kill \"${pids[@]}\" 2>/dev/null\n    wait \"${pids[@]}\" 2>/dev/null\n}\ntrap cleanup INT TERM EXIT\n\necho \"Launching $N processes...\"\npids=()\nfor i in $(seq 1 $N); do\n    java -cp \"$CP\" org.janelia.saalfeldlab.n5.kva.FsLockingValidation \"$@\" &\n    pids+=($!)\ndone\n\n# Wait for all processes and collect exit codes\nall_ok=true\nfor i in \"${!pids[@]}\"; do\n    pid=${pids[$i]}\n    wait \"$pid\"\n    code=$?\n    if [ $code -ne 0 ]; then\n        echo \"Process $((i+1)) (pid $pid) exited with code $code\" >&2\n        all_ok=false\n    fi\ndone\n\nif $all_ok; then\n    echo \"All $N processes completed without errors.\"\nelse\n    echo \"One or more processes reported errors.\" >&2\n    exit 1\nfi\n"
  },
  {
    "path": "scripts/writeLockTest.sh",
    "content": "#!/usr/bin/env bash\nset -euo pipefail\n\nTEST_DIR=/tmp/write-lock-test.zarr\n\n[ -d \"$TEST_DIR\" ] && rm -r -- \"$TEST_DIR\"\n\nif [ ! -d \"test-classes\" ]; then\n    mvn test-compile\nfi\n\nmvn -e -q exec:java \\\n  -Dexec.mainClass=org.janelia.saalfeldlab.n5.WriteLockExp \\\n  -Dexec.classpathScope=test \\\n  -Dexec.args=\"/tmp/write-lock-test.zarr 0\" &\npid1=$!\n\nmvn -e -q exec:java \\\n  -Dexec.mainClass=org.janelia.saalfeldlab.n5.WriteLockExp \\\n  -Dexec.classpathScope=test \\\n  -Dexec.args=\"/tmp/write-lock-test.zarr 1\" &\npid2=$!\n\ntrap 'kill \"$pid1\" \"$pid2\" 2>/dev/null || true' EXIT INT TERM\n\nwait \"$pid1\"\nwait \"$pid2\""
  },
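The coordination in `writeLockTest.sh` above is a plain background-job pattern: launch the workers, register a cleanup trap, then `wait` on each pid individually so either worker's failure surfaces. A minimal, self-contained sketch of that pattern, with `sleep` jobs standing in for the two `mvn exec:java` workers:

```shell
# Two background jobs stand in for the two mvn exec:java workers.
(sleep 0.1; exit 0) & pid1=$!
(sleep 0.2; exit 0) & pid2=$!

# Kill any still-running job if the script is interrupted or exits early.
trap 'kill "$pid1" "$pid2" 2>/dev/null || true' EXIT INT TERM

# Wait on each pid individually to capture each worker's exit status.
wait "$pid1"; s1=$?
wait "$pid2"; s2=$?
echo "exit statuses: $s1 $s2"
```

Waiting on explicit pids (rather than a bare `wait`) is what lets the script distinguish which worker failed.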
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/AbstractDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.util.function.ToIntFunction;\n\n/**\n * Abstract base class for {@link DataBlock} implementations.\n *\n * @param <T>\n *            the block data type\n *\n * @author Stephan Saalfeld\n */\npublic abstract class AbstractDataBlock<T> implements DataBlock<T> {\n\n\tprotected final int[] size;\n\tprotected final long[] gridPosition;\n\tprotected final T data;\n\tprivate final ToIntFunction<T> numElements;\n\n\tpublic AbstractDataBlock(\n\t\t\tfinal int[] size,\n\t\t\tfinal long[] gridPosition,\n\t\t\tfinal T data,\n\t\t\tfinal ToIntFunction<T> numElements) {\n\n\t\tthis.size = size;\n\t\tthis.gridPosition = gridPosition;\n\t\tthis.data = data;\n\t\tthis.numElements = numElements;\n\t}\n\n\t@Override\n\tpublic int[] getSize() {\n\n\t\treturn size;\n\t}\n\n\t@Override\n\tpublic long[] getGridPosition() {\n\n\t\treturn gridPosition;\n\t}\n\n\t@Override\n\tpublic T getData() {\n\n\t\treturn data;\n\t}\n\n\t@Override\n\tpublic int getNumElements() {\n\n\t\treturn numElements.applyAsInt(data);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/BufferedKvaLockedChannel.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.apache.commons.io.input.ProxyInputStream;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\nimport java.io.*;\n\nclass BufferedKvaLockedChannel implements LockedChannel {\n\n    private final KeyValueAccess kva;\n    private final String key;\n    private ByteArrayOutputStream baos = null;\n\n    BufferedKvaLockedChannel(final KeyValueAccess kva, final String key) {\n        this.kva = kva;\n        this.key = key;\n    }\n\n    @Override\n    public Reader newReader() throws N5Exception.N5IOException {\n\n        return new InputStreamReader(newInputStream());\n    }\n\n    @Override\n    public InputStream newInputStream() throws N5Exception.N5IOException {\n\n        VolatileReadData volatileReadData = kva.createReadData(key);\n        return new ProxyInputStream(volatileReadData.inputStream()) {\n            @Override\n            public void close() throws IOException {\n                super.close();\n                volatileReadData.close();\n            }\n        };\n    }\n\n    @Override\n    public Writer newWriter() throws N5Exception.N5IOException {\n\n        return new BufferedWriter(new OutputStreamWriter(newOutputStream()));\n    }\n\n    @Override\n    public OutputStream newOutputStream() throws N5Exception.N5IOException {\n        if (baos == null)\n            baos = new ByteArrayOutputStream();\n        return baos;\n    }\n\n    @Override\n    public void close() throws IOException {\n        if (baos != null && baos.size() > 0)\n            kva.write(key, ReadData.from(baos.toByteArray()));\n    }\n}\n"
  },
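`BufferedKvaLockedChannel` accumulates everything written through `newOutputStream()` in memory and only hands the bytes to the backend in a single `kva.write(...)` call when the channel is closed. The same buffer-then-commit pattern can be sketched with just the JDK; the `backend` reference here is a hypothetical stand-in for `KeyValueAccess.write`:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicReference;

public class BufferThenCommit {

	// Hypothetical stand-in for KeyValueAccess.write(key, data).
	static final AtomicReference<byte[]> backend = new AtomicReference<>();

	// Writes accumulate in memory; the backend sees them only on close().
	static OutputStream newBufferedOutputStream() {
		return new ByteArrayOutputStream() {
			@Override
			public void close() throws IOException {
				super.close();
				if (size() > 0)
					backend.set(toByteArray()); // single hand-off on close
			}
		};
	}

	public static void main(String[] args) throws IOException {
		try (OutputStream out = newBufferedOutputStream()) {
			out.write("hello".getBytes());
			// nothing has reached the backend yet
		}
		System.out.println(new String(backend.get())); // prints "hello"
	}
}
```

Buffering like this makes the final write all-or-nothing from the backend's point of view, at the cost of holding the whole value in memory.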
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/ByteArrayDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class ByteArrayDataBlock extends AbstractDataBlock<byte[]> {\n\n\tpublic ByteArrayDataBlock(final int[] size, final long[] gridPosition, final byte[] data) {\n\n\t\tsuper(size, gridPosition, data, a -> a.length);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/Bzip2Compression.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.IOException;\n\nimport org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream;\nimport org.apache.commons.compress.compressors.bzip2.BZip2CompressorOutputStream;\nimport org.janelia.saalfeldlab.n5.Compression.CompressionType;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@CompressionType(\"bzip2\")\n@NameConfig.Name(\"bzip2\")\npublic class Bzip2Compression implements Compression {\n\n\tprivate static final long serialVersionUID = -4873117458390529118L;\n\n\t@CompressionParameter\n\t@NameConfig.Parameter\n\tprivate final int blockSize;\n\n\tpublic Bzip2Compression(final int blockSize) {\n\n\t\tthis.blockSize = blockSize;\n\t}\n\n\tpublic Bzip2Compression() {\n\n\t\tthis(BZip2CompressorOutputStream.MAX_BLOCKSIZE);\n\t}\n\n\t@Override\n\tpublic boolean equals(final Object other) {\n\n\t\tif (other == null || other.getClass() != Bzip2Compression.class)\n\t\t\treturn false;\n\t\telse\n\t\t\treturn blockSize == ((Bzip2Compression)other).blockSize;\n\t}\n\n\t@Override\n\tpublic ReadData decode(final ReadData readData) throws N5IOException {\n\t\ttry {\n\t\t\treturn ReadData.from(new BZip2CompressorInputStream(readData.inputStream()));\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData encode(final ReadData readData) {\n\t\treturn readData.encode(out -> new BZip2CompressorOutputStream(out, blockSize));\n\t}\n}\n"
  },
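`Bzip2Compression` implements both directions by wrapping streams: `decode` wraps the input in a decompressor stream, `encode` wraps the output in a compressor stream. The round trip below sketches that same wrapping pattern with the JDK's GZIP streams (a stand-in, so the example needs neither commons-compress nor the `ReadData` API on the classpath):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class StreamCodecRoundTrip {

	// encode: wrap the destination stream in a compressor.
	static byte[] encode(byte[] raw) throws IOException {
		final ByteArrayOutputStream baos = new ByteArrayOutputStream();
		try (OutputStream out = new GZIPOutputStream(baos)) {
			out.write(raw);
		}
		return baos.toByteArray();
	}

	// decode: wrap the source stream in the matching decompressor.
	static byte[] decode(byte[] compressed) throws IOException {
		try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
			final ByteArrayOutputStream baos = new ByteArrayOutputStream();
			final byte[] buf = new byte[8192];
			for (int n; (n = in.read(buf)) != -1; )
				baos.write(buf, 0, n);
			return baos.toByteArray();
		}
	}

	public static void main(String[] args) throws IOException {
		final byte[] raw = "compress me, please".getBytes();
		System.out.println(java.util.Arrays.equals(raw, decode(encode(raw)))); // prints true
	}
}
```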
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/CachedGsonKeyValueN5Reader.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.lang.reflect.Type;\n\nimport com.google.gson.JsonSyntaxException;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.cache.N5JsonCache;\nimport org.janelia.saalfeldlab.n5.cache.N5JsonCacheableContainer;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonObject;\n\n/**\n * {@link N5Reader} implementation through {@link KeyValueAccess} with JSON\n * attributes parsed with {@link Gson}.\n *\n */\npublic interface CachedGsonKeyValueN5Reader extends GsonKeyValueN5Reader, N5JsonCacheableContainer {\n\n\tdefault N5JsonCache newCache() {\n\n\t\treturn new N5JsonCache(this);\n\t}\n\n\tboolean cacheMeta();\n\n\tN5JsonCache getCache();\n\n\t@Override\n\tdefault JsonElement getAttributesFromContainer(final String normalPathName, final String normalCacheKey) {\n\n\t\t// this implementation doesn't use cache key, but rather depends on\n\t\t// attributesPath being implemented\n\t\treturn GsonKeyValueN5Reader.super.getAttributes(normalPathName);\n\t}\n\n\t@Override\n\tdefault DatasetAttributes getDatasetAttributes(final String pathName) {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tfinal JsonElement attributes;\n\n\t\tif (!datasetExists(pathName))\n\t\t\treturn null;\n\n\t\tif (cacheMeta()) {\n\t\t\tattributes = getCache().getAttributes(normalPath, getAttributesKey());\n\t\t} else {\n\t\t\tattributes = GsonKeyValueN5Reader.super.getAttributes(normalPath);\n\t\t}\n\n\t\treturn createDatasetAttributes(attributes);\n\t}\n\n\tdefault DatasetAttributes normalGetDatasetAttributes(final String pathName) throws N5IOException {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tfinal JsonElement attributes = GsonKeyValueN5Reader.super.getAttributes(normalPath);\n\t\treturn createDatasetAttributes(attributes);\n\t}\n\n\t@Override\n\tdefault <T> T getAttribute(\n\t\t\tfinal String 
pathName,\n\t\t\tfinal String key,\n\t\t\tfinal Class<T> clazz) throws N5Exception {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String normalizedAttributePath = N5URI.normalizeAttributePath(key);\n\n\t\tfinal JsonElement attributes;\n\t\tif (cacheMeta()) {\n\t\t\tattributes = getCache().getAttributes(normalPathName, getAttributesKey());\n\t\t} else {\n\t\t\tattributes = GsonKeyValueN5Reader.super.getAttributes(normalPathName);\n\t\t}\n\t\ttry {\n\t\t\treturn GsonUtils.readAttribute(attributes, normalizedAttributePath, clazz, getGson());\n\t\t} catch (JsonSyntaxException | NumberFormatException | ClassCastException e) {\n\t\t\tthrow new N5Exception.N5ClassCastException(e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> T getAttribute(\n\t\t\tfinal String pathName,\n\t\t\tfinal String key,\n\t\t\tfinal Type type) throws N5Exception {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String normalizedAttributePath = N5URI.normalizeAttributePath(key);\n\t\tJsonElement attributes;\n\t\tif (cacheMeta()) {\n\t\t\tattributes = getCache().getAttributes(normalPathName, getAttributesKey());\n\t\t} else {\n\t\t\tattributes = GsonKeyValueN5Reader.super.getAttributes(normalPathName);\n\t\t}\n\t\ttry {\n\t\t\treturn GsonUtils.readAttribute(attributes, normalizedAttributePath, type, getGson());\n\t\t} catch (JsonSyntaxException | NumberFormatException | ClassCastException e) {\n\t\t\tthrow new N5Exception.N5ClassCastException(e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault boolean exists(final String pathName) {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tif (cacheMeta())\n\t\t\treturn getCache().isGroup(normalPathName, getAttributesKey());\n\t\telse {\n\t\t\treturn existsFromContainer(normalPathName, null);\n\t\t}\n\t}\n\n\t@Override\n\tdefault boolean existsFromContainer(final String normalPathName, final String normalCacheKey) {\n\n\t\tfinal KeyValueAccess kva = 
getKeyValueAccess();\n\t\tif (normalCacheKey == null)\n\t\t\treturn kva.isDirectory(kva.compose(getURI(), normalPathName));\n\t\telse\n\t\t\treturn kva.isFile(kva.compose(getURI(), normalPathName, normalCacheKey));\n\t}\n\n\t@Override\n\tdefault boolean groupExists(final String pathName) {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tif (cacheMeta())\n\t\t\treturn getCache().isGroup(normalPathName, null);\n\t\telse {\n\t\t\treturn isGroupFromContainer(normalPathName);\n\t\t}\n\t}\n\n\t@Override\n\tdefault boolean isGroupFromContainer(final String normalPathName) {\n\n\t\treturn GsonKeyValueN5Reader.super.groupExists(normalPathName);\n\t}\n\n\t@Override\n\tdefault boolean isGroupFromAttributes(final String normalCacheKey, final JsonElement attributes) {\n\n\t\treturn true;\n\t}\n\n\t@Override\n\tdefault boolean datasetExists(final String pathName) throws N5IOException {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tif (cacheMeta()) {\n\t\t\treturn getCache().isDataset(normalPathName, getAttributesKey());\n\t\t}\n\t\treturn isDatasetFromContainer(normalPathName);\n\t}\n\n\t@Override\n\tdefault boolean isDatasetFromContainer(final String normalPathName) throws N5IOException {\n\n\t\treturn normalGetDatasetAttributes(normalPathName) != null;\n\t}\n\n\t@Override\n\tdefault boolean isDatasetFromAttributes(final String normalCacheKey, final JsonElement attributes) {\n\n\t\treturn isGroupFromAttributes(normalCacheKey, attributes) && createDatasetAttributes(attributes) != null;\n\t}\n\n\t/**\n\t * Reads or creates the attributes map of a group or dataset.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @return the attribute\n\t * @throws N5IOException if an IO error occurs while reading the attribute\n\t */\n\t@Override\n\tdefault JsonElement getAttributes(final String pathName) throws N5IOException {\n\n\t\tfinal String groupPath = N5URI.normalizeGroupPath(pathName);\n\n\t\t/* If cached, return 
the cache */\n\t\tif (cacheMeta()) {\n\t\t\treturn getCache().getAttributes(groupPath, getAttributesKey());\n\t\t} else {\n\t\t\treturn GsonKeyValueN5Reader.super.getAttributes(groupPath);\n\t\t}\n\t}\n\n\t@Override\n\tdefault String[] list(final String pathName) throws N5IOException {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tif (cacheMeta()) {\n\t\t\treturn getCache().list(normalPath);\n\t\t} else {\n\t\t\treturn GsonKeyValueN5Reader.super.list(normalPath);\n\t\t}\n\t}\n\n\t@Override\n\tdefault String[] listFromContainer(final String normalPathName) {\n\n\t\t// this implementation doesn't use a cache key, but rather depends on\n\t\t// list being implemented\n\t\treturn GsonKeyValueN5Reader.super.list(normalPathName);\n\t}\n\n\t/**\n\t * Check for attributes that are required for a group to be a dataset.\n\t *\n\t * @param attributes\n\t *            to check for dataset attributes\n\t * @return if {@link DatasetAttributes#DIMENSIONS_KEY} and\n\t *         {@link DatasetAttributes#DATA_TYPE_KEY} are present\n\t */\n\tstatic boolean hasDatasetAttributes(final JsonElement attributes) {\n\n\t\tif (attributes == null || !attributes.isJsonObject()) {\n\t\t\treturn false;\n\t\t}\n\n\t\tfinal JsonObject metadataCache = attributes.getAsJsonObject();\n\t\treturn metadataCache.has(DatasetAttributes.DIMENSIONS_KEY)\n\t\t\t\t&& metadataCache.has(DatasetAttributes.DATA_TYPE_KEY);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/CachedGsonKeyValueN5Writer.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\n\n/**\n * Cached default implementation of {@link N5Writer} with JSON attributes parsed\n * with {@link Gson}.\n */\npublic interface CachedGsonKeyValueN5Writer extends CachedGsonKeyValueN5Reader, GsonKeyValueN5Writer {\n\n\t@Override\n\tdefault void setVersion(final String path) throws N5Exception {\n\n\t\tfinal Version version = getVersion();\n\t\tif (!VERSION.isCompatible(version))\n\t\t\tthrow new N5IOException(\"Incompatible version \" + version + \" (this is \" + VERSION + \").\");\n\n\t\tif (!VERSION.equals(version))\n\t\t\tsetAttribute(\"/\", VERSION_KEY, VERSION.toString());;\n\t}\n\n\t@Override\n\tdefault void createGroup(final String path) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(path);\n\t\t// avoid hitting the backend if this path is already a group according to the cache\n\t\t// else if exists is true (then a dataset is present) so throw an exception to avoid\n\t\t// overwriting / invalidating existing data\n\t\tif (groupExists(normalPath))\n\t\t\treturn;\n\t\telse if (datasetExists(normalPath))\n\t\t\tthrow new N5Exception(\"Can't make a group on existing dataset.\");\n\n\t\tgetKeyValueAccess().createDirectories(absoluteGroupPath(normalPath));\n\n\t\tif (cacheMeta()) {\n\t\t\t// check all nodes that are parents of the added node, if they have\n\t\t\t// a children set, add the new child to it\n\t\t\tgetKeyValueAccess().parent(normalPath);\n\t\t\tString[] pathParts = getKeyValueAccess().components(normalPath);\n\t\t\tString parent = N5URI.normalizeGroupPath(\"/\");\n\t\t\tif (pathParts.length == 0) {\n\t\t\t\tpathParts = new String[]{\"\"};\n\t\t\t}\n\t\t\tfor (final String child : pathParts) {\n\n\t\t\t\tfinal String childPath = parent.isEmpty() ? 
child : parent + \"/\" + child;\n\t\t\t\tgetCache().initializeNonemptyCache(childPath, getAttributesKey());\n\t\t\t\tgetCache().updateCacheInfo(childPath, getAttributesKey());\n\n\t\t\t\t// only add if the parent exists and has children cached already\n\t\t\t\tif (parent != null && !child.isEmpty())\n\t\t\t\t\tgetCache().addChildIfPresent(parent, child);\n\n\t\t\t\tparent = childPath;\n\t\t\t}\n\t\t}\n\t}\n\n\t@Override\n\tdefault void writeAttributes(\n\t\t\tfinal String normalGroupPath,\n\t\t\tfinal JsonElement attributes) throws N5Exception {\n\n\t\twriteAndCacheAttributes(normalGroupPath, attributes);\n\t}\n\n\tdefault void writeAndCacheAttributes(\n\t\t\tfinal String normalGroupPath,\n\t\t\tfinal JsonElement attributes) throws N5Exception {\n\n\t\tGsonKeyValueN5Writer.super.writeAttributes(normalGroupPath, attributes);\n\n\t\tif (cacheMeta()) {\n\t\t\tJsonElement nullRespectingAttributes = attributes;\n\t\t\t/*\n\t\t\t * Gson only filters out nulls when you write the JsonElement. This\n\t\t\t * means it doesn't filter them out when caching.\n\t\t\t * To handle this, we explicitly write the existing JsonElement to\n\t\t\t * a new JsonElement.\n\t\t\t * The output is identical to the input if:\n\t\t\t * - serializeNulls is true\n\t\t\t * - no null values are present\n\t\t\t * - caching is turned off\n\t\t\t */\n\t\t\tif (!getGson().serializeNulls()) {\n\t\t\t\tnullRespectingAttributes = getGson().toJsonTree(attributes);\n\t\t\t}\n\t\t\t/* Update the cache, and write to the writer */\n\t\t\tgetCache().updateCacheInfo(normalGroupPath, getAttributesKey(), nullRespectingAttributes);\n\t\t}\n\t}\n\n\t@Override\n\tdefault boolean remove(final String path) throws N5Exception {\n\n\t\t// GsonKeyValueN5Writer.super.remove(path)\n\t\t/*\n\t\t * the lines below duplicate the single line above, which would have to\n\t\t * call normalizeGroupPath again; duplicating the code here avoids that\n\t\t * extra work\n\t\t */\n\t\tfinal String normalPath = 
N5URI.normalizeGroupPath(path);\n\t\tfinal String groupPath = absoluteGroupPath(normalPath);\n\n\t\tif (getKeyValueAccess().isDirectory(groupPath))\n\t\t\tgetKeyValueAccess().delete(groupPath);\n\n\t\tif (cacheMeta()) {\n\t\t\tfinal String parentPath = getKeyValueAccess().parent(normalPath);\n\t\t\tgetCache().removeCache(parentPath, normalPath);\n\t\t}\n\n\t\t/* an IOException should have occurred if anything had failed midway */\n\t\treturn true;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/ChannelLock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.io.UncheckedIOException;\nimport java.nio.channels.FileChannel;\nimport java.nio.channels.FileLock;\nimport java.nio.channels.OverlappingFileLockException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.StandardOpenOption;\n\n/**\n * Holds a channel and system-level file lock (shared for writing, non-shared\n * for reading) and keeps it open until this {@code ChannelLock} is {@link\n * #close() closed}.\n */\nclass ChannelLock implements Closeable {\n\n\tprivate final FileChannel channel;\n\n\t/**\n\t * Hold a hard reference to the {@code FileLock} to make sure it is not\n\t * prematurely released.\n\t * <p>\n\t * NB: We do not call {@code lock.release()} in {@link #close}, because at\n\t * this point the channel might be already closed (by an external writer).\n\t * {@code lock.release()} will throw an exception if the channel is already\n\t * closed. Instead, we just close the channel which will automatically\n\t * release the lock.\n\t */\n\t@SuppressWarnings({\"unused\", \"FieldCanBeLocal\"})\n\tprivate final FileLock lock;\n\n\tprivate ChannelLock(final FileChannel channel, final FileLock lock) {\n\t\tthis.channel = channel;\n\t\tthis.lock = lock;\n\t}\n\n\tpublic void close() throws IOException {\n\n\t\t// NB: We do not call lock.release() here, because it may throw an\n\t\t// exception if the channel is already closed. Instead, we just close\n\t\t// the channel. This will automatically release the lock. (And it is ok\n\t\t// to close an already closed channel.)\n\n\t\tchannel.close();\n\t}\n\n\tFileChannel getChannel() {\n\t\treturn channel;\n\t}\n\n\t/**\n\t * Create a {@link FileChannel} on the given {@code path} and lock it with a\n\t * system-level {@link FileLock}. 
If there is an existing overlapping file\n\t * lock, this method will block until the existing lock is released and the\n\t * channel can be locked by us.\n\t * <p>\n\t * The {@code FileLock} is exclusive if the {@code path} is locked {@code\n\t * forWriting}, and shared otherwise.\n\t * <p>\n\t * If the {@code path} is locked {@code forWriting}, a non-existing file\n\t * and the parent directories are created as needed.\n\t *\n\t * @throws IOException if an error occurs while opening the channel, or if\n\t * the calling thread is interrupted while waiting for the {@code FileLock}.\n\t */\n\tstatic ChannelLock lock(final Path path, final boolean forWriting, final LockingPolicy policy) throws IOException {\n\n\t\tfinal FileChannel channel = openFileChannel(path, forWriting);\n\t\tif (policy == LockingPolicy.UNSAFE) {\n\t\t\treturn new ChannelLock(channel, null);\n\t\t}\n\t\ttry {\n\t\t\twhile (true) {\n\t\t\t\ttry {\n\t\t\t\t\tfinal FileLock lock = channel.lock(0, Long.MAX_VALUE, !forWriting);\n\t\t\t\t\treturn new ChannelLock(channel, lock);\n\t\t\t\t} catch (final OverlappingFileLockException e) {\n\t\t\t\t\ttry {\n\t\t\t\t\t\tThread.sleep(100);\n\t\t\t\t\t} catch (final InterruptedException ie) {\n\t\t\t\t\t\tThread.currentThread().interrupt();\n\t\t\t\t\t\tthrow new IOException(\"Interrupted while waiting for file lock\", ie);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (Exception e) {\n\t\t\tif (policy == LockingPolicy.STRICT) {\n\t\t\t\tcloseQuietly(channel);\n\t\t\t\tthrow e;\n\t\t\t} else {\n\t\t\t\treturn new ChannelLock(channel, null);\n\t\t\t}\n\t\t}\n\t}\n\n\t/**\n\t * Opens a file channel. 
If the channel is opened {@code forWriting},\n\t * then this may create the file and the parent directories as needed.\n\t *\n\t * @throws IOException\n\t * \t\tif the channel cannot be opened\n\t */\n\tprivate static FileChannel openFileChannel(final Path path, final boolean forWriting) throws IOException {\n\n\t\tif (forWriting) {\n\t\t\tfinal Path parent = path.getParent();\n\t\t\tif (parent != null) {\n\t\t\t\tFiles.createDirectories(parent);\n\t\t\t}\n\t\t\treturn FileChannel.open(path, StandardOpenOption.READ, StandardOpenOption.WRITE, StandardOpenOption.CREATE);\n\t\t} else {\n\t\t\treturn FileChannel.open(path, StandardOpenOption.READ);\n\t\t}\n\t}\n\n\tprivate static void closeQuietly(final FileChannel fileChannel) {\n\t\tif (fileChannel != null) {\n\t\t\ttry {\n\t\t\t\tfileChannel.close();\n\t\t\t} catch (final IOException | UncheckedIOException ignored) {\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
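The locking behavior `ChannelLock` builds on comes straight from `java.nio`: `FileChannel.lock(position, size, shared)` takes an exclusive lock when `shared` is false (requiring a writable channel) and a shared lock when true (requiring a readable channel). A small standalone demo; note that platforms without shared-lock support may silently upgrade a shared request to an exclusive lock:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileLockDemo {

	public static void main(String[] args) throws IOException {
		final Path path = Files.createTempFile("lock-demo", ".bin");

		// Writer: exclusive (non-shared) lock on the whole file.
		try (FileChannel ch = FileChannel.open(path,
				StandardOpenOption.READ, StandardOpenOption.WRITE)) {
			final FileLock lock = ch.lock(0, Long.MAX_VALUE, false);
			System.out.println("exclusive? " + !lock.isShared()); // prints "exclusive? true"
		} // closing the channel releases the lock

		// Reader: shared lock; other readers may hold it concurrently.
		try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
			final FileLock lock = ch.lock(0, Long.MAX_VALUE, true);
			System.out.println("shared? " + lock.isShared());
		}

		Files.deleteIfExists(path);
	}
}
```

Closing the channel to release the lock, rather than calling `release()`, mirrors the reasoning in `ChannelLock.close()`.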
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/Compression.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.Serializable;\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Inherited;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\nimport org.janelia.saalfeldlab.n5.codec.DataCodec;\nimport org.janelia.saalfeldlab.n5.codec.CodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.scijava.annotations.Indexable;\n\n/**\n * This interface is used to indicate that a {@link DataCodec} can be\n * serialized as a \"compression\" for the N5 format (using the N5 API).\n * <p>\n * N5Readers and N5Writers for the N5 format can declare DataCodecs that\n * implement this interface so that the {@link CompressionAdapter} is used for\n * serialization.\n * <p>\n * See also: an alternative method for serializing general {@link CodecInfo}s is\n * with the {@link NameConfigAdapter}. This interface remains for legacy\n * (de)serialization.\n *\n * @author Stephan Saalfeld\n */\npublic interface Compression extends Serializable, DataCodec, DataCodecInfo {\n\n\t/**\n\t * Annotation for runtime discovery of compression schemes.\n\t *\n\t */\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Inherited\n\t@Target(ElementType.TYPE)\n\t@Indexable\n\t@interface CompressionType {\n\n\t\tString value();\n\t}\n\n\t/**\n\t * Annotation for runtime discovery of compression schemes.\n\t *\n\t */\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Inherited\n\t@Target(ElementType.FIELD)\n\t@interface CompressionParameter {}\n\n\tdefault String getType() {\n\n\t\tfinal CompressionType compressionType = getClass().getAnnotation(CompressionType.class);\n\t\tif (compressionType == null)\n\t\t\treturn null;\n\t\telse\n\t\t\treturn compressionType.value();\n\t}\n\n\t@Override\n\tdefault DataCodec create() {\n\t\treturn this;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/CompressionAdapter.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.lang.reflect.Constructor;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Type;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.Map.Entry;\n\nimport org.janelia.saalfeldlab.n5.Compression.CompressionParameter;\nimport org.janelia.saalfeldlab.n5.Compression.CompressionType;\nimport org.scijava.annotations.Index;\nimport org.scijava.annotations.IndexItem;\n\nimport com.google.gson.JsonDeserializationContext;\nimport com.google.gson.JsonDeserializer;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonObject;\nimport com.google.gson.JsonParseException;\nimport com.google.gson.JsonSerializationContext;\nimport com.google.gson.JsonSerializer;\n\n/**\n * Compression adapter, auto-discovers annotated compression implementations\n * in the classpath.\n *\n * @author Stephan Saalfeld\n */\npublic class CompressionAdapter implements JsonDeserializer<Compression>, JsonSerializer<Compression> {\n\n\tprivate static CompressionAdapter instance = null;\n\n\tprivate final HashMap<String, Constructor<? 
extends Compression>> compressionConstructors = new HashMap<>();\n\tprivate final HashMap<String, HashMap<String, Class<?>>> compressionParameters = new HashMap<>();\n\n\tprivate static ArrayList<Field> getDeclaredFields(Class<?> clazz) {\n\n\t\tfinal ArrayList<Field> fields = new ArrayList<>();\n\t\tfields.addAll(Arrays.asList(clazz.getDeclaredFields()));\n\t\tfor (clazz = clazz.getSuperclass(); clazz != null; clazz = clazz.getSuperclass())\n\t\t\tfields.addAll(Arrays.asList(clazz.getDeclaredFields()));\n\t\treturn fields;\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\tpublic static synchronized void update(final boolean override) {\n\n\t\tif (override || instance == null) {\n\n\t\t\tfinal CompressionAdapter newInstance = new CompressionAdapter();\n\n\t\t\tfinal ClassLoader classLoader = Thread.currentThread().getContextClassLoader();\n\t\t\tfinal Index<CompressionType> annotationIndex = Index.load(CompressionType.class, classLoader);\n\t\t\tfor (final IndexItem<CompressionType> item : annotationIndex) {\n\t\t\t\tClass<? extends Compression> clazz;\n\t\t\t\ttry {\n\t\t\t\t\tclazz = (Class<? extends Compression>)Class.forName(item.className());\n\t\t\t\t\tfinal String type = clazz.getAnnotation(CompressionType.class).value();\n\n\t\t\t\t\tfinal Constructor<? 
extends Compression> constructor = clazz.getDeclaredConstructor();\n\n\t\t\t\t\tfinal HashMap<String, Class<?>> parameters = new HashMap<>();\n\t\t\t\t\tfinal ArrayList<Field> fields = getDeclaredFields(clazz);\n\t\t\t\t\tfor (final Field field : fields) {\n\t\t\t\t\t\tif (field.getAnnotation(CompressionParameter.class) != null) {\n\t\t\t\t\t\t\tparameters.put(field.getName(), field.getType());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tnewInstance.compressionConstructors.put(type, constructor);\n\t\t\t\t\tnewInstance.compressionParameters.put(type, parameters);\n\t\t\t\t} catch (final NoClassDefFoundError | ClassNotFoundException | NoSuchMethodException | ClassCastException\n\t\t\t\t\t\t| UnsatisfiedLinkError e) {\n\t\t\t\t\tSystem.err.println(\"Compression '\" + item.className() + \"' could not be registered\");\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tinstance = newInstance;\n\t\t}\n\t}\n\n\tpublic static void update() {\n\n\t\tupdate(false);\n\t}\n\n\t@Override\n\tpublic JsonElement serialize(\n\t\t\tfinal Compression compression,\n\t\t\tfinal Type typeOfSrc,\n\t\t\tfinal JsonSerializationContext context) {\n\n\t\tfinal String type = compression.getType();\n\t\tfinal Class<? 
extends Compression> clazz = compression.getClass();\n\n\t\tfinal JsonObject json = new JsonObject();\n\t\tjson.addProperty(\"type\", type);\n\n\t\tfinal HashMap<String, Class<?>> parameterTypes = compressionParameters.get(type);\n\t\ttry {\n\t\t\tfor (final Entry<String, Class<?>> parameterType : parameterTypes.entrySet()) {\n\t\t\t\tfinal String name = parameterType.getKey();\n\t\t\t\tfinal Field field = clazz.getDeclaredField(name);\n\t\t\t\tfinal boolean isAccessible = field.isAccessible();\n\t\t\t\tfield.setAccessible(true);\n\t\t\t\tfinal Object value = field.get(compression);\n\t\t\t\tfield.setAccessible(isAccessible);\n\t\t\t\tjson.add(parameterType.getKey(), context.serialize(value));\n\t\t\t}\n\t\t} catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e) {\n\t\t\te.printStackTrace(System.err);\n\t\t\treturn null;\n\t\t}\n\n\t\treturn json;\n\t}\n\n\t@Override\n\tpublic Compression deserialize(\n\t\t\tfinal JsonElement json,\n\t\t\tfinal Type typeOfT,\n\t\t\tfinal JsonDeserializationContext context) throws JsonParseException {\n\n\t\tfinal JsonObject jsonObject = json.getAsJsonObject();\n\t\tfinal JsonElement jsonType = jsonObject.get(\"type\");\n\t\tif (jsonType == null)\n\t\t\treturn null;\n\n\t\tfinal String type = jsonType.getAsString();\n\t\tfinal Constructor<? 
extends Compression> constructor = compressionConstructors.get(type);\n\t\tif (constructor == null)\n\t\t\treturn null; // unknown compression type\n\t\tfinal Compression compression;\n\t\ttry {\n\t\t\tcompression = constructor.newInstance();\n\t\t\tfinal HashMap<String, Class<?>> parameterTypes = compressionParameters.get(type);\n\t\t\tfor (final Entry<String, Class<?>> parameterType : parameterTypes.entrySet()) {\n\t\t\t\tfinal String name = parameterType.getKey();\n\t\t\t\tif (jsonObject.has(name)) {\n\t\t\t\t\tfinal Object parameter = context.deserialize(jsonObject.get(name), parameterType.getValue());\n\t\t\t\t\tReflectionUtils.setFieldValue(compression, name, parameter);\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException\n\t\t\t\t| SecurityException | NoSuchFieldException e) {\n\t\t\te.printStackTrace(System.err);\n\t\t\treturn null;\n\t\t}\n\n\t\treturn compression;\n\t}\n\n\tpublic static CompressionAdapter getJsonAdapter() {\n\n\t\tif (instance == null)\n\t\t\tupdate();\n\t\treturn instance;\n\t}\n}\n"
  },
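The discovery step in `CompressionAdapter` boils down to: walk a class and all of its superclasses, and collect every field that carries the marker annotation. The standalone sketch below uses a hypothetical `@Param` marker in place of `@CompressionParameter`:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class AnnotatedFieldScan {

	// Stand-in for @CompressionParameter.
	@Retention(RetentionPolicy.RUNTIME)
	@Target(ElementType.FIELD)
	@interface Param {}

	static class Base { @Param int blockSize; }

	static class Impl extends Base { @Param int level; String notAParam; }

	// Walk the class hierarchy, collecting names of annotated fields.
	static List<String> annotatedFieldNames(Class<?> clazz) {
		final List<String> names = new ArrayList<>();
		for (Class<?> c = clazz; c != null; c = c.getSuperclass())
			for (final Field f : c.getDeclaredFields())
				if (f.getAnnotation(Param.class) != null)
					names.add(f.getName());
		return names;
	}

	public static void main(String[] args) {
		// collects "level" (from Impl) and "blockSize" (from Base)
		System.out.println(annotatedFieldNames(Impl.class));
	}
}
```

Walking `getSuperclass()` explicitly is necessary because `getDeclaredFields()` never returns inherited fields, which is why `CompressionAdapter.getDeclaredFields` does the same loop.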
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/DataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\n/**\n * Interface for data blocks. A data block has data, a position on the block\n * grid, and a size.\n *\n * @param <T> type of the data contained in the DataBlock\n *\n * @author Stephan Saalfeld\n */\npublic interface DataBlock<T> {\n\n\t/**\n\t * Returns the size of this data block.\n\t * <p>\n\t * The size of a data block is expected to be smaller than or equal to the\n\t * spacing of the block grid. The dimensionality of size is expected to be\n\t * equal to the dimensionality of the dataset. Consistency is not enforced.\n\t *\n\t * @return size of the data block\n\t */\n\tint[] getSize();\n\n\t/**\n\t * Returns the position of this data block on the block grid relative to dataset.\n\t * <p>\n\t * The dimensionality of the grid position is expected to be equal to the\n\t * dimensionality of the dataset. Consistency is not enforced.\n\t *\n\t * @return position on the block grid\n\t */\n\tlong[] getGridPosition();\n\n\t/**\n\t * Returns the data object held by this data block.\n\t *\n\t * @return data object\n\t */\n\tT getData();\n\n\t/**\n\t * Returns the number of elements in this {@link DataBlock}. 
This number is\n\t * not necessarily equal to {@link #getNumElements(int[])\n\t * getNumElements(getSize())}.\n\t *\n\t * @return the number of elements\n\t */\n\tint getNumElements();\n\n\t/**\n\t * Returns the number of elements in a box of given size.\n\t *\n\t * @param size\n\t *            the size\n\t * @return the number of elements\n\t */\n\tstatic int getNumElements(final int[] size) {\n\n\t\tint n = size[0];\n\t\tfor (int i = 1; i < size.length; ++i)\n\t\t\tn *= size[i];\n\t\treturn n;\n\t}\n\n\t/**\n\t * Factory for creating {@code DataBlock<T>}.\n\t *\n\t * @param <T>\n\t * \t\ttype of the data contained in the DataBlock\n\t */\n\tinterface DataBlockFactory<T> {\n\n\t\t/**\n\t\t * Create a new {@link DataBlock} with the given {@code blockSize}, {@code gridPosition}, and {@code data} content.\n\t\t *\n\t\t * @param blockSize\n\t\t * \t\tthe block size\n\t\t * @param gridPosition\n\t\t * \t\tthe grid position\n\t\t * @param data\n\t\t * \t\tthe data object\n\t\t *\n\t\t * @return a new DataBlock\n\t\t */\n\t\tDataBlock<T> createDataBlock(int[] blockSize, long[] gridPosition, T data);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/DataType.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.lang.reflect.Type;\n\nimport com.google.gson.JsonDeserializationContext;\nimport com.google.gson.JsonDeserializer;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonParseException;\nimport com.google.gson.JsonPrimitive;\nimport com.google.gson.JsonSerializationContext;\nimport com.google.gson.JsonSerializer;\n\n/**\n * Enumerates available data types.\n *\n * @author Stephan Saalfeld\n */\npublic enum DataType {\n\n\tUINT8(\n\t\t\t\"uint8\",\n\t\t\t(blockSize, gridPosition, numElements) -> new ByteArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew byte[numElements])),\n\tUINT16(\n\t\t\t\"uint16\",\n\t\t\t(blockSize, gridPosition, numElements) -> new ShortArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew short[numElements])),\n\tUINT32(\n\t\t\t\"uint32\",\n\t\t\t(blockSize, gridPosition, numElements) -> new IntArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew int[numElements])),\n\tUINT64(\n\t\t\t\"uint64\",\n\t\t\t(blockSize, gridPosition, numElements) -> new LongArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew long[numElements])),\n\tINT8(\n\t\t\t\"int8\",\n\t\t\t(blockSize, gridPosition, numElements) -> new ByteArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew byte[numElements])),\n\tINT16(\n\t\t\t\"int16\",\n\t\t\t(blockSize, gridPosition, numElements) -> new ShortArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew short[numElements])),\n\tINT32(\n\t\t\t\"int32\",\n\t\t\t(blockSize, gridPosition, numElements) -> new IntArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew int[numElements])),\n\tINT64(\n\t\t\t\"int64\",\n\t\t\t(blockSize, gridPosition, numElements) -> new LongArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew 
long[numElements])),\n\tFLOAT32(\n\t\t\t\"float32\",\n\t\t\t(blockSize, gridPosition, numElements) -> new FloatArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew float[numElements])),\n\tFLOAT64(\n\t\t\t\"float64\",\n\t\t\t(blockSize, gridPosition, numElements) -> new DoubleArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew double[numElements])),\n\tSTRING(\n\t\t\t\"string\",\n\t\t\t(blockSize, gridPosition, numElements) -> new StringDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew String[numElements])),\n\tOBJECT(\n\t\t\t\"object\",\n\t\t\t(blockSize, gridPosition, numElements) -> new ByteArrayDataBlock(\n\t\t\t\t\tblockSize,\n\t\t\t\t\tgridPosition,\n\t\t\t\t\tnew byte[numElements]));\n\n\tprivate final String label;\n\n\tprivate final DataBlockFactory dataBlockFactory;\n\n\tDataType(final String label, final DataBlockFactory dataBlockFactory) {\n\n\t\tthis.label = label;\n\t\tthis.dataBlockFactory = dataBlockFactory;\n\t}\n\n\t@Override\n\tpublic String toString() {\n\n\t\treturn label;\n\t}\n\n\tpublic static DataType fromString(final String string) {\n\n\t\tfor (final DataType value : values())\n\t\t\tif (value.toString().equals(string))\n\t\t\t\treturn value;\n\t\treturn null;\n\t}\n\n\t/**\n\t * Factory for {@link DataBlock DataBlocks}.\n\t *\n\t * @param blockSize\n\t *            the block size\n\t * @param gridPosition\n\t *            the grid position\n\t * @param numElements\n\t *            the number of elements (not necessarily one element per block\n\t *            element)\n\t * @return the data block\n\t */\n\tpublic DataBlock<?> createDataBlock(final int[] blockSize, final long[] gridPosition, final int numElements) {\n\n\t\treturn dataBlockFactory.createDataBlock(blockSize, gridPosition, numElements);\n\t}\n\n\t/**\n\t * Factory for {@link DataBlock DataBlocks} with one data element for each\n\t * block element (e.g. 
pixel image).\n\t *\n\t * @param blockSize\n\t *            the block size\n\t * @param gridPosition\n\t *            the grid position\n\t * @return the data block\n\t */\n\tpublic DataBlock<?> createDataBlock(final int[] blockSize, final long[] gridPosition) {\n\n\t\treturn dataBlockFactory.createDataBlock(blockSize, gridPosition, DataBlock.getNumElements(blockSize));\n\t}\n\n\tprivate interface DataBlockFactory {\n\n\t\tDataBlock<?> createDataBlock(final int[] blockSize, final long[] gridPosition, final int numElements);\n\t}\n\n\tstatic public class JsonAdapter implements JsonDeserializer<DataType>, JsonSerializer<DataType> {\n\n\t\t@Override\n\t\tpublic DataType deserialize(\n\t\t\t\tfinal JsonElement json,\n\t\t\t\tfinal Type typeOfT,\n\t\t\t\tfinal JsonDeserializationContext context) throws JsonParseException {\n\n\t\t\treturn DataType.fromString(json.getAsString());\n\t\t}\n\n\t\t@Override\n\t\tpublic JsonElement serialize(\n\t\t\t\tfinal DataType src,\n\t\t\t\tfinal Type typeOfSrc,\n\t\t\t\tfinal JsonSerializationContext context) {\n\n\t\t\treturn new JsonPrimitive(src.toString());\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/DatasetAttributes.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport com.google.gson.JsonDeserializationContext;\nimport com.google.gson.JsonDeserializer;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonNull;\nimport com.google.gson.JsonObject;\nimport com.google.gson.JsonParseException;\nimport com.google.gson.JsonSerializationContext;\nimport com.google.gson.JsonSerializer;\n\nimport org.janelia.saalfeldlab.n5.codec.BlockCodec;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.CodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.DatasetAccess;\nimport org.janelia.saalfeldlab.n5.shard.DefaultDatasetAccess;\nimport org.janelia.saalfeldlab.n5.shard.ShardCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\n\nimport java.io.Serializable;\nimport java.lang.reflect.Type;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.stream.Collectors;\n\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DatasetCodec;\nimport org.janelia.saalfeldlab.n5.codec.DatasetCodecInfo;\n\n\n/**\n * Mandatory dataset attributes:\n *\n * <ol>\n * <li>long[] : dimensions</li>\n * <li>int[] : blockSize</li>\n * <li>{@link DataType} : dataType</li>\n * <li>{@link CodecInfo}... 
: encode/decode routines</li>\n * </ol>\n *\n * @author Stephan Saalfeld\n */\npublic class DatasetAttributes implements Serializable {\n\n\tprivate static final long serialVersionUID = -4521467080388947553L;\n\n\tpublic static final String DIMENSIONS_KEY = \"dimensions\";\n\tpublic static final String BLOCK_SIZE_KEY = \"blockSize\";\n\tpublic static final String SHARD_SIZE_KEY = \"shardSize\";\n\tpublic static final String DATA_TYPE_KEY = \"dataType\";\n\tpublic static final String COMPRESSION_KEY = \"compression\";\n\tpublic static final String CODEC_KEY = \"codecs\";\n\n\tpublic static final String[] N5_DATASET_ATTRIBUTES = new String[]{\n\t\t\tDIMENSIONS_KEY, BLOCK_SIZE_KEY, DATA_TYPE_KEY, COMPRESSION_KEY, CODEC_KEY\n\t};\n\n\t/* version 0 */\n\tprotected static final String compressionTypeKey = \"compressionType\";\n\n\tprivate final long[] dimensions;\n\n\t// number of samples per chunk per dimension\n\tprivate final int[] chunkSize;\n\n\t// number of samples per block per dimension\n\t// identical to chunkSize for non-sharded datasets\n\tprivate final int[] blockSize;\n\n\tprivate final DataType dataType;\n\n\tprivate final JsonElement defaultValue;\n\n\tprivate final BlockCodecInfo blockCodecInfo;\n\tprivate final DataCodecInfo[] dataCodecInfos;\n\tprivate final DatasetCodecInfo[] datasetCodecInfos;\n\n\tprivate transient final DatasetAccess<?> access;\n\n\tpublic DatasetAttributes(\n\t\t\tfinal long[] dimensions,\n\t\t\tfinal int[] blockSize,\n\t\t\tfinal DataType dataType,\n\t\t\tfinal JsonElement defaultValue,\n\t\t\tfinal BlockCodecInfo blockCodecInfo,\n\t\t\tfinal DatasetCodecInfo[] datasetCodecInfos,\n\t\t\tfinal DataCodecInfo... dataCodecInfos) {\n\n\t\tthis.dimensions = dimensions;\n\t\tthis.dataType = dataType;\n\t\tthis.blockSize = blockSize;\n\t\tthis.defaultValue = defaultValue == null ? JsonNull.INSTANCE : defaultValue;\n\n\t\tthis.blockCodecInfo = blockCodecInfo == null ? 
defaultBlockCodecInfo() : blockCodecInfo;\n\t\tthis.datasetCodecInfos = datasetCodecInfos;\n\n\t\tif (dataCodecInfos == null)\n\t\t\tthis.dataCodecInfos = new DataCodecInfo[0];\n\t\telse\n\t\t\tthis.dataCodecInfos = Arrays.stream(dataCodecInfos)\n\t\t\t\t\t.filter(it -> it != null && !(it instanceof RawCompression))\n\t\t\t\t\t.toArray(DataCodecInfo[]::new);\n\n\t\taccess = createDatasetAccess();\n\t\tchunkSize = access.getGrid().getBlockSize(0);\n\t}\n\n\tpublic DatasetAttributes(\n\t\t\tfinal long[] dimensions,\n\t\t\tfinal int[] outerBlockSize,\n\t\t\tfinal DataType dataType,\n\t\t\tfinal BlockCodecInfo blockCodecInfo,\n\t\t\tfinal DatasetCodecInfo[] datasetCodecInfos,\n\t\t\tfinal DataCodecInfo... dataCodecInfos) {\n\n\t\tthis(dimensions, outerBlockSize, dataType, JsonNull.INSTANCE,\n\t\t\t\tblockCodecInfo, datasetCodecInfos, dataCodecInfos);\n\t}\n\n\tpublic DatasetAttributes(\n\t\t\tfinal long[] dimensions,\n\t\t\tfinal int[] outerBlockSize,\n\t\t\tfinal DataType dataType,\n\t\t\tfinal BlockCodecInfo blockCodecInfo,\n\t\t\tfinal DataCodecInfo... dataCodecInfos) {\n\n\t\tthis(dimensions, outerBlockSize, dataType, blockCodecInfo, null, dataCodecInfos);\n\t}\n\n\t/**\n\t * Constructs a DatasetAttributes instance with specified dimensions, block size, data type,\n\t * and a single compressor with the default codec.\n\t *\n\t * @param dimensions  the dimensions of the dataset\n\t * @param blockSize   the size of the blocks in the dataset\n\t * @param dataType    the data type of the dataset\n\t * @param dataCodecInfos the codecs used to encode/decode the data\n\t */\n\tpublic DatasetAttributes(\n\t\t\tfinal long[] dimensions,\n\t\t\tfinal int[] blockSize,\n\t\t\tfinal DataType dataType,\n\t\t\tfinal DataCodecInfo... 
dataCodecInfos) {\n\n\t\tthis(dimensions, blockSize, dataType, null, dataCodecInfos);\n\t}\n\n\t/**\n\t * Constructs a DatasetAttributes instance with specified dimensions, block size, data type, and default codecs.\n\t *\n\t * @param dimensions the dimensions of the dataset\n\t * @param blockSize  the size of the blocks in the dataset\n\t * @param dataType   the data type of the dataset\n\t */\n\tpublic DatasetAttributes(\n\t\t\tfinal long[] dimensions,\n\t\t\tfinal int[] blockSize,\n\t\t\tfinal DataType dataType) {\n\n\t\tthis(dimensions, blockSize, dataType, new DataCodecInfo[0]);\n\t}\n\n\tprivate DatasetAccess<?> createDatasetAccess() {\n\n\t\tfinal int m = nestingDepth(blockCodecInfo);\n\n\t\t// There are m codecs: 1 DataBlock codec and m-1 shard codecs.\n\t\t// The inner-most codec (the DataBlock codec) is at index 0.\n\t\tfinal int[][] blockSizes = new int[m][];\n\n\t\t// NestedGrid validates block sizes, so instantiate it before creating the blockCodecs:\n\t\t// blockCodecInfo.create below could fail unexpectedly with invalid\n\t\t// blockSizes, so validate first.\n\t\tblockSizes[m - 1] = blockSize;\n\t\tBlockCodecInfo tmpInfo = blockCodecInfo;\n\t\tfor (int l = m - 1; l > 0; --l) {\n\t\t\tfinal ShardCodecInfo info = (ShardCodecInfo)tmpInfo;\n\t\t\tblockSizes[l - 1] = info.getInnerBlockSize();\n\t\t\ttmpInfo = info.getInnerBlockCodecInfo();\n\t\t}\n\n\t\tBlockCodecInfo currentBlockCodecInfo = blockCodecInfo;\n\t\tDataCodecInfo[] currentDataCodecInfos = dataCodecInfos;\n\n\t\tDatasetCodecInfo[] datasetCodecInfos = this.datasetCodecInfos;\n\n\t\tfinal NestedGrid grid = new NestedGrid(blockSizes, dimensions);\n\t\tfinal BlockCodec<?>[] blockCodecs = new BlockCodec[m];\n\t\tfor (int l = m - 1; l >= 0; --l) {\n\t\t\tblockCodecs[l] = currentBlockCodecInfo.create(dataType, blockSizes[l], currentDataCodecInfos);\n\t\t\tif (l > 0) {\n\t\t\t\tfinal ShardCodecInfo info = (ShardCodecInfo) currentBlockCodecInfo;\n\t\t\t\tcurrentBlockCodecInfo = 
info.getInnerBlockCodecInfo();\n\t\t\t\tcurrentDataCodecInfos = info.getInnerDataCodecInfos();\n\t\t\t\tif (info.getInnerDataCodecInfos() != null) {\n\t\t\t\t\tif (datasetCodecInfos != null && datasetCodecInfos.length > 0) {\n\t\t\t\t\t\tthrow new N5Exception.N5JsonParseException(\"Found DatasetCodecs both inside and outside of shards. Not handled\");\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t\tdatasetCodecInfos = info.getInnerDatasetCodecInfos();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// add dataset codecs\n\t\tblockCodecs[0] = blockCodecWithDatasetCodecs(this, blockCodecs[0], datasetCodecInfos);\n\n\t\treturn new DefaultDatasetAccess<>(grid, blockCodecs);\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\tprivate static BlockCodec<?> blockCodecWithDatasetCodecs(final DatasetAttributes attributes, final BlockCodec<?> blockCodec,\n\t\t\tfinal DatasetCodecInfo[] datasetCodecInfos) {\n\n\t\tBlockCodec<?> result = blockCodec;\n\t\tif (datasetCodecInfos != null) {\n\t\t\tfor (final DatasetCodecInfo info : datasetCodecInfos) {\n\t\t\t\tresult = DatasetCodec.concatenate(info.create(attributes), (BlockCodec)result);\n\t\t\t}\n\t\t}\n\t\treturn result;\n\t}\n\n\tprivate static int nestingDepth(BlockCodecInfo info) {\n\n\t\tif (info instanceof ShardCodecInfo) {\n\t\t\treturn 1 + nestingDepth(((ShardCodecInfo)info).getInnerBlockCodecInfo());\n\t\t} else {\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\tprotected BlockCodecInfo defaultBlockCodecInfo() {\n\n\t\treturn new N5BlockCodecInfo();\n\t}\n\n\tpublic long[] getDimensions() {\n\n\t\treturn dimensions;\n\t}\n\n\tpublic int getNumDimensions() {\n\n\t\treturn dimensions.length;\n\t}\n\n\tpublic int[] getChunkSize() {\n\n\t\treturn chunkSize;\n\t}\n\n\tpublic int[] getBlockSize() {\n\n\t\treturn blockSize;\n\t}\n\n\tpublic JsonElement getDefaultValue() {\n\n\t\treturn defaultValue;\n\t}\n\n\tpublic boolean isSharded() {\n\n\t\treturn blockCodecInfo instanceof ShardCodecInfo;\n\t}\n\n\t/**\n\t * Only used for deserialization for N5 backwards 
compatibility.\n\t * {@link Compression} is no longer a special case. Prefer to reference {@link #getDataCodecInfos()}.\n\t * Will return {@link RawCompression} if no compression is otherwise provided, for legacy compatibility.\n\t * <p>\n\t * Deprecated in favor of {@link #getDataCodecInfos()}.\n\t *\n\t * @return compression CodecInfo, if one was present, or else RawCompression\n\t */\n\t@Deprecated\n\tpublic Compression getCompression() {\n\n\t\treturn Arrays.stream(dataCodecInfos)\n\t\t\t\t.filter(it -> it instanceof Compression)\n\t\t\t\t.map(it -> (Compression)it)\n\t\t\t\t.findFirst()\n\t\t\t\t.orElse(new RawCompression());\n\t}\n\n\tpublic DataType getDataType() {\n\n\t\treturn dataType;\n\t}\n\n\t/**\n\t * Get the {@link DatasetAccess} for this dataset.\n\t *\n\t * @return the {@code DatasetAccess} for this dataset\n\t */\n\t@SuppressWarnings(\"unchecked\")\n\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\n\t\treturn (DatasetAccess<T>) access;\n\t}\n\n\t/**\n\t * Returns the {@code NestedGrid} for this dataset, from which block and\n\t * shard sizes are accessible.\n\t *\n\t * @return the NestedGrid\n\t */\n\tpublic NestedGrid getNestedBlockGrid() {\n\n\t\treturn getDatasetAccess().getGrid();\n\t}\n\n\tpublic BlockCodecInfo getBlockCodecInfo() {\n\n\t\treturn blockCodecInfo;\n\t}\n\n\tpublic DataCodecInfo[] getDataCodecInfos() {\n\n\t\treturn dataCodecInfos;\n\t}\n\n\tpublic DatasetCodecInfo[] getDatasetCodecInfos() {\n\n\t\treturn datasetCodecInfos;\n\t}\n\n\tpublic String relativeBlockPath(long... 
position) {\n\n\t\treturn Arrays.stream(position).mapToObj(Long::toString).collect(Collectors.joining(\"/\"));\n\t}\n\n\tpublic HashMap<String, Object> asMap() {\n\n\t\tfinal HashMap<String, Object> map = new HashMap<>();\n\t\tmap.put(DIMENSIONS_KEY, dimensions);\n\t\tmap.put(BLOCK_SIZE_KEY, chunkSize);\n\t\tmap.put(DATA_TYPE_KEY, dataType);\n\t\tmap.put(COMPRESSION_KEY, getCompression());\n\t\treturn map;\n\t}\n\n\tpublic static Builder builder(final long[] dimensions, final DataType dataType) {\n\n\t\treturn new Builder(dimensions, dataType);\n\t}\n\n\tpublic static Builder builder(final DatasetAttributes attributes) {\n\n\t\treturn new Builder(attributes);\n\t}\n\n\tprivate static final int[] DEFAULT_1D_BLOCK_SIZE = new int[]{65536};\n\tprivate static final int[] DEFAULT_2D_BLOCK_SIZE = new int[]{512,512};\n\tprivate static final int[] DEFAULT_3D_BLOCK_SIZE = new int[]{128,128,128};\n\tprivate static final int DEFAULT_ND_DIM_LEN = 64;\n\n\tprotected static int[] defaultBlockSize(final long[] dimensions) {\n\n\t\tfinal int[] blockSize;\n\t\tif (dimensions.length == 1)\n\t\t\tblockSize = DEFAULT_1D_BLOCK_SIZE.clone();\n\t\telse if (dimensions.length == 2)\n\t\t\tblockSize = DEFAULT_2D_BLOCK_SIZE.clone();\n\t\telse if (dimensions.length == 3)\n\t\t\tblockSize = DEFAULT_3D_BLOCK_SIZE.clone();\n\t\telse {\n\t\t\tblockSize = new int[dimensions.length];\n\t\t\tArrays.fill(blockSize, DEFAULT_ND_DIM_LEN);\n\t\t}\n\t\tfor (int i = 0; i < blockSize.length; i++)\n\t\t\tblockSize[i] = (int)Math.min(blockSize[i], dimensions[i]);\n\t\treturn blockSize;\n\t}\n\n\tpublic static class Builder {\n\n\t\tprivate final long[] dimensions;\n\t\tprivate final DataType dataType;\n\n\t\tprivate int[] blockSize;\n\t\tprivate DataCodecInfo[] dataCodecInfos = new DataCodecInfo[0];\n\n\t\tpublic Builder(final long[] dimensions, final DataType dataType) {\n\n\t\t\tthis.dimensions = dimensions.clone();\n\t\t\tthis.dataType = dataType;\n\t\t\tthis.blockSize = 
defaultBlockSize(dimensions);\n\t\t}\n\n\t\tpublic Builder(final DatasetAttributes attributes) {\n\n\t\t\tthis.dimensions = attributes.getDimensions();\n\t\t\tthis.dataType = attributes.getDataType();\n\t\t\tthis.blockSize = attributes.getBlockSize();\n\t\t\tthis.dataCodecInfos = attributes.getDataCodecInfos();\n\t\t}\n\n\t\tpublic Builder blockSize(final int[] blockSize) {\n\n\t\t\tthis.blockSize = blockSize.clone();\n\t\t\treturn this;\n\t\t}\n\n\t\t/**\n\t\t * Sets the compression codec. Has no effect if {@code compression} is\n\t\t * null or {@link RawCompression}.\n\t\t *\n\t\t * @param compression the compression to use\n\t\t * @return this builder\n\t\t */\n\t\tpublic Builder compression(final Compression compression) {\n\n\t\t\tif (compression != null && !(compression instanceof RawCompression))\n\t\t\t\tthis.dataCodecInfos = new DataCodecInfo[]{compression};\n\t\t\treturn this;\n\t\t}\n\n\t\tpublic DatasetAttributes build() {\n\t\t\tfinal int[] resolvedBlockSize = blockSize != null ? 
blockSize : defaultBlockSize(dimensions);\n\t\t\treturn new DatasetAttributes(dimensions, resolvedBlockSize, dataType, new N5BlockCodecInfo(), null, dataCodecInfos);\n\t\t}\n\t}\n\n\tprivate static DatasetAttributesAdapter adapter = null;\n\n\tpublic static DatasetAttributesAdapter getJsonAdapter() {\n\n\t\tif (adapter == null) {\n\t\t\tadapter = new DatasetAttributesAdapter();\n\t\t}\n\t\treturn adapter;\n\t}\n\n\tpublic static class DatasetAttributesAdapter implements JsonSerializer<DatasetAttributes>, JsonDeserializer<DatasetAttributes> {\n\n\t\t@Override public DatasetAttributes deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context) throws JsonParseException {\n\n\t\t\tif (json == null || !json.isJsonObject())\n\t\t\t\treturn null;\n\t\t\tfinal JsonObject obj = json.getAsJsonObject();\n\t\t\tfinal boolean validKeySet = obj.has(DIMENSIONS_KEY)\n\t\t\t\t\t&& obj.has(BLOCK_SIZE_KEY)\n\t\t\t\t\t&& obj.has(DATA_TYPE_KEY)\n\t\t\t\t\t&& (obj.has(CODEC_KEY) || obj.has(COMPRESSION_KEY) || obj.has(compressionTypeKey));\n\n\t\t\tif (!validKeySet)\n\t\t\t\treturn null;\n\n\t\t\tfinal long[] dimensions = context.deserialize(obj.get(DIMENSIONS_KEY), long[].class);\n\t\t\tfinal int[] blockSize = context.deserialize(obj.get(BLOCK_SIZE_KEY), int[].class);\n\n\t\t\tfinal DataType dataType = context.deserialize(obj.get(DATA_TYPE_KEY), DataType.class);\n\n\t\t\tfinal BlockCodecInfo blockCodecInfo;\n\t\t\tfinal DataCodecInfo[] dataCodecs;\n\t\t\tif (obj.has(CODEC_KEY)) {\n\t\t\t\tfinal CodecInfo[] codecs = context.deserialize(obj.get(CODEC_KEY), CodecInfo[].class);\n\t\t\t\tblockCodecInfo = (BlockCodecInfo)codecs[0];\n\t\t\t\tdataCodecs = new DataCodecInfo[codecs.length - 1];\n\t\t\t\tfor (int i = 1; i < codecs.length; i++) {\n\t\t\t\t\tdataCodecs[i - 1] = (DataCodecInfo)codecs[i];\n\t\t\t\t}\n\t\t\t} else if (obj.has(COMPRESSION_KEY)) {\n\t\t\t\tfinal Compression compression = CompressionAdapter.getJsonAdapter().deserialize(obj.get(COMPRESSION_KEY), 
Compression.class, context);\n\t\t\t\tdataCodecs = new DataCodecInfo[]{compression};\n\t\t\t\tblockCodecInfo = new N5BlockCodecInfo();\n\t\t\t} else if (obj.has(compressionTypeKey)) {\n\t\t\t\tfinal Compression compression = getCompressionVersion0(obj.get(compressionTypeKey).getAsString());\n\t\t\t\tdataCodecs = new DataCodecInfo[]{compression};\n\t\t\t\tblockCodecInfo = new N5BlockCodecInfo();\n\t\t\t} else {\n\t\t\t\treturn null;\n\t\t\t}\n\n\t\t\treturn new DatasetAttributes(dimensions, blockSize, dataType, blockCodecInfo, dataCodecs);\n\t\t}\n\n\t\t//FIXME\n\t\t// this implements multi-codec serialization for N5. We probably don't want this now\n\t\t@Override public JsonElement serialize(DatasetAttributes src, Type typeOfSrc, JsonSerializationContext context) {\n\n\t\t\tfinal JsonObject obj = new JsonObject();\n\t\t\tobj.add(DIMENSIONS_KEY, context.serialize(src.dimensions));\n\t\t\tobj.add(BLOCK_SIZE_KEY, context.serialize(src.chunkSize));\n\t\t\tobj.add(DATA_TYPE_KEY, context.serialize(src.dataType));\n\n\t\t\tfinal DataCodecInfo[] codecs = src.dataCodecInfos;\n\t\t\t// length > 1 is actually invalid, but this is checked on construction\n\t\t\tif (codecs.length == 0)\n\t\t\t\tobj.add(COMPRESSION_KEY, context.serialize(new RawCompression()));\n\t\t\telse\n\t\t\t\tobj.add(COMPRESSION_KEY, context.serialize(codecs[0]));\n\n\t\t\treturn obj;\n\t\t}\n\n\t\tprivate static Compression getCompressionVersion0(final String compressionVersion0Name) {\n\n\t\t\tswitch (compressionVersion0Name) {\n\t\t\tcase \"raw\":\n\t\t\t\treturn new RawCompression();\n\t\t\tcase \"gzip\":\n\t\t\t\treturn new GzipCompression();\n\t\t\tcase \"bzip2\":\n\t\t\t\treturn new Bzip2Compression();\n\t\t\tcase \"lz4\":\n\t\t\t\treturn new Lz4Compression();\n\t\t\tcase \"xz\":\n\t\t\t\treturn new XzCompression();\n\t\t\t}\n\t\t\treturn null;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/DoubleArrayDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class DoubleArrayDataBlock extends AbstractDataBlock<double[]> {\n\n\tpublic DoubleArrayDataBlock(final int[] size, final long[] gridPosition, final double[] data) {\n\n\t\tsuper(size, gridPosition, data, a -> a.length);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/FileKeyLockManager.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.IOException;\nimport java.lang.ref.ReferenceQueue;\nimport java.lang.ref.WeakReference;\nimport java.nio.channels.FileLock;\nimport java.nio.file.Path;\nimport java.util.Collections;\nimport java.util.EnumMap;\nimport java.util.Map;\nimport java.util.concurrent.ConcurrentHashMap;\n\nimport static org.janelia.saalfeldlab.n5.LockingPolicy.STRICT;\n\n/**\n * Provides thread-safe and process-safe read/write locking for filesystem paths.\n * Uses thread locks for JVM coordination and file locks for inter-process coordination.\n */\nclass FileKeyLockManager {\n\n\tprivate static final Map<LockingPolicy, FileKeyLockManager> managers = Collections.synchronizedMap(new EnumMap<>(LockingPolicy.class));\n\n\tstatic FileKeyLockManager forPolicy(final LockingPolicy policy) {\n\t\treturn managers.computeIfAbsent(policy, FileKeyLockManager::new);\n\t}\n\n\t/**\n\t * @deprecated use {@link FileKeyLockManager#forPolicy(LockingPolicy)}\n\t */\n\t@Deprecated\n\tstatic final FileKeyLockManager FILE_LOCK_MANAGER = forPolicy(STRICT);\n\n\tprivate final LockingPolicy policy;\n\n\t/**\n\t * Create a new {@link FileKeyLockManager} with the specified locking policy.\n\t * <p>\n\t * The given locking {@link LockingPolicy policy} applies to OS-level locking.\n\t * For both the {@code STRICT} and {@code PERMISSIVE} policies, a {@link\n\t * FileLock} is obtained. If this fails, {@code STRICT} will throw an {@code\n\t * IOException}. {@code PERMISSIVE} will proceed without locking. {@code\n\t * UNSAFE} will not attempt OS-level locking, but will still manage\n\t * mutual exclusion of readers and writers in the same JVM. 
Trying to lock\n\t * the same path with different locking policies will throw an {@code\n\t * IOException}.\n\t *\n\t * @param policy\n\t * \t\tthe locking policy\n\t */\n\tprivate FileKeyLockManager(final LockingPolicy policy) {\n\t\tthis.policy = policy;\n\t}\n\n\tprivate final ConcurrentHashMap<String, WeakValue> locks = new ConcurrentHashMap<>();\n\n\tprivate final ReferenceQueue<KeyLockState> refQueue = new ReferenceQueue<>();\n\n\tprivate static class WeakValue extends WeakReference<KeyLockState> {\n\n\t\tfinal String key;\n\n\t\tWeakValue(\n\t\t\t\tfinal String key,\n\t\t\t\tfinal KeyLockState value,\n\t\t\t\tfinal ReferenceQueue<KeyLockState> queue) {\n\n\t\t\tsuper(value, queue);\n\t\t\tthis.key = key;\n\t\t}\n\t}\n\n\t/**\n\t * Remove entries from the cache whose references have been\n\t * garbage-collected.\n\t */\n\tprivate void cleanUp()\n\t{\n\t\twhile (true) {\n\t\t\tfinal WeakValue ref = (WeakValue) refQueue.poll();\n\t\t\tif (ref == null)\n\t\t\t\tbreak;\n\t\t\tlocks.remove(ref.key, ref);\n\t\t}\n\t}\n\n\tprivate KeyLockState keyLockState(final Path path, final LockingPolicy policy) throws IOException {\n\n\t\tfinal String key = path.toAbsolutePath().toString();\n\n\t\tcleanUp();\n\n\t\tfinal WeakValue existingRef = locks.get(key);\n\t\tKeyLockState state = existingRef == null ? null : existingRef.get();\n\t\tif (state != null) {\n\t\t\treturn state;\n\t\t}\n\n\t\tfinal KeyLockState newState = new KeyLockState(path, policy);\n\t\twhile (state == null) {\n\t\t\tfinal WeakValue ref = locks.compute(key,\n\t\t\t\t\t(k, v) -> (v != null && v.get() != null)\n\t\t\t\t\t\t\t? v\n\t\t\t\t\t\t\t: new WeakValue(k, newState, refQueue));\n\t\t\tstate = ref.get();\n\t\t}\n\t\treturn state;\n\t}\n\n\t/**\n\t * Acquires a read lock for the specified key. Multiple threads can hold\n\t * read locks for the same key simultaneously.\n\t * <p>\n\t * The first reader will acquire a shared file lock. 
Subsequent readers\n\t * only acquire the thread-level lock.\n\t *\n\t * @param path\n\t * \t\tthe key (file path) to lock for reading\n\t *\n\t * @return a {@link LockedChannel} that must be closed when done\n\t *\n\t * @throws IOException\n\t * \t\tif acquiring the file lock fails\n\t */\n\tpublic LockedFileChannel lockForReading(final Path path) throws IOException {\n\n\t\treturn keyLockState(path, policy).acquireRead();\n\t}\n\n\t/**\n\t * Acquires a write lock for the specified key. Only one thread can hold a\n\t * write lock for a key at a time, and no readers can hold locks.\n\t *\n\t * @param path\n\t * \t\tthe file path to lock for writing\n\t *\n\t * @return a {@link LockedChannel} that must be closed when done\n\t *\n\t * @throws IOException\n\t * \t\tif acquiring the file lock fails\n\t */\n\tpublic LockedFileChannel lockForWriting(final Path path) throws IOException {\n\n\t\treturn keyLockState(path, policy).acquireWrite();\n\t}\n\n\t/**\n\t * Returns the number of keys currently being tracked.\n\t *\n\t * @return the number of keys with associated locks\n\t */\n\tint size() {\n\n\t\treturn locks.size();\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/FileSystemKeyValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.UncheckedIOException;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.ByteBuffer;\nimport java.nio.file.DirectoryNotEmptyException;\nimport java.nio.file.FileAlreadyExistsException;\nimport java.nio.file.FileSystemException;\nimport java.nio.file.Files;\nimport java.nio.file.NoSuchFileException;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.nio.file.attribute.FileAttribute;\nimport java.util.Arrays;\nimport java.util.Comparator;\nimport java.util.Iterator;\nimport java.util.stream.Stream;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5NoSuchKeyException;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\n/**\n * Filesystem {@link KeyValueAccess}.\n *\n * @author Stephan Saalfeld\n * @author Igor Pisarev\n * @author Philipp Hanslovsky\n */\npublic class FileSystemKeyValueAccess implements KeyValueAccess {\n\n\tprivate final FileKeyLockManager fileKeyLockManager;\n\n\tpublic FileSystemKeyValueAccess() {\n\t\tfinal LockingPolicy policy = LockingPolicy.fromString(System.getProperty(\"n5.ioPolicy\", \"permissive\"));\n\t\tthis.fileKeyLockManager = FileKeyLockManager.forPolicy(policy);\n\t}\n\n\tprivate LockedFileChannel lockForReading(final Path path) throws N5IOException {\n\n\t\ttry {\n\t\t\treturn fileKeyLockManager.lockForReading(path);\n\t\t} catch (final NoSuchFileException e) {\n\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\"Failed to lock file for reading: \" + path, e);\n\t\t}\n\t}\n\n\tprivate LockedFileChannel lockForWriting(final Path path) throws N5IOException {\n\n\t\ttry {\n\t\t\treturn 
fileKeyLockManager.lockForWriting(path);\n\t\t} catch (final NoSuchFileException e) {\n\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\"Failed to lock file for writing: \" + path, e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic VolatileReadData createReadData(final String normalPath) {\n\t\treturn VolatileReadData.from(new FileLazyRead(Paths.get(normalPath)));\n\t}\n\n\t@Override\n\tpublic void write(final String normalPath, final ReadData data) throws N5IOException {\n\t\tfinal Path path = Paths.get(normalPath);\n\t\ttry (final LockedFileChannel channel = lockForWriting(path)) {\n\t\t\tdata.writeTo(channel.asOutputStream());\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic boolean isDirectory(final String normalPath) {\n\n\t\tfinal Path path = Paths.get(normalPath);\n\t\treturn Files.isDirectory(path);\n\t}\n\n\t@Override\n\tpublic boolean isFile(final String normalPath) {\n\n\t\tfinal Path path = Paths.get(normalPath);\n\t\treturn Files.isRegularFile(path);\n\t}\n\n\t@Override\n\tpublic boolean exists(final String normalPath) {\n\n\t\tfinal Path path = Paths.get(normalPath);\n\t\treturn Files.exists(path);\n\t}\n\n\t@Override\n\tpublic long size(final String normalPath) {\n\n\t\treturn size(Paths.get(normalPath));\n\t}\n\n\tprivate static long size(final Path path) {\n\n\t\ttry {\n\t\t\treturn Files.size(path);\n\t\t} catch (NoSuchFileException e) {\n\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic String[] listDirectories(final String normalPath) throws N5IOException {\n\n\t\tfinal Path path = Paths.get(normalPath);\n\t\ttry (final Stream<Path> pathStream = Files.list(path)) {\n\t\t\treturn pathStream\n\t\t\t\t\t.filter(Files::isDirectory)\n\t\t\t\t\t.map(a -> 
path.relativize(a).toString())\n\t\t\t\t\t.toArray(String[]::new);\n\t\t} catch (NoSuchFileException e) {\n\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\"Failed to list directories\", e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic String[] list(final String normalPath) throws N5IOException {\n\n\t\tfinal Path path = Paths.get(normalPath);\n\t\ttry (final Stream<Path> pathStream = Files.list(path)) {\n\t\t\treturn pathStream\n\t\t\t\t\t.map(a -> path.relativize(a).toString())\n\t\t\t\t\t.toArray(String[]::new);\n\t\t} catch (NoSuchFileException e) {\n\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\"Failed to list files\", e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic String[] components(final String path) {\n\n\t\tfinal Path fsPath = Paths.get(path);\n\t\tfinal Path root = fsPath.getRoot();\n\t\tfinal String separator = fsPath.getFileSystem().getSeparator();\n\t\tfinal String[] components;\n\t\tint o;\n\t\tif (root == null) {\n\t\t\tcomponents = new String[fsPath.getNameCount()];\n\t\t\to = 0;\n\t\t} else {\n\t\t\tcomponents = new String[fsPath.getNameCount() + 1];\n\t\t\tcomponents[0] = root.toString();\n\t\t\to = 1;\n\t\t}\n\n\t\tfor (int i = o; i < components.length; ++i) {\n\t\t\tString name = fsPath.getName(i - o).toString();\n\t\t\t/* Preserve trailing slash on final component if present*/\n\t\t\tif (i == components.length - 1) {\n\t\t\t\tfinal String trailingSeparator = path.endsWith(separator) ? separator : path.endsWith(\"/\") ? 
\"/\" : \"\";\n\t\t\t\tname += trailingSeparator;\n\t\t\t}\n\t\t\tcomponents[i] = name;\n\t\t}\n\t\treturn components;\n\t}\n\n\t@Override\n\tpublic String parent(final String path) {\n\n\t\tfinal Path parent = Paths.get(path).getParent();\n\t\tif (parent == null)\n\t\t\treturn null;\n\t\telse\n\t\t\treturn parent.toString();\n\t}\n\n\t@Override\n\tpublic String relativize(final String path, final String base) {\n\n\t\tfinal Path basePath = Paths.get(base);\n\t\treturn basePath.relativize(Paths.get(path)).toString();\n\t}\n\n\t/**\n\t * Returns a normalized path. This ensures correctness on both Unix and\n\t * Windows; without normalization, {@code pathName} may be treated as a\n\t * UNC path on Windows, and {@code Paths.get(pathName, ...)} fails with an\n\t * {@code InvalidPathException}.\n\t *\n\t * @param path the path\n\t * @return the normalized path, without leading slash\n\t */\n\t@Override\n\tpublic String normalize(final String path) {\n\n\t\treturn Paths.get(path).normalize().toString();\n\t}\n\n\t@Override\n\tpublic URI uri(final String normalPath) throws URISyntaxException {\n\n\t\t// normalize and make absolute the scheme-specific part only\n\t\ttry {\n\t\t\tfinal URI normalUri = URI.create(normalPath);\n\t\t\tif (normalUri.isAbsolute()) return normalUri.normalize();\n\t\t} catch (final IllegalArgumentException e) {\n\t\t\treturn new File(normalPath).toURI().normalize();\n\t\t}\n\t\treturn new File(normalPath).toURI().normalize();\n\n\t}\n\n\t@Override\n\tpublic String compose(final String... components) {\n\n\t\tif (components == null || components.length == 0)\n\t\t\treturn null;\n\t\tif (components.length == 1)\n\t\t\treturn Paths.get(components[0]).toString();\n\t\treturn Paths.get(components[0], Arrays.copyOfRange(components, 1, components.length)).normalize().toString();\n\t}\n\n\t@Override\n\tpublic String compose(URI uri, String... 
components) {\n\n\t\tPath composedPath;\n\t\tif (uri.isAbsolute())\n\t\t\tcomposedPath = Paths.get(uri);\n\t\telse\n\t\t\tcomposedPath = Paths.get(uri.toString());\n\t\tfor (String component : components) {\n\t\t\tif (component == null || component.isEmpty())\n\t\t\t\tcontinue;\n\t\t\tcomposedPath = composedPath.resolve(component);\n\t\t}\n\n\t\treturn composedPath.toAbsolutePath().toString();\n\t}\n\n\t@Override\n\tpublic void createDirectories(final String normalPath) throws N5IOException {\n\n\t\ttry {\n\t\t\tcreateDirectories(Paths.get(normalPath));\n\t\t} catch (NoSuchFileException e) {\n\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\"Failed to create directories\", e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic void delete(final String normalPath) throws N5IOException {\n\n\t\ttry {\n\t\t\tfinal Path path = Paths.get(normalPath);\n\n\t\t\tif (Files.isRegularFile(path))\n\t\t\t\ttry (final LockedFileChannel channel = lockForWriting(path)) {\n\t\t\t\t\tFiles.delete(path);\n\t\t\t\t}\n\t\t\telse {\n\t\t\t\ttry (final Stream<Path> pathStream = Files.walk(path)) {\n\t\t\t\t\tfor (final Iterator<Path> i = pathStream.sorted(Comparator.reverseOrder()).iterator(); i.hasNext();) {\n\t\t\t\t\t\tfinal Path childPath = i.next();\n\t\t\t\t\t\tif (Files.isRegularFile(childPath))\n\t\t\t\t\t\t\ttry (final LockedFileChannel channel = lockForWriting(childPath)) {\n\t\t\t\t\t\t\t\tFiles.delete(childPath);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\ttryDelete(childPath);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (NoSuchFileException ignore) {\n\t\t\t/* It doesn't exist; that's sufficient for us to not complain on a `delete` call */\n\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\"Failed to delete file at \" + normalPath, e);\n\t\t}\n\t}\n\n\tprotected static void tryDelete(final Path path) throws IOException {\n\n\t\ttry 
{\n\t\t\tFiles.delete(path);\n\t\t} catch (final DirectoryNotEmptyException e) {\n\t\t\t/*\n\t\t\t * Even though path is expected to be an empty directory, sometimes\n\t\t\t * deletion fails on network filesystems when lock files are not\n\t\t\t * cleared immediately after the leaves have been removed.\n\t\t\t */\n\t\t\ttry {\n\t\t\t\t/* wait and reattempt */\n\t\t\t\tThread.sleep(100);\n\t\t\t\tFiles.delete(path);\n\t\t\t} catch (final InterruptedException ex) {\n\t\t\t\te.printStackTrace();\n\t\t\t\tThread.currentThread().interrupt();\n\t\t\t}\n\t\t}\n\t}\n\n\t/**\n\t * This is a copy of {@link Files#createDirectories(Path, FileAttribute...)}\n\t * that follows symlinks.\n\t *\n\t * Workaround for https://bugs.openjdk.java.net/browse/JDK-8130464\n\t *\n\t * Creates a directory by creating all nonexistent parent directories first.\n\t * Unlike the {@link Files#createDirectories} method, an exception\n\t * is not thrown if the directory could not be created because it already\n\t * exists.\n\t *\n\t * <p>\n\t * The {@code attrs} parameter is optional {@link FileAttribute\n\t * file-attributes} to set atomically when creating the nonexistent\n\t * directories. Each file attribute is identified by its {@link\n\t * FileAttribute#name name}. 
If more than one attribute of the same name is\n\t * included in the array then all but the last occurrence is ignored.\n\t *\n\t * <p>\n\t * If this method fails, then it may do so after creating some, but not\n\t * all, of the parent directories.\n\t *\n\t * @param dir\n\t *            the directory to create\n\t *\n\t * @param attrs\n\t *            an optional list of file attributes to set atomically when\n\t *            creating the directory\n\t *\n\t * @return the directory\n\t *\n\t * @throws UnsupportedOperationException\n\t *             if the array contains an attribute that cannot be set\n\t *             atomically\n\t *             when creating the directory\n\t * @throws FileAlreadyExistsException\n\t *             if {@code dir} exists but is not a directory <i>(optional\n\t *             specific\n\t *             exception)</i>\n\t * @throws IOException\n\t *             if an I/O error occurs\n\t * @throws SecurityException\n\t *             in the case of the default provider, and a security manager\n\t *             is\n\t *             installed, the {@link SecurityManager#checkWrite(String)\n\t *             checkWrite}\n\t *             method is invoked prior to attempting to create a directory\n\t *             and\n\t *             its {@link SecurityManager#checkRead(String) checkRead} is\n\t *             invoked for each parent directory that is checked. If {@code\n\t *          dir} is not an absolute path then its {@link Path#toAbsolutePath\n\t *             toAbsolutePath} may need to be invoked to get its absolute\n\t *             path.\n\t *             This may invoke the security manager's {@link\n\t *             SecurityManager#checkPropertyAccess(String)\n\t *             checkPropertyAccess}\n\t *             method to check access to the system property\n\t *             {@code user.dir}\n\t */\n\tprotected static Path createDirectories(Path dir, final FileAttribute<?>... 
attrs) throws IOException {\n\n\t\t// attempt to create the directory\n\t\ttry {\n\t\t\tcreateAndCheckIsDirectory(dir, attrs);\n\t\t\treturn dir;\n\t\t} catch (final FileAlreadyExistsException x) {\n\t\t\t// file exists and is not a directory\n\t\t\tthrow x;\n\t\t} catch (final IOException x) {\n\t\t\t// parent may not exist or other reason\n\t\t}\n\t\tSecurityException se = null;\n\t\ttry {\n\t\t\tdir = dir.toAbsolutePath();\n\t\t} catch (final SecurityException x) {\n\t\t\t// don't have permission to get absolute path\n\t\t\tse = x;\n\t\t}\n\t\t// find a descendant that exists\n\t\tPath parent = dir.getParent();\n\t\twhile (parent != null) {\n\t\t\ttry {\n\t\t\t\tparent.getFileSystem().provider().checkAccess(parent);\n\t\t\t\tbreak;\n\t\t\t} catch (final NoSuchFileException x) {\n\t\t\t\t// does not exist\n\t\t\t}\n\t\t\tparent = parent.getParent();\n\t\t}\n\t\tif (parent == null) {\n\t\t\t// unable to find existing parent\n\t\t\tif (se == null) {\n\t\t\t\tthrow new FileSystemException(\n\t\t\t\t\t\tdir.toString(),\n\t\t\t\t\t\tnull,\n\t\t\t\t\t\t\"Unable to determine if root directory exists\");\n\t\t\t} else {\n\t\t\t\tthrow se;\n\t\t\t}\n\t\t}\n\n\t\t// create directories\n\t\tPath child = parent;\n\t\tfor (final Path name : parent.relativize(dir)) {\n\t\t\tchild = child.resolve(name);\n\t\t\tcreateAndCheckIsDirectory(child, attrs);\n\t\t}\n\t\treturn dir;\n\t}\n\n\t/**\n\t * This is a copy of a previous Files#createAndCheckIsDirectory(Path,\n\t * FileAttribute...) method that follows symlinks.\n\t *\n\t * Workaround for https://bugs.openjdk.java.net/browse/JDK-8130464\n\t *\n\t * Used by createDirectories to attempt to create a directory. A no-op if the\n\t * directory already exists.\n\t *\n\t * @param dir directory path\n\t * @param attrs file attributes\n\t * @throws IOException the exception\n\t */\n\tprotected static void createAndCheckIsDirectory(\n\t\t\tfinal Path dir,\n\t\t\tfinal FileAttribute<?>... 
attrs) throws IOException {\n\n\t\ttry {\n\t\t\tFiles.createDirectory(dir, attrs);\n\t\t} catch (final FileAlreadyExistsException x) {\n\t\t\tif (!Files.isDirectory(dir))\n\t\t\t\tthrow x;\n\t\t}\n\t}\n\n\t/**\n\t * Verify that the range {@code [offset, offset+length)} is fully contained in {@code [0, channelSize)}.\n\t *\n\t * @throws IndexOutOfBoundsException\n\t * \t\tif range is not fully contained\n\t */\n\tprivate static void validBounds(final long channelSize, final long offset, final long length) throws IndexOutOfBoundsException {\n\n\t\tif (offset < 0)\n\t\t\tthrow new IndexOutOfBoundsException(\"offset must be >= 0, but was: \" + offset);\n\t\telse if (channelSize > 0 && offset >= channelSize)  // offset == 0 and channelSize == 0 is okay\n\t\t\tthrow new IndexOutOfBoundsException(\"offset (\" + offset + \") must be less than channel size (\" + channelSize + \")\");\n\t\telse if (length >= 0 && offset + length > channelSize)\n\t\t\tthrow new IndexOutOfBoundsException(\"offset + length (\" + (offset + length) + \") must not exceed channel size (\" + channelSize + \")\");\n\t}\n\n\tprivate class FileLazyRead implements LazyRead {\n\n\t\tprivate final Path path;\n\t\tprivate LockedFileChannel lock; // TODO rename\n\n\t\tFileLazyRead(final Path path) {\n\t\t\tthis.path = path;\n\t\t\tlock = lockForReading(path);\n\t\t}\n\n\t\t@Override\n\t\tpublic long size() throws N5IOException {\n\n\t\t\tif (lock == null) {\n\t\t\t\tthrow new N5IOException(\"FileLazyRead is already closed.\");\n\t\t\t}\n\n\t\t\ttry {\n\t\t\t\treturn Files.size(path);\n\t\t\t} catch (NoSuchFileException e) {\n\t\t\t\tthrow new N5NoSuchKeyException(\"No such file\", e);\n\t\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData materialize(final long offset, final long length) {\n\n\t\t\tif (lock == null) {\n\t\t\t\tthrow new N5IOException(\"FileLazyRead is already closed.\");\n\t\t\t}\n\n\t\t\ttry 
{\n\t\t\t\tfinal long channelSize = lock.size();\n\t\t\t\tvalidBounds(channelSize, offset, length);\n\n\t\t\t\tfinal long size = length < 0 ? (channelSize - offset) : length;\n\t\t\t\tif (size > Integer.MAX_VALUE) {\n\t\t\t\t\tthrow new IndexOutOfBoundsException(\"Attempt to materialize too large data\");\n\t\t\t\t}\n\n\t\t\t\tfinal byte[] data = new byte[(int) size];\n\t\t\t\tlock.read(ByteBuffer.wrap(data), offset);\n\t\t\t\treturn ReadData.from(data);\n\t\t\t} catch (IOException | UncheckedIOException e) {\n\t\t\t\tthrow new N5Exception.N5IOException(e);\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() throws IOException {\n\n\t\t\tif (lock != null) {\n\t\t\t\tlock.close();\n\t\t\t\tlock = null;\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/FloatArrayDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class FloatArrayDataBlock extends AbstractDataBlock<float[]> {\n\n\tpublic FloatArrayDataBlock(final int[] size, final long[] gridPosition, final float[] data) {\n\n\t\tsuper(size, gridPosition, data, a -> a.length);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/FsIoPolicy.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.io.UncheckedIOException;\nimport java.nio.ByteBuffer;\nimport java.nio.channels.Channels;\nimport java.nio.channels.FileChannel;\nimport java.nio.file.*;\n\nimport static org.janelia.saalfeldlab.n5.FileKeyLockManager.FILE_LOCK_MANAGER;\n\npublic class FsIoPolicy {\n\n    static final IoPolicy atomicWithFallback = IoPolicy.withFallback(new Atomic(), new Unsafe());\n\n    private static boolean validBounds(long channelSize, long offset, long length) {\n\n        if (offset < 0)\n            throw new N5Exception(\"offset must be >= 0, but was: \" + offset);\n        else if (channelSize > 0 && offset >= channelSize)  // offset == 0 and channelSize == 0 is okay\n            throw new N5Exception(\"offset (\" + offset + \") must be less than channel size (\" + channelSize + \")\");\n        else if (length >= 0 && offset + length > channelSize)\n            throw new N5Exception(\"offset + length (\" + (offset + length) + \") must not exceed channel size (\" + channelSize + \")\");\n\n        return true;\n    }\n\n    /**\n     * Opens a file channel. 
If the channel is opened {@code forWriting},\n     * then this may create the file and the parent directories as needed.\n     *\n     * @throws IOException\n     * \t\tif the channel cannot be opened\n     */\n    static FileChannel openFileChannel(final Path path, final boolean forWriting) throws IOException {\n\n        if (forWriting) {\n            final Path parent = path.getParent();\n            /* if the parent is missing, create it; if it exists but is not a directory, `createDirectories` is expected to throw an IOException */\n            if (parent != null && !parent.toFile().isDirectory()) {\n                Files.createDirectories(parent);\n            }\n            return FileChannel.open(path, StandardOpenOption.READ, StandardOpenOption.WRITE, StandardOpenOption.CREATE);\n        } else {\n            return FileChannel.open(path, StandardOpenOption.READ);\n        }\n    }\n\n\n    /**\n     * This method is necessary to handle the situation where writing is successful, but `close` fails on the file channel.\n     * This has been observed to happen fairly consistently on macOS when writing to a file mounted over SMB.\n     *\n     * @param readData to write to the {@code Path}\n     * @param path to write to\n     * @throws IOException if writing failed.\n     */\n    private static void writeToPathIgnoreCloseException(ReadData readData, Path path) throws IOException {\n\n        FileChannel channel = openFileChannel(path, true);\n        OutputStream os = Channels.newOutputStream(channel);\n\n        try {\n            readData.writeTo(os);\n            os.flush();\n            channel.force(true);\n        } catch (Throwable e) {\n            os.close();\n            channel.close();\n            throw e;\n        }\n\n        /* if we get here, the write succeeded, and the os/channel may not be closed yet */\n        try {\n            os.close();\n            channel.close();\n        } catch (IOException | UncheckedIOException ignore) {\n            /* Ignore; we 
know the data was written already. */\n        }\n    }\n\n    public static class Unsafe implements IoPolicy {\n        @Override\n        public void write(String key, ReadData readData) throws IOException {\n            final Path path = Paths.get(key);\n            writeToPathIgnoreCloseException(readData, path);\n        }\n\n        @Override\n        public VolatileReadData read(final String key) throws IOException {\n            final Path path = Paths.get(key);\n            FileLazyRead fileLazyRead = new FileLazyRead(path, false);\n            return VolatileReadData.from(fileLazyRead);\n        }\n\n        @Override\n        public void delete(final String key) throws IOException {\n            final Path path = Paths.get(key);\n            Files.deleteIfExists(path);\n        }\n    }\n\n    public static class Atomic implements IoPolicy {\n        @Override\n        public void write(String key, ReadData readData) throws IOException {\n            final Path path = Paths.get(key);\n            try (LockedFileChannel channel = FILE_LOCK_MANAGER.lockForWriting(path)) {\n                readData.writeTo(channel.asOutputStream());\n            }\n        }\n\n        @Override\n        public VolatileReadData read(String key) throws IOException {\n            final Path path = Paths.get(key);\n            FileLazyRead fileLazyRead = new FileLazyRead(path, true);\n            return VolatileReadData.from(fileLazyRead);\n        }\n\n        @Override\n        public void delete(final String key) throws IOException {\n            final Path path = Paths.get(key);\n            /* non-regular files (e.g. directories) are deleted without locking */\n            if (!Files.isRegularFile(path)) {\n                Files.delete(path);\n                return;\n            }\n            try (LockedFileChannel ignore = FILE_LOCK_MANAGER.lockForWriting(path)) {\n                Files.delete(path);\n            }\n        }\n    }\n\n    static class FileLazyRead implements LazyRead {\n\n        private static final Closeable NO_OP = () -> { };\n\n        private final Path path;\n        
private Closeable lock;\n\n        FileLazyRead(final Path path) throws IOException {\n            this(path, true);\n        }\n\n        FileLazyRead(final Path path, final boolean requireLock) throws IOException {\n            this.path = path;\n            if (requireLock)\n                lock = FILE_LOCK_MANAGER.lockForReading(path);\n            else\n                lock = NO_OP;\n        }\n\n        @Override\n        public long size() throws N5Exception.N5IOException {\n\n            if (lock == null) {\n                throw new N5Exception.N5IOException(\"FileLazyRead is already closed.\");\n            }\n\n            try {\n                return Files.size(path);\n            } catch (NoSuchFileException e) {\n                throw new N5Exception.N5NoSuchKeyException(\"No such file\", e);\n            } catch (IOException | UncheckedIOException e) {\n                throw new N5Exception.N5IOException(e);\n            }\n        }\n\n        @Override\n        public ReadData materialize(final long offset, final long length) {\n\n            if (lock == null) {\n                throw new N5Exception.N5IOException(\"FileLazyRead is already closed.\");\n            }\n\n            ReadData readData = null;\n            try (final FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {\n\n                channel.position(offset);\n\n                final long channelSize = channel.size();\n                if (!validBounds(channelSize, offset, length)) {\n                    throw new IndexOutOfBoundsException();\n                }\n\n                final long size = length < 0 ? 
(channelSize - offset) : length;\n                if (size > Integer.MAX_VALUE) {\n                    throw new IndexOutOfBoundsException(\"Attempt to materialize too large data\");\n                }\n\n                final byte[] data = new byte[(int) size];\n                final ByteBuffer buf = ByteBuffer.wrap(data);\n                channel.read(buf);\n                readData = ReadData.from(data);\n\n            } catch (final NoSuchFileException e) {\n                throw new N5Exception.N5NoSuchKeyException(\"No such file\", e);\n            } catch (IOException | UncheckedIOException e) {\n                /* Occasionally (frequently for some remotely mounted file systems) this can throw exceptions during\n                 * `channel.close()`, which is called automatically by the try-with-resources block. In that case the\n                 * data has already been read successfully, so we return it and ignore the exception.\n                 */\n                if (readData == null)\n                    throw new N5Exception.N5IOException(e);\n            }\n            return readData;\n        }\n\n        @Override\n        public void close() throws IOException {\n\n            if (lock != null) {\n                lock.close();\n                lock = null;\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/GsonKeyValueN5Reader.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.InputStreamReader;\nimport java.io.UncheckedIOException;\nimport java.util.List;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.janelia.saalfeldlab.n5.shard.PositionValueAccess;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\n\n/**\n * {@link N5Reader} implementation through {@link KeyValueAccess} with JSON\n * attributes parsed with {@link Gson}.\n *\n */\npublic interface GsonKeyValueN5Reader extends GsonN5Reader {\n\n\tKeyValueAccess getKeyValueAccess();\n\n\tdefault boolean groupExists(final String normalPath) {\n\n\t\treturn getKeyValueAccess().isDirectory(absoluteGroupPath(normalPath));\n\t}\n\n\t@Override\n\tdefault boolean exists(final String pathName) {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\treturn groupExists(normalPath) || datasetExists(normalPath);\n\t}\n\n\t@Override\n\tdefault boolean datasetExists(final String pathName) throws N5Exception {\n\n\t\t// for n5, every dataset must be a group\n\t\treturn getDatasetAttributes(pathName) != null;\n\t}\n\n\t/**\n\t * Reads or creates the attributes map of a group or dataset.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @return the attribute\n\t * @throws N5Exception if the attributes cannot be read\n\t */\n\t@Override\n\tdefault JsonElement getAttributes(final String pathName) throws N5Exception {\n\n\t\tfinal String groupPath = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String attributesPath = absoluteAttributesPath(groupPath);\n\n\t\ttry (final VolatileReadData readData = getKeyValueAccess().createReadData(attributesPath);) {\n\t\t\tif (readData == null) {\n\t\t\t\treturn null;\n\t\t\t}\n\t\t\treturn GsonUtils.readAttributes(new InputStreamReader(readData.inputStream()), getGson());\n\t\t} catch (final N5Exception.N5NoSuchKeyException e) {\n\t\t\treturn null;\n\t\t} catch 
(final UncheckedIOException | N5IOException e) {\n\t\t\tthrow new N5IOException(\"Failed to read attributes from dataset \" + pathName, e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> DataBlock<T> readChunk(\n\t\t\tfinal String pathName,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long... gridPosition) throws N5Exception {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(pathName),\n\t\t\t\t\tconvertedDatasetAttributes);\n\t\t\treturn convertedDatasetAttributes.<T> getDatasetAccess().readChunk(posKva, gridPosition);\n\n\t\t} catch (N5Exception.N5NoSuchKeyException e) {\n\t\t\treturn null;\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> List<DataBlock<T>> readChunks(\n\t\t\tfinal String pathName,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal List<long[]> blockPositions) throws N5Exception {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(pathName), convertedDatasetAttributes);\n\t\treturn convertedDatasetAttributes.<T> getDatasetAccess().readChunks(posKva, blockPositions);\n\t}\n\n\t@Override\n\tdefault <T> DataBlock<T> readBlock(\n\t\t\tfinal String pathName,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long... 
gridPosition) throws N5Exception {\n\n\t\tfinal DatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\tfinal int shardLevel = convertedDatasetAttributes.getNestedBlockGrid().numLevels() - 1;\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(pathName),\n\t\t\t\t\tconvertedDatasetAttributes);\n\t\t\treturn convertedDatasetAttributes.<T> getDatasetAccess().readBlock(posKva, gridPosition, shardLevel);\n\n\t\t} catch (N5Exception.N5NoSuchKeyException e) {\n\t\t\treturn null;\n\t\t}\n\t}\n\n\t@Override\n\tdefault String[] list(final String pathName) throws N5Exception {\n\n\t\treturn getKeyValueAccess().listDirectories(absoluteGroupPath(pathName));\n\t}\n\n\t/**\n\t * Constructs the absolute path (in terms of this store) for the group or\n\t * dataset.\n\t *\n\t * @param normalGroupPath\n\t *            normalized group path without leading slash\n\t * @return the absolute path to the group\n\t */\n\tdefault String absoluteGroupPath(final String normalGroupPath) {\n\n\t\treturn getKeyValueAccess().compose(getURI(), normalGroupPath);\n\t}\n\n\t/**\n\t * Constructs the absolute path (in terms of this store) for the attributes\n\t * file of a group or dataset.\n\t *\n\t * @param normalPath\n\t *            normalized group path without leading slash\n\t * @return the absolute path to the attributes\n\t */\n\tdefault String absoluteAttributesPath(final String normalPath) {\n\n\t\treturn getKeyValueAccess().compose(getURI(), normalPath, getAttributesKey());\n\t}\n\n\t@Override\n\tdefault boolean blockExists(\n\t\t\tfinal String pathName,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long... 
gridPosition) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String blockPath = getKeyValueAccess().compose(getURI(), normalPath,\n\t\t\t\tdatasetAttributes.relativeBlockPath(gridPosition));\n\t\treturn getKeyValueAccess().isFile(blockPath);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/GsonKeyValueN5Writer.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.OutputStreamWriter;\nimport java.io.UncheckedIOException;\nimport java.nio.charset.StandardCharsets;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.Map;\n\nimport com.google.gson.JsonSyntaxException;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.shard.PositionValueAccess;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonNull;\nimport com.google.gson.JsonObject;\n\n/**\n * Default implementation of {@link N5Writer} with JSON attributes parsed with\n * {@link Gson}.\n */\npublic interface GsonKeyValueN5Writer extends GsonN5Writer, GsonKeyValueN5Reader {\n\n\t/**\n\t * TODO This overrides the version even if incompatible, check\n\t * if this is the desired behavior or if it is always overridden, e.g. as by\n\t * the caching version. 
If this is true, delete this implementation.\n\t *\n\t * @param path to the group to write the version into\n\t */\n\tdefault void setVersion(final String path) {\n\n\t\tif (!VERSION.equals(getVersion()))\n\t\t\tsetAttribute(\"/\", VERSION_KEY, VERSION.toString());\n\t}\n\n\tstatic String initializeContainer(\n\t\t\tfinal KeyValueAccess keyValueAccess,\n\t\t\tfinal String basePath) throws N5IOException {\n\n\t\tfinal String normBasePath = keyValueAccess.normalize(basePath);\n\t\tkeyValueAccess.createDirectories(normBasePath);\n\t\treturn normBasePath;\n\t}\n\n\t@Override\n\tdefault void createGroup(final String path) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(path);\n\t\tgetKeyValueAccess().createDirectories(absoluteGroupPath(normalPath));\n\t}\n\n\t/**\n\t * Helper method that writes an attributes tree into the store\n\t * <p>\n\t * TODO This method is not part of the public API and should be protected\n\t * in Java versions greater than 8\n\t *\n\t * @param normalGroupPath\n\t *            to write the attributes to\n\t * @param attributes\n\t *            to write\n\t * @throws N5Exception\n\t *             if unable to write the attributes at {@code normalGroupPath}\n\t */\n\tdefault void writeAttributes(\n\t\t\tfinal String normalGroupPath,\n\t\t\tfinal JsonElement attributes) throws N5Exception {\n\n\t\tfinal ReadData newAttributesReadData = ReadData.from(os -> {\n\t\t\tfinal OutputStreamWriter writer = new OutputStreamWriter(os, StandardCharsets.UTF_8);\n\t\t\tGsonUtils.writeAttributes(writer, attributes, getGson());\n\t\t});\n\n\t\ttry {\n\t\t\tgetKeyValueAccess().write(absoluteAttributesPath(normalGroupPath), newAttributesReadData);\n\t\t} catch (UncheckedIOException | N5IOException e) {\n\t\t\tthrow new N5Exception.N5IOException(\"Failed to write attributes into \" + normalGroupPath, e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault void setAttributes(\n\t\t\tfinal String path,\n\t\t\tfinal JsonElement attributes) throws 
N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(path);\n\t\tif (!exists(normalPath))\n\t\t\tthrow new N5IOException(\"\" + normalPath + \" is not a group or dataset.\");\n\n\t\twriteAttributes(normalPath, attributes);\n\t}\n\n\t/**\n\t * Helper method that reads the existing map of attributes, JSON encodes,\n\t * inserts and overrides the provided attributes, and writes them back into\n\t * the attributes store.\n\t *\n\t * TODO This method is not part of the public API and should be protected\n\t * in Java greater than 8\n\t *\n\t * @param normalGroupPath\n\t *            to write the attributes to\n\t * @param attributes\n\t *            to write\n\t * @throws N5Exception\n\t *             if unable to read or write the attributes at\n\t *             {@code normalGroupPath}\n\t */\n\tdefault void writeAttributes(\n\t\t\tfinal String normalGroupPath,\n\t\t\tfinal Map<String, ?> attributes) throws N5Exception {\n\n\t\tif (attributes != null && !attributes.isEmpty()) {\n\t\t\tJsonElement root = getAttributes(normalGroupPath);\n\t\t\troot = root != null && root.isJsonObject()\n\t\t\t\t\t? 
root.getAsJsonObject()\n\t\t\t\t\t: new JsonObject();\n\t\t\troot = GsonUtils.insertAttributes(root, attributes, getGson());\n\t\t\twriteAttributes(normalGroupPath, root);\n\t\t}\n\t}\n\n\t@Override\n\tdefault void setAttributes(\n\t\t\tfinal String path,\n\t\t\tfinal Map<String, ?> attributes) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(path);\n\t\tif (!exists(normalPath))\n\t\t\tthrow new N5IOException(\"\" + normalPath + \" is not a group or dataset.\");\n\n\t\twriteAttributes(normalPath, attributes);\n\t}\n\n\t@Override\n\tdefault boolean removeAttribute(final String groupPath, final String attributePath) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(groupPath);\n\t\tfinal String absoluteNormalPath = getKeyValueAccess().compose(getURI(), normalPath);\n\t\tfinal String normalKey = N5URI.normalizeAttributePath(attributePath);\n\n\t\tif (!getKeyValueAccess().isDirectory(absoluteNormalPath))\n\t\t\treturn false;\n\n\t\tif (attributePath.equals(\"/\")) {\n\t\t\tsetAttributes(normalPath, JsonNull.INSTANCE);\n\t\t\treturn true;\n\t\t}\n\n\t\tfinal JsonElement attributes = getAttributes(normalPath);\n\t\tif (GsonUtils.removeAttribute(attributes, normalKey) != null) {\n\t\t\tsetAttributes(normalPath, attributes);\n\t\t\treturn true;\n\t\t}\n\t\treturn false;\n\t}\n\n\t@Override\n\tdefault <T> T removeAttribute(final String pathName, final String key, final Class<T> cls) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String normalKey = N5URI.normalizeAttributePath(key);\n\n\t\tfinal JsonElement attributes = getAttributes(normalPath);\n\t\tfinal T obj;\n\t\ttry {\n\t\t\tobj = GsonUtils.removeAttribute(attributes, normalKey, cls, getGson());\n\t\t} catch (JsonSyntaxException | NumberFormatException | ClassCastException e) {\n\t\t\tthrow new N5Exception.N5ClassCastException(e);\n\t\t}\n\t\tif (obj != null) {\n\t\t\tsetAttributes(normalPath, 
attributes);\n\t\t}\n\t\treturn obj;\n\t}\n\n\t@Override\n\tdefault boolean removeAttributes(final String pathName, final List<String> attributes) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tboolean removed = false;\n\t\tfor (final String attribute : attributes) {\n\t\t\tfinal String normalKey = N5URI.normalizeAttributePath(attribute);\n\t\t\tremoved |= removeAttribute(normalPath, normalKey);\n\t\t}\n\t\treturn removed;\n\t}\n\n\t@Override\n\tdefault <T> void writeRegion(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long[] min,\n\t\t\tfinal long[] size,\n\t\t\tfinal DataBlockSupplier<T> chunkSupplier,\n\t\t\tfinal boolean writeFully) throws N5Exception {\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(datasetPath), convertedDatasetAttributes);\n\t\t\tconvertedDatasetAttributes.<T>getDatasetAccess().writeRegion(posKva, min, size, chunkSupplier, writeFully);\n\t\t} catch (final UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\n\t\t\t\t\t\"Failed to write blocks into dataset \" + datasetPath, e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> void writeRegion(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long[] min,\n\t\t\tfinal long[] size,\n\t\t\tfinal DataBlockSupplier<T> chunkSupplier,\n\t\t\tfinal boolean writeFully,\n\t\t\tfinal ExecutorService exec) throws N5Exception, InterruptedException, ExecutionException {\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(datasetPath), 
convertedDatasetAttributes);\n\t\t\tconvertedDatasetAttributes.<T>getDatasetAccess().writeRegion(posKva, min, size, chunkSupplier, writeFully, exec);\n\t\t} catch (final UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\n\t\t\t\t\t\"Failed to write blocks into dataset \" + datasetPath, e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> void writeChunks(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal DataBlock<T>... chunks) throws N5Exception {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(datasetPath), convertedDatasetAttributes);\n\t\t\tconvertedDatasetAttributes.<T>getDatasetAccess().writeChunks(posKva, Arrays.asList(chunks));\n\t\t} catch (final UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\n\t\t\t\t\t\"Failed to write chunks into dataset \" + datasetPath, e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> void writeChunk(\n\t\t\tfinal String path,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal DataBlock<T> chunk) throws N5Exception {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(path), convertedDatasetAttributes);\n\t\t\tconvertedDatasetAttributes.<T> getDatasetAccess().writeChunk(posKva, chunk);\n\t\t} catch (final UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\n\t\t\t\t\t\"Failed to write chunk \" + Arrays.toString(chunk.getGridPosition()) + \" into dataset \" + path,\n\t\t\t\t\te);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> void writeBlock(\n\t\t\tfinal String path,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal DataBlock<T> dataBlock) throws N5Exception {\n\n\t\tfinal 
DatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\tfinal int shardLevel = convertedDatasetAttributes.getNestedBlockGrid().numLevels() - 1;\n\t\ttry {\n\t\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(path), convertedDatasetAttributes);\n\t\t\tconvertedDatasetAttributes.<T> getDatasetAccess().writeBlock(posKva, dataBlock, shardLevel);\n\t\t} catch (final UncheckedIOException e) {\n\t\t\tthrow new N5IOException(\n\t\t\t\t\t\"Failed to write block \" + Arrays.toString(dataBlock.getGridPosition()) + \" into dataset \" + path,\n\t\t\t\t\te);\n\t\t}\n\t}\n\n\t@Override\n\tdefault boolean remove(final String path) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(path);\n\t\tfinal String groupPath = absoluteGroupPath(normalPath);\n\t\tif (getKeyValueAccess().isDirectory(groupPath))\n\t\t\tgetKeyValueAccess().delete(groupPath);\n\n\t\t/* an IOException should have occurred if anything had failed midway */\n\t\treturn true;\n\t}\n\n\t@Override\n\tdefault boolean deleteBlock(\n\t\t\tfinal String path,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long... gridPosition) throws N5Exception {\n\n\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(path), datasetAttributes);\n\t\treturn posKva.remove(gridPosition);\n\t}\n\n\t@Override\n\tdefault boolean deleteChunk(\n\t\t\tfinal String path,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long... 
gridPosition) throws N5Exception {\n\n\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(path), datasetAttributes);\n\t\treturn datasetAttributes.getDatasetAccess().deleteChunk(posKva, gridPosition);\n\t}\n\n\t@Override\n\tdefault boolean deleteChunks(\n\t\t\tfinal String path,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal List<long[]> gridPositions) throws N5Exception {\n\n\t\tfinal PositionValueAccess posKva = PositionValueAccess.fromKva(getKeyValueAccess(), getURI(), N5URI.normalizeGroupPath(path), datasetAttributes);\n\t\treturn datasetAttributes.getDatasetAccess().deleteChunks(posKva, gridPositions);\n\t}\n}"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/GsonN5Reader.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.lang.reflect.Type;\nimport java.util.Map;\n\nimport com.google.gson.JsonDeserializationContext;\nimport com.google.gson.JsonParseException;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonSyntaxException;\n\n/**\n * {@link N5Reader} with JSON attributes parsed with {@link Gson}.\n *\n */\npublic interface GsonN5Reader extends N5Reader {\n\n\tGson getGson();\n\n\t/**\n\t * Get the key for the {@link KeyValueAccess}, that is used for storing attributes.\n\t * The N5 format uses \"attributes.json\".\n\t *\n\t * @return the attributes key\n\t */\n\tString getAttributesKey();\n\n\t@Override\n\tdefault Map<String, Class<?>> listAttributes(final String pathName) throws N5Exception {\n\n\t\treturn GsonUtils.listAttributes(getAttributes(pathName));\n\t}\n\n\t@Override\n\tdefault DatasetAttributes getDatasetAttributes(final String pathName) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(pathName);\n\t\tfinal JsonElement attributes = getAttributes(normalPath);\n\t\treturn createDatasetAttributes(attributes);\n\t}\n\n\tdefault DatasetAttributes createDatasetAttributes(final JsonElement attributes) {\n\n\t\tfinal JsonDeserializationContext context = new JsonDeserializationContext() {\n\n\t\t\t@Override public <T> T deserialize(JsonElement json, Type typeOfT) throws JsonParseException {\n\n\t\t\t\treturn getGson().fromJson(json, typeOfT);\n\t\t\t}\n\t\t};\n\n\t\treturn DatasetAttributes.getJsonAdapter().deserialize(attributes, DatasetAttributes.class, context);\n\t}\n\n\t@Override\n\tdefault <T> T getAttribute(final String pathName, final String key, final Class<T> clazz) throws N5Exception {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String normalizedAttributePath = N5URI.normalizeAttributePath(key);\n\n\t\tfinal JsonElement attributes = getAttributes(normalPathName);\n\t\ttry {\n\t\t\treturn 
GsonUtils.readAttribute(attributes, normalizedAttributePath, clazz, getGson());\n\t\t} catch (JsonSyntaxException | NumberFormatException | ClassCastException e) {\n\t\t\tthrow new N5Exception.N5ClassCastException(e);\n\t\t}\n\t}\n\n\t@Override\n\tdefault <T> T getAttribute(final String pathName, final String key, final Type type) throws N5Exception {\n\n\t\tfinal String normalPathName = N5URI.normalizeGroupPath(pathName);\n\t\tfinal String normalizedAttributePath = N5URI.normalizeAttributePath(key);\n\t\tfinal JsonElement attributes = getAttributes(normalPathName);\n\t\ttry {\n\t\t\treturn GsonUtils.readAttribute(attributes, normalizedAttributePath, type, getGson());\n\t\t} catch (JsonSyntaxException | NumberFormatException | ClassCastException e) {\n\t\t\tthrow new N5Exception.N5ClassCastException(e);\n\t\t}\n\t}\n\n\t/**\n\t * Reads the attributes of a group or dataset.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @return the attributes identified by pathName\n\t * @throws N5Exception if the attribute cannot be returned\n\t */\n\tJsonElement getAttributes(final String pathName) throws N5Exception;\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/GsonN5Writer.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\n\n/**\n * {@link N5Writer} with JSON attributes parsed with {@link Gson}.\n *\n */\npublic interface GsonN5Writer extends GsonN5Reader, N5Writer {\n\n\t/**\n\t * Set the attributes of a group. The result of this method is equivalent\n\t * to {@link N5Writer#setAttribute(String, String, Object) N5Writer#setAttribute(groupPath, \"/\", attributes)}.\n\t *\n\t * @param groupPath\n\t *            to write the attributes to\n\t * @param attributes\n\t *            to write\n\t * @throws N5Exception if the attributes cannot be set\n\t */\n\tvoid setAttributes(\n\t\t\tfinal String groupPath,\n\t\t\tfinal JsonElement attributes) throws N5Exception;\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/GsonUtils.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.IOException;\nimport java.io.Reader;\nimport java.io.Writer;\nimport java.lang.reflect.Array;\nimport java.lang.reflect.Type;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.regex.Matcher;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonArray;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonObject;\nimport com.google.gson.JsonPrimitive;\nimport com.google.gson.JsonSyntaxException;\nimport com.google.gson.reflect.TypeToken;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5JsonParseException;\n\n/**\n * Utility class for working with  JSON.\n *\n * @author Stephan Saalfeld\n */\npublic interface GsonUtils {\n\n\t/**\n\t * Reads the attributes json from a given {@link Reader}.\n\t *\n\t * @param reader\n\t *            the reader\n\t * @param gson\n\t * \t\t\t  to parse Json from the {@code reader}\n\t * @return the root {@link JsonObject} of the attributes\n\t */\n\tstatic JsonElement readAttributes(final Reader reader, final Gson gson) {\n\n\t\treturn gson.fromJson(reader, JsonElement.class);\n\t}\n\n\tstatic <T> T readAttribute(\n\t\t\tfinal JsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal Class<T> cls,\n\t\t\tfinal Gson gson) throws JsonSyntaxException, NumberFormatException, ClassCastException {\n\n\t\treturn readAttribute(root, normalizedAttributePath, TypeToken.get(cls).getType(), gson);\n\t}\n\n\tstatic <T> T readAttribute(\n\t\t\tfinal JsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal Type type,\n\t\t\tfinal Gson gson) throws JsonSyntaxException, NumberFormatException, ClassCastException {\n\n\t\tfinal JsonElement attribute = getAttribute(root, normalizedAttributePath);\n\t\treturn parseAttributeElement(attribute, gson, type);\n\t}\n\n\t/**\n\t * Deserialize the {@code attribute} as {@link Type type} {@code T}.\n\t *\n\t * @param attribute\n\t *            to deserialize as {@link Type 
type}\n\t * @param gson\n\t *            used to deserialize {@code attribute}\n\t * @param type\n\t *            to deserialize {@code attribute} as\n\t * @param <T>\n\t *            return type represented by {@link Type type}\n\t * @return the deserialized attribute object, or {@code null} if\n\t *         {@code attribute} cannot deserialize to {@code T}\n\t */\n\t@SuppressWarnings(\"unchecked\")\n\tstatic <T> T parseAttributeElement(final JsonElement attribute, final Gson gson, final Type type) throws JsonSyntaxException, NumberFormatException, ClassCastException {\n\n\t\tif (attribute == null)\n\t\t\treturn null;\n\n\t\tfinal Class<?> clazz = (type instanceof Class<?>) ? ((Class<?>)type) : null;\n\t\tif (clazz != null && clazz.isAssignableFrom(HashMap.class)) {\n\t\t\tfinal Type mapType = new TypeToken<Map<String, Object>>() {\n\n\t\t\t}.getType();\n\t\t\tfinal Map<String, Object> retMap = gson.fromJson(attribute, mapType);\n\t\t\t// noinspection unchecked\n\t\t\treturn (T)retMap;\n\t\t}\n\t\tif (attribute instanceof JsonArray) {\n\t\t\tfinal JsonArray array = attribute.getAsJsonArray();\n\t\t\ttry {\n\t\t\t\tfinal T retArray = GsonUtils.getJsonAsArray(gson, array, type);\n\t\t\t\tif (retArray != null)\n\t\t\t\t\treturn retArray;\n\t\t\t} catch (final JsonSyntaxException e) {\n\t\t\t\tif (type == String.class)\n\t\t\t\t\t//noinspection unchecked\n\t\t\t\t\treturn (T)gson.toJson(attribute);\n\t\t\t}\n\t\t}\n\t\ttry {\n\t\t\tfinal T parsedResult = gson.fromJson(attribute, type);\n\t\t\tif (parsedResult == null)\n\t\t\t\tthrow new ClassCastException(\"Cannot parse json as type \" + type.getTypeName());\n\t\t\treturn parsedResult;\n\t\t} catch (final JsonSyntaxException e) {\n\t\t\tif (type == String.class)\n\t\t\t\t//noinspection unchecked\n\t\t\t\treturn (T)gson.toJson(attribute);\n\t\t\tthrow e;\n\t\t}\n\t}\n\n\t/**\n\t * Return the attribute at {@code normalizedAttributePath} as a\n\t * {@link JsonElement}. 
Does not attempt to parse the attribute.\n\t *\n\t * @param root\n\t * \t\t\tcontaining an attribute at normalizedAttributePath\n\t * @param normalizedAttributePath\n\t *            to the attribute\n\t * @return the attribute as a {@link JsonElement}.\n\t */\n\tstatic JsonElement getAttribute(JsonElement root, final String normalizedAttributePath) {\n\n\t\tfinal String[] pathParts = normalizedAttributePath.split(\"(?<!\\\\\\\\)/\");\n\t\tfor (int i = 0; i < pathParts.length; i++) {\n\t\t\tfinal String pathPart = pathParts[i];\n\t\t\tif (pathPart.isEmpty())\n\t\t\t\tcontinue;\n\t\t\tfinal String pathPartWithoutEscapeCharacters = pathPart\n\t\t\t\t\t.replaceAll(\"\\\\\\\\/\", \"/\")\n\t\t\t\t\t.replaceAll(\"\\\\\\\\\\\\[\", \"[\");\n\t\t\tif (root instanceof JsonObject && root.getAsJsonObject().get(pathPartWithoutEscapeCharacters) != null) {\n\t\t\t\tfinal JsonObject jsonObject = root.getAsJsonObject();\n\t\t\t\troot = jsonObject.get(pathPartWithoutEscapeCharacters);\n\t\t\t} else {\n\t\t\t\tfinal Matcher matcher = N5URI.ARRAY_INDEX.matcher(pathPart);\n\t\t\t\tif (root != null && root.isJsonArray() && matcher.matches()) {\n\t\t\t\t\tfinal int index = Integer.parseInt(matcher.group().replace(\"[\", \"\").replace(\"]\", \"\"));\n\t\t\t\t\tfinal JsonArray jsonArray = root.getAsJsonArray();\n\t\t\t\t\tif (index >= jsonArray.size()) {\n\t\t\t\t\t\treturn null;\n\t\t\t\t\t}\n\t\t\t\t\troot = jsonArray.get(index);\n\t\t\t\t} else {\n\t\t\t\t\treturn null;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn root;\n\t}\n\n\t/**\n\t * Best effort implementation of {@link N5Reader#listAttributes(String)}\n\t * with limited type resolution. 
Possible return types are\n\t * <ul>\n\t * <li>null</li>\n\t * <li>boolean</li>\n\t * <li>double</li>\n\t * <li>String</li>\n\t * <li>Object</li>\n\t * <li>boolean[]</li>\n\t * <li>double[]</li>\n\t * <li>String[]</li>\n\t * <li>Object[]</li>\n\t * </ul>\n\t *\n\t * @param root\n\t *            the json element\n\t * @return the attribute map\n\t */\n\tstatic Map<String, Class<?>> listAttributes(final JsonElement root) throws N5JsonParseException {\n\n\t\tif (root == null) {\n\t\t\treturn new HashMap<>();\n\t\t}\n\t\tif (!root.isJsonObject()) {\n\t\t\tthrow new N5JsonParseException(\"JsonElement found, but was not JsonObject\");\n\t\t}\n\n\t\tfinal HashMap<String, Class<?>> attributes = new HashMap<>();\n\t\troot.getAsJsonObject().entrySet().forEach(entry -> {\n\t\t\tfinal Class<?> clazz;\n\t\t\tfinal String key = entry.getKey();\n\t\t\tfinal JsonElement jsonElement = entry.getValue();\n\t\t\tif (jsonElement.isJsonNull())\n\t\t\t\tclazz = null;\n\t\t\telse if (jsonElement.isJsonPrimitive())\n\t\t\t\tclazz = classForJsonPrimitive((JsonPrimitive)jsonElement);\n\t\t\telse if (jsonElement.isJsonArray()) {\n\t\t\t\tfinal JsonArray jsonArray = (JsonArray)jsonElement;\n\t\t\t\tClass<?> arrayElementClass = Object.class;\n\t\t\t\tif (jsonArray.size() > 0) {\n\t\t\t\t\tfinal JsonElement firstElement = jsonArray.get(0);\n\t\t\t\t\tif (firstElement.isJsonPrimitive()) {\n\t\t\t\t\t\tarrayElementClass = classForJsonPrimitive(firstElement.getAsJsonPrimitive());\n\t\t\t\t\t\tfor (int i = 1; i < jsonArray.size() && arrayElementClass != Object.class; ++i) {\n\t\t\t\t\t\t\tfinal JsonElement element = jsonArray.get(i);\n\t\t\t\t\t\t\tif (element.isJsonPrimitive()) {\n\t\t\t\t\t\t\t\tfinal Class<?> nextArrayElementClass = classForJsonPrimitive(\n\t\t\t\t\t\t\t\t\t\telement.getAsJsonPrimitive());\n\t\t\t\t\t\t\t\tif (nextArrayElementClass != arrayElementClass)\n\t\t\t\t\t\t\t\t\tif (nextArrayElementClass == double.class && arrayElementClass == 
long.class)\n\t\t\t\t\t\t\t\t\t\tarrayElementClass = double.class;\n\t\t\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\t\t\tarrayElementClass = Object.class;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tarrayElementClass = Object.class;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tclazz = Array.newInstance(arrayElementClass, 0).getClass();\n\t\t\t\t} else\n\t\t\t\t\tclazz = Object[].class;\n\t\t\t} else\n\t\t\t\tclazz = Object.class;\n\t\t\tattributes.put(key, clazz);\n\t\t});\n\t\treturn attributes;\n\t}\n\n\tstatic <T> T getJsonAsArray(final Gson gson, final JsonArray array, final Class<T> cls) {\n\n\t\treturn getJsonAsArray(gson, array, TypeToken.get(cls).getType());\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\tstatic <T> T getJsonAsArray(final Gson gson, final JsonArray array, final Type type) {\n\n\t\tfinal Class<?> clazz = (type instanceof Class<?>) ? ((Class<?>)type) : null;\n\n\t\tif (type == boolean[].class) {\n\t\t\tfinal boolean[] retArray = new boolean[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal Boolean value = gson.fromJson(array.get(i), boolean.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == double[].class) {\n\t\t\tfinal double[] retArray = new double[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal double value = gson.fromJson(array.get(i), double.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == float[].class) {\n\t\t\tfinal float[] retArray = new float[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal float value = gson.fromJson(array.get(i), float.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == long[].class) {\n\t\t\tfinal long[] retArray = new long[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal long value = 
gson.fromJson(array.get(i), long.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == short[].class) {\n\t\t\tfinal short[] retArray = new short[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal short value = gson.fromJson(array.get(i), short.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == int[].class) {\n\t\t\tfinal int[] retArray = new int[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal int value = gson.fromJson(array.get(i), int.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == byte[].class) {\n\t\t\tfinal byte[] retArray = new byte[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal byte value = gson.fromJson(array.get(i), byte.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (type == char[].class) {\n\t\t\tfinal char[] retArray = new char[array.size()];\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tfinal char value = gson.fromJson(array.get(i), char.class);\n\t\t\t\tretArray[i] = value;\n\t\t\t}\n\t\t\treturn (T)retArray;\n\t\t} else if (clazz != null && clazz.isArray()) {\n\t\t\tfinal Class<?> componentCls = clazz.getComponentType();\n\t\t\tfinal Object[] clsArray = (Object[])Array.newInstance(componentCls, array.size());\n\t\t\tfor (int i = 0; i < array.size(); i++) {\n\t\t\t\tclsArray[i] = gson.fromJson(array.get(i), componentCls);\n\t\t\t}\n\t\t\t// noinspection unchecked\n\t\t\treturn (T)clsArray;\n\t\t}\n\t\treturn null;\n\t}\n\n\t/**\n\t * Return a reasonable class for a {@link JsonPrimitive}. 
Possible return\n\t * types are\n\t * <ul>\n\t * <li>boolean</li>\n\t * <li>long</li>\n\t * <li>double</li>\n\t * <li>String</li>\n\t * <li>Object</li>\n\t * </ul>\n\t *\n\t * @param jsonPrimitive\n\t *            the json primitive\n\t * @return the class\n\t */\n\tstatic Class<?> classForJsonPrimitive(final JsonPrimitive jsonPrimitive) {\n\n\t\tif (jsonPrimitive.isBoolean())\n\t\t\treturn boolean.class;\n\t\telse if (jsonPrimitive.isNumber()) {\n\t\t\tfinal Number number = jsonPrimitive.getAsNumber();\n\t\t\tif (number.longValue() == number.doubleValue())\n\t\t\t\treturn long.class;\n\t\t\telse\n\t\t\t\treturn double.class;\n\t\t} else if (jsonPrimitive.isString())\n\t\t\treturn String.class;\n\t\telse\n\t\t\treturn Object.class;\n\t}\n\n\t/**\n\t * If there is an attribute in {@code root} such that it can be parsed and\n\t * deserialized as {@code T},\n\t * then remove it from {@code root}, write {@code root} to the\n\t * {@code writer}, and return the removed attribute.\n\t * <p>\n\t * If there is an attribute at the location specified by\n\t * {@code normalizedAttributePath} but it cannot be deserialized to\n\t * {@code T}, then it is not removed.\n\t * <p>\n\t * If nothing is removed, then {@code root} is not written to the\n\t * {@code writer}.\n\t *\n\t * @param <T> the type of the attribute to be removed\n\t * @param writer to write the modified JsonElement, which no longer contains the removed attribute\n\t * @param root element to remove the attribute from\n\t * @param normalizedAttributePath\n\t *            path to the attribute to remove\n\t * @param cls the type of the attribute to be removed\n\t * @param gson used to deserialize the JsonElement as {@code T}\n\t * @return the removed attribute, or null if nothing removed\n\t * @throws IOException\n\t *       
      if unable to write to the {@code writer}\n\t */\n\tstatic <T> T removeAttribute(\n\t\t\tfinal Writer writer,\n\t\t\tfinal JsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal Class<T> cls,\n\t\t\tfinal Gson gson) throws JsonSyntaxException, NumberFormatException, ClassCastException, IOException {\n\n\t\tfinal T removed = removeAttribute(root, normalizedAttributePath, cls, gson);\n\t\tif (removed != null) {\n\t\t\twriteAttributes(writer, root, gson);\n\t\t}\n\t\treturn removed;\n\t}\n\n\t/**\n\t * If there is an attribute in {@code root} at location\n\t * {@code normalizedAttributePath} then remove it from {@code root}.\n\t *\n\t * @param writer to write the modified JsonElement, which no longer contains the removed attribute\n\t * @param root to remove the attribute from\n\t * @param normalizedAttributePath\n\t *            path to the attribute to remove\n\t * @param gson to write the attribute to the {@code writer}\n\t * @return true if the attribute was removed, false otherwise\n\t * @throws IOException if an error occurs while writing to the {@code writer}\n\t */\n\tstatic boolean removeAttribute(\n\t\t\tfinal Writer writer,\n\t\t\tfinal JsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal Gson gson) throws JsonSyntaxException, NumberFormatException, ClassCastException, IOException {\n\n\t\tfinal JsonElement removed = removeAttribute(root, normalizedAttributePath, JsonElement.class, gson);\n\t\tif (removed != null) {\n\t\t\twriteAttributes(writer, root, gson);\n\t\t\treturn true;\n\t\t}\n\t\treturn false;\n\t}\n\n\t/**\n\t * If there is an attribute in {@code root} such that it can be parsed and\n\t * deserialized as {@code T},\n\t * then remove it from {@code root} and return the removed attribute.\n\t * <p>\n\t * If there is an attribute at the location specified by\n\t * {@code 
normalizedAttributePath} but it cannot be deserialized to\n\t * {@code T}, then it is not removed.\n\t *\n\t * @param <T> the type of the attribute to be removed\n\t * @param root element to remove the attribute from\n\t * @param normalizedAttributePath\n\t *            path to the attribute to remove\n\t * @param cls the type of the attribute to be removed\n\t * @param gson used to deserialize the JsonElement as {@code T}\n\t * @return the removed attribute, or null if nothing removed\n\t * @throws JsonSyntaxException if the attribute is not valid json\n\t * @throws NumberFormatException if {@code T} is a {@link Number} but the attribute cannot be parsed as {@code T}\n\t * @throws ClassCastException if an attribute exists at this path, but is not of type {@code T}\n\t */\n\tstatic <T> T removeAttribute(\n\t\t\tfinal JsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal Class<T> cls,\n\t\t\tfinal Gson gson) throws JsonSyntaxException, NumberFormatException, ClassCastException {\n\n\t\tfinal T attribute = GsonUtils.readAttribute(root, normalizedAttributePath, cls, gson);\n\t\tif (attribute != null) {\n\t\t\tremoveAttribute(root, normalizedAttributePath);\n\t\t}\n\t\treturn attribute;\n\t}\n\n\t/**\n\t * Remove and return the attribute at {@code normalizedAttributePath} as a\n\t * {@link JsonElement}. Does not attempt to parse the attribute.\n\t *\n\t * @param root\n\t *            the root JsonElement\n\t * @param normalizedAttributePath\n\t *            to the attribute\n\t * @return the attribute as a {@link JsonElement}.\n\t */\n\tstatic JsonElement removeAttribute(JsonElement root, final String normalizedAttributePath) {\n\n\t\tfinal String[] pathParts = normalizedAttributePath.split(\"(?<!\\\\\\\\)/\");\n\t\tfor (int i = 0; i < pathParts.length; i++) {\n\t\t\tfinal String pathPart = pathParts[i];\n\t\t\tif (pathPart.isEmpty())\n\t\t\t\tcontinue;\n\t\t\tfinal String pathPartWithoutEscapeCharacters = pathPart\n\t\t\t\t\t.replaceAll(\"\\\\\\\\/\", \"/\")\n\t\t\t\t\t.replaceAll(\"\\\\\\\\\\\\[\", \"[\");\n\t\t\tif (root instanceof JsonObject && root.getAsJsonObject().get(pathPartWithoutEscapeCharacters) != null) {\n\t\t\t\tfinal JsonObject jsonObject = root.getAsJsonObject();\n\t\t\t\troot = jsonObject.get(pathPartWithoutEscapeCharacters);\n\t\t\t\tif (i == pathParts.length - 1) {\n\t\t\t\t\tjsonObject.remove(pathPartWithoutEscapeCharacters);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfinal Matcher matcher = N5URI.ARRAY_INDEX.matcher(pathPart);\n\t\t\t\tif (root != null && root.isJsonArray() && matcher.matches()) {\n\t\t\t\t\tfinal int index = Integer.parseInt(matcher.group().replace(\"[\", \"\").replace(\"]\", \"\"));\n\t\t\t\t\tfinal JsonArray jsonArray = root.getAsJsonArray();\n\t\t\t\t\tif (index >= jsonArray.size()) {\n\t\t\t\t\t\treturn null;\n\t\t\t\t\t}\n\t\t\t\t\troot = jsonArray.get(index);\n\t\t\t\t\tif (i == pathParts.length - 1) {\n\t\t\t\t\t\tjsonArray.remove(index);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\treturn null;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn root;\n\t}\n\n\t/**\n\t * Inserts {@code attribute} into {@code root} at location\n\t * {@code normalizedAttributePath} and writes the resulting {@code root}.\n\t * <p>\n\t * If {@code root} is not a {@link JsonObject}, then it is overwritten with\n\t * 
an object containing {@code \"normalizedAttributePath\": attribute }\n\t *\n\t * @param writer\n\t *            the writer\n\t * @param root\n\t *            the root json element\n\t * @param normalizedAttributePath\n\t *            the attribute path\n\t * @param attribute\n\t *            the attribute\n\t * @param gson\n\t *            the gson\n\t * @param <T>\n\t *            the attribute type\n\t * @throws IOException\n\t *             the exception\n\t */\n\tstatic <T> void writeAttribute(\n\t\t\tfinal Writer writer,\n\t\t\tJsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal T attribute,\n\t\t\tfinal Gson gson) throws IOException {\n\n\t\troot = insertAttribute(root, normalizedAttributePath, attribute, gson);\n\t\twriteAttributes(writer, root, gson);\n\t}\n\n\t/**\n\t * Writes the attributes JsonElement to a given {@link Writer}.\n\t * This will overwrite any existing attributes.\n\t *\n\t * @param writer\n\t *            the writer\n\t * @param root\n\t *            the root json element\n\t * @param gson\n\t *            the gson\n\t * @param <T>\n\t *            the attribute type\n\t * @throws IOException\n\t *             the exception\n\t */\n\tstatic <T> void writeAttributes(\n\t\t\tfinal Writer writer,\n\t\t\tfinal JsonElement root,\n\t\t\tfinal Gson gson) throws IOException {\n\n\t\tgson.toJson(root, writer);\n\t\twriter.flush();\n\t}\n\n\tstatic JsonElement insertAttributes(JsonElement root, final Map<String, ?> attributes, final Gson gson) {\n\n\t\tfor (final Map.Entry<String, ?> attribute : attributes.entrySet()) {\n\t\t\troot = insertAttribute(root, N5URI.normalizeAttributePath(attribute.getKey()), attribute.getValue(), gson);\n\t\t}\n\t\treturn root;\n\t}\n\n\tstatic <T> JsonElement insertAttribute(\n\t\t\tJsonElement root,\n\t\t\tfinal String normalizedAttributePath,\n\t\t\tfinal T attribute,\n\t\t\tfinal Gson gson) {\n\n\t\tLinkedAttributePathToken<?> pathToken = 
N5URI.getAttributePathTokens(normalizedAttributePath);\n\n\t\t/* No path to traverse or build; just return the value */\n\t\tif (pathToken == null)\n\t\t\treturn gson.toJsonTree(attribute);\n\n\t\tJsonElement json = root;\n\t\twhile (pathToken != null) {\n\n\t\t\tfinal JsonElement parent = pathToken.setAndCreateParentElement(json);\n\n\t\t\t/*\n\t\t\t * We may need to create or override the existing root if it is\n\t\t\t * non-existent or incompatible.\n\t\t\t */\n\t\t\tfinal boolean rootOverriden = json == root && parent != json;\n\t\t\tif (root == null || rootOverriden) {\n\t\t\t\troot = parent;\n\t\t\t}\n\n\t\t\tjson = pathToken.writeChild(gson, attribute);\n\n\t\t\tpathToken = pathToken.next();\n\t\t}\n\t\treturn root;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/GzipCompression.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.zip.Deflater;\nimport java.util.zip.DeflaterOutputStream;\nimport java.util.zip.InflaterInputStream;\nimport org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;\nimport org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;\nimport org.apache.commons.compress.compressors.gzip.GzipParameters;\nimport org.janelia.saalfeldlab.n5.Compression.CompressionType;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@CompressionType(\"gzip\")\n@NameConfig.Name(\"gzip\")\npublic class GzipCompression implements Compression {\n\n\tprivate static final long serialVersionUID = 8630847239813334263L;\n\n\t/**\n\t * Explicit equivalent of {@link java.util.zip.Deflater#DEFAULT_COMPRESSION}: zlib defines\n\t * level 6 as \"a default compromise between speed and compression.\" An explicit value is used\n\t * instead of {@code DEFAULT_COMPRESSION} (-1) because -1 is not a valid level for Zarr codecs.\n\t *\n\t * @see <a href=\"https://www.zlib.net/manual.html\">zlib Manual</a>\n\t */\n\tprivate static final int N5_DEFAULT_GZIP_LEVEL = 6;\n\n\t@CompressionParameter\n\t@NameConfig.Parameter\n\tprivate final int level;\n\n\t/**\n\t * This is not a NameConfig.Parameter because this parameter must not be\n\t * serialized for zarr\n\t */\n\t@CompressionParameter\n\tprivate final boolean useZlib;\n\n\tprivate final transient GzipParameters parameters = new GzipParameters();\n\n\tpublic GzipCompression() {\n\n\t\tthis(N5_DEFAULT_GZIP_LEVEL);\n\t}\n\n\tpublic GzipCompression(final int level) {\n\n\t\tthis(level, false);\n\t}\n\n\tpublic GzipCompression(final int level, final boolean useZlib) {\n\n\t\tthis.level = level;\n\t\tthis.useZlib = useZlib;\n\t}\n\n\t@Override\n\tpublic boolean equals(final Object 
other) {\n\n\t\tif (other == null || other.getClass() != GzipCompression.class)\n\t\t\treturn false;\n\t\telse {\n\t\t\tfinal GzipCompression gz = ((GzipCompression)other);\n\t\t\treturn useZlib == gz.useZlib && level == gz.level;\n\t\t}\n\t}\n\n\t@Override\n\tpublic int hashCode() {\n\n\t\t/* keep consistent with equals(), which compares level and useZlib */\n\t\treturn 31 * level + (useZlib ? 1 : 0);\n\t}\n\n\tprivate InputStream decode(final InputStream in) throws IOException {\n\t\tif (useZlib) {\n\t\t\treturn new InflaterInputStream(in);\n\t\t} else {\n\t\t\treturn GzipCompressorInputStream.builder()\n\t\t\t\t\t.setInputStream(in)\n\t\t\t\t\t.setDecompressConcatenated(true)\n\t\t\t\t\t.get();\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData decode(final ReadData readData) throws N5IOException {\n\n\t\ttry {\n\t\t\treturn ReadData.from(decode(readData.inputStream()));\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData encode(final ReadData readData) {\n\t\tif (useZlib) {\n\t\t\treturn readData.encode(out -> new DeflaterOutputStream(out, new Deflater(level)));\n\t\t} else {\n\t\t\treturn readData.encode(out -> {\n\t\t\t\t/* use a fresh GzipParameters per call so concurrent encodes do not share mutable state */\n\t\t\t\tfinal GzipParameters gzipParameters = new GzipParameters();\n\t\t\t\tgzipParameters.setCompressionLevel(level);\n\t\t\t\treturn new GzipCompressorOutputStream(out, gzipParameters);\n\t\t\t});\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/HttpKeyValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.apache.commons.io.IOUtils;\n\nimport org.apache.commons.lang3.function.TriFunction;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.http.ListResponseParser;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\nimport java.io.Closeable;\nimport java.io.FileNotFoundException;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.io.OutputStream;\nimport java.io.Reader;\nimport java.io.Writer;\nimport java.net.HttpURLConnection;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.net.URL;\nimport java.nio.channels.NonWritableChannelException;\nimport java.nio.charset.StandardCharsets;\nimport java.util.ArrayList;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\n/**\n * A read-only {@link KeyValueAccess} implementation using HTTP. As a result, calling <code>lockForWriting</code>, <code>createDirectories</code>, or <code>delete</code> will throw an {@link N5Exception}.\n * <p>\n * The behavior of <code>list</code>, <code>listDirectories</code>, and <code>isDirectory</code> will depend on the server configuration. 
See the documentation of those methods for details.\n * <p>\n * Methods that take a \"normalPath\" as an argument expect absolute URIs.\n */\npublic class HttpKeyValueAccess implements KeyValueAccess {\n\n\tpublic static final String HEAD = \"HEAD\";\n\tpublic static final String GET = \"GET\";\n\n\tpublic static final String RANGE = \"Range\";\n\tpublic static final String ACCEPT_RANGE = \"Accept-Range\";\n\tpublic static final String BYTES = \"bytes\";\n\n\tprivate int readTimeoutMilliseconds;\n\tprivate int connectionTimeoutMilliseconds;\n\n\tprivate ListResponseParser listResponseParser = ListResponseParser.defaultListParser();\n\tprivate ListResponseParser listDirectoryResponseParser = ListResponseParser.defaultDirectoryListParser();\n\n\t/**\n\t * Opens an {@link HttpKeyValueAccess}\n\t *\n\t * @throws N5IOException if the access could not be created\n\t */\n\tpublic HttpKeyValueAccess() {\n\n\t\treadTimeoutMilliseconds = 5000;\n\t\tconnectionTimeoutMilliseconds = 5000;\n\t}\n\n\tpublic void setReadTimeout(int readTimeoutMilliseconds) {\n\n\t\tthis.readTimeoutMilliseconds = readTimeoutMilliseconds;\n\t}\n\n\tpublic void setConnectionTimeout(int connectionTimeoutMilliseconds) {\n\n\t\tthis.connectionTimeoutMilliseconds = connectionTimeoutMilliseconds;\n\t}\n\n\tpublic void setListParser(final ListResponseParser parser) {\n\n\t\tlistResponseParser = parser;\n\t}\n\n\tpublic void setListDirectoryParser(final ListResponseParser parser) {\n\n\t\tlistDirectoryResponseParser = parser;\n\t}\n\n\t@Override\n\tpublic String normalize(final String path) {\n\n\t\treturn N5URI.normalizeGroupPath(path);\n\t}\n\n\t@Override\n\tpublic URI uri(final String normalPath) throws URISyntaxException {\n\n\t\treturn new URI(normalPath);\n\t}\n\n\t/**\n\t * Test whether the {@code normalPath} exists.\n\t * <p>\n\t * Removes leading slash from {@code normalPath}, and then checks whether\n\t * either {@code path} or {@code path + \"/\"} is a key.\n\t *\n\t * @param normalPath is 
expected to be in normalized form, no further efforts are\n\t *                   made to normalize it.\n\t * @return {@code true} if {@code path} exists, {@code false} otherwise\n\t */\n\t@Override\n\tpublic boolean exists(final String normalPath) {\n\n\t\ttry {\n\t\t\trequireValidHttpResponse(normalPath, HEAD, \"Error checking existence: \" + normalPath, true);\n\t\t\treturn true;\n\t\t} catch (N5Exception.N5NoSuchKeyException e) {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\t@Override public long size(String normalPath) {\n\n\t\tfinal HttpURLConnection head = requireValidHttpResponse(normalPath, HEAD, \"Error getting size: \" + normalPath, true);\n\t\treturn head.getContentLengthLong();\n\t}\n\n\t/**\n\t * Test whether the path is a directory.\n\t * <p>\n\t * Appends trailing \"/\" to {@code normalPath} if there is none, removes\n\t * leading \"/\", and then checks whether resulting {@code path} is a key.\n\t *\n\t * @param normalPath is expected to be in normalized form, no further efforts are\n\t *                   made to normalize it.\n\t * @return {@code true} if {@code path} (with trailing \"/\") exists as a key,\n\t * {@code false} otherwise\n\t */\n\t@Override\n\tpublic boolean isDirectory(final String normalPath) {\n\n\t\ttry {\n\t\t\trequireValidHttpResponse(getDirectoryPath(normalPath), HEAD, false, (code, msg, http) -> {\n\t\t\t\tfinal N5Exception cause = validExistsResponse(code, msg, \"Error checking directory: \" + normalPath, true);\n\t\t\t\tif (code >= 300 && code < 400) {\n\t\t\t\t\tfinal String redirectLocation = http.getHeaderField(\"Location\");\n\t\t\t\t\tif (!(redirectLocation.endsWith(\"/\") || redirectLocation.endsWith(\"index.html\")))\n\t\t\t\t\t\treturn new N5Exception.N5NoSuchKeyException(\"Found file at \" + normalPath + \" but it was not a directory\");\n\t\t\t\t\treturn null;\n\t\t\t\t}\n\t\t\t\treturn cause;\n\t\t\t});\n\t\t\treturn true;\n\t\t} catch (N5Exception e) {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tprivate static 
String getDirectoryPath(String normalPath) {\n\n\t\tfinal String directoryNormalPath;\n\t\tif (normalPath.endsWith(\"/\"))\n\t\t\tdirectoryNormalPath = normalPath;\n\t\telse\n\t\t\tdirectoryNormalPath = normalPath + \"/\";\n\t\treturn directoryNormalPath;\n\t}\n\n\t/**\n\t * Test whether the path is a file.\n\t * <p>\n\t * Checks whether {@code normalPath} has no trailing \"/\", then removes\n\t * leading \"/\" and checks whether the resulting {@code path} is a key.\n\t *\n\t * @param normalPath is expected to be in normalized form, no further efforts are\n\t *                   made to normalize it.\n\t * @return {@code true} if {@code path} exists as a key and has no trailing\n\t * slash, {@code false} otherwise\n\t */\n\t@Override\n\tpublic boolean isFile(final String normalPath) {\n\n\t\t/* Files must not end in `/`, and don't accept a redirect to a location ending in `/` */\n\t\ttry {\n\t\t\trequireValidHttpResponse(getFilePath(normalPath), HEAD, false, (code, msg, http) -> {\n\t\t\t\tfinal N5Exception cause = validExistsResponse(code, msg, \"Error accessing file: \" + normalPath, true);\n\t\t\t\tif (code >= 300 && code < 400) {\n\t\t\t\t\tfinal String redirectLocation = http.getHeaderField(\"Location\");\n\t\t\t\t\tif (redirectLocation.endsWith(\"/\") || redirectLocation.endsWith(\"index.html\"))\n\t\t\t\t\t\treturn new N5Exception.N5NoSuchKeyException(\"Found key at \" + normalPath + \" but it was a directory\");\n\t\t\t\t}\n\t\t\t\treturn cause;\n\t\t\t});\n\t\t\treturn true;\n\t\t} catch (N5Exception e) {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tprivate static String getFilePath(String normalPath) {\n\n\t\tfinal String fileNormalPath = normalPath.replaceAll(\"/+$\", \"\");\n\t\treturn fileNormalPath;\n\t}\n\n\tprivate HttpURLConnection httpRequest(String normalPath, String method) throws IOException {\n\n\t\tfinal URL url = URI.create(normalPath).toURL();\n\t\tfinal HttpURLConnection connection = 
(HttpURLConnection)url.openConnection();\n\t\tconnection.setReadTimeout(readTimeoutMilliseconds);\n\t\tconnection.setConnectTimeout(connectionTimeoutMilliseconds);\n\t\tconnection.setRequestMethod(method);\n\t\treturn connection;\n\t}\n\n\t@Override\n\tpublic VolatileReadData createReadData(final String normalPath) {\n\t\treturn VolatileReadData.from(new HttpLazyRead(normalPath));\n\t}\n\n\tpublic LockedChannel lockForReading(final String normalPath) throws N5IOException {\n\t\t//TODO Caleb: Maybe check exists lazily when attempting to read\n\t\ttry {\n\t\t\tif (!exists(normalPath))\n\t\t\t\tthrow new N5Exception.N5NoSuchKeyException(\"Key does not exist: \" + normalPath);\n\t\t\treturn new HttpObjectChannel(uri(normalPath), 0, -1);\n\t\t} catch (URISyntaxException e) {\n\t\t\tthrow new N5Exception(\"Invalid URI Syntax\", e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic LockedChannel lockForWriting(final String normalPath) throws N5IOException {\n\n\t\tthrow new N5Exception(\"HttpKeyValueAccess is read-only\");\n\t}\n\n\t@Override\n\tpublic void write(final String normalPath, final ReadData data) throws N5IOException {\n\n\t\tthrow new N5Exception(\"HttpKeyValueAccess is read-only\");\n\t}\n\n\t/**\n\t * List all 'directory'-like children of a path.\n\t * <p>\n\t * Will throw an N5IOException both if a connection to the server can not be established, or the server does not allow listing.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return the directories\n\t * @throws N5IOException\n\t *             if an error occurs during listing\n\t */\n\t@Override\n\tpublic String[] listDirectories(final String normalPath) throws N5IOException {\n\n\t\treturn queryListEntries(normalPath, listDirectoryResponseParser, true);\n\t}\n\n\t/**\n\t * List all children of a path.\n\t * <p>\n\t * Will throw an N5IOException both if a connection to the server can not be\n\t * established, 
or the server does not allow listing.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further efforts are\n\t *            made to normalize it.\n\t * @return the child paths\n\t * @throws N5IOException\n\t *             if an error occurs during listing\n\t */\n\t@Override\n\tpublic String[] list(final String normalPath) throws N5IOException {\n\n\t\treturn queryListEntries(normalPath, listResponseParser, true);\n\t}\n\n\tprivate String[] queryListEntries(String normalPath, ListResponseParser parser, boolean allowRedirect) throws N5IOException {\n\n\t\tfinal HttpURLConnection http = requireValidHttpResponse(normalPath, GET, \"Error listing directory at \" + normalPath, allowRedirect);\n\t\ttry {\n\t\t\tfinal String listResponse = responseToString(http.getInputStream());\n\t\t\treturn parser.parseListResponse(listResponse);\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(\"Error listing directory at \" + normalPath, e);\n\t\t}\n\t}\n\n\tprivate static N5Exception validExistsResponse(int code, String responseMsg, String message, boolean allowRedirect) {\n\t\tif (code >= 200 && code < (allowRedirect ? 
400 : 300)) return null;\n\n\t\tfinal RuntimeException cause = new RuntimeException(message + \" (\" + responseMsg + \") (\" + code + \")\");\n\t\tif (code == 404 || code == 410)\n\t\t\treturn new N5Exception.N5NoSuchKeyException(message, cause);\n\n\t\treturn new N5Exception(message, cause);\n\t}\n\n\tprivate HttpURLConnection requireValidHttpResponse(String uri, String method, String message, boolean allowRedirect) throws N5Exception {\n\t\treturn requireValidHttpResponse(uri, method, (code, msg, http) -> validExistsResponse(code, msg, message, allowRedirect));\n\t}\n\n\tprivate HttpURLConnection requireValidHttpResponse(String uri, String method, TriFunction<Integer, String, HttpURLConnection, N5Exception> filterCode) throws N5Exception {\n\t\treturn requireValidHttpResponse(uri, method, true, filterCode);\n\t}\n\n\tprivate HttpURLConnection requireValidHttpResponse(String uri, String method, boolean followRedirects, TriFunction<Integer, String, HttpURLConnection, N5Exception> filterCode) throws N5Exception {\n\n\t\tfinal int code;\n\t\tfinal HttpURLConnection http;\n\t\tfinal String responseMsg;\n\t\ttry {\n\t\t\thttp = httpRequest(uri, method);\n\t\t\thttp.setInstanceFollowRedirects(followRedirects);\n\t\t\tcode = http.getResponseCode();\n\t\t\tresponseMsg = http.getResponseMessage();\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(\"Could not validate HTTP Response\", e);\n\t\t}\n\n\t\tfinal N5Exception cause = filterCode.apply(code, responseMsg, http);\n\t\tif (cause != null) throw cause;\n\t\treturn http;\n\t}\n\n\tprivate String responseToString(InputStream inputStream) throws IOException {\n\n\t\treturn IOUtils.toString(inputStream, StandardCharsets.UTF_8.name());\n\t}\n\n\t@Override\n\tpublic void createDirectories(final String normalPath) {\n\n\t\tthrow new N5Exception(\"HttpKeyValueAccess is read-only\");\n\t}\n\n\t@Override\n\tpublic void delete(final String normalPath) {\n\n\t\tthrow new N5Exception(\"HttpKeyValueAccess is 
read-only\");\n\t}\n\n\tprivate class HttpObjectChannel implements LockedChannel {\n\n\t\tprotected final URI uri;\n\t\tprivate final long startByte;\n\t\tprivate final long size;\n\t\tprivate final ArrayList<Closeable> resources = new ArrayList<>();\n\n\t\tprotected HttpObjectChannel(final URI uri, long startByte, long size) {\n\n\t\t\tthis.uri = uri;\n\t\t\tthis.startByte = startByte;\n\t\t\tthis.size = size;\n\t\t}\n\n\t\tprivate boolean isPartialRead() {\n\t\t\treturn startByte > 0 || (size >= 0 && size != Long.MAX_VALUE);\n\t\t}\n\n\t\t@Override\n\t\tpublic InputStream newInputStream() throws N5IOException {\n\n\t\t\ttry {\n\t\t\t\tHttpURLConnection conn = (HttpURLConnection)uri.toURL().openConnection();\n\t\t\t\tif (isPartialRead()) {\n\t\t\t\t\tconn.setRequestProperty(RANGE, rangeString());\n\t\t\t\t\tfinal String acceptRanges = conn.getHeaderField(ACCEPT_RANGE);\n\t\t\t\t\tif (acceptRanges == null || !acceptRanges.equals(BYTES)) {\n\t\t\t\t\t\tconn.disconnect();\n\t\t\t\t\t\tconn = (HttpURLConnection)uri.toURL().openConnection();\n\t\t\t\t\t\treturn ReadData.from(conn.getInputStream()).materialize().slice(startByte, size).inputStream();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn conn.getInputStream();\n\t\t\t} catch (FileNotFoundException e) {\n\t\t\t\t/*default HttpURLConnection throws FileNotFoundException on 404 or 410 */\n\t\t\t\tthrow new N5Exception.N5NoSuchKeyException(\"Could not open stream for \" + uri, e);\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(\"Could not open stream for \" + uri, e);\n\t\t\t}\n\t\t}\n\n\t\tprivate String rangeString() {\n\n\t\t\tfinal String lastByte = (size > 0) ? 
Long.toString(startByte + size - 1) : \"\";\n\t\t\treturn String.format(\"%s=%d-%s\", BYTES, startByte, lastByte);\n\t\t}\n\n\t\t@Override\n\t\tpublic Reader newReader() {\n\n\t\t\tfinal InputStreamReader reader = new InputStreamReader(newInputStream(), StandardCharsets.UTF_8);\n\t\t\tsynchronized (resources) {\n\t\t\t\tresources.add(reader);\n\t\t\t}\n\t\t\treturn reader;\n\t\t}\n\n\t\t@Override\n\t\tpublic OutputStream newOutputStream() {\n\n\t\t\tthrow new NonWritableChannelException();\n\t\t}\n\n\t\t@Override\n\t\tpublic Writer newWriter() {\n\n\t\t\tthrow new NonWritableChannelException();\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() throws IOException {\n\n\t\t\tsynchronized (resources) {\n\t\t\t\tfor (final Closeable resource : resources) {\n\t\t\t\t\tresource.close();\n\t\t\t\t}\n\t\t\t\tresources.clear();\n\t\t\t}\n\t\t}\n\t}\n\n\tprivate class HttpLazyRead implements LazyRead {\n\n\t\tprivate final String normalKey;\n\n\t\tHttpLazyRead(String normalKey) {\n\t\t\tthis.normalKey = normalKey;\n\t\t}\n\n\t\t@Override\n\t\tpublic long size() {\n\t\t\treturn HttpKeyValueAccess.this.size(normalKey);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData materialize(long offset, long length) {\n\t\t\ttry (final HttpObjectChannel ch = new HttpObjectChannel(uri(normalKey), offset, length)) {\n\t\t\t\treturn ReadData.from(ch.newInputStream()).materialize();\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t} catch (URISyntaxException e) {\n\t\t\t\tthrow new N5Exception(e);\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() {\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/IntArrayDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class IntArrayDataBlock extends AbstractDataBlock<int[]> {\n\n\tpublic IntArrayDataBlock(final int[] size, final long[] gridPosition, final int[] data) {\n\n\t\tsuper(size, gridPosition, data, a -> a.length);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/IoPolicy.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\nimport java.io.IOException;\n\npublic interface IoPolicy {\n\n    void write(String key, ReadData readData) throws IOException;\n\n    VolatileReadData read(String key) throws IOException;\n\n    void delete(String key) throws IOException;\n\n    static IoPolicy withFallback(IoPolicy primary, IoPolicy fallback) {\n        return new IoPolicy() {\n\n            @Override\n            public void write(String key, ReadData readData) throws IOException {\n                try {\n                    primary.write(key, readData);\n                } catch (IOException e) {\n                    fallback.write(key, readData);\n                }\n            }\n\n            @Override\n            public VolatileReadData read(String key) throws IOException {\n                try {\n                    return primary.read(key);\n                } catch (IOException e) {\n                    return fallback.read(key);\n                }\n            }\n\n            @Override\n            public void delete(String key) throws IOException {\n                try {\n                    primary.delete(key);\n                } catch (IOException e) {\n                    fallback.delete(key);\n                }\n            }\n        };\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/KeyLockState.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.IOException;\nimport java.nio.channels.FileChannel;\nimport java.nio.file.Path;\nimport java.nio.file.StandardOpenOption;\nimport java.util.concurrent.Semaphore;\n\n/**\n * Per-key state that tracks both thread locks and file locks.\n */\nclass KeyLockState {\n\n\tprivate final Path path;\n\n\tprivate final LockingPolicy policy;\n\n\tpublic KeyLockState(final Path path, LockingPolicy policy) {\n\t\tthis.path = path;\n\t\tthis.policy = policy;\n\t}\n\n\t/**\n\t * The current system-level file lock (shared for reading, exclusive for\n\t * writing). Is {@link ChannelLock#close closed} when the last {@link\n\t * #releaseRead() Reader is released}, or the (one and only) {@link\n\t * #releaseWrite() Writer is released}.\n\t */\n\tprivate volatile ChannelLock channelLock;\n\n\t/**\n\t * Multiple Readers coordinate via this mutex. {@code numReaders} may only\n\t * be modified when {@code readerMutex} is held.\n\t */\n\tprivate final Semaphore readerMutex = new Semaphore(1);\n\tprivate int numReaders = 0;\n\n\t/**\n\t * This coordinates mutual exclusion between one writer and (the first of)\n\t * multiple readers. {@code channelLock} may only be created or closed when\n\t * {@code channelLockMutex} is held.\n\t */\n\tprivate final Semaphore channelLockMutex = new Semaphore(1);\n\n\tLockedFileChannel acquireRead() throws IOException {\n\n\t\ttry {\n\t\t\treaderMutex.acquire();\n\t\t\ttry {\n\t\t\t\tif (numReaders == 0) {\n\t\t\t\t\t// We are the first Reader, and are responsible for creating the channelLock\n\t\t\t\t\t// (Other concurrent Readers will still be blocked in readerMutex.)\n\n\t\t\t\t\t// If a Writer is still open, this will block us until the Writer is closed.\n\t\t\t\t\tchannelLockMutex.acquire();\n\n\t\t\t\t\ttry {\n\t\t\t\t\t\tchannelLock = ChannelLock.lock(path, false, policy);\n\t\t\t\t\t} catch (IOException e) {\n\t\t\t\t\t\t// Something went wrong. 
Back off.\n\t\t\t\t\t\tchannelLockMutex.release();\n\t\t\t\t\t\tthrow e;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// We have a FileChannel.\n\t\t\t\t// Create a LockedFileChannel that will releaseRead() when it is closed.\n\t\t\t\t++numReaders;\n\t\t\t\treturn new LockedFileChannel(channelLock.getChannel(), this::releaseRead);\n\t\t\t} finally {\n\t\t\t\treaderMutex.release();\n\t\t\t}\n\t\t} catch (InterruptedException e) {\n\t\t\tthrow new IOException(e);\n\t\t}\n\t}\n\n\tvoid releaseRead() throws IOException {\n\n\t\ttry {\n\t\t\treaderMutex.acquire();\n\t\t\ttry {\n\t\t\t\t--numReaders;\n\t\t\t\tif (numReaders == 0) {\n\t\t\t\t\t// We were the last Reader, and are responsible for releasing the channelLock\n\t\t\t\t\treleaseChannelLock();\n\t\t\t\t}\n\t\t\t} finally {\n\t\t\t\treaderMutex.release();\n\t\t\t}\n\t\t} catch (InterruptedException e) {\n\t\t\tthrow new IOException(e);\n\t\t}\n\t}\n\n\tprivate void releaseChannelLock() throws IOException {\n\n\t\ttry {\n\t\t\tchannelLock.close();\n\t\t} finally {\n\t\t\tchannelLockMutex.release();\n\t\t}\n\t}\n\n\tLockedFileChannel acquireWrite() throws IOException {\n\n\t\ttry {\n\t\t\t// If another Writer or Reader is still open, this will block until it is closed.\n\t\t\tchannelLockMutex.acquire();\n\n\t\t\ttry {\n\t\t\t\tchannelLock = ChannelLock.lock(path, true, policy);\n\t\t\t} catch (IOException e) {\n\t\t\t\t// Something went wrong. Back off.\n\t\t\t\tchannelLockMutex.release();\n\t\t\t\tthrow e;\n\t\t\t}\n\n\t\t\t// We have a WRITE ChannelLock.\n\t\t\t// Create a LockedFileChannel that will releaseWrite() when it is closed.\n\t\t\treturn new LockedFileChannel(channelLock.getChannel(), this::releaseWrite);\n\t\t} catch (InterruptedException e) {\n\t\t\tthrow new IOException(e);\n\t\t}\n\t}\n\n\tvoid releaseWrite() throws IOException {\n\t\treleaseChannelLock();\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/KeyValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.file.FileSystem;\nimport java.util.Arrays;\nimport java.util.stream.Collectors;\n\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\n/**\n * Key value read primitives used by {@link N5KeyValueReader}\n * implementations. This interface implements a subset of access primitives\n * provided by {@link FileSystem} to reduce the implementation burden for\n * backends lacking a {@link FileSystem} implementation (such as AWS-S3).\n *\n * @author Stephan Saalfeld\n */\npublic interface KeyValueAccess {\n\n\t/**\n\t * Split a path string into its components.\n\t *\n\t * @param path\n\t *            the path\n\t * @return the path components\n\t */\n\tdefault String[] components( final String path ) {\n\n\t\tString[] components = Arrays.stream(path.split(\"/\"))\n\t\t\t\t.filter(x -> !x.isEmpty())\n\t\t\t\t.toArray(String[]::new);\n\t\tif (components.length == 0)\n\t\t\treturn path.startsWith(\"/\") ? new String[]{\"/\"} : new String[]{\"\"};\n\n\t\tif (path.startsWith(\"/\") && !components[0].equals(\"/\")) {\n\t\t\tfinal String[] prependRoot = new String[components.length + 1];\n\t\t\tprependRoot[0] = \"/\";\n\t\t\tSystem.arraycopy(components, 0, prependRoot, 1, components.length);\n\t\t\tcomponents = prependRoot;\n\t\t}\n\n\t\tif (path.endsWith(\"/\") && !components[components.length - 1].endsWith(\"/\")) {\n\t\t\tcomponents[components.length - 1] = components[components.length - 1] + \"/\";\n\t\t}\n\t\treturn components;\n\t}\n\n\t/**\n\t * Compose a path from a base uri and subsequent components.\n\t *\n\t * @param uri the base path uri\n\t * @param components the path components\n\t * @return the path\n\t */\n\tdefault String compose( final URI uri, final String... 
components ) {\n\n\t\tint firstNonEmptyIdx = 0;\n\t\twhile (firstNonEmptyIdx < components.length && (components[firstNonEmptyIdx] == null || components[firstNonEmptyIdx].isEmpty())) {\n\t\t\tfirstNonEmptyIdx++;\n\t\t}\n\n\t\t/* If there are no non-empty components, there is nothing to compose against; return the uri. */\n\t\tif (components.length == firstNonEmptyIdx)\n\t\t\treturn uri.toString();\n\n\t\t/* allocate space for the initial path and the new components, skipping empty strings */\n\t\tfinal int nonEmptySize = components.length - firstNonEmptyIdx;\n\t\tfinal String[] allComponents = new String[1 + nonEmptySize];\n\t\tif (uri.getPath().isEmpty())\n\t\t\t//TODO Caleb: This `isEmpty()` check is only necessary for Java 8. In newer versions\n\t\t\t//\tURI resolution is updated so that resolving an empty path with a new path adds\n\t\t\t//\ta leading `/` between the rest of the URI and the path part. In Java 8 it doesn't\n\t\t\t//  add the `/` so it ends up directly concatenating the path part with the URI\n\t\t\tallComponents[0] = \"/\";\n\t\telse\n\t\t\tallComponents[0] = uri.getPath();\n\n\t\tSystem.arraycopy(components, firstNonEmptyIdx, allComponents, 1, nonEmptySize);\n\n\t\tURI composedUri = uri;\n\t\tfor (int i = 0; i < allComponents.length; i++) {\n\t\t\tfinal String component = allComponents[i];\n\t\t\tif (component == null || component.isEmpty())\n\t\t\t\tcontinue;\n\t\t\telse if (component.endsWith(\"/\") || i == allComponents.length - 1)\n\t\t\t\tcomposedUri = composedUri.resolve(N5URI.encodeAsUriPath(component));\n\t\t\telse\n\t\t\t\tcomposedUri = composedUri.resolve(N5URI.encodeAsUriPath(component + \"/\"));\n\t\t}\n\t\treturn composedUri.toString();\n\t}\n\n\t@Deprecated\n\tdefault String compose( final String... 
components ) {\n\n\t\treturn normalize(\n\t\t\t\tArrays.stream(components)\n\t\t\t\t\t\t.filter(x -> !x.isEmpty())\n\t\t\t\t\t\t.collect(Collectors.joining(\"/\"))\n\t\t);\n\n\t}\n\n\t/**\n\t * Get the parent of a path string.\n\t *\n\t * @param path\n\t *            the path\n\t * @return the parent path or null if the path has no parent\n\t */\n\tdefault String parent( final String path ) {\n\t\tfinal String removeTrailingSlash = path.replaceAll(\"/+$\", \"\");\n\t\treturn normalize(N5URI.getAsUri(removeTrailingSlash).resolve(\"\").toString());\n\t}\n\n\t/**\n\t * Relativize path relative to base.\n\t *\n\t * @param path\n\t *            the path\n\t * @param base\n\t *            the base path\n\t * @return the path relativized against the base path\n\t */\n\tdefault String relativize( final String path, final String base ) {\n\n\t\ttry {\n\t\t\t/*\n\t\t\t * Must pass absolute path to `uri`. If it already is, this is\n\t\t\t * redundant, and has no impact on the result. It's not true that\n\t\t\t * the inputs are always referencing absolute paths, but it doesn't\n\t\t\t * matter in this case, since we only care about the relative\n\t\t\t * portion of `path` to `base`, so the result always ignores the\n\t\t\t * absolute prefix anyway.\n\t\t\t */\n\t\t\treturn normalize(uri(\"/\" + base).relativize(uri(\"/\" + path)).toString());\n\t\t} catch (final URISyntaxException e) {\n\t\t\tthrow new N5Exception(\"Cannot relativize path (\" + path + \") with base (\" + base + \")\", e);\n\t\t}\n\t}\n\n\t/**\n\t * Normalize a path to canonical form. All paths pointing to the same\n\t * location return the same output. 
This is most important for cached\n\t * data pointing at the same location getting the same key.\n\t *\n\t * @param path\n\t *            the path\n\t * @return the normalized path\n\t */\n\tString normalize( final String path );\n\n\t/**\n\t * Get the absolute (including scheme) {@link URI} of the given path\n\t *\n\t * @param uriString\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return absolute URI\n\t * @throws URISyntaxException if the given path is not a proper URI\n\t */\n\tdefault URI uri(final String uriString) throws URISyntaxException {\n\t\ttry {\n\t\t\treturn URI.create(uriString);\n\t\t} catch (Exception ignore) {\n\t\t\treturn N5URI.encodeAsUri(uriString);\n\t\t}\n\t}\n\t/**\n\t * Test whether the path exists.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return true if the path exists\n\t */\n\tboolean exists( final String normalPath );\n\n\t/**\n\t * Returns the size in bytes of the object at the given normalPath if it exists.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return the size of the object in bytes.\n\t * @throws N5Exception.N5NoSuchKeyException if the given key does not exist\n\t */\n\tlong size( final String normalPath ) throws N5Exception.N5NoSuchKeyException;\n\n\t/**\n\t * Test whether the path is a directory.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return true if the path is a directory\n\t */\n\tboolean isDirectory( String normalPath );\n\n\t/**\n\t * Test whether the path is a file.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize 
it.\n\t * @return true if the path is a file\n\t */\n\tboolean isFile( String normalPath ); // TODO: Looks unused. Remove?\n\n\t/**\n\t * Create a {@link VolatileReadData} through which data at the normal key\n\t * can be read.\n\t * <p>\n\t * Implementations should read lazily if possible. Consumers may call {@link\n\t * ReadData#materialize()} to force a read operation if needed.\n\t * <p>\n\t * If supported by this KeyValueAccess implementation, partial reads are\n\t * possible by {@link ReadData#slice slicing} the returned {@code ReadData}.\n\t * <p>\n\t * The resulting {@code VolatileReadData} is potentially lazy. If the requested\n\t * key does not exist, it will throw {@code N5NoSuchKeyException}. Whether\n\t * the exception is thrown when {@link KeyValueAccess#createReadData(String)} is called,\n\t * or when trying to materialize the {@code VolatileReadData}, is implementation dependent.\n\t *\n\t * @param normalPath\n\t * \t\tis expected to be in normalized form, no further efforts are made to normalize it\n\t *\n\t * @return a ReadData\n\t *\n\t * @throws N5IOException\n\t * \t\tif an error occurs\n\t */\n\tVolatileReadData createReadData(final String normalPath) throws N5IOException;\n\n\t/**\n\t * Write {@code data} to the given {@code normalPath}.\n\t * <p>\n\t * Existing data at {@code normalPath} will be overwritten.\n\t *\n\t * @param normalPath\n\t * \t\tis expected to be in normalized form, no further efforts are made to normalize it\n\t * @param data\n\t * \t\tthe data to write\n\t *\n\t * @throws N5IOException\n\t * \t\tif an error occurs\n\t */\n\tvoid write(String normalPath, ReadData data) throws N5IOException;\n\n\t/**\n\t * Create a lock on a path for reading. This isn't meant to be kept\n\t * around. 
Create, use, [auto]close, e.g.\n\t * <code>\n\t * try (final LockedChannel lock = store.lockForReading(normalPath)) {\n\t *   ...\n\t * }\n\t * </code>\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return the locked channel\n\t * @throws N5IOException\n\t *             if a locked channel could not be created\n\t * @deprecated migrate to {@link KeyValueAccess#createReadData(String)}\n\t */\n\t@Deprecated\n\tdefault LockedChannel lockForReading( final String normalPath ) throws N5IOException {\n\t\treturn new BufferedKvaLockedChannel(this, normalPath);\n\t}\n\n\t/**\n\t * Create an exclusive lock on a path for writing. If the file doesn't exist\n\t * yet, it will be created, including all directories leading up to it. This\n\t * lock isn't meant to be kept around. Create, use, [auto]close, e.g.\n\t * <code>\n\t * try (final LockedChannel lock = store.lockForWriting(normalPath)) {\n\t *   ...\n\t * }\n\t * </code>\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return the locked channel\n\t * @throws N5IOException\n\t *             if a locked channel could not be created\n\t * @deprecated migrate to {@link KeyValueAccess#write(String, ReadData)}\n\t */\n\t@Deprecated\n\tdefault LockedChannel lockForWriting( final String normalPath ) throws N5IOException {\n\t\treturn new BufferedKvaLockedChannel(this, normalPath);\n\t}\n\n\t/**\n\t * List all 'directory'-like children of a path.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return the directories\n\t * @throws N5IOException\n\t *             if an error occurs during listing\n\t */\n\tString[] listDirectories( final String normalPath ) throws N5IOException;\n\n\t/**\n\t * List all children of a path.\n\t *\n\t * @param normalPath\n\t *            
is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @return the child paths\n\t * @throws N5IOException if an error occurs during listing\n\t */\n\tString[] list( final String normalPath ) throws N5IOException;\n\n\t/**\n\t * Create a directory and all parent paths along the way. The directory\n\t * and parent paths are discoverable. On a filesystem, this usually means\n\t * that the directories exist; on a key-value store that is unaware of\n\t * directories, this may be implemented as creating an object for each path.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @throws N5IOException\n\t *             if an error occurs during creation\n\t */\n\tvoid createDirectories( final String normalPath ) throws N5IOException;\n\n\t/**\n\t * Delete a path. If the path is a directory, delete it recursively.\n\t *\n\t * @param normalPath\n\t *            is expected to be in normalized form, no further\n\t *            efforts are made to normalize it.\n\t * @throws N5IOException\n\t *            if an error occurs during deletion\n\t */\n\tvoid delete( final String normalPath ) throws N5IOException;\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/LinkedAttributePathToken.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.util.Iterator;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonArray;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonNull;\nimport com.google.gson.JsonObject;\nimport com.google.gson.JsonPrimitive;\n\npublic abstract class LinkedAttributePathToken<T extends JsonElement> implements Iterator<LinkedAttributePathToken<?>> {\n\n\t/**\n\t * The JsonElement which contains the mapping that this token represents.\n\t */\n\tprotected T parentJson;\n\n\t/**\n\t * The token representing the subsequent path element.\n\t */\n\tprotected LinkedAttributePathToken<?> childToken;\n\n\t/**\n\t * @return a reference object of type {@code T}\n\t */\n\tpublic abstract T getJsonType();\n\n\t/**\n\t * This method will return the child element, if the {@link #parentJson}\n\t * contains a child that\n\t * is valid for this token. If the {@link #parentJson} does not have a\n\t * child, this method will\n\t * create it. 
If the {@link #parentJson} does have a child, but it is not\n\t * {@link #jsonCompatible(JsonElement)}\n\t * with our {@link #childToken}, then this method will also create a new\n\t * (compatible) child, and replace\n\t * the existing incompatible child.\n\t *\n\t * @return the resulting child element\n\t */\n\tpublic abstract JsonElement getOrCreateChildElement();\n\n\t/**\n\t * This method will write into {@link #parentJson} the subsequent\n\t * {@link JsonElement}.\n\t * <br>\n\t * The written JsonElement will EITHER be:\n\t * <ul>\n\t * <li>The result of serializing {@code value} ( if {@link #childToken} is\n\t * {@code null} )</li>\n\t * <li>The {@link JsonElement} which represents our {@link #childToken} (See\n\t * {@link #getOrCreateChildElement()} )</li>\n\t * </ul>\n\t *\n\t * @param gson\n\t *            instance used to serialize {@code value }\n\t * @param value\n\t *            to write\n\t * @return the object that was written.\n\t */\n\tpublic JsonElement writeChild(final Gson gson, final Object value) {\n\n\t\tif (childToken != null)\n\t\t\t/*\n\t\t\t * If we have a child, get/create it and set current json element to\n\t\t\t * it\n\t\t\t */\n\t\t\treturn getOrCreateChildElement();\n\t\telse {\n\t\t\t/* We are done, no token remaining */\n\t\t\twriteValue(gson, value);\n\t\t\treturn getChildElement();\n\t\t}\n\t}\n\n\t/**\n\t * Check if the provided {@code json} is compatible with this token.\n\t * <br>\n\t * Compatibility means that the provided {@code JsonElement} does not\n\t * explicitly conflict with this token. That means that {@code null} is\n\t * always compatible. 
The only real incompatibility is when the\n\t * {@code JsonElement}\n\t * that we receive is of a different type than we expect.\n\t *\n\t * @param json\n\t *            element that we are testing for compatibility.\n\t * @return false if {@code json} is not null and is not of type {@code T}\n\t */\n\t@SuppressWarnings(\"BooleanMethodIsAlwaysInverted\")\n\tpublic boolean jsonCompatible(final JsonElement json) {\n\n\t\treturn json == null || json.getClass() == getJsonType().getClass();\n\t}\n\n\t/**\n\t * Write {@code value} into {@link #parentJson}.\n\t * <br>\n\t * Should only be called when {@link #childToken} is null (i.e. this is the\n\t * last token in the attribute path).\n\t *\n\t * @param gson\n\t *            instance used to serialize {@code value}\n\t * @param value\n\t *            to serialize\n\t */\n\tprotected abstract void writeValue(Gson gson, Object value);\n\n\t/**\n\t * Write the {@link JsonElement} that corresponds to {@link #childToken} into\n\t * {@link #parentJson}.\n\t */\n\tprotected abstract void writeChildElement();\n\n\t/**\n\t * @return the element that is represented by {@link #childToken} if\n\t *         present.\n\t */\n\tprotected abstract JsonElement getChildElement();\n\n\t/**\n\t * If {@code json} is compatible with the type or token that we are, then\n\t * set {@link #parentJson} to {@code json}.\n\t * <br>\n\t * However, if {@code json} is either {@code null} or not\n\t * {@link #jsonCompatible(JsonElement)} then\n\t * {@link #parentJson} will be set to a new instance of {@code T}.\n\t *\n\t * @param json\n\t *            to attempt to set to {@link #parentJson}\n\t * @return the value set to {@link #parentJson}.\n\t */\n\t@SuppressWarnings(\"unchecked\")\n\tprotected JsonElement setAndCreateParentElement(final JsonElement json) {\n\n\t\tif (json == null || !jsonCompatible(json)) {\n\t\t\t// noinspection unchecked\n\t\t\tparentJson = (T)getJsonType().deepCopy();\n\t\t} else {\n\t\t\t// noinspection unchecked\n\t\t\tparentJson = 
(T)json;\n\t\t}\n\t\treturn parentJson;\n\t}\n\n\t/**\n\t * @return if we have a {@link #childToken}.\n\t */\n\t@Override\n\tpublic boolean hasNext() {\n\n\t\treturn childToken != null;\n\t}\n\n\t/**\n\t * @return {@link #childToken}\n\t */\n\t@Override\n\tpublic LinkedAttributePathToken<?> next() {\n\n\t\treturn childToken;\n\t}\n\n\tpublic static class ObjectAttributeToken extends LinkedAttributePathToken<JsonObject> {\n\n\t\tprivate static final JsonObject JSON_OBJECT = new JsonObject();\n\n\t\tprivate final String key;\n\n\t\tpublic ObjectAttributeToken(final String key) {\n\n\t\t\tthis.key = key;\n\t\t}\n\n\t\t/**\n\t\t * @return the {@link #key} this token maps its\n\t\t *         {@link #getChildElement()} to.\n\t\t */\n\t\tpublic String getKey() {\n\n\t\t\treturn key;\n\t\t}\n\n\t\t@Override\n\t\tpublic String toString() {\n\n\t\t\treturn getKey();\n\t\t}\n\n\t\t@Override\n\t\tpublic JsonObject getJsonType() {\n\n\t\t\treturn JSON_OBJECT;\n\t\t}\n\n\t\t@Override\n\t\tpublic JsonElement getOrCreateChildElement() {\n\n\t\t\tfinal JsonElement childElement = parentJson.getAsJsonObject().get(key);\n\t\t\tif (!parentJson.has(key) || childToken != null && !childToken.jsonCompatible(childElement)) {\n\t\t\t\twriteChildElement();\n\t\t\t}\n\t\t\treturn getChildElement();\n\t\t}\n\n\t\t@Override\n\t\tprotected JsonElement getChildElement() {\n\n\t\t\treturn parentJson.get(key);\n\t\t}\n\n\t\t@Override\n\t\tprotected void writeValue(final Gson gson, final Object value) {\n\n\t\t\tparentJson.add(key, gson.toJsonTree(value));\n\t\t}\n\n\t\t@Override\n\t\tprotected void writeChildElement() {\n\n\t\t\tif (childToken != null)\n\t\t\t\tparentJson.add(key, childToken.getJsonType().deepCopy());\n\n\t\t}\n\n\t}\n\n\tpublic static class ArrayAttributeToken extends LinkedAttributePathToken<JsonArray> {\n\n\t\tprivate static final JsonArray JSON_ARRAY = new JsonArray();\n\n\t\tprivate final int index;\n\n\t\tpublic ArrayAttributeToken(final int index) {\n\n\t\t\tthis.index = 
index;\n\t\t}\n\n\t\t/**\n\t\t * @return the {@link #index} this token maps its\n\t\t *         {@link #getChildElement()} to.\n\t\t */\n\t\tpublic int getIndex() {\n\n\t\t\treturn index;\n\t\t}\n\n\t\t@Override\n\t\tpublic String toString() {\n\n\t\t\treturn \"[\" + getIndex() + \"]\";\n\t\t}\n\n\t\t@Override\n\t\tpublic JsonArray getJsonType() {\n\n\t\t\treturn JSON_ARRAY;\n\t\t}\n\n\t\t@Override\n\t\tpublic JsonElement getOrCreateChildElement() {\n\n\t\t\tif (index >= parentJson.size() || childToken != null && !childToken.jsonCompatible(parentJson.get(index)))\n\t\t\t\twriteChildElement();\n\n\t\t\treturn parentJson.get(index);\n\t\t}\n\n\t\t@Override\n\t\tprotected void writeChildElement() {\n\n\t\t\tif (childToken != null) {\n\t\t\t\tfillArrayToIndex(parentJson, index, null);\n\t\t\t\tparentJson.set(index, childToken.getJsonType().deepCopy());\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tprotected JsonElement getChildElement() {\n\n\t\t\treturn parentJson.get(index);\n\t\t}\n\n\t\t@Override\n\t\tprotected void writeValue(final Gson gson, final Object value) {\n\n\t\t\tfillArrayToIndex(parentJson, index, value);\n\t\t\tparentJson.set(index, gson.toJsonTree(value));\n\t\t}\n\n\t\t/**\n\t\t * Fill {@code array} up to, and including, {@code index} with a default\n\t\t * value, determined by the type of {@code value}.\n\t\t * Importantly, this does NOT set {@code array[index]=value}.\n\t\t *\n\t\t * @param array\n\t\t *            to fill\n\t\t * @param index\n\t\t *            to fill the array to (inclusive)\n\t\t * @param value\n\t\t *            used to determine the default array fill value\n\t\t */\n\t\tprivate static void fillArrayToIndex(final JsonArray array, final int index, final Object value) {\n\n\t\t\tfinal JsonElement fillValue;\n\t\t\tif (valueRepresentsANumber(value)) {\n\t\t\t\tfillValue = new JsonPrimitive(0);\n\t\t\t} else {\n\t\t\t\tfillValue = JsonNull.INSTANCE;\n\t\t\t}\n\n\t\t\tfor (int i = array.size(); i <= index; i++) 
{\n\t\t\t\tarray.add(fillValue);\n\t\t\t}\n\t\t}\n\n\t\t/**\n\t\t * Check if {@code value} represents a {@link Number}.\n\t\t * True if {@code value} either:\n\t\t * <ul>\n\t\t * <li>is a {@code Number}, or;</li>\n\t\t * <li>is a {@link JsonPrimitive} where\n\t\t * {@link JsonPrimitive#isNumber()}</li>\n\t\t * </ul>\n\t\t *\n\t\t * @param value\n\t\t *            the value to check.\n\t\t * @return true if {@code value} represents a {@link Number}\n\t\t */\n\t\tprivate static boolean valueRepresentsANumber(final Object value) {\n\n\t\t\treturn value instanceof Number || (value instanceof JsonPrimitive && ((JsonPrimitive)value).isNumber());\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/LockedChannel.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\nimport java.io.Closeable;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.io.Reader;\nimport java.io.Writer;\n\n/**\n * A lock on a path that can create a {@link Reader}, {@link Writer},\n * {@link InputStream}, or {@link OutputStream}.\n *\n * @author Stephan Saalfeld\n */\npublic interface LockedChannel extends Closeable {\n\n\t/**\n\t * Create a UTF-8 {@link Reader}.\n\t *\n\t * @return the reader\n\t * @throws N5IOException\n\t *             if the reader could not be created\n\t */\n\tReader newReader() throws N5IOException;\n\n\t/**\n\t * Create a new {@link InputStream}.\n\t *\n\t * @return the input stream\n\t * @throws N5IOException\n\t *             if an input stream could not be created\n\t */\n\tInputStream newInputStream() throws N5IOException;\n\n\t/**\n\t * Create a new UTF-8 {@link Writer}.\n\t *\n\t * @return the writer\n\t * @throws N5IOException\n\t *             if a writer could not be created\n\t */\n\tWriter newWriter() throws N5IOException;\n\n\t/**\n\t * Create a new {@link OutputStream}.\n\t *\n\t * @return the output stream\n\t * @throws N5IOException\n\t *             if an output stream could not be created\n\t */\n\tOutputStream newOutputStream() throws N5IOException;\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/LockedFileChannel.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.channels.Channels;\nimport java.nio.channels.FileChannel;\nimport java.nio.channels.WritableByteChannel;\n\n/**\n * LockedFileChannel implementation for both read and write operations.\n * <p>\n * When closing this {@code LockedFileChannel}, {@code releaseLock} is called,\n * but the {@code channel} is not closed. The channel may be shared among\n * multiple {@code LockedFileChannel} for concurrent readers. {@code\n * releaseLock} should take care of closing the channel when the last reader (or\n * the only writer) is closed.\n */\nclass LockedFileChannel implements Closeable {\n\n\t// TODO: Consider splitting LockedFileChannel into read-only and write-only part.\n\t// TODO: Consider removing LockedFileChannel and having\n\t//       FileKeyLockManager.acquireRead() and FileKeyLockManager.acquireWrite()\n\t//       return appropriately wrapped SeekableByteChannel.\n\n\tprivate final FileChannel channel;\n\tprivate ReleaseLock releaseLock;\n\n\t@FunctionalInterface\n\tpublic interface ReleaseLock {\n\n\t\tvoid release() throws IOException;\n\t}\n\n\tLockedFileChannel(final FileChannel channel, final ReleaseLock releaseLock) {\n\n\t\tthis.channel = channel;\n\t\tthis.releaseLock = releaseLock;\n\t}\n\n\t/**\n\t * Returns the size of this channel's file.\n\t * <p>\n\t * See {@link FileChannel#size()}.\n\t */\n\tpublic long size() throws IOException {\n\t\treturn channel.size();\n\t}\n\n\t/**\n\t * Reads a sequence of bytes from this channel into the given buffer,\n\t * starting at the given file position.\n\t * <p>\n\t * See {@link FileChannel#read(ByteBuffer, long)}.\n\t */\n\tpublic int read(final ByteBuffer dst, final long position) throws IOException {\n\t\treturn channel.read(dst, position);\n\t}\n\n\t/**\n\t * Return an {@link OutputStream} that writes into this channel.\n\t * 
Closing the OutputStream will close this channel.\n\t */\n\tpublic OutputStream asOutputStream() {\n\t\treturn Channels.newOutputStream(new ClosingChannelWrapper());\n\t}\n\n\tvolatile boolean firstWrite = true;\n\n\tprivate class ClosingChannelWrapper implements WritableByteChannel {\n\n\t\t@Override\n\t\tpublic int write(final ByteBuffer src) throws IOException {\n\t\t\tif (!firstWrite)\n\t\t\t\treturn channel.write(src);\n\n\t\t\ttry {\n\t\t\t\tchannel.truncate(0);\n\t\t\t\treturn channel.write(src);\n\t\t\t} finally {\n\t\t\t\tfirstWrite = false;\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isOpen() {\n\t\t\treturn channel.isOpen();\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() throws IOException {\n\t\t\tchannel.close();\n\t\t\tLockedFileChannel.this.close();\n\t\t}\n\t}\n\n\t@Override\n\tpublic void close() throws IOException {\n\n\t\tif (releaseLock != null) {\n\t\t\treleaseLock.release();\n\t\t\treleaseLock = null;\n\t\t\t// Note that setting releaseLock=null here drops the (method)\n\t\t\t// reference to LockKeyState, which potentially allows clearing the\n\t\t\t// WeakReference<LockKeyState> that FileLockManager holds.\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/LockingPolicy.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\n/**\n * File locking policy.\n * <p>\n * Usually, we want to coordinate reads and writs to a container such that\n * <ul>\n * <li>multiple readers can access a key simultaneously (blocking all writers).</li>\n * <li>A writer should have exclusive access to a key (blocking all other readers and writers).</li>\n * </ul>\n * However, this cannot always be enforced for all backends:\n * For SMB on macOS OS-level file locking is broken.\n * For AWS S3 and Google Cloud we can detect (but not prevent) concurrent modifications.\n * <p>\n * Sometimes we know that we can disregard locking, for example in read-only settings.\n * <p>\n * {@code IoPolicy} can be used to configure locking policy for backends ({@link\n * KeyValueAccess}) that support various locking strategies (potentially coming\n * with performance trade-offs).\n * <p>\n * The policy values ({@link #STRICT}, {@link #UNSAFE}, {@link #PERMISSIVE})\n * specify intent. Detailed interpretation is up to the backend implementation.\n */\npublic enum LockingPolicy {\n\n\t/**\n\t * Protect all reads and writes by locks.\n\t * Fail if locking is not possible.\n\t */\n\tSTRICT,\n\n\t/**\n\t * Reads and writes are unprotected.\n\t */\n\tUNSAFE,\n\n\t/**\n\t * Try to lock for all reads and writes.\n\t * Fall back to unprotected reads and writes if locking is not possible.\n\t * This is the default.\n\t */\n\tPERMISSIVE;\n\n\tstatic LockingPolicy fromString(final String s) {\n\t\tif (\"strict\".equalsIgnoreCase(s))\n\t\t\treturn STRICT;\n\t\telse if (\"unsafe\".equalsIgnoreCase(s))\n\t\t\treturn UNSAFE;\n//\t\telse if (\"permissive\".equalsIgnoreCase(s))\n//\t\t\treturn PERMISSIVE;\n\t\telse\n\t\t\treturn PERMISSIVE;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/LongArrayDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class LongArrayDataBlock extends AbstractDataBlock<long[]> {\n\n\tpublic LongArrayDataBlock(final int[] size, final long[] gridPosition, final long[] data) {\n\n\t\tsuper(size, gridPosition, data, a -> a.length);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/Lz4Compression.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport net.jpountz.lz4.LZ4BlockInputStream;\nimport net.jpountz.lz4.LZ4BlockOutputStream;\nimport org.janelia.saalfeldlab.n5.Compression.CompressionType;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@CompressionType(\"lz4\")\n@NameConfig.Name(\"lz4\")\npublic class Lz4Compression implements Compression {\n\n\tprivate static final long serialVersionUID = -9071316415067427256L;\n\n\t@CompressionParameter\n\t@NameConfig.Parameter\n\tprivate final int blockSize;\n\n\tpublic Lz4Compression(final int blockSize) {\n\n\t\tthis.blockSize = blockSize;\n\t}\n\n\tpublic Lz4Compression() {\n\n\t\tthis(1 << 16);\n\t}\n\n\t@Override\n\tpublic boolean equals(final Object other) {\n\n\t\tif (other == null || other.getClass() != Lz4Compression.class)\n\t\t\treturn false;\n\t\telse\n\t\t\treturn blockSize == ((Lz4Compression)other).blockSize;\n\t}\n\n\t@Override\n\tpublic ReadData decode(final ReadData readData) {\n\n\t\treturn ReadData.from(new LZ4BlockInputStream(readData.inputStream()));\n\t}\n\n\t@Override\n\tpublic ReadData encode(final ReadData readData) {\n\t\treturn readData.encode(out -> new LZ4BlockOutputStream(out, blockSize));\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5Exception.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class N5Exception extends RuntimeException {\n\n\tpublic N5Exception() {\n\n\t\tsuper();\n\t}\n\n\tpublic N5Exception(final String message) {\n\n\t\tsuper(message);\n\t}\n\n\tpublic N5Exception(final String message, final Throwable cause) {\n\n\t\tsuper(message, cause);\n\t}\n\n\tpublic N5Exception(final Throwable cause) {\n\n\t\tsuper(cause);\n\t}\n\n\tprotected N5Exception(\n\t\t\tfinal String message,\n\t\t\tfinal Throwable cause,\n\t\t\tfinal boolean enableSuppression,\n\t\t\tfinal boolean writableStackTrace) {\n\n\t\tsuper(message, cause, enableSuppression, writableStackTrace);\n\t}\n\n\tpublic static class N5IOException extends N5Exception{\n\n\t\tpublic N5IOException(final String message) {\n\n\t\t\tsuper(message);\n\t\t}\n\n\t\tpublic N5IOException(final String message, final Throwable cause) {\n\n\t\t\tsuper(message, cause);\n\t\t}\n\n\t\tpublic N5IOException(final Throwable cause) {\n\n\t\t\tsuper(cause);\n\t\t}\n\n\t\tprotected N5IOException(\n\t\t\t\tfinal String message,\n\t\t\t\tfinal Throwable cause,\n\t\t\t\tfinal boolean enableSuppression,\n\t\t\t\tfinal boolean writableStackTrace) {\n\n\t\t\tsuper(message, cause, enableSuppression, writableStackTrace);\n\t\t}\n\t}\n\n\t/**\n\t * This excpetion represents the situation when an attribute is requested by key as a specific Class,\n\t * \tthat attribute key <b>does</b> exist, but is not parseable as the desired Class\n\t */\n\tpublic static class N5ClassCastException extends N5Exception {\n\n\n\t\tpublic N5ClassCastException(final Class<?> cls) {\n\n\t\t\tsuper(\"Cannot cast as class \" + cls.getName());\n\t\t}\n\n\t\tpublic N5ClassCastException(final String message) {\n\n\t\t\tsuper(message);\n\t\t}\n\n\t\tpublic N5ClassCastException(final String message, final Throwable cause) {\n\n\t\t\tsuper(message, cause);\n\t\t}\n\n\t\tpublic N5ClassCastException(final Throwable cause) {\n\n\t\t\tsuper(cause);\n\t\t}\n\n\t\tprotected 
N5ClassCastException(\n\t\t\t\tfinal String message,\n\t\t\t\tfinal Throwable cause,\n\t\t\t\tfinal boolean enableSuppression,\n\t\t\t\tfinal boolean writableStackTrace) {\n\n\t\t\tsuper(message, cause, enableSuppression, writableStackTrace);\n\t\t}\n\t}\n\n\tpublic static class N5NoSuchKeyException extends N5IOException {\n\n\t\tpublic N5NoSuchKeyException(final String message) {\n\n\t\t\tsuper(message);\n\t\t}\n\n\t\tpublic N5NoSuchKeyException(final String message, final Throwable cause) {\n\n\t\t\tsuper(message, cause);\n\t\t}\n\n\t\tpublic N5NoSuchKeyException(final Throwable cause) {\n\n\t\t\tsuper(cause);\n\t\t}\n\n\t\tprotected N5NoSuchKeyException(\n\t\t\t\tfinal String message,\n\t\t\t\tfinal Throwable cause,\n\t\t\t\tfinal boolean enableSuppression,\n\t\t\t\tfinal boolean writableStackTrace) {\n\n\t\t\tsuper(message, cause, enableSuppression, writableStackTrace);\n\t\t}\n\t}\n\n\t/**\n\t * Exception to represent an error when attempting to parse json attributes\n\t */\n\tpublic static class N5JsonParseException extends N5Exception {\n\t\tpublic N5JsonParseException(final String message) {\n\n\t\t\tsuper(message);\n\t\t}\n\n\t\tpublic N5JsonParseException(final String message, final Throwable cause) {\n\n\t\t\tsuper(message, cause);\n\t\t}\n\n\t\tpublic N5JsonParseException(final Throwable cause) {\n\n\t\t\tsuper(cause);\n\t\t}\n\n\t\tprotected N5JsonParseException(\n\t\t\t\tfinal String message,\n\t\t\t\tfinal Throwable cause,\n\t\t\t\tfinal boolean enableSuppression,\n\t\t\t\tfinal boolean writableStackTrace) {\n\n\t\t\tsuper(message, cause, enableSuppression, writableStackTrace);\n\t\t}\n\t}\n\n\n\tpublic static class N5ConcurrentModificationException extends N5IOException {\n\n\t\tpublic N5ConcurrentModificationException(final String message) {\n\n\t\t\tsuper(message);\n\t\t}\n\n\t\tpublic N5ConcurrentModificationException(final String message, final Throwable cause) {\n\n\t\t\tsuper(message, cause);\n\t\t}\n\n\t\tpublic 
N5ConcurrentModificationException(final Throwable cause) {\n\n\t\t\tsuper(cause);\n\t\t}\n\n\t\tprotected N5ConcurrentModificationException(\n\t\t\t\tfinal String message,\n\t\t\t\tfinal Throwable cause,\n\t\t\t\tfinal boolean enableSuppression,\n\t\t\t\tfinal boolean writableStackTrace) {\n\n\t\t\tsuper(message, cause, enableSuppression, writableStackTrace);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5FSReader.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.nio.file.FileSystems;\n\nimport com.google.gson.Gson;\nimport com.google.gson.GsonBuilder;\n\n/**\n * {@link N5Reader} implementation through {@link KeyValueAccess} with JSON\n * attributes parsed with {@link Gson}.\n *\n * @author Stephan Saalfeld\n * @author Igor Pisarev\n * @author Philipp Hanslovsky\n */\n//public class N5FSReader extends N5KeyValueReader {\npublic class N5FSReader extends N5KeyValueReader {\n\n\t/**\n\t * Opens an {@link N5FSReader} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t *\n\t * @param basePath\n\t *            N5 base path\n\t * @param gsonBuilder\n\t *            the gson builder\n\t * @param cacheMeta\n\t *            cache attributes and meta data\n\t *            Setting this to true avoids frequent reading and parsing of\n\t *            JSON encoded attributes and other meta data that requires\n\t *            accessing the store. This is most interesting for high latency\n\t *            backends. 
Changes of cached attributes and meta data by an\n\t *            independent writer on the same container will not be tracked.\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be read or does not exist, if the N5\n\t *             version of the container is not compatible with this\n\t *             implementation.\n\t */\n\tpublic N5FSReader(final String basePath, final GsonBuilder gsonBuilder, final boolean cacheMeta)\n\t\t\tthrows N5Exception {\n\n\t\tsuper(\n\t\t\t\tnew FileSystemKeyValueAccess(),\n\t\t\t\tbasePath,\n\t\t\t\tgsonBuilder,\n\t\t\t\tcacheMeta);\n\n\t\tif (!exists(\"/\"))\n\t\t\tthrow new N5Exception.N5IOException(\"No container exists at \" + basePath);\n\t}\n\n\t/**\n\t * Opens an {@link N5FSReader} at a given base path.\n\t *\n\t * @param basePath\n\t *            N5 base path\n\t * @param cacheMeta\n\t *            cache attributes and meta data\n\t *            Setting this to true avoids frequent reading and parsing of\n\t *            JSON encoded attributes and other meta data that requires\n\t *            accessing the store. This is most interesting for high latency\n\t *            backends. 
Changes of cached attributes and meta data by an\n\t *            independent writer on the same container will not be tracked.\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be read or does not exist, if the N5\n\t *             version of the container is not compatible with this\n\t *             implementation.\n\t */\n\tpublic N5FSReader(final String basePath, final boolean cacheMeta) throws N5Exception {\n\n\t\tthis(basePath, new GsonBuilder(), cacheMeta);\n\t}\n\n\t/**\n\t * Opens an {@link N5FSReader} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t *\n\t * @param basePath\n\t *            N5 base path\n\t * @param gsonBuilder\n\t *            the gson builder\n\t * @throws N5Exception\n\t *             if the base path cannot be read or does not exist, if the N5\n\t *             version of the container is not compatible with this\n\t *             implementation.\n\t */\n\tpublic N5FSReader(final String basePath, final GsonBuilder gsonBuilder) throws N5Exception {\n\n\t\tthis(basePath, gsonBuilder, false);\n\t}\n\n\t/**\n\t * Opens an {@link N5FSReader} at a given base path.\n\t *\n\t * @param basePath\n\t *            N5 base path\n\t * @throws N5Exception\n\t *             if the base path cannot be read or does not exist, if the N5\n\t *             version of the container is not compatible with this\n\t *             implementation.\n\t */\n\tpublic N5FSReader(final String basePath) throws N5Exception {\n\n\t\tthis(basePath, new GsonBuilder(), false);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5FSWriter.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.nio.file.FileSystems;\n\nimport com.google.gson.GsonBuilder;\n\n/**\n * Filesystem {@link N5Writer} implementation with version compatibility check.\n *\n * @author Stephan Saalfeld\n */\npublic class N5FSWriter extends N5KeyValueWriter {\n\n\t/**\n\t * Opens an {@link N5FSWriter} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t *\n\t * If the base path does not exist, it will be created.\n\t *\n\t * If the base path exists and if the N5 version of the container is\n\t * compatible with this implementation, the N5 version of this container\n\t * will be set to the current N5 version of this implementation.\n\t *\n\t * @param basePath\n\t *            n5 base path\n\t * @param gsonBuilder\n\t *            the gson builder\n\t * @param cacheAttributes\n\t *            cache attributes and meta data\n\t *            Setting this to true avoidsfrequent reading and parsing of\n\t *            JSON encoded attributes andother meta data that requires\n\t *            accessing the store. This ismost interesting for high latency\n\t *            backends. 
Changes of cached attributes and meta data by an\n\t *            independent writer on the same container will not be tracked.\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be written to or cannot be created,\n\t *             if the N5 version of the container is not compatible with\n\t *             this implementation.\n\t */\n\tpublic N5FSWriter(final String basePath, final GsonBuilder gsonBuilder, final boolean cacheAttributes)\n\t\t\tthrows N5Exception {\n\n\t\tsuper(\n\t\t\t\tnew FileSystemKeyValueAccess(),\n\t\t\t\tbasePath,\n\t\t\t\tgsonBuilder,\n\t\t\t\tcacheAttributes);\n\t}\n\n\t/**\n\t * Opens an {@link N5FSWriter} at a given base path.\n\t *\n\t * If the base path does not exist, it will be created.\n\t *\n\t * If the base path exists and if the N5 version of the container is\n\t * compatible with this implementation, the N5 version of this container\n\t * will be set to the current N5 version of this implementation.\n\t *\n\t * @param basePath\n\t *            base path\n\t * @param cacheAttributes\n\t *            cache attributes and meta data\n\t *            Setting this to true avoids frequent reading and parsing of\n\t *            JSON encoded attributes and other meta data that requires\n\t *            accessing the store. This is most interesting for high latency\n\t *            backends. 
Changes of cached attributes and meta data by an\n\t *            independent writer on the same container will not be tracked.\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be written to or cannot be created,\n\t *             if the N5 version of the container is not compatible with\n\t *             this implementation.\n\t */\n\tpublic N5FSWriter(final String basePath, final boolean cacheAttributes) throws N5Exception {\n\n\t\tthis(basePath, new GsonBuilder(), cacheAttributes);\n\t}\n\n\t/**\n\t * Opens an {@link N5FSWriter} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t * <p>\n\t * If the base path does not exist, it will be created.\n\t * </p>\n\t * <p>\n\t * If the base path exists and if the N5 version of the container is\n\t * compatible with this implementation, the N5 version of this container\n\t * will be set to the current N5 version of this implementation.\n\t * </p>\n\t *\n\t * @param basePath\n\t *            base path\n\t * @param gsonBuilder\n\t *            gson builder\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be written to or cannot be created,\n\t *             if the N5 version of the container is not compatible with\n\t *             this implementation.\n\t */\n\tpublic N5FSWriter(final String basePath, final GsonBuilder gsonBuilder) throws N5Exception {\n\n\t\tthis(basePath, gsonBuilder, false);\n\t}\n\n\t/**\n\t * Opens an {@link N5FSWriter} at a given base path.\n\t * <p>\n\t * If the base path does not exist, it will be created.\n\t * </p>\n\t * <p>\n\t * If the base path exists and if the N5 version of the container is\n\t * compatible with this implementation, the N5 version of this container\n\t * will be set to the current N5 version of this implementation.\n\t * </p>\n\t *\n\t * @param basePath\n\t *            n5 base path\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be written to or cannot be 
created,\n\t *             if the N5 version of the container is not compatible with\n\t *             this implementation.\n\t */\n\tpublic N5FSWriter(final String basePath) throws N5Exception {\n\n\t\tthis(basePath, new GsonBuilder());\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5KeyValueReader.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\n\nimport com.google.gson.JsonElement;\nimport org.janelia.saalfeldlab.n5.cache.N5JsonCache;\n\nimport com.google.gson.Gson;\nimport com.google.gson.GsonBuilder;\n\n/**\n * {@link N5Reader} implementation through {@link KeyValueAccess} with JSON\n * attributes parsed with {@link Gson}.\n *\n * @author Stephan Saalfeld\n * @author Igor Pisarev\n * @author Philipp Hanslovsky\n */\npublic class N5KeyValueReader implements CachedGsonKeyValueN5Reader {\n\n\tpublic static final String ATTRIBUTES_JSON = \"attributes.json\";\n\n\tprotected final KeyValueAccess keyValueAccess;\n\n\tprotected final Gson gson;\n\tprotected final boolean cacheMeta;\n\tprotected URI uri;\n\n\tprivate final N5JsonCache cache;\n\n\t/**\n\t * Opens an {@link N5KeyValueReader} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t *\n\t * @param keyValueAccess\n\t * \t\t\t  the KeyValueAccess backend used\n\t * @param basePath\n\t *            N5 base path\n\t * @param gsonBuilder\n\t * \t\t\t  the GsonBuilder\n\t * @param cacheMeta\n\t *            cache attributes and meta data\n\t *            Setting this to true avoidsfrequent reading and parsing of\n\t *            JSON encoded attributes andother meta data that requires\n\t *            accessing the store. This ismost interesting for high latency\n\t *            backends. 
Changes of cached attributes and meta data by an\n\t *            independent writer will not be tracked.\n\t *\n\t * @throws N5Exception\n\t *             if the base path cannot be read or does not exist, if the N5\n\t *             version of the container is not compatible with this\n\t *             implementation.\n\t */\n\tpublic N5KeyValueReader(\n\t\t\tfinal KeyValueAccess keyValueAccess,\n\t\t\tfinal String basePath,\n\t\t\tfinal GsonBuilder gsonBuilder,\n\t\t\tfinal boolean cacheMeta)\n\t\t\tthrows N5Exception {\n\n\t\tthis(true, keyValueAccess, basePath, gsonBuilder, cacheMeta, true);\n\t}\n\n\t/**\n\t * Opens an {@link N5KeyValueReader} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t *\n\t * @param checkVersion\n\t *            the version check\n\t * @param keyValueAccess\n\t *            the backend KeyValueAccess used\n\t * @param basePath\n\t *            base path\n\t * @param gsonBuilder\n\t *            the GsonBuilder\n\t * @param cacheMeta\n\t *            cache attributes and meta data. Setting this to true avoids\n\t *            frequent reading and parsing of JSON encoded attributes and\n\t *            other meta data that requires accessing the store. This is\n\t *            most interesting for high latency backends. 
Changes of cached\n\t *            attributes and meta data by an independent writer will not be\n\t *            tracked.\n\t * @param checkExists\n\t *            if true, an N5IOException will be thrown if a container does\n\t *            not exist at the specified location\n\t * @throws N5Exception\n\t *             if the base path cannot be read or does not exist, if the N5\n\t *             version of the container is not compatible with this\n\t *             implementation.\n\t */\n\tprotected N5KeyValueReader(\n\t\t\tfinal boolean checkVersion,\n\t\t\tfinal KeyValueAccess keyValueAccess,\n\t\t\tfinal String basePath,\n\t\t\tfinal GsonBuilder gsonBuilder,\n\t\t\tfinal boolean cacheMeta,\n\t\t\tfinal boolean checkExists)\n\t\t\tthrows N5Exception {\n\n\t\tthis.keyValueAccess = keyValueAccess;\n\t\tthis.gson = registerGson(gsonBuilder).create();\n\t\tthis.cacheMeta = cacheMeta;\n\n\t\tif (this.cacheMeta)\n\t\t\tthis.cache = newCache();\n\t\telse\n\t\t\tthis.cache = null;\n\n\t\ttry {\n\t\t\turi = keyValueAccess.uri(basePath);\n\t\t} catch (final URISyntaxException e) {\n\t\t\tthrow new N5Exception(e);\n\t\t}\n\n\t\tboolean versionFound = false;\n\t\tif (checkVersion) {\n\t\t\t/* Existence checks, if any, go in subclasses */\n\t\t\t/* Check that version (if there is one) is compatible. 
*/\n\t\t\tfinal Version version = getVersion();\n\t\t\tversionFound = !version.equals(NO_VERSION);\n\t\t\tif (!VERSION.isCompatible(version))\n\t\t\t\tthrow new N5Exception.N5IOException(\n\t\t\t\t\t\"Incompatible version \" + version + \" (this is \" + VERSION + \").\");\n\t\t}\n\n\t\t// if a version was found, the container exists - don't need to check again\n\t\tif (checkExists && (!versionFound && !inferExistence(\"/\")))\n\t\t\tthrow new N5Exception.N5IOException(\"No container exists at \" + basePath);\n\t}\n\n\tprivate boolean inferExistence(String path) {\n\n\t\tfinal JsonElement attributes = getAttributes(path);\n\t\treturn attributes != null || exists(path);\n\t}\n\n\tprotected GsonBuilder registerGson(final GsonBuilder gsonBuilder) {\n\n\t\tgsonBuilder.registerTypeAdapter(DataType.class, new DataType.JsonAdapter());\n\t\tgsonBuilder.registerTypeHierarchyAdapter(Compression.class, CompressionAdapter.getJsonAdapter());\n\t\tgsonBuilder.registerTypeHierarchyAdapter(DatasetAttributes.class, DatasetAttributes.getJsonAdapter());\n\t\tgsonBuilder.disableHtmlEscaping();\n\t\treturn gsonBuilder;\n\t}\n\n\t@Override\n\tpublic String getAttributesKey() {\n\n\t\treturn ATTRIBUTES_JSON;\n\t}\n\n\t@Override\n\tpublic Gson getGson() {\n\n\t\treturn gson;\n\t}\n\n\t@Override\n\tpublic KeyValueAccess getKeyValueAccess() {\n\n\t\treturn keyValueAccess;\n\t}\n\n\t@Override\n\tpublic URI getURI() {\n\n\t\treturn uri;\n\t}\n\n\t@Override\n\tpublic boolean cacheMeta() {\n\n\t\treturn cacheMeta;\n\t}\n\n\t@Override\n\tpublic N5JsonCache getCache() {\n\n\t\treturn this.cache;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5KeyValueWriter.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport com.google.gson.GsonBuilder;\n\n/**\n * Filesystem {@link N5Writer} implementation with version compatibility check.\n *\n * @author Stephan Saalfeld\n */\npublic class N5KeyValueWriter extends N5KeyValueReader implements CachedGsonKeyValueN5Writer {\n\n\t/**\n\t * Opens an {@link N5KeyValueWriter} at a given base path with a custom\n\t * {@link GsonBuilder} to support custom attributes.\n\t *\n\t * <p>\n\t * If the base path does not exist, it will be created.\n\t * <p>\n\t * If the base path exists and if the N5 version of the container is\n\t * compatible with this implementation, the N5 version of this container\n\t * will be set to the current N5 version of this implementation.\n\t *\n\t * @param keyValueAccess\n\t * \t\t\t  the backend key value access to use\n\t * @param basePath\n\t *            n5 base path\n\t * @param gsonBuilder\n\t *            the gson builder\n\t * @param cacheAttributes\n\t *            Setting this to true avoids frequent reading and parsing of\n\t *            JSON encoded attributes, this is most interesting for high\n\t *            latency file systems. 
Changes of attributes by an independent\n\t *            writer will not be tracked.\n\t * @throws N5Exception\n\t *             if the base path cannot be written to or cannot be created,\n\t *             if the N5 version of the container is not compatible with\n\t *             this implementation.\n\t */\n\tpublic N5KeyValueWriter(\n\t\t\tfinal KeyValueAccess keyValueAccess,\n\t\t\tfinal String basePath,\n\t\t\tfinal GsonBuilder gsonBuilder,\n\t\t\tfinal boolean cacheAttributes)\n\t\t\tthrows N5Exception {\n\n\t\tsuper(false, keyValueAccess, basePath, gsonBuilder, cacheAttributes, false);\n\n\t\tVersion version = null;\n\t\ttry {\n\t\t\tversion = getVersion();\n\t\t\tif (!VERSION.isCompatible(version))\n\t\t\t\tthrow new N5Exception.N5IOException(\"Incompatible version \" + version + \" (this is \" + VERSION + \").\");\n\t\t} catch (final NullPointerException e) {}\n\n\t\tif (version == null || version.equals(new Version(0, 0, 0, \"\"))) {\n\t\t\tcreateGroup(\"/\");\n\t\t\tsetVersion(\"/\");\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5Reader.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.Serializable;\nimport java.io.UncheckedIOException;\nimport java.lang.reflect.Type;\nimport java.net.URI;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Future;\nimport java.util.concurrent.LinkedBlockingQueue;\nimport java.util.function.Predicate;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\n\n/**\n * A simple structured container for hierarchies of chunked\n * n-dimensional datasets and attributes.\n *\n * @author Stephan Saalfeld\n * @see \"https://github.com/axtimwalde/n5\"\n */\npublic interface N5Reader extends AutoCloseable {\n\n\tclass Version {\n\n\t\tprivate final int major;\n\t\tprivate final int minor;\n\t\tprivate final int patch;\n\t\tprivate final String suffix;\n\n\t\tpublic Version(\n\t\t\t\tfinal int major,\n\t\t\t\tfinal int minor,\n\t\t\t\tfinal int patch,\n\t\t\t\tfinal String rest) {\n\n\t\t\tthis.major = major;\n\t\t\tthis.minor = minor;\n\t\t\tthis.patch = patch;\n\t\t\tthis.suffix = rest;\n\t\t}\n\n\t\tpublic Version(\n\t\t\t\tfinal int major,\n\t\t\t\tfinal int minor,\n\t\t\t\tfinal int patch) {\n\n\t\t\tthis(major, minor, patch, \"\");\n\t\t}\n\n\t\t/**\n\t\t * Creates a version from a SemVer compatible version string.\n\t\t * <p>\n\t\t * If the version string is null or not a SemVer version, this\n\t\t * version will be \"0.0.0\"\n\t\t * </p>\n\t\t *\n\t\t * @param versionString\n\t\t *            the string representation of the version\n\t\t */\n\t\tpublic Version(final String versionString) {\n\n\t\t\tboolean isSemVer = false;\n\t\t\tif (versionString != null) {\n\t\t\t\tfinal Matcher matcher = 
Pattern.compile(\"(\\\\d+)(\\\\.(\\\\d+))?(\\\\.(\\\\d+))?(.*)\").matcher(versionString);\n\t\t\t\tisSemVer = matcher.find();\n\t\t\t\tif (isSemVer) {\n\t\t\t\t\tmajor = Integer.parseInt(matcher.group(1));\n\t\t\t\t\tfinal String minorString = matcher.group(3);\n\t\t\t\t\tif (!minorString.equals(\"\"))\n\t\t\t\t\t\tminor = Integer.parseInt(minorString);\n\t\t\t\t\telse\n\t\t\t\t\t\tminor = 0;\n\t\t\t\t\tfinal String patchString = matcher.group(5);\n\t\t\t\t\tif (!patchString.equals(\"\"))\n\t\t\t\t\t\tpatch = Integer.parseInt(patchString);\n\t\t\t\t\telse\n\t\t\t\t\t\tpatch = 0;\n\t\t\t\t\tsuffix = matcher.group(6);\n\t\t\t\t} else {\n\t\t\t\t\tmajor = 0;\n\t\t\t\t\tminor = 0;\n\t\t\t\t\tpatch = 0;\n\t\t\t\t\tsuffix = \"\";\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tmajor = 0;\n\t\t\t\tminor = 0;\n\t\t\t\tpatch = 0;\n\t\t\t\tsuffix = \"\";\n\t\t\t}\n\t\t}\n\n\t\tpublic final int getMajor() {\n\n\t\t\treturn major;\n\t\t}\n\n\t\tpublic final int getMinor() {\n\n\t\t\treturn minor;\n\t\t}\n\n\t\tpublic final int getPatch() {\n\n\t\t\treturn patch;\n\t\t}\n\n\t\tpublic final String getSuffix() {\n\n\t\t\treturn suffix;\n\t\t}\n\n\t\t@Override\n\t\tpublic String toString() {\n\n\t\t\tfinal StringBuilder s = new StringBuilder();\n\t\t\ts.append(major);\n\t\t\ts.append(\".\");\n\t\t\ts.append(minor);\n\t\t\ts.append(\".\");\n\t\t\ts.append(patch);\n\t\t\ts.append(suffix);\n\n\t\t\treturn s.toString();\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean equals(final Object other) {\n\n\t\t\tif (other instanceof Version) {\n\t\t\t\tfinal Version otherVersion = (Version)other;\n\t\t\t\treturn (major == otherVersion.major) &\n\t\t\t\t\t\t(minor == otherVersion.minor) &\n\t\t\t\t\t\t(patch == otherVersion.patch) &\n\t\t\t\t\t\t(suffix.equals(otherVersion.suffix));\n\t\t\t} else\n\t\t\t\treturn false;\n\t\t}\n\n\t\t/**\n\t\t * Returns true if this implementation is compatible with a given\n\t\t * version.\n\t\t *\n\t\t * Currently, this means that the version is less than or equal to\n\t\t * 
4.X.X, i.e. the major version of the given version does not exceed\n\t\t * this version's major version.\n\t\t *\n\t\t * @param version\n\t\t *            the version\n\t\t * @return true if this version is compatible\n\t\t */\n\t\tpublic boolean isCompatible(final Version version) {\n\n\t\t\treturn version.getMajor() <= major;\n\t\t}\n\t}\n\n\t/**\n\t * Version reported when a container does not declare a version attribute,\n\t * i.e. 0.0.0.\n\t */\n\tVersion NO_VERSION = new Version(0, 0, 0);\n\n\t/**\n\t * SemVer version of this N5 spec.\n\t */\n\tVersion VERSION = new Version(4, 0, 0);\n\n\t/**\n\t * Version attribute key.\n\t */\n\tString VERSION_KEY = \"n5\";\n\n\t/**\n\t * Get the SemVer version of this container as specified in the 'version'\n\t * attribute of the root group.\n\t *\n\t * If no version is specified or the version string does not conform to\n\t * the SemVer format, 0.0.0 will be returned. For incomplete versions,\n\t * such as 1.2, the missing elements are filled with 0, i.e. 1.2.0 in this\n\t * case.\n\t *\n\t * @return the version\n\t * @throws N5Exception\n\t *             the exception\n\t */\n\tdefault Version getVersion() throws N5Exception {\n\n\t\treturn new Version(getAttribute(\"/\", VERSION_KEY, String.class));\n\t}\n\n\t/**\n\t * Returns the URI of the container's root.\n\t *\n\t * @return the base path URI\n\t */\n\tURI getURI();\n\n\t/**\n\t * Reads an attribute.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @param key\n\t *            the key\n\t * @param clazz\n\t *            attribute class\n\t * @param <T>\n\t *            the attribute type\n\t * @return the attribute\n\t * @throws N5Exception\n\t *             the exception\n\t */\n\t<T> T getAttribute(\n\t\t\tString pathName,\n\t\t\tString key,\n\t\t\tClass<T> clazz) throws N5Exception;\n\n\t/**\n\t * Reads an attribute.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @param key\n\t *            the key\n\t * @param type\n\t *            attribute Type (use this for specifying generic types)\n\t * @param <T>\n\t *            the attribute type\n\t * @return the attribute\n\t * @throws 
N5Exception\n\t *             the exception\n\t */\n\t<T> T getAttribute(\n\t\t\tString pathName,\n\t\t\tString key,\n\t\t\tType type) throws N5Exception;\n\n\t/**\n\t * Get mandatory dataset attributes.\n\t *\n\t * @param pathName\n\t *            dataset path\n\t * @return dataset attributes or null if either dimensions or dataType are\n\t *         not set\n\t * @throws N5Exception\n\t *             the exception\n\t */\n\tDatasetAttributes getDatasetAttributes(String pathName) throws N5Exception;\n\n\t/**\n\t * Some implementations may need to convert arbitrary DatasetAttributes to their specific equivalent variant.\n\t * Ideally, this method would be `protected`, but that's not valid for the interface. The default implementation\n\t * is the identity (returns the input DatasetAttributes unchanged).\n\t * <p>\n\t * The returned DatasetAttributes is not guaranteed to be a unique instance.\n\t *\n\t * @param attributes\n\t * \t\t\t\tto convert\n\t * @return the converted attributes\n\t */\n\tdefault DatasetAttributes getConvertedDatasetAttributes(final DatasetAttributes attributes) {\n\t\treturn attributes;\n\t}\n\n\n\t/**\n\t * Reads a chunk as a {@link DataBlock}.\n\t *\n\t * @param <T>\n\t *            the DataBlock data type\n\t * @param pathName\n\t *            dataset path\n\t * @param datasetAttributes\n\t *            the dataset attributes\n\t * @param gridPosition\n\t *            the grid position\n\t * @return the data block\n\t * @throws N5Exception\n\t *             the exception\n\t */\n\t<T> DataBlock<T> readChunk(\n\t\t\tString pathName,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong... 
gridPosition) throws N5Exception;\n\n\t/**\n\t * Reads multiple chunks as {@link DataBlock}s.\n\t * <p>\n\t * Implementations may optimize / batch read operations when possible, e.g.\n\t * in the case that the datasets are sharded.\n\t *\n\t * @param <T>\n\t *            the DataBlock data type\n\t * @param pathName\n\t *            dataset path\n\t * @param datasetAttributes\n\t *            the dataset attributes\n\t * @param gridPositions\n\t *            a list of grid positions\n\t * @return a list of data blocks\n\t * @throws N5Exception\n\t *             the exception\n\t */\n\tdefault <T> List<DataBlock<T>> readChunks(\n\t\t\tfinal String pathName,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal List<long[]> gridPositions) throws N5Exception {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\tfinal ArrayList<DataBlock<T>> blocks = new ArrayList<>();\n\t\tfor( final long[] p : gridPositions )\n\t\t\tblocks.add(readChunk(pathName, convertedDatasetAttributes, p));\n\n\t\treturn blocks;\n\t}\n\n\t/**\n\t * Reads a block, returning a {@link DataBlock}. Will be a chunk or shard (if the dataset is sharded).\n\t * <p>\n\t * A block is the highest (coarsest) level of the dataset's {@link NestedGrid}\n\t * This method's behavior is identical to {@link #readChunk} for un-sharded datasets.\n\t *\n\t * @param <T>\n\t *            the DataBlock data type\n\t * @param pathName\n\t *            dataset path\n\t * @param datasetAttributes\n\t *            the dataset attributes\n\t * @param gridPosition\n\t *            the position in the block grid\n\t * @return the data block\n\t * @throws N5Exception\n\t *             the exception\n\t *\n\t * @see DatasetAttributes#getNestedBlockGrid()\n\t */\n\t<T> DataBlock<T> readBlock(\n\t\t\tString pathName,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong... 
gridPosition) throws N5Exception;\n\n\t/**\n\t * Checks if a block exists at the given grid position without reading the data.\n\t * <p>\n\t * A block is the highest (coarsest) level of the dataset's {@link\n\t * NestedGrid}, that is, a shard if the dataset is sharded, or a chunk\n\t * otherwise.\n\t * <p>\n\t * This method only checks for the presence of the key value for the gridPosition, it does not\n\t * read or validate the contents. As a result, this method refers to chunks in un-sharded datasets\n\t * or shards in sharded datasets.\n\t *\n\t * @param pathName\n\t *            dataset path\n\t * @param datasetAttributes\n\t *            the dataset attributes\n\t * @param gridPosition\n\t *            the block grid position\n\t * @return true if the block file exists\n\t * @throws N5Exception\n\t *             the exception\n\t */\n\tboolean blockExists(\n\t\t\tString pathName,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong... gridPosition) throws N5Exception;\n\n\t/**\n\t * Load a {@link DataBlock} as a {@link Serializable}. The offset is given\n\t * in\n\t * {@link DataBlock} grid coordinates.\n\t *\n\t * @param dataset\n\t *            the dataset path\n\t * @param attributes\n\t *            the dataset attributes\n\t * @param <T>\n\t *            the data block type\n\t * @param gridPosition\n\t *            the grid position\n\t * @return the data block\n\t * @throws N5Exception\n\t *             the exception\n\t * @throws ClassNotFoundException\n\t *             the class not found exception\n\t */\n\tdefault <T> T readSerializedBlock(\n\t\t\tfinal String dataset,\n\t\t\tfinal DatasetAttributes attributes,\n\t\t\tfinal long... 
gridPosition) throws N5Exception, ClassNotFoundException {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(attributes);\n\t\tfinal DataBlock<byte[]> block = readChunk(dataset, convertedDatasetAttributes, gridPosition);\n\t\tif (block == null)\n\t\t\treturn null;\n\n\t\tfinal ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(block.getData());\n\t\ttry (ObjectInputStream in = new ObjectInputStream(byteArrayInputStream)) {\n\t\t\treturn (T) in.readObject();\n\t\t} catch (final IOException | UncheckedIOException e) {\n\t\t\tthrow new N5Exception.N5IOException(e);\n\t\t}\n\t}\n\n\t/**\n\t * Test whether a group or dataset exists at a given path.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @return true if the path exists\n\t */\n\tboolean exists(String pathName);\n\n\t/**\n\t * Test whether a dataset exists at a given path.\n\t *\n\t * @param pathName\n\t *            dataset path\n\t * @return true if a dataset exists\n\t * @throws N5Exception\n\t *             an exception is thrown if the existence of a dataset at the given path cannot be determined.\n\t */\n\tdefault boolean datasetExists(final String pathName) throws N5Exception {\n\n\t\treturn exists(pathName) && getDatasetAttributes(pathName) != null;\n\t}\n\n\t/**\n\t * List all groups (including datasets) in a group.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @return list of children\n\t * @throws N5Exception\n\t *             an exception is thrown if pathName is not a valid group\n\t */\n\tString[] list(String pathName) throws N5Exception;\n\n\t/**\n\t * Recursively list all groups and datasets in the given path.\n\t * Only paths that satisfy the provided filter will be included, but the\n\t * children of paths that were excluded may be included (filter does not\n\t * apply to the subtree).\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @param filter\n\t *            filter for children to be 
included\n\t * @return list of child groups and datasets\n\t * @throws N5Exception\n\t * \t\tan exception is thrown if pathName is not a valid group\n\t */\n\tdefault String[] deepList(\n\t\t\tfinal String pathName,\n\t\t\tfinal Predicate<String> filter) throws N5Exception {\n\n\t\tfinal String groupSeparator = getGroupSeparator();\n\t\tfinal String normalPathName = pathName\n\t\t\t\t.replaceAll(\n\t\t\t\t\t\t\"(^\" + groupSeparator + \"*)|(\" + groupSeparator + \"*$)\",\n\t\t\t\t\t\t\"\");\n\n\t\tfinal List<String> absolutePaths = deepList(this, normalPathName, false, filter);\n\t\treturn absolutePaths\n\t\t\t\t.stream()\n\t\t\t\t.map(a -> a.replaceFirst(normalPathName + \"(\" + groupSeparator + \"?)\", \"\"))\n\t\t\t\t.filter(a -> !a.isEmpty())\n\t\t\t\t.toArray(String[]::new);\n\t}\n\n\t/**\n\t * Recursively list all groups and datasets in the given path.\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @return list of groups and datasets\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t */\n\tdefault String[] deepList(final String pathName) throws N5Exception {\n\n\t\treturn deepList(pathName, a -> true);\n\t}\n\n\t/**\n\t * Recursively list all datasets in the given path. Only paths that satisfy the\n\t * provided filter will be included, but the children of paths that were\n\t * excluded may be included (filter does not apply to the subtree).\n\t *\n\t * <p>\n\t * This method delivers the same results as\n\t * </p>\n\t *\n\t * <pre>\n\t * {@code\n\t * n5.deepList(prefix, a -> {\n\t * \ttry {\n\t * \t\treturn n5.datasetExists(a) && filter.test(a);\n\t * \t} catch (final N5Exception e) {\n\t * \t\treturn false;\n\t * \t}\n\t * });\n\t * }\n\t * </pre>\n\t * <p>\n\t * but will execute {@link #datasetExists(String)} only once per node. 
This can\n\t * be relevant for performance on high latency backends such as cloud stores.\n\t * </p>\n\t *\n\t * @param pathName base group path\n\t * @param filter   filter for datasets to be included\n\t * @return list of datasets\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t */\n\tdefault String[] deepListDatasets(\n\t\t\tfinal String pathName,\n\t\t\tfinal Predicate<String> filter) throws N5Exception {\n\n\t\tfinal String groupSeparator = getGroupSeparator();\n\t\tfinal String normalPathName = pathName\n\t\t\t\t.replaceAll(\n\t\t\t\t\t\t\"(^\" + groupSeparator + \"*)|(\" + groupSeparator + \"*$)\",\n\t\t\t\t\t\t\"\");\n\n\t\tfinal List<String> absolutePaths = deepList(this, normalPathName, true, filter);\n\t\treturn absolutePaths\n\t\t\t\t.stream()\n\t\t\t\t.map(a -> a.replaceFirst(normalPathName + \"(\" + groupSeparator + \"?)\", \"\"))\n\t\t\t\t.filter(a -> !a.isEmpty())\n\t\t\t\t.toArray(String[]::new);\n\t}\n\n\t/**\n\t * Recursively list all datasets in the given path.\n\t *\n\t * <p>\n\t * This method delivers the same results as\n\t * </p>\n\t *\n\t * <pre>\n\t * {@code\n\t * n5.deepList(prefix, a -> {\n\t * \ttry {\n\t * \t\treturn n5.datasetExists(a);\n\t * \t} catch (final N5Exception e) {\n\t * \t\treturn false;\n\t * \t}\n\t * });\n\t * }\n\t * </pre>\n\t * <p>\n\t * but will execute {@link #datasetExists(String)} only once per node. This can\n\t * be relevant for performance on high latency backends such as cloud\n\t * stores.\n\t * </p>\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @return list of datasets\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t */\n\tdefault String[] deepListDatasets(final String pathName) throws N5Exception {\n\n\t\treturn deepListDatasets(pathName, a -> true);\n\t}\n\n\t/**\n\t * Helper method to recursively list all groups and datasets. 
This method is not part of the\n\t * public API and is accessible only because Java 8 does not support private\n\t * interface methods yet.\n\t * <p>\n\t * TODO make private when committing to Java versions newer than 8\n\t *\n\t * @param n5           the n5 reader\n\t * @param pathName     the base group path\n\t * @param datasetsOnly true if only dataset paths should be returned\n\t * @param filter       a dataset filter\n\t * @return the list of all children\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t */\n\tstatic ArrayList<String> deepList(\n\t\t\tfinal N5Reader n5,\n\t\t\tfinal String pathName,\n\t\t\tfinal boolean datasetsOnly,\n\t\t\tfinal Predicate<String> filter) throws N5Exception {\n\n\t\tfinal ArrayList<String> children = new ArrayList<>();\n\t\tfinal boolean isDataset = n5.datasetExists(pathName);\n\n\t\tfinal boolean passDatasetTest = datasetsOnly && !isDataset;\n\t\tif (!passDatasetTest && filter.test(pathName))\n\t\t\tchildren.add(pathName);\n\n\t\tif (!isDataset) {\n\t\t\tfinal String groupSeparator = n5.getGroupSeparator();\n\t\t\tfinal String[] baseChildren = n5.list(pathName);\n\t\t\tfor (final String child : baseChildren)\n\t\t\t\tchildren.addAll(deepList(n5, pathName + groupSeparator + child, datasetsOnly, filter));\n\t\t}\n\n\t\treturn children;\n\t}\n\n\t/**\n\t * Recursively list all groups (including datasets) in the given group, in\n\t * parallel, using the given {@link ExecutorService}. 
Only paths that\n\t * satisfy\n\t * the provided filter will be included, but the children of paths that were\n\t * excluded may be included (filter does not apply to the subtree).\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @param filter\n\t *            filter for children to be included\n\t * @param executor\n\t *            executor service\n\t * @return list of datasets\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t * @throws ExecutionException\n\t *             the execution exception\n\t * @throws InterruptedException\n\t *             the interrupted exception\n\t */\n\tdefault String[] deepList(\n\t\t\tfinal String pathName,\n\t\t\tfinal Predicate<String> filter,\n\t\t\tfinal ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\tfinal String groupSeparator = getGroupSeparator();\n\t\tfinal String normalPathName = pathName.replaceAll(\"(^\" + groupSeparator + \"*)|(\" + groupSeparator + \"*$)\", \"\");\n\t\tfinal ArrayList<String> results = new ArrayList<String>();\n\t\tfinal LinkedBlockingQueue<Future<String>> datasetFutures = new LinkedBlockingQueue<>();\n\t\tdeepListHelper(this, normalPathName, false, filter, executor, datasetFutures);\n\n\t\twhile (!datasetFutures.isEmpty()) {\n\t\t\tfinal String result = datasetFutures.poll().get();\n\t\t\tif (result != null && !result.equals(normalPathName))\n\t\t\t\tresults.add(result.substring(normalPathName.length() + groupSeparator.length()));\n\t\t}\n\n\t\treturn results.toArray(new String[0]);\n\t}\n\n\t/**\n\t * Recursively list all groups (including datasets) in the given group, in\n\t * parallel, using the given {@link ExecutorService}.\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @param executor\n\t *            executor service\n\t * @return list of groups\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t * @throws 
ExecutionException\n\t *             the execution exception\n\t * @throws InterruptedException\n\t *             this exception is thrown if execution is interrupted\n\t */\n\tdefault String[] deepList(\n\t\t\tfinal String pathName,\n\t\t\tfinal ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\treturn deepList(pathName, a -> true, executor);\n\t}\n\n\t/**\n\t * Recursively list all datasets in the given group, in parallel, using the\n\t * given {@link ExecutorService}. Only paths that satisfy the provided\n\t * filter\n\t * will be included, but the children of paths that were excluded may be\n\t * included (filter does not apply to the subtree).\n\t *\n\t * <p>\n\t * This method delivers the same results as\n\t * </p>\n\t *\n\t * <pre>\n\t * {@code\n\t * n5.deepList(prefix, a -> {\n\t * \ttry {\n\t * \t\treturn n5.datasetExists(a) && filter.test(a);\n\t * \t} catch (final N5Exception e) {\n\t * \t\treturn false;\n\t * \t}\n\t * }, exec);\n\t * }\n\t * </pre>\n\t * <p>\n\t * but will execute {@link #datasetExists(String)} only once per node. 
This\n\t * can\n\t * be relevant for performance on high latency backends such as cloud\n\t * stores.\n\t * </p>\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @param filter\n\t *            filter for datasets to be included\n\t * @param executor\n\t *            executor service\n\t * @return list of datasets\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t * @throws ExecutionException\n\t *             the execution exception\n\t * @throws InterruptedException\n\t *             this exception is thrown if execution is interrupted\n\t */\n\tdefault String[] deepListDatasets(\n\t\t\tfinal String pathName,\n\t\t\tfinal Predicate<String> filter,\n\t\t\tfinal ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\tfinal String groupSeparator = getGroupSeparator();\n\t\tfinal String normalPathName = pathName.replaceAll(\"(^\" + groupSeparator + \"*)|(\" + groupSeparator + \"*$)\", \"\");\n\t\tfinal ArrayList<String> results = new ArrayList<String>();\n\t\tfinal LinkedBlockingQueue<Future<String>> datasetFutures = new LinkedBlockingQueue<>();\n\t\tdeepListHelper(this, normalPathName, true, filter, executor, datasetFutures);\n\n\t\tdatasetFutures.poll().get(); // skip self\n\t\twhile (!datasetFutures.isEmpty()) {\n\t\t\tfinal String result = datasetFutures.poll().get();\n\t\t\tif (result != null)\n\t\t\t\tresults.add(result.substring(normalPathName.length() + groupSeparator.length()));\n\t\t}\n\n\t\treturn results.toArray(new String[0]);\n\t}\n\n\t/**\n\t * Recursively list all datasets in the given group, in parallel, using the\n\t * given {@link ExecutorService}.\n\t *\n\t * <p>\n\t * This method delivers the same results as\n\t * </p>\n\t *\n\t * <pre>\n\t * {@code\n\t * n5.deepList(prefix, a -> {\n\t * \ttry {\n\t * \t\treturn n5.datasetExists(a);\n\t * \t} catch (final N5Exception e) {\n\t * \t\treturn false;\n\t * \t}\n\t * }, exec);\n\t * }\n\t * 
</pre>\n\t * <p>\n\t * but will execute {@link #datasetExists(String)} only once per node. This\n\t * can\n\t * be relevant for performance on high latency backends such as cloud\n\t * stores.\n\t * </p>\n\t *\n\t * @param pathName\n\t *            base group path\n\t * @param executor\n\t *            executor service\n\t * @return list of groups\n\t * @throws N5Exception\n\t * \t\t\t\tan exception is thrown if pathName is not a valid group\n\t * @throws ExecutionException\n\t *             the execution exception\n\t * @throws InterruptedException\n\t *             this exception is thrown if execution is interrupted\n\t */\n\tdefault String[] deepListDatasets(\n\t\t\tfinal String pathName,\n\t\t\tfinal ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\treturn deepListDatasets(pathName, a -> true, executor);\n\t}\n\n\t/**\n\t * Helper method for parallel deep listing. This method is not part of the\n\t * public API and is accessible only because Java 8 does not support private\n\t * interface methods yet.\n\t *\n\t * TODO make private when committing to Java versions newer than 8\n\t *\n\t * @param n5\n\t *            the n5 reader\n\t * @param path\n\t *            the base path\n\t * @param datasetsOnly\n\t *            true if only dataset paths should be returned\n\t * @param filter\n\t *            filter for datasets to be included\n\t * @param executor\n\t *            the executor service\n\t * @param datasetFutures\n\t *            result futures\n\t */\n\tstatic void deepListHelper(\n\t\t\tfinal N5Reader n5,\n\t\t\tfinal String path,\n\t\t\tfinal boolean datasetsOnly,\n\t\t\tfinal Predicate<String> filter,\n\t\t\tfinal ExecutorService executor,\n\t\t\tfinal LinkedBlockingQueue<Future<String>> datasetFutures) {\n\n\t\tfinal String groupSeparator = n5.getGroupSeparator();\n\n\t\tdatasetFutures.add(executor.submit(() -> {\n\n\t\t\tboolean isDataset = false;\n\t\t\ttry {\n\t\t\t\tisDataset = 
n5.datasetExists(path);\n\t\t\t} catch (final N5Exception e) {}\n\n\t\t\tif (!isDataset) {\n\t\t\t\tString[] children = null;\n\t\t\t\ttry {\n\t\t\t\t\tchildren = n5.list(path);\n\t\t\t\t\tfor (final String child : children) {\n\t\t\t\t\t\tfinal String fullChildPath = path + groupSeparator + child;\n\t\t\t\t\t\tdeepListHelper(n5, fullChildPath, datasetsOnly, filter, executor, datasetFutures);\n\t\t\t\t\t}\n\t\t\t\t} catch (final N5Exception e) {}\n\t\t\t}\n\t\t\tfinal boolean passDatasetTest = datasetsOnly && !isDataset;\n\t\t\treturn !passDatasetTest && filter.test(path) ? path : null;\n\t\t}));\n\t}\n\n\t/**\n\t * List all attributes and their class of a group.\n\t *\n\t * @param pathName\n\t *            group path\n\t * @return a map of attribute keys to their inferred class\n\t * @throws N5Exception if an error occurred during listing\n\t */\n\tMap<String, Class<?>> listAttributes(String pathName) throws N5Exception;\n\n\t/**\n\t * Returns the symbol that is used to separate nodes in a group path.\n\t *\n\t * @return the group separator\n\t */\n\tdefault String getGroupSeparator() {\n\n\t\treturn \"/\";\n\t}\n\n\t/**\n\t * Creates a group path by concatenating all nodes with the node separator\n\t * defined by {@link #getGroupSeparator()}. The string will not have a\n\t * leading or trailing node separator symbol.\n\t *\n\t * @param nodes a collection of child node names\n\t * @return the full group path\n\t */\n\tdefault String groupPath(final String... 
nodes) {\n\n\t\tif (nodes == null || nodes.length == 0)\n\t\t\treturn \"\";\n\n\t\tfinal String groupSeparator = getGroupSeparator();\n\t\tfinal StringBuilder builder = new StringBuilder(nodes[0]);\n\n\t\tfor (int i = 1; i < nodes.length; ++i) {\n\n\t\t\tbuilder.append(groupSeparator);\n\t\t\tbuilder.append(nodes[i]);\n\t\t}\n\n\t\treturn builder.toString();\n\t}\n\n\t/**\n\t * Default implementation of {@link AutoCloseable#close()} for all\n\t * implementations that do not hold any closeable resources.\n\t */\n\t@Override\n\tdefault void close() {}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5URI.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.Charset;\nimport java.nio.charset.CharsetDecoder;\nimport java.nio.charset.CoderResult;\nimport java.nio.charset.CodingErrorAction;\nimport java.nio.charset.StandardCharsets;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.Objects;\nimport java.util.concurrent.atomic.AtomicReference;\nimport java.util.function.Consumer;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\n/**\n * A {@link URI} for N5 containers, groups, datasets, and attributes.\n * <p>\n * Container paths are stored in the URI path. Group / dataset paths are stored in the URI query,\n * and attribute paths are stored in the URI fragment.\n */\npublic class N5URI {\n\n\tprivate static final Charset UTF8 = StandardCharsets.UTF_8;\n\tpublic static final Pattern ARRAY_INDEX = Pattern.compile(\"\\\\[([0-9]+)]\");\n\tfinal URI uri;\n\tprivate final String scheme;\n\tprivate final String container;\n\tprivate final String group;\n\tprivate final String attribute;\n\n\tpublic N5URI(final String uri) throws URISyntaxException {\n\n\t\tthis(encodeAsUri(uri));\n\t}\n\n\tpublic N5URI(final URI uri) {\n\n\t\tthis.uri = uri;\n\t\tscheme = uri.getScheme() == null ? 
null : uri.getScheme();\n\t\tfinal String schemeSpecificPartWithoutQuery = getSchemeSpecificPartWithoutQuery();\n\t\tif (uri.getScheme() == null) {\n\t\t\tcontainer = schemeSpecificPartWithoutQuery.replaceFirst(\"//\", \"\");\n\t\t} else {\n\t\t\tcontainer = uri.getScheme() + \":\" + schemeSpecificPartWithoutQuery;\n\t\t}\n\t\tgroup = uri.getQuery();\n\t\tattribute = decodeFragment(uri.getRawFragment());\n\t}\n\n\t/**\n\t * @return the container path\n\t */\n\tpublic String getContainerPath() {\n\n\t\treturn container;\n\t}\n\n\tpublic URI getURI() {\n\t\treturn uri;\n\t}\n\n\t/**\n\t * @return the group path, or root (\"/\") if none was provided\n\t */\n\tpublic String getGroupPath() {\n\n\t\treturn group != null ? group : \"/\";\n\t}\n\n\t/**\n\t * @return the normalized group path\n\t */\n\tpublic String normalizeGroupPath() {\n\n\t\treturn normalizeGroupPath(getGroupPath());\n\t}\n\n\t/**\n\t * @return the attribute path, or root (\"/\") if none was provided\n\t */\n\tpublic String getAttributePath() {\n\n\t\treturn attribute != null ? 
attribute : \"/\";\n\t}\n\n\t/**\n\t * @return the normalized attribute path\n\t */\n\tpublic String normalizeAttributePath() {\n\n\t\treturn normalizeAttributePath(getAttributePath());\n\t}\n\n\t/**\n\t * Parse this {@link N5URI} as a {@link LinkedAttributePathToken}.\n\t *\n\t * @see N5URI#getAttributePathTokens(String)\n\t * @return the linked attribute path token\n\t */\n\tpublic LinkedAttributePathToken<?> getAttributePathTokens() {\n\n\t\treturn getAttributePathTokens(normalizeAttributePath());\n\t}\n\n\t/**\n\t * Parses the {@link String normalizedAttributePath} to a list of\n\t * {@link LinkedAttributePathToken}.\n\t * This is useful for traversing or constructing a json representation of\n\t * the provided {@link String normalizedAttributePath}.\n\t * Note that {@link String normalizedAttributePath} should be normalized\n\t * prior to generating this list\n\t *\n\t * @param normalizedAttributePath\n\t *            to parse into {@link LinkedAttributePathToken}s\n\t * @return the head of the {@link LinkedAttributePathToken}s\n\t */\n\tpublic static LinkedAttributePathToken<?> getAttributePathTokens(final String normalizedAttributePath) {\n\n\t\tfinal String[] attributePathParts = normalizedAttributePath.replaceAll(\"^/\", \"\").split(\"(?<!\\\\\\\\)/\");\n\n\t\tif (attributePathParts.length == 0 || Arrays.stream(attributePathParts).allMatch(String::isEmpty))\n\t\t\treturn null;\n\n\t\tfinal AtomicReference<LinkedAttributePathToken<?>> firstTokenRef = new AtomicReference<>();\n\t\tfinal AtomicReference<LinkedAttributePathToken<?>> currentTokenRef = new AtomicReference<>();\n\t\tfinal Consumer<LinkedAttributePathToken<?>> updateCurrentToken = (newToken) -> {\n\t\t\tif (firstTokenRef.get() == null) {\n\t\t\t\tfirstTokenRef.set(newToken);\n\t\t\t\tcurrentTokenRef.set(firstTokenRef.get());\n\t\t\t} else {\n\t\t\t\tfinal LinkedAttributePathToken<?> currentToken = currentTokenRef.get();\n\t\t\t\tcurrentToken.childToken = 
newToken;\n\t\t\t\tcurrentTokenRef.set(newToken);\n\t\t\t}\n\t\t};\n\n\t\tfor (final String pathPart : attributePathParts) {\n\t\t\tfinal Matcher matcher = ARRAY_INDEX.matcher(pathPart);\n\t\t\tfinal LinkedAttributePathToken<?> newToken;\n\t\t\tif (matcher.matches()) {\n\t\t\t\tfinal int index = Integer.parseInt(matcher.group().replace(\"[\", \"\").replace(\"]\", \"\"));\n\t\t\t\tnewToken = new LinkedAttributePathToken.ArrayAttributeToken(index);\n\t\t\t} else {\n\t\t\t\tfinal String pathPartUnEscaped = pathPart.replaceAll(\"\\\\\\\\/\", \"/\").replaceAll(\"\\\\\\\\\\\\[\", \"[\");\n\t\t\t\tnewToken = new LinkedAttributePathToken.ObjectAttributeToken(pathPartUnEscaped);\n\t\t\t}\n\t\t\tupdateCurrentToken.accept(newToken);\n\t\t}\n\n\t\treturn firstTokenRef.get();\n\t}\n\n\tprivate String getSchemePart() {\n\n\t\treturn scheme == null ? \"\" : scheme + \"://\";\n\t}\n\n\tprivate String getContainerPart() {\n\n\t\treturn container;\n\t}\n\n\tprivate String getGroupPart() {\n\n\t\treturn group == null ? \"\" : \"?\" + group;\n\t}\n\n\tprivate String getAttributePart() {\n\n\t\treturn attribute == null ? \"\" : \"#\" + attribute;\n\t}\n\n\t@Override\n\tpublic String toString() {\n\n\t\treturn getContainerPart() + getGroupPart() + getAttributePart();\n\t}\n\n\tprivate String getSchemeSpecificPartWithoutQuery() {\n\n\t\t/* Why not substring \"?\"? 
*/\n\t\treturn uri.getSchemeSpecificPart().replace(\"?\" + uri.getQuery(), \"\");\n\t}\n\n\t/**\n\t * N5URI is always considered absolute if a scheme is provided.\n\t * If no scheme is provided, the N5URI is absolute if it starts with either\n\t * \"/\" or \"[A-Z]:\"\n\t *\n\t * @return if the path for this N5URI is absolute\n\t */\n\tpublic boolean isAbsolute() {\n\n\t\tif (scheme != null)\n\t\t\treturn true;\n\t\tfinal String path = uri.getPath();\n\t\tif (!path.isEmpty()) {\n\t\t\tfinal char char0 = path.charAt(0);\n\t\t\treturn char0 == '/' || (path.length() >= 2 && path.charAt(1) == ':' && char0 >= 'A' && char0 <= 'Z');\n\t\t}\n\t\treturn false;\n\t}\n\n\t/**\n\t * Generate a new N5URI which is the result of resolving {@link N5URI\n\t * relativeN5Url} to this {@link N5URI}.\n\t * If relativeN5Url is not relative to this N5URI, then the resulting N5URI\n\t * is equivalent to relativeN5Url.\n\t *\n\t * @param relativeN5Url\n\t *            N5URI to resolve against ourselves\n\t * @return the result of the resolution.\n\t * @throws URISyntaxException\n\t *             if the uri is malformed\n\t */\n\tpublic N5URI resolve(final N5URI relativeN5Url) throws URISyntaxException {\n\n\t\tfinal URI thisUri = uri;\n\t\tfinal URI relativeUri = relativeN5Url.uri;\n\n\t\tfinal StringBuilder newUri = new StringBuilder();\n\n\t\tif (relativeUri.getScheme() != null) {\n\t\t\treturn relativeN5Url;\n\t\t}\n\t\tfinal String thisScheme = thisUri.getScheme();\n\t\tif (thisScheme != null) {\n\t\t\tnewUri.append(thisScheme).append(\":\");\n\t\t}\n\n\t\tif (relativeUri.getAuthority() != null) {\n\t\t\tnewUri\n\t\t\t\t\t.append(relativeUri.getAuthority())\n\t\t\t\t\t.append(relativeUri.getPath())\n\t\t\t\t\t.append(relativeN5Url.getGroupPart())\n\t\t\t\t\t.append(relativeN5Url.getAttributePart());\n\t\t\treturn new N5URI(newUri.toString());\n\t\t}\n\t\tfinal String thisAuthority = thisUri.getAuthority();\n\t\tif (thisAuthority != null) 
{\n\t\t\tnewUri.append(\"//\").append(thisAuthority);\n\t\t}\n\n\t\tfinal String path = relativeUri.getPath();\n\t\tif (!path.isEmpty()) {\n\t\t\tif (!relativeN5Url.isAbsolute()) {\n\t\t\t\tnewUri.append(thisUri.getPath()).append('/');\n\t\t\t}\n\t\t\tnewUri\n\t\t\t\t\t.append(path)\n\t\t\t\t\t.append(relativeN5Url.getGroupPart())\n\t\t\t\t\t.append(relativeN5Url.getAttributePart());\n\t\t\treturn new N5URI(newUri.toString());\n\t\t}\n\t\tnewUri.append(thisUri.getPath());\n\n\t\tfinal String query = relativeUri.getQuery();\n\t\tif (query != null) {\n\t\t\tif (query.charAt(0) != '/' && thisUri.getQuery() != null) {\n\t\t\t\tnewUri.append(this.getGroupPart()).append('/');\n\t\t\t\tnewUri.append(relativeUri.getQuery());\n\t\t\t} else {\n\t\t\t\tnewUri.append(relativeN5Url.getGroupPart());\n\t\t\t}\n\t\t\tnewUri.append(relativeN5Url.getAttributePart());\n\t\t\treturn new N5URI(newUri.toString());\n\t\t}\n\t\tnewUri.append(this.getGroupPart());\n\n\t\tfinal String fragment = relativeUri.getFragment();\n\t\tif (fragment != null) {\n\t\t\tif (fragment.charAt(0) != '/' && thisUri.getFragment() != null) {\n\t\t\t\tnewUri.append(this.getAttributePart()).append('/');\n\t\t\t} else {\n\t\t\t\tnewUri.append(relativeN5Url.getAttributePart());\n\t\t\t}\n\n\t\t\treturn new N5URI(newUri.toString());\n\t\t}\n\t\tnewUri.append(this.getAttributePart());\n\n\t\treturn new N5URI(newUri.toString());\n\t}\n\n\t/**\n\t * Generate a new N5URI which is the result of resolving {@link URI\n\t * relativeUri} to this {@link N5URI}.\n\t * If relativeUri is not relative to this N5URI, then the resulting N5URI is\n\t * equivalent to relativeUri.\n\t *\n\t * @param relativeUri\n\t *            URI to resolve against ourselves\n\t * @return the result of the resolution.\n\t * @throws URISyntaxException\n\t *             if the uri is malformed\n\t */\n\tpublic N5URI resolve(final URI relativeUri) throws URISyntaxException {\n\n\t\treturn resolve(new N5URI(relativeUri));\n\t}\n\n\t/**\n\t * Generate a 
new N5URI which is the result of resolving {@link String\n\t * relativeString} to this {@link N5URI}.\n\t * If relativeString is not relative to this N5URI, then the resulting N5URI\n\t * is equivalent to relativeString.\n\t *\n\t * @param relativeString\n\t *            String to resolve against ourselves\n\t * @return the result of the resolution.\n\t * @throws URISyntaxException\n\t *             if the uri is malformed\n\t */\n\tpublic N5URI resolve(final String relativeString) throws URISyntaxException {\n\n\t\treturn resolve(new N5URI(relativeString));\n\t}\n\n\t/**\n\t * Normalize a POSIX path, resulting in removal of redundant \"/\", \"./\", and\n\t * resolution of relative \"../\".\n\t * <p>\n\t * NOTE: currently a private helper method only used by {@link N5URI#normalizeGroupPath(String)}.\n\t * \tIt's safe to do in that case since relative group paths should always be POSIX compliant.\n\t * \tA new helper method to understand other path types (e.g. Windows) may be necessary eventually.\n\t *\n\t * @param path\n\t *            to normalize\n\t * @return the normalized path\n\t */\n\tprivate static String normalizePath(String path) {\n\n\t\tpath = path == null ? 
\"\" : path;\n\t\tfinal char[] pathChars = path.toCharArray();\n\n\t\tfinal List<String> tokens = new ArrayList<>();\n\t\tfinal StringBuilder curToken = new StringBuilder();\n\t\tboolean escape = false;\n\t\tfor (final char character : pathChars) {\n\t\t\t/* Skip if we last saw escape */\n\t\t\tif (escape) {\n\t\t\t\tescape = false;\n\t\t\t\tcurToken.append(character);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t/* Check if we are escape character */\n\t\t\tif (character == '\\\\') {\n\t\t\t\tescape = true;\n\t\t\t} else if (character == '/') {\n\t\t\t\tif (tokens.isEmpty() && curToken.length() == 0) {\n\t\t\t\t\t/* If we are root, and the first token, then add the '/' */\n\t\t\t\t\tcurToken.append(character);\n\t\t\t\t}\n\n\t\t\t\t/*\n\t\t\t\t * The current token is complete, add it to the list, if it\n\t\t\t\t * isn't empty\n\t\t\t\t */\n\t\t\t\tfinal String newToken = curToken.toString();\n\t\t\t\tif (!newToken.isEmpty()) {\n\t\t\t\t\t/*\n\t\t\t\t\t * If our token is '..' then remove the last token instead\n\t\t\t\t\t * of adding a new one\n\t\t\t\t\t */\n\t\t\t\t\tif (newToken.equals(\"..\")) {\n\t\t\t\t\t\ttokens.remove(tokens.size() - 1);\n\t\t\t\t\t} else {\n\t\t\t\t\t\ttokens.add(newToken);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* reset for the next token */\n\t\t\t\tcurToken.setLength(0);\n\t\t\t} else {\n\t\t\t\tcurToken.append(character);\n\t\t\t}\n\t\t}\n\t\tfinal String lastToken = curToken.toString();\n\t\tif (!lastToken.isEmpty()) {\n\t\t\tif (lastToken.equals(\"..\")) {\n\t\t\t\ttokens.remove(tokens.size() - 1);\n\t\t\t} else {\n\t\t\t\ttokens.add(lastToken);\n\t\t\t}\n\t\t}\n\t\tif (tokens.isEmpty())\n\t\t\treturn \"\";\n\t\tString root = \"\";\n\t\tif (tokens.get(0).equals(\"/\")) {\n\t\t\ttokens.remove(0);\n\t\t\troot = \"/\";\n\t\t}\n\t\treturn root + tokens\n\t\t\t\t.stream()\n\t\t\t\t.filter(it -> !it.equals(\".\"))\n\t\t\t\t.filter(it -> !it.isEmpty())\n\t\t\t\t.reduce((l, r) -> l + \"/\" + r)\n\t\t\t\t.orElse(\"\");\n\t}\n\n\t/**\n\t * Normalize a group 
path relative to a container's root, resulting in\n\t * removal of redundant \"/\", \"./\", resolution of relative \"../\",\n\t * and removal of leading slashes.\n\t *\n\t * @param path\n\t *            to normalize\n\t * @return the normalized path\n\t */\n\tpublic static String normalizeGroupPath(final String path) {\n\n\t\t/*\n\t\t * Alternatively, could do something like the below in every\n\t\t * KeyValueReader implementation\n\t\t *\n\t\t * return keyValueAccess.relativize( N5URI.normalizeGroupPath(path),\n\t\t * basePath);\n\t\t *\n\t\t * has to be in the implementations, since KeyValueAccess doesn't have a\n\t\t * basePath.\n\t\t */\n\t\treturn normalizePath(path.startsWith(\"/\") || path.startsWith(\"\\\\\") ? path.substring(1) : path);\n\t}\n\n\tprivate enum N5UriPattern {\n\n\t\t/**\n\t\t * matches any `/` or `[N]` where `N` is non-negative\n\t\t */\n\t\tMULTI_PART_ATTRIBUTE(Pattern.compile(\".*((?<!\\\\\\\\)(/|\\\\[[0-9]+])).*\")),\n\t\t/**\n\t\t * matches `[N]` where `[` is the first character\n\t\t */\n\t\tATTRIBUTE_ARRAY_AT_START(Pattern.compile(\"^(?<array>\\\\[[0-9]+])\")),\n\t\t/**\n\t\t * matches `A[N]` where A is some non-empty preceding path\n\t\t */\n\t\tATTRIBUTE_ARRAY_EXCEPT_START(Pattern.compile(\"((?<!(^|\\\\\\\\))(?<array>\\\\[[0-9]+]))\")),\n\t\t/**\n\t\t * The following Pattern has 4 possible matches.\n\t\t * It is intended to be used to remove matching portions iteratively until no further matches are found:\n\t\t * <p>\n\t\t * The first 3 matches can remove redundant separators of the form:\n\t\t * <ul>\n\t\t * <li>(?<=/)/+ : `a///b` -> `a/b`</li>\n\t\t * <li>(?<=(/|^))(\\./)+ : `a/./b` -> `a/b`</li>\n\t\t * <li>((/|(?<=/))\\.)$ : `a/b/` -> `a/b`</li>\n\t\t * </ul>\n\t\t * The next match avoids removing `/` when it is NOT redundant (e.g. 
only character, or escaped):\n\t\t * <ul>\n\t\t *     <li>(?<!(^|\\\\))/$: `/` -> `/`, `/a/b/\\\\/` -> `/a/b/\\\\/`</li>\n\t\t * </ul>\n\t\t * The last match resolves relative paths:\n\t\t * <ul>\n\t\t *     <li>((?<=^/)|^|(?<=(/|^))[^/]+(?<!(/|(/|^)\\.\\.))/)\\.\\./?</li>\n\t\t *     <ul>\n\t\t *\t\t\t<li>`a/../b` -> `b`</li>\n\t\t *\t\t\t<li>`/a/../b` -> `/b`</li>\n\t\t *\t\t\t<li>`../a/../b` -> `b`</li>\n\t\t *\t\t\t<li>`/../a/../b` -> `/b`</li>\n\t\t *\t\t\t<li>`/../a/../../b` -> `/b`</li>\n\t\t *     </ul>\n\t\t * </ul>\n\t\t * <p>\n\t\t */\n\t\tRELATIVE_ATTRIBUTE_PARTS(Pattern.compile( \"((?<=/)/+|(?<=(/|^))(\\\\./)+|((/|(?<=/))\\\\.)$|(?<!(^|\\\\\\\\))/$|((?<=^/)|^|(?<=(/|^))[^/]+(?<!(/|(/|^)\\\\.\\\\.))/)\\\\.\\\\./?)\" ));\n\n\t\tprivate final Pattern pattern;\n\n\t\tN5UriPattern(Pattern pattern) {\n\t\t\tthis.pattern = pattern;\n\t\t}\n\n\t\tboolean matches(final String input) {\n\t\t\treturn pattern.matcher(input).matches();\n\t\t}\n\n\t\tString replaceAll(final String input, final String replacement) {\n\t\t\treturn pattern.matcher(input).replaceAll(replacement);\n\t\t}\n\n\t\tstatic String appendSlashAfterArrayStart(final String input) {\n\t\t\treturn ATTRIBUTE_ARRAY_AT_START.replaceAll(input, \"${array}/\");\n\t\t}\n\n\t\tstatic String addSlashAroundArrayExceptStart(final String input) {\n\t\t\treturn ATTRIBUTE_ARRAY_EXCEPT_START.replaceAll(input, \"/${array}/\");\n\t\t}\n\n\t\tstatic String removeRelativePathParts(final String path) {\n\t\t\tint prevStringLength = 0;\n\t\t\tString resolvedAttributePath = path;\n\t\t\twhile (prevStringLength != resolvedAttributePath.length()) {\n\t\t\t\tprevStringLength = resolvedAttributePath.length();\n\t\t\t\tresolvedAttributePath = RELATIVE_ATTRIBUTE_PARTS.replaceAll(resolvedAttributePath, \"\");\n\t\t\t}\n\t\t\treturn resolvedAttributePath;\n\t\t}\n\t}\n\n\t/**\n\t * Normalize the {@link String attributePath}.\n\t * <p>\n\t * Attribute paths have a few special characters:\n\t * <ul>\n\t * <li>\".\" which 
represents the current element</li>\n\t * <li>\"..\" which represents the previous element</li>\n\t * <li>\"/\" which is used to separate elements in the JSON tree</li>\n\t * <li>[N] where N is an integer, refers to an index in the previous element\n\t * in the tree; the previous element must be an array.\n\t * <p>\n\t * Note: [N] also separates the previous and following elements, regardless\n\t * of whether it is preceded by \"/\" or not.\n\t * </li>\n\t * <li>\"\\\" which is an escape character, indicating that the subsequent '/' or\n\t * '[N]' should not be interpreted as a path delimiter,\n\t * but as part of the current path name.</li>\n\t *\n\t * </ul>\n\t * <p>\n\t * When normalizing:\n\t * <ul>\n\t * <li>\"/\" are added before and after any indexing brackets [N]</li>\n\t * <li>any redundant \"/\" are removed</li>\n\t * <li>any relative \"..\" and \".\" are resolved</li>\n\t * </ul>\n\t * <p>\n\t * Examples of valid attribute paths, and their normalizations:\n\t * <ul>\n\t * <li>/a/b/c becomes /a/b/c</li>\n\t * <li>/a/b/c/ becomes /a/b/c</li>\n\t * <li>///a///b///c becomes /a/b/c</li>\n\t * <li>/a/././b/c becomes /a/b/c</li>\n\t * <li>/a/b[1]c becomes /a/b/[1]/c</li>\n\t * <li>/a/b/[1]c becomes /a/b/[1]/c</li>\n\t * <li>/a/b[1]/c becomes /a/b/[1]/c</li>\n\t * <li>/a/b[1]/c/.. 
becomes /a/b/[1]</li>\n\t * </ul>\n\t *\n\t * @param attributePath\n\t *            to normalize\n\t * @return the normalized attribute path\n\t */\n\tpublic static String normalizeAttributePath(final String attributePath) {\n\n\t\t/*\n\t\t * Short circuit if there are no non-escaped `/` or array indices (e.g.\n\t\t * [N] where N is a non-negative integer)\n\t\t */\n\t\tif (!N5UriPattern.MULTI_PART_ATTRIBUTE.matches(attributePath))\n\t\t\treturn attributePath;\n\n\t\t/* Add separator after arrays at the beginning `[10]b` -> `[10]/b` */\n\t\tfinal String attrPathPlusFirstIndexSeparator = N5UriPattern.appendSlashAfterArrayStart(attributePath);\n\n\t\t/*\n\t\t * Add separator before and after arrays not at the beginning `a[10]b`\n\t\t * -> `a/[10]/b`\n\t\t */\n\t\tfinal String attrPathPlusIndexSeparators = N5UriPattern.addSlashAroundArrayExceptStart(attrPathPlusFirstIndexSeparator);\n\n\t\t/* remove relative path parts like `./`, `../`, `a//b, `a/b/` */\n\t\treturn N5UriPattern.removeRelativePathParts(attrPathPlusIndexSeparators);\n\t}\n\n\t/**\n\t * If uri is a valid URI, just return it as a URI. Else, encode if possible.\n\t *\n\t * @param uri as String to get as URI\n\t * @return URI from input. 
Encoded if necessary\n\t * @throws N5Exception\n\t *             if the input could not be encoded as a URI\n\t */\n\tpublic static URI getAsUri(final String uri) throws N5Exception {\n\n\t\ttry {\n\t\t\treturn URI.create(uri);\n\t\t} catch (Exception ignore) {\n\t\t\ttry {\n\t\t\t\treturn N5URI.encodeAsUri(uri);\n\t\t\t} catch (URISyntaxException e) {\n\t\t\t\tthrow new N5Exception(\"Could not encode as URI: \" + uri, e);\n\t\t\t}\n\t\t}\n\t}\n\n\tpublic static URI encodeAsUriPath(final String path) {\n\t\ttry {\n\t\t\treturn new URI(null, null, path, null);\n\t\t} catch (Exception e) {\n\t\t\tthrow new IllegalArgumentException(\"Could not encode as URI path component:  \" + path, e);\n\t\t}\n\t}\n\n\t/**\n\t * Encode the input {@link String uri} so that illegal characters are\n\t * properly escaped prior to generating the resulting {@link URI}.\n\t *\n\t * @param uri\n\t *            to encode\n\t * @return the {@link URI} created from encoding the {@link String uri}\n\t * @throws URISyntaxException\n\t * \t\tif the provided String is not a valid URI\n\t */\n\tpublic static URI encodeAsUri(final String uri) throws URISyntaxException {\n\n\t\tif (uri.trim().length() == 0) {\n\t\t\t//TODO Caleb: ???\n\t\t\treturn new URI(uri);\n\t\t}\n\t\t/*\n\t\t * find last # symbol to split fragment on. 
If we don't remove it first,\n\t\t * then it will encode it, and not parse it separately\n\t\t * after we remove the temporary _N5 scheme\n\t\t */\n\t\tfinal int fragmentIdx = uri.lastIndexOf('#');\n\t\tfinal String uriWithoutFragment;\n\t\tfinal String fragment;\n\t\tif (fragmentIdx >= 0) {\n\t\t\turiWithoutFragment = uri.substring(0, fragmentIdx);\n\t\t\tfragment = uri.substring(fragmentIdx + 1);\n\t\t} else {\n\t\t\turiWithoutFragment = uri;\n\t\t\tfragment = null;\n\t\t}\n\t\t/* Edge case to handle when uriWithoutFragment is empty */\n\t\tfinal URI _n5Uri;\n\t\tif (uriWithoutFragment.length() == 0 && fragment != null && fragment.length() > 0) {\n\t\t\t_n5Uri = new URI(\"N5Internal\", \"//STAND_IN\", fragment);\n\t\t} else {\n\t\t\t_n5Uri = new URI(\"N5Internal\", uriWithoutFragment, fragment);\n\t\t}\n\n\t\tfinal URI n5Uri;\n\t\tif (fragment == null) {\n\t\t\tn5Uri = new URI(_n5Uri.getRawSchemeSpecificPart());\n\t\t} else {\n\t\t\tif (Objects.equals(_n5Uri.getPath(), \"\") && Objects.equals(_n5Uri.getAuthority(), \"STAND_IN\")) {\n\t\t\t\tn5Uri = new URI(\"#\" + _n5Uri.getRawFragment());\n\t\t\t} else {\n\t\t\t\tn5Uri = new URI(_n5Uri.getRawSchemeSpecificPart() + \"#\" + _n5Uri.getRawFragment());\n\t\t\t}\n\t\t}\n\t\treturn n5Uri;\n\t}\n\n\t/**\n\t * Generate an {@link N5URI} from a container, group, and attribute\n\t *\n\t * @param container\n\t *            of the N5Url\n\t * @param group\n\t *            of the N5Url\n\t * @param attribute\n\t *            of the N5Url\n\t * @return the {@link N5URI}\n\t * @throws URISyntaxException\n\t *             if the uri is malformed\n\t */\n\tpublic static N5URI from(\n\t\t\tfinal String container,\n\t\t\tfinal String group,\n\t\t\tfinal String attribute) throws URISyntaxException {\n\n\t\tfinal String containerPart = container != null ? container : \"\";\n\t\tfinal String groupPart = group != null ? \"?\" + group : \"\";\n\t\tfinal String attributePart = attribute != null ? 
\"#\" + attribute : \"\";\n\t\treturn new N5URI(containerPart + groupPart + attributePart);\n\t}\n\n\t/**\n\t * Intentionally copied from {@link URI} for internal use\n\t *\n\t * @see URI#decode(char)\n\t */\n\t@SuppressWarnings(\"JavadocReference\")\n\tprivate static int decode(final char c) {\n\n\t\tif ((c >= '0') && (c <= '9'))\n\t\t\treturn c - '0';\n\t\tif ((c >= 'a') && (c <= 'f'))\n\t\t\treturn c - 'a' + 10;\n\t\tif ((c >= 'A') && (c <= 'F'))\n\t\t\treturn c - 'A' + 10;\n\t\tassert false;\n\t\treturn -1;\n\t}\n\n\t/**\n\t * Intentionally copied from {@link URI} for internal use\n\t *\n\t * @see URI#decode(char, char)\n\t */\n\t@SuppressWarnings(\"JavadocReference\")\n\tprivate static byte decode(final char c1, final char c2) {\n\n\t\treturn (byte)(((decode(c1) & 0xf) << 4)\n\t\t\t\t| ((decode(c2) & 0xf) << 0));\n\t}\n\n\t/**\n\t * Modified from {@link URI#decode(String)} to ignore the listed exception,\n\t * where it doesn't decode escape values inside square braces.\n\t * <p>\n\t * As an example of the original implementation, a backslash inside a square\n\t * brace would be encoded to \"[%5C]\", and\n\t * when calling {@code decode(\"[%5C]\")} it would not decode to \"[\\]\" since\n\t * the encode escape sequence is inside square braces.\n\t * <p>\n\t * We keep all the decoding logic in this modified version, EXCEPT, that we\n\t * don't check for and ignore encoded sequences inside square braces.\n\t * <p>\n\t * Thus, {@code decode(\"[%5C]\")} -> \"[\\]\".\n\t *\n\t * @see URI#decode(char, char)\n\t */\n\t@SuppressWarnings(\"JavadocReference\")\n\tprivate static String decodeFragment(final String rawFragment) {\n\n\t\tif (rawFragment == null)\n\t\t\treturn rawFragment;\n\t\tfinal int n = rawFragment.length();\n\t\tif (n == 0)\n\t\t\treturn rawFragment;\n\t\tif (rawFragment.indexOf('%') < 0)\n\t\t\treturn rawFragment;\n\n\t\tfinal StringBuffer sb = new StringBuffer(n);\n\t\tfinal ByteBuffer bb = ByteBuffer.allocate(n);\n\t\tfinal CharBuffer cb = 
CharBuffer.allocate(n);\n\n\t\tfinal CharsetDecoder dec = UTF8\n\t\t\t\t.newDecoder()\n\t\t\t\t.onMalformedInput(CodingErrorAction.REPLACE)\n\t\t\t\t.onUnmappableCharacter(CodingErrorAction.REPLACE);\n\n\t\t// This is not horribly efficient, but it will do for now\n\t\tchar c = rawFragment.charAt(0);\n\n\t\tfor (int i = 0; i < n;) {\n\t\t\tassert c == rawFragment.charAt(i); // Loop invariant\n\t\t\tif (c != '%') {\n\t\t\t\tsb.append(c);\n\t\t\t\tif (++i >= n)\n\t\t\t\t\tbreak;\n\t\t\t\tc = rawFragment.charAt(i);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tbb.clear();\n\t\t\tfor (;;) {\n\t\t\t\tassert (n - i >= 2);\n\t\t\t\tbb.put(decode(rawFragment.charAt(++i), rawFragment.charAt(++i)));\n\t\t\t\tif (++i >= n)\n\t\t\t\t\tbreak;\n\t\t\t\tc = rawFragment.charAt(i);\n\t\t\t\tif (c != '%')\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbb.flip();\n\t\t\tcb.clear();\n\t\t\tdec.reset();\n\t\t\tCoderResult cr = dec.decode(bb, cb, true);\n\t\t\tassert cr.isUnderflow();\n\t\t\tcr = dec.flush(cb);\n\t\t\tassert cr.isUnderflow();\n\t\t\tsb.append(cb.flip());\n\t\t}\n\n\t\treturn sb.toString();\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/N5Writer.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.io.ObjectOutputStream;\nimport java.io.Serializable;\nimport java.io.UncheckedIOException;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\n\n/**\n * A simple structured container API for hierarchies of chunked\n * n-dimensional datasets and attributes.\n *\n * @author Stephan Saalfeld\n * @see \"https://github.com/axtimwalde/n5\"\n */\npublic interface N5Writer extends N5Reader {\n\n\t/**\n\t * Sets an attribute.\n\t *\n\t * @param groupPath group path\n\t * @param attributePath the key\n\t * @param attribute the attribute\n\t * @param <T> the attribute type\n\t * @throws N5Exception the exception\n\t */\n\tdefault <T> void setAttribute(\n\t\t\tfinal String groupPath,\n\t\t\tfinal String attributePath,\n\t\t\tfinal T attribute) throws N5Exception {\n\n\t\tsetAttributes(groupPath, Collections.singletonMap(attributePath, attribute));\n\t}\n\n\t/**\n\t * Sets a map of attributes.  The passed attributes are inserted into the\n\t * existing attribute tree.  
New attributes, including their parent\n\t * objects will be added, existing attributes whose paths are not included\n\t * will remain unchanged, and those whose paths are included will be overridden.\n\t *\n\t * @param groupPath group path\n\t * @param attributes the attribute map of attribute paths and values\n\t * @throws N5Exception the exception\n\t */\n\tvoid setAttributes(\n\t\t\tString groupPath,\n\t\t\tMap<String, ?> attributes) throws N5Exception;\n\n\t/**\n\t * Remove the attribute from group {@code groupPath} with key {@code attributePath}.\n\t *\n\t * @param groupPath group path\n\t * @param attributePath of attribute to remove\n\t * @return true if attribute removed, else false\n\t * @throws N5Exception the exception\n\t */\n\tboolean removeAttribute(String groupPath, String attributePath) throws N5Exception;\n\n\t/**\n\t * Remove the attribute from group {@code groupPath} with key {@code attributePath}\n\t * and type {@code T}.\n\t * <p>\n\t * If an attribute at {@code groupPath} and {@code attributePath} exists, but is\n\t * not of type {@code T}, it is not removed.\n\t *\n\t * @param groupPath group path\n\t * @param attributePath of attribute to remove\n\t * @param clazz of the attribute to remove\n\t * @param <T> of the attribute\n\t * @return the removed attribute, as {@code T}, or {@code null} if no\n\t *         matching attribute\n\t * @throws N5Exception if removing the attribute failed, parsing the attribute failed, or the attribute cannot be interpreted as {@code T}\n\t */\n\t<T> T removeAttribute(String groupPath, String attributePath, Class<T> clazz) throws N5Exception;\n\n\t/**\n\t * Remove attributes as provided by {@code attributePaths}.\n\t * <p>\n\t * If any element of {@code attributePaths} does not exist, it will be ignored.\n\t * If at least one attribute from {@code attributePaths} is removed, this will\n\t * return {@code true}.\n\t *\n\t * @param groupPath group path\n\t * @param attributePaths to remove\n\t * @return true if any of the listed attributes were removed\n\t * 
@throws N5Exception the exception\n\t */\n\tdefault boolean removeAttributes(final String groupPath, final List<String> attributePaths) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(groupPath);\n\t\tboolean removed = false;\n\t\tfor (final String attribute : attributePaths) {\n\t\t\tremoved |= removeAttribute(normalPath, N5URI.normalizeAttributePath(attribute));\n\t\t}\n\t\treturn removed;\n\t}\n\n\t/**\n\t * Sets mandatory dataset attributes.\n\t *\n\t * @param datasetPath dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @throws N5Exception the exception\n\t */\n\tdefault void setDatasetAttributes(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes) throws N5Exception {\n\n\t\tsetAttributes(datasetPath, getConvertedDatasetAttributes(datasetAttributes).asMap());\n\t}\n\n\t/**\n\t * Set the SemVer version of this container as specified in the\n\t * {@link N5Reader#VERSION_KEY} attribute of the root group. This default\n\t * implementation writes the version only if the current version is not\n\t * equal to {@link N5Reader#VERSION}.\n\t *\n\t * @throws N5Exception the exception\n\t */\n\tdefault void setVersion() throws N5Exception {\n\n\t\tif (!VERSION.equals(getVersion()))\n\t\t\tsetAttribute(\"/\", VERSION_KEY, VERSION.toString());\n\t}\n\n\t/**\n\t * Creates a group (directory).\n\t *\n\t * @param groupPath the path\n\t * @throws N5Exception the exception\n\t */\n\tvoid createGroup(String groupPath) throws N5Exception;\n\n\t/**\n\t * Removes a group or dataset (directory and all contained files).\n\t *\n\t * <p>\n\t * <code>{@link #remove(String) remove(\"\")}</code> or\n\t * <code>{@link #remove(String) remove(\"/\")}</code> will delete this N5\n\t * container. Please note that no checks for safety will be performed,\n\t * e.g. 
<code>{@link #remove(String) remove(\"..\")}</code> will try to\n\t * recursively delete the parent directory of this N5 container, which\n\t * fails only because it attempts to delete the parent directory before it\n\t * is empty.\n\t *\n\t * @param groupPath group path\n\t * @return true if removal was successful, false otherwise\n\t * @throws N5Exception the exception\n\t */\n\tboolean remove(String groupPath) throws N5Exception;\n\n\t/**\n\t * Removes the N5 container.\n\t *\n\t * @return true if removal was successful, false otherwise\n\t * @throws N5Exception the exception\n\t */\n\tdefault boolean remove() throws N5Exception {\n\n\t\treturn remove(\"/\");\n\t}\n\n\t/**\n\t * Creates a dataset. This does not create any data but the path and\n\t * mandatory attributes only. The returned DatasetAttributes should be used\n\t * for future read/write operations on this dataset. It may not be the same\n\t * DatasetAttributes object that was provided, depending on the implementation.\n\t *\n\t * @param datasetPath dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @return DatasetAttributes optimal attributes object to be used for read/write operations\n\t * @throws N5Exception the exception\n\t */\n\tdefault DatasetAttributes createDataset(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes) throws N5Exception {\n\n\t\tfinal String normalPath = N5URI.normalizeGroupPath(datasetPath);\n\t\tcreateGroup(normalPath);\n\t\tfinal DatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\tsetDatasetAttributes(normalPath, convertedDatasetAttributes);\n\t\treturn convertedDatasetAttributes;\n\t}\n\n\t/**\n\t * Creates a dataset. This does not create any data but the path and\n\t * mandatory attributes only. 
Returns the DatasetAttributes object to be\n\t * used for future read/write operations on this dataset.\n\t *\n\t * @param datasetPath dataset path\n\t * @param dimensions the dataset dimensions\n\t * @param blockSize the block size\n\t * @param dataType the data type\n\t * @param compression the compression\n\t * @return DatasetAttributes optimal attributes object to be used for read/write operations\n\t * @throws N5Exception the exception\n\t */\n\tdefault DatasetAttributes createDataset(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal long[] dimensions,\n\t\t\tfinal int[] blockSize,\n\t\t\tfinal DataType dataType,\n\t\t\tfinal Compression compression) throws N5Exception {\n\n\t\treturn createDataset(datasetPath, new DatasetAttributes(dimensions, blockSize, dataType, compression));\n\t}\n\n\t/**\n\t * Writes a chunk represented by a {@link DataBlock}.\n\t *\n\t * @param datasetPath dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param chunk the chunk as a DataBlock\n\t * @param <T> the data block data type\n\t * @throws N5Exception the exception\n\t */\n\t<T> void writeChunk(\n\t\t\tString datasetPath,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tDataBlock<T> chunk) throws N5Exception;\n\n\t/**\n\t * Write multiple chunks represented by {@link DataBlock}s, useful for aggregation.\n\t *\n\t * @param datasetPath dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param chunks the chunks\n\t * @param <T> the data block data type\n\t * @throws N5Exception the exception\n\t */\n\tdefault <T> void writeChunks(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal DataBlock<T>... 
chunks) throws N5Exception {\n\n\t\t// default method is naive\n\t\tDatasetAttributes convertedAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\tfor (DataBlock<T> block : chunks) {\n\t\t\twriteChunk(datasetPath, convertedAttributes, block);\n\t\t}\n\t}\n\n\t/**\n\t * Writes a block stored as a {@link DataBlock}.\n\t * <p>\n\t * A block is the highest (coarsest) level of the dataset's {@link NestedGrid},\n\t * that is, a shard if the dataset is sharded, or a chunk otherwise.\n\t * <p>\n\t * This method's behavior is identical to {@link #writeChunk} for un-sharded datasets.\n\t *\n\t * @param pathName dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param dataBlock the data block\n\t * @param <T> the data block data type\n\t * @throws N5Exception the exception\n\t *\n\t * @see DatasetAttributes#getNestedBlockGrid()\n\t */\n\t<T> void writeBlock(\n\t\t\tString pathName,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tDataBlock<T> dataBlock) throws N5Exception;\n\n\t@FunctionalInterface\n\tinterface DataBlockSupplier<T> {\n\n\t\t/**\n\t\t *\n\t\t * @param gridPos\n\t\t * \t\tgrid position of the data block to create\n\t\t * @param existingDataBlock\n\t\t * \t\texisting data to be merged into the new data block (may be {@code null})\n\t\t *\n\t\t * @return data block at the given gridPos\n\t\t */\n\t\tDataBlock<T> get(long[] gridPos, DataBlock<T> existingDataBlock);\n\t}\n\n\t/**\n\t * @param datasetPath the dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param min min pixel coordinate of region to write\n\t * @param size size in pixels of region to write\n\t * @param chunkSupplier is asked to create chunks within the given region\n\t * @param writeFully if false, merge existing data in shards/blocks that overlap the region boundary. 
if true, overwrite everything.\n\t * @throws N5Exception the exception\n\t */\n\t<T> void writeRegion(\n\t\t\tString datasetPath,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong[] min,\n\t\t\tlong[] size,\n\t\t\tDataBlockSupplier<T> chunkSupplier,\n\t\t\tboolean writeFully) throws N5Exception;\n\n\t/**\n\t * @param datasetPath the dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param min min pixel coordinate of region to write\n\t * @param size size in pixels of region to write\n\t * @param chunkSupplier is asked to create chunks within the given region\n\t * @param writeFully if false, merge existing data in shards/chunks that overlap the region boundary. if true, overwrite everything.\n\t * @param exec used to parallelize over blocks (chunks and shards)\n\t * @throws N5Exception the exception\n\t */\n\t<T> void writeRegion(\n\t\t\tString datasetPath,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong[] min,\n\t\t\tlong[] size,\n\t\t\tDataBlockSupplier<T> chunkSupplier,\n\t\t\tboolean writeFully,\n\t\t\tExecutorService exec) throws N5Exception, InterruptedException, ExecutionException;\n\n\t/**\n\t * Deletes the block at {@code gridPosition}.\n\t * <p>\n\t * A block is the highest (coarsest) level of the dataset's {@link NestedGrid},\n\t * that is, a shard if the dataset is sharded, or a chunk otherwise.\n\t * Note that {@code gridPosition} is in units of blocks at the highest\n\t * (coarsest) level, too.\n\t * <p>\n\t * This method's behavior is identical to {@link #deleteChunk} for un-sharded datasets.\n\t *\n\t * @param datasetPath dataset path\n\t * @param gridPosition position of block to be deleted\n\t * @throws N5Exception if the block exists but could not be deleted\n\t *\n\t * @return {@code true} if the block at {@code gridPosition} existed and was deleted.\n\t */\n\tdefault boolean deleteBlock(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal long... 
gridPosition) throws N5Exception {\n\t\tfinal DatasetAttributes datasetAttributes = getDatasetAttributes(datasetPath);\n\t\treturn deleteBlock(datasetPath, datasetAttributes, gridPosition);\n\t}\n\n\t/**\n\t * Deletes the block at {@code gridPosition}.\n\t * <p>\n\t * A block is the highest (coarsest) level of the dataset's {@link NestedGrid},\n\t * that is, a shard if the dataset is sharded, or a chunk otherwise.\n\t * Note that {@code gridPosition} is in units of blocks at the highest\n\t * (coarsest) level, too.\n\t * <p>\n\t * This method's behavior is identical to {@link #deleteChunk} for un-sharded datasets.\n\t *\n\t * @param datasetPath the dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param gridPosition position of block to be deleted\n\t * @throws N5Exception if the block exists but could not be deleted\n\t *\n\t * @return {@code true} if the block at {@code gridPosition} existed and was deleted.\n\t */\n\tboolean deleteBlock(\n\t\t\tString datasetPath,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong... gridPosition) throws N5Exception;\n\n\t/**\n\t * Deletes the chunk at {@code gridPosition}.\n\t * <p>\n\t * Note that {@code gridPosition} is in units of chunks.\n\t *\n\t * @param datasetPath dataset path\n\t * @param gridPosition position of chunk to be deleted\n\t * @throws N5Exception if the chunk exists but could not be deleted\n\t *\n\t * @return {@code true} if the chunk at {@code gridPosition} existed and was deleted.\n\t */\n\tdefault boolean deleteChunk(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal long... 
gridPosition) throws N5Exception {\n\t\tfinal DatasetAttributes datasetAttributes = getDatasetAttributes(datasetPath);\n\t\treturn deleteChunk(datasetPath, datasetAttributes, gridPosition);\n\t}\n\n\t/**\n\t * Deletes the chunk at {@code gridPosition}.\n\t * <p>\n\t * Note that {@code gridPosition} is in units of chunks.\n\t *\n\t * @param datasetPath the dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param gridPosition position of chunk to be deleted\n\t * @throws N5Exception if the chunk exists but could not be deleted\n\t *\n\t * @return {@code true} if the chunk at {@code gridPosition} existed and was deleted.\n\t */\n\tboolean deleteChunk(\n\t\t\tString datasetPath,\n\t\t\tDatasetAttributes datasetAttributes,\n\t\t\tlong... gridPosition) throws N5Exception;\n\n\t/**\n\t * Deletes the chunks at the given {@code gridPositions}.\n\t * <p>\n\t * Note that {@code gridPositions} are in units of chunks.\n\t *\n\t * @param datasetPath dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param gridPositions a list of grid positions\n\t * @return {@code true} if any of the specified chunks existed and was deleted\n\t * @throws N5Exception if any of the chunks did exist but could not be deleted\n\t */\n\tdefault boolean deleteChunks(\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal List<long[]> gridPositions) throws N5Exception {\n\t\tboolean deleted = false;\n\t\tfor (long[] pos : gridPositions) {\n\t\t\tdeleted |= deleteChunk(datasetPath, datasetAttributes, pos);\n\t\t}\n\t\treturn deleted;\n\t}\n\n\t/**\n\t * Save a {@link Serializable} as an N5 {@link DataBlock} at a given offset.\n\t * The offset is given in {@link DataBlock} grid coordinates.\n\t *\n\t * @param object the object to serialize\n\t * @param datasetPath the dataset path\n\t * @param datasetAttributes the dataset attributes\n\t * @param gridPosition the grid position\n\t * @throws N5Exception the exception\n\t */\n\tdefault void writeSerializedBlock(\n\t\t\tfinal 
Serializable object,\n\t\t\tfinal String datasetPath,\n\t\t\tfinal DatasetAttributes datasetAttributes,\n\t\t\tfinal long... gridPosition) throws N5Exception {\n\n\t\tfinal ByteArrayOutputStream byteOutputStream = new ByteArrayOutputStream();\n\t\ttry (ObjectOutputStream out = new ObjectOutputStream(byteOutputStream)) {\n\t\t\tout.writeObject(object);\n\t\t} catch (final IOException | UncheckedIOException e) {\n\t\t\tthrow new N5Exception.N5IOException(e);\n\t\t}\n\t\tfinal byte[] bytes = byteOutputStream.toByteArray();\n\t\tfinal DataBlock<?> dataBlock = new ByteArrayDataBlock(null, gridPosition, bytes);\n\t\twriteChunk(datasetPath, datasetAttributes, dataBlock);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/NameConfigAdapter.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport com.google.gson.JsonArray;\nimport com.google.gson.JsonDeserializationContext;\nimport com.google.gson.JsonDeserializer;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonObject;\nimport com.google.gson.JsonParseException;\nimport com.google.gson.JsonSerializationContext;\nimport com.google.gson.JsonSerializer;\nimport org.janelia.saalfeldlab.n5.serialization.N5Annotations;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\nimport org.scijava.annotations.Index;\nimport org.scijava.annotations.IndexItem;\n\nimport java.lang.reflect.Constructor;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Type;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.Map.Entry;\n\n/**\n * Gson adapter for {@code T} that auto-discovers annotated implementations of\n * {@code T} on the classpath.\n *\n * @author Caleb Hulbert\n *\n * @param <T>\n *            the class this adapter (de)serializes\n */\npublic class NameConfigAdapter<T> implements JsonDeserializer<T>, JsonSerializer<T> {\n\n\tprivate static HashMap<Class<?>, NameConfigAdapter<?>> adapters = new HashMap<>();\n\n\tprivate static <V> void registerAdapter(Class<V> cls) {\n\n\t\tadapters.put(cls, new NameConfigAdapter<>(cls));\n\t\tupdate(adapters.get(cls));\n\t}\n\n\tprivate final HashMap<String, Constructor<? 
extends T>> constructors = new HashMap<>();\n\n\tprivate final HashMap<String, HashMap<String, Field>> parameters = new HashMap<>();\n\tprivate final HashMap<String, HashMap<String, String>> parameterNames = new HashMap<>();\n\tprivate static ArrayList<Field> getDeclaredFields(Class<?> clazz) {\n\n\t\tfinal ArrayList<Field> fields = new ArrayList<>();\n\t\tfields.addAll(Arrays.asList(clazz.getDeclaredFields()));\n\t\tfor (clazz = clazz.getSuperclass(); clazz != null; clazz = clazz.getSuperclass())\n\t\t\tfields.addAll(Arrays.asList(clazz.getDeclaredFields()));\n\t\treturn fields;\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\tpublic static synchronized <T> void update(final NameConfigAdapter<T> adapter) {\n\n\t\tfinal String prefix = adapter.type.getAnnotation(NameConfig.Prefix.class).value();\n\t\tfinal ClassLoader classLoader = Thread.currentThread().getContextClassLoader();\n\t\tfinal Index<NameConfig.Name> annotationIndex = Index.load(NameConfig.Name.class, classLoader);\n\t\tfor (final IndexItem<NameConfig.Name> item : annotationIndex) {\n\t\t\tClass<T> clazz;\n\t\t\ttry {\n\t\t\t\tclazz = (Class<T>)Class.forName(item.className());\n\n\t\t\t\tfinal NameConfig.Serialize serialize = clazz.getAnnotation(NameConfig.Serialize.class);\n\t\t\t\tif (serialize != null && !serialize.value())\n\t\t\t\t\tcontinue;\n\n\t\t\t\tfinal String name = clazz.getAnnotation(NameConfig.Name.class).value();\n\t\t\t\tfinal String type = prefix + \".\" + name;\n\n\t\t\t\tfinal Constructor<T> constructor = clazz.getDeclaredConstructor();\n\n\t\t\t\tfinal HashMap<String, Field> parameters = new HashMap<>();\n\t\t\t\tfinal HashMap<String, String> parameterNames = new HashMap<>();\n\t\t\t\tfinal ArrayList<Field> fields = getDeclaredFields(clazz);\n\t\t\t\tfor (final Field field : fields) {\n\t\t\t\t\tfinal NameConfig.Parameter parameter = field.getAnnotation(NameConfig.Parameter.class);\n\t\t\t\t\tif (parameter != null) {\n\n\t\t\t\t\t\tfinal String parameterName;\n\t\t\t\t\t\tif 
(parameter.value().equals(\"\"))\n\t\t\t\t\t\t\tparameterName = field.getName();\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tparameterName = parameter.value();\n\n\t\t\t\t\t\tparameterNames.put(field.getName(), parameterName);\n\n\t\t\t\t\t\tparameters.put(field.getName(), field);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tadapter.constructors.put(type, constructor);\n\t\t\t\tadapter.parameters.put(type, parameters);\n\t\t\t\tadapter.parameterNames.put(type, parameterNames);\n\t\t\t} catch (final ClassNotFoundException | NoSuchMethodException | ClassCastException\n\t\t\t\t\t\t   | UnsatisfiedLinkError e) {\n\n\t\t\t\tSystem.err.println(\"Class '\" + item.className() + \"' could not be registered\");\n\t\t\t\te.printStackTrace(System.err);\n\t\t\t}\n\t\t}\n\t}\n\n\tprivate final Class<T> type;\n\n\tpublic NameConfigAdapter(Class<T> cls) {\n\t\tthis.type = cls;\n\t}\n\n\t@Override\n\tpublic JsonElement serialize(\n\t\t\tfinal T object,\n\t\t\tfinal Type typeOfSrc,\n\t\t\tfinal JsonSerializationContext context) {\n\n\t\tfinal Class<T> clazz = (Class<T>)object.getClass();\n\n\t\tfinal String name = clazz.getAnnotation(NameConfig.Name.class).value();\n\t\tfinal String prefix = type.getAnnotation(NameConfig.Prefix.class).value();\n\t\tfinal String type = prefix + \".\" + name;\n\n\t\tfinal JsonObject json = new JsonObject();\n\t\tjson.addProperty(\"name\", name);\n\t\tfinal JsonObject configuration = new JsonObject();\n\n\t\tfinal HashMap<String, Field> parameterTypes = parameters.get(type);\n\t\tfinal HashMap<String, String> parameterNameMap = parameterNames.get(type);\n\t\ttry {\n\t\t\tfor (final Entry<String, Field> parameterType : parameterTypes.entrySet()) {\n\t\t\t\tfinal String fieldName = parameterType.getKey();\n\t\t\t\tfinal Field field = clazz.getDeclaredField(fieldName);\n\t\t\t\tfinal boolean isAccessible = field.isAccessible();\n\t\t\t\tfield.setAccessible(true);\n\t\t\t\tfinal Object value = field.get(object);\n\t\t\t\tfield.setAccessible(isAccessible);\n\t\t\t\tfinal 
JsonElement serialized = context.serialize(value);\n\t\t\t\tif (field.getAnnotation(N5Annotations.ReverseArray.class) != null) {\n\t\t\t\t\tfinal JsonArray reversedArray = reverseJsonArray(serialized.getAsJsonArray());\n\t\t\t\t\tconfiguration.add(parameterNameMap.get(fieldName), reversedArray);\n\t\t\t\t} else\n\t\t\t\t\tconfiguration.add(parameterNameMap.get(fieldName), serialized);\n\n\t\t\t}\n\t\t\tif (!configuration.isEmpty())\n\t\t\t\tjson.add(\"configuration\", configuration);\n\t\t} catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e) {\n\t\t\tnew RuntimeException(\"Could not serialize \" + clazz.getName(), e).printStackTrace(System.err);\n\t\t\treturn null;\n\t\t}\n\n\t\treturn json;\n\t}\n\n\t@Override\n\tpublic T deserialize(\n\t\t\tfinal JsonElement json,\n\t\t\tfinal Type typeOfT,\n\t\t\tfinal JsonDeserializationContext context) throws JsonParseException {\n\n\t\tfinal String prefix = type.getAnnotation(NameConfig.Prefix.class).value();\n\n\t\tfinal JsonObject objectJson = json.getAsJsonObject();\n\t\tfinal String name = objectJson.getAsJsonPrimitive(\"name\").getAsString();\n\t\tif (name == null) {\n\t\t\treturn null;\n\t\t}\n\n\t\tfinal String type = prefix + \".\" + name;\n\n\t\tfinal JsonObject configuration = objectJson.getAsJsonObject(\"configuration\");\n\t\t/* It's ok to be null if all parameters are optional.\n\t\t* Otherwise, return*/\n\t\tif (configuration == null) {\n\t\t\tfor (final Field field : parameters.get(type).values()) {\n\t\t\t\tif (!field.getAnnotation(NameConfig.Parameter.class).optional())\n\t\t\t\t\treturn null;\n\t\t\t}\n\t\t}\n\n\t\tfinal Constructor<? 
extends T> constructor = constructors.get(type);\n\t\tconstructor.setAccessible(true);\n\t\tfinal T object;\n\t\ttry {\n\t\t\tobject = constructor.newInstance();\n\t\t\tfinal HashMap<String, Field> parameterTypes = parameters.get(type);\n\t\t\tfinal HashMap<String, String> parameterNameMap = parameterNames.get(type);\n\t\t\tfor (final Entry<String, Field> parameterType : parameterTypes.entrySet()) {\n\t\t\t\tfinal String fieldName = parameterType.getKey();\n\t\t\t\tfinal String paramName = parameterNameMap.get(fieldName);\n\t\t\t\tfinal JsonElement paramJson = configuration == null ? null : configuration.get(paramName);\n\t\t\t\tfinal Field field = parameterType.getValue();\n\t\t\t\tif (paramJson != null) {\n\t\t\t\t\tfinal Object parameter;\n\t\t\t\t\tif (field.getAnnotation(N5Annotations.ReverseArray.class) != null) {\n\t\t\t\t\t\tfinal JsonArray reversedArray = reverseJsonArray(paramJson);\n\t\t\t\t\t\tparameter = context.deserialize(reversedArray, field.getType());\n\t\t\t\t\t} else\n\t\t\t\t\t\tparameter = context.deserialize(paramJson, field.getType());\n\t\t\t\t\tReflectionUtils.setFieldValue(object, fieldName, parameter);\n\t\t\t\t} else if (!field.getAnnotation(NameConfig.Parameter.class).optional()) {\n\t\t\t\t\t/* if param is null, and not optional, return null */\n\t\t\t\t\treturn null;\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException\n\t\t\t\t | SecurityException | NoSuchFieldException e) {\n\t\t\te.printStackTrace(System.err);\n\t\t\treturn null;\n\t\t}\n\n\t\treturn object;\n\t}\n\n\tprivate static JsonArray reverseJsonArray(JsonElement paramJson) {\n\n\t\tfinal JsonArray reversedJson = new JsonArray(paramJson.getAsJsonArray().size());\n\t\tfor (int i = paramJson.getAsJsonArray().size() - 1; i >= 0; i--) {\n\t\t\treversedJson.add(paramJson.getAsJsonArray().get(i));\n\t\t}\n\t\treturn reversedJson;\n\t}\n\n\tpublic static <T> NameConfigAdapter<T> 
getJsonAdapter(Class<T> cls) {\n\n\t\tif (adapters.get(cls) == null)\n\t\t\tregisterAdapter(cls);\n\t\treturn (NameConfigAdapter<T>) adapters.get(cls);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/RawCompression.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.janelia.saalfeldlab.n5.Compression.CompressionType;\nimport org.janelia.saalfeldlab.n5.codec.DeterministicSizeDataCodec;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@CompressionType(\"raw\")\n@NameConfig.Name(\"raw\")\npublic class RawCompression implements Compression, DeterministicSizeDataCodec {\n\n\tprivate static final long serialVersionUID = 7526445806847086477L;\n\n\t@Override\n\tpublic boolean equals(final Object other) {\n\n\t\treturn other != null && other.getClass() == RawCompression.class;\n\t}\n\n\t@Override\n\tpublic ReadData encode(final ReadData readData) {\n\t\treturn readData;\n\t}\n\n\t@Override\n\tpublic ReadData decode(final ReadData readData) {\n\t\treturn readData;\n\t}\n\n\t@Override\n\tpublic long encodedSize(final long size) {\n\t\treturn size;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/ReflectionUtils.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.lang.reflect.Field;\nimport java.lang.reflect.Modifier;\n\nclass ReflectionUtils {\n\n\tstatic <T> void setFieldValue(\n\t\t\tfinal Object object,\n\t\t\tfinal String fieldName,\n\t\t\tfinal T value) throws NoSuchFieldException, IllegalAccessException {\n\n\t\tField modifiersField;\n\t\tboolean isModifiersAccessible;\n\t\ttry {\n\t\t\tmodifiersField = Field.class.getDeclaredField(\"modifiers\");\n\t\t\tisModifiersAccessible = modifiersField.isAccessible();\n\t\t\tmodifiersField.setAccessible(true);\n\t\t} catch (final NoSuchFieldException e) {\n\t\t\t// Java 11+ does not allow access to the modifiers field\n\t\t\tmodifiersField = null;\n\t\t\tisModifiersAccessible = false;\n\t\t}\n\n\t\tfinal Field field = object.getClass().getDeclaredField(fieldName);\n\t\tfinal boolean isFieldAccessible = field.isAccessible();\n\t\tfield.setAccessible(true);\n\n\t\tif (modifiersField != null) {\n\t\t\tfinal int modifiers = field.getModifiers();\n\t\t\tmodifiersField.setInt(field, modifiers & ~Modifier.FINAL);\n\t\t\tfield.set(object, value);\n\t\t\tmodifiersField.setInt(field, modifiers);\n\t\t} else {\n\t\t\tfield.set(object, value);\n\t\t}\n\n\t\tfield.setAccessible(isFieldAccessible);\n\t\tif (modifiersField != null) {\n\t\t\tmodifiersField.setAccessible(isModifiersAccessible);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/ShortArrayDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\npublic class ShortArrayDataBlock extends AbstractDataBlock<short[]> {\n\n\tpublic ShortArrayDataBlock(final int[] size, final long[] gridPosition, final short[] data) {\n\n\t\tsuper(size, gridPosition, data, a -> a.length);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/StringDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.nio.ByteBuffer;\nimport java.nio.charset.Charset;\nimport java.nio.charset.StandardCharsets;\n\npublic class StringDataBlock extends AbstractDataBlock<String[]> {\n\n    protected static final Charset ENCODING = StandardCharsets.UTF_8;\n    protected static final String NULLCHAR = \"\\0\";\n    protected byte[] serializedData = null;\n    protected String[] actualData = null;\n\n    public StringDataBlock(final int[] size, final long[] gridPosition, final String[] data) {\n        super(size, gridPosition, new String[0], a -> a.length);\n        actualData = data;\n    }\n\n    public StringDataBlock(final int[] size, final long[] gridPosition, final byte[] data) {\n        super(size, gridPosition, new String[0], a -> a.length);\n        serializedData = data;\n    }\n\n    public void readData(final ByteBuffer buffer) {\n\n\t\tif (buffer.hasArray()) {\n\t\t\tif (buffer.array() != serializedData)\n\t\t\t\tbuffer.get(serializedData);\n\t\t\tactualData = deserialize(buffer.array());\n\t\t} else\n\t\t\tactualData = ENCODING.decode(buffer).toString().split(NULLCHAR);\n    }\n\n    protected byte[] serialize(String[] strings) {\n        final String flattenedArray = String.join(NULLCHAR, strings) + NULLCHAR;\n        return flattenedArray.getBytes(ENCODING);\n    }\n\n    protected String[] deserialize(byte[] rawBytes) {\n        final String rawChars = new String(rawBytes, ENCODING);\n        return rawChars.split(NULLCHAR);\n    }\n\n    @Override\n    public int getNumElements() {\n        if (serializedData == null)\n            serializedData = serialize(actualData);\n        return serializedData.length;\n    }\n\n    @Override\n    public String[] getData() {\n        if (actualData == null)\n            actualData = deserialize(serializedData);\n        return actualData;\n    }\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/XzCompression.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.IOException;\nimport org.apache.commons.compress.compressors.xz.XZCompressorInputStream;\nimport org.apache.commons.compress.compressors.xz.XZCompressorOutputStream;\nimport org.janelia.saalfeldlab.n5.Compression.CompressionType;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@CompressionType(\"xz\")\n@NameConfig.Name(\"xz\")\npublic class XzCompression implements Compression {\n\n\tprivate static final long serialVersionUID = -7272153943564743774L;\n\n\t@CompressionParameter\n\t@NameConfig.Parameter\n\tprivate final int preset;\n\n\tpublic XzCompression(final int preset) {\n\n\t\tthis.preset = preset;\n\t}\n\n\tpublic XzCompression() {\n\n\t\tthis(6);\n\t}\n\n\t@Override\n\tpublic boolean equals(final Object other) {\n\n\t\tif (other == null || other.getClass() != XzCompression.class)\n\t\t\treturn false;\n\t\telse\n\t\t\treturn preset == ((XzCompression)other).preset;\n\t}\n\n\t@Override\n\tpublic ReadData decode(final ReadData readData) throws N5IOException {\n\n\t\ttry {\n\t\t\treturn ReadData.from(new XZCompressorInputStream(readData.inputStream()));\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData encode(final ReadData readData) {\n\t\treturn readData.encode(out -> new XZCompressorOutputStream(out, preset));\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/cache/N5JsonCache.java",
    "content": "package org.janelia.saalfeldlab.n5.cache;\n\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.LinkedHashSet;\n\nimport org.janelia.saalfeldlab.n5.N5Exception;\n\nimport com.google.gson.JsonElement;\n\n/*\n * A cache containing JSON attributes and children for groups and\n * datasets stored in N5 containers. Used by {@link CachedGsonKeyValueN5Reader}\n * and {@link CachedGsonKeyValueN5Writer}.\n *\n */\npublic class N5JsonCache {\n\n\tpublic static final N5CacheInfo emptyCacheInfo = new N5CacheInfo();\n\n\tpublic static final EmptyJson emptyJson = new EmptyJson();\n\n\tprotected final N5JsonCacheableContainer container;\n\n\t/**\n\t * Data object for caching meta data. Elements that are null are not yet\n\t * cached.\n\t */\n\tprotected static class N5CacheInfo {\n\n\t\tprotected final HashMap<String, JsonElement> attributesCache = new HashMap<>();\n\t\tprotected LinkedHashSet<String> children = null;\n\t\tprotected boolean isDataset = false;\n\t\tprotected boolean isGroup = false;\n\n\t\tpublic JsonElement getCache(final String normalCacheKey) {\n\n\t\t\t// synchronize the method instead?\n\t\t\tsynchronized (attributesCache) {\n\t\t\t\treturn attributesCache.get(normalCacheKey);\n\t\t\t}\n\t\t}\n\n\t\tpublic boolean containsKey(final String normalCacheKey) {\n\n\t\t\t// synchronize the method instead?\n\t\t\tsynchronized (attributesCache) {\n\t\t\t\treturn attributesCache.containsKey(normalCacheKey);\n\t\t\t}\n\t\t}\n\n\t\tpublic boolean isDataset() {\n\n\t\t\treturn isDataset;\n\t\t}\n\n\t\tpublic boolean isGroup() {\n\n\t\t\treturn isGroup;\n\t\t}\n\t}\n\n\t@SuppressWarnings(\"deprecation\")\n\tprotected static class EmptyJson extends JsonElement {\n\n\t\t@Override\n\t\tpublic JsonElement deepCopy() {\n\n\t\t\tthrow new N5Exception(\"Do not copy EmptyJson, you naughty person\");\n\t\t}\n\n\t}\n\n\tprivate final HashMap<String, N5JsonCache.N5CacheInfo> containerPathToCache = new 
HashMap<>();\n\n\tpublic N5JsonCache(final N5JsonCacheableContainer container) {\n\n\t\tthis.container = container;\n\t}\n\n\tpublic JsonElement getAttributes(final String normalPathKey, final String normalCacheKey) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null) {\n\t\t\taddNewCacheInfo(normalPathKey, normalCacheKey, null);\n\t\t\tcacheInfo = getCacheInfo(normalPathKey);\n\t\t}\n\t\tif (cacheInfo == emptyCacheInfo || cacheInfo.getCache(normalCacheKey) == emptyJson) {\n\t\t\treturn null;\n\t\t}\n\t\tsynchronized (cacheInfo) {\n\t\t\tif (!cacheInfo.containsKey(normalCacheKey)) {\n\t\t\t\tupdateCacheInfo(normalPathKey, normalCacheKey, null);\n\t\t\t}\n\t\t}\n\n\t\tfinal JsonElement output = cacheInfo.getCache(normalCacheKey);\n\t\treturn output == null ? null : output.deepCopy();\n\t}\n\n\tpublic boolean isDataset(final String normalPathKey, final String normalCacheKey) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null) {\n\t\t\taddNewCacheInfo(normalPathKey, normalCacheKey, null);\n\t\t\tcacheInfo = getCacheInfo(normalPathKey);\n\t\t}\n\t\treturn cacheInfo.isDataset;\n\t}\n\n\tpublic boolean isGroup(final String normalPathKey, final String cacheKey) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null) {\n\t\t\taddNewCacheInfo(normalPathKey, cacheKey, null);\n\t\t\tcacheInfo = getCacheInfo(normalPathKey);\n\t\t}\n\t\treturn cacheInfo.isGroup;\n\t}\n\n\t/**\n\t * Returns true if a resource exists.\n\t *\n\t * @param normalPathKey\n\t *            the container path\n\t * @param normalCacheKey\n\t *            the cache key / resource (may be null)\n\t * @return true if exists\n\t */\n\tpublic boolean exists(final String normalPathKey, final String normalCacheKey) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null) {\n\t\t\taddNewCacheInfo(normalPathKey, normalCacheKey, null);\n\t\t\tcacheInfo = 
getCacheInfo(normalPathKey);\n\t\t}\n\t\treturn cacheInfo != emptyCacheInfo;\n\t}\n\n\tpublic String[] list(final String normalPathKey) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null) {\n\t\t\taddNewCacheInfo(normalPathKey);\n\t\t\tcacheInfo = getCacheInfo(normalPathKey);\n\t\t}\n\t\tif (cacheInfo == emptyCacheInfo)\n\t\t\tthrow new N5Exception.N5IOException(normalPathKey + \" is not a valid group\");\n\n\t\tif (cacheInfo.children == null)\n\t\t\taddChild(cacheInfo, normalPathKey);\n\n\t\tfinal String[] children = new String[cacheInfo.children.size()];\n\t\tint i = 0;\n\t\tfor (final String child : cacheInfo.children) {\n\t\t\tchildren[i++] = child;\n\t\t}\n\t\tArrays.sort(children);\n\t\treturn children;\n\t}\n\n\tpublic N5CacheInfo addNewCacheInfo(\n\t\t\tfinal String normalPathKey,\n\t\t\tfinal String normalCacheKey,\n\t\t\tfinal JsonElement uncachedAttributes) {\n\n\t\tN5CacheInfo cacheInfo;\n\t\tJsonElement attrsFromContainer = null;\n\t\tboolean groupExistsFromContainer = false;\n\t\ttry {\n\t\t\tattrsFromContainer = container.getAttributesFromContainer(normalPathKey, normalCacheKey);\n\t\t\tif (attrsFromContainer == null)\n\t\t\t\tgroupExistsFromContainer = container.existsFromContainer(normalPathKey, null);\n\t\t\tif (groupExistsFromContainer || attrsFromContainer != null)\n\t\t\t\tcacheInfo = newCacheInfo();\n\t\t\telse\n\t\t\t\tcacheInfo = emptyCacheInfo;\n\t\t} catch (N5Exception.N5NoSuchKeyException e) {\n\t\t\tcacheInfo = emptyCacheInfo;\n\t\t}\n\n\t\tif (cacheInfo != emptyCacheInfo) {\n\t\t\tif (normalCacheKey != null) {\n\t\t\t\tfinal JsonElement attributes = (uncachedAttributes == null)\n\t\t\t\t\t\t? 
attrsFromContainer\n\t\t\t\t\t\t: uncachedAttributes;\n\n\t\t\t\tupdateCacheAttributes(cacheInfo, normalCacheKey, attributes);\n\t\t\t\tupdateCacheIsGroup(cacheInfo, container.isGroupFromAttributes(normalCacheKey, attributes));\n\t\t\t\tupdateCacheIsDataset(cacheInfo, container.isDatasetFromAttributes(normalCacheKey, attributes));\n\t\t\t} else {\n\t\t\t\tupdateCacheIsGroup(cacheInfo, container.isGroupFromContainer(normalPathKey));\n\t\t\t\tupdateCacheIsDataset(cacheInfo, container.isDatasetFromContainer(normalPathKey));\n\t\t\t}\n\t\t}\n\t\tupdateCache(normalPathKey, cacheInfo);\n\t\treturn cacheInfo;\n\t}\n\n\tprivate N5CacheInfo addNewCacheInfo(final String normalPathKey) {\n\n\t\treturn addNewCacheInfo(normalPathKey, null, null);\n\t}\n\n\tprivate void addChild(final N5CacheInfo cacheInfo, final String normalPathKey) {\n\n\t\tif (cacheInfo.children == null)\n\t\t\tcacheInfo.children = new LinkedHashSet<>();\n\n\t\tfinal String[] children = container.listFromContainer(normalPathKey);\n\t\tCollections.addAll(cacheInfo.children, children);\n\t}\n\n\tprotected N5CacheInfo getOrMakeCacheInfo(final String normalPathKey) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null) {\n\t\t\treturn addNewCacheInfo(normalPathKey, null, null);\n\t\t}\n\n\t\tif (cacheInfo == emptyCacheInfo)\n\t\t\tcacheInfo = newCacheInfo();\n\n\t\treturn cacheInfo;\n\t}\n\n\t/**\n\t * Updates the cache attributes for the given normalPathKey,\n\t * adding the appropriate node to the cache if necessary.\n\t *\n\t * @param normalPathKey\n\t *            the normalized path key\n\t * @param normalCacheKey\n\t *            the normalized cache key\n\t */\n\tpublic void updateCacheInfo(final String normalPathKey, final String normalCacheKey) {\n\n\t\tfinal N5CacheInfo cacheInfo = getOrMakeCacheInfo(normalPathKey);\n\t\tfinal JsonElement attrs = cacheInfo.attributesCache.get(normalCacheKey);\n\t\tupdateCacheInfo(normalPathKey, normalCacheKey, attrs);\n\t}\n\n\t/**\n\t * 
Updates the cache attributes for the given normalPathKey and\n\t * normalCacheKey,\n\t * adding the appropriate node to the cache if necessary.\n\t *\n\t * @param normalPathKey\n\t *            the normalized path key\n\t * @param normalCacheKey\n\t *            the normalized cache key\n\t * @param uncachedAttributes\n\t *            attributes to be cached\n\t */\n\tpublic void updateCacheInfo(\n\t\t\tfinal String normalPathKey,\n\t\t\tfinal String normalCacheKey,\n\t\t\tfinal JsonElement uncachedAttributes) {\n\n\t\tfinal N5CacheInfo cacheInfo = getOrMakeCacheInfo(normalPathKey);\n\t\tif (normalCacheKey != null) {\n\t\t\tfinal JsonElement attributesToCache = uncachedAttributes == null\n\t\t\t\t\t? container.getAttributesFromContainer(normalPathKey, normalCacheKey)\n\t\t\t\t\t: uncachedAttributes;\n\n\t\t\tupdateCacheAttributes(cacheInfo, normalCacheKey, attributesToCache);\n\t\t\tupdateCacheIsGroup(cacheInfo, container.isGroupFromAttributes(normalCacheKey, attributesToCache));\n\t\t\tupdateCacheIsDataset(cacheInfo, container.isDatasetFromAttributes(normalCacheKey, attributesToCache));\n\t\t} else {\n\t\t\tupdateCacheIsGroup(cacheInfo, container.isGroupFromContainer(normalPathKey));\n\t\t\tupdateCacheIsDataset(cacheInfo, container.isDatasetFromContainer(normalPathKey));\n\t\t}\n\t\tupdateCache(normalPathKey, cacheInfo);\n\t}\n\n\tpublic void initializeNonemptyCache(final String normalPathKey, final String normalCacheKey) {\n\n\t\tfinal N5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tif (cacheInfo == null || cacheInfo == emptyCacheInfo) {\n\t\t\tfinal N5CacheInfo info = newCacheInfo();\n\t\t\tif (normalCacheKey != null)\n\t\t\t\tinfo.attributesCache.put(normalCacheKey, emptyJson);\n\n\t\t\tupdateCache(normalPathKey, info);\n\t\t}\n\t}\n\n\tpublic void setAttributes(final String normalPathKey, final String normalCacheKey, final JsonElement attributes) {\n\n\t\tN5CacheInfo cacheInfo = getCacheInfo(normalPathKey);\n\t\tboolean update = false;\n\t\tif 
(cacheInfo == null) {\n\t\t\treturn;\n\t\t}\n\n\t\tif (cacheInfo == emptyCacheInfo) {\n\t\t\tcacheInfo = newCacheInfo();\n\t\t\tupdate = true;\n\t\t}\n\n\t\tupdateCacheAttributes(cacheInfo, normalCacheKey, attributes);\n\n\t\tif (update)\n\t\t\tupdateCache(normalPathKey, cacheInfo);\n\t}\n\n\t/**\n\t * Adds child to the parent's children list, only if the parent has been\n\t * cached, and its children list already exists.\n\t *\n\t * @param parent\n\t *            parent path\n\t * @param child\n\t *            child path\n\t */\n\tpublic void addChildIfPresent(final String parent, final String child) {\n\n\t\tfinal N5CacheInfo cacheInfo = getCacheInfo(parent);\n\t\tif (cacheInfo == null)\n\t\t\treturn;\n\n\t\tif (cacheInfo.children != null)\n\t\t\tcacheInfo.children.add(child);\n\t}\n\n\t/**\n\t * Adds child to the parent's children list, only if the parent has been\n\t * cached, creating a children list if it does not already exist.\n\t *\n\t * @param parent\n\t *            parent path\n\t * @param child\n\t *            child path\n\t */\n\tpublic void addChild(final String parent, final String child) {\n\n\t\tfinal N5CacheInfo cacheInfo = getCacheInfo(parent);\n\t\tif (cacheInfo == null)\n\t\t\treturn;\n\n\t\tif (cacheInfo.children == null)\n\t\t\tcacheInfo.children = new LinkedHashSet<>();\n\n\t\tcacheInfo.children.add(child);\n\t}\n\n\tpublic void removeCache(final String normalParentPathKey, final String normalPathKey) {\n\n\t\t// this path and all children should be removed = set to emptyCacheInfo\n\t\tsynchronized (containerPathToCache) {\n\t\t\tcontainerPathToCache.put(normalPathKey, emptyCacheInfo);\n\t\t\tcontainerPathToCache.keySet().stream().filter(x -> {\n\t\t\t\treturn x.startsWith(normalPathKey + \"/\");\n\t\t\t}).forEach(x -> {\n\t\t\t\tcontainerPathToCache.put(x, emptyCacheInfo);\n\t\t\t});\n\t\t}\n\n\t\t// update the parent's children, if present (remove the normalPathKey)\n\t\tfinal N5CacheInfo parentCache = 
containerPathToCache.get(normalParentPathKey);\n\t\tif (parentCache != null && parentCache.children != null) {\n\t\t\tparentCache.children.remove(normalPathKey.replaceFirst(normalParentPathKey + \"/\", \"\"));\n\t\t}\n\t}\n\n\tprotected N5CacheInfo getCacheInfo(final String pathKey) {\n\n\t\tsynchronized (containerPathToCache) {\n\t\t\treturn containerPathToCache.get(pathKey);\n\t\t}\n\t}\n\n\tprotected N5CacheInfo newCacheInfo() {\n\n\t\treturn new N5CacheInfo();\n\t}\n\n\tprotected void updateCache(final String normalPathKey, final N5CacheInfo cacheInfo) {\n\n\t\tsynchronized (containerPathToCache) {\n\t\t\tcontainerPathToCache.put(normalPathKey, cacheInfo);\n\t\t}\n\t}\n\n\tprotected void updateCacheAttributes(\n\t\t\tfinal N5CacheInfo cacheInfo,\n\t\t\tfinal String normalCacheKey,\n\t\t\tfinal JsonElement attributes) {\n\n\t\tsynchronized (cacheInfo.attributesCache) {\n\t\t\tcacheInfo.attributesCache.put(normalCacheKey, attributes);\n\t\t}\n\t}\n\n\tprotected void updateCacheIsGroup(final N5CacheInfo cacheInfo, final boolean isGroup) {\n\n\t\tcacheInfo.isGroup = isGroup;\n\t}\n\n\tprotected void updateCacheIsDataset(final N5CacheInfo cacheInfo, final boolean isDataset) {\n\n\t\tcacheInfo.isDataset = isDataset;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/cache/N5JsonCacheableContainer.java",
    "content": "package org.janelia.saalfeldlab.n5.cache;\n\nimport org.janelia.saalfeldlab.n5.CachedGsonKeyValueN5Reader;\nimport org.janelia.saalfeldlab.n5.GsonKeyValueN5Reader;\nimport org.janelia.saalfeldlab.n5.N5Reader;\n\nimport com.google.gson.JsonElement;\n\n/**\n * An N5 container whose structure and attributes can be cached.\n * <p>\n * Implementations of interface methods must explicitly query the backing\n * storage unless noted otherwise. Cached implementations (e.g. {@link CachedGsonKeyValueN5Reader}) call\n * these methods to update their {@link N5JsonCache}. Corresponding\n * {@link N5Reader} methods should use the cache, if present.\n */\npublic interface N5JsonCacheableContainer {\n\n\t/**\n\t * Returns a {@link JsonElement} containing attributes at a given path,\n\t * for a given cache key.\n\t *\n\t * @param normalPathName\n\t *            the normalized path name\n\t * @param normalCacheKey\n\t *            the cache key\n\t * @return the attributes as a JSON element.\n\t * @see GsonKeyValueN5Reader#getAttributes\n\t */\n\tJsonElement getAttributesFromContainer(final String normalPathName, final String normalCacheKey);\n\n\t/**\n\t * Query whether a resource exists in this container.\n\t *\n\t * @param normalPathName\n\t *            the normalized path name\n\t * @param normalCacheKey\n\t *            the normalized resource name (may be null).\n\t * @return true if the resource exists\n\t */\n\tboolean existsFromContainer(final String normalPathName, final String normalCacheKey);\n\n\t/**\n\t * Query whether a path in this container is a group.\n\t *\n\t * @param normalPathName\n\t *            the normalized path name\n\t * @return true if the path is a group\n\t */\n\tboolean isGroupFromContainer(final String normalPathName);\n\n\t/**\n\t * Query whether a path in this container is a dataset.\n\t *\n\t * @param normalPathName\n\t *            the normalized path name\n\t * @return true if the path is a dataset\n\t * @see N5Reader#datasetExists\n\t */\n\tboolean isDatasetFromContainer(final String normalPathName);\n\n\t/**\n\t * Returns true if a path is a group, given that the given attributes exist\n\t * for the given cache key.\n\t * <p>\n\t * Should not call the backing storage.\n\t *\n\t * @param normalCacheKey\n\t *            the cache key\n\t * @param attributes\n\t *            the attributes\n\t * @return true if the path is a group\n\t */\n\tboolean isGroupFromAttributes(final String normalCacheKey, final JsonElement attributes);\n\n\t/**\n\t * Returns true if a path is a dataset, given that the given attributes exist\n\t * for the given cache key.\n\t * <p>\n\t * Should not call the backing storage.\n\t *\n\t * @param normalCacheKey\n\t *            the cache key\n\t * @param attributes\n\t *            the attributes\n\t * @return true if the path is a dataset\n\t */\n\tboolean isDatasetFromAttributes(final String normalCacheKey, final JsonElement attributes);\n\n\t/**\n\t * List the children of a path for this container.\n\t *\n\t * @param normalPathName\n\t *            the normalized path name\n\t * @return list of children\n\t * @see N5Reader#list\n\t */\n\tString[] listFromContainer(final String normalPathName);\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/BlockCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * De/serialize {@link DataBlock} from/to {@link ReadData}.\n *\n * @param <T>\n * \t\ttype of the data contained in the DataBlock\n */\npublic interface BlockCodec<T> {\n\n\t/**\n\t * Serializes a {@link DataBlock} into a {@link ReadData} representation for\n\t * storage.\n\t * <p>\n\t * The encoding process serializes the block's data, applies any configured\n\t * compression, and may include metadata depending on the codec\n\t * implementation.\n\t *\n\t * @param dataBlock\n\t *            the data block to encode\n\t *\n\t * @return serialized representation of the data block\n\t *\n\t * @throws N5IOException\n\t *             if encoding or compression fails\n\t *\n\t * @see #decode(ReadData, long[])\n\t */\n\tReadData encode(DataBlock<T> dataBlock) throws N5IOException;\n\n\t/**\n\t * Deserializes a {@link DataBlock} from its {@link ReadData}\n\t * representation.\n\t * <p>\n\t * Reverses the encoding process by decompressing (if needed) and\n\t * deserializing the data.\n\t *\n\t * @param readData\n\t *            the serialized data to decode\n\t * @param gridPosition\n\t *            position of this block on the block grid (level 0 coordinates)\n\t *\n\t * @return reconstructed data block with deserialized data and grid position\n\t *\n\t * @throws N5IOException\n\t *             if decoding, decompression, or data validation fails\n\t *\n\t * @see #encode(DataBlock)\n\t */\n\tDataBlock<T> decode(ReadData readData, long[] gridPosition) throws N5IOException;\n\n\t/**\n\t * Given the {@code blockSize} of a {@code DataBlock<T>} return the size of\n\t * the encoded block in bytes.\n\t * <p>\n\t * A {@code UnsupportedOperationException} is thrown, if this {@code\n\t * BlockCodec} cannot determine encoded size independent of block 
content.\n\t * For example, if the block type contains var-length elements or if the\n\t * serializer uses a non-deterministic {@code DataCodec}.\n\t *\n\t * @param blockSize\n\t * \t\tsize of the block to be encoded\n\t *\n\t * @return size of the encoded block in bytes\n\t *\n\t * @throws UnsupportedOperationException\n\t * \t\tif this {@code BlockCodec} cannot determine encoded size independent of block content\n\t */\n\tdefault long encodedSize(int[] blockSize) throws UnsupportedOperationException {\n\n\t\tthrow new UnsupportedOperationException();\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/BlockCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * Metadata and factory for a particular family of {@code BlockCodec}.\n * <p>\n * {@code BlockCodec}s encode {@link DataBlock}s into {@link ReadData} and\n * decode {@link ReadData} into {@link DataBlock}s.\n */\npublic interface BlockCodecInfo extends CodecInfo, DeterministicSizeCodecInfo {\n\n\tdefault long[] getKeyPositionForBlock(final DatasetAttributes attributes, final DataBlock<?> datablock) {\n\n\t\treturn datablock.getGridPosition();\n\t}\n\n\tdefault long[] getKeyPositionForBlock(final DatasetAttributes attributes, final long... blockPosition) {\n\n\t\treturn blockPosition;\n\t}\n\n\t@Override default long encodedSize(long size) {\n\n\t\treturn size;\n\t}\n\n\t@Override default long decodedSize(long size) {\n\n\t\treturn size;\n\t}\n\n\t<T> BlockCodec<T> create(DataType dataType, int[] blockSize, DataCodecInfo... codecs);\n\n\tdefault <T> BlockCodec<T> create(final DatasetAttributes attributes, final DataCodecInfo... codecInfos) {\n\n\t\treturn create(attributes.getDataType(), attributes.getBlockSize(), codecInfos);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/CodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport java.io.Serializable;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n/**\n * {@code CodecInfo}s are an untyped semantic layer for {@link BlockCodec}s, {@link DataCodec}s, and {@link DatasetCodec}s.\n * <p>\n * Modeled after <a href=\"https://zarr-specs.readthedocs.io/en/latest/v3/codecs/index.html\">Codecs</a> in\n * Zarr.\n */\n@NameConfig.Prefix(\"codec\")\npublic interface CodecInfo extends Serializable {\n\n\tString getType();\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/CodecParser.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport java.util.ArrayList;\n\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.N5Exception;\n\npublic class CodecParser {\n\n\tpublic DatasetCodecInfo[] datasetCodecInfos;\n\tpublic BlockCodecInfo blockCodecInfo;\n\tpublic DataCodecInfo[] dataCodecInfos;\n\n\tpublic CodecParser(CodecInfo[] codecs) {\n\n\t\tparse(codecs);\n\t}\n\n\tprivate void parse(CodecInfo[] codecs) {\n\n\t\tfinal ArrayList<DataCodecInfo> dataCodecList = new ArrayList<>();\n\t\tfinal ArrayList<DatasetCodecInfo> datasetCodecList = new ArrayList<>();\n\n\t\tboolean foundBlockCodec = false;\n\n\t\tint i = 0;\n\t\tint blockCodecIndex = -1;\n\t\tfor (CodecInfo codec : codecs) {\n\t\t\tif (!foundBlockCodec) {\n\n\t\t\t\tif (codec instanceof BlockCodecInfo) {\n\t\t\t\t\tblockCodecInfo = (BlockCodecInfo)codec;\n\t\t\t\t\tfoundBlockCodec = true;\n\t\t\t\t\tblockCodecIndex = i;\n\t\t\t\t} else if (codec instanceof DatasetCodecInfo)\n\t\t\t\t\tdatasetCodecList.add((DatasetCodecInfo)codec);\n\t\t\t\telse\n\t\t\t\t\tthrow new N5Exception(\"Codec at index \" + i + \" is a DataCodec, but came before a BlockCodec.\");\n\n\t\t\t} else if (codec instanceof BlockCodecInfo)\n\t\t\t\tthrow new N5Exception(\"Codec at index \" + i + \" is a BlockCodec, but came after a BlockCodec at position \" + blockCodecIndex);\n\t\t\telse if (codec instanceof DatasetCodecInfo)\n\t\t\t\tthrow new N5Exception(\"Codec at index \" + i + \" is a DatasetCodec, but came after a BlockCodec at position \" + blockCodecIndex);\n\t\t\telse\n\t\t\t\tdataCodecList.add((DataCodecInfo)codec);\n\n\t\t\ti++;\n\t\t}\n\n\t\tdatasetCodecInfos = datasetCodecList.stream().toArray(n -> new DatasetCodecInfo[n]);\n\t\tdataCodecInfos = dataCodecList.stream().toArray(n -> new DataCodecInfo[n]);\n\t}\n\n\tprivate static CodecInfo[] concatenateCodecs(DatasetAttributes attributes) {\n\n\t\tfinal CodecInfo[] codecs = new 
CodecInfo[attributes.getDataCodecInfos().length + 1];\n\t\tcodecs[0] = attributes.getBlockCodecInfo();\n\t\tSystem.arraycopy(attributes.getDataCodecInfos(), 0, codecs, 1, attributes.getDataCodecInfos().length);\n\t\treturn codecs;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/ConcatenatedDataCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\nclass ConcatenatedDataCodec implements DataCodec {\n\n\tprivate final DataCodec[] codecs;\n\n\tConcatenatedDataCodec(final DataCodec[] codecs) {\n\n\t\tif (codecs == null) {\n\t\t\tthrow new NullPointerException();\n\t\t}\n\t\tthis.codecs = codecs;\n\t}\n\n\t@Override\n\tpublic ReadData encode(ReadData readData) {\n\n\t\tfor (DataCodec codec : codecs) {\n\t\t\treadData = codec.encode(readData);\n\t\t}\n\t\treturn readData;\n\t}\n\n\t@Override\n\tpublic ReadData decode(ReadData readData) {\n\n\t\tfor (int i = codecs.length - 1; i >= 0; i--) {\n\t\t\tfinal DataCodec codec = codecs[i];\n\t\t\treadData = codec.decode(readData);\n\t\t}\n\t\treturn readData;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/ConcatenatedDeterministicSizeDataCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nclass ConcatenatedDeterministicSizeDataCodec extends ConcatenatedDataCodec implements DeterministicSizeDataCodec {\n\n\tprivate final DeterministicSizeDataCodec[] codecs;\n\n\tConcatenatedDeterministicSizeDataCodec(final DeterministicSizeDataCodec[] codecs) {\n\n\t\tsuper(codecs);\n\t\tthis.codecs = codecs;\n\t}\n\n\t@Override\n\tpublic long encodedSize(long size) {\n\n\t\tfor (DeterministicSizeDataCodec codec : codecs) {\n\t\t\tsize = codec.encodedSize(size);\n\t\t}\n\t\treturn size;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/DataCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport java.util.Arrays;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * {@code DataCodec}s transform one {@link ReadData} into another,\n * for example, compressing it.\n */\npublic interface DataCodec {\n\n\t/**\n\t * Decode the given {@link ReadData}.\n\t * <p>\n\t * The returned decoded {@code ReadData} reports {@link ReadData#length()\n\t * length()}{@code == decodedLength}. Decoding may be lazy or eager,\n\t * depending on the {@code DataCodec} implementation.\n\t *\n\t * @param readData\n\t * \t\tdata to decode\n\t *\n\t * @return decoded ReadData\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\tReadData decode(ReadData readData) throws N5IOException;\n\n\t/**\n\t * Encode the given {@link ReadData}.\n\t * <p>\n\t * Encoding may be lazy or eager, depending on the {@code DataCodec}\n\t * implementation.\n\t *\n\t * @param readData\n\t * \t\tdata to encode\n\t *\n\t * @return encoded ReadData\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\tReadData encode(ReadData readData) throws N5IOException;\n\n\t/**\n\t * Create a {@code DataCodec} that sequentially applies {@code codecs} in\n\t * the given order for encoding, and in reverse order for decoding.\n\t * <p>\n\t * If all {@code codecs} implement {@code DeterministicSizeDataCodec}, the\n\t * returned {@code DataCodec} will also be a {@code DeterministicSizeDataCodec}.\n\t *\n\t * @param codecs\n\t *            a list of DataCodecs\n\t * @return the concatenated DataCodec\n\t */\n\tstatic DataCodec concatenate(final DataCodec... 
codecs) {\n\n\t\tif (codecs == null)\n\t\t\tthrow new NullPointerException();\n\n\t\tif (codecs.length == 1)\n\t\t\treturn codecs[0];\n\n\t\tif (Arrays.stream(codecs).allMatch(DeterministicSizeDataCodec.class::isInstance))\n\t\t\treturn new ConcatenatedDeterministicSizeDataCodec(Arrays.copyOf(codecs, codecs.length, DeterministicSizeDataCodec[].class));\n\t\telse\n\t\t\treturn new ConcatenatedDataCodec(codecs);\n\t}\n\n\tstatic DataCodec create(final DataCodecInfo... codecInfos) {\n\n\t\tif (codecInfos == null)\n\t\t\tthrow new NullPointerException();\n\n\t\tfinal DataCodec[] codecs = new DataCodec[codecInfos.length];\n\t\tArrays.setAll(codecs, i -> codecInfos[i].create());\n\n\t\treturn DataCodec.concatenate(codecs);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/DataCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n/**\n * Used to create {@code DataCodec}s, which transform one {@link ReadData} into another,\n * for example, applying compression.\n */\n@NameConfig.Prefix(\"data-codec\")\npublic interface DataCodecInfo extends CodecInfo {\n\n\tDataCodec create();\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/DatasetCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * A Codec that transforms the contents of a {@link DataBlock}.\n * <p>\n * This class is N5's analogue to Zarr's array-to-array codec.\n *\n * @param <S> source data type (data contained in decoded blocks)\n * @param <T> target data type (data contained in encoded blocks)\n */\npublic interface DatasetCodec<S, T> {\n\n\t// TODO Name ideas:\n\t// \"ImageCodec\"?\n\t// BlockTransformationCodec\n\n\tDataBlock<T> encode(DataBlock<S> block) throws N5IOException;\n\n\tDataBlock<S> decode(DataBlock<T> dataBlock) throws N5IOException;\n\n\t/**\n\t * Create a {@code BlockCodec} that, for encoding, first applies {@code\n\t * datasetCodec} and then {@code blockCodec} (and does the same in reverse\n\t * order for decoding).\n\t *\n\t * @param <S>\n\t *            the source type of this codec\n\t * @param <T>\n\t *            the target type of this codec\n\t * @param datasetCodec\n\t *            the DatasetCodec to apply\n\t * @param blockCodec\n\t *            the wrapped BlockCodec\n\t * @return the concatenated BlockCodec\n\t */\n\tstatic <S, T> BlockCodec<S> concatenate(final DatasetCodec<S, T> datasetCodec, final BlockCodec<T> blockCodec) {\n\n\t\treturn new BlockCodec<S>() {\n\n\t\t\t@Override\n\t\t\tpublic ReadData encode(final DataBlock<S> dataBlock) throws N5IOException {\n\t\t\t\treturn blockCodec.encode(datasetCodec.encode(dataBlock));\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic DataBlock<S> decode(final ReadData readData, final long[] gridPosition) throws N5IOException {\n\t\t\t\treturn datasetCodec.decode(blockCodec.decode(readData, gridPosition));\n\t\t\t}\n\t\t};\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/DatasetCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.codec.transpose.TransposeCodec;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n/**\n * Used to create typed {@code DatasetCodec}s, which transform one {@link DataBlock} into another,\n * for example, by applying a transposition ({@link TransposeCodec}).\n */\n@NameConfig.Prefix(\"data-codec\") // TODO: is this Prefix correct?\npublic interface DatasetCodecInfo extends CodecInfo {\n\n\tDatasetCodec<?, ?> create(final DatasetAttributes attributes);\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/DeterministicSizeCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\n/**\n * A {@link CodecInfo} that can deterministically compute the size of the\n * encoded data from the size of the raw data alone, and vice versa\n * (i.e. encoding is data-independent).\n */\npublic interface DeterministicSizeCodecInfo extends CodecInfo {\n\n\tlong encodedSize(long size);\n\n\tlong decodedSize(long size);\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/DeterministicSizeDataCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\n/**\n * A {@link DataCodec} that can deterministically compute the size of the\n * encoded data from the size of the raw data alone (i.e. encoding is\n * data-independent).\n */\npublic interface DeterministicSizeDataCodec extends DataCodec {\n\n\t/**\n\t * Given {@code size} bytes of raw data, returns the number of bytes the\n\t * encoded data will have.\n\t *\n\t * @param size in bytes\n\t * @return encoded size in bytes\n\t */\n\tlong encodedSize(long size);\n\n\t/**\n\t * Create a {@code DeterministicSizeDataCodec} that sequentially applies\n\t * {@code codecs} in the given order for encoding, and in reverse order for\n\t * decoding.\n\t *\n\t * @param codecs\n\t *            a list of DeterministicSizeDataCodec\n\t * @return the concatenated DeterministicSizeDataCodec\n\t */\n\tstatic DeterministicSizeDataCodec concatenate(final DeterministicSizeDataCodec... codecs) {\n\n\t\tif (codecs == null)\n\t\t\tthrow new NullPointerException();\n\n\t\tif (codecs.length == 1)\n\t\t\treturn codecs[0];\n\n\t\treturn new ConcatenatedDeterministicSizeDataCodec(codecs);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/FlatArrayCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport java.io.DataInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.ByteOrder;\nimport java.nio.IntBuffer;\nimport java.nio.charset.Charset;\nimport java.nio.charset.StandardCharsets;\nimport java.util.Arrays;\nimport java.util.function.IntFunction;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * De/serialize the {@link DataBlock#getData() data} contained in a {@code\n * DataBlock<T>} from/to a sequence of bytes.\n * <p>\n * Static fields {@code BYTE}, {@code SHORT_BIG_ENDIAN}, {@code\n * SHORT_LITTLE_ENDIAN}, etc. contain {@code FlatArrayCodec}s for all primitive\n * array types and big-endian / little-endian byte order.\n *\n * @param <T>\n * \t\ttype of the data contained in the DataBlock\n */\npublic abstract class FlatArrayCodec<T> {\n\n\tpublic abstract ReadData encode(T data) throws N5IOException;\n\n\tpublic abstract T decode(ReadData readData, int numElements) throws N5IOException;\n\n\tpublic int bytesPerElement() {\n\t\treturn bytesPerElement;\n\t}\n\n\tpublic T newArray(final int numElements) {\n\t\treturn dataFactory.apply(numElements);\n\t}\n\n\t// ------------------- instances  --------------------\n\t//\n\n\tpublic static final FlatArrayCodec<byte[]>   BYTE              = new ByteArrayCodec();\n\tpublic static final FlatArrayCodec<short[]>  SHORT_BIG_ENDIAN  = new ShortArrayCodec(ByteOrder.BIG_ENDIAN);\n\tpublic static final FlatArrayCodec<int[]>    INT_BIG_ENDIAN    = new IntArrayCodec(ByteOrder.BIG_ENDIAN);\n\tpublic static final FlatArrayCodec<long[]>   LONG_BIG_ENDIAN   = new LongArrayCodec(ByteOrder.BIG_ENDIAN);\n\tpublic static final FlatArrayCodec<float[]>  FLOAT_BIG_ENDIAN  = new FloatArrayCodec(ByteOrder.BIG_ENDIAN);\n\tpublic static final FlatArrayCodec<double[]> DOUBLE_BIG_ENDIAN = new 
DoubleArrayCodec(ByteOrder.BIG_ENDIAN);\n\n\tpublic static final FlatArrayCodec<short[]>  SHORT_LITTLE_ENDIAN  = new ShortArrayCodec(ByteOrder.LITTLE_ENDIAN);\n\tpublic static final FlatArrayCodec<int[]>    INT_LITTLE_ENDIAN    = new IntArrayCodec(ByteOrder.LITTLE_ENDIAN);\n\tpublic static final FlatArrayCodec<long[]>   LONG_LITTLE_ENDIAN   = new LongArrayCodec(ByteOrder.LITTLE_ENDIAN);\n\tpublic static final FlatArrayCodec<float[]>  FLOAT_LITTLE_ENDIAN  = new FloatArrayCodec(ByteOrder.LITTLE_ENDIAN);\n\tpublic static final FlatArrayCodec<double[]> DOUBLE_LITTLE_ENDIAN = new DoubleArrayCodec(ByteOrder.LITTLE_ENDIAN);\n\n\tpublic static final FlatArrayCodec<String[]> STRING = new N5StringArrayCodec();\n\tpublic static final FlatArrayCodec<String[]> ZARR_STRING = new ZarrStringArrayCodec();\n\tpublic static final FlatArrayCodec<byte[]>   OBJECT = new ObjectArrayCodec();\n\n\tpublic static FlatArrayCodec<short[]> SHORT(ByteOrder order) {\n\t\treturn order == ByteOrder.BIG_ENDIAN ? SHORT_BIG_ENDIAN : SHORT_LITTLE_ENDIAN;\n\t}\n\n\tpublic static FlatArrayCodec<int[]> INT(ByteOrder order) {\n\t\treturn order == ByteOrder.BIG_ENDIAN ? INT_BIG_ENDIAN : INT_LITTLE_ENDIAN;\n\t}\n\n\tpublic static FlatArrayCodec<long[]> LONG(ByteOrder order) {\n\t\treturn order == ByteOrder.BIG_ENDIAN ? LONG_BIG_ENDIAN : LONG_LITTLE_ENDIAN;\n\t}\n\n\tpublic static FlatArrayCodec<float[]> FLOAT(ByteOrder order) {\n\t\treturn order == ByteOrder.BIG_ENDIAN ? FLOAT_BIG_ENDIAN : FLOAT_LITTLE_ENDIAN;\n\t}\n\n\tpublic static FlatArrayCodec<double[]> DOUBLE(ByteOrder order) {\n\t\treturn order == ByteOrder.BIG_ENDIAN ? 
DOUBLE_BIG_ENDIAN : DOUBLE_LITTLE_ENDIAN;\n\t}\n\n\t// ---------------- implementations  -----------------\n\t//\n\n\tprivate final int bytesPerElement;\n\tprivate final IntFunction<T> dataFactory;\n\n\tprivate FlatArrayCodec(int bytesPerElement, IntFunction<T> dataFactory) {\n\t\tthis.bytesPerElement = bytesPerElement;\n\t\tthis.dataFactory = dataFactory;\n\t}\n\n\tprivate static final class ByteArrayCodec extends FlatArrayCodec<byte[]> {\n\n\t\tprivate ByteArrayCodec() {\n\t\t\tsuper(Byte.BYTES, byte[]::new);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final byte[] data) {\n\t\t\treturn ReadData.from(data);\n\t\t}\n\n\t\t@Override\n\t\tpublic byte[] decode(final ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal byte[] data = newArray(numElements);\n\t\t\ttry (final InputStream is = readData.limit(numElements).inputStream()) {\n\t\t\t\tnew DataInputStream(is).readFully(data);\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t\treturn data;\n\t\t}\n\t}\n\n\tprivate static final class ShortArrayCodec extends FlatArrayCodec<short[]> {\n\n\t\tprivate final ByteOrder order;\n\n\t\tShortArrayCodec(ByteOrder order) {\n\t\t\tsuper(Short.BYTES, short[]::new);\n\t\t\tthis.order = order;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final short[] data) throws N5IOException {\n\t\t\tfinal ByteBuffer serialized = ByteBuffer.allocate(Short.BYTES * data.length);\n\t\t\tserialized.order(order).asShortBuffer().put(data);\n\t\t\treturn ReadData.from(serialized);\n\t\t}\n\n\t\t@Override\n\t\tpublic short[] decode(final ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal short[] data = newArray(numElements);\n\t\t\treadData.limit(2 * numElements).toByteBuffer().order(order).asShortBuffer().get(data);\n\t\t\treturn data;\n\t\t}\n\t}\n\n\tprivate static final class IntArrayCodec extends FlatArrayCodec<int[]> {\n\n\t\tprivate final ByteOrder order;\n\n\t\tIntArrayCodec(ByteOrder order) 
{\n\t\t\tsuper(Integer.BYTES, int[]::new);\n\t\t\tthis.order = order;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final int[] data) throws N5IOException {\n\t\t\tfinal ByteBuffer serialized = ByteBuffer.allocate(Integer.BYTES * data.length);\n\t\t\tserialized.order(order).asIntBuffer().put(data);\n\t\t\treturn ReadData.from(serialized);\n\t\t}\n\n\t\t@Override\n\t\tpublic int[] decode(final ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal int[] data = newArray(numElements);\n\t\t\tfinal ByteBuffer byteBuffer = readData.limit(4 * numElements).toByteBuffer();\n\t\t\tfinal IntBuffer intBuffer = byteBuffer.order(order).asIntBuffer();\n\t\t\tintBuffer.get(data);\n\t\t\treturn data;\n\t\t}\n\t}\n\n\tprivate static final class LongArrayCodec extends FlatArrayCodec<long[]> {\n\n\t\tprivate final ByteOrder order;\n\n\t\tLongArrayCodec(ByteOrder order) {\n\t\t\tsuper(Long.BYTES, long[]::new);\n\t\t\tthis.order = order;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final long[] data) throws N5IOException {\n\t\t\tfinal ByteBuffer serialized = ByteBuffer.allocate(Long.BYTES * data.length);\n\t\t\tserialized.order(order).asLongBuffer().put(data);\n\t\t\treturn ReadData.from(serialized);\n\t\t}\n\n\t\t@Override\n\t\tpublic long[] decode(final ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal long[] data = newArray(numElements);\n\t\t\treadData.limit(8 * numElements).toByteBuffer().order(order).asLongBuffer().get(data);\n\t\t\treturn data;\n\t\t}\n\t}\n\n\tprivate static final class FloatArrayCodec extends FlatArrayCodec<float[]> {\n\n\t\tprivate final ByteOrder order;\n\n\t\tFloatArrayCodec(ByteOrder order) {\n\t\t\tsuper(Float.BYTES, float[]::new);\n\t\t\tthis.order = order;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final float[] data) throws N5IOException {\n\t\t\tfinal ByteBuffer serialized = ByteBuffer.allocate(Float.BYTES * 
data.length);\n\t\t\tserialized.order(order).asFloatBuffer().put(data);\n\t\t\treturn ReadData.from(serialized);\n\t\t}\n\n\t\t@Override\n\t\tpublic float[] decode(final ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal float[] data = newArray(numElements);\n\t\t\treadData.limit(4 * numElements).toByteBuffer().order(order).asFloatBuffer().get(data);\n\t\t\treturn data;\n\t\t}\n\t}\n\n\tprivate static final class DoubleArrayCodec extends FlatArrayCodec<double[]> {\n\n\t\tprivate final ByteOrder order;\n\n\t\tDoubleArrayCodec(ByteOrder order) {\n\t\t\tsuper(Double.BYTES, double[]::new);\n\t\t\tthis.order = order;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final double[] data) throws N5IOException {\n\t\t\tfinal ByteBuffer serialized = ByteBuffer.allocate(Double.BYTES * data.length);\n\t\t\tserialized.order(order).asDoubleBuffer().put(data);\n\t\t\treturn ReadData.from(serialized);\n\t\t}\n\n\t\t@Override\n\t\tpublic double[] decode(final ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal double[] data = newArray(numElements);\n\t\t\treadData.limit(8 * numElements).toByteBuffer().order(order).asDoubleBuffer().get(data);\n\t\t\treturn data;\n\t\t}\n\t}\n\n\tprivate static final class N5StringArrayCodec extends FlatArrayCodec<String[]> {\n\n\t\tprivate static final Charset ENCODING = StandardCharsets.UTF_8;\n\t\tprivate static final String NULLCHAR = \"\\0\";\n\n\t\tN5StringArrayCodec() {\n\t\t\tsuper( -1, String[]::new);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(String[] data) throws N5IOException {\n\t\t\tfinal String flattenedArray = String.join(NULLCHAR, data) + NULLCHAR;\n\t\t\treturn ReadData.from(flattenedArray.getBytes(ENCODING));\n\t\t}\n\n\t\t@Override\n\t\tpublic String[] decode(ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal byte[] serializedData = readData.allBytes();\n\t\t\tfinal String rawChars = new String(serializedData, ENCODING);\n\t\t\treturn 
rawChars.split(NULLCHAR);\n\t\t}\n\t}\n\n\tprivate static final class ZarrStringArrayCodec extends FlatArrayCodec<String[]> {\n\n\t\tprivate static final Charset ENCODING = StandardCharsets.UTF_8;\n\n\t\tZarrStringArrayCodec() {\n\t\t\tsuper( -1, String[]::new);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(String[] data) throws N5IOException {\n\t\t\tfinal int N = data.length;\n\t\t\tfinal byte[][] encodedStrings = Arrays.stream(data).map(str -> str.getBytes(ENCODING)).toArray(byte[][]::new);\n\t\t\tfinal int[] lengths = Arrays.stream(encodedStrings).mapToInt(a -> a.length).toArray();\n\t\t\tfinal int totalLength = Arrays.stream(lengths).sum();\n\t\t\tfinal ByteBuffer buf = ByteBuffer.wrap(new byte[totalLength + 4 * N + 4]);\n\t\t\tbuf.order(ByteOrder.LITTLE_ENDIAN);\n\t\t\tbuf.putInt(N);\n\t\t\tfor (int i = 0; i < N; ++i) {\n\t\t\t\tbuf.putInt(lengths[i]);\n\t\t\t\tbuf.put(encodedStrings[i]);\n\t\t\t}\n\t\t\treturn ReadData.from(buf.array());\n\t\t}\n\n\t\t@Override\n\t\tpublic String[] decode(ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal ByteBuffer serialized = readData.toByteBuffer();\n\t\t\tserialized.order(ByteOrder.LITTLE_ENDIAN);\n\n\t\t\t// sanity check to avoid out of memory errors\n\t\t\tif (serialized.limit() < 4)\n\t\t\t\tthrow new RuntimeException(\"Corrupt buffer, data seems truncated.\");\n\n\t\t\tfinal int n = serialized.getInt();\n\t\t\tif (serialized.limit() < n)\n\t\t\t\tthrow new RuntimeException(\"Corrupt buffer, data seems truncated.\");\n\n\t\t\tfinal String[] actualData = new String[n];\n\t\t\tfor (int i = 0; i < n; ++i) {\n\t\t\t\tfinal int length = serialized.getInt();\n\t\t\t\tfinal byte[] encodedString = new byte[length];\n\t\t\t\tserialized.get(encodedString);\n\t\t\t\tactualData[i] = new String(encodedString, ENCODING);\n\t\t\t}\n\t\t\treturn actualData;\n\t\t}\n\t}\n\n\tprivate static final class ObjectArrayCodec extends FlatArrayCodec<byte[]> {\n\n\t\tObjectArrayCodec() {\n\t\t\tsuper(-1, 
byte[]::new);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(byte[] data) throws N5IOException {\n\t\t\treturn ReadData.from(data);\n\t\t}\n\n\t\t@Override\n\t\tpublic byte[] decode(ReadData readData, int numElements) throws N5IOException {\n\t\t\tfinal byte[] data = newArray(numElements);\n\t\t\ttry (final InputStream is = readData.inputStream()) {\n\t\t\t\tnew DataInputStream(is).readFully(data);\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t\treturn data;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/IdentityCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@NameConfig.Name(IdentityCodec.TYPE)\npublic class IdentityCodec implements DeterministicSizeDataCodec, DataCodecInfo {\n\n\tprivate static final long serialVersionUID = 8354269325800855621L;\n\n\tpublic static final String TYPE = \"id\";\n\n\t@Override\n\tpublic String getType() {\n\n\t\treturn TYPE;\n\t}\n\n\t@Override\n\tpublic ReadData decode(ReadData readData) {\n\n\t\treturn readData;\n\t}\n\n\t@Override\n\tpublic ReadData encode(ReadData readData) {\n\n\t\treturn readData;\n\t}\n\n\t@Override public DataCodec create() {\n\n\t\treturn this;\n\t}\n\n\t@Override\n\tpublic long encodedSize(long size) {\n\n\t\treturn size;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/IndexCodecAdapter.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\npublic class IndexCodecAdapter {\n\n\tprivate final BlockCodecInfo blockCodecInfo;\n\tprivate final DeterministicSizeCodecInfo[] dataCodecs;\n\n\tpublic IndexCodecAdapter(final BlockCodecInfo blockCodecInfo, final DeterministicSizeCodecInfo... dataCodecs) {\n\n\t\tthis.blockCodecInfo = blockCodecInfo;\n\t\tthis.dataCodecs = dataCodecs;\n\t}\n\n\tpublic BlockCodecInfo getBlockCodecInfo() {\n\n\t\treturn blockCodecInfo;\n\t}\n\n\tpublic DataCodecInfo[] getDataCodecs() {\n\n\t\tfinal DataCodecInfo[] dataCodecs = new DataCodecInfo[this.dataCodecs.length];\n\t\tSystem.arraycopy(this.dataCodecs, 0, dataCodecs, 0, this.dataCodecs.length);\n\t\treturn dataCodecs;\n\t}\n\n\tpublic long encodedSize(long initialSize) {\n\t\tlong totalNumBytes = initialSize;\n\t\tfor (DeterministicSizeCodecInfo codec : dataCodecs) {\n\t\t\ttotalNumBytes = codec.encodedSize(totalNumBytes);\n\t\t}\n\t\treturn totalNumBytes;\n\t}\n\n\tpublic static IndexCodecAdapter create(final CodecInfo... codecs) {\n\t\tif (codecs == null || codecs.length == 0)\n\t\t\treturn new IndexCodecAdapter(new RawBlockCodecInfo());\n\n\t\t// a leading BlockCodecInfo is used as-is; the remaining codecs become the index data codecs\n\t\tif (codecs[0] instanceof BlockCodecInfo) {\n\t\t\tfinal DeterministicSizeCodecInfo[] indexCodecs = new DeterministicSizeCodecInfo[codecs.length - 1];\n\t\t\tSystem.arraycopy(codecs, 1, indexCodecs, 0, codecs.length - 1);\n\t\t\treturn new IndexCodecAdapter((BlockCodecInfo)codecs[0], indexCodecs);\n\t\t}\n\n\t\tfinal DeterministicSizeCodecInfo[] indexCodecs = new DeterministicSizeCodecInfo[codecs.length];\n\t\tSystem.arraycopy(codecs, 0, indexCodecs, 0, codecs.length);\n\t\treturn new IndexCodecAdapter(new RawBlockCodecInfo(), indexCodecs);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/N5BlockCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@NameConfig.Name(value = N5BlockCodecInfo.TYPE)\npublic class N5BlockCodecInfo implements BlockCodecInfo {\n\n\tprivate static final long serialVersionUID = 3523505403978222360L;\n\n\tpublic static final String TYPE = \"n5bytes\";\n\n\tprivate transient DatasetAttributes attributes;\n\n\t@Override public long[] getKeyPositionForBlock(DatasetAttributes attributes, DataBlock<?> datablock) {\n\n\t\treturn datablock.getGridPosition();\n\t}\n\n\t@Override public long[] getKeyPositionForBlock(DatasetAttributes attributes, long... blockPosition) {\n\n\t\treturn blockPosition;\n\t}\n\n\t@Override public long encodedSize(long size) {\n\n\t\tfinal int[] blockSize = attributes.getBlockSize();\n\t\tint headerSize = new N5BlockCodecs.BlockHeader(blockSize, DataBlock.getNumElements(blockSize)).getSize();\n\t\treturn headerSize + size;\n\t}\n\n\t@Override\n\tpublic String getType() {\n\n\t\treturn TYPE;\n\t}\n\n\t@Override\n\tpublic <T> BlockCodec<T> create(final DataType dataType, final int[] blockSize, final DataCodecInfo... codecInfos) {\n\t\treturn N5BlockCodecs.create(dataType, DataCodec.create(codecInfos));\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/N5BlockCodecs.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport java.io.DataInputStream;\nimport java.io.DataOutputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\n\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock.DataBlockFactory;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DoubleArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.FloatArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.LongArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.ShortArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.StringDataBlock;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\nimport static org.janelia.saalfeldlab.n5.N5Exception.*;\nimport static org.janelia.saalfeldlab.n5.codec.N5BlockCodecs.BlockHeader.MODE_DEFAULT;\nimport static org.janelia.saalfeldlab.n5.codec.N5BlockCodecs.BlockHeader.MODE_OBJECT;\nimport static org.janelia.saalfeldlab.n5.codec.N5BlockCodecs.BlockHeader.MODE_VARLENGTH;\nimport static org.janelia.saalfeldlab.n5.codec.N5BlockCodecs.BlockHeader.headerSizeInBytes;\n\npublic class N5BlockCodecs {\n\n\tprivate static final BlockCodecFactory<byte[]>   BYTE   = c -> new DefaultBlockCodec<>(FlatArrayCodec.BYTE, ByteArrayDataBlock::new, c);\n\tprivate static final BlockCodecFactory<short[]>  SHORT  = c -> new DefaultBlockCodec<>(FlatArrayCodec.SHORT_BIG_ENDIAN, ShortArrayDataBlock::new, c);\n\tprivate static final BlockCodecFactory<int[]>    INT    = c -> new DefaultBlockCodec<>(FlatArrayCodec.INT_BIG_ENDIAN, IntArrayDataBlock::new, c);\n\tprivate static final BlockCodecFactory<long[]>   LONG   = c -> new DefaultBlockCodec<>(FlatArrayCodec.LONG_BIG_ENDIAN, LongArrayDataBlock::new, c);\n\tprivate static final BlockCodecFactory<float[]>  FLOAT  = c -> new 
DefaultBlockCodec<>(FlatArrayCodec.FLOAT_BIG_ENDIAN, FloatArrayDataBlock::new, c);\n\tprivate static final BlockCodecFactory<double[]> DOUBLE = c -> new DefaultBlockCodec<>(FlatArrayCodec.DOUBLE_BIG_ENDIAN, DoubleArrayDataBlock::new, c);\n\tprivate static final BlockCodecFactory<String[]> STRING = c -> new StringBlockCodec(c);\n\tprivate static final BlockCodecFactory<byte[]>   OBJECT = c -> new ObjectBlockCodec(c);\n\n\tprivate N5BlockCodecs() {}\n\n\tpublic static <T> BlockCodec<T> create(\n\t\t\tfinal DataType dataType,\n\t\t\tfinal DataCodec codec) {\n\n\t\tfinal BlockCodecFactory<?> factory;\n\t\tswitch (dataType) {\n\t\tcase UINT8:\n\t\tcase INT8:\n\t\t\tfactory = N5BlockCodecs.BYTE;\n\t\t\tbreak;\n\t\tcase UINT16:\n\t\tcase INT16:\n\t\t\tfactory = N5BlockCodecs.SHORT;\n\t\t\tbreak;\n\t\tcase UINT32:\n\t\tcase INT32:\n\t\t\tfactory = N5BlockCodecs.INT;\n\t\t\tbreak;\n\t\tcase UINT64:\n\t\tcase INT64:\n\t\t\tfactory = N5BlockCodecs.LONG;\n\t\t\tbreak;\n\t\tcase FLOAT32:\n\t\t\tfactory = N5BlockCodecs.FLOAT;\n\t\t\tbreak;\n\t\tcase FLOAT64:\n\t\t\tfactory = N5BlockCodecs.DOUBLE;\n\t\t\tbreak;\n\t\tcase STRING:\n\t\t\tfactory = N5BlockCodecs.STRING;\n\t\t\tbreak;\n\t\tcase OBJECT:\n\t\t\tfactory = N5BlockCodecs.OBJECT;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tthrow new IllegalArgumentException(\"Unsupported data type: \" + dataType);\n\t\t}\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tfinal BlockCodecFactory<T> tFactory = (BlockCodecFactory<T>)factory;\n\t\treturn tFactory.create(codec);\n\t}\n\n\tprivate interface BlockCodecFactory<T> {\n\n\t\t/**\n\t\t * Create a {@link BlockCodec} that uses the specified {@code DataCodec}\n\t\t * and de/serializes {@code DataBlock<T>} to N5 format.\n\t\t *\n\t\t * @return N5 {@code BlockCodec} using the specified {@code DataCodec}\n\t\t */\n\t\tBlockCodec<T> create(DataCodec dataCodec);\n\t}\n\n\tabstract static class N5AbstractBlockCodec<T> implements BlockCodec<T> {\n\n\t\tfinal FlatArrayCodec<T> dataCodec;\n\t\tprivate final 
DataBlockFactory<T> dataBlockFactory;\n\t\tfinal DataCodec codec;\n\n\t\tN5AbstractBlockCodec(FlatArrayCodec<T> dataCodec, DataBlockFactory<T> dataBlockFactory, DataCodec codec) {\n\t\t\tthis.dataCodec = dataCodec;\n\t\t\tthis.dataBlockFactory = dataBlockFactory;\n\t\t\tthis.codec = codec;\n\t\t}\n\n\t\tabstract BlockHeader createBlockHeader(final DataBlock<T> dataBlock, ReadData blockData) throws N5IOException;\n\n\t\t@Override\n\t\tpublic ReadData encode(DataBlock<T> dataBlock) throws N5IOException {\n\t\t\treturn ReadData.from(out -> {\n\t\t\t\tfinal ReadData dataReadData = dataCodec.encode(dataBlock.getData());\n\t\t\t\tfinal BlockHeader header = createBlockHeader(dataBlock, dataReadData);\n\n\t\t\t\theader.writeTo(out);\n\t\t\t\tfinal ReadData encodedData = codec.encode(dataReadData);\n\t\t\t\tencodedData.writeTo(out);\n\t\t\t});\n\t\t}\n\n\t\tabstract BlockHeader decodeBlockHeader(final InputStream in) throws N5IOException;\n\n\t\t@Override\n\t\tpublic DataBlock<T> decode(final ReadData readData, final long[] gridPosition) throws N5IOException {\n\n\t\t\t// read block header with input stream since header is variable length\n\t\t\tfinal BlockHeader header;\n\t\t\ttry(final InputStream in = readData.inputStream()) {\n\t\t\t\theader = decodeBlockHeader(in);\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\n\t\t\t// determine length\n\t\t\t// and slice original read data so that bodyReadData is known length\n\t\t\tfinal int numElements = header.numElements();\n\t\t\tfinal long bodyLength = readData.length() - header.getSize();\n\t\t\tfinal ReadData bodyReadData = bodyLength > 0 ? 
readData.slice(header.getSize(), bodyLength) : ReadData.empty();\n\t\t\tfinal ReadData decodeData = codec.decode(bodyReadData);\n\n\t\t\t// the dataCodec knows the number of bytes per element\n\t\t\tfinal T data = dataCodec.decode(decodeData, numElements);\n\t\t\treturn dataBlockFactory.createDataBlock(header.blockSize(), gridPosition, data);\n\t\t}\n\t}\n\n\n\t/**\n\t * DataBlockCodec for all N5 data types, except STRING and OBJECT\n\t */\n\tprivate static class DefaultBlockCodec<T> extends N5AbstractBlockCodec<T> {\n\n\t\tDefaultBlockCodec(\n\t\t\t\tfinal FlatArrayCodec<T> dataCodec,\n\t\t\t\tfinal DataBlockFactory<T> dataBlockFactory,\n\t\t\t\tfinal DataCodec codec) {\n\n\t\t\tsuper(dataCodec, dataBlockFactory, codec);\n\t\t}\n\n\t\t@Override\n\t\tprotected BlockHeader createBlockHeader(final DataBlock<T> dataBlock, ReadData blockData) throws N5IOException {\n\n\t\t\treturn new BlockHeader(dataBlock.getSize(), dataBlock.getNumElements());\n\t\t}\n\n\t\t@Override\n\t\tprotected BlockHeader decodeBlockHeader(final InputStream in) throws N5IOException {\n\n\t\t\treturn BlockHeader.readFrom(in, MODE_DEFAULT, MODE_VARLENGTH);\n\t\t}\n\n\t\t@Override\n\t\tpublic long encodedSize(final int[] blockSize) throws UnsupportedOperationException {\n\t\t\tif (codec instanceof DeterministicSizeDataCodec) {\n\t\t\t\tfinal int bytesPerElement = dataCodec.bytesPerElement();\n\t\t\t\tfinal int numElements = DataBlock.getNumElements(blockSize);\n\t\t\t\tfinal int headerSize = headerSizeInBytes(MODE_DEFAULT, blockSize.length);\n\t\t\t\treturn headerSize + ((DeterministicSizeDataCodec) codec).encodedSize((long) numElements * bytesPerElement);\n\t\t\t} else {\n\t\t\t\tthrow new UnsupportedOperationException();\n\t\t\t}\n\t\t}\n\t}\n\n\t/**\n\t * DataBlockCodec for N5 data type STRING\n\t */\n\tprivate static class StringBlockCodec extends N5AbstractBlockCodec<String[]> {\n\n\t\tStringBlockCodec(final DataCodec codec) {\n\n\t\t\tsuper(FlatArrayCodec.STRING, StringDataBlock::new, 
codec);\n\t\t}\n\n\t\t@Override\n\t\tprotected BlockHeader createBlockHeader(final DataBlock<String[]> dataBlock, ReadData blockData) throws N5IOException {\n\n\t\t\treturn new BlockHeader(MODE_VARLENGTH, dataBlock.getSize(), (int)blockData.length());\n\t\t}\n\n\t\t@Override\n\t\tprotected BlockHeader decodeBlockHeader(final InputStream in) throws N5IOException {\n\n\t\t\treturn BlockHeader.readFrom(in, MODE_DEFAULT, MODE_VARLENGTH);\n\t\t}\n\t}\n\n\t/**\n\t * DataBlockCodec for N5 data type OBJECT\n\t */\n\tprivate static class ObjectBlockCodec extends N5AbstractBlockCodec<byte[]> {\n\n\t\tObjectBlockCodec(final DataCodec codec) {\n\n\t\t\tsuper(FlatArrayCodec.OBJECT, ByteArrayDataBlock::new, codec);\n\t\t}\n\n\t\t@Override\n\t\tprotected BlockHeader createBlockHeader(DataBlock<byte[]> dataBlock, ReadData blockData) {\n\n\t\t\treturn new BlockHeader(null, dataBlock.getNumElements());\n\t\t}\n\n\t\t@Override\n\t\tprotected BlockHeader decodeBlockHeader(final InputStream in) throws N5IOException {\n\n\t\t\treturn BlockHeader.readFrom(in, MODE_OBJECT);\n\t\t}\n\t}\n\n\tstatic class BlockHeader {\n\n\t\tpublic static final short MODE_DEFAULT = 0;\n\t\tpublic static final short MODE_VARLENGTH = 1;\n\t\tpublic static final short MODE_OBJECT = 2;\n\n\t\tprivate final short mode;\n\t\tprivate final int[] blockSize;\n\t\tprivate final int numElements;\n\n\t\tBlockHeader(final short mode, final int[] blockSize, final int numElements) {\n\n\t\t\tthis.mode = mode;\n\t\t\tthis.blockSize = blockSize;\n\t\t\tthis.numElements = numElements;\n\t\t}\n\n\t\tBlockHeader(final int[] blockSize, final int numElements) {\n\n\t\t\tif (blockSize == null) {\n\t\t\t\tthis.mode = MODE_OBJECT;\n\t\t\t} else if (DataBlock.getNumElements(blockSize) == numElements) {\n\t\t\t\tthis.mode = MODE_DEFAULT;\n\t\t\t} else {\n\t\t\t\tthis.mode = MODE_VARLENGTH;\n\t\t\t}\n\t\t\tthis.blockSize = blockSize;\n\t\t\tthis.numElements = numElements;\n\t\t}\n\n\t\tpublic int getSize() {\n\n\t\t\treturn 
headerSizeInBytes(mode, blockSize == null ? 0 : blockSize.length);\n\t\t}\n\n\t\tpublic int[] blockSize() {\n\n\t\t\treturn blockSize;\n\t\t}\n\n\t\tpublic int numElements() {\n\n\t\t\treturn numElements;\n\t\t}\n\n\t\tprivate static int[] readBlockSize(final DataInputStream dis) throws N5IOException {\n\n\t\t\ttry {\n\t\t\t\tfinal int nDim = dis.readShort();\n\t\t\t\tfinal int[] blockSize = new int[nDim];\n\t\t\t\tfor (int d = 0; d < nDim; ++d)\n\t\t\t\t\tblockSize[d] = dis.readInt();\n\t\t\t\treturn blockSize;\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t}\n\n\t\tprivate static void writeBlockSize(final int[] blockSize, final DataOutputStream dos) throws N5IOException {\n\n\t\t\ttry {\n\t\t\t\tdos.writeShort(blockSize.length);\n\t\t\t\tfor (final int size : blockSize)\n\t\t\t\t\tdos.writeInt(size);\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t}\n\n\t\tstatic int headerSizeInBytes(final short mode, final int numDimensions) {\n\t\t\tswitch (mode) {\n\t\t\tcase MODE_DEFAULT:\n\t\t\t\treturn 2 + // 1 short for mode\n\t\t\t\t\t\t2 + // 1 short for blockSize.length\n\t\t\t\t\t\t4 * numDimensions; // 1 int for each blockSize element\n\t\t\tcase MODE_VARLENGTH:\n\t\t\t\treturn 2 +// 1 short for mode\n\t\t\t\t\t\t2 + // 1 short for blockSize.length\n\t\t\t\t\t\t4 * numDimensions + // 1 int for each blockSize dimension\n\t\t\t\t\t\t4; // 1 int for numElements\n\t\t\tcase MODE_OBJECT:\n\t\t\t\treturn 2 + // 1 short for mode\n\t\t\t\t\t\t4; // 1 int for numElements\n\t\t\tdefault:\n\t\t\t\tthrow new N5Exception(\"unexpected mode: \" + mode);\n\t\t\t}\n\t\t}\n\n\t\tvoid writeTo(final OutputStream out) throws N5IOException {\n\n\t\t\ttry {\n\t\t\t\tfinal DataOutputStream dos = new DataOutputStream(out);\n\t\t\t\tdos.writeShort(mode);\n\t\t\t\tswitch (mode) {\n\t\t\t\tcase MODE_DEFAULT:// default\n\t\t\t\t\twriteBlockSize(blockSize, dos);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MODE_VARLENGTH:// 
varlength\n\t\t\t\t\twriteBlockSize(blockSize, dos);\n\t\t\t\t\tdos.writeInt(numElements);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MODE_OBJECT: // object\n\t\t\t\t\tdos.writeInt(numElements);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tthrow new N5Exception(\"unexpected mode: \" + mode);\n\t\t\t\t}\n\t\t\t\tdos.flush();\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\n\t\t}\n\n\t\tstatic BlockHeader readFrom(final InputStream in, short... allowedModes) throws N5IOException, N5Exception {\n\n\t\t\ttry {\n\t\t\t\tfinal DataInputStream dis = new DataInputStream(in);\n\t\t\t\tfinal short mode = dis.readShort();\n\t\t\t\tfinal int[] blockSize;\n\t\t\t\tfinal int numElements;\n\t\t\t\tswitch (mode) {\n\t\t\t\tcase MODE_DEFAULT:// default\n\t\t\t\t\tblockSize = readBlockSize(dis);\n\t\t\t\t\tnumElements = DataBlock.getNumElements(blockSize);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MODE_VARLENGTH:// varlength\n\t\t\t\t\tblockSize = readBlockSize(dis);\n\t\t\t\t\tnumElements = dis.readInt();\n\t\t\t\t\tbreak;\n\t\t\t\tcase MODE_OBJECT: // object\n\t\t\t\t\tblockSize = null;\n\t\t\t\t\tnumElements = dis.readInt();\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tthrow new N5Exception(\"Unexpected mode: \" + mode);\n\t\t\t\t}\n\n\t\t\t\tboolean modeIsOk = allowedModes == null || allowedModes.length == 0;\n\t\t\t\tfor (int i = 0; !modeIsOk && i < allowedModes.length; ++i) {\n\t\t\t\t\tif (mode == allowedModes[i]) {\n\t\t\t\t\t\tmodeIsOk = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!modeIsOk) {\n\t\t\t\t\tthrow new N5Exception(\"Unexpected mode: \" + mode);\n\t\t\t\t}\n\n\t\t\t\treturn new BlockHeader(mode, blockSize, numElements);\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/RawBlockCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport java.nio.ByteOrder;\n\nimport com.google.gson.JsonDeserializationContext;\nimport com.google.gson.JsonDeserializer;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonParseException;\nimport com.google.gson.JsonPrimitive;\nimport com.google.gson.JsonSerializationContext;\nimport com.google.gson.JsonSerializer;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n@NameConfig.Name(value = RawBlockCodecInfo.TYPE)\npublic class RawBlockCodecInfo implements BlockCodecInfo {\n\n\tprivate static final long serialVersionUID = 3282569607795127005L;\n\n\tpublic static final String TYPE = \"raw-bytes\";\n\n\t@NameConfig.Parameter(value = \"endian\", optional = true)\n\tprivate final ByteOrder byteOrder;\n\n\tpublic RawBlockCodecInfo() {\n\n\t\tthis(ByteOrder.BIG_ENDIAN);\n\t}\n\n\tpublic RawBlockCodecInfo(final ByteOrder byteOrder) {\n\n\t\tthis.byteOrder = byteOrder;\n\t}\n\n\t@Override\n\tpublic String getType() {\n\n\t\treturn TYPE;\n\t}\n\n\tpublic ByteOrder getByteOrder() {\n\t\treturn byteOrder;\n\t}\n\n\t@Override\n\tpublic <T> BlockCodec<T> create(final DataType dataType, final int[] blockSize, final DataCodecInfo... 
codecInfos) {\n\t\tensureValidByteOrder(dataType, getByteOrder());\n\t\treturn RawBlockCodecs.create(dataType, byteOrder, blockSize, DataCodec.create(codecInfos));\n\t}\n\n\tpublic static void ensureValidByteOrder(final DataType dataType, final ByteOrder byteOrder) {\n\n\t\tswitch (dataType) {\n\t\tcase INT8:\n\t\tcase UINT8:\n\t\tcase STRING:\n\t\tcase OBJECT:\n\t\t\treturn;\n\t\t}\n\n\t\tif (byteOrder == null)\n\t\t\tthrow new IllegalArgumentException(\"DataType (\" + dataType + \") requires ByteOrder, but was null\");\n\t}\n\n\tpublic static ByteOrderAdapter byteOrderAdapter = new ByteOrderAdapter();\n\n\tpublic static class ByteOrderAdapter implements JsonDeserializer<ByteOrder>, JsonSerializer<ByteOrder> {\n\n\t\t@Override\n\t\tpublic JsonElement serialize(ByteOrder src, java.lang.reflect.Type typeOfSrc,\n\t\t\t\tJsonSerializationContext context) {\n\n\t\t\tif (src.equals(ByteOrder.LITTLE_ENDIAN))\n\t\t\t\treturn new JsonPrimitive(\"little\");\n\t\t\telse\n\t\t\t\treturn new JsonPrimitive(\"big\");\n\t\t}\n\n\t\t@Override\n\t\tpublic ByteOrder deserialize(JsonElement json, java.lang.reflect.Type typeOfT,\n\t\t\t\tJsonDeserializationContext context) throws JsonParseException {\n\n\t\t\tif (json.getAsString().equals(\"little\"))\n\t\t\t\treturn ByteOrder.LITTLE_ENDIAN;\n\t\t\tif (json.getAsString().equals(\"big\"))\n\t\t\t\treturn ByteOrder.BIG_ENDIAN;\n\n\t\t\treturn null;\n\t\t}\n\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/RawBlockCodecs.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DoubleArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.FloatArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.LongArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.ShortArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.StringDataBlock;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\nimport java.nio.ByteOrder;\n\npublic class RawBlockCodecs {\n\n\tprivate static final BlockCodecFactory<byte[]>   BYTE   = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.BYTE, ByteArrayDataBlock::new, blockSize, codec);\n\tprivate static final BlockCodecFactory<short[]>  SHORT  = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.SHORT(byteOrder), ShortArrayDataBlock::new, blockSize, codec);\n\tprivate static final BlockCodecFactory<int[]>    INT    = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.INT(byteOrder), IntArrayDataBlock::new, blockSize, codec);\n\tprivate static final BlockCodecFactory<long[]>   LONG   = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.LONG(byteOrder), LongArrayDataBlock::new, blockSize, codec);\n\tprivate static final BlockCodecFactory<float[]>  FLOAT  = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.FLOAT(byteOrder), FloatArrayDataBlock::new, blockSize, codec);\n\tprivate static final BlockCodecFactory<double[]> DOUBLE = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.DOUBLE(byteOrder), DoubleArrayDataBlock::new, blockSize, codec);\n\tprivate static final BlockCodecFactory<String[]> STRING = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.ZARR_STRING, StringDataBlock::new,  blockSize, codec);\n\tprivate static final 
BlockCodecFactory<byte[]>   OBJECT = (byteOrder, blockSize, codec) -> new RawBlockCodec<>(FlatArrayCodec.OBJECT, ByteArrayDataBlock::new, blockSize, codec);\n\n\tprivate RawBlockCodecs() {}\n\n\tpublic static <T> BlockCodec<T> create(\n\t\t\tfinal DataType dataType,\n\t\t\tfinal ByteOrder byteOrder,\n\t\t\tfinal int[] blockSize,\n\t\t\tfinal DataCodec codec) {\n\t\tfinal BlockCodecFactory<?> factory;\n\t\tswitch (dataType) {\n\t\tcase UINT8:\n\t\tcase INT8:\n\t\t\tfactory = RawBlockCodecs.BYTE;\n\t\t\tbreak;\n\t\tcase UINT16:\n\t\tcase INT16:\n\t\t\tfactory = RawBlockCodecs.SHORT;\n\t\t\tbreak;\n\t\tcase UINT32:\n\t\tcase INT32:\n\t\t\tfactory = RawBlockCodecs.INT;\n\t\t\tbreak;\n\t\tcase UINT64:\n\t\tcase INT64:\n\t\t\tfactory = RawBlockCodecs.LONG;\n\t\t\tbreak;\n\t\tcase FLOAT32:\n\t\t\tfactory = RawBlockCodecs.FLOAT;\n\t\t\tbreak;\n\t\tcase FLOAT64:\n\t\t\tfactory = RawBlockCodecs.DOUBLE;\n\t\t\tbreak;\n\t\tcase STRING:\n\t\t\tfactory = RawBlockCodecs.STRING;\n\t\t\tbreak;\n\t\t// TODO: What about OBJECT?\n\t\tdefault:\n\t\t\tthrow new IllegalArgumentException(\"Unsupported data type: \" + dataType);\n\t\t}\n\t\tfinal BlockCodecFactory<T> tFactory = (BlockCodecFactory<T>) factory;\n\t\treturn tFactory.create(byteOrder, blockSize, codec);\n\t}\n\n\tprivate interface BlockCodecFactory<T> {\n\n\t\t/**\n\t\t * Create a {@link BlockCodec} that uses the specified {@code ByteOrder}\n\t\t * and {@code DataCodec} and de/serializes {@code DataBlock<T>} of the\n\t\t * specified {@code blockSize} to raw format.\n\t\t *\n\t\t * @return Raw {@code BlockCodec}\n\t\t */\n\t\tBlockCodec<T> create(ByteOrder byteOrder, int[] blockSize, DataCodec dataCodec);\n\t}\n\n\tprivate static class RawBlockCodec<T> implements BlockCodec<T> {\n\n\t\tprivate final FlatArrayCodec<T> dataCodec;\n\t\tprivate final DataBlock.DataBlockFactory<T> dataBlockFactory;\n\t\tprivate final int[] blockSize;\n\t\tprivate final int numElements;\n\t\tprivate final DataCodec 
codec;\n\n\t\tRawBlockCodec(\n\t\t\t\tfinal FlatArrayCodec<T> dataCodec,\n\t\t\t\tfinal DataBlock.DataBlockFactory<T> dataBlockFactory,\n\t\t\t\tfinal int[] blockSize,\n\t\t\t\tfinal DataCodec codec) {\n\n\t\t\tthis.dataCodec = dataCodec;\n\t\t\tthis.dataBlockFactory = dataBlockFactory;\n\t\t\tthis.blockSize = blockSize;\n\t\t\tthis.numElements = DataBlock.getNumElements(blockSize);\n\t\t\tthis.codec = codec;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(DataBlock<T> dataBlock) {\n\n\t\t\treturn ReadData.from(out -> {\n\t\t\t\tfinal ReadData blockData = dataCodec.encode(dataBlock.getData());\n\t\t\t\tcodec.encode(blockData).writeTo(out);\n\t\t\t});\n\t\t}\n\n\t\t@Override\n\t\tpublic DataBlock<T> decode(ReadData readData, long[] gridPosition) {\n\n\t\t\tfinal ReadData decodeData = codec.decode(readData);\n\t\t\tfinal T data = dataCodec.decode(decodeData, numElements);\n\t\t\treturn dataBlockFactory.createDataBlock(blockSize, gridPosition, data);\n\t\t}\n\n\t\t@Override\n\t\tpublic long encodedSize(final int[] blockSize) throws UnsupportedOperationException {\n\t\t\tif (codec instanceof DeterministicSizeDataCodec) {\n\t\t\t\tfinal int bytesPerElement = dataCodec.bytesPerElement();\n\t\t\t\tfinal int numElements = DataBlock.getNumElements(blockSize);\n\t\t\t\treturn ((DeterministicSizeDataCodec) codec).encodedSize((long) numElements * bytesPerElement);\n\t\t\t} else {\n\t\t\t\tthrow new UnsupportedOperationException();\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/checksum/ChecksumCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec.checksum;\n\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.ByteOrder;\nimport java.util.function.Supplier;\nimport java.util.zip.CheckedOutputStream;\nimport java.util.zip.Checksum;\n\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.codec.CodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodec;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DeterministicSizeDataCodec;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * A {@link CodecInfo} that appends a checksum to data when encoding and can\n * validate against that checksum when decoding.\n * <p>\n * Checksum codec instances are expected to be thread safe, but {@link Checksum}\n * implementations may not be. As a result, subclasses of this implementation\n * provide a {@link Supplier} for an appropriate Checksum type, a new instance\n * of which is created by {@link #getChecksum()} for each\n * {@link #encode(ReadData)} and {@link #decode(ReadData)} call.\n */\npublic abstract class ChecksumCodec implements DataCodec, DataCodecInfo, DeterministicSizeDataCodec {\n\n\tprivate static final long serialVersionUID = 3141427377277375077L;\n\n\tprivate int numChecksumBytes;\n\n\tprivate Supplier<Checksum> checksumSupplier;\n\n\tpublic ChecksumCodec(Supplier<Checksum> checksumSupplier, int numChecksumBytes) {\n\n\t\tthis.checksumSupplier = checksumSupplier;\n\t\tthis.numChecksumBytes = numChecksumBytes;\n\t}\n\n\t/**\n\t * Returns a new {@link Checksum} instance. 
\n\t *\n\t * @return the checksum \n\t */\n\tpublic Checksum getChecksum() {\n\n\t\treturn checksumSupplier.get();\n\t}\n\n\tpublic int numChecksumBytes() {\n\n\t\treturn numChecksumBytes;\n\t}\n\n\tprivate CheckedOutputStream createStream(OutputStream out) {\n\n\t\tfinal Checksum checksum = getChecksum();\n\t\treturn new CheckedOutputStream(out, checksum) {\n\t\t\tprivate boolean closed = false;\n\t\t\t@Override\n\t\t\tpublic void close() throws IOException {\n\t\t\t\tif (!closed) {\n\t\t\t\t\twriteChecksum(checksum, out);\n\t\t\t\t\tclosed = true;\n\t\t\t\t\tout.close();\n\t\t\t\t}\n\t\t\t}\n\t\t};\n\t}\n\n\t@Override public ReadData encode(ReadData readData) {\n\n\t\treturn readData.encode(this::createStream);\n\t}\n\n\t@Override public ReadData decode(ReadData readData) throws N5IOException {\n\n\t\tfinal ReadData rdm = readData.materialize();\n\t\tfinal long N = rdm.requireLength();\n\n\t\tfinal ReadData data = rdm.slice(0, N - numChecksumBytes);\n\t\tfinal long calculatedChecksum = computeChecksum(data);\n\n\t\tfinal ReadData checksumRd = rdm.slice(N - numChecksumBytes, numChecksumBytes);\n\t\tfinal long storedChecksum = readChecksum(checksumRd);\n\n\t\tif( calculatedChecksum != storedChecksum)\n\t\t\tthrow new N5Exception(String.format(\"Calculated checksum (%d) does not match stored checksum (%d).\",\n\t\t\t\t\tcalculatedChecksum, storedChecksum));\n\n\t\treturn data;\n\t}\n\n\t@Override\n\tpublic long encodedSize(final long size) {\n\n\t\treturn size + numChecksumBytes();\n\t}\n\n\tprotected long readChecksum(ReadData checksumData) {\n\n\t\t// the computed checksum is a long that can take values in [0, 2^32 - 1]\n\t\t// so convert the four bytes to an appropriate long\n\t\tByteBuffer buf = ByteBuffer.allocate(8);\n\t\tbuf.order(ByteOrder.LITTLE_ENDIAN);\n\t\tbuf.put(checksumData.allBytes());\n\t\tbuf.putInt(0);\n\t\tbuf.rewind();\n\t\treturn buf.getLong();\n\t}\n\n\tprotected long computeChecksum(ReadData data) {\n\t\tfinal Checksum checksum = 
getChecksum();\n\t\tchecksum.update(data.allBytes(), 0, (int)data.requireLength());\n\t\treturn checksum.getValue();\n\t}\n\n\t/**\n\t * Return the value of the checksum as a {@link ByteBuffer} to be serialized.\n\t *\n\t * @return a ByteBuffer representing the checksum value\n\t */\n\tpublic abstract ByteBuffer getChecksumValue(Checksum checksum);\n\n\tprotected void writeChecksum(Checksum checksum, OutputStream out) throws IOException {\n\n\t\tout.write(getChecksumValue(checksum).array());\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/checksum/ChecksumException.java",
    "content": "package org.janelia.saalfeldlab.n5.codec.checksum;\n\npublic class ChecksumException extends Exception {\n\n\tprivate static final long serialVersionUID = 905130066386622561L;\n\n\tpublic ChecksumException(final String message) {\n\n\t\tsuper(message);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/checksum/Crc32cChecksumCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec.checksum;\n\nimport org.apache.commons.codec.digest.PureJavaCrc32C;\nimport org.janelia.saalfeldlab.n5.codec.DataCodec;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\nimport java.nio.ByteBuffer;\nimport java.nio.ByteOrder;\nimport java.util.zip.Checksum;\n\n@NameConfig.Name(Crc32cChecksumCodec.TYPE)\npublic class Crc32cChecksumCodec extends ChecksumCodec {\n\n\tprivate static final long serialVersionUID = 7424151868725442500L;\n\n\tpublic static final String TYPE = \"crc32c\";\n\n\tpublic Crc32cChecksumCodec() {\n\n\t\tsuper(() -> new PureJavaCrc32C(), 4);\n\t}\n\n\t@Override\n\tpublic ByteBuffer getChecksumValue(Checksum checksum) {\n\n\t\tfinal ByteBuffer buf = ByteBuffer.allocate(numChecksumBytes());\n\t\tbuf.order(ByteOrder.LITTLE_ENDIAN).putInt((int)checksum.getValue());\n\t\treturn buf;\n\t}\n\n\t@Override\n\tpublic String getType() {\n\n\t\treturn TYPE;\n\t}\n\n\t@Override public DataCodec create() {\n\n\t\treturn this;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/transpose/Transpose.java",
    "content": "package org.janelia.saalfeldlab.n5.codec.transpose;\n\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.util.MemCopy;\n\nclass Transpose<T> {\n\n\t// TODO: detect when 1-sized dimensions are permuted. This should allow to\n\t//  simplify (or completely avoid) copying under certain conditions.\n\n\tpublic static int[] encode(final int[] decodedPos, final int[] order) {\n\t\tfinal int[] encodedPos = new int[decodedPos.length];\n\t\tencode(decodedPos, order, encodedPos);\n\t\treturn encodedPos;\n\t}\n\n\tpublic static int[] decode(final int[] encodedPos, final int[] order) {\n\t\tfinal int[] decodedPos = new int[encodedPos.length];\n\t\tdecode(encodedPos, order, decodedPos);\n\t\treturn decodedPos;\n\t}\n\n\tpublic static void encode(int[] decodedPos, int[] order, int[] encodedPos) {\n\t\tfor (int d = 0; d < order.length; d++)\n\t\t\tencodedPos[d] = decodedPos[order[d]];\n\t}\n\n\tpublic static void decode(int[] encodedPos, int[] order, int[] decodedPos) {\n\t\tfor (int d = 0; d < order.length; d++)\n\t\t\tdecodedPos[order[d]] = encodedPos[d];\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\tpublic static <T> Transpose<T> of(final DataType dataType, final int numDimensions) {\n\t\treturn (Transpose<T>) new Transpose<>(MemCopy.forDataType(dataType), numDimensions);\n\t}\n\n\tprivate final MemCopy<T> memCopy;\n\n\tprivate final int[] ssize;\n\tprivate final int[] tsize;\n\tprivate final int[] ssteps;\n\tprivate final int[] tsteps;\n\tprivate final int[] csteps;\n\n\tTranspose(final MemCopy<T> memCopy, final int n) {\n\t\tthis.memCopy = memCopy;\n\t\tssize = new int[n];\n\t\ttsize = new int[n];\n\t\tssteps = new int[n];\n\t\ttsteps = new int[n];\n\t\tcsteps = new int[n];\n\t}\n\n\tpublic void encode(final T decoded, final T encoded, final int[] decodedSize, final int[] order) {\n\t\tfinal int n = ssize.length;\n\n\t\tfor (int d = 0; d < n; ++d)\n\t\t\tssize[d] = decodedSize[d];\n\n\t\tfor (int d = 0; d < n; ++d)\n\t\t\ttsize[d] 
= decodedSize[order[d]];\n\n\t\tssteps[0] = 1;\n\t\tfor (int d = 0; d < n - 1; ++d)\n\t\t\tssteps[d + 1] = ssteps[d] * ssize[d];\n\n\t\ttsteps[0] = 1;\n\t\tfor (int d = 0; d < n - 1; ++d)\n\t\t\ttsteps[d + 1] = tsteps[d] * tsize[d];\n\n\t\tfor (int d = 0; d < n; ++d)\n\t\t\tcsteps[order[d]] = tsteps[d];\n\n\t\tcopyRecursively(decoded, 0, encoded, 0, n - 1);\n\t}\n\n\tpublic void decode(final T encoded, final T decoded, final int[] decodedSize, final int[] order) {\n\t\tfinal int n = ssize.length;\n\n\t\tfor (int d = 0; d < n; ++d)\n\t\t\tssize[d] = decodedSize[order[d]];\n\n\t\tssteps[0] = 1;\n\t\tfor (int d = 0; d < n - 1; ++d)\n\t\t\tssteps[d + 1] = ssteps[d] * ssize[d];\n\n\t\ttsteps[0] = 1;\n\t\tfor (int d = 0; d < n - 1; ++d)\n\t\t\ttsteps[d + 1] = tsteps[d] * decodedSize[d];\n\n\t\tfor (int d = 0; d < n; ++d)\n\t\t\tcsteps[d] = tsteps[order[d]];\n\n\t\tcopyRecursively(encoded, 0, decoded, 0, n - 1);\n\t}\n\n\tprivate void copyRecursively(final T src, final int srcPos, final T dest, final int destPos, final int d) {\n\t\tif (d == 0) {\n\t\t\tfinal int length = ssize[d];\n\t\t\tfinal int stride = csteps[d];\n\t\t\tmemCopy.copyStrided(src, srcPos, dest, destPos, stride, length);\n\t\t} else {\n\t\t\tfinal int length = ssize[d];\n\t\t\tfinal int srcStride = ssteps[d];\n\t\t\tfinal int destStride = csteps[d];\n\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\tcopyRecursively(src, srcPos + i * srcStride, dest, destPos + i * destStride, d - 1);\n\t\t}\n\t}\n\n\tTranspose<T> newInstance() {\n\t\treturn new Transpose<>(memCopy, ssize.length);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/transpose/TransposeCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.codec.transpose;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.codec.DatasetCodec;\n\npublic class TransposeCodec<T> implements DatasetCodec<T, T> {\n\n\tprivate final DataType dataType;\n\tprivate final int[] order;\n\n\tprivate final Transpose<T> transpose;\n\n\tpublic TransposeCodec(final DataType dataType, final int[] order) {\n\n\t\tthis.order = order;\n\t\tthis.dataType = dataType;\n\t\ttranspose = Transpose.of(dataType, order.length);\n\t}\n\n\t@Override\n\tpublic DataBlock<T> encode(final DataBlock<T> dataBlock) {\n\n\t\tDataBlock<T> encodedBlock = (DataBlock<T>)dataType.createDataBlock(dataBlock.getSize(), dataBlock.getGridPosition(), dataBlock.getNumElements());\n\t\ttranspose.encode(dataBlock.getData(), encodedBlock.getData(), dataBlock.getSize(), order);\n\t\treturn encodedBlock;\n\t}\n\n\t@Override\n\tpublic DataBlock<T> decode(final DataBlock<T> dataBlock) {\n\n\t\tDataBlock<T> decodedBlock = (DataBlock<T>)dataType.createDataBlock(dataBlock.getSize(), dataBlock.getGridPosition(), dataBlock.getNumElements());\n\t\ttranspose.decode((T)dataBlock.getData(), decodedBlock.getData(), dataBlock.getSize(), order);\n\t\treturn decodedBlock;\n\t}\n\n\tpublic static boolean isIdentity(final int[] permutation) {\n\n\t\tfor (int i = 0; i < permutation.length; i++)\n\t\t\tif (permutation[i] != i)\n\t\t\t\treturn false;\n\n\t\treturn true;\n\t}\n\n\tpublic static boolean isReversal(final int[] permutation) {\n\n\t\tfor (int i = 0; i < permutation.length; i++)\n\t\t\tif (permutation[i] != i)\n\t\t\t\treturn false;\n\n\t\treturn true;\n\t}\n\n\tpublic static int[] invertPermutation(final int[] p) {\n\n\t\tfinal int[] inv = new int[p.length];\n\t\tfor (int i = 0; i < p.length; i++)\n\t\t\tinv[p[i]] = i;\n\n\t\treturn inv;\n\t}\n\n\t/**\n\t * Composes two permutations: result[i] = first[second[i]].\n\t *\n\t * @param first\n\t *            
the first permutation\n\t * @param second\n\t *            the second permutation\n\t * @return the composition of first and second\n\t */\n\tpublic static int[] concatenatePermutations(final int[] first, final int[] second) {\n\n\t\tint n = first.length;\n\t\tfinal int[] result = new int[n];\n\t\tfor (int i = 0; i < n; i++) {\n\t\t\tresult[i] = first[second[i]];\n\t\t}\n\t\treturn result;\n\t}\n\n\t/**\n\t * Conjugates a permutation with the reversal permutation: rev * p * rev^-1,\n\t * where rev is the permutation that reverses the elements.\n\t *\n\t * @param p\n\t *            the permutation to conjugate\n\t * @return the conjugated permutation\n\t */\n\tpublic static int[] conjugateWithReverse(final int[] p) {\n\n\t\tfinal int n = p.length;\n\t\tfinal int[] rev = new int[n];\n\t\tfor (int i = 0; i < n; i++)\n\t\t\trev[i] = n - i - 1;\n\n\t\t// note that rev is its own inverse\n\t\tint[] result = concatenatePermutations(rev, p); // result = rev * p\n\t\tresult = concatenatePermutations(result, rev); // result = rev * p *\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t// rev^-1\n\t\treturn result;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/codec/transpose/TransposeCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.codec.transpose;\n\nimport java.util.Arrays;\nimport java.util.stream.IntStream;\n\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.codec.DatasetCodecInfo;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\n\n/**\n * Describes a permutation of the dimensions of a block.\n * <p>\n * The {@code order} parameter parameterizes the permutation.\n * The ith element of the order array gives the destination index of the ith element of the input.\n * Example:\n * \t\torder  = [1, 2, 0]\n * \t\tinput  = [7, 8, 9] \t\t// interpret as a block size\n * \t\tresult = [9, 7, 8] \t\t// permuted block size\n *\n * <p>\n * See the specification of <a href=\"https://zarr-specs.readthedocs.io/en/latest/v3/codecs/transpose/index.html#transpose-codec\">Zarr's Transpose codec</a>.\n */\n@NameConfig.Name(value = TransposeCodecInfo.TYPE)\npublic class TransposeCodecInfo implements DatasetCodecInfo {\n\n\tpublic static final String TYPE = \"n5-transpose\";\n\n\t@NameConfig.Parameter\n\tprivate int[] order;\n\n\tpublic TransposeCodecInfo() {\n\t\t// for serialization\n\t}\n\n\tpublic TransposeCodecInfo(int[] order) {\n\n\t\tthis.order = order;\n\t}\n\n\t@Override\n\tpublic String getType() {\n\n\t\treturn TYPE;\n\t}\n\n\tpublic int[] getOrder() {\n\n\t\treturn order;\n\t}\n\n\t@Override\n\tpublic TransposeCodec<?> create(DatasetAttributes datasetAttributes) {\n\n\t\tvalidate();\n\t\treturn new TransposeCodec<>(datasetAttributes.getDataType(), getOrder());\n\t}\n\n\t@Override\n\tpublic boolean equals(Object obj) {\n\n\t\tif (obj instanceof TransposeCodecInfo)\n\t\t\treturn Arrays.equals(order, ((TransposeCodecInfo)obj).getOrder());\n\n\t\treturn false;\n\t}\n\n\tprivate void validate() {\n\n\t\tfinal boolean[] indexFound = new boolean[order.length];\n\t\tfor( int i : order )\n\t\t\tindexFound[i] = true;\n\n\t\tfinal int[] missingIndexes = 
IntStream.range(0, order.length).filter(i -> !indexFound[i]).toArray();\n\t\tif( missingIndexes.length > 0 )\n\t\t\tthrow new N5Exception(\"Invalid order for TransposeCodec. Missing indexes: \" + Arrays.toString(missingIndexes));\n\n\t}\n\n\tpublic static TransposeCodecInfo concatenate(TransposeCodecInfo[] infos) {\n\n\t\tif( infos == null || infos.length == 0)\n\t\t\treturn null;\n\t\telse if (infos.length == 1)\n\t\t\treturn infos[0];\n\n\t\t// copy the initial order so we don't modify the original\n\t\tint[] order = new int[infos[0].order.length];\n\t\tSystem.arraycopy(infos[0].order, 0, order, 0, order.length);\n\n\t\tfor( int i = 1; i < infos.length; i++ )\n\t\t\torder = TransposeCodec.concatenatePermutations(order, infos[i].order);\n\n\t\treturn new TransposeCodecInfo(order);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/http/ApacheListResponseParser.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport java.util.regex.Pattern;\n\n/**\n * {@link ListResponseParser}s for <a href=\"https://httpd.apache.org/\">Apache HTTP Servers</a>.\n */\nabstract class ApacheListResponseParser extends PatternListResponseParser {\n\n\tprivate static final Pattern LIST_ENTRY = Pattern.compile(\"alt=\\\"\\\\[(\\\\s*|DIR)\\\\]\\\".*href=[^>]+>(?<entry>[^<]+)\");\n\n\tprivate static final Pattern LIST_DIR_ENTRY = Pattern.compile(\"alt=\\\"\\\\[DIR\\\\]\\\".*href=[^>]+>(?<entry>[^<]+)\");\n\n\tApacheListResponseParser(Pattern pattern) {\n\n\t\tsuper(pattern);\n\t}\n\n\tstatic class ListDirectories extends ApacheListResponseParser {\n\n\t\tpublic ListDirectories() {\n\n\t\t\tsuper(LIST_DIR_ENTRY);\n\t\t}\n\t}\n\n\tstatic class ListAll extends ApacheListResponseParser {\n\n\t\tpublic ListAll() {\n\n\t\t\tsuper(LIST_ENTRY);\n\t\t}\n\t}\n\n\tpublic static ListResponseParser directoryParser() {\n\n\t\treturn new ListDirectories();\n\t}\n\n\tpublic static ListResponseParser parser() {\n\n\t\treturn new ListAll();\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/http/CandidateListResponseParser.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nclass CandidateListResponseParser implements ListResponseParser {\n\n\tprivate ListResponseParser[] candidateParsers;\n\n\tprivate ListResponseParser successfulParser;\n\n\tCandidateListResponseParser(ListResponseParser[] candidateParsers) {\n\n\t\tthis.candidateParsers = candidateParsers;\n\t}\n\n\t@Override\n\tpublic String[] parseListResponse(String response) {\n\n\t\tif (successfulParser != null)\n\t\t\treturn successfulParser.parseListResponse(response);\n\n\t\tString[] result = new String[0];\n\t\tfor (ListResponseParser parser : candidateParsers) {\n\n\t\t\tresult = parser.parseListResponse(response);\n\t\t\tif (result.length > 1) {\n\t\t\t\tsuccessfulParser = parser;\n\t\t\t\treturn result;\n\t\t\t}\n\t\t}\n\n\t\treturn result;\n\t}\n\n\tpublic ListResponseParser successfulParser() {\n\n\t\treturn successfulParser;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/http/ListResponseParser.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\npublic interface ListResponseParser {\n\n\t/**\n\t * Parse a String response for a list call, and return the results as a\n\t * String array.\n\t *\n\t * @param response\n\t *            an (http) response\n\t * @return the list elements it contains\n\t */\n\tString[] parseListResponse(final String response);\n\n\tpublic static ListResponseParser defaultListParser() {\n\n\t\treturn new CandidateListResponseParser(\n\t\t\t\tnew ListResponseParser[]{\n\t\t\t\t\t\tMicrosoftListResponseParser.parser(),\n\t\t\t\t\t\tApacheListResponseParser.parser(),\n\t\t\t\t\t\tPythonListResponseParser.parser()\n\t\t\t\t});\n\t}\n\n\tpublic static ListResponseParser defaultDirectoryListParser() {\n\n\t\treturn new CandidateListResponseParser(\n\t\t\t\tnew ListResponseParser[]{\n\t\t\t\t\t\tMicrosoftListResponseParser.directoryParser(),\n\t\t\t\t\t\tApacheListResponseParser.directoryParser(),\n\t\t\t\t\t\tPythonListResponseParser.directoryParser()\n\t\t\t\t});\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/http/MicrosoftListResponseParser.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport java.util.regex.Pattern;\n\n/**\n * {@link ListResponseParser}s for <a href=\"https://www.iis.net/\">Microsoft-IIS Servers</a>.\n */\nabstract class MicrosoftListResponseParser extends PatternListResponseParser {\n\n\tprivate static final Pattern LIST_ENTRY = Pattern.compile( \"HREF=[^>]+>(?!\\\\[To Parent Directory])(?<entry>[^<]+)\");\n\n\tprivate static final Pattern LIST_DIR_ENTRY = Pattern.compile( \"&lt;dir&gt;[^H]+HREF=[^>]+>(?!\\\\[To Parent Directory])(?<entry>[^<]+)\");\n\n\tMicrosoftListResponseParser(Pattern pattern) {\n\n\t\tsuper(pattern);\n\t}\n\n\tstatic class ListDirectories extends MicrosoftListResponseParser {\n\n\t\tpublic ListDirectories() {\n\n\t\t\tsuper(LIST_DIR_ENTRY);\n\t\t}\n\t}\n\n\tstatic class ListAll extends MicrosoftListResponseParser {\n\n\t\tpublic ListAll() {\n\n\t\t\tsuper(LIST_ENTRY);\n\t\t}\n\t}\n\n\tpublic static ListResponseParser directoryParser() {\n\n\t\treturn new ListDirectories();\n\t}\n\n\tpublic static ListResponseParser parser() {\n\n\t\treturn new ListAll();\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/http/PatternListResponseParser.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\n/**\n * A {@link ListResponseParser} that uses a {@link Pattern}.\n *\n * This implementation of parseListResponse returns all the groups named \"entry\"\n * for all matches of the given pattern.\n */\nclass PatternListResponseParser implements ListResponseParser {\n\n\tprotected Pattern pattern;\n\n\tpublic PatternListResponseParser(Pattern pattern) {\n\n\t\tthis.pattern = pattern;\n\t}\n\n\t@Override\n\tpublic String[] parseListResponse(final String response) {\n\n\t\tfinal Matcher matcher = pattern.matcher(response);\n\t\tfinal List<String> matches = new ArrayList<>();\n\t\twhile (matcher.find()) {\n\t\t\tmatches.add(matcher.group(\"entry\"));\n\t\t}\n\t\treturn matches.toArray(new String[0]);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/http/PythonListResponseParser.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport java.util.regex.Pattern;\n\n/**\n * {@link ListResponseParser}s for <a href=\"https://docs.python.org/3/library/http.server.html\">Python's SimpleHTTP Server</a>.\n */\nabstract class PythonListResponseParser extends PatternListResponseParser {\n\n\tprivate static final Pattern LIST_ENTRY = Pattern.compile(\"href=\\\"[^\\\"]+\\\">(?<entry>[^<]+)\");\n\n\tprivate static final Pattern LIST_DIR_ENTRY = Pattern.compile(\"href=\\\"[^\\\"]+\\\">(?<entry>[^<]+)/\");\n\n\tPythonListResponseParser(Pattern pattern) {\n\n\t\tsuper(pattern);\n\t}\n\n\tstatic class ListDirectories extends PythonListResponseParser {\n\n\t\tpublic ListDirectories() {\n\n\t\t\tsuper(LIST_DIR_ENTRY);\n\t\t}\n\t}\n\n\tstatic class ListAll extends PythonListResponseParser {\n\n\t\tpublic ListAll() {\n\n\t\t\tsuper(LIST_ENTRY);\n\t\t}\n\t}\n\n\tpublic static ListResponseParser directoryParser() {\n\n\t\treturn new ListDirectories();\n\t}\n\n\tpublic static ListResponseParser parser() {\n\n\t\treturn new ListAll();\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/ByteArrayReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.InputStream;\nimport java.util.Arrays;\n\nclass ByteArrayReadData implements ReadData {\n\n\tstatic final ReadData EMPTY = new ByteArrayReadData(new byte[0]);\n\n\tprivate final byte[] data;\n\tprivate final int offset;\n\tprivate final int length;\n\n\tByteArrayReadData(final byte[] data) {\n\n\t\tthis(data, 0, data.length);\n\t}\n\n\tByteArrayReadData(final byte[] data, final int offset, final int length) {\n\n\t\tif (!validBounds(data.length, offset, length))\n\t\t\tthrow new IndexOutOfBoundsException();\n\n\t\tthis.data = data;\n\t\tthis.offset = offset;\n\n\t\tif( length < 0 )\n\t\t\tthis.length = data.length - offset;\n\t\telse\n\t\t\tthis.length = length;\n\n\t}\n\n\t@Override\n\tpublic long length() {\n\t\treturn length;\n\t}\n\n\t@Override\n\tpublic long requireLength() {\n\t\treturn length;\n\t}\n\n\t@Override\n\tpublic InputStream inputStream() {\n\n\t\treturn new ByteArrayInputStream(data, offset, length);\n\t}\n\n\t@Override\n\tpublic byte[] allBytes() {\n\n\t\tif (offset == 0 && data.length == length) {\n\t\t\treturn data;\n\t\t} else {\n\t\t\treturn Arrays.copyOfRange(data, offset, offset + length);\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData materialize() {\n\t\treturn this;\n\t}\n\n\t@Override\n\tpublic ReadData slice(final long offset, final long length) {\n\n\t\tfinal int o = this.offset + (int)offset;\n\t\treturn new ByteArrayReadData(data, o, (int)length);\n\t}\n\n\tprivate static boolean validBounds(int arrayLength, int offset, int length) {\n\n\t\tif (offset < 0)\n\t\t\treturn false;\n\t\telse if (arrayLength > 0 && offset >= arrayLength) // offset == 0 and arrayLength == 0 is okay\n\t\t\treturn false;\n\t\telse if (length >= 0 && offset + length > arrayLength)\n\t\t\treturn false;\n\n\t\treturn true;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/InputStreamReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.io.DataInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport org.apache.commons.io.IOUtils;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\n// not thread-safe\nclass InputStreamReadData implements ReadData {\n\n\tprivate final InputStream inputStream;\n\tprivate final int length;\n\n\tInputStreamReadData(final InputStream inputStream, final int length) {\n\t\tthis.inputStream = inputStream;\n\t\tthis.length = length;\n\t}\n\n\t@Override\n\tpublic long length() {\n\t\treturn (bytes != null) ? bytes.length() : length;\n\t}\n\n\t@Override\n\tpublic long requireLength() throws N5IOException {\n\t\tlong l = length();\n\t\tif ( l >= 0 ) {\n\t\t\treturn l;\n\t\t} else {\n\t\t\treturn materialize().length();\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData slice(final long offset, final long length) throws N5IOException {\n\t\tmaterialize();\n\t\treturn bytes.slice(offset, length);\n\t}\n\n\t@Override\n\tpublic ReadData limit(final long length) {\n\t\treturn new InputStreamReadData(inputStream, (int)length);\n\t}\n\n\tprivate boolean inputStreamCalled = false;\n\n\t@Override\n\tpublic InputStream inputStream() throws IllegalStateException {\n\t\tif (bytes != null) {\n\t\t\treturn bytes.inputStream();\n\t\t} else if (!inputStreamCalled) {\n\t\t\tinputStreamCalled = true;\n\t\t\treturn inputStream;\n\t\t} else {\n\t\t\tthrow new IllegalStateException(\"InputStream() already called\");\n\t\t}\n\t}\n\n\t@Override\n\tpublic byte[] allBytes() throws N5IOException, IllegalStateException {\n\t\tmaterialize();\n\t\treturn bytes.allBytes();\n\t}\n\n\tprivate ByteArrayReadData bytes;\n\n\t@Override\n\tpublic ReadData materialize() throws N5IOException {\n\t\tif (bytes == null) {\n\t\t\tfinal byte[] data;\n\t\t\tif (length >= 0) {\n\t\t\t\tdata = new byte[length];\n\t\t\t\ttry (InputStream is = inputStream()) {\n\t\t\t\t\tnew 
DataInputStream(is).readFully(data);\n\t\t\t\t} catch (IOException e) {\n\t\t\t\t\tthrow new N5IOException(e);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\ttry (InputStream is = inputStream()) {\n\t\t\t\t\tdata = IOUtils.toByteArray(is);\n\t\t\t\t} catch (IOException e) {\n\t\t\t\t\tthrow new N5IOException(e);\n\t\t\t\t}\n\t\t\t}\n\t\t\tbytes = new ByteArrayReadData(data);\n\t\t}\n\t\treturn this;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/LazyGeneratedReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport org.apache.commons.io.output.CountingOutputStream;\nimport org.apache.commons.io.output.ProxyOutputStream;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\nclass LazyGeneratedReadData implements ReadData {\n\n\tLazyGeneratedReadData(final ReadData.Generator generator) {\n\t\tthis.generator = generator;\n\t}\n\n\t/**\n\t * Construct a {@code LazyReadData} that uses the given {@code OutputStreamOperator} to\n\t * encode the given {@code ReadData}.\n\t *\n\t * @param data\n\t * \t\tthe ReadData to encode\n\t * @param encoder\n\t * \t\tOutputStreamOperator to use for encoding\n\t */\n\tLazyGeneratedReadData(final ReadData data, final OutputStreamOperator encoder) {\n\t\tthis(outputStream -> {\n\t\t\ttry (final OutputStream deflater = encoder.apply(interceptClose.apply(outputStream))) {\n\t\t\t\tdata.writeTo(deflater);\n\t\t\t}\n\t\t});\n\t}\n\n\tprivate final ReadData.Generator generator;\n\n\tprivate ByteArrayReadData bytes;\n\n\tprivate long length = -1;\n\n\t@Override\n\tpublic ReadData materialize() throws N5IOException {\n\t\tif (bytes == null) {\n\t\t\ttry {\n\t\t\t\tfinal ByteArrayOutputStream baos = new ByteArrayOutputStream(8192);\n\t\t\t\tgenerator.writeTo(baos);\n\t\t\t\tbytes = new ByteArrayReadData(baos.toByteArray());\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new N5IOException(e);\n\t\t\t}\n\t\t}\n\t\treturn this;\n\t}\n\n\t@Override\n\tpublic long length() {\n\t\treturn (bytes != null) ? 
bytes.length() : length;\n\t}\n\n\t@Override\n\tpublic long requireLength() throws N5IOException {\n\t\tlong l = length();\n\t\tif ( l >= 0 ) {\n\t\t\treturn l;\n\t\t} else {\n\t\t\treturn materialize().length();\n\t\t}\n\t}\n\n\t@Override\n\tpublic ReadData slice(final long offset, final long length) throws N5IOException {\n\t\tmaterialize();\n\t\treturn bytes.slice(offset, length);\n\t}\n\n\t@Override\n\tpublic InputStream inputStream() throws N5IOException, IllegalStateException {\n\t\tmaterialize();\n\t\treturn bytes.inputStream();\n\t}\n\n\t@Override\n\tpublic byte[] allBytes() throws N5IOException, IllegalStateException {\n\t\tmaterialize();\n\t\treturn bytes.allBytes();\n\t}\n\n\t@Override\n\tpublic void writeTo(final OutputStream outputStream) throws N5IOException, IllegalStateException {\n\t\ttry {\n\t\t\tif (bytes != null) {\n\t\t\t\toutputStream.write(bytes.allBytes());\n\t\t\t} else {\n\t\t\t\tfinal CountingOutputStream cos = new CountingOutputStream(outputStream);\n\t\t\t\tgenerator.writeTo(cos);\n\t\t\t\tlength = cos.getByteCount();\n\t\t\t}\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t/**\n\t * {@code UnaryOperator} that wraps {@code OutputStream} to intercept {@code\n\t * close()} and call {@code flush()} instead\n\t */\n\tprivate static final OutputStreamOperator interceptClose = o -> new ProxyOutputStream(o) {\n\n\t\t@Override\n\t\tpublic void close() throws IOException {\n\t\t\tout.flush();\n\t\t}\n\t};\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/LazyRead.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.io.Closeable;\nimport java.util.Collection;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\n/**\n * A lazy reading strategy for lazy, partial reading of data from some source.\n * <p>\n * Implementations of this interface handle the specifics of accessing data from\n * their respective sources.\n *\n * @see LazyReadData\n */\npublic interface LazyRead extends Closeable {\n\n\t/**\n\t * Materializes a portion of the data into a concrete {@link ReadData}\n\t * instance.\n\t * <p>\n\t * This method performs the actual read operation from the underlying\n\t * source, loading only the requested portion of data. The implementation\n\t * should handle bounds checking and throw appropriate exceptions for\n\t * invalid ranges.\n\t *\n\t * @param offset\n\t * \t\tthe starting position in the data source\n\t * @param length\n\t * \t\tthe number of bytes to read, or -1 to read from offset to end\n\t *\n\t * @return a materialized {@link ReadData} instance containing the requested\n\t * data\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\tReadData materialize(long offset, long length) throws N5IOException;\n\n\t/**\n\t * Returns the total size of the data source in bytes.\n\t *\n\t * @return the size of the data source in bytes\n\t *\n\t * @throws N5IOException\n\t * \t\tif an I/O error occurs while trying to get the length\n\t */\n\tlong size() throws N5IOException;\n\n\t/**\n\t * Indicates that the given slices will be subsequently read.\n\t * {@code LazyRead} implementations (optionally) may take steps to prepare\n\t * for these subsequent slices.\n\t *\n\t * @param ranges\n\t * \t\tslice ranges to prefetch\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\tdefault void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/LazyReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.Collection;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\n\n/**\n * A {@link VolatileReadData} that is backed by a {@link LazyRead}.\n * <p>\n * The {@code LazyRead} is closed when this {@code LazyReadData} is closed.\n */\nclass LazyReadData implements VolatileReadData {\n\n    private final LazyRead lazyRead;\n    private ReadData materialized;\n    private final long offset;\n    private long length;\n\n\tLazyReadData(final LazyRead lazyRead) {\n\t\tthis(lazyRead, 0, -1);\n\t}\n\n    private LazyReadData(final LazyRead lazyRead, final long offset, final long length) {\n        this.lazyRead = lazyRead;\n        this.offset = offset;\n        this.length = length;\n    }\n\n    @Override\n\tpublic ReadData materialize() throws N5IOException {\n\t\tif (materialized == null) {\n\t\t\tmaterialized = lazyRead.materialize(offset, length);\n\t\t\tlength = materialized.length();\n\t\t}\n\t\treturn this;\n\t}\n\n\t/**\n\t * Returns a {@link ReadData} whose length is limited to the given value.\n\t * <p>\n\t * This implementation defers a material read operation if allowed\n\t * by the {@link LazyRead}.\n\t *\n\t * @param length\n\t *            the length of the resulting ReadData\n\t * @return a length-limited ReadData\n\t * @throws N5IOException\n\t *             if an I/O error occurs while trying to get the length\n\t */\n    @Override\n    public ReadData slice(final long offset, final long length) {\n        if (offset < 0)\n            throw new IndexOutOfBoundsException(\"Negative offset: \" + offset);\n\n        if (materialized != null)\n            return materialized.slice(offset, length);\n\n        // if a slice of indeterminate length is requested, but the\n        // length is already known, use the known length;\n        final long lengthArg;\n        if (this.length > 0 && length < 0)\n            
lengthArg = this.length - offset;\n        else\n            lengthArg = length;\n\n        return new LazyReadData(lazyRead, this.offset + offset, lengthArg);\n    }\n\n    @Override\n    public InputStream inputStream() throws N5IOException, IllegalStateException {\n        materialize();\n\t\treturn materialized.inputStream();\n    }\n\n    @Override\n    public byte[] allBytes() throws N5IOException, IllegalStateException {\n\t\tmaterialize();\n\t\treturn materialized.allBytes();\n    }\n\n    @Override\n    public long length() {\n        return length;\n    }\n\n\t@Override\n\tpublic long requireLength() throws N5IOException {\n\t\tif (length < 0) {\n\t\t\tlength = lazyRead.size() - offset;\n\t\t}\n\t\treturn length;\n\t}\n\n\t@Override\n\tpublic void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\t\tlazyRead.prefetch(ranges);\n\t}\n\n\t@Override\n\tpublic void close() throws N5IOException {\n\t\ttry {\n\t\t\tlazyRead.close();\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/Range.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.Comparator;\n\n/**\n * A range specified as a {@link #offset}, {@link #length} pair.\n */\npublic interface Range {\n\n\t/**\n\t * Order {@code Range}s by {@link #offset}.\n\t * Ranges with the same offset are ordered by {@link #length}.\n\t */\n\tComparator<Range> COMPARATOR = Comparator\n\t\t\t.comparingLong(Range::offset)\n\t\t\t.thenComparingLong(Range::length);\n\n\t/**\n\t * @return start index (inclusive)\n\t */\n\tlong offset();\n\n\t/**\n\t * @return number of elements\n\t */\n\tlong length();\n\n\t/**\n\t * @return end index (exclusive)\n\t */\n\tdefault long end() {\n\t\treturn offset() + length();\n\t}\n\n\tstatic boolean equals(final Range r0, final Range r1) {\n\t\tif (r0 == null && r1 == null) {\n\t\t\treturn true;\n\t\t} else if (r0 == null || r1 == null) {\n\t\t\treturn false;\n\t\t} else {\n\t\t\treturn r0.offset() == r1.offset() && r0.length() == r1.length();\n\t\t}\n\t}\n\n\tstatic Range at(final long offset, final long length) {\n\n\t\tclass DefaultRange implements Range {\n\n\t\t\tprivate final long offset;\n\t\t\tprivate final long length;\n\n\t\t\tpublic DefaultRange(final long offset, final long length) {\n\t\t\t\tthis.offset = offset;\n\t\t\t\tthis.length = length;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic long offset() {\n\t\t\t\treturn offset;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic long length() {\n\t\t\t\treturn length;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic final boolean equals(final Object o) {\n\t\t\t\tif (!(o instanceof DefaultRange))\n\t\t\t\t\treturn false;\n\n\t\t\t\tfinal DefaultRange that = (DefaultRange) o;\n\t\t\t\treturn offset == that.offset && length == that.length;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic int hashCode() {\n\t\t\t\tint result = Long.hashCode(offset);\n\t\t\t\tresult = 31 * result + Long.hashCode(length);\n\t\t\t\treturn 
result;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic String toString() {\n\t\t\t\treturn \"Range{\" +\n\t\t\t\t\t\t\"offset=\" + offset +\n\t\t\t\t\t\t\", length=\" + length +\n\t\t\t\t\t\t'}';\n\t\t\t}\n\t\t}\n\n\t\treturn new DefaultRange(offset, length);\n\t}\n\n\t/**\n\t * Returns a potentially new collection of {@link Range}s in which\n\t * adjacent or overlapping Ranges are combined.\n\t * <p>\n\t * If no ranges are combined, the input instance is returned; if aggregation\n\t * occurs, the result is a new, sorted List.\n\t *\n\t * @param ranges\n\t * \t\ta collection of Ranges\n\t *\n\t * @return collection with adjacent or overlapping ranges merged\n\t */\n\tstatic Collection<? extends Range> aggregate(final Collection<? extends Range> ranges) {\n\n\t\tif (ranges.size() == 0)\n\t\t\treturn ranges;\n\n\t\tfinal ArrayList<Range> sortedRanges = new ArrayList<>(ranges);\n\t\tsortedRanges.sort(Range.COMPARATOR);\n\n\t\tfinal ArrayList<Range> result = new ArrayList<>();\n\t\tRange lo = null;\n\t\tfor (Range hi : sortedRanges) {\n\n\t\t\tif (lo == null)\n\t\t\t\tlo = hi;\n\t\t\telse if (lo.end() >= hi.offset()) {\n\t\t\t\t// merge\n\t\t\t\tlo = Range.at(lo.offset(), Math.max(lo.end(), hi.end()) - lo.offset());\n\t\t\t} else {\n\t\t\t\tresult.add(lo);\n\t\t\t\tlo = hi;\n\t\t\t}\n\t\t}\n\t\tresult.add(lo);\n\n\t\tfinal boolean wereMerges = ranges.size() != result.size();\n\t\treturn wereMerges ? result : ranges;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/ReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.util.Collection;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.codec.DataCodec;\n\n/**\n * An abstraction over {@code byte[]} data.\n * <p>\n * The data may come from a {@code byte[]} array, a {@code ByteBuffer}, an\n * {@code InputStream}, a {@code KeyValueAccess}.\n * <p>\n * {@code ReadData} instances can be created via one of the static {@link #from}\n * methods. For example, use {@link #from(InputStream, int)} to wrap an {@code\n * InputStream}.\n * <p>\n * {@code ReadData} may be lazy-loaded. For example, for {@code InputStream} and\n * {@code KeyValueAccess} sources, loading is deferred until the data is\n * accessed (e.g., {@link #allBytes()}, {@link #writeTo(OutputStream)}), or\n * explicitly {@link #materialize() materialized}.\n * <p>\n * {@code ReadData} can be {@link DataCodec#encode encoded} and {@link\n * DataCodec#decode decoded} by a {@link DataCodec}, which will also be lazy if\n * possible.\n */\npublic interface ReadData {\n\n\t/**\n\t * Returns number of bytes in this {@link ReadData}, if known. Otherwise\n\t * {@code -1}.\n\t *\n\t * @return number of bytes, if known, or -1\n\t */\n\tdefault long length() {\n\t\treturn -1;\n\t}\n\n\t/**\n\t * Returns number of bytes in this {@link ReadData}. 
If the length is not\n\t * currently known, this method may retrieve the length using I/O operations,\n\t * {@link #materialize} this {@code ReadData}, or perform any other steps\n\t * necessary to obtain the length.\n\t *\n\t * @return number of bytes\n\t *\n\t * @throws N5IOException\n\t * \t\tif an I/O error occurs while trying to get the length\n\t */\n\tlong requireLength() throws N5IOException;\n\n\t/**\n\t * Returns a {@link ReadData} whose length is limited to the given value.\n\t *\n\t * @param length\n\t *            the length of the resulting ReadData\n\t * @return a length-limited ReadData\n\t * @throws N5IOException\n\t *             if an I/O error occurs while trying to get the length\n\t */\n\tdefault ReadData limit(final long length) throws N5IOException {\n\t\treturn slice(0, length);\n\t}\n\n\t/**\n\t * Returns a new {@link ReadData} representing a slice, or subset\n\t * of this ReadData.\n\t *\n\t * @param offset the offset relative to this ReadData\n\t * @param length length of the returned ReadData\n\t * @return a slice\n\t * @throws N5IOException if an I/O error occurs\n\t */\n\tdefault ReadData slice(final long offset, final long length) throws N5IOException {\n\t\treturn materialize().slice(offset, length);\n\t}\n\n\t/**\n\t * Returns a new {@link ReadData} representing a slice, or subset\n\t * of this ReadData.\n\t *\n\t * @param range a range in this ReadData\n\t * @return a slice\n\t * @throws N5IOException if an I/O error occurs\n\t */\n\tdefault ReadData slice(final Range range) throws N5IOException {\n\t\treturn slice(range.offset(), range.length());\n\t}\n\n\t/**\n\t * Open an {@code InputStream} on this data.\n\t * <p>\n\t * Repeatedly calling this method may or may not work, depending on how\n\t * the underlying data is stored. For example, if the underlying data is\n\t * stored as a {@code byte[]} array, multiple streams can be opened. 
If\n\t * the underlying data is just an {@code InputStream} then this will be\n\t * returned on the first call.\n\t *\n\t * @return an InputStream on this data\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t * @throws IllegalStateException\n\t * \t\tif this method was already called once and cannot be called again.\n\t */\n\tInputStream inputStream() throws N5IOException, IllegalStateException;\n\n\t/**\n\t * Return the contained data as a {@code byte[]} array.\n\t * <p>\n\t * This may use {@link #inputStream()} to read the data.\n\t * Because repeatedly calling {@link #inputStream()} may not work,\n\t * <ol>\n\t * <li>this method may fail with {@code IllegalStateException} if {@code inputStream()} was already called</li>\n\t * <li>subsequent {@code inputStream()} calls may fail with {@code IllegalStateException}</li>\n\t * </ol>\n\t *\n\t * @return all contained data as a byte[] array\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t * @throws IllegalStateException\n\t * \t\tif {@link #inputStream()} was already called once and cannot be called again.\n\t */\n\tbyte[] allBytes() throws N5IOException, IllegalStateException;\n\n\t/**\n\t * Return the contained data as a {@code ByteBuffer}.\n\t * <p>\n\t * This may use {@link #inputStream()} to read the data.\n\t * Because repeatedly calling {@link #inputStream()} may not work,\n\t * <ol>\n\t * <li>this method may fail with {@code IllegalStateException} if {@code inputStream()} was already called</li>\n\t * <li>subsequent {@code inputStream()} calls may fail with {@code IllegalStateException}</li>\n\t * </ol>\n\t * The byte order of the returned {@code ByteBuffer} is {@code BIG_ENDIAN}.\n\t *\n\t * @return all contained data as a ByteBuffer\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t * @throws IllegalStateException\n\t * \t\tif {@link #inputStream()} was already called once and cannot be called again.\n\t */\n\tdefault ByteBuffer toByteBuffer() 
throws N5IOException, IllegalStateException {\n\t\treturn ByteBuffer.wrap(allBytes());\n\t}\n\n\t/**\n\t * Read the underlying data into a {@code byte[]} array, and return it as a {@code ReadData}.\n\t * (If this {@code ReadData} is already in a {@code byte[]} array or {@code\n\t * ByteBuffer}, just return {@code this}.)\n\t * <p>\n\t * The returned {@code ReadData} has a known {@link #length} and multiple\n\t * {@link #inputStream InputStreams} can be opened on it.\n\t * <p>\n\t * <em>Implementation note: This should preferably be implemented to return\n\t * {@code this}: for example, materialize into an internal {@code byte[]} or\n\t * similar, and then delegate to that materialized data.</em>\n\t *\n\t * @return\n\t * \t\ta materialized ReadData.\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\tReadData materialize() throws N5IOException;\n\n\t/**\n\t * Write the contained data into an {@code OutputStream}.\n\t * <p>\n\t * This may use {@link #inputStream()} to read the data.\n\t * Because repeatedly calling {@link #inputStream()} may not work,\n\t * <ol>\n\t * <li>this method may fail with {@code IllegalStateException} if {@code inputStream()} was already called</li>\n\t * <li>subsequent {@code inputStream()} calls may fail with {@code IllegalStateException}</li>\n\t * </ol>\n\t *\n\t * @param outputStream\n\t * \t\tdestination to write to\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t * @throws IllegalStateException\n\t * \t\tif {@link #inputStream()} was already called once and cannot be called again.\n\t */\n\tdefault void writeTo(OutputStream outputStream) throws N5IOException, IllegalStateException {\n\t\ttry {\n\t\t\toutputStream.write(allBytes());\n\t\t} catch (IOException e) {\n\t\t\tthrow new N5IOException(e);\n\t\t}\n\t}\n\n\t/**\n\t * Indicates that the given slices will be subsequently read.\n\t * {@code ReadData} implementations (optionally) may take steps to 
prepare\n\t * for these subsequent slices.\n\t *\n\t * @param ranges\n\t * \t\tslice ranges to prefetch\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\tdefault void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\t}\n\n\t// ------------- Encoding / Decoding ----------------\n\t//\n\n\t/**\n\t * Returns a new ReadData that uses the given {@code OutputStreamOperator} to\n\t * encode this ReadData.\n\t *\n\t * @param encoder\n\t * \t\tOutputStreamOperator to use for encoding\n\t *\n\t * @return encoded ReadData\n\t */\n\tdefault ReadData encode(OutputStreamOperator encoder) {\n\t\treturn new LazyGeneratedReadData(this, encoder);\n\t}\n\n\t/**\n\t * {@code OutputStreamOperator} is {@link #apply applied} to an {@code\n\t * OutputStream} to transform it into another (e.g., compressed) {@code\n\t * OutputStream}.\n\t * <p>\n\t * This is basically {@code UnaryOperator<OutputStream>}, but {@link #apply}\n\t * throws {@code IOException}.\n\t */\n\t@FunctionalInterface\n\tinterface OutputStreamOperator {\n\n\t\tOutputStream apply(OutputStream o) throws IOException;\n\n\t}\n\n\t// --------------- Factory Methods ------------------\n\t//\n\n\t/**\n\t * Create a new {@code ReadData} that loads lazily from {@code inputStream}\n\t * and {@link #length() reports} the given {@code length}.\n\t * <p>\n\t * No effort is made to ensure that the {@code inputStream} in fact contains\n\t * exactly {@code length} bytes.\n\t *\n\t * @param inputStream\n\t * \t\tInputStream to read from\n\t * @param length\n\t * \t\treported length of the ReadData\n\t *\n\t * @return a new ReadData\n\t */\n\tstatic ReadData from(final InputStream inputStream, final int length) {\n\t\treturn new InputStreamReadData(inputStream, length);\n\t}\n\n\t/**\n\t * Create a new {@code ReadData} that loads lazily from {@code inputStream}\n\t * and reports {@link #length() length() == -1} (i.e., unknown length).\n\t *\n\t * @param inputStream\n\t * \t\tInputStream 
to read from\n\t *\n\t * @return a new ReadData\n\t */\n\tstatic ReadData from(final InputStream inputStream) {\n\t\treturn from(inputStream, -1);\n\t}\n\n\t/**\n\t * Create a new {@code ReadData} that wraps the specified portion of a\n\t * {@code byte[]} array.\n\t *\n\t * @param data\n\t * \t\tarray containing the data\n\t * @param offset\n\t * \t\tstart offset of the ReadData in the data array\n\t * @param length\n\t * \t\tlength of the ReadData (in bytes)\n\t *\n\t * @return a new ReadData\n\t */\n\tstatic ReadData from(final byte[] data, final int offset, final int length) {\n\t\treturn new ByteArrayReadData(data, offset, length);\n\t}\n\n\t/**\n\t * Create a new {@code ReadData} that wraps the given {@code byte[]} array.\n\t *\n\t * @param data\n\t * \t\tarray containing the data\n\t *\n\t * @return a new ReadData\n\t */\n\tstatic ReadData from(final byte[] data) {\n\t\treturn from(data, 0, data.length);\n\t}\n\n\t/**\n\t * Create a new {@code ReadData} that wraps the given {@code ByteBuffer}.\n\t *\n\t * @param data\n\t * \t\tbuffer containing the data\n\t *\n\t * @return a new ReadData\n\t */\n\tstatic ReadData from(final ByteBuffer data) {\n\t\tif (data.hasArray()) {\n\t\t\treturn from(data.array(), 0, data.limit());\n\t\t} else {\n\t\t\tthrow new UnsupportedOperationException(\"TODO. 
Direct ByteBuffer not supported yet.\");\n\t\t}\n\t}\n\n\t/**\n\t * Generator supplies the content of a ReadData by writing it to an {@link\n\t * OutputStream} on demand.\n\t * <p>\n\t * An example of a {@code Generator} is applying a compressor (an {@link\n\t * OutputStreamOperator}) to an existing ReadData.\n\t */\n\t@FunctionalInterface\n\tinterface Generator {\n\t\tvoid writeTo(OutputStream outputStream) throws IOException, IllegalStateException;\n\t}\n\n\t/**\n\t * Create a new {@code ReadData} that is lazily generated by the given {@link Generator}.\n\t *\n\t * @param generator\n\t * \t\tgenerates the data\n\t *\n\t * @return a new ReadData\n\t */\n\tstatic ReadData from(Generator generator) {\n\t\treturn new LazyGeneratedReadData(generator);\n\t}\n\n\t/**\n\t * Returns an empty {@code ReadData}.\n\t *\n\t * @return an empty ReadData\n\t */\n\tstatic ReadData empty() {\n\t\treturn ByteArrayReadData.EMPTY;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/VolatileReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.prefetch.AggregatingPrefetchLazyRead;\n\n/**\n * During its life-time, the content of a {@code VolatileReadData} should not be\n * mutated.\n * <p>\n * Implementations can enforce this by\n * <ul>\n * <li>locking underlying resources until the {@code VolatileReadData} is {@link #close() closed}), or</li>\n * <li>failing with {@link N5IOException} if such modifications are detected.</li>\n * </ul>\n */\npublic interface VolatileReadData extends ReadData, AutoCloseable {\n\n\t@Override\n\tvoid close() throws N5IOException;\n\n\t/**\n\t * Create a new {@code VolatileReadData} that loads (lazily) from {@code lazyRead}.\n\t * <p>\n\t * The returned {@code VolatileReadData} is responsible for {@link LazyRead#close() closing}\n\t *\n\t * @param lazyRead\n\t * \t\tprovides data\n\t *\n\t * @return a new VolatileReadData\n\t */\n\tstatic VolatileReadData from(final LazyRead lazyRead) {\n\t\tfinal LazyRead aggregatingLazyRead = new AggregatingPrefetchLazyRead(lazyRead);\n\t\treturn new LazyReadData(aggregatingLazyRead);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/prefetch/AggregatingPrefetchLazyRead.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.prefetch;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\n\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\n\n/**\n * A {@link SliceTrackingLazyRead} that implements {@link #prefetch} to\n * aggregate overlapping / adjacent ranges and then materialize each aggregated\n * range.\n */\npublic class AggregatingPrefetchLazyRead extends SliceTrackingLazyRead {\n\n\tpublic AggregatingPrefetchLazyRead(final LazyRead delegate) {\n\t\tsuper(delegate);\n\t}\n\n\t/**\n\t * Indicates that the given slices will be subsequently read.\n\t * <p>\n\t * This implementation groups overlapping / adjacent {@link Range}s into single read requests.\n\t *\n\t * @param ranges\n\t * \t\tslice ranges to prefetch\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\t@Override\n\tpublic void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\n\t\tfinal List<Range> filteredRanges = new ArrayList<>(ranges);\n\t\tfilteredRanges.removeIf(this::isCovered);\n\t\tfinal Collection<? extends Range> aggregatedRanges = Range.aggregate(filteredRanges);\n\t\tfor (final Range slice : aggregatedRanges) {\n\t\t\tmaterialize(slice.offset(), slice.length());\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/prefetch/EnclosingPrefetchLazyRead.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.prefetch;\n\nimport java.util.Collection;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\n\n/**\n * A {@link SliceTrackingLazyRead} that implements {@link #prefetch} to\n * materialize the bounding range of all requested ranges.\n */\npublic class EnclosingPrefetchLazyRead extends SliceTrackingLazyRead {\n\n\tpublic EnclosingPrefetchLazyRead(final LazyRead delegate) {\n\t\tsuper(delegate);\n\t}\n\n\t/**\n\t * Indicates that the given slices will be subsequently read.\n\t * {@code LazyRead} implementations (optionally) may take steps to prepare\n\t * for these subsequent slices.\n\t * <p>\n\t * Minimal implementation: Find offset and length covering all ranges that\n\t * are not yet fully covered by existing slices. Then materialize the slice\n\t * covering that range.\n\t *\n\t * @param ranges\n\t * \t\tslice ranges to prefetch\n\t *\n\t * @throws N5IOException\n\t * \t\tif any I/O error occurs\n\t */\n\t@Override\n\tpublic void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\n\t\tlong fromIndex = Long.MAX_VALUE;\n\t\tlong toIndex = Long.MIN_VALUE;\n\t\tfor (final Range slice : ranges) {\n\t\t\tif (!isCovered(slice)) {\n\t\t\t\tfromIndex = Math.min(fromIndex, slice.offset());\n\t\t\t\ttoIndex = Math.max(toIndex, slice.end());\n\t\t\t}\n\t\t}\n\n\t\tif (fromIndex < toIndex) {\n\t\t\tmaterialize(fromIndex, toIndex - fromIndex);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/prefetch/SliceTrackingLazyRead.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.prefetch;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * A {@link LazyRead} that wraps a delegate {@code LazyRead} and keeps track of\n * all slices that have been {@link #materialize materialized}.\n * <p>\n * When materializing a new slice, we first check whether it is completely\n * covered by a materialized slice that we already track. If so, then we just\n * return a slice on the existing materialized slice. If not, we materialize the\n * slice from the delegate track it.\n */\npublic class SliceTrackingLazyRead implements LazyRead {\n\n\tprotected static class Slice implements Range {\n\n\t\t// Offset and length in the delegate\n\t\tprivate final long offset;\n\t\tprivate final long length;\n\n\t\t// Data of this slice\n\t\tprivate final ReadData data;\n\n\t\tSlice(final long offset, final long length, final ReadData data) {\n\t\t\tthis.offset = offset;\n\t\t\tthis.length = length;\n\t\t\tthis.data = data;\n\t\t}\n\n\t\t@Override\n\t\tpublic long offset() {\n\t\t\treturn offset;\n\t\t}\n\n\t\t@Override\n\t\tpublic long length() {\n\t\t\treturn length;\n\t\t}\n\n\t\t@Override\n\t\tpublic String toString() {\n\t\t\treturn \"{\" + offset + \", \" + length + '}';\n\t\t}\n\t}\n\n\tprotected final List<Slice> slices = new ArrayList<>();\n\n\t/**\n\t * The {@code LazyRead} providing our data.\n\t */\n\tprivate final LazyRead delegate;\n\n\tpublic SliceTrackingLazyRead(final LazyRead delegate) {\n\t\tthis.delegate = delegate;\n\t}\n\n\t@Override\n\tpublic void close() throws IOException {\n\t\tdelegate.close();\n\t}\n\n\t@Override\n\tpublic ReadData materialize(final long offset, final long length) throws N5IOException {\n\t\tfinal Slice 
containing = Slices.findContainingSlice(slices, offset, length);\n\t\tif (containing != null) {\n\t\t\treturn containing.data.slice(offset - containing.offset, length);\n\t\t} else {\n\t\t\tfinal ReadData data = delegate.materialize(offset, length);\n\t\t\tSlices.addSlice(slices, new Slice(offset, length, data));\n\t\t\treturn data;\n\t\t}\n\t}\n\n\t@Override\n\tpublic long size() throws N5IOException {\n\t\treturn delegate.size();\n\t}\n\n\tprotected boolean isCovered(final Range slice) {\n\n\t\treturn Slices.findContainingSlice(slices, slice) != null;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/prefetch/Slices.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.prefetch;\n\nimport java.util.Collections;\nimport java.util.Comparator;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\n\nclass Slices {\n\n\tprivate Slices() {\n\t\t// utility class. should not be instantiated.\n\t}\n\n\t/**\n\t * In an ordered list of {@code slices}, find a slice that completely contains the given range.\n\t * <p>\n\t * Pre-conditions:\n\t * <ol>\n\t * <li>Slices are ordered by offset.</li>\n\t * <li>If two slices overlap, no slice is fully contained within the other.\n\t *     (Therefore, if {@code a.offset < b.offset} then {@code a.end < b.end}.)</li>\n\t * </ol>\n\t *\n\t * @param slices\n\t * \t\tordered list of slices\n\t * @param offset\n\t * \t\tstart of the range to cover\n\t * @param length\n\t * \t\tlength of the range to cover\n\t *\n\t * @return a slice that completely contains the requested range, or {@code null} if no such slice exists\n\t */\n\tstatic <T extends Range> T findContainingSlice(final List<T> slices, final long offset, final long length) {\n\n\t\t// Find the slice with the largest slice.offset <= offset.\n\t\tfinal int i = Collections.binarySearch(slices, Range.at(offset, 0), Comparator.comparingLong(Range::offset));\n\n\t\t// Largest index of a slice with slice.offset <= offset.\n\t\tfinal int index = i < 0 ? 
-i - 2 : i;\n\t\tif (index < 0) {\n\t\t\t// No slice can contain the range, because\n\t\t\t// slices[0].offset is already too large.\n\t\t\treturn null;\n\t\t}\n\n\t\tfinal T slice = slices.get(index);\n\t\tif (slice.end() < offset + length) {\n\t\t\treturn null;\n\t\t}\n\n\t\treturn slice;\n\t}\n\n\t/**\n\t * In an ordered list of {@code slices}, find a slice that completely contains the given range.\n\t * <p>\n\t * Pre-conditions:\n\t * <ol>\n\t * <li>Slices are ordered by offset.</li>\n\t * <li>If two slices overlap, no slice is fully contained within the other.\n\t *     (Therefore, if {@code a.offset < b.offset} then {@code a.end < b.end}.)</li>\n\t * </ol>\n\t *\n\t * @param slices\n\t * \t\tordered list of slices\n\t * @param range\n\t * \t\trange to cover\n\t *\n\t * @return a slice that completely contains the requested range, or {@code null} if no such slice exists\n\t */\n\tstatic <T extends Range> T findContainingSlice(final List<T> slices, final Range range) {\n\t\treturn findContainingSlice(slices, range.offset(), range.length());\n\t}\n\n\t/**\n\t * Add a new {@code slice} to the {@code slices} list.\n\t * <p>\n\t * Note that the new {@code slice} is expected not to be fully contained in\n\t * an existing slice!\n\t * <p>\n\t * Pre/post-conditions:\n\t * <ol>\n\t * <li>Slices are ordered by offset.</li>\n\t * <li>If two slices overlap, no slice is fully contained within the other.\n\t *     (Therefore, if {@code a.offset < b.offset} then {@code a.end < b.end}.)</li>\n\t * </ol>\n\t * <p>\n\t * The new {@code slice} will be inserted into the list at the correct position\n\t * (such that {@code slices} remains ordered by slice offset), and all existing\n\t * slices that are fully contained in the new {@code slice} will be removed.\n\t *\n\t * @param slices\n\t * \t\tordered list of slices\n\t * @param slice\n\t * \t\tslice to be inserted\n\t */\n\tstatic <T extends Range> void addSlice(final List<T> slices, final T slice) {\n\n\t\tfinal int i = 
Collections.binarySearch(slices, slice, Comparator.comparingLong(Range::offset));\n\t\tfinal int from = i < 0 ? -i - 1 : i;\n\n\t\tint to = from;\n\t\twhile (to < slices.size() && slices.get(to).end() <= slice.end()) {\n\t\t\t++to;\n\t\t}\n\n\t\tif (from == to) {\n\t\t\t// empty range: just insert\n\t\t\tslices.add(from, slice);\n\t\t} else {\n\t\t\t// overwrite the first element in range, remove the rest\n\t\t\tslices.set(from, slice);\n\t\t\tslices.subList(from + 1, to).clear();\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/segment/ConcatenatedReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.segment;\n\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * Implementation of a {@link SegmentedReadData} representing the concatenation\n * of several {@code SegmentedReadData}s.\n * <p>\n * {@code ConcatenatedReadData} contains the segments of all concatenated {@code\n * SegmentedReadData}s with appropriately offset locations.\n * <p>\n * In particular, it is also possible to concatenate {@code SegmentedReadData}s\n * with (yet) unknown length. (This is useful for postponing compression of\n * DataBlocks until they are actually written.) In that case, segment locations\n * are only available after all lengths become known. 
This happens when this\n * {@code ConcatenatedReadData} (or all its constituents) is\n * {@link #materialize() materialized} or {@link #writeTo(OutputStream) written}.\n */\nclass ConcatenatedReadData implements SegmentedReadData {\n\n\tprivate final List<SegmentedReadData> content;\n\tprivate final ReadData delegate;\n\tprivate final List<Segment> segments;\n\tprivate final List<Range> locations;\n\tprivate final Map<Segment, Range> segmentToLocation;\n\tprivate boolean locationsBuilt;\n\tprivate long length;\n\n\tConcatenatedReadData(final List<SegmentedReadData> content) {\n\t\tthis.content = content;\n\t\tdelegate = ReadData.from(os -> content.forEach(d -> d.writeTo(os)));\n\t\tsegments = new ArrayList<>();\n\t\tlocations = new ArrayList<>();\n\t\tsegmentToLocation = new HashMap<>();\n\t\tlocationsBuilt = false;\n\t\tlength = -1;\n\t}\n\n\t// constructor for slices\n\tprivate ConcatenatedReadData(final ReadData delegate, final List<Segment> segments,\n\t\t\tfinal List<Range> locations) {\n\t\tcontent = null;\n\t\tthis.delegate = delegate;\n\t\tthis.segments = segments;\n\t\tthis.locations = locations;\n\t\tsegmentToLocation = new HashMap<>();\n\t\tfor (int i = 0; i < segments.size(); i++) {\n\t\t\tsegmentToLocation.put(segments.get(i), locations.get(i));\n\t\t}\n\t\tlocationsBuilt = true;\n\t}\n\n\t/**\n\t * Verify that all {@code content} elements have known length.\n\t * Builds {@code segments} and {@code locations} if they have not been built yet.\n\t *\n\t * @throws IllegalStateException if any of the concatenated ReadData don't know their length yet\n\t */\n\tprivate void ensureKnownSize() throws IllegalStateException {\n\t\tif (!locationsBuilt) {\n\t\t\tlong offset = 0;\n\t\t\tfor (int i = 0; i < content.size(); i++) {\n\t\t\t\tfinal SegmentedReadData data = content.get(i);\n\t\t\t\tif (data.length() < 0) {\n\t\t\t\t\tthrow new IllegalStateException(\"Some of the concatenated ReadData don't know their length 
yet.\");\n\t\t\t\t}\n\t\t\t\tsegments.addAll(data.segments());\n\t\t\t\tfor (Segment segment : data.segments()) {\n\t\t\t\t\tfinal Range l = data.location(segment);\n\t\t\t\t\tlocations.add(Range.at(l.offset() + offset, l.length()));\n\t\t\t\t}\n\t\t\t\toffset += data.length();\n\t\t\t}\n\t\t\tlength = offset;\n\n\t\t\tfor (int i = 0; i < segments.size(); i++) {\n\t\t\t\tsegmentToLocation.put(segments.get(i), locations.get(i));\n\t\t\t}\n\n\t\t\tlocationsBuilt = true;\n\t\t}\n\t}\n\n\t@Override\n\tpublic Range location(final Segment segment) throws IllegalArgumentException {\n\t\tensureKnownSize();\n\t\tfinal Range location = segmentToLocation.get(segment);\n\t\tif (location == null) {\n\t\t\tthrow new IllegalArgumentException();\n\t\t}\n\t\treturn location;\n\t}\n\n\t@Override\n\tpublic List<? extends Segment> segments() {\n\t\treturn segments;\n\t}\n\n\t@Override\n\tpublic long length() {\n\t\tif (length < 0) {\n\t\t\tlength = 0;\n\t\t\tfor (final ReadData data : content) {\n\t\t\t\tfinal long l = data.length();\n\t\t\t\tif (l < 0) {\n\t\t\t\t\tlength = -1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tlength += l;\n\t\t\t}\n\t\t}\n\t\treturn length;\n\t}\n\n\t@Override\n\tpublic long requireLength() throws N5IOException {\n\t\tif (length < 0) {\n\t\t\tlength = 0;\n\t\t\tfor (final ReadData data : content) {\n\t\t\t\tlength += data.requireLength();\n\t\t\t}\n\t\t}\n\t\treturn length;\n\t}\n\n\t@Override\n\tpublic SegmentedReadData slice(final Segment segment) throws IllegalArgumentException, N5IOException {\n\t\tensureKnownSize();\n\t\tfinal Range l = location(segment);\n\t\treturn slice(l.offset(), l.length());\n\t}\n\n\t@Override\n\tpublic SegmentedReadData slice(final long offset, final long length) throws N5IOException {\n\t\tensureKnownSize();\n\t\tfinal ReadData delegateSlice = delegate.slice(offset, length);\n\t\tfinal long sliceLength = delegateSlice.length();\n\n\t\t// fromIndex: find first segment with offset >= sourceOffset\n\t\tint fromIndex = 
Collections.binarySearch(locations, Range.at(offset, -1), Range.COMPARATOR);\n\t\tif (fromIndex < 0) {\n\t\t\tfromIndex = -fromIndex - 1;\n\t\t}\n\n\t\t// toIndex: find first segment with offset >= sourceOffset + length\n\t\tint toIndex = Collections.binarySearch(locations, Range.at(offset + sliceLength, -1), Range.COMPARATOR);\n\t\tif (toIndex < 0) {\n\t\t\ttoIndex = -toIndex - 1;\n\t\t}\n\n\t\t// contained: find segments in [fromIndex, toIndex) with s.offset() + s.length() <= sourceOffset + length\n\t\tfinal List<Segment> containedSegments = new ArrayList<>();\n\t\tfinal List<Range> containedSegmentLocations = new ArrayList<>();\n\t\tfor (int i = fromIndex; i < toIndex; ++i) {\n\t\t\tfinal Range l = locations.get(i);\n\t\t\tif (l.offset() + l.length() <= offset + sliceLength) {\n\t\t\t\tcontainedSegments.add(segments.get(i));\n\t\t\t\tcontainedSegmentLocations.add(Range.at(l.offset() - offset, l.length()));\n\t\t\t}\n\t\t}\n\n\t\treturn new ConcatenatedReadData(delegateSlice, containedSegments, containedSegmentLocations);\n\t}\n\n\t@Override\n\tpublic InputStream inputStream() throws N5IOException, IllegalStateException {\n\t\treturn delegate.inputStream();\n\t}\n\n\t@Override\n\tpublic byte[] allBytes() throws N5IOException, IllegalStateException {\n\t\treturn delegate.allBytes();\n\t}\n\n\t@Override\n\tpublic ByteBuffer toByteBuffer() throws N5IOException, IllegalStateException {\n\t\treturn delegate.toByteBuffer();\n\t}\n\n\t@Override\n\tpublic SegmentedReadData materialize() throws N5IOException {\n\t\tdelegate.materialize();\n\t\treturn this;\n\t}\n\n\t@Override\n\tpublic void writeTo(final OutputStream outputStream) throws N5IOException, IllegalStateException {\n\t\tdelegate.writeTo(outputStream);\n\t}\n\n\t@Override\n\tpublic ReadData encode(final OutputStreamOperator encoder) {\n\t\treturn delegate.encode(encoder);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/segment/DefaultSegmentedReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.segment;\n\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * Implementation of a {@link SegmentedReadData} wrapper around an existing {@code ReadData} delegate.\n * <p>\n * It is used for\n * <ul>\n * <li>adding segment information to vanilla ReadData (see {@link SegmentedReadData#wrap(ReadData, Range...)}),</li>\n * <li>representing slices into other {@code DefaultSegmentedReadData}.</li>\n * </ul>\n */\nclass DefaultSegmentedReadData implements SegmentedReadData {\n\n\t/**\n\t * The {@code ReadData} providing our data. This is either {@code segmentSource} in\n\t * which case {@code offset==0}, or a slice into {@code segmentSource} in\n\t * which case {@code offset} is the offset of the slice.\n\t */\n\tprivate final ReadData delegate;\n\n\t/**\n\t * The {@link Segment#source() source} {@code ReadData} of all contained\n\t * segments.\n\t */\n\tprivate final ReadData segmentSource;\n\n\t/**\n\t * The offset of {@code delegate} with respect to {@code segmentSource}.\n\t */\n\tprivate final long offset;\n\n\t/**\n\t * Contained segments, ordered by location.\n\t */\n\tprivate final List<SegmentImpl> segments;\n\n\tprivate static class SegmentImpl implements Segment, Range {\n\n\t\tprivate final SegmentedReadData source;\n\t\tprivate final long offset;\n\t\tprivate final long length;\n\n\t\tpublic SegmentImpl(final SegmentedReadData source, final Range location) {\n\t\t\tthis(source, location.offset(), location.length());\n\t\t}\n\n\t\tpublic SegmentImpl(final SegmentedReadData source, final long offset, final long length) {\n\t\t\tthis.source = source;\n\t\t\tthis.offset = 
offset;\n\t\t\tthis.length = length;\n\t\t}\n\n\t\t@Override\n\t\tpublic long offset() {\n\t\t\treturn offset;\n\t\t}\n\n\t\t@Override\n\t\tpublic long length() {\n\t\t\treturn length;\n\t\t}\n\n\t\t@Override\n\t\tpublic SegmentedReadData source() {\n\t\t\treturn source;\n\t\t}\n\t}\n\n\tprivate static class EnclosingSegmentImpl extends SegmentImpl {\n\n\t\tpublic EnclosingSegmentImpl(final SegmentedReadData source) {\n\t\t\tsuper(source, 0, -1);\n\t\t}\n\n\t\t@Override\n\t\tpublic long length() {\n\t\t\treturn source().length();\n\t\t}\n\t}\n\n\t// assumes segments are ordered by location\n\tprivate DefaultSegmentedReadData(final ReadData delegate, final ReadData segmentSource, final long offset, final List<SegmentImpl> segments) {\n\t\tthis.delegate = delegate;\n\t\tthis.segmentSource = segmentSource;\n\t\tthis.offset = offset;\n\t\tthis.segments = segments;\n\t}\n\n\t// assumes segments are ordered by location\n\tprivate DefaultSegmentedReadData(final ReadData delegate, final List<SegmentImpl> segments) {\n\t\tthis.delegate = delegate;\n\t\tthis.segmentSource = this;\n\t\tthis.offset = 0;\n\t\tthis.segments = segments;\n\t}\n\n\t/**\n\t * Wrap {@code readData} and create segments at the given locations. 
The\n\t * order of segments in the returned {@link SegmentsAndData#segments()} list\n\t * matches the order of the given {@code locations} (while the {@link\n\t * #segments} in the {@link SegmentsAndData#data()} are ordered by offset).\n\t */\n\tstatic SegmentsAndData wrap(final ReadData readData, final List<Range> locations) {\n\t\tfinal List<SegmentImpl> sortedSegments = new ArrayList<>(locations.size());\n\t\tfinal DefaultSegmentedReadData data = new DefaultSegmentedReadData(readData, sortedSegments);\n\t\tfor (Range l : locations) {\n\t\t\tsortedSegments.add(new SegmentImpl(data, l));\n\t\t}\n\t\tfinal List<Segment> segments = new ArrayList<>(sortedSegments);\n\t\tsortedSegments.sort(Range.COMPARATOR);\n\n\t\treturn new SegmentsAndData() {\n\n\t\t\t@Override\n\t\t\tpublic List<Segment> segments() {\n\t\t\t\treturn segments;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic SegmentedReadData data() {\n\t\t\t\treturn data;\n\t\t\t}\n\t\t};\n\t}\n\n\t/**\n\t * Wrap the given {@code delegate} with a single segment fully containing it.\n\t */\n\tDefaultSegmentedReadData(final ReadData delegate) {\n\t\tthis.delegate = delegate;\n\t\tthis.segmentSource = this;\n\t\tthis.offset = 0;\n\t\tthis.segments = Collections.singletonList(new EnclosingSegmentImpl(this));\n\t}\n\n\t@Override\n\tpublic Range location(final Segment segment) {\n\t\tif (segmentSource.equals(segment.source()) && segment instanceof Range) {\n\t\t\tfinal Range l = (Range) segment;\n\t\t\treturn offset == 0 ? l : Range.at(l.offset() - offset, l.length());\n\t\t} else {\n\t\t\tthrow new IllegalArgumentException();\n\t\t}\n\t}\n\n\t@Override\n\tpublic List<? 
extends Segment> segments() {\n\t\treturn segments;\n\t}\n\n\t@Override\n\tpublic long length() {\n\t\treturn delegate.length();\n\t}\n\n\t@Override\n\tpublic long requireLength() throws N5IOException {\n\t\treturn delegate.requireLength();\n\t}\n\n\t@Override\n\tpublic SegmentedReadData slice(final Segment segment) throws IllegalArgumentException, N5IOException {\n\t\tif (segmentSource.equals(segment.source())) {\n\t\t\tif (segment instanceof EnclosingSegmentImpl) {\n\t\t\t\treturn this;\n\t\t\t} else if (segment instanceof SegmentImpl) {\n\t\t\t\tfinal SegmentImpl s = (SegmentImpl) segment;\n\t\t\t\treturn new DefaultSegmentedReadData(\n\t\t\t\t\t\tdelegate.slice(s.offset(), s.length()),\n\t\t\t\t\t\tsegmentSource,\n\t\t\t\t\t\tthis.offset + s.offset(),\n\t\t\t\t\t\tCollections.singletonList(s));\n\t\t\t}\n\t\t}\n\t\tthrow new IllegalArgumentException();\n\t}\n\n\t@Override\n\tpublic SegmentedReadData slice(final long offset, final long length) throws N5IOException {\n\n\t\tfinal ReadData delegateSlice = delegate.slice(offset, length);\n\t\tfinal long sourceOffset = this.offset + offset;\n\t\tfinal long sliceLength = delegateSlice.length();\n\n\n\t\t// fromIndex: find first segment with offset >= sourceOffset\n\t\tint fromIndex = Collections.binarySearch(segments, Range.at(sourceOffset, -1), Range.COMPARATOR);\n\t\tif (fromIndex < 0) {\n\t\t\tfromIndex = -fromIndex - 1;\n\t\t}\n\n\t\t// toIndex: find first segment with offset >= sourceOffset + length\n\t\tint toIndex;\n\t\tif (sliceLength < 0) {\n\t\t\ttoIndex = segments.size();\n\t\t} else {\n\t\t\ttoIndex = Collections.binarySearch(segments, Range.at(sourceOffset + sliceLength, -1), Range.COMPARATOR);\n\t\t\tif (toIndex < 0) {\n\t\t\t\ttoIndex = -toIndex - 1;\n\t\t\t}\n\t\t}\n\n\t\t// contained: find segments in [fromIndex, toIndex) with s.offset() + s.length() <= sourceOffset + length\n\t\tfinal List<SegmentImpl> candidates = segments.subList(fromIndex, toIndex);\n\t\tfinal List<SegmentImpl> 
sliceSegments;\n\t\tif (sliceLength < 0) {\n\t\t\tsliceSegments = candidates;\n\t\t} else {\n\t\t\tfinal ArrayList<SegmentImpl> contained = new ArrayList<>(candidates.size());\n\t\t\tcandidates.forEach(s -> {\n\t\t\t\tif (s.offset() + s.length() <= sourceOffset + sliceLength) {\n\t\t\t\t\tcontained.add(s);\n\t\t\t\t}\n\t\t\t});\n\t\t\tcontained.trimToSize();\n\t\t\tsliceSegments = contained;\n\t\t}\n\n\t\treturn new DefaultSegmentedReadData(\n\t\t\t\tdelegateSlice,\n\t\t\t\tsegmentSource,\n\t\t\t\tsourceOffset,\n\t\t\t\tsliceSegments);\n\t}\n\n\t@Override\n\tpublic InputStream inputStream() throws N5IOException, IllegalStateException {\n\t\treturn delegate.inputStream();\n\t}\n\n\t@Override\n\tpublic byte[] allBytes() throws N5IOException, IllegalStateException {\n\t\treturn delegate.allBytes();\n\t}\n\n\t@Override\n\tpublic ByteBuffer toByteBuffer() throws N5IOException, IllegalStateException {\n\t\treturn delegate.toByteBuffer();\n\t}\n\n\t@Override\n\tpublic SegmentedReadData materialize() throws N5IOException {\n\t\tdelegate.materialize();\n\t\treturn this;\n\t}\n\n\t@Override\n\tpublic void writeTo(final OutputStream outputStream) throws N5IOException, IllegalStateException {\n\t\tdelegate.writeTo(outputStream);\n\t}\n\n\t@Override\n\tpublic void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\t\tdelegate.prefetch(ranges);\n\t}\n\n\t/**\n\t * Returns a new ReadData that uses the given {@code OutputStreamOperator} to\n\t * encode this SegmentedReadData.\n\t * <p>\n\t * Note that segments are lost by encoding.\n\t *\n\t * @param encoder\n\t * \t\tOutputStreamOperator to use for encoding\n\t *\n\t * @return encoded ReadData\n\t */\n\t@Override\n\tpublic ReadData encode(final OutputStreamOperator encoder) {\n\t\treturn delegate.encode(encoder);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/segment/Segment.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.segment;\n\n/**\n * A particular segment in a source {@link SegmentedReadData}.\n */\npublic interface Segment {\n\n\t/**\n\t * Returns the {@code SegmentedReadData} on which this segment is originally\n\t * defined. (The segment is tracked through slices and concatenations, but\n\t * the source will remain the same.)\n\t * <p>\n\t * This is mostly just used internally to make {@code SegmentedReadData}\n\t * implementations easier. The only real use for {@code source()} outside of\n\t * that is to get a {@code ReadData} containing exactly this segment, using\n\t * {@code segment.source().slice(segment)}.\n\t *\n\t * @return the {@code SegmentedReadData} on which this segment is originally defined\n\t */\n\tSegmentedReadData source();\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/readdata/segment/SegmentedReadData.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.segment;\n\nimport java.io.OutputStream;\nimport java.util.Arrays;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\n\n/**\n * A {@code ReadData} which keeps track of contained {@link Segment}s.\n * <p>\n * A {@code Segment} refers to <em>the data</em> in a particular {@link Range}.\n * (That range can be obtained using {@link #location(Segment)}). Segments are\n * pulled along when {@link #slice(long, long)} slicing} or {@link #concatenate\n * concatenating} {@code SegmentedReadData} (and will have appropriately offset\n * {@link #location locations} in the slice/concatenation).\n */\npublic interface SegmentedReadData extends ReadData {\n\n\tinterface SegmentsAndData {\n\t\tList<Segment> segments();\n\t\tSegmentedReadData data();\n\t}\n\n\t/**\n\t * Wrap a {@link ReadData} and create one segment comprising the entire\n\t * {@code\n\t * readData}. The segment can be retrieved as the first (and only) element\n\t * of {@link SegmentedReadData#segments()}.\n\t * \n\t * @param readData\n\t *            the ReadData to wrap\n\t * @return the SegmentedReadData\n\t */\n\tstatic SegmentedReadData wrap(ReadData readData) {\n\t\treturn new DefaultSegmentedReadData(readData);\n\t}\n\n\t/**\n\t * Wrap {@code readData} and create segments at the given locations. The\n\t * order of segments in the returned {@link SegmentsAndData#segments()} list\n\t * matches the order of the given {@code locations} (while the\n\t * {@link #segments} in the {@link SegmentsAndData#data()} are ordered by\n\t * offset).\n\t * \n\t * @param readData\n\t *            the ReadData to wrap\n\t * @param locations\n\t *            the ranges for segments\n\t * @return the SegmentsAndData\n\t */\n\tstatic SegmentsAndData wrap(ReadData readData, Range... 
locations) {\n\t\treturn wrap(readData, Arrays.asList(locations));\n\t}\n\n\t/**\n\t * Wrap {@code readData} and create segments at the given locations. The\n\t * order of segments in the returned {@link SegmentsAndData#segments()} list\n\t * matches the order of the given {@code locations} (while the\n\t * {@link #segments} in the {@link SegmentsAndData#data()} are ordered by\n\t * offset).\n\t * \n\t * @param readData\n\t *            the ReadData to wrap\n\t * @param locations\n\t *            the ranges for segments\n\t * @return the SegmentsAndData\n\t */\n\tstatic SegmentsAndData wrap(ReadData readData, List<Range> locations) {\n\t\treturn DefaultSegmentedReadData.wrap(readData, locations);\n\t}\n\n\t/**\n\t * Return a {@link SegmentedReadData} representing the concatenation of the\n\t * given {@code readDatas}. The concatenation contains the segments of all\n\t * concatenated {@code readData}s with appropriately offset locations.\n\t * <p>\n\t * In particular, it is also possible to concatenate\n\t * {@code SegmentedReadData}s with (yet) unknown length. (This is useful for\n\t * postponing compression of DataBlocks until they are actually written.) In\n\t * that case, segment locations are only available after all lengths become\n\t * known. 
This happens when the concatenation (or all its constituents) is\n\t * {@link #materialize() materialized} or {@link #writeTo(OutputStream)\n\t * written}.\n\t * \n\t * @param readDatas\n\t *            a list of ReadData to concatenate\n\t * @return the SegmentedReadData comprising all the input readDatas\n\t */\n\tstatic SegmentedReadData concatenate(List<SegmentedReadData> readDatas) {\n\t\treturn new ConcatenatedReadData(readDatas);\n\t}\n\n\t/**\n\t * Returns the location of {@code segment} in this {@code ReadData}.\n\t * <p>\n\t * Note that this {@code ReadData} is not necessarily the source of the\n\t * segment.\n\t * <p>\n\t * The returned {@code Range} may be {@code {offset=0, length=-1}}, which\n\t * means that the segment comprises this whole {@code ReadData} (and the\n\t * length of this {@code ReadData} is not yet known).\n\t *\n\t * @param segment\n\t *            the segment id\n\t *\n\t * @return location of the segment\n\t *\n\t * @throws IllegalArgumentException\n\t *             if the segment is not contained in this ReadData\n\t */\n\tRange location(Segment segment) throws IllegalArgumentException;\n\n\t/**\n\t * Return all segments (fully) contained in this {@code ReadData}, ordered\n\t * by location (that is, sorted by {@link Range#COMPARATOR}).\n\t *\n\t * @return all segments contained in this {@code ReadData}.\n\t */\n\tList<? 
extends Segment> segments();\n\n\t@Override\n\tdefault SegmentedReadData limit(final long length) throws N5Exception.N5IOException {\n\t\treturn slice(0, length);\n\t}\n\n\t/**\n\t * Return a {@code SegmentedReadData} wrapping a slice containing exactly\n\t * the given segment.\n\t * <p>\n\t * The {@link #location} of the given {@code segment} in this ReadData\n\t * specifies the range to slice.\n\t *\n\t * @param segment segment to slice\n\t * @return a slice\n\t * @throws IllegalArgumentException\n\t * \t\tif the segment is not contained in this ReadData\n\t * @throws N5Exception.N5IOException\n\t * \t\tif an I/O error occurs\n\t */\n\tSegmentedReadData slice(Segment segment) throws IllegalArgumentException, N5Exception.N5IOException;\n\n\t/**\n\t * Returns a new {@link SegmentedReadData} representing a slice, or subset\n\t * of this ReadData.\n\t * <p>\n\t * The {@link #segments} of the returned SegmentedReadData are all segments\n\t * fully contained in the requested range.\n\t *\n\t * @param offset the offset relative to this ReadData\n\t * @param length length of the returned ReadData\n\t * @return a slice\n\t * @throws N5Exception.N5IOException if an I/O error occurs\n\t */\n\t@Override\n\tSegmentedReadData slice(long offset, long length) throws N5Exception.N5IOException;\n\n\t/**\n\t * Returns a new {@link SegmentedReadData} representing a slice, or subset\n\t * of this ReadData.\n\t * <p>\n\t * The {@link #segments} of the returned SegmentedReadData are all segments\n\t * fully contained in the requested range.\n\t *\n\t * @param range a range in this ReadData\n\t * @return a slice\n\t * @throws N5Exception.N5IOException if an I/O error occurs\n\t */\n\t@Override\n\tdefault SegmentedReadData slice(final Range range) throws N5Exception.N5IOException {\n\t\treturn slice(range.offset(), range.length());\n\t}\n\n\t@Override\n\tSegmentedReadData materialize() throws N5Exception.N5IOException;\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/serialization/JsonArrayUtils.java",
    "content": "package org.janelia.saalfeldlab.n5.serialization;\n\nimport com.google.gson.JsonArray;\nimport com.google.gson.JsonElement;\n\npublic class JsonArrayUtils {\n\n\t/**\n\t * Reverses the order of elements in a JSON array in-place.\n\t *\n\t * @param array the JSON array to reverse; must not be null\n\t * @see N5Annotations.ReverseArray\n\t */\n\tpublic static void reverse(final JsonArray array) {\n\n\t\tJsonElement a;\n\t\tfinal int max = array.size() - 1;\n\t\tfor (int i = (max - 1) / 2; i >= 0; --i) {\n\t\t\tfinal int j = max - i;\n\t\t\ta = array.get(i);\n\t\t\tarray.set(i, array.get(j));\n\t\t\tarray.set(j, a);\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/serialization/N5Annotations.java",
    "content": "package org.janelia.saalfeldlab.n5.serialization;\n\nimport java.io.Serializable;\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Inherited;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\n/**\n * Provides specialized annotations for N5 serialization behaviors.\n * <p>\n * This interface defines annotations that control specific serialization\n * transformations needed for N5 compatibility across different storage\n * formats, for example, when dealing with dimension ordering conventions.\n * \n * @see ReverseArray\n */\npublic interface N5Annotations extends Serializable {\n\n\t/**\n\t * Indicates that an array field should be reversed during serialization/deserialization.\n\t * <p>\n\t * This annotation is used to handle dimension ordering differences between storage formats.\n\t * For example, Zarr uses C-order (row-major) dimension ordering [Z, Y, X], while N5 uses\n\t * F-order (column-major) dimension ordering [X, Y, Z].\n\t * <p>\n\t * This ensures that dimension-related arrays maintain the correct semantic meaning\n\t * across different storage format conventions.\n\t */\n\t@Inherited\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Target(ElementType.FIELD)\n\t@interface ReverseArray {\n\t}\n}\n\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/serialization/NameConfig.java",
    "content": "package org.janelia.saalfeldlab.n5.serialization;\n\nimport org.scijava.annotations.Indexable;\n\nimport java.io.Serializable;\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Inherited;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\n/**\n * Configuration interface for N5 serialization naming and parameter annotations.\n * <p>\n * This interface provides a standardized way to configure serialization names and parameters\n * for N5 components such as compression algorithms and codecs. It defines annotations that\n * control how classes and their fields are serialized and deserialized for the N5 API.\n * <p>\n * Classes implementing this interface can use the provided annotations to:\n * <ul>\n *   <li>Define a serialization type name with {@link Name @Name}</li>\n *   <li>Specify a namespace prefix with {@link Prefix @Prefix}</li>\n *   <li>Mark fields as serialization parameters with {@link Parameter @Parameter}</li>\n * </ul>\n * \n * @see Name\n * @see Prefix\n * @see Parameter\n */\npublic interface NameConfig extends Serializable {\n\n\t/**\n\t * Defines a namespace prefix for serialization.\n\t * <p>\n\t * This annotation specifies a prefix that is prepended to the serialization\n\t * type name, creating a namespaced identifier. This is useful for organizing\n\t * related components into logical groups, usually all the components implementing\n\t * a particular interface.\n\t * \n\t * @see Name\n\t */\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Inherited\n\t@Target(ElementType.TYPE)\n\t@interface Prefix {\n\t\tString value();\n\t}\n\n\t/**\n\t * Specifies the serialization type name for a class.\n\t * <p>\n\t * This annotation defines the string identifier used during serialization and\n\t * deserialization to identify the type. 
The name should be unique within its\n\t * namespace and is typically a short, descriptive identifier.\n\t * \n\t * @see Prefix\n\t */\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Inherited\n\t@Target(ElementType.TYPE)\n\t@Indexable\n\t@interface Name {\n\t\tString value();\n\t}\n\n\t/**\n\t * Controls whether a class should be serializable as a {@code NameConfig}.\n\t * <p>\n\t * This annotation allows explicitly enabling or disabling serialization for a class.\n\t * <p>\n\t * By default, classes are serialized.\n\t */\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Inherited\n\t@Target(ElementType.TYPE)\n\t@Indexable @interface Serialize {\n\t\tboolean value() default true;\n\t}\n\n\t/**\n\t * Marks a field as a parameter to be serialized.\n\t * <p>\n\t * This annotation identifies fields that should be included during serialization\n\t * and deserialization. It supports both required and optional parameters.\n\t * <p>\n\t * The {@code value} attribute can be used to specify an alternative name for\n\t * the parameter during serialization. 
If not specified, the field name is used.\n\t * \n\t * @see Name\n\t */\n\t@Retention(RetentionPolicy.RUNTIME)\n\t@Inherited\n\t@Target(ElementType.FIELD)\n\t@interface Parameter {\n\t\t/**\n\t\t * Alternative name for the parameter during serialization.\n\t\t * If empty, the field name is used.\n\t\t * \n\t\t * @return the parameter name, or empty string for field name\n\t\t */\n\t\tString value() default \"\";\n\t\t\n\t\t/**\n\t\t * Whether this parameter is optional.\n\t\t * Optional parameters may be omitted during deserialization.\n\t\t * \n\t\t * @return {@code true} if the parameter is optional, {@code false} otherwise\n\t\t */\n\t\tboolean optional() default false;\n\t}\n\n\t/**\n\t * Returns the serialization type name for this instance.\n\t * <p>\n\t * This method retrieves the value from the {@link Name @Name} annotation\n\t * if present on the class.\n\t * \n\t * @return the type name from the {@code @Name} annotation, or {@code null} if not annotated\n\t */\n\tdefault String getType() {\n\n\t\tfinal Name type = getClass().getAnnotation(Name.class);\n\t\treturn type == null ? null : type.value();\n\n\t}\n}\n\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/DatasetAccess.java",
"content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.List;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.N5Writer.DataBlockSupplier;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\n\n/**\n * Wrap an instantiated DataBlock/block codec hierarchy to implement (single and\n * batch) DataBlock read/write methods.\n *\n * @param <T>\n * \t\ttype of the data contained in the DataBlock\n */\npublic interface DatasetAccess<T> {\n\n\t/**\n\t * Read the chunk at the given {@code gridPosition}.\n\t * <p>\n\t * If the requested chunk doesn't exist, then this method will return {@code\n\t * null}. {@code N5IOException} will be thrown if something goes wrong\n\t * reading or decoding data for an existing key.\n\t *\n\t * @param pva\n\t * \t\tdataset storage\n\t * @param gridPosition\n\t * \t\tgrid position of the chunk to read\n\t *\n\t * @return the DataBlock or {@code null}\n\t *\n\t * @throws N5IOException\n\t * \t\tif any error occurs while reading or decoding the block\n\t */\n\tDataBlock<T> readChunk(PositionValueAccess pva, long[] gridPosition) throws N5IOException;\n\n\t/**\n\t * Read the chunks at the given {@code gridPositions}.\n\t * <p>\n\t * The returned {@code List<DataBlock<T>>} is in the same order as the\n\t * requested {@code gridPositions}. That is, the {@code DataBlock} at index\n\t * {@code i} has grid coordinates {@code gridPositions.get(i)}.\n\t * <p>\n\t * If a requested chunk doesn't exist, then the corresponding element in the\n\t * result list will be {@code null}. 
({@code N5IOException} will only be\n\t * thrown if something goes wrong reading or decoding data for an existing\n\t * key.)\n\t *\n\t * @param pva\n\t * \t\tdataset storage\n\t * @param gridPositions\n\t * \t\tlist of grid positions of the chunks to read\n\t * @return list of DataBlocks\n\t *\n\t * @throws N5IOException\n\t * \t\tif any error occurs while reading or decoding blocks\n\t */\n\tList<DataBlock<T>> readChunks(PositionValueAccess pva, List<long[]> gridPositions) throws N5IOException;\n\n\t/**\n\t * Writes a chunk to the {@link DataBlock#getGridPosition() grid position}\n\t * specified by {@code chunk}.\n\t */\n\tvoid writeChunk(PositionValueAccess pva, DataBlock<T> chunk) throws N5IOException;\n\n\t/**\n\t * Writes multiple chunks to the {@link DataBlock#getGridPosition() grid\n\t * positions} specified by the respective {@code chunks}.\n\t */\n\tvoid writeChunks(PositionValueAccess pva, List<DataBlock<T>> chunks) throws N5IOException;\n\n\t/**\n\t * Deletes the chunk at {@code gridPosition}.\n\t *\n\t * @return true if it existed and was deleted.\n\t */\n\tboolean deleteChunk(PositionValueAccess pva, long[] gridPosition) throws N5IOException;\n\n\t/**\n\t * Deletes the chunks at the given {@code gridPositions}.\n\t *\n\t * @return true if any existed and were deleted.\n\t */\n\tboolean deleteChunks(PositionValueAccess pva, List<long[]> gridPositions) throws N5IOException;\n\n\t/**\n\t * @param pva\n\t * \t\tdataset storage\n\t * @param min\n\t * \t\tmin pixel coordinate of region to write\n\t * @param size\n\t * \t\tsize in pixels of region to write\n\t * @param chunkSupplier\n\t * \t\tis asked to create chunks within the given region\n\t * @param writeFully\n\t * \t\tIf false, merge existing data in blocks/chunks that overlap the region boundary. 
If true, overwrite everything.\n\t *\n\t * @throws N5IOException\n\t */\n\tvoid writeRegion(\n\t\t\tPositionValueAccess pva,\n\t\t\tlong[] min,\n\t\t\tlong[] size,\n\t\t\tDataBlockSupplier<T> chunkSupplier,\n\t\t\tboolean writeFully\n\t) throws N5IOException;\n\n\t/**\n\t *\n\t * @param pva\n\t * \t\tdataset storage\n\t * @param min\n\t * \t\tmin pixel coordinate of region to write\n\t * @param size\n\t * \t\tsize in pixels of region to write\n\t * @param chunkSupplier\n\t * \t\tis asked to create chunks within the given region; must be thread-safe\n\t * @param writeFully\n\t * \t\tIf false, merge existing data in blocks/chunks that overlap the region boundary. If true, overwrite everything.\n\t * @param exec\n\t * \t\tused to parallelize over chunks and blocks\n\t *\n\t * @throws N5Exception\n\t * @throws InterruptedException\n\t * @throws ExecutionException\n\t */\n\tvoid writeRegion(\n\t\t\tPositionValueAccess pva,\n\t\t\tlong[] min,\n\t\t\tlong[] size,\n\t\t\tDataBlockSupplier<T> chunkSupplier,\n\t\t\tboolean writeFully,\n\t\t\tExecutorService exec\n\t) throws N5Exception, InterruptedException, ExecutionException;\n\n\t/**\n\t * Read a block at {@code shardGridPosition} at the given nesting {@code\n\t * level}. The data is read as chunks and then rearranged and assembled into\n\t * a (large) {@code DataBlock}.\n\t * <p>\n\t * The {@code getGridPosition()} of the returned {@code DataBlock} is the\n\t * grid position with respect to {@code level}. For example, if {@code\n\t * level==1}, then this refers to the position on the shard grid.\n\t *\n\t * @param pva\n\t * \t\tdataset storage\n\t * @param shardGridPosition\n\t * \t\tposition of the shard to read (on the shard grid at {@code level})\n\t * @param level\n\t * \t\tgrid level of the shard/block to read.\n\t * @return the assembled {@code DataBlock}, or {@code null} if none of its chunks exist\n\t * @throws N5IOException\n\t */\n\tDataBlock<T> readBlock(PositionValueAccess pva, long[] shardGridPosition, int level) throws N5IOException;\n\n\t/**\n\t * Write a full block at the given nesting {@code level}. 
The block data is\n\t * given as a (large) {@code DataBlock} that will be sliced, rearranged, and\n\t * written as chunks (level-0 DataBlocks).\n\t * <p>\n\t * {@code dataBlock.getGridPosition()} is the grid position with respect to\n\t * {@code level}. For example, if {@code level==1}, then this refers to the\n\t * position on the level-1 shard grid.\n\t *\n\t * @param pva\n\t * \t\tdataset storage\n\t * @param dataBlock\n\t * \t\tblock to write\n\t * @param level\n\t * \t\tgrid level of the block to write.\n\t *\n\t * @throws N5IOException\n\t */\n\tvoid writeBlock(PositionValueAccess pva, DataBlock<T> dataBlock, int level) throws N5IOException;\n\n\tNestedGrid getGrid();\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/DefaultDatasetAccess.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Comparator;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.ListIterator;\nimport java.util.Objects;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.stream.Collectors;\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DoubleArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.FloatArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.LongArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5NoSuchKeyException;\nimport org.janelia.saalfeldlab.n5.N5Writer.DataBlockSupplier;\nimport org.janelia.saalfeldlab.n5.ShortArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.StringDataBlock;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodec;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedPosition;\nimport org.janelia.saalfeldlab.n5.util.SubArrayCopy;\n\npublic class DefaultDatasetAccess<T> implements DatasetAccess<T> {\n\n\tprivate final NestedGrid grid;\n\tprivate final BlockCodec<?>[] codecs;\n\n\tpublic DefaultDatasetAccess(final NestedGrid grid, final BlockCodec<?>[] codecs) {\n\t\tthis.grid = grid;\n\t\tthis.codecs = codecs;\n\t}\n\n\tpublic NestedGrid getGrid() {\n\t\treturn grid;\n\t}\n\n\t@Override\n\tpublic DataBlock<T> readChunk(final PositionValueAccess pva, final long[] gridPosition) throws N5IOException {\n\t\tfinal NestedPosition position = grid.nestedPosition(gridPosition);\n\t\ttry (final VolatileReadData 
readData = pva.get(position.key())) {\n\t\t\treturn readChunkRecursive(readData, position, grid.numLevels() - 1);\n\t\t} catch (N5NoSuchKeyException ignored) {\n\t\t\treturn null;\n\t\t}\n\t}\n\n\tprivate DataBlock<T> readChunkRecursive(\n\t\t\tfinal ReadData readData,\n\t\t\tfinal NestedPosition position,\n\t\t\tfinal int level) {\n\t\tif (readData == null) {\n\t\t\treturn null;\n\t\t} else if (level == 0) {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<T> codec = (BlockCodec<T>) codecs[0];\n\t\t\treturn codec.decode(readData, position.absolute(0));\n\t\t} else {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\t\tfinal RawShard shard = codec.decode(readData, position.absolute(level)).getData();\n\t\t\treturn readChunkRecursive(shard.getElementData(position.relative(level - 1)), position, level - 1);\n\t\t}\n\t}\n\n\t@Override\n\tpublic List<DataBlock<T>> readChunks(final PositionValueAccess pva, final List<long[]> gridPositions) throws N5IOException {\n\n\t\t// for non-sharded datasets, just read the chunks individually\n\t\tif (grid.numLevels() == 1) {\n\t\t\treturn gridPositions.stream().map(pos -> readChunk(pva, pos)).collect(Collectors.toList());\n\t\t}\n\n\t\t// Create a list of ChunkRequests and sort it such that requests\n\t\t// from the same (nested) shard are grouped contiguously.\n\t\tfinal ChunkRequests<T> requests = createReadRequests(gridPositions);\n\t\tfinal List<ChunkRequest<T>> duplicates = requests.removeDuplicates();\n\n\t\tfinal List<ChunkRequests<T>> split = requests.split();\n\t\tfor (final ChunkRequests<T> subRequests : split) {\n\t\t\tfinal long[] key = subRequests.relativeGridPosition();\n\t\t\ttry (final VolatileReadData readData = pva.get(key)) {\n\t\t\t\treadChunksRecursive(readData, subRequests);\n\t\t\t} catch (N5NoSuchKeyException ignored) {\n\t\t\t\t// the key didn't exist (as we found out when lazy-reading the index).\n\t\t\t\t// we don't 
have to do anything: all subRequest blocks remain null.\n\t\t\t\t// on to the next shard.\n\t\t\t}\n\t\t}\n\n\t\treturn requests.chunks(duplicates);\n\t}\n\n\t/**\n\t * Bulk Read operation on a shard.\n\t *\n\t * @param readData for the corresponding shard\n\t * @param requests for chunks within the shard to be read\n\t */\n\tprivate void readChunksRecursive(\n\t\t\tfinal ReadData readData,\n\t\t\tfinal ChunkRequests<T> requests\n\t) {\n\t\tassert !requests.requests.isEmpty();\n\t\tassert requests.level > 0;\n\n\t\tif (readData == null) {\n\t\t\treturn;\n\t\t}\n\n\t\tfinal int level = requests.level();\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\tfinal RawShard shard = codec.decode(readData, requests.gridPosition()).getData();\n\n\t\tif (level == 1 ) {\n\t\t\t//Base case; read the chunks\n\n\t\t\t// TODO: collect all the elementPos that we will need and prefetch\n\t\t\t//       Probably best to add a prefetch method to RawShard?\n\n\t\t\t// Here's an attempt at that.\n\t\t\t// Don't love that we have to build a list of positions\n\t\t\t// Consider making DataBlockRequest package private and passing them directly\n\t\t\tfinal ArrayList<long[]> positions = new ArrayList<>();\n\t\t\tfor (final ChunkRequest<T> request : requests) {\n\t\t\t\tpositions.add(request.position.relative(0));\n\t\t\t}\n\t\t\tshard.prefetch(positions);\n\n\t\t\tfor (final ChunkRequest<T> request : requests) {\n\t\t\t\tfinal long[] elementPos = request.position.relative(0);\n\t\t\t\tfinal ReadData elementData = shard.getElementData(elementPos);\n\t\t\t\trequest.chunk = readChunkRecursive(elementData, request.position, 0);\n\t\t\t}\n\t\t} else { // level > 1\n\t\t\tfinal List<ChunkRequests<T>> split = requests.split();\n\t\t\tfor (final ChunkRequests<T> subRequests : split) {\n\t\t\t\tfinal long[] subShardPosition = subRequests.relativeGridPosition();\n\t\t\t\tfinal ReadData elementData = 
shard.getElementData(subShardPosition);\n\t\t\t\treadChunksRecursive(elementData, subRequests);\n\t\t\t}\n\t\t}\n\t}\n\n\t@Override\n\tpublic void writeChunk(final PositionValueAccess pva, final DataBlock<T> chunk) throws N5IOException {\n\n\t\tif (grid.numLevels() == 1) {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<T> codec = (BlockCodec<T>) codecs[0];\n\t\t\tpva.set(chunk.getGridPosition(), codec.encode(chunk));\n\t\t} else {\n\t\t\tfinal NestedPosition position = grid.nestedPosition(chunk.getGridPosition());\n\t\t\tfinal long[] key = position.key();\n\t\t\tfinal ReadData modifiedData;\n\t\t\ttry (final VolatileReadData existingData = pva.get(key)) {\n\t\t\t\tmodifiedData = writeChunkRecursive(existingData, chunk, position, grid.numLevels() - 1);\n\t\t\t\t// Here, we are about to write the shard data, but with the new block modified.\n\t\t\t\t// Need to make sure that the read operations happen now before pva.set acquires a write lock\n\t\t\t\tmodifiedData.materialize();\n\t\t\t}\n\t\t\tpva.set(key, modifiedData);\n\t\t}\n\t}\n\n\tprivate ReadData writeChunkRecursive(\n\t\t\tfinal ReadData existingReadData,\n\t\t\tfinal DataBlock<T> chunk,\n\t\t\tfinal NestedPosition position,\n\t\t\tfinal int level) {\n\t\tif (level == 0) {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<T> codec = (BlockCodec<T>) codecs[0];\n\t\t\treturn codec.encode(chunk);\n\t\t} else {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\t\tfinal long[] gridPos = position.absolute(level);\n\t\t\tfinal RawShard shard = getRawShard(existingReadData, codec, gridPos, level);\n\t\t\tfinal long[] elementPos = position.relative(level - 1);\n\t\t\tfinal ReadData existingElementData = (level == 1)\n\t\t\t\t\t? 
null // if level == 1, we don't need to extract the nested (DataBlock<T>) ReadData because it will be overwritten anyway\n\t\t\t\t\t: shard.getElementData(elementPos);\n\t\t\tfinal ReadData modifiedElementData = writeChunkRecursive(existingElementData, chunk, position, level - 1);\n\t\t\tshard.setElementData(modifiedElementData, elementPos);\n\t\t\treturn codec.encode(new RawShardDataBlock(gridPos, shard));\n\t\t}\n\t}\n\n\t@Override\n\tpublic void writeChunks(final PositionValueAccess pva, final List<DataBlock<T>> chunks) throws N5IOException {\n\n\t\tif (grid.numLevels() == 1) {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<T> codec = (BlockCodec<T>) codecs[0];\n\t\t\tchunks.forEach(chunk -> pva.set(chunk.getGridPosition(), codec.encode(chunk)));\n\t\t} else {\n\t\t\t// Create a list of ChunkRequests, sorted such that requests from\n\t\t\t// the same (nested) shard are grouped contiguously.\n\t\t\tfinal ChunkRequests<T> requests = createWriteRequests(chunks);\n\t\t\trequests.removeDuplicates();\n\t\t\tfinal List<ChunkRequests<T>> split = requests.split();\n\t\t\tfor (final ChunkRequests<T> subRequests : split) {\n\t\t\t\tfinal boolean writeFully = subRequests.coversShard();\n\t\t\t\tfinal long[] shardKey = subRequests.relativeGridPosition();\n\t\t\t\tfinal ReadData modifiedData;\n\t\t\t\ttry (final VolatileReadData existingData = writeFully ? 
null : pva.get(shardKey)) {\n\t\t\t\t\tmodifiedData = writeChunksRecursive(existingData, subRequests);\n\t\t\t\t\t// Here, we are about to write the shard data, but with the new blocks modified.\n\t\t\t\t\t// Need to make sure that the read operations happen now before pva.set acquires a write lock\n\t\t\t\t\tmodifiedData.materialize();\n\t\t\t\t}\n\t\t\t\tpva.set(shardKey, modifiedData);\n\t\t\t}\n\t\t}\n\t}\n\n\t/**\n\t * Bulk Write operation on a shard.\n\t *\n\t * @param existingReadData encoded existing shard data (to decode and partially override)\n\t * @param requests for chunks within the shard to be written\n\t */\n\tprivate ReadData writeChunksRecursive(\n\t\t\tfinal ReadData existingReadData, // may be null\n\t\t\tfinal ChunkRequests<T> requests\n\t) {\n\t\tassert !requests.requests.isEmpty();\n\t\tassert requests.level > 0;\n\n\t\tfinal boolean writeFully = existingReadData == null;\n\n\t\tfinal int level = requests.level();\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\tfinal long[] gridPos = requests.gridPosition();\n\t\tfinal RawShard shard = getRawShard(existingReadData, codec, gridPos, level);\n\n\t\tif ( level == 1 ) {\n\t\t\t// Base case, write the blocks\n\t\t\tfor (final ChunkRequest<T> request : requests) {\n\t\t\t\tfinal ReadData elementData = writeChunkRecursive(null, request.chunk, request.position, 0);\n\t\t\t\tfinal long[] elementPos = request.position.relative(0);\n\t\t\t\tshard.setElementData(elementData, elementPos);\n\t\t\t}\n\t\t} else { // level > 1\n\t\t\tfinal List<ChunkRequests<T>> split = requests.split();\n\t\t\tfor (final ChunkRequests<T> subRequests : split) {\n\t\t\t\tfinal boolean nestedWriteFully = writeFully || subRequests.coversShard();\n\t\t\t\tfinal long[] elementPos = subRequests.relativeGridPosition();\n\t\t\t\tfinal ReadData existingElementData = nestedWriteFully ? 
null : shard.getElementData(elementPos);\n\t\t\t\tfinal ReadData modifiedElementData = writeChunksRecursive(existingElementData, subRequests);\n\t\t\t\tshard.setElementData(modifiedElementData, elementPos);\n\t\t\t}\n\t\t}\n\n\t\treturn codec.encode(new RawShardDataBlock(gridPos, shard));\n\t}\n\n\t@Override\n\tpublic void writeRegion(\n\t\t\tfinal PositionValueAccess pva,\n\t\t\tfinal long[] min,\n\t\t\tfinal long[] size,\n\t\t\tfinal DataBlockSupplier<T> chunkSupplier,\n\t\t\tfinal boolean writeFully\n\t) throws N5IOException {\n\n\t\tfinal Region region = new Region(min, size, grid);\n\n\t\tfor (long[] key : Region.gridPositions(region.minPos().key(), region.maxPos().key())) {\n\t\t\tfinal NestedPosition pos = grid.nestedPosition(key, grid.numLevels() - 1);\n\t\t\tfinal boolean nestedWriteFully = writeFully || region.fullyContains(pos);\n\t\t\tfinal ReadData modifiedData;\n\t\t\ttry (final VolatileReadData existingData = nestedWriteFully ? null : pva.get(key)) {\n\t\t\t\tmodifiedData = writeRegionRecursive(existingData, region, chunkSupplier, pos);\n\t\t\t\t// Here, we are about to write the shard data, but with the new shard modified.\n\t\t\t\t// Need to make sure that the read operations happen now before pva.set acquires a write lock\n\t\t\t\tif (existingData != null && modifiedData != null) {\n\t\t\t\t\tmodifiedData.materialize();\n\t\t\t\t}\n\t\t\t}\n\t\t\tpva.set(key, modifiedData);\n\t\t}\n\t}\n\n\t@Override\n\tpublic void writeRegion(\n\t\t\tfinal PositionValueAccess pva,\n\t\t\tfinal long[] min,\n\t\t\tfinal long[] size,\n\t\t\tfinal DataBlockSupplier<T> chunkSupplier,\n\t\t\tfinal boolean writeFully,\n\t\t\tfinal ExecutorService exec) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\tfinal Region region = new Region(min, size, grid);\n\n\t\t// collect the submitted tasks so that we can wait for completion and propagate failures\n\t\tfinal List<java.util.concurrent.Future<?>> futures = new ArrayList<>();\n\t\tfor (long[] key : Region.gridPositions(region.minPos().key(), region.maxPos().key())) {\n\t\t\tfutures.add(exec.submit(() -> {\n\t\t\t\tfinal NestedPosition pos = grid.nestedPosition(key, grid.numLevels() - 1);\n\t\t\t\tfinal boolean nestedWriteFully = writeFully || region.fullyContains(pos);\n\t\t\t\tfinal ReadData modifiedData;\n\t\t\t\ttry (final VolatileReadData existingData = nestedWriteFully ? null : pva.get(key)) {\n\t\t\t\t\tmodifiedData = writeRegionRecursive(existingData, region, chunkSupplier, pos);\n\t\t\t\t\t// Here, we are about to write the shard data, but with the new block modified.\n\t\t\t\t\t// Need to make sure that the read operations happen now before pva.set acquires a write lock\n\t\t\t\t\tif (existingData != null && modifiedData != null) {\n\t\t\t\t\t\tmodifiedData.materialize();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpva.set(key, modifiedData);\n\t\t\t}));\n\t\t}\n\t\tfor (final java.util.concurrent.Future<?> f : futures) {\n\t\t\tf.get();\n\t\t}\n\t}\n\n\tprivate ReadData writeRegionRecursive(\n\t\t\tfinal ReadData existingReadData, // may be null\n\t\t\tfinal Region region,\n\t\t\tfinal DataBlockSupplier<T> chunkSupplier,\n\t\t\tfinal NestedPosition position\n\t) {\n\t\tfinal boolean writeFully = existingReadData == null;\n\t\tfinal int level = position.level();\n\t\tif (level == 0) {\n\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<T> codec = (BlockCodec<T>) codecs[0];\n\t\t\tfinal long[] gridPosition = position.absolute(0);\n\n\t\t\t// If the DataBlock is not fully contained in the region, we will\n\t\t\t// get existingReadData != null. 
In that case, we try to decode the\n\t\t\t// existing DataBlock and pass it to the BlockSupplier for modification.\n\t\t\t// (This might fail with N5NoSuchKeyException if existingReadData\n\t\t\t// lazily points to non-existent data.)\n\t\t\tDataBlock<T> existingChunk = null;\n\t\t\tif (existingReadData != null) {\n\t\t\t\ttry {\n\t\t\t\t\texistingChunk = codec.decode(existingReadData, gridPosition);\n\t\t\t\t} catch (N5NoSuchKeyException ignored) {\n\t\t\t\t}\n\t\t\t}\n\t\t\tfinal DataBlock<T> chunk = chunkSupplier.get(gridPosition, existingChunk);\n\n\t\t\t// null chunks may be provided when they contain only the fill value\n\t\t\t// and only non-empty chunks should be written, for example\n\t\t\tif (chunk == null)\n\t\t\t\treturn null;\n\n\t\t\treturn codec.encode(chunk);\n\t\t} else {\n\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\t\tfinal long[] gridPos = position.absolute(level);\n\t\t\tfinal RawShard shard = getRawShard(existingReadData, codec, gridPos, level);\n\t\t\tfor (NestedPosition pos : region.containedNestedPositions(position)) {\n\t\t\t\tfinal boolean nestedWriteFully = writeFully || region.fullyContains(pos);\n\t\t\t\tfinal long[] elementPos = pos.relative();\n\t\t\t\tfinal ReadData existingElementData = nestedWriteFully ? 
null : shard.getElementData(elementPos);\n\t\t\t\tfinal ReadData modifiedElementData = writeRegionRecursive(existingElementData, region, chunkSupplier, pos);\n\t\t\t\tshard.setElementData(modifiedElementData, elementPos);\n\t\t\t}\n\n\t\t\t// do not write empty shards\n\t\t\tif (shard.isEmpty())\n\t\t\t\treturn null;\n\n\t\t\treturn codec.encode(new RawShardDataBlock(gridPos, shard));\n\t\t}\n\t}\n\n\t//\n\t// -- deleteChunk ---------------------------------------------------------\n\n\t@Override\n\tpublic boolean deleteChunk(final PositionValueAccess pva, final long[] gridPosition) throws N5IOException {\n\t\tif (grid.numLevels() == 1) {\n\t\t\t// for non-sharded dataset, don't bother getting the value, just remove the key.\n\t\t\treturn pva.remove(gridPosition);\n\t\t} else {\n\t\t\tfinal NestedPosition position = grid.nestedPosition(gridPosition);\n\t\t\tfinal long[] key = position.key();\n\t\t\tfinal ReadData modifiedData;\n\t\t\ttry (final VolatileReadData existingData = pva.get(key)) {\n\t\t\t\tmodifiedData = deleteChunkRecursive(existingData, position, grid.numLevels() - 1);\n\t\t\t\tif (modifiedData == existingData) {\n\t\t\t\t\t// nothing changed, the blocks we wanted to delete didn't exist anyway\n\t\t\t\t\treturn false;\n\t\t\t\t} else if (modifiedData != null) {\n\t\t\t\t\t// Here, we are about to write the shard data, but without the chunk to be deleted.\n\t\t\t\t\t// Need to make sure that the read operations happen now before pva.set acquires a write lock\n\t\t\t\t\tmodifiedData.materialize();\n\t\t\t\t}\n\t\t\t} catch (final N5NoSuchKeyException e) {\n\t\t\t\t// the key didn't exist (as we found out when lazy-reading the index)\n\t\t\t\t// so nothing changed, the chunks we wanted to delete didn't exist anyway\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (modifiedData == null) {\n\t\t\t\treturn pva.remove(key);\n\t\t\t} else {\n\t\t\t\tpva.set(key, modifiedData);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t}\n\t}\n\n\tprivate ReadData 
deleteChunkRecursive(\n\t\t\tfinal ReadData existingReadData,\n\t\t\tfinal NestedPosition position,\n\t\t\tfinal int level) throws N5NoSuchKeyException {\n\t\tif (level == 0 || existingReadData == null) {\n\t\t\treturn null;\n\t\t} else {\n\t\t\t@SuppressWarnings(\"unchecked\")\n\t\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\t\tfinal long[] gridPos = position.absolute(level);\n\t\t\tfinal RawShard shard = codec.decode(existingReadData, gridPos).getData();\n\t\t\tfinal long[] elementPos = position.relative(level - 1);\n\t\t\tfinal ReadData existingElementData = shard.getElementData(elementPos);\n\t\t\tif (existingElementData == null) {\n\t\t\t\t// The chunk (or the whole nested shard containing it) does not exist.\n\t\t\t\t// This shard remains unchanged.\n\t\t\t\treturn existingReadData;\n\t\t\t} else {\n\t\t\t\tfinal ReadData modifiedElementData = deleteChunkRecursive(existingElementData, position, level - 1);\n\t\t\t\tif (modifiedElementData == existingElementData) {\n\t\t\t\t\t// The nested shard was not modified.\n\t\t\t\t\t// This shard remains unchanged.\n\t\t\t\t\treturn existingReadData;\n\t\t\t\t}\n\t\t\t\tshard.setElementData(modifiedElementData, elementPos);\n\t\t\t\tif (modifiedElementData == null) {\n\t\t\t\t\t// The chunk or nested shard was removed.\n\t\t\t\t\t// Check whether this shard becomes empty.\n\t\t\t\t\tif (shard.isEmpty()) {\n\t\t\t\t\t\t// This shard is empty and should be removed.\n\t\t\t\t\t\treturn null;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn codec.encode(new RawShardDataBlock(gridPos, shard));\n\t\t\t}\n\t\t}\n\t}\n\n\t//\n\t// -- deleteChunks --------------------------------------------------------\n\n\t@Override\n\tpublic boolean deleteChunks(final PositionValueAccess pva, final List<long[]> gridPositions) throws N5IOException {\n\n\t\t// for non-sharded datasets, just delete the chunks individually\n\t\tif (grid.numLevels() == 1) {\n\t\t\tboolean deleted = false;\n\t\t\tfor (long[] pos : 
gridPositions) {\n\t\t\t\tdeleted |= pva.remove(pos);\n\t\t\t}\n\t\t\treturn deleted;\n\t\t} else {\n\n\t\t\t// Create a list of ChunkRequests and sort it such that requests\n\t\t\t// from the same (nested) shard are grouped contiguously.\n\t\t\t// Despite the name, createReadRequests() works for delete requests as well ...\n\t\t\tfinal ChunkRequests<T> requests = createReadRequests(gridPositions);\n\t\t\trequests.removeDuplicates();\n\n\t\t\tboolean deleted = false;\n\t\t\tfinal List<ChunkRequests<T>> split = requests.split();\n\t\t\tfor (final ChunkRequests<T> subRequests : split) {\n\t\t\t\tfinal boolean writeFully = subRequests.coversShard();\n\t\t\t\tfinal long[] key = subRequests.relativeGridPosition();\n\t\t\t\tfinal ReadData modifiedData;\n\t\t\t\ttry (final VolatileReadData existingData = writeFully ? null : pva.get(key)) {\n\t\t\t\t\tmodifiedData = deleteChunksRecursive(existingData, subRequests);\n\t\t\t\t\tif (modifiedData == existingData) {\n\t\t\t\t\t\t// nothing changed, the chunks we wanted to delete didn't exist anyway\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t} else if (existingData != null && modifiedData != null) {\n\t\t\t\t\t\t// Here, we are about to write the shard data, but without the chunk to be deleted.\n\t\t\t\t\t\t// Need to make sure that the read operations happen now before pva.set acquires a write lock\n\t\t\t\t\t\tmodifiedData.materialize();\n\t\t\t\t\t}\n\t\t\t\t} catch (final N5NoSuchKeyException e) {\n\t\t\t\t\t// the key didn't exist (as we found out when lazy-reading the index)\n\t\t\t\t\t// so nothing changed, the chunks we wanted to delete didn't exist anyway\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (modifiedData == null) {\n\t\t\t\t\tdeleted |= pva.remove(key);\n\t\t\t\t} else {\n\t\t\t\t\tpva.set(key, modifiedData);\n\t\t\t\t\tdeleted = true;\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn deleted;\n\t\t}\n\t}\n\n\t/**\n\t * Bulk Delete operation on a shard.\n\t *\n\t * @param existingReadData encoded existing shard data (to decode and 
partially override)\n\t * @param requests for chunks within the shard to be deleted\n\t */\n\tprivate ReadData deleteChunksRecursive(\n\t\t\tfinal ReadData existingReadData, // may be null\n\t\t\tfinal ChunkRequests<T> requests\n\t) {\n\t\tassert !requests.requests.isEmpty();\n\t\tassert requests.level > 0;\n\n\t\tif (existingReadData == null) {\n\t\t\treturn null;\n\t\t}\n\n\t\tfinal int level = requests.level();\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tfinal BlockCodec<RawShard> codec = (BlockCodec<RawShard>) codecs[level];\n\t\tfinal long[] gridPos = requests.gridPosition();\n\t\tfinal RawShard shard = codec.decode(existingReadData, gridPos).getData();\n\n\t\tboolean modified = false;\n\t\tboolean shardElementSetToNull = false;\n\t\tif ( level == 1 ) {\n\t\t\t// Base case, delete the chunks\n\t\t\tfor (final ChunkRequest<T> request : requests) {\n\t\t\t\tfinal long[] elementPos = request.position.relative(0);\n\t\t\t\tif (shard.getElementData(elementPos) != null) {\n\t\t\t\t\tshard.setElementData(null, elementPos);\n\t\t\t\t\tmodified = true;\n\t\t\t\t\tshardElementSetToNull = true;\n\t\t\t\t}\n\t\t\t}\n\t\t} else { // level > 1\n\t\t\tfinal List<ChunkRequests<T>> split = requests.split();\n\t\t\tfor (final ChunkRequests<T> subRequests : split) {\n\t\t\t\tfinal boolean writeFully = subRequests.coversShard();\n\t\t\t\tfinal long[] elementPos = subRequests.relativeGridPosition();\n\t\t\t\tfinal ReadData existingElementData = writeFully ? 
null : shard.getElementData(elementPos);\n\t\t\t\tfinal ReadData modifiedElementData = deleteChunksRecursive(existingElementData, subRequests);\n\t\t\t\tif (modifiedElementData != existingElementData) {\n\t\t\t\t\tshard.setElementData(modifiedElementData, elementPos);\n\t\t\t\t\tmodified = true;\n\t\t\t\t\tshardElementSetToNull |= (modifiedElementData == null);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (!modified) {\n\t\t\t// No nested shard or chunk was modified.\n\t\t\t// This shard remains unchanged.\n\t\t\treturn existingReadData;\n\t\t}\n\n\t\tif (shardElementSetToNull) {\n\t\t\t// At least one chunk or nested shard was removed.\n\t\t\t// Check whether this shard becomes empty.\n\t\t\tif (shard.index().allElementsNull()) {\n\t\t\t\t// This shard is empty and should be removed.\n\t\t\t\treturn null;\n\t\t\t}\n\t\t}\n\n\t\treturn codec.encode(new RawShardDataBlock(gridPos, shard));\n\t}\n\n\n\n\t//\n\t// -- readShard -----------------------------------------------------------\n\n\t// NB: How to handle the dataset borders?\n\t//\n\t// N5 format uses truncated DataBlocks at the dataset border.\n\t//\n\t// For the Zarr format, when a truncated DataBlock is written, it is\n\t// padded with zero values to the default DataBlock size. This will\n\t// happen also when writing DataBlocks into a Shard.\n\t//\n\t// However, Zarr will not fill up a Shard with empty DataBlocks to pad\n\t// it to the default Shard size. Instead, these blocks will be missing\n\t// in the Shard index.\n\t//\n\t// When we write a full Shard as a \"big DataBlock\", what do we expect?\n\t//\n\t// For N5 format probably we expect the big DataBlock to be truncated at\n\t// the dataset border. (N5 format doesn't have shards yet, so we are\n\t// relatively free what to do here. 
But this would be consistent.)\n\t//\n\t// For Zarr format, we either expect\n\t//  * a big DataBlock that is the default Shard size, or\n\t//  * a big DataBlock that is truncated after the last DataBlock that is\n\t//    (partially) in the dataset borders (but truncated at a multiple of\n\t//    default DataBlock size, so \"slightly padded\").\n\t//\n\t// I'm not sure which, so we handle both cases for now. In any case, we\n\t// do not want to write DataBlocks that are completely outside the\n\t// dataset (even if the \"big DataBlock\" covers this area).\n\t//\n\t// This works for writing. For reading, we'll have to decide what to return,\n\t// though... Potentially, there is no valid block at the border, so we\n\t// cannot determine where to put the border just from the data. We need to\n\t// rely on external input or heuristics.\n\t//\n\t// For now, we decided to always truncate at the dataset border when we\n\t// read full shards. This will hopefully work where we need it, but it\n\t// introduces inconsistencies. 
There is a readBlockInternal() implementation\n\t// which takes the expected shardSizeInPixels as an argument, so that we can\n\t// easily revisit and change this heuristic.\n\n\t@Override\n\tpublic DataBlock<T> readBlock(\n\t\t\tfinal PositionValueAccess pva,\n\t\t\tfinal long[] shardGridPosition,\n\t\t\tfinal int level\n\t) throws N5IOException {\n\n\t\tif (level == 0) {\n\t\t\treturn readChunk(pva, shardGridPosition);\n\t\t}\n\n\t\tfinal long[] shardPixelPos = grid.pixelPosition(shardGridPosition, level);\n\t\tfinal int[] defaultShardSize = grid.getBlockSize(level);\n\t\tfinal long[] datasetSize = grid.getDatasetSize();\n\n\t\tfinal int n = grid.numDimensions();\n\t\tfinal int[] shardSizeInPixels = new int[n];\n\t\tfor (int d = 0; d < n; ++d) {\n\t\t\tshardSizeInPixels[d] = Math.min(defaultShardSize[d], (int) (datasetSize[d] - shardPixelPos[d]));\n\t\t}\n\t\treturn readBlockInternal(pva, shardGridPosition, shardSizeInPixels, level);\n\t}\n\n\tprivate DataBlock<T> readBlockInternal(\n\t\t\tfinal PositionValueAccess pva,\n\t\t\tfinal long[] shardGridPosition,\n\t\t\tfinal int[] shardSizeInPixels, // expected size of this shard in pixels\n\t\t\tfinal int level\n\t) throws N5IOException {\n\n\t\tfinal int n = grid.numDimensions();\n\t\tfinal int[] defaultShardSize = grid.getBlockSize(level);\n\t\tfor (int d = 0; d < n; d++) {\n\t\t\tif (shardSizeInPixels[d] > defaultShardSize[d]) {\n\t\t\t\tthrow new IllegalArgumentException(\"Requested shard size is larger than the default shard size\");\n\t\t\t}\n\t\t}\n\n\t\t// level-0 block-grid position of the min chunk in the shard\n\t\tfinal long[] gridMin = grid.absolutePosition(shardGridPosition, level, 0);\n\n\t\t// level-0 block-grid position of the max chunk in the shard that we need to read.\n\t\t// (the shard might go beyond the dataset border, and we don't need to read anything there)\n\t\tfinal long[] gridMax = new long[n];\n\t\tfinal long[] datasetSizeInChunks = grid.getDatasetSizeInChunks();\n\t\tfinal int[] 
chunkSize = grid.getBlockSize(0);\n\t\tfor (int d = 0; d < n; ++d) {\n\t\t\tfinal int shardSizeInChunks = (shardSizeInPixels[d] + chunkSize[d] - 1) / chunkSize[d];\n\t\t\tfinal int gridSize = Math.min(shardSizeInChunks, (int) (datasetSizeInChunks[d] - gridMin[d]));\n\t\t\tgridMax[d] = gridMin[d] + gridSize - 1;\n\t\t}\n\n\t\t// read all chunks in (gridMin, gridMax) and filter out missing chunks\n\t\tfinal List<long[]> chunkPositions = Region.gridPositions(gridMin, gridMax);\n\t\tfinal List<DataBlock<T>> chunks = readChunks(pva, chunkPositions)\n\t\t\t\t.stream().filter(Objects::nonNull).collect(Collectors.toList());\n\t\tif (chunks.isEmpty()) {\n\t\t\treturn null;\n\t\t}\n\n\t\t// allocate shard and copy data from chunks\n\t\tfinal DataBlock<T> shard = DataBlockFactory.of(chunks.get(0).getData()).createDataBlock(shardSizeInPixels, shardGridPosition);\n\t\tfinal long[] shardPixelPos = grid.pixelPosition(shardGridPosition, level);\n\t\tfinal long[] chunkPixelPos = new long[n];\n\t\tfinal int[] srcPos = new int[n];\n\t\tfinal int[] destPos = new int[n];\n\t\tfinal int[] size = new int[n];\n\t\tfor (final DataBlock<T> chunk : chunks) {\n\t\t\t// copy chunk data that overlaps the shard\n\t\t\tgrid.pixelPosition(chunk.getGridPosition(), 0, chunkPixelPos);\n\t\t\tfinal int[] bsize = chunk.getSize();\n\t\t\tfor (int d = 0; d < n; d++) {\n\t\t\t\tdestPos[d] = (int) (chunkPixelPos[d] - shardPixelPos[d]);\n\t\t\t\tsize[d] = Math.min(bsize[d], shardSizeInPixels[d] - destPos[d]);\n\t\t\t}\n\t\t\tSubArrayCopy.copy(chunk.getData(), bsize, srcPos, shard.getData(), shardSizeInPixels, destPos, size);\n\t\t}\n\t\treturn shard;\n\t}\n\n\t//\n\t// -- writeBlock ----------------------------------------------------------\n\n\t@Override\n\tpublic void writeBlock(\n\t\t\tfinal PositionValueAccess pva,\n\t\t\tfinal DataBlock<T> dataBlock,\n\t\t\tfinal int level\n\t) throws N5IOException {\n\n\t\tif (level == 0) {\n\t\t\twriteChunk(pva, dataBlock);\n\t\t\treturn;\n\t\t}\n\n\t\tfinal T 
shardData = dataBlock.getData();\n\t\tfinal DataBlockFactory<T> blockFactory = DataBlockFactory.of(shardData);\n\n\t\tfinal int n = grid.numDimensions();\n\t\tfinal int[] chunkSize = grid.getBlockSize(0); // size of a standard (non-truncated) chunk\n\t\tfinal long[] datasetChunkSize = grid.getDatasetSizeInChunks();\n\n\t\tfinal long[] shardPixelMin = grid.pixelPosition(dataBlock.getGridPosition(), level);\n\t\tfinal int[] shardPixelSize = dataBlock.getSize();\n\n\t\t// the max chunk + 1 in the shard, if it isn't truncated by the dataset border\n\t\tfinal long[] shardChunkTo = new long[n];\n\t\tArrays.setAll(shardChunkTo, d -> (shardPixelMin[d] + shardPixelSize[d] + chunkSize[d] - 1) / chunkSize[d]);\n\n\t\t// level 0 grid positions of all chunks we want to extract\n\t\tfinal long[] gridMin = grid.absolutePosition(dataBlock.getGridPosition(), level, 0);\n\t\tfinal long[] gridMax = new long[n];\n\t\tArrays.setAll(gridMax, d -> Math.min(shardChunkTo[d], datasetChunkSize[d]) - 1);\n\t\tfinal List<long[]> chunkPositions = Region.gridPositions(gridMin, gridMax);\n\n\t\t// Max pixel coordinates + 1, of the region we want to copy. 
This should\n\t\t// always be shardPixelMin + shardPixelSize, except at the dataset\n\t\t// border, where we truncate to the smallest multiple of chunkSize still\n\t\t// overlapping the dataset.\n\t\tfinal long[] regionBound = new long[n];\n\t\tArrays.setAll(regionBound, d -> Math.min(shardPixelMin[d] + shardPixelSize[d], datasetChunkSize[d] * chunkSize[d]));\n\n\t\tfinal List<DataBlock<T>> chunks = new ArrayList<>(chunkPositions.size());\n\t\tfinal int[] srcPos = new int[n];\n\t\tfinal int[] destPos = new int[n];\n\t\tfinal int[] destSize = new int[n];\n\t\tfor (final long[] chunkPos : chunkPositions) {\n\t\t\tfinal long[] pixelMin = grid.pixelPosition(chunkPos, 0);\n\n\t\t\tfor (int d = 0; d < n; d++) {\n\t\t\t\tsrcPos[d] = (int) (pixelMin[d] - shardPixelMin[d]);\n\t\t\t\tdestSize[d] = Math.min(chunkSize[d], (int) (regionBound[d] - pixelMin[d]));\n\t\t\t}\n\n\t\t\t// This way of extracting chunks will not work if num_array_elements != num_block_elements.\n\t\t\t// But we'll deal with that later if it becomes a problem...\n\t\t\tfinal DataBlock<T> chunk = blockFactory.createDataBlock(destSize, chunkPos);\n\t\t\tSubArrayCopy.copy(shardData, shardPixelSize, srcPos, chunk.getData(), destSize, destPos, destSize);\n\t\t\tchunks.add(chunk);\n\t\t}\n\n\t\twriteChunks(pva, chunks);\n\t}\n\n\t//\n\t// -- helpers -------------------------------------------------------------\n\n\t/**\n\t * If {@code existingReadData != null}, try to decode it into a RawShard.\n\t * Otherwise, or if this fails because we find that {@code existingReadData}\n\t * lazily points to non-existent data, return a new empty RawShard.\n\t *\n\t * @param existingReadData data to decode or null\n\t * @param codec shard codec\n\t * @param gridPos position of the shard on the shard grid of the given level\n\t * @param level level of the shard\n\t * @return the decoded shard (or a new empty shard)\n\t */\n\tprivate RawShard getRawShard(\n\t\t\tfinal ReadData existingReadData,\n\t\t\tfinal BlockCodec<RawShard> 
codec,\n\t\t\tfinal long[] gridPos,\n\t\t\tfinal int level) {\n\t\tif (existingReadData != null) {\n\t\t\ttry {\n\t\t\t\treturn codec.decode(existingReadData, gridPos).getData();\n\t\t\t} catch (N5NoSuchKeyException ignored) {\n\t\t\t}\n\t\t}\n\t\treturn new RawShard(grid.relativeBlockSize(level));\n\t}\n\n\t/**\n\t * A request to read or write a chunk (level-0 DataBlock) at a given {@link #position}.\n\t * <p>\n\t * <em>Write requests</em> are constructed with {@link #position} and {@link #chunk}.\n\t * <p>\n\t * <em>Read requests</em> are constructed with only a {@link #position}, and\n\t * initially {@link #chunk chunk=null}. When the DataBlock is read, it will\n\t * be put into {@link #chunk}.\n\t * <p>\n\t * {@code ChunkRequest}s are used for reading/writing a list of chunks\n\t * with {@link #readChunks} and {@link #writeChunks}. The {@link #index}\n\t * field is the position in the list of positions/chunks to read/write. For\n\t * processing, requests are re-ordered such that all requests from the same\n\t * (sub-)shard are grouped together. 
The {@link #index} field is used to\n\t * re-establish the order of results (only important for {@link #readChunks}).\n\t *\n\t * @param <T>\n\t * \t\ttype of the data contained in the DataBlock\n\t */\n\tprivate static final class ChunkRequest<T> {\n\n\t\tfinal NestedPosition position;\n\t\tfinal int index;\n\t\tDataBlock<T> chunk;\n\n\t\t// read request\n\t\tChunkRequest(final NestedPosition position, final int index) {\n\t\t\tthis.position = position;\n\t\t\tthis.index = index;\n\t\t\tthis.chunk = null;\n\t\t}\n\n\t\t// write request\n\t\tChunkRequest(final NestedPosition position, final DataBlock<T> chunk) {\n\t\t\tthis.position = position;\n\t\t\tthis.index = -1;\n\t\t\tthis.chunk = chunk;\n\t\t}\n\n\t\t@Override\n\t\tpublic String toString() {\n\t\t\treturn \"ChunkRequest{position=\" + position + \", index=\" + index + '}';\n\t\t}\n\t}\n\n\t/**\n\t * A list of {@code ChunkRequest}, ordered by {@code NestedPosition}.\n\t * All requests lie in the same shard at the given {@code level}.\n\t * <p>\n\t * {@code ChunkRequests} should be constructed using {@link\n\t * #createReadRequests} (for reading) or {@link #createWriteRequests} (for\n\t * writing).\n\t * <p>\n\t * When recursing into nested shard levels, {@code ChunkRequests} should\n\t * be {@link #split} to partition into sub-{@code ChunkRequests} that\n\t * each cover one shard.\n\t *\n\t * @param <T>\n\t * \t\ttype of the data contained in the DataBlocks\n\t */\n\tprivate static final class ChunkRequests<T> implements Iterable<ChunkRequest<T>> {\n\n\t\tprivate final NestedGrid grid;\n\t\tprivate final List<ChunkRequest<T>> requests;\n\t\tprivate final int level;\n\n\t\tprivate ChunkRequests(final List<ChunkRequest<T>> requests, final int level, final NestedGrid grid) {\n\t\t\tthis.requests = requests;\n\t\t\tthis.level = level;\n\t\t\tthis.grid = grid;\n\t\t}\n\n\t\t/**\n\t\t * Returns a list of duplicate requests. 
Each pair of consecutive\n\t\t * elements (A,B) of the list means that request A is a duplicate of\n\t\t * request B. A has been removed from the {@code requests} list, and B\n\t\t * remains in the {@code requests} list. After the (read) requests have\n\t\t * been processed, elements corresponding to A can be added into the\n\t\t * result list by using the {@code DataBlock} of B.\n\t\t * <p>\n\t\t * If {@code n} duplicates occur in {@code requests}, the resulting list\n\t\t * will have {@code 2*n} elements.\n\t\t */\n\t\tpublic List<ChunkRequest<T>> removeDuplicates() {\n\t\t\tList<ChunkRequest<T>> duplicates = new ArrayList<>();\n\t\t\tChunkRequest<T> previous = null;\n\t\t\tfinal ListIterator<ChunkRequest<T>> iter = requests.listIterator();\n\t\t\twhile (iter.hasNext()) {\n\t\t\t\tfinal ChunkRequest<T> current = iter.next();\n\t\t\t\tif (previous != null) {\n\t\t\t\t\tif (previous.position.equals(current.position)) {\n\t\t\t\t\t\titer.remove();\n\t\t\t\t\t\tduplicates.add(current);\n\t\t\t\t\t\tduplicates.add(previous);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tprevious = current;\n\t\t\t}\n\t\t\treturn duplicates;\n\t\t}\n\n\t\t@Override\n\t\tpublic Iterator<ChunkRequest<T>> iterator() {\n\t\t\treturn requests.iterator();\n\t\t}\n\n\t\t/**\n\t\t * All chunks contained in this {@code ChunkRequests} are in the\n\t\t * same shard at this nesting level.\n\t\t * <p>\n\t\t * Use {@link #split()} to partition into {@code ChunkRequests} with\n\t\t * nesting level {@link #level()}{@code -1}.\n\t\t *\n\t\t * @return nesting level\n\t\t */\n\t\tpublic int level() {\n\t\t\treturn level;\n\t\t}\n\n\t\t/**\n\t\t * Position on the shard grid at the level of this ChunkRequests\n\t\t * (of the one shard containing all the requested blocks).\n\t\t */\n\t\tpublic long[] gridPosition() {\n\t\t\treturn position().absolute(level);\n\t\t}\n\n\t\t/**\n\t\t * Relative grid position at the level of this ChunkRequests,\n\t\t * that is, relative offset within the containing 
(level+1) element.\n\t\t */\n\t\tpublic long[] relativeGridPosition() {\n\t\t\treturn position().relative(level);\n\t\t}\n\n\t\tprivate NestedPosition position() {\n\t\t\tif (requests.isEmpty())\n\t\t\t\tthrow new IllegalArgumentException();\n\t\t\treturn requests.get(0).position;\n\t\t}\n\n\t\t/**\n\t\t * Split into sub-requests, grouping by same position at nesting level {@link #level()}{@code -1}.\n\t\t */\n\t\tpublic List<ChunkRequests<T>> split() {\n\t\t\tfinal int subLevel = level - 1;\n\t\t\tfinal List<ChunkRequests<T>> subRequests = new ArrayList<>();\n\t\t\tfor (int i = 0; i < requests.size(); ) {\n\t\t\t\tfinal long[] ilpos = requests.get(i).position.absolute(subLevel);\n\t\t\t\tint j = i + 1;\n\t\t\t\tfor (; j < requests.size(); ++j) {\n\t\t\t\t\tfinal long[] jlpos = requests.get(j).position.absolute(subLevel);\n\t\t\t\t\tif (!Arrays.equals(ilpos, jlpos)) {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsubRequests.add(new ChunkRequests<>(requests.subList(i, j), subLevel, grid));\n\t\t\t\ti = j;\n\t\t\t}\n\t\t\treturn subRequests;\n\t\t}\n\n\t\t/**\n\t\t * Returns {@code true} if this {@code ChunkRequests} completely\n\t\t * fills its containing shard at nesting level {@link #level()}.\n\t\t * (This can be used to avoid reading a shard that will be completely\n\t\t * overwritten).\n\t\t * <p>\n\t\t * Note that this method only works correctly if the requests list\n\t\t * contains no duplicates. 
See {@link #removeDuplicates()}.\n\t\t */\n\t\tpublic boolean coversShard() {\n\t\t\tfinal long[] gridMin = grid.absolutePosition(position().absolute(level), level, 0);\n\t\t\tfinal long[] datasetSize = grid.getDatasetSizeInChunks(); // in units of DataBlocks\n\t\t\tfinal int[] defaultShardSize = grid.relativeToBaseBlockSize(level); // in units of DataBlocks\n\t\t\tint numElements = 1;\n\t\t\tfor (int d = 0; d < defaultShardSize.length; d++) {\n\t\t\t\tnumElements *= Math.min(defaultShardSize[d], (int) (datasetSize[d] - gridMin[d]));\n\t\t\t}\n\t\t\treturn requests.size() >= numElements; // NB: It should never be (requests.size() > numElements), unless there are duplicate blocks.\n\t\t}\n\n\t\t/**\n\t\t * Extract {@link ChunkRequest#chunk chunk}s from the requests,\n\t\t * in the order of {@link ChunkRequest#index indices}.\n\t\t * <p>\n\t\t * (This is used in {@link #readChunks} to collect chunks in the\n\t\t * requested order.)\n\t\t */\n\t\tpublic List<DataBlock<T>> chunks(final List<ChunkRequest<T>> duplicates) {\n\t\t\tfinal int size = requests.size() + duplicates.size() / 2;\n\t\t\tfinal DataBlock<T>[] blocks = new DataBlock[size];\n\t\t\trequests.forEach(r -> blocks[r.index] = r.chunk);\n\t\t\tfor (int i = 0; i < duplicates.size(); i += 2) {\n\t\t\t\tfinal ChunkRequest<T> a = duplicates.get(i);\n\t\t\t\tfinal ChunkRequest<T> b = duplicates.get(i + 1);\n\t\t\t\tblocks[a.index] = b.chunk;\n\t\t\t}\n\t\t\treturn Arrays.asList(blocks);\n\t\t}\n\t}\n\n\t/**\n\t * Construct {@code ChunkRequests} from a list of level-0 grid positions\n\t * for reading.\n\t * <p>\n\t * The nesting level of the returned {@code ChunkRequests} is {@code\n\t * grid.numLevels()}, that is, the level of the highest-order shard + 1. This\n\t * implies that the requests are not guaranteed to be in the same shard (at\n\t * any level). 
{@link ChunkRequests#split() Splitting} the {@code\n\t * ChunkRequests} once will return a list of {@code ChunkRequests}\n\t * that each contain chunks from one highest-order shard.\n\t */\n\tprivate ChunkRequests<T> createReadRequests(final List<long[]> gridPositions) {\n\t\tfinal List<ChunkRequest<T>> requests = new ArrayList<>(gridPositions.size());\n\t\tfor (int i = 0; i < gridPositions.size(); i++) {\n\t\t\tfinal NestedPosition pos = grid.nestedPosition(gridPositions.get(i));\n\t\t\trequests.add(new ChunkRequest<>(pos, i));\n\t\t}\n\t\trequests.sort(Comparator.comparing(r -> r.position));\n\t\treturn new ChunkRequests<>(requests, grid.numLevels(), grid);\n\t}\n\n\t/**\n\t * Construct {@code ChunkRequests} from a list of chunks (level-0\n\t * DataBlocks) for writing.\n\t * <p>\n\t * The nesting level of the returned {@code ChunkRequests} is {@code\n\t * grid.numLevels()}, that is, the level of the highest-order shard + 1. This\n\t * implies that the requests are not guaranteed to be in the same shard (at\n\t * any level). 
{@link ChunkRequests#split() Splitting} the {@code\n\t * ChunkRequests} once will return a list of {@code ChunkRequests}\n\t * that each contain chunks from one highest-order shard.\n\t */\n\tprivate ChunkRequests<T> createWriteRequests(final List<DataBlock<T>> dataBlocks) {\n\t\tfinal List<ChunkRequest<T>> requests = new ArrayList<>(dataBlocks.size());\n\t\tfor (final DataBlock<T> dataBlock : dataBlocks) {\n\t\t\tfinal NestedPosition pos = grid.nestedPosition(dataBlock.getGridPosition());\n\t\t\trequests.add(new ChunkRequest<>(pos, dataBlock));\n\t\t}\n\t\trequests.sort(Comparator.comparing(r -> r.position));\n\t\treturn new ChunkRequests<>(requests, grid.numLevels(), grid);\n\t}\n\n\t/**\n\t * Factory for the standard {@code DataBlock<T>}, where {@code T} is an\n\t * array type and the number of elements in a block corresponds to the\n\t * {@link DataBlock#getSize()}.\n\t * <p>\n\t * This is used by {@link #readBlock} and {@link #writeBlock} which\n\t * internally need to allocate new DataBlocks to split or merge a shard.\n\t */\n\tprivate interface DataBlockFactory<T> {\n\n\t\tDataBlock<T> createDataBlock(final int[] blockSize, final long[] gridPosition);\n\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tstatic <T> DataBlockFactory<T> of(T array) {\n\t\t\tif (array instanceof byte[]) {\n\t\t\t\treturn (size, pos) -> (DataBlock<T>) new ByteArrayDataBlock(size, pos, new byte[DataBlock.getNumElements(size)]);\n\t\t\t} else if (array instanceof short[]) {\n\t\t\t\treturn (size, pos) -> (DataBlock<T>) new ShortArrayDataBlock(size, pos, new short[DataBlock.getNumElements(size)]);\n\t\t\t} else if (array instanceof int[]) {\n\t\t\t\treturn (size, pos) -> (DataBlock<T>) new IntArrayDataBlock(size, pos, new int[DataBlock.getNumElements(size)]);\n\t\t\t} else if (array instanceof long[]) {\n\t\t\t\treturn (size, pos) -> (DataBlock<T>) new LongArrayDataBlock(size, pos, new long[DataBlock.getNumElements(size)]);\n\t\t\t} else if (array instanceof float[]) {\n\t\t\t\treturn 
(size, pos) -> (DataBlock<T>) new FloatArrayDataBlock(size, pos, new float[DataBlock.getNumElements(size)]);\n\t\t\t} else if (array instanceof double[]) {\n\t\t\t\treturn (size, pos) -> (DataBlock<T>) new DoubleArrayDataBlock(size, pos, new double[DataBlock.getNumElements(size)]);\n\t\t\t} else if (array instanceof String[]) {\n\t\t\t\treturn (size, pos) -> (DataBlock<T>) new StringDataBlock(size, pos, new String[DataBlock.getNumElements(size)]);\n\t\t\t} else {\n\t\t\t\tthrow new IllegalArgumentException(\"unsupported array type: \" + array.getClass().getSimpleName());\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/DefaultShardCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.Arrays;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodec;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.CodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.CodecParser;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DatasetCodecInfo;\nimport org.janelia.saalfeldlab.n5.serialization.N5Annotations;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\n\n/**\n * Default (and probably only) implementation of {@link ShardCodecInfo}.\n */\n@NameConfig.Name(value = \"sharding_indexed\")\npublic class DefaultShardCodecInfo implements ShardCodecInfo {\n\n\t@Override\n\tpublic String getType() {\n\t\treturn \"sharding_indexed\";\n\t}\n\n\t@N5Annotations.ReverseArray\n\t@NameConfig.Parameter(value = \"chunk_shape\")\n\tprivate final int[] innerBlockSize;\n\n\t@NameConfig.Parameter(value = \"index_location\", optional = true)\n\tprivate final IndexLocation indexLocation;\n\n\t@NameConfig.Parameter\n\tprivate CodecInfo[] codecs;\n\n\t@NameConfig.Parameter(value = \"index_codecs\")\n\tprivate CodecInfo[] indexCodecs;\n\n\tprivate transient DatasetCodecInfo[] innerDatasetCodecInfos;\n\n\tprivate transient BlockCodecInfo innerBlockCodecInfo;\n\n\tprivate transient DataCodecInfo[] innerDataCodecInfos;\n\n\tprivate transient BlockCodecInfo indexBlockCodecInfo;\n\n\tprivate transient DataCodecInfo[] indexDataCodecInfos;\n\n\tDefaultShardCodecInfo() {\n\t\t// for serialization\n\t\tthis(null, null, null, null, null, IndexLocation.END);\n\t}\n\n\tpublic DefaultShardCodecInfo(\n\t\t\tfinal int[] innerBlockSize,\n\t\t\tfinal DatasetCodecInfo[] innerDatasetCodecInfos,\n\t\t\tfinal BlockCodecInfo innerBlockCodecInfo,\n\t\t\tfinal DataCodecInfo[] 
innerDataCodecInfos,\n\t\t\tfinal BlockCodecInfo indexBlockCodecInfo,\n\t\t\tfinal DataCodecInfo[] indexDataCodecInfos,\n\t\t\tfinal IndexLocation indexLocation) {\n\n\t\tthis.innerBlockSize = innerBlockSize;\n\t\tthis.innerDatasetCodecInfos = innerDatasetCodecInfos;\n\t\tthis.innerBlockCodecInfo = innerBlockCodecInfo;\n\t\tthis.innerDataCodecInfos = innerDataCodecInfos;\n\t\tthis.indexBlockCodecInfo = indexBlockCodecInfo;\n\t\tthis.indexDataCodecInfos = indexDataCodecInfos;\n\t\tthis.indexLocation = indexLocation;\n\n\t\tcodecs = concatenateCodecs(innerBlockCodecInfo, innerDataCodecInfos);\n\t\tindexCodecs = concatenateCodecs(indexBlockCodecInfo, indexDataCodecInfos);\n\t}\n\t\n\tpublic DefaultShardCodecInfo(\n\t\t\tfinal int[] innerBlockSize,\n\t\t\tfinal BlockCodecInfo innerBlockCodecInfo,\n\t\t\tfinal DataCodecInfo[] innerDataCodecInfos,\n\t\t\tfinal BlockCodecInfo indexBlockCodecInfo,\n\t\t\tfinal DataCodecInfo[] indexDataCodecInfos,\n\t\t\tfinal IndexLocation indexLocation) {\n\n\t\tthis(innerBlockSize,\n\t\t\t\tnull,\n\t\t\t\tinnerBlockCodecInfo,\n\t\t\t\tinnerDataCodecInfos,\n\t\t\t\tindexBlockCodecInfo,\n\t\t\t\tindexDataCodecInfos,\n\t\t\t\tindexLocation);\n\t}\n\n\tprivate void build() {\n\n\t\tif (innerBlockCodecInfo != null)\n\t\t\treturn;\n\n\t\t// sets\n\t\t// innerBlockCodecInfo, innerDataCodecInfos\n\t\t// indexBlockCodecInfo, indexDataCodecInfos\n\t\t// from\n\t\t// codecs and indexCodecs\n\n\t\tfinal CodecParser parser = new CodecParser(codecs);\n\t\tinnerDatasetCodecInfos = parser.datasetCodecInfos;\n\t\tinnerBlockCodecInfo = parser.blockCodecInfo;\n\t\tinnerDataCodecInfos = parser.dataCodecInfos;\n\n\t\tif (indexCodecs[0] instanceof BlockCodecInfo)\n\t\t\tindexBlockCodecInfo = (BlockCodecInfo)indexCodecs[0];\n\t\telse\n\t\t\tthrow new N5Exception(\"Codec at index \" + 0 + \" must be a BlockCodec.\");\n\n\t\tindexDataCodecInfos = new DataCodecInfo[indexCodecs.length - 1];\n\t\tfor (int i = 1; i < indexCodecs.length; 
i++)\n\t\t\tindexDataCodecInfos[i - 1] = (DataCodecInfo)indexCodecs[i];\n\t}\n\n\t@Override\n\tpublic int[] getInnerBlockSize() {\n\t\treturn innerBlockSize;\n\t}\n\n\t@Override\n\tpublic DatasetCodecInfo[] getInnerDatasetCodecInfos() {\n\t\treturn innerDatasetCodecInfos;\n\t}\n\n\t@Override\n\tpublic BlockCodecInfo getInnerBlockCodecInfo() {\n\t\treturn innerBlockCodecInfo;\n\t}\n\n\t@Override\n\tpublic DataCodecInfo[] getInnerDataCodecInfos() {\n\t\treturn innerDataCodecInfos;\n\t}\n\n\t@Override\n\tpublic BlockCodecInfo getIndexBlockCodecInfo() {\n\t\treturn indexBlockCodecInfo;\n\t}\n\n\t@Override\n\tpublic DataCodecInfo[] getIndexDataCodecInfos() {\n\t\treturn indexDataCodecInfos;\n\t}\n\n\t@Override\n\tpublic IndexLocation getIndexLocation() {\n\t\treturn indexLocation;\n\t}\n\n\tpublic CodecInfo[] getCodecs() {\n\t\treturn codecs;\n\t}\n\n\tpublic CodecInfo[] getIndexCodecs() {\n\t\treturn indexCodecs;\n\t}\n\n\t@Override\n\tpublic RawShardCodec create(final int[] blockSize, final DataCodecInfo... 
codecs) {\n\n\t\tbuild();\n\n\t\t// Number of elements (DataBlocks, nested shards) in each dimension per shard.\n\t\tfinal int[] size = new int[blockSize.length];\n\t\t// blockSize argument is number of pixels in the shard\n\t\t// innerBlockSize is number of pixels in each shard element (nested shard or DataBlock)\n\t\tArrays.setAll(size, d -> blockSize[d] / innerBlockSize[d]);\n\n\t\tfinal BlockCodec<long[]> indexCodec = indexBlockCodecInfo.create(\n\t\t\t\tDataType.UINT64,\n\t\t\t\tShardIndex.blockSizeFromIndexSize(size),\n\t\t\t\tindexDataCodecInfos);\n\n\t\treturn new RawShardCodec(size, indexLocation, indexCodec);\n\t}\n\n\tprivate static CodecInfo[] concatenateCodecs(BlockCodecInfo blkInfo, DataCodecInfo[] dataInfos) {\n\n\t\tif (dataInfos == null) {\n\t\t\treturn new CodecInfo[]{blkInfo};\n\t\t}\n\n\t\tfinal CodecInfo[] allCodecs = new CodecInfo[dataInfos.length + 1];\n\t\tallCodecs[0] = blkInfo;\n\t\tSystem.arraycopy(dataInfos, 0, allCodecs, 1, dataInfos.length);\n\n\t\treturn allCodecs;\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/Nesting.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\n\nimport java.util.Arrays;\nimport java.util.Objects;\n\n/**\n * Container for classes that coordinate hierarchical grid structures for\n * sharded N5 datasets.\n * <p>\n * This class provides classes for representing and navigating nested/sharded\n * dataset layouts for the N5 API. In a sharded dataset, level-0 data blocks\n * (called <em>chunks</em>) are grouped into higher-level containers called\n * <em>shards</em>, which can themselves be nested within parent shards,\n * creating a multi-level hierarchy.\n * <p>\n * <b>Nesting levels:</b>\n * <ul>\n * <li><b>Level 0:</b> The finest granularity - individual chunks (level-0 data\n * blocks) containing actual pixel data\n * <li><b>Level 1:</b> First-level shards, which contain multiple level-0 data\n * blocks\n * <li><b>Level 2+:</b> Higher-level shards (if they exist), which contain\n * multiple lower-level shards\n * <li><b>Nesting depth:</b> The total number of hierarchical levels in the\n * structure\n * </ul>\n * <p>\n * <b>Contained Classes:</b>\n * <ul>\n * <li>{@link NestedGrid} - Defines the hierarchical grid structure with block\n * sizes at each nesting level. 
This is the grid \"schema\" that describes how the\n * hierarchy is organized.\n * <li>{@link NestedPosition} - Represents a specific position within a\n * {@code NestedGrid} at a particular nesting level, providing coordinate\n * transformations between levels.\n * </ul>\n *\n */\npublic class Nesting {\n\n\t/**\n\t * Represents the position of a block at a particular level of a nested hierarchy,\n\t * where the hierarchy is defined by a given {@link NestedGrid}.\n\t */\n\tpublic static class NestedPosition implements Comparable<NestedPosition> {\n\n\t\tprivate final NestedGrid grid;\n\t\tprivate final long[] position;\n\t\tprivate final int level;\n\n\t\tprotected NestedPosition(final NestedGrid grid, final long[] position, final int level) {\n\t\t\tthis.grid = grid;\n\t\t\tthis.position = position;\n\t\t\tthis.level = level;\n\t\t}\n\n\t\t/**\n\t\t * Get the nesting level of this position.\n\t\t * <p>\n\t\t * Positions with {@code level=0} refer to chunks, positions with\n\t\t * {@code level=1} refer to first-level shards (containing chunks),\n\t\t * and so on.\n\t\t *\n\t\t * @return nesting level\n\t\t */\n\t\tpublic int level() {\n\t\t\treturn level;\n\t\t}\n\n\t\tpublic int numDimensions() {\n\t\t\treturn grid.numDimensions();\n\t\t}\n\n\t\t/**\n\t\t * Get the relative grid position at {@code level}, that is, relative\n\t\t * offset within the containing {@code (level+1)} element.\n\t\t *\n\t\t * @param level\n\t\t * \t\trequested nesting level\n\t\t *\n\t\t * @return relative grid position\n\t\t */\n\t\tpublic long[] relative(final int level) {\n\t\t\treturn grid.relativePosition(position, this.level, level);\n\t\t}\n\n\t\t/**\n\t\t * Get the relative grid position at this position's {@link #level()},\n\t\t * that is, relative offset within the containing {@code (level+1)}\n\t\t * element.\n\t\t *\n\t\t * @return relative grid position\n\t\t */\n\t\tpublic long[] relative() {\n\t\t\treturn relative(level());\n\t\t}\n\n\t\t/**\n\t\t * Get the absolute grid 
position at {@code level}.\n\t\t *\n\t\t * @param level\n\t\t * \t\trequested nesting level\n\t\t *\n\t\t * @return absolute grid position\n\t\t */\n\t\tpublic long[] absolute(final int level) {\n\t\t\treturn grid.absolutePosition(position, this.level, level);\n\t\t}\n\n\t\tpublic long[] key() {\n\t\t\treturn relative(grid.numLevels() - 1);\n\t\t}\n\n\t\tpublic long[] pixelPosition() {\n\t\t\treturn grid.pixelPosition(position, level);\n\t\t}\n\n\t\tpublic long[] maxPixelPosition() {\n\t\t\treturn grid.maxPixelPosition(position, level);\n\t\t}\n\n\t\t@Override\n\t\tpublic String toString() {\n\t\t\tStringBuilder sb = new StringBuilder();\n\t\t\tsb.append('{');\n\t\t\tfor (int l = level; l < grid.numLevels(); ++l) {\n\t\t\t\tif (l > level) {\n\t\t\t\t\tsb.append(\" / \");\n\t\t\t\t}\n\t\t\t\tsb.append(Arrays.toString(relative(l)));\n\t\t\t}\n\t\t\tsb.append(\" (level \").append(level).append(\")}\");\n\t\t\treturn sb.toString();\n\t\t}\n\n\t\t// TODO: Consider making Comparable, equals, and hashCode assume that\n\t\t//       everything is on the same NestedGrid. This is how we use it in\n\t\t//       practice, and it would simplify things a little bit.\n\n\t\t@Override\n\t\tpublic int compareTo(NestedPosition o) {\n\n\t\t\tif (o.grid != grid)\n\t\t\t\tthrow new IllegalArgumentException(\"NestedPositions of different NestedGrids are not comparable\");\n\n\t\t\tfinal int levelInequality = Integer.compare(level, o.level);\n\t\t\tif (levelInequality != 0)\n\t\t\t\treturn levelInequality;\n\n\t\t\tfinal int[] sk = grid.relativeToBase[level];\n\t\t\tfor (int l = grid.numLevels() - 1; l >= 0; --l) {\n\t\t\t\tfinal int[] si = grid.relativeToBase[l];\n\t\t\t\tfinal boolean maxLevel = l < grid.numLevels - 1;\n\t\t\t\tfinal int[] rj = maxLevel ? 
grid.relativeToAdjacent[l + 1] : null;\n\t\t\t\tfor (int d = grid.numDimensions - 1; d >= 0; --d) {\n\t\t\t\t\tlong relative = position[d] * sk[d] / si[d];\n\t\t\t\t\tlong orelative = o.position[d] * sk[d] / si[d];\n\t\t\t\t\tif (maxLevel) {\n\t\t\t\t\t\trelative %= rj[d];\n\t\t\t\t\t\torelative %= rj[d];\n\t\t\t\t\t}\n\t\t\t\t\tfinal int posInequality = Long.compare(relative, orelative);\n\t\t\t\t\tif (posInequality != 0)\n\t\t\t\t\t\treturn posInequality;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn 0;\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean equals(final Object o) {\n\n\t\t\tif (o == this)\n\t\t\t\treturn true;\n\n\t\t\tif (!(o instanceof NestedPosition))\n\t\t\t\treturn false;\n\t\t\tfinal NestedPosition that = (NestedPosition) o;\n\n\t\t\treturn level == that.level && Objects.equals(grid, that.grid) && Objects.deepEquals(position, that.position);\n\t\t}\n\n\t\t@Override\n\t\tpublic int hashCode() {\n\n\t\t\treturn Objects.hash(grid, Arrays.hashCode(position), level);\n\t\t}\n\n\t\t// TODO: should we have prefix()? suffix()? head()? 
tail()?\n\t}\n\n\t/**\n\t * A nested grid of blocks used to coordinate the relationships of shards\n\t * and the blocks (chunks / sub-shards) they contain.\n\t * <p>\n\t * The nesting depth ({@link #numLevels()}) of the {@code NestedGrid} is 1\n\t * for non-sharded datasets, 2 for simple sharded datasets (where shards\n\t * contain chunks), and &ge;3 for nested sharded datasets.\n\t * <p>\n\t * Positions with {@code level=0} refer to the DataBlock grid, positions\n\t * with {@code level=1} refer to first-level Shard grid, and so on.\n\t */\n\tpublic static class NestedGrid {\n\n\t\tprivate final int numLevels;\n\t\tprivate final int numDimensions;\n\n\t\t/**\n\t\t * relativeToBase[i][d] is block size (in dimension d) at level i relative to level 0\n\t\t */\n\t\tprivate final int[][] relativeToBase;\n\n\t\t/**\n\t\t * relativeToAdjacent[i][d] is block size (in dimension d) at level i relative to level i-1\n\t\t */\n\t\tprivate final int[][] relativeToAdjacent;\n\n\t\t/**\n\t\t * blockSizes[l][d] is the block size in pixels at level l in dimension d\n\t\t */\n\t\tprivate final int[][] blockSizes;\n\n\t\t/**\n\t\t * dimensions of the dataset in pixels.\n\t\t */\n\t\tprivate final long[] datasetSize;\n\n\t\t/**\n\t\t * dimensions of the dataset in level-0 blocks.\n\t\t */\n\t\tprivate final long[] datasetSizeInChunks;\n\n\t\t/**\n\t\t * {@code blockSizes[l][d]} is the block size at level {@code l} in dimension {@code d}.\n\t\t * Level 0 contains the smallest blocks. 
blockSizes[l+1][d] must be a multiple of blockSizes[l][d].\n\t\t *\n\t\t * @param blockSizes\n\t\t * \t\tblock sizes for all levels and dimensions.\n\t\t * @param datasetSize\n\t\t * \t\tsize of the dataset.\n\t\t */\n\t\tpublic NestedGrid(int[][] blockSizes, long[] datasetSize) {\n\n\t\t\tif (blockSizes == null)\n\t\t\t\tthrow new IllegalArgumentException(\"blockSizes is null\");\n\n\t\t\tif (blockSizes[0] == null)\n\t\t\t\tthrow new IllegalArgumentException(\"blockSizes[0] is null\");\n\n\t\t\tthis.blockSizes = blockSizes;\n\t\t\tthis.datasetSize = datasetSize;\n\n\t\t\tnumLevels = blockSizes.length;\n\t\t\tnumDimensions = blockSizes[0].length;\n\t\t\trelativeToBase = new int[numLevels][numDimensions];\n\t\t\trelativeToAdjacent = new int[numLevels][numDimensions];\n\t\t\tfor (int l = 0; l < numLevels; ++l) {\n\t\t\t\tfinal int k = Math.max(0, l - 1);\n\n\t\t\t\tif (blockSizes[l] == null)\n\t\t\t\t\tthrow new IllegalArgumentException(\"blockSizes[\" + l + \"] is null\");\n\n\t\t\t\tif (blockSizes[l].length != numDimensions)\n\t\t\t\t\tthrow new IllegalArgumentException(\n\t\t\t\t\t\t\tString.format(\"Block size at level %d has a different length (expected %d, was %d)\", l, numDimensions, blockSizes[l].length));\n\n\t\t\t\tfor (int d = 0; d < numDimensions; ++d) {\n\n\t\t\t\t\tif (blockSizes[l][d] <= 0) {\n\t\t\t\t\t\tthrow new IllegalArgumentException(\n\t\t\t\t\t\t\t\tString.format(\"Block size at level %d (%d) is not positive for dimension %d.\",\n\t\t\t\t\t\t\t\t\t\tl, blockSizes[l][d], d));\n\t\t\t\t\t}\n\n\t\t\t\t\tif (blockSizes[l][d] < blockSizes[k][d]) {\n\t\t\t\t\t\tthrow new IllegalArgumentException(\n\t\t\t\t\t\t\t\tString.format(\"Block size at level %d (%d) is smaller than at previous level (%d)\"\n\t\t\t\t\t\t\t\t\t\t\t\t+ \" for dimension %d.\",\n\t\t\t\t\t\t\t\t\t\tl, blockSizes[l][d], blockSizes[k][d], d));\n\t\t\t\t\t}\n\n\t\t\t\t\tif (blockSizes[l][d] % blockSizes[k][d] != 0) {\n\t\t\t\t\t\tthrow new IllegalArgumentException(\n\t\t\t\t\t\t\t\tString.format(\"Block size at level %d (%d) is not a multiple of previous level (%d)\"\n\t\t\t\t\t\t\t\t\t\t\t\t+ \" for dimension %d.\",\n\t\t\t\t\t\t\t\t\t\tl, blockSizes[l][d], blockSizes[k][d], d));\n\t\t\t\t\t}\n\n\t\t\t\t\trelativeToBase[l][d] = blockSizes[l][d] / blockSizes[0][d];\n\t\t\t\t\trelativeToAdjacent[l][d] = blockSizes[l][d] / blockSizes[k][d];\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (datasetSize == null) {\n\t\t\t\tdatasetSizeInChunks = null;\n\t\t\t} else {\n\t\t\t\tdatasetSizeInChunks = new long[numDimensions];\n\t\t\t\tArrays.setAll(datasetSizeInChunks, d -> (datasetSize[d] + blockSizes[0][d] - 1) / blockSizes[0][d]);\n\t\t\t}\n\t\t}\n\n\t\tpublic NestedGrid(int[][] blockSizes) {\n\t\t\tthis(blockSizes, null);\n\t\t}\n\n\t\t/**\n\t\t * Create a {@code NestedPosition} at the specified nesting {@code\n\t\t * level} and grid {@code position}.\n\t\t * <p>\n\t\t * Note that {@code position} is in units of grid elements at {@code\n\t\t * level}. 
Positions with {@code level=0} refer to the Chunk grid,\n\t\t * positions with {@code level=1} refer to first-level Shard grid, and\n\t\t * so on.\n\t\t * <p>\n\t\t * The returned {@code NestedPosition} will have\n\t\t * {@link NestedPosition#level() level()==level}.\n\t\t *\n\t\t * @param position\n\t\t * \t\tposition at {@code level}\n\t\t * @param level\n\t\t * \t\tnesting level of {@code position}\n\t\t *\n\t\t * @return a NestedPosition representation of the specified grid position and nesting level\n\t\t */\n\t\tpublic NestedPosition nestedPosition(final long[] position, final int level) {\n\t\t\treturn new NestedPosition(this, position, level);\n\t\t}\n\n\t\t/**\n\t\t * Create a {@code NestedPosition} at the specified chunk grid {@code\n\t\t * position} (that is, at nesting level 0).\n\t\t * <p>\n\t\t * Note that {@code position} is in units of chunks.\n\t\t * <p>\n\t\t * The returned {@code NestedPosition} will have\n\t\t * {@link NestedPosition#level() level()==0}.\n\t\t *\n\t\t * @param position\n\t\t * \t\tposition at level 0 (chunk grid)\n\t\t *\n\t\t * @return a NestedPosition representation of the specified chunk grid position\n\t\t */\n\t\tpublic NestedPosition nestedPosition(final long[] position) {\n\t\t\treturn nestedPosition(position, 0);\n\t\t}\n\n\t\tpublic int numLevels() {\n\t\t\treturn numLevels;\n\t\t}\n\n\t\tpublic int numDimensions() {\n\t\t\treturn numDimensions;\n\t\t}\n\n\t\t/**\n\t\t * Get the block size in pixels at the given {@code level}.\n\t\t */\n\t\tpublic int[] getBlockSize(final int level) {\n\t\t\treturn blockSizes[level];\n\t\t}\n\n\t\t/**\n\t\t * Computes the pixel position for the given {@code sourcePos} grid\n\t\t * position at {@code sourceLevel}.\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\ta grid position at {@code sourceLevel}\n\t\t * @param sourceLevel\n\t\t * \t\tnesting level of {@code sourcePos}\n\t\t * @param targetPos\n\t\t * \t\tthe pixel position will be stored here\n\t\t */\n\t\tpublic void 
pixelPosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel,\n\t\t\t\tfinal long[] targetPos) {\n\t\t\tfinal int[] s = blockSizes[sourceLevel];\n\t\t\tfor (int d = 0; d < numDimensions; ++d) {\n\t\t\t\ttargetPos[d] = sourcePos[d] * s[d];\n\t\t\t}\n\t\t}\n\n\t\t/**\n\t\t * Get the pixel position for the given {@code sourcePos} grid position\n\t\t * at {@code sourceLevel}.\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\ta grid position at {@code sourceLevel}\n\t\t * @param sourceLevel\n\t\t * \t\tnesting level of {@code sourcePos}\n\t\t *\n\t\t * @return the pixel position\n\t\t */\n\t\tpublic long[] pixelPosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel) {\n\t\t\tfinal long[] targetPos = new long[numDimensions];\n\t\t\tpixelPosition(sourcePos, sourceLevel, targetPos);\n\t\t\treturn targetPos;\n\t\t}\n\n\t\t/**\n\t\t * Get the maximum pixel position in the block (chunk/shard) at the\n\t\t * given {@code sourcePos} grid position at {@code sourceLevel}.\n\t\t * <p>\n\t\t * Note that this does not take into account {@link #getDatasetSize()\n\t\t * dataset dimensions}. 
That is, it is always assumed that the\n\t\t * chunk/shard has the default size.\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\ta grid position at {@code sourceLevel}\n\t\t * @param sourceLevel\n\t\t * \t\tnesting level of {@code sourcePos}\n\t\t * @param targetPos\n\t\t * \t\tthe pixel position will be stored here\n\t\t */\n\t\tpublic void maxPixelPosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel,\n\t\t\t\tfinal long[] targetPos) {\n\t\t\tfinal int[] s = blockSizes[sourceLevel];\n\t\t\tfor (int d = 0; d < numDimensions; ++d) {\n\t\t\t\ttargetPos[d] = (sourcePos[d] + 1) * s[d] - 1;\n\t\t\t}\n\t\t}\n\n\t\t/**\n\t\t * Get the maximum pixel position in the block (chunk/shard) at the\n\t\t * given {@code sourcePos} grid position at {@code sourceLevel}.\n\t\t * <p>\n\t\t * Note that this does not take into account {@link #getDatasetSize()\n\t\t * dataset dimensions}. That is, it is always assumed that the\n\t\t * chunk/shard has the default size.\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\ta grid position at {@code sourceLevel}\n\t\t * @param sourceLevel\n\t\t * \t\tnesting level of {@code sourcePos}\n\t\t *\n\t\t * @return the pixel position\n\t\t */\n\t\tpublic long[] maxPixelPosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel) {\n\t\t\tfinal long[] targetPos = new long[numDimensions];\n\t\t\tmaxPixelPosition(sourcePos, sourceLevel, targetPos);\n\t\t\treturn targetPos;\n\t\t}\n\n\t\t/**\n\t\t * Computes the absolute {@code targetPos} grid position at {@code\n\t\t * targetLevel} for the given {@code sourcePos} grid position at {@code\n\t\t * sourceLevel}.\n\t\t * <p>\n\t\t * For example, this can be used to compute the coordinates on the shard\n\t\t * grid ({@code targetLevel==1}) of the shard containing a given\n\t\t * chunk ({@code sourcePos} at {@code sourceLevel==0}).\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\ta grid position at {@code sourceLevel}\n\t\t * @param sourceLevel\n\t\t * \t\tnesting level of {@code 
sourcePos}\n\t\t * @param targetPos\n\t\t * \t\tthe grid position at {@code targetLevel} will be stored here\n\t\t * @param targetLevel\n\t\t * \t\tnesting level of {@code targetPos}\n\t\t */\n\t\tpublic void absolutePosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel,\n\t\t\t\tfinal long[] targetPos,\n\t\t\t\tfinal int targetLevel) {\n\t\t\tfinal int[] sk = relativeToBase[sourceLevel];\n\t\t\tfinal int[] si = relativeToBase[targetLevel];\n\t\t\tfor (int d = 0; d < numDimensions; ++d) {\n\t\t\t\ttargetPos[d] = sourcePos[d] * sk[d] / si[d];\n\t\t\t}\n\t\t}\n\n\t\t/**\n\t\t * Get the absolute grid position at {@code targetLevel} for the given\n\t\t * {@code sourcePos} grid position at {@code sourceLevel}.\n\t\t * <p>\n\t\t * For example, this can be used to compute the coordinates on the shard\n\t\t * grid ({@code targetLevel==1}) of the shard containing a given\n\t\t * chunk ({@code sourcePos} at {@code sourceLevel==0}).\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\tthe source position\n\t\t * @param sourceLevel\n\t\t * \t\tthe source level\n\t\t * @param targetLevel\n\t\t * \t\tthe target level\n\t\t *\n\t\t * @return absolute position at the target level\n\t\t */\n\t\tpublic long[] absolutePosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel,\n\t\t\t\tfinal int targetLevel) {\n\t\t\tif (sourceLevel == targetLevel) {\n\t\t\t\treturn sourcePos;\n\t\t\t}\n\t\t\tfinal long[] targetPos = new long[numDimensions];\n\t\t\tabsolutePosition(sourcePos, sourceLevel, targetPos, targetLevel);\n\t\t\treturn targetPos;\n\t\t}\n\n\t\t/**\n\t\t * Get the absolute grid position at {@code targetLevel} for the given\n\t\t * {@code sourcePos} chunk grid position (level 0).\n\t\t * <p>\n\t\t * For example, this can be used to compute the coordinates on the shard\n\t\t * grid ({@code targetLevel==1}) of the shard containing a given\n\t\t * chunk ({@code sourcePos}).\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\tthe source position\n\t\t * 
@param targetLevel\n\t\t * \t\tthe target level\n\t\t *\n\t\t * @return absolute position at the target level\n\t\t */\n\t\tpublic long[] absolutePosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int targetLevel) {\n\t\t\treturn absolutePosition(sourcePos, 0, targetLevel);\n\t\t}\n\n\t\t/**\n\t\t * Computes the {@code targetPos} grid position at {@code targetLevel}\n\t\t * for the given {@code sourcePos} grid position at {@code sourceLevel},\n\t\t * relative to the containing element at {@code targetLevel+1}.\n\t\t * (The containing element is a shard for {@code targetLevel+1 <\n\t\t * numLevels} or the dataset for {@code targetLevel+1 == numLevels}.)\n\t\t * <p>\n\t\t * For example, this can be used to compute the grid coordinates {@code\n\t\t * targetLevel==0} of a given chunk ({@code sourcePos} at {@code\n\t\t * sourceLevel==0}) within a shard (containing element at level {@code\n\t\t * targetLevel+1==1}).\n\t\t * </p>\n\t\t *\n\t\t * @param sourcePos\n\t\t * \t\ta grid position at {@code sourceLevel}\n\t\t * @param sourceLevel\n\t\t * \t\tnesting level of {@code sourcePos}\n\t\t * @param targetPos\n\t\t * \t\tthe grid position at {@code targetLevel} will be stored here\n\t\t * @param targetLevel\n\t\t * \t\tnesting level of {@code targetPos}\n\t\t */\n\t\tpublic void relativePosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel,\n\t\t\t\tfinal long[] targetPos,\n\t\t\t\tfinal int targetLevel) {\n\t\t\tabsolutePosition(sourcePos, sourceLevel, targetPos, targetLevel);\n\t\t\tif (targetLevel < numLevels - 1) {\n\t\t\t\tfinal int[] rj = relativeToAdjacent[targetLevel + 1];\n\t\t\t\tfor (int d = 0; d < numDimensions; ++d) {\n\t\t\t\t\ttargetPos[d] %= rj[d];\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tpublic long[] relativePosition(\n\t\t\t\tfinal long[] sourcePos,\n\t\t\t\tfinal int sourceLevel,\n\t\t\t\tfinal int targetLevel) {\n\t\t\tfinal long[] targetPos = new long[numDimensions];\n\t\t\trelativePosition(sourcePos, sourceLevel, targetPos, 
targetLevel);\n\t\t\treturn targetPos;\n\t\t}\n\n\t\t/**\n\t\t * Get size of a block at the given {@code level} relative to {@code\n\t\t * level-1} (that is, in units of {@code level-1} blocks).\n\t\t * <p>\n\t\t * For example {@code relativeBlockSize(1)} returns the number of\n\t\t * chunks in a (non-nested) shard.\n\t\t */\n\t\tpublic int[] relativeBlockSize(final int level) {\n\t\t\treturn relativeToAdjacent[level];\n\t\t}\n\n\t\t/**\n\t\t * Get size of a block at the given {@code level} relative to level {@code\n\t\t * 0} (that is, in units of chunks).\n\t\t * <p>\n\t\t * For example {@code relativeToBaseBlockSize(1)} returns the number of\n\t\t * chunks in a (non-nested) shard.\n\t\t */\n\t\tpublic int[] relativeToBaseBlockSize(final int level) {\n\t\t\treturn relativeToBase[level];\n\t\t}\n\n\t\t/**\n\t\t * Get the size of the dataset in pixels.\n\t\t * <p>\n\t\t * This might return {@code null}, if this {@code NestedGrid} was not\n\t\t * constructed with dataset dimensions.\n\t\t *\n\t\t * @return size of the dataset in pixels\n\t\t */\n\t\tpublic long[] getDatasetSize() {\n\t\t\treturn datasetSize;\n\t\t}\n\n\t\t/**\n\t\t * Get the size of the dataset in units of chunks.\n\t\t * <p>\n\t\t * This might return {@code null}, if this {@code NestedGrid} was not\n\t\t * constructed with dataset dimensions.\n\t\t *\n\t\t * @return size of the dataset in chunks\n\t\t */\n\t\tpublic long[] getDatasetSizeInChunks() {\n\t\t\treturn datasetSizeInChunks;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/PositionValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.net.URI;\n\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\n/**\n * Wrap a KeyValueAccess and a dataset URI to be able to get/set values (ReadData) by {@code long[]} key\n * indicating the position of a top-level DataBlock (a top-level shard for sharded datasets,\n * a chunk for non-sharded datasets).\n */\npublic interface PositionValueAccess {\n\n\t/**\n\t * Gets the {@link VolatileReadData} for the DataBlock at the given position\n\t * in the block grid.\n\t * <p>\n\t * If the requested key does not exist, either {@code null} is returned or a\n\t * lazy {@code VolatileReadData} that will throw {@code N5NoSuchKeyException}\n\t * when trying to materialize.\n\t *\n\t * @param key\n\t *            The position of the block\n\t * @return ReadData for the given key or {@code null} if the key doesn't exist\n\t * @throws N5Exception.N5IOException\n\t *             if an error occurs while reading\n\t */\n\tVolatileReadData get(long[] key) throws N5Exception.N5IOException;\n\n\t/**\n\t * Write the {@code data} for a DataBlock to the given position in the block\n\t * grid.\n\t *\n\t * @param key\n\t * \t\tThe grid position of the DataBlock to write\n\t * @param data\n\t * \t\tThe data to write\n\t *\n\t * @throws N5Exception.N5IOException\n\t * \t\tif an error occurs while writing\n\t */\n\tvoid set(long[] key, ReadData data) throws N5Exception.N5IOException;\n\n\tboolean exists(long[] key) throws N5Exception.N5IOException;\n\n\tboolean remove(long[] key) throws N5Exception.N5IOException;\n\n\tstatic PositionValueAccess fromKva(\n\t\t\tfinal KeyValueAccess kva,\n\t\t\tfinal URI uri,\n\t\t\tfinal String 
normalPath,\n\t\t\tfinal DatasetAttributes attributes) {\n\n\t\treturn new KvaPositionValueAccess(kva, uri, normalPath, attributes);\n\t}\n\n\tclass KvaPositionValueAccess implements PositionValueAccess {\n\n\t\tprivate final KeyValueAccess kva;\n\t\tprivate final URI uri;\n\t\tprivate final String normalPath;\n\t\tprivate final DatasetAttributes attributes;\n\n\t\tKvaPositionValueAccess(final KeyValueAccess kva,\n\t\t\t\tfinal URI uri,\n\t\t\t\tfinal String normalPath,\n\t\t\t\tfinal DatasetAttributes attributes) {\n\n\t\t\tthis.kva = kva;\n\t\t\tthis.uri = uri;\n\t\t\tthis.normalPath = normalPath;\n\t\t\tthis.attributes = attributes;\n\t\t}\n\n\t\t/**\n\t\t * Constructs the absolute path for a DataBlock at a given grid\n\t\t * position.\n\t\t *\n\t\t * @param gridPosition\n\t\t *            the grid position of the target data block\n\t\t * @return the absolute path to the data block at gridPosition\n\t\t */\n\t\tprotected String absolutePath(final long... gridPosition) {\n\t\t\treturn kva.compose(uri, normalPath, attributes.relativeBlockPath(gridPosition));\n\t\t}\n\n\t\t@Override\n\t\tpublic VolatileReadData get(final long[] key) throws N5IOException {\n\t\t\ttry {\n\t\t\t\treturn kva.createReadData(absolutePath(key));\n\t\t\t} catch (N5Exception.N5NoSuchKeyException e) {\n\t\t\t\treturn null;\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean exists(final long[] key) throws N5IOException {\n\t\t\treturn kva.isFile(absolutePath(key));\n\t\t}\n\n\t\t@Override\n\t\tpublic void set(final long[] key, final ReadData data) throws N5IOException {\n\t\t\tif (data == null) {\n\t\t\t\tremove(key);\n\t\t\t} else {\n\t\t\t\tkva.write(absolutePath(key), data);\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean remove(final long[] gridPosition) throws N5IOException {\n\n\t\t\tfinal String key = absolutePath(gridPosition);\n\t\t\tif (!kva.isFile(key))\n\t\t\t\treturn false;\n\n\t\t\tkva.delete(key);\n\t\t\treturn true;\n\t\t}\n\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/RawShard.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.segment.Segment;\nimport org.janelia.saalfeldlab.n5.readdata.segment.SegmentedReadData;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.NDArray;\n\npublic class RawShard {\n\n\tprivate final SegmentedReadData sourceData;\n\n\tprivate final NDArray<Segment> index;\n\n\tRawShard(final int[] size) {\n\t\tsourceData = null;\n\t\tindex = new NDArray<>(size, Segment[]::new);\n\t}\n\n\tRawShard(final SegmentedReadData sourceData, final NDArray<Segment> index) {\n\t\tthis.sourceData = sourceData;\n\t\tthis.index = index;\n\t}\n\n\tRawShard(final ShardIndex.SegmentIndexAndData segmentIndexAndData) {\n\t\tthis(segmentIndexAndData.data(), segmentIndexAndData.index());\n\t}\n\n\t/**\n\t * The ReadData from which the shard was constructed, or {@code null} for a\n\t * new empty shard.\n\t * \n\t * @return this shard's source ReadData, or null.\n\t */\n\tpublic SegmentedReadData sourceData() {\n\t\treturn sourceData;\n\t}\n\n\t/**\n\t * Maps grid position of shard elements to {@link Segment}s that give the\n\t * byte range for the blocks in this shard.\n\t * \n\t * @return an NDArray of segments\n\t */\n\tpublic NDArray<Segment> index() {\n\t\treturn index;\n\t}\n\n\tpublic boolean isEmpty() {\n\t\treturn index().allElementsNull();\n\t}\n\n\tpublic ReadData getElementData(final long[] pos) {\n\t\tfinal Segment segment = index.get(pos);\n\t\treturn segment == null ? null : segment.source().slice(segment);\n\t}\n\n\tpublic void setElementData(final ReadData data, final long[] pos) {\n\t\tfinal Segment segment = data == null ? 
null : SegmentedReadData.wrap(data).segments().get(0);\n\t\tindex.set(segment, pos);\n\t}\n\n\tpublic void prefetch(List<long[]> positions) {\n\n\t\tfinal List<Range> ranges = new ArrayList<>(positions.size());\n\t\tfor (long[] pos : positions) {\n\t\t\tfinal Segment seg = index.get(pos);\n\t\t\tif (seg != null)\n\t\t\t\tranges.add(sourceData.location(seg));\n\t\t}\n\t\tsourceData.prefetch(ranges);\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/RawShardCodec.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport static org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation.START;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodec;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.segment.Segment;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.segment.SegmentedReadData;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.NDArray;\n\npublic class RawShardCodec implements BlockCodec<RawShard> {\n\n\t/**\n\t * Number of elements (chunks, nested shards) in each dimension per shard.\n\t */\n\tprivate final int[] size;\n\tprivate final IndexLocation indexLocation;\n\tprivate final BlockCodec<long[]> indexCodec;\n\tprivate final long indexBlockSizeInBytes;\n\n\tRawShardCodec(final int[] size, final IndexLocation indexLocation, final BlockCodec<long[]> indexCodec) {\n\n\t\tthis.size = size;\n\t\tthis.indexLocation = indexLocation;\n\t\tthis.indexCodec = indexCodec;\n\t\tindexBlockSizeInBytes = indexCodec.encodedSize(ShardIndex.blockSizeFromIndexSize(size));\n\t}\n\n\t@Override\n\tpublic ReadData encode(final DataBlock<RawShard> shard) throws N5Exception.N5IOException {\n\n\t\t// concatenate slices for all non-null segments in shard.getData().index()\n\t\tfinal NDArray<Segment> index = shard.getData().index();\n\t\tfinal List<SegmentedReadData> readDatas = new ArrayList<>();\n\t\t// TODO: Any clever ReadData grouping, slice merging, etc. 
should go here\n\t\t//       This basic implementation just slices ReadData for all non-null\n\t\t//       elements and concatenates in flat index order.\n\t\tfor (Segment segment : index.data) {\n\t\t\tif (segment != null) {\n\t\t\t\treadDatas.add(segment.source().slice(segment));\n\t\t\t}\n\t\t}\n\t\tfinal SegmentedReadData data = SegmentedReadData.concatenate(readDatas);\n\n\t\tfinal ReadData.Generator writer;\n\t\tif (indexLocation == START) {\n\t\t\tdata.materialize();\n\t\t\tfinal NDArray<Range> locations = ShardIndex.locations(index, data);\n\t\t\tfinal DataBlock<long[]> indexDataBlock = ShardIndex.toDataBlock(locations, indexBlockSizeInBytes);\n\t\t\tfinal ReadData indexReadData = indexCodec.encode(indexDataBlock);\n\t\t\twriter = out -> {\n\t\t\t\tindexReadData.writeTo(out);\n\t\t\t\tdata.writeTo(out);\n\t\t\t};\n\t\t} else { // indexLocation == END\n\t\t\twriter = out -> {\n\t\t\t\tdata.writeTo(out);\n\t\t\t\tfinal NDArray<Range> locations = ShardIndex.locations(index, data);\n\t\t\t\tfinal DataBlock<long[]> indexDataBlock = ShardIndex.toDataBlock(locations, 0);\n\t\t\t\tfinal ReadData indexReadData = indexCodec.encode(indexDataBlock);\n\t\t\t\tindexReadData.writeTo(out);\n\t\t\t};\n\t\t}\n\t\treturn ReadData.from(writer);\n\t}\n\n\t@Override\n\tpublic DataBlock<RawShard> decode(final ReadData readData, final long[] gridPosition) throws N5Exception.N5IOException {\n\n\t\tfinal long indexOffset = (indexLocation == START) ? 0 : (readData.requireLength() - indexBlockSizeInBytes);\n\t\tfinal ReadData indexReadData = readData.slice(indexOffset, indexBlockSizeInBytes);\n\t\tfinal DataBlock<long[]> indexDataBlock = indexCodec.decode(indexReadData, new long[size.length]);\n\t\tfinal NDArray<Range> locations = ShardIndex.fromDataBlock(indexDataBlock);\n\t\tfinal ShardIndex.SegmentIndexAndData segments = ShardIndex.segments(locations, readData);\n\t\treturn new RawShardDataBlock(gridPosition, new RawShard(segments));\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/RawShardDataBlock.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\n\n/**\n * Wrap a RawShard as a DataBlock.\n * This basically just adds a gridPosition for the shard.\n */\npublic class RawShardDataBlock implements DataBlock<RawShard> {\n\n\tprivate final long[] gridPosition;\n\n\tprivate final RawShard shard;\n\n\tRawShardDataBlock(final long[] gridPosition, final RawShard shard) {\n\t\tthis.gridPosition = gridPosition;\n\t\tthis.shard = shard;\n\t}\n\n\t// TODO: should this be the number of elements in the Shard (number of\n\t//       sub-shards / chunks) along each dimension, or the number of\n\t//       pixels along each dimension?\n\t@Override\n\tpublic int[] getSize() {\n\t\treturn shard.index().size();\n\t}\n\n\t@Override\n\tpublic long[] getGridPosition() {\n\t\treturn gridPosition;\n\t}\n\n\t@Override\n\tpublic int getNumElements() {\n\t\treturn shard.index().numElements();\n\t}\n\n\t@Override\n\tpublic RawShard getData() {\n\t\treturn shard;\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/Region.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\n\n/**\n * Bounds, in pixel coordinates, of a region in a dataset.\n * <p>\n * Provides methods to find which chunks and shards are contained in the\n * region, iterate sub-NestedPositions, etc.\n */\npublic class Region {\n\n\t/**\n\t * The dimensions of the full dataset.\n\t * This is used to decide whether DataBlocks are on the border (and therefore possibly truncated).\n\t */\n\tprivate final long[] datasetDimensions;\n\n\t/**\n\t * The nested grid of the dataset\n\t */\n\tprivate final NestedGrid grid;\n\n\t/**\n\t * min pixel position in the region\n\t */\n\tprivate final long[] min;\n\n\t/**\n\t * size of the region in pixels\n\t */\n\tprivate final long[] size;\n\n\t/**\n\t * {@code NestedPosition} of the chunk containing the min pixel position.\n\t */\n\tprivate final Nesting.NestedPosition minPos;\n\n\t/**\n\t * {@code NestedPosition} of the chunk containing the max pixel position.\n\t */\n\tprivate final Nesting.NestedPosition maxPos;\n\n\tpublic Region(final long[] min, final long[] size, final NestedGrid grid) {\n\t\tthis.min = min;\n\t\tthis.size = size;\n\t\tthis.grid = grid;\n\t\tthis.datasetDimensions = grid.getDatasetSize();\n\n\t\tfinal int n = min.length;\n\t\tfinal int[] chunkSize = grid.getBlockSize(0);\n\n\t\tfinal long[] minChunk = new long[n];\n\t\tArrays.setAll(minChunk, d -> min[d] / chunkSize[d]);\n\t\tminPos = grid.nestedPosition(minChunk);\n\n\t\tfinal long[] maxChunk = new long[n];\n\t\tArrays.setAll(maxChunk, d -> (min[d] + size[d] - 1) / chunkSize[d]);\n\t\tmaxPos = grid.nestedPosition(maxChunk);\n\t}\n\n\t/**\n\t * Get the {@code NestedPosition} of the minimum chunk touched by the region.\n\t */\n\tpublic Nesting.NestedPosition minPos() {\n\t\treturn minPos;\n\t}\n\n\t/**\n\t * Get the {@code NestedPosition} of the maximum chunk 
touched by the region.\n\t */\n\tpublic Nesting.NestedPosition maxPos() {\n\t\treturn maxPos;\n\t}\n\n\t/**\n\t * Check whether the shard or chunk corresponding to the given position is\n\t * fully contained inside the region.\n\t * <p>\n\t * The {@link Nesting.NestedPosition#level() level} of {@code position} is\n\t * used to determine whether it refers to a chunk or a (potentially nested)\n\t * shard.\n\t *\n\t * @param position\n\t * \t\tthe NestedPosition to check\n\t *\n\t * @return true, if the given position is fully contained in this region\n\t */\n\tpublic boolean fullyContains(final Nesting.NestedPosition position) {\n\n\t\tfinal long[] pmin = position.pixelPosition();\n\t\tfor (int d = 0; d < pmin.length; d++) {\n\t\t\tif (pmin[d] < min[d]) {\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\n\t\tfinal long[] pmax = position.maxPixelPosition();\n\t\tfor (int d = 0; d < pmax.length; d++) {\n\t\t\tfinal long m = Math.min(pmax[d], datasetDimensions[d] - 1);\n\t\t\tif (m > min[d] + size[d] - 1) {\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\n\t\treturn true;\n\t}\n\n\t/**\n\t * Returns {@code NestedPosition}s of all nested elements in the given\n\t * {@code position} that are contained in this {@code Region}.\n\t * <p>\n\t * The returned {@code NestedPosition}s will all have level {@code\n\t * position.level()-1}.\n\t */\n\t// TODO: Revise to accept Consumer<NestedPosition> for handling each position\n\tList<Nesting.NestedPosition> containedNestedPositions(final Nesting.NestedPosition position) {\n\t\tfinal int level = position.level() - 1;\n\t\tfinal long[] gridMinOfRegion = minPos().absolute(level);\n\t\tfinal long[] gridMaxOfRegion = maxPos().absolute(level);\n\n\t\tfinal long[] gridMinOfPosition = position.absolute(level);\n\t\tfinal int[] gridSizeOfPosition = grid.relativeBlockSize(level + 1);\n\n\t\tfinal int n = grid.numDimensions();\n\t\tfinal long[] gridMin = new long[n];\n\t\tArrays.setAll(gridMin, d -> Math.max(gridMinOfRegion[d], 
gridMinOfPosition[d]));\n\t\tfinal long[] gridMax = new long[n];\n\t\tArrays.setAll(gridMax, d -> Math.min(gridMaxOfRegion[d], gridMinOfPosition[d] + gridSizeOfPosition[d] - 1));\n\n\t\tfinal List<long[]> gridPositions = gridPositions(gridMin, gridMax);\n\t\tfinal List<Nesting.NestedPosition> nestedPositions = new ArrayList<>();\n\t\tgridPositions.forEach(p -> nestedPositions.add(grid.nestedPosition(p, level)));\n\t\treturn nestedPositions;\n\t}\n\n\t// TODO: Revise to accept Consumer<long[]> for handling each position\n\tpublic static List<long[]> gridPositions(final long[] min, final long[] max) {\n\t\tfinal int n = min.length;\n\t\tfinal long[] pos = min.clone();\n\t\tint numElements = 1;\n\t\tfor (int d = 0; d < n; ++d) {\n\t\t\tnumElements *= (int) (max[d] - min[d] + 1);\n\t\t}\n\t\tlong[][] positions = new long[numElements][n];\n\t\tArrays.setAll(positions[0], j -> pos[j]);\n\t\tfor (int i = 1; i < numElements; i++) {\n\t\t\tfor (int d = 0; d < n; ++d) {\n\t\t\t\tif (++pos[d] <= max[d]) {\n\t\t\t\t\tArrays.setAll(positions[i], j -> pos[j]);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpos[d] = min[d];\n\t\t\t}\n\t\t}\n\t\treturn Arrays.asList(positions);\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/ShardCodecInfo.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DatasetCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\n\npublic interface ShardCodecInfo extends BlockCodecInfo {\n\n\t/**\n\t * Size in pixels of each shard element (either nested shard or chunk)\n\t *\n\t * @return the size of each shard element\n\t */\n\tint[] getInnerBlockSize();\n\n\t/**\n\t *\n\t * @return the collection of DatasetCodecInfo applied to data blocks for this shard\n\t */\n\tDatasetCodecInfo[] getInnerDatasetCodecInfos();\n\n\t/**\n\t * BlockCodecInfo for shard elements (either nested shard or DataBlock)\n\t *\n\t * @return the BlockCodecInfo for DataBlocks in this shard\n\t */\n\tBlockCodecInfo getInnerBlockCodecInfo();\n\n\t/**\n\t * @return the collection of DataCodecInfos applied to data blocks for this\n\t *         shard.\n\t */\n\tDataCodecInfo[] getInnerDataCodecInfos();\n\n\t/**\n\t * BlockCodec for shard index\n\t *\n\t * @return the BlockCodecInfo for this shard's index\n\t */\n\tBlockCodecInfo getIndexBlockCodecInfo();\n\n\t/**\n\t * Deterministic-size DataCodecs for index BlockCodec\n\t *\n\t * @return the collection of DataCodecInfos for this shard's index\n\t */\n\tDataCodecInfo[] getIndexDataCodecInfos();\n\n\tIndexLocation getIndexLocation();\n\n\t@SuppressWarnings(\"unchecked\")\n\t@Override\n\tdefault RawShardCodec create(DataType dataType, int[] blockSize, DataCodecInfo... codecs) {\n\t\treturn create(blockSize, codecs);\n\t}\n\n\tRawShardCodec create(int[] blockSize, DataCodecInfo... codecs);\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/shard/ShardIndex.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.function.IntFunction;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.LongArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.segment.Segment;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.segment.SegmentedReadData;\nimport org.janelia.saalfeldlab.n5.readdata.segment.SegmentedReadData.SegmentsAndData;\n\nimport com.google.gson.annotations.SerializedName;\n\npublic class ShardIndex {\n\n\tprivate ShardIndex() {\n\t\t// utility class. should not be instantiated.\n\t}\n\n\tpublic enum IndexLocation {\n\t\t@SerializedName(\"start\") START,\n\t\t@SerializedName(\"end\") END\n\t}\n\n\t/**\n\t * Access flat {@code T[]} array as n-dimensional array.\n\t *\n\t * @param <T>\n\t * \t\telement type\n\t */\n\tpublic static class NDArray<T> {\n\n\t\tfinal int[] size;\n\t\tprivate final int[] stride;\n\t\tfinal T[] data;\n\n\t\tNDArray(final int[] size, final IntFunction<T[]> createArray) {\n\t\t\tthis.size = size;\n\t\t\tstride = getStrides(size);\n\t\t\tdata = createArray.apply(getNumElements(size));\n\t\t}\n\n\t\tNDArray(final int[] size, final T[] data) {\n\t\t\tthis.size = size;\n\t\t\tstride = getStrides(size);\n\t\t\tthis.data = data;\n\t\t}\n\n\t\tT get(long... position) {\n\t\t\treturn data[index(position)];\n\t\t}\n\n\t\tvoid set(T value, long... position) {\n\t\t\tdata[index(position)] = value;\n\t\t}\n\n\t\tprivate int index(long... 
position) {\n\t\t\tint index = 0;\n\t\t\tfor (int i = 0; i < stride.length; i++) {\n\t\t\t\tindex += stride[i] * position[i];\n\t\t\t}\n\t\t\treturn index;\n\t\t}\n\n\t\tpublic int[] size() {\n\t\t\treturn size;\n\t\t}\n\n\t\tpublic int numElements() {\n\t\t\treturn data.length;\n\t\t}\n\n\t\tpublic boolean allElementsNull() {\n\t\t\tfor (T t : data) {\n\t\t\t\tif (t != null) {\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn true;\n\t\t}\n\t}\n\n\tstatic int getNumElements(final int[] size) {\n\t\tint numElements = 1;\n\t\tfor (int s : size) {\n\t\t\tnumElements *= s;\n\t\t}\n\t\treturn numElements;\n\t}\n\n\tstatic int[] getStrides(final int[] size) {\n\t\tfinal int n = size.length;\n\t\tfinal int[] stride = new int[n];\n\t\tstride[0] = 1;\n\t\tfor (int i = 1; i < n; i++) {\n\t\t\tstride[i] = stride[i - 1] * size[i - 1];\n\t\t}\n\t\treturn stride;\n\t}\n\n\t/**\n\t * Special value indicating an empty block entry in the index.\n\t * Used for both offset and length when a block doesn't exist.\n\t */\n\tstatic final long EMPTY_INDEX_NBYTES = 0xFFFFFFFFFFFFFFFFL;\n\n\t/**\n\t * Size of first dimension of the {@code DataBlock<long[]>} representation of the shard index.\n\t */\n\tprivate static final int LONGS_PER_BLOCK = 2;\n\n\tstatic NDArray<Range> fromDataBlock( final DataBlock<long[]> block ) {\n\n\t\tfinal long[] blockData = block.getData();\n\t\tfinal int[] size = indexSizeFromBlockSize(block.getSize());\n\t\tfinal int n = getNumElements(size);\n\t\tfinal Range[] locations = new Range[n];\n\n\t\tfor (int i = 0; i < n; i++) {\n\t\t\tlong offset = blockData[i * LONGS_PER_BLOCK];\n\t\t\tlong length = blockData[i * LONGS_PER_BLOCK + 1];\n\t\t\tif (offset != EMPTY_INDEX_NBYTES && length != EMPTY_INDEX_NBYTES) {\n\t\t\t\tlocations[i] = Range.at(offset, length);\n\t\t\t}\n\t\t}\n\t\treturn new NDArray<>(size, locations);\n\t}\n\n\tstatic DataBlock<long[]> toDataBlock( final NDArray<Range> locations, final long offset ) {\n\n\t\tfinal Range[] data = 
locations.data;\n\n\t\tfinal int[] blockSize = blockSizeFromIndexSize(locations.size);\n\t\tfinal long[] blockData = new long[data.length * 2];\n\n\t\tfor (int i = 0; i < data.length; ++i) {\n\t\t\tif (data[i] != null) {\n\t\t\t\tblockData[i * LONGS_PER_BLOCK] = data[i].offset() + offset;\n\t\t\t\tblockData[i * LONGS_PER_BLOCK + 1] = data[i].length();\n\t\t\t} else {\n\t\t\t\tblockData[i * LONGS_PER_BLOCK] = EMPTY_INDEX_NBYTES;\n\t\t\t\tblockData[i * LONGS_PER_BLOCK + 1] = EMPTY_INDEX_NBYTES;\n\t\t\t}\n\t\t}\n\t\treturn new LongArrayDataBlock(blockSize, new long[blockSize.length], blockData);\n\t}\n\n\t/**\n\t * Prepends a value to an array.\n\t *\n\t * @param value the value to prepend\n\t * @param array the original array\n\t * @return a new array with the value prepended\n\t */\n\tprivate static int[] prepend(final int value, final int[] array) {\n\n\t\tfinal int[] indexBlockSize = new int[array.length + 1];\n\t\tindexBlockSize[0] = value;\n\t\tSystem.arraycopy(array, 0, indexBlockSize, 1, array.length);\n\t\treturn indexBlockSize;\n\t}\n\n\t/**\n\t * Prepends {@code LONGS_PER_BLOCK} to the {@code indexSize} array.\n\t */\n\tstatic int[] blockSizeFromIndexSize(final int[] indexSize) {\n\t\treturn prepend(LONGS_PER_BLOCK, indexSize);\n\t}\n\n\t/**\n\t * Strips the first element (should be {@code LONGS_PER_BLOCK}) from the {@code blockSize} array.\n\t */\n\tstatic int[] indexSizeFromBlockSize(final int[] blockSize) {\n\t\tassert blockSize[ 0 ] == LONGS_PER_BLOCK;\n\t\treturn Arrays.copyOfRange(blockSize, 1, blockSize.length);\n\t}\n\n\t/**\n\t * Retrieves the {@code Range} location of each non-null {@code Segment} in\n\t * {@code segments}. 
Returns an {@code NDArray<Range>} with entries\n\t * corresponding to the {@code segments} entries.\n\t */\n\tstatic NDArray<Range> locations(final NDArray<Segment> segments, final SegmentedReadData readData) {\n\n\t\tfinal Segment[] data = segments.data;\n\t\tfinal Range[] locations = new Range[data.length];\n\t\tfor (int i = 0; i < data.length; ++i) {\n\t\t\tfinal Segment segment = data[i];\n\t\t\tif ( segment != null ) {\n\t\t\t\tlocations[i] = readData.location(segment);\n\t\t\t}\n\t\t}\n\t\treturn new NDArray<>(segments.size, locations);\n\t}\n\n\tinterface SegmentIndexAndData {\n\t\tNDArray<Segment> index();\n\t\tSegmentedReadData data();\n\t}\n\n\t/**\n\t * Puts a {@code Segment} at each non-null {@code Range} in {@code\n\t * locations} on the given {@code readData}. Returns both the {@code\n\t * SegmentedReadData} with these segments and an {@code NDArray<Segment>}\n\t * with segment entries corresponding to the {@code locations} entries.\n\t */\n\tstatic SegmentIndexAndData segments(final NDArray<Range> locations, final ReadData readData) {\n\n\t\tfinal Range[] locationsData = locations.data;\n\t\tfinal Segment[] segmentsData = new Segment[locationsData.length];\n\n\t\tfinal List<Range> presentLocations = new ArrayList<>();\n\t\tfor (int i = 0; i < locationsData.length; i++) {\n\t\t\tif (locationsData[i] != null) {\n\t\t\t\tpresentLocations.add(locationsData[i]);\n\t\t\t}\n\t\t}\n\n\t\tfinal SegmentsAndData segmentsAndData = SegmentedReadData.wrap(readData, presentLocations);\n\t\tfinal Iterator<Segment> presentSegments = segmentsAndData.segments().iterator();\n\t\tfor (int i = 0; i < locationsData.length; i++) {\n\t\t\tif (locationsData[i] != null) {\n\t\t\t\tsegmentsData[i] = presentSegments.next();\n\t\t\t}\n\t\t}\n\n\t\tfinal NDArray<Segment> index = new NDArray<>(locations.size, segmentsData);\n\t\tfinal SegmentedReadData data = segmentsAndData.data();\n\t\treturn new SegmentIndexAndData() {\n\t\t\t@Override public NDArray<Segment>\n\t\t\t
index() {return index;}\n\t\t\t@Override public SegmentedReadData data() {return data;}\n\t\t};\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/util/FloatValueParser.java",
    "content": "package org.janelia.saalfeldlab.n5.util;\n\nimport org.apache.commons.codec.DecoderException;\nimport org.apache.commons.codec.binary.Hex;\nimport org.janelia.saalfeldlab.n5.N5Exception;\n\n/**\n * Parses {@link Float} and {@link Double} values from JSON hex strings.\n * <p>\n * This does not directly cover the Strings \"NaN\", \"Infinity\", and \"-Infinity\"\n * but they are parsable by the parseDouble and parseFloat methods. Rather, this\n * class handles converting to and from the hex representations of NaN,\n * -Infinity, Infinity, and all other allowable values.\n */\npublic class FloatValueParser {\n\n\t/**\n\t * Parses a hex string to a float value.\n\t * \n\t * @param hexString\n\t *            hex string in format \"0x\" followed by 8 hex digits\n\t * @return the float value\n\t * @throws N5Exception\n\t *             if the string format is invalid\n\t */\n\tpublic static float parseFloat(String hexString) throws N5Exception {\n\n\t\tvalidateFloat(hexString);\n\t\tfinal int intValue = Integer.parseUnsignedInt(hexString.substring(2), 16);\n\t\treturn Float.intBitsToFloat(intValue);\n\t}\n\n\t/**\n\t * Encodes a float value to a hex string.\n\t * \n\t * @param value\n\t *            the float to encode\n\t * @return hex string in format \"0x\" followed by 8 hex digits\n\t */\n\tpublic static String encodeFloat(float value) {\n\n\t\treturn String.format(\"0x%08x\", Float.floatToIntBits(value));\n\t}\n\n\tprivate static void validateFloat(String hexString) {\n\n\t\tif (!hexString.startsWith(\"0x\") || hexString.length() != 10)\n\t\t\tthrow new N5Exception(\"Could not parse string \" + hexString + \" as float.\");\n\t}\n\n\t/**\n\t * Parses a hex string to a double value.\n\t * \n\t * @param hexString\n\t *            hex string in format \"0x\" followed by 16 hex digits\n\t * @return the double value\n\t * @throws N5Exception\n\t *             if the string format is invalid\n\t */\n\tpublic static double parseDouble(String hexString) throws 
N5Exception {\n\n\t\tvalidateDouble(hexString);\n\t\tfinal long longValue = Long.parseUnsignedLong(hexString.substring(2), 16);\n\t\treturn Double.longBitsToDouble(longValue);\n\t}\n\n\t/**\n\t * Encodes a double value to a hex string.\n\t * \n\t * @param value\n\t *            the double to encode\n\t * @return hex string in format \"0x\" followed by 16 hex digits\n\t */\n\tpublic static String encodeDouble(double value) {\n\n\t\treturn String.format(\"0x%016x\", Double.doubleToLongBits(value));\n\t}\n\n\tprivate static void validateDouble(String hexString) {\n\n\t\tif (!hexString.startsWith(\"0x\") || hexString.length() != 18)\n\t\t\tthrow new N5Exception(\"Could not parse string \" + hexString + \" as double.\");\n\t}\n\n\t/**\n\t * Parses a hex string to a byte array.\n\t * \n\t * @param hexString\n\t *            hex string in format \"0x\" followed by hex digits\n\t * @return the decoded byte array\n\t * @throws N5Exception\n\t *             if the string format is invalid or decoding fails\n\t */\n\tpublic static byte[] parseBytes(String hexString) throws N5Exception {\n\n\t\tvalidateBytes(hexString);\n\t\ttry {\n\t\t\treturn Hex.decodeHex(hexString.substring(2));\n\t\t} catch (DecoderException e) {\n\t\t\tthrow new N5Exception(e);\n\t\t}\n\t}\n\n\tprivate static void validateBytes(String hexString) {\n\n\t\tif (!hexString.startsWith(\"0x\"))\n\t\t\tthrow new N5Exception(\"Could not parse string \" + hexString + \" to bytes.\");\n\t}\n\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/util/MemCopy.java",
    "content": "package org.janelia.saalfeldlab.n5.util;\n\nimport org.janelia.saalfeldlab.n5.DataType;\n\n/**\n * Low-level range copying methods between source and target primitve array\n * (type {@code T}, e.g., {@code double[]}).\n *\n * @param <T>\n * \t\tthe source/target type. Must be a primitive array type (e.g., {@code double[]})\n */\npublic interface MemCopy<T> {\n\n\tMemCopyByte BYTE = new MemCopyByte();\n\tMemCopyShort SHORT = new MemCopyShort();\n\tMemCopyInt INT = new MemCopyInt();\n\tMemCopyLong LONG = new MemCopyLong();\n\tMemCopyFloat FLOAT = new MemCopyFloat();\n\tMemCopyDouble DOUBLE = new MemCopyDouble();\n\n\tstatic MemCopy<?> forDataType(final DataType dataType) {\n\t\tswitch (dataType) {\n\t\tcase UINT8:\n\t\tcase INT8:\n\t\t\treturn BYTE;\n\t\tcase UINT16:\n\t\tcase INT16:\n\t\t\treturn SHORT;\n\t\tcase UINT32:\n\t\tcase INT32:\n\t\t\treturn INT;\n\t\tcase UINT64:\n\t\tcase INT64:\n\t\t\treturn LONG;\n\t\tcase FLOAT32:\n\t\t\treturn FLOAT;\n\t\tcase FLOAT64:\n\t\t\treturn DOUBLE;\n\t\tcase STRING:\n\t\tcase OBJECT:\n\t\t\tthrow new UnsupportedOperationException(\"TODO?\");\n\t\tdefault:\n\t\t\tthrow new IllegalArgumentException();\n\t\t}\n\t}\n\n\t/**\n\t * Copy {@code length} components from the {@code src} array to the {@code\n\t * dest} array. 
The components at positions {@code srcPos} through {@code\n\t * srcPos+length-1} in the source array are copied into positions {@code\n\t * destPos}, {@code destPos+destStride}, {@code destPos + 2*destStride},\n\t * etc., through {@code destPos+(length-1)*destStride} of the destination\n\t * array.\n\t */\n\tvoid copyStrided(T src, int srcPos, T dest, int destPos, int destStride, int length);\n\n\tclass MemCopyByte implements MemCopy<byte[]> {\n\n\t\t@Override\n\t\tpublic void copyStrided(final byte[] src, final int srcPos, final byte[] dest, final int destPos, final int destStride, final int length) {\n\t\t\tif (destStride == 1)\n\t\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, length);\n\t\t\telse\n\t\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\t\tdest[destPos + i * destStride] = src[srcPos + i];\n\t\t}\n\t}\n\n\tclass MemCopyShort implements MemCopy<short[]> {\n\n\t\t@Override\n\t\tpublic void copyStrided(final short[] src, final int srcPos, final short[] dest, final int destPos, final int destStride, final int length) {\n\t\t\tif (destStride == 1)\n\t\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, length);\n\t\t\telse\n\t\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\t\tdest[destPos + i * destStride] = src[srcPos + i];\n\t\t}\n\t}\n\n\tclass MemCopyInt implements MemCopy<int[]> {\n\n\t\t@Override\n\t\tpublic void copyStrided(final int[] src, final int srcPos, final int[] dest, final int destPos, final int destStride, final int length) {\n\t\t\tif (destStride == 1)\n\t\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, length);\n\t\t\telse\n\t\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\t\tdest[destPos + i * destStride] = src[srcPos + i];\n\t\t}\n\t}\n\n\tclass MemCopyLong implements MemCopy<long[]> {\n\n\t\t@Override\n\t\tpublic void copyStrided(final long[] src, final int srcPos, final long[] dest, final int destPos, final int destStride, final int length) {\n\t\t\tif (destStride == 1)\n\t\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, 
length);\n\t\t\telse\n\t\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\t\tdest[destPos + i * destStride] = src[srcPos + i];\n\t\t}\n\t}\n\n\tclass MemCopyFloat implements MemCopy<float[]> {\n\n\t\t@Override\n\t\tpublic void copyStrided(final float[] src, final int srcPos, final float[] dest, final int destPos, final int destStride, final int length) {\n\t\t\tif (destStride == 1)\n\t\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, length);\n\t\t\telse\n\t\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\t\tdest[destPos + i * destStride] = src[srcPos + i];\n\t\t}\n\t}\n\n\tclass MemCopyDouble implements MemCopy<double[]> {\n\n\t\t@Override\n\t\tpublic void copyStrided(final double[] src, final int srcPos, final double[] dest, final int destPos, final int destStride, final int length) {\n\t\t\tif (destStride == 1)\n\t\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, length);\n\t\t\telse\n\t\t\t\tfor (int i = 0; i < length; ++i)\n\t\t\t\t\tdest[destPos + i * destStride] = src[srcPos + i];\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/main/java/org/janelia/saalfeldlab/n5/util/SubArrayCopy.java",
    "content": "package org.janelia.saalfeldlab.n5.util;\n\n/**\n * Copy sub-region between flattened arrays (of different sizes).\n * <p>\n * The {@link #copy(Object, int[], int[], Object, int[], int[], int[]) SubArrayCopy.copy}\n * method requires the nD size of the flattened source and target arrays, the nD\n * starting position and nD size of the source region to copy, and the nD\n * starting position in the target to copy to.\n */\npublic interface SubArrayCopy\n{\n\t/**\n\t * Copy a nD region from {@code src} to {@code dest}, where {@code src} and\n\t * {@code dest} are flattened nD array of dimensions {@code srcSize} and\n\t * {@code destSize}, respectively.\n\t *\n\t * @param src\n\t * \t\tflattened nD source array\n\t * @param srcSize\n\t * \t\tdimensions of src\n\t * @param srcPos\n\t * \t\tstarting position, in src, of the range to copy\n\t * @param dest\n\t * \t\tflattened nD destination array\n\t * @param destSize\n\t * \t\tdimensions of dest\n\t * @param destPos\n\t * \t\tstarting position, in dest, of the range to copy\n\t * @param size\n\t * \t\tsize of the range to copy\n\t */\n\t// TODO: generic T instead of Object to make sure src and dest are the same primitive array type\n\tstatic void copy( Object src, int[] srcSize, int[] srcPos, Object dest, int[] destSize, int[] destPos, int[] size )\n\t{\n\t\tfinal int n = srcSize.length;\n\t\tassert srcPos.length == n;\n\t\tassert destSize.length == n;\n\t\tassert destPos.length == n;\n\t\tassert size.length == n;\n\n\t\tfinal int[] srcStrides = createAllocationSteps( srcSize );\n\t\tfinal int[] destStrides = createAllocationSteps( destSize );\n\t\tfinal int oSrc = positionToIndex( srcPos, srcSize );\n\t\tfinal int oDest = positionToIndex( destPos, destSize );\n\n\t\tcopyNDRangeRecursive( n - 1,\n\t\t\t\tsrc, srcStrides, oSrc,\n\t\t\t\tdest, destStrides, oDest,\n\t\t\t\tsize );\n\t}\n\n\t// TODO: maybe hide the implementation details below in an inner class\n\n\tstatic int positionToIndex( final int[] 
position, final int[] dimensions )\n\t{\n\t\tfinal int maxDim = dimensions.length - 1;\n\t\tint i = position[ maxDim ];\n\t\tfor ( int d = maxDim - 1; d >= 0; --d )\n\t\t\ti = i * dimensions[ d ] + position[ d ];\n\t\treturn i;\n\t}\n\n\t/**\n\t * Create allocation step array from the dimensions of an N-dimensional\n\t * array.\n\t *\n\t * @param dimensions\n\t * \t\tdimensions of the nD array\n\t * @param steps\n\t * \t\tpre-allocated array to be filled with the allocation steps (strides)\n\t */\n\t// TODO: rename to something with \"stride\"\n\t// TODO: inline into method below (or always use this one)\n\tstatic void createAllocationSteps( final int[] dimensions, final int[] steps )\n\t{\n\t\tsteps[ 0 ] = 1;\n\t\tfor ( int d = 1; d < dimensions.length; ++d )\n\t\t\tsteps[ d ] = steps[ d - 1 ] * dimensions[ d - 1 ];\n\t}\n\n\t// TODO: rename to something with \"stride\"\n\t// TODO: inline above (or always use that one)\n\tstatic int[] createAllocationSteps( final int[] dimensions )\n\t{\n\t\tfinal int[] steps = new int[ dimensions.length ];\n\t\tcreateAllocationSteps( dimensions, steps );\n\t\treturn steps;\n\t}\n\n\n\t/**\n\t * Recursively copy a {@code (d+1)} dimensional region from {@code src} to\n\t * {@code dest}, where {@code src} and {@code dest} are flattened nD arrays\n\t * with strides {@code srcStrides} and {@code destStrides}, respectively.\n\t * <p>\n\t * For {@code d=0}, a 1D line of length {@code size[0]} is copied\n\t * (equivalent to {@code System.arraycopy}). For {@code d=1}, a 2D plane of\n\t * size {@code size[0] * size[1]} is copied, by recursively copying 1D\n\t * lines, starting {@code srcStrides[1]} (respectively {@code\n\t * destStrides[1]}) apart. 
For {@code d=2}, a 3D box is copied by\n\t * recursively copying 2D planes, etc.\n\t *\n\t * @param d\n\t * \t\tcurrent dimension\n\t * @param src\n\t * \t\tflattened nD source array\n\t * @param srcStrides\n\t *      nD strides of src\n\t * @param srcPos\n\t * \t\tflattened index (in src) to start copying from\n\t * @param dest\n\t * \t\tflattened nD destination array\n\t * @param destStrides\n\t *      nD strides of dest\n\t * @param destPos\n\t * \t\tflattened index (in dest) to start copying to\n\t * @param size\n\t * \t\tnD size of the range to copy\n\t */\n\tstatic <T> void copyNDRangeRecursive(\n\t\t\tfinal int d,\n\t\t\tfinal T src,\n\t\t\tfinal int[] srcStrides,\n\t\t\tfinal int srcPos,\n\t\t\tfinal T dest,\n\t\t\tfinal int[] destStrides,\n\t\t\tfinal int destPos,\n\t\t\tfinal int[] size\n\t) {\n\t\tfinal int len = size[d];\n\t\tif (d > 0) {\n\t\t\tfinal int stride_src = srcStrides[d];\n\t\t\tfinal int stride_dst = destStrides[d];\n\t\t\tfor (int i = 0; i < len; ++i)\n\t\t\t\tcopyNDRangeRecursive(d - 1,\n\t\t\t\t\t\tsrc, srcStrides, srcPos + i * stride_src,\n\t\t\t\t\t\tdest, destStrides, destPos + i * stride_dst,\n\t\t\t\t\t\tsize);\n\t\t} else\n\t\t\tSystem.arraycopy(src, srcPos, dest, destPos, len);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/AbstractN5Test.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertThrows;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\n\nimport java.io.IOException;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Random;\nimport java.util.UUID;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.Executors;\nimport java.util.function.Predicate;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5ClassCastException;\nimport org.janelia.saalfeldlab.n5.N5Reader.Version;\nimport org.janelia.saalfeldlab.n5.url.UriAttributeTest;\nimport org.junit.After;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\nimport com.google.gson.GsonBuilder;\nimport com.google.gson.JsonArray;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonNull;\nimport com.google.gson.JsonObject;\nimport com.google.gson.JsonParser;\nimport com.google.gson.reflect.TypeToken;\n\n/**\n * Abstract base class for testing N5 functionality.\n * Subclasses are expected to provide a specific N5 implementation to be tested by defining the {@link #createN5Writer()} method.\n * <p>\n * This class does not create sharded datasets. Its tests generally call read/writeBlock which are equivalent to read/writeChunk\n * for the cases being tested here. 
The test {@link #testReadChunkVsBlock} checks that the equivalence between these methods holds.\n *\n * @author Stephan Saalfeld &lt;saalfelds@janelia.hhmi.org&gt;\n * @author Igor Pisarev &lt;pisarevi@janelia.hhmi.org&gt;\n * @author John Bogovic &lt;bogovicj@janelia.hhmi.org&gt;\n * @author Caleb Hulbert &lt;hulbertc@janelia.hhmi.org&gt;\n */\npublic abstract class AbstractN5Test {\n\n\tstatic protected final String groupName = \"/test/group\";\n\tstatic protected final String[] subGroupNames = new String[]{\"a\", \"b\", \"c\"};\n\tstatic protected final String datasetName = \"/test/group/dataset\";\n\tstatic protected final long[] dimensions = new long[]{6, 15, 35};\n\tstatic protected final int[] blockSize = new int[]{3, 5, 7};\n\tstatic protected final int blockNumElements = blockSize[0] * blockSize[1] * blockSize[2];\n\n\tstatic protected byte[] byteBlock;\n\tstatic protected short[] shortBlock;\n\tstatic protected int[] intBlock;\n\tstatic protected long[] longBlock;\n\tstatic protected float[] floatBlock;\n\tstatic protected double[] doubleBlock;\n\n\tprotected final HashSet<N5Writer> tempWriters = new HashSet<>();\n\n\tprotected static Random random = new Random();\n\n\tpublic static URI createTempUri(String prefix, String suffix, URI base) {\n\t\tlong rand = random.nextLong();\n\t\tString name = prefix + Long.toUnsignedString(rand) + (suffix == null ? \"\" : suffix);\n\t\tif (base != null) {\n\t\t\tString basePath = base.getPath().isEmpty() || base.getPath().endsWith(\"/\") ? 
base.getPath() : base.getPath() + \"/\";\n\t\t\treturn base.resolve(basePath + name);\n\t\t}\n\t\treturn N5URI.getAsUri(name);\n\t}\n\n\tpublic N5Writer createTempN5Writer() {\n\n\t\ttry {\n\t\t\treturn createTempN5Writer(tempN5Location());\n\t\t} catch (URISyntaxException | IOException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\tpublic final N5Writer createTempN5Writer(String location) {\n\n\t\treturn createTempN5Writer(location, new GsonBuilder());\n\t}\n\n\tprotected final N5Writer createTempN5Writer(String location, GsonBuilder gson) {\n\n\t\tfinal N5Writer tempWriter;\n\t\ttry {\n\t\t\ttempWriter = createN5Writer(location, gson);\n\t\t} catch (IOException | URISyntaxException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t\ttempWriters.add(tempWriter);\n\t\treturn tempWriter;\n\t}\n\n\t@After\n\tpublic void removeTempWriters() {\n\n\t\tsynchronized (tempWriters) {\n\t\t\tfor (final N5Writer writer : tempWriters) {\n\t\t\t\ttry {\n\t\t\t\t\twriter.remove();\n\t\t\t\t} catch (final Exception e) {\n\t\t\t\t}\n\t\t\t}\n\t\t\ttempWriters.clear();\n\t\t}\n\t}\n\n\tprotected abstract String tempN5Location() throws URISyntaxException, IOException;\n\n\tprotected N5Writer createN5Writer() throws IOException, URISyntaxException {\n\n\t\treturn createN5Writer(tempN5Location());\n\t}\n\n\tprotected N5Writer createN5Writer(final String location) throws IOException, URISyntaxException {\n\n\t\treturn createN5Writer(location, new GsonBuilder());\n\t}\n\n\t/* Tests that override this should ensure that the `N5Writer` created will remove its container on close() */\n\tprotected abstract N5Writer createN5Writer(String location, GsonBuilder gson) throws IOException, URISyntaxException;\n\n\tprotected N5Reader createN5Reader(final String location) throws IOException, URISyntaxException {\n\n\t\treturn createN5Reader(location, new GsonBuilder());\n\t}\n\n\tprotected abstract N5Reader createN5Reader(String location, GsonBuilder gson) throws IOException, 
URISyntaxException;\n\n\tprotected Compression[] getCompressions() {\n\n\t\treturn new Compression[]{\n\t\t\t\tnew RawCompression(),\n\t\t\t\tnew Bzip2Compression(),\n\t\t\t\tnew GzipCompression(),\n\t\t\t\tnew GzipCompression(5, true),\n\t\t\t\tnew Lz4Compression(),\n\t\t\t\tnew XzCompression()\n\t\t};\n\t}\n\n\t@Before\n\tpublic void setUpOnce() {\n\n\t\tfinal Random rnd = new Random(111);\n\t\tbyteBlock = new byte[blockNumElements];\n\t\tshortBlock = new short[blockNumElements];\n\t\tintBlock = new int[blockNumElements];\n\t\tlongBlock = new long[blockNumElements];\n\t\tfloatBlock = new float[blockNumElements];\n\t\tdoubleBlock = new double[blockNumElements];\n\t\trnd.nextBytes(byteBlock);\n\t\tfor (int i = 0; i < blockNumElements; ++i) {\n\t\t\tshortBlock[i] = (short)rnd.nextInt();\n\t\t\tintBlock[i] = rnd.nextInt();\n\t\t\tlongBlock[i] = rnd.nextLong();\n\t\t\tfloatBlock[i] = Float.intBitsToFloat(rnd.nextInt());\n\t\t\tdoubleBlock[i] = Double.longBitsToDouble(rnd.nextLong());\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testCreateGroup() {\n\n\t\ttry (N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createGroup(groupName);\n\t\t\tassertTrue(\"Group does not exist: \" + groupName, n5.exists(groupName));\n\t\t\tfinal Path groupPath = Paths.get(groupName);\n\t\t\tString subGroup = \"\";\n\t\t\tfor (int i = 0; i < groupPath.getNameCount(); ++i) {\n\t\t\t\tsubGroup = subGroup + \"/\" + groupPath.getName(i);\n\t\t\t\tassertTrue(\"Group does not exist: \" + subGroup, n5.exists(subGroup));\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testSetAttributeDoesntCreateGroup() {\n\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\t\t\tfinal String testGroup = \"/group/should/not/exist\";\n\t\t\tassertFalse(writer.exists(testGroup));\n\t\t\tassertThrows(N5Exception.N5IOException.class, () -> writer.setAttribute(testGroup, \"test\", \"test\"));\n\t\t\tassertFalse(writer.exists(testGroup));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testCreateDataset() {\n\n\t\t\tfinal\n\t\t\t
DatasetAttributes info;\n\t\t\ttry (N5Writer writer = createTempN5Writer()) {\n\t\t\t\twriter.createDataset(datasetName, dimensions, blockSize, DataType.UINT64, new RawCompression());\n\n\t\t\t\tassertTrue(\"Dataset does not exist\", writer.exists(datasetName));\n\n\t\t\t\tinfo = writer.getDatasetAttributes(datasetName);\n\t\t\t}\n\t\t\tassertArrayEquals(dimensions, info.getDimensions());\n\t\t\tassertArrayEquals(blockSize, info.getBlockSize());\n\t\t\tassertArrayEquals(\"blockSize == chunkSize when not sharded\", blockSize, info.getChunkSize());\n\t\t\tassertEquals(DataType.UINT64, info.getDataType());\n\t}\n\n\t@Test\n\tpublic void testBlocksLargerThanDimensions() {\n\n\t\t// Test case where block size is larger than dataset dimensions\n\t\tfinal long[] smallDimensions = new long[]{2, 3, 4};\n\t\tfinal int[] largeBlockSize = new int[]{5, 7, 10};\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\tfinal DatasetAttributes attributes = n5.createDataset(\n\t\t\t\t\tdatasetName, smallDimensions, largeBlockSize, DataType.UINT8, new RawCompression());\n\n\t\t\t// Create a block that is larger than the dataset dimensions\n\t\t\tfinal int numElements = largeBlockSize[0] * largeBlockSize[1] * largeBlockSize[2];\n\t\t\tfinal byte[] data = new byte[numElements];\n\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\tdata[i] = (byte)(i % 256);\n\t\t\t}\n\n\t\t\tfinal ByteArrayDataBlock dataBlock = new ByteArrayDataBlock(largeBlockSize, new long[]{0, 0, 0}, data);\n\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t// Read the block back\n\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\t\t\tassertNotNull(\"Block should be readable\", loadedDataBlock);\n\t\t\tassertArrayEquals(\"Block size should match\", largeBlockSize, loadedDataBlock.getSize());\n\t\t\tassertArrayEquals(\"Block data should match\", data, (byte[])loadedDataBlock.getData());\n\t\t}\n\t}\n\n\t@Test\n\tpublic void 
testUnalignedBlocksTruncatedAtEnd() {\n\n\t\t// Test case where dimensions don't evenly divide by block size\n\t\tfinal long[] unalignedDimensions = new long[]{5, 14, 33};\n\t\tfinal int[] testBlockSize = new int[]{3, 5, 7};\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createDataset(datasetName, unalignedDimensions, testBlockSize, DataType.INT32, new RawCompression());\n\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\n\t\t\t// Test writing to the last block in dimension 0 (should be truncated to size 2 instead of 3)\n\t\t\tfinal int[] truncatedBlockSize0 = new int[]{2, 5, 7}; // [3-4] in dim 0\n\t\t\tfinal int numElements0 = truncatedBlockSize0[0] * truncatedBlockSize0[1] * truncatedBlockSize0[2];\n\t\t\tfinal int[] data0 = new int[numElements0];\n\t\t\tfor (int i = 0; i < numElements0; i++) {\n\t\t\t\tdata0[i] = i + 1000;\n\t\t\t}\n\t\t\tfinal IntArrayDataBlock dataBlock0 = new IntArrayDataBlock(truncatedBlockSize0, new long[]{1, 0, 0}, data0);\n\t\t\tn5.writeBlock(datasetName, attributes, dataBlock0);\n\n\t\t\tfinal DataBlock<?> loadedBlock0 = n5.readBlock(datasetName, attributes, 1, 0, 0);\n\t\t\tassertNotNull(\"Truncated block should be readable\", loadedBlock0);\n\t\t\tassertArrayEquals(\"Truncated block data should match\", data0, (int[])loadedBlock0.getData());\n\n\t\t\t// Test writing to the last block in dimension 1 (should be truncated to size 4 instead of 5)\n\t\t\tfinal int[] truncatedBlockSize1 = new int[]{3, 4, 7}; // [10-13] in dim 1\n\t\t\tfinal int numElements1 = truncatedBlockSize1[0] * truncatedBlockSize1[1] * truncatedBlockSize1[2];\n\t\t\tfinal int[] data1 = new int[numElements1];\n\t\t\tfor (int i = 0; i < numElements1; i++) {\n\t\t\t\tdata1[i] = i + 2000;\n\t\t\t}\n\t\t\tfinal IntArrayDataBlock dataBlock1 = new IntArrayDataBlock(truncatedBlockSize1, new long[]{0, 2, 0}, data1);\n\t\t\tn5.writeBlock(datasetName, attributes, dataBlock1);\n\n\t\t\tfinal DataBlock<?> loadedBlock1 = 
n5.readBlock(datasetName, attributes, 0, 2, 0);\n\t\t\tassertNotNull(\"Truncated block should be readable\", loadedBlock1);\n\t\t\tassertArrayEquals(\"Truncated block data should match\", data1, (int[])loadedBlock1.getData());\n\n\t\t\t// Test writing to the last block in dimension 2 (should be truncated to size 5 instead of 7)\n\t\t\tfinal int[] truncatedBlockSize2 = new int[]{3, 5, 5}; // [28-32] in dim 2\n\t\t\tfinal int numElements2 = truncatedBlockSize2[0] * truncatedBlockSize2[1] * truncatedBlockSize2[2];\n\t\t\tfinal int[] data2 = new int[numElements2];\n\t\t\tfor (int i = 0; i < numElements2; i++) {\n\t\t\t\tdata2[i] = i + 3000;\n\t\t\t}\n\t\t\tfinal IntArrayDataBlock dataBlock2 = new IntArrayDataBlock(truncatedBlockSize2, new long[]{0, 0, 4}, data2);\n\t\t\tn5.writeBlock(datasetName, attributes, dataBlock2);\n\n\t\t\tfinal DataBlock<?> loadedBlock2 = n5.readBlock(datasetName, attributes, 0, 0, 4);\n\t\t\tassertNotNull(\"Truncated block should be readable\", loadedBlock2);\n\t\t\tassertArrayEquals(\"Truncated block data should match\", data2, (int[])loadedBlock2.getData());\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadByteBlock() {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\tfor (final DataType dataType : new DataType[]{\n\t\t\t\t\tDataType.UINT8, DataType.INT8}) {\n\n\t\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\t\tfinal ByteArrayDataBlock dataBlock = new ByteArrayDataBlock(blockSize, new long[]{0, 0, 0}, byteBlock);\n\t\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\t\t\t\t\tassertArrayEquals(byteBlock, (byte[])loadedDataBlock.getData());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadStringBlock() 
{\n\n\t\t// test dataset; all characters are valid UTF-8 but may have different numbers of bytes!\n\t\tfinal DataType dataType = DataType.STRING;\n\t\tfinal int[] blockSize = new int[]{3, 2, 1};\n\t\tfinal String[] stringBlock = new String[]{\"\", \"a\", \"bc\", \"de\", \"fgh\", \":-þ\"};\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\tfinal StringDataBlock dataBlock = new StringDataBlock(blockSize, new long[]{0L, 0L, 0L}, stringBlock);\n\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0L, 0L, 0L);\n\n\t\t\t\tassertArrayEquals(stringBlock, (String[])loadedDataBlock.getData());\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadShortBlock() {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\tfor (final DataType dataType : new DataType[]{\n\t\t\t\t\tDataType.UINT16,\n\t\t\t\t\tDataType.INT16}) {\n\n\t\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\t\tfinal ShortArrayDataBlock dataBlock = new ShortArrayDataBlock(blockSize, new long[]{0, 0, 0}, shortBlock);\n\t\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\n\t\t\t\t\tassertArrayEquals(shortBlock, (short[])loadedDataBlock.getData());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadIntBlock() {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\tfor (final DataType dataType : new 
DataType[]{\n\t\t\t\t\tDataType.UINT32,\n\t\t\t\t\tDataType.INT32}) {\n\n\t\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\t\tfinal IntArrayDataBlock dataBlock = new IntArrayDataBlock(blockSize, new long[]{0, 0, 0}, intBlock);\n\t\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\n\t\t\t\t\tassertArrayEquals(intBlock, (int[])loadedDataBlock.getData());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadLongBlock() {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\tfor (final DataType dataType : new DataType[]{\n\t\t\t\t\tDataType.UINT64,\n\t\t\t\t\tDataType.INT64}) {\n\n\t\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\t\tfinal LongArrayDataBlock dataBlock = new LongArrayDataBlock(blockSize, new long[]{0, 0, 0}, longBlock);\n\t\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\n\t\t\t\t\tassertArrayEquals(longBlock, (long[])loadedDataBlock.getData());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadFloatBlock() {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.FLOAT32, compression);\n\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\tfinal FloatArrayDataBlock dataBlock = new FloatArrayDataBlock(blockSize, new long[]{0, 0, 0}, 
floatBlock);\n\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\n\t\t\t\tassertArrayEquals(floatBlock, (float[])loadedDataBlock.getData(), 0.001f);\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadDoubleBlock() {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.FLOAT64, compression);\n\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\tfinal DoubleArrayDataBlock dataBlock = new DoubleArrayDataBlock(blockSize, new long[]{0, 0, 0}, doubleBlock);\n\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\n\t\t\t\tassertArrayEquals(doubleBlock, (double[])loadedDataBlock.getData(), 0.001);\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testReadChunkVsBlock() {\n\n\t\t// test that readBlock behaves the same as readChunk for unsharded datasets\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\n\t\t\t\tfinal short[] shortData1 = new short[shortBlock.length];\n\t\t\t\tfor( int i = 0; i < shortBlock.length; i++)\n\t\t\t\t\tshortData1[i] = (short)(2 * shortBlock[i] + 3);\n\n\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.INT16, compression);\n\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\tfinal ShortArrayDataBlock dataBlock0 = new ShortArrayDataBlock(blockSize, new long[]{0, 0, 0}, shortBlock);\n\t\t\t\tfinal ShortArrayDataBlock dataBlock1 = new ShortArrayDataBlock(blockSize, new long[]{1, 0, 0}, shortData1);\n\n\t\t\t\tn5.writeChunk(datasetName, attributes, dataBlock0);\n\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock1);\n\n\t\t\t\t// 
read with readBlock\n\t\t\t\tassertArrayEquals(shortBlock, (short[])n5.readBlock(datasetName, attributes, 0, 0, 0).getData());\n\t\t\t\tassertArrayEquals(shortData1, (short[])n5.readBlock(datasetName, attributes, 1, 0, 0).getData());\n\n\t\t\t\t// read with readChunk\n\t\t\t\tassertArrayEquals(shortBlock, (short[])n5.readChunk(datasetName, attributes, 0, 0, 0).getData());\n\t\t\t\tassertArrayEquals(shortData1, (short[])n5.readChunk(datasetName, attributes, 1, 0, 0).getData());\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testMode1WriteReadByteBlock() {\n\n\t\tfinal int[] differentBlockSize = new int[]{5, 10, 15};\n\n\t\tfor (final Compression compression : getCompressions()) {\n\t\t\tfor (final DataType dataType : new DataType[]{\n\t\t\t\t\tDataType.UINT8,\n\t\t\t\t\tDataType.INT8}) {\n\n\t\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\t\tn5.createDataset(datasetName, dimensions, differentBlockSize, dataType, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\t\t\tfinal ByteArrayDataBlock dataBlock = new ByteArrayDataBlock(differentBlockSize, new long[]{0, 0, 0}, byteBlock);\n\t\t\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t\t\tfinal DataBlock<?> loadedDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\n\t\t\t\t\tassertArrayEquals(byteBlock, (byte[])loadedDataBlock.getData());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriteReadSerializableBlock() throws ClassNotFoundException {\n\n\t\tfor (final Compression compression : getCompressions()) {\n\n\t\t\tfinal DataType dataType = DataType.OBJECT;\n\t\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\n\t\t\t\tfinal HashMap<String, ArrayList<double[]>> object = new HashMap<>();\n\t\t\t\tobject.put(\"one\", new 
ArrayList<>());\n\t\t\t\tobject.put(\"two\", new ArrayList<>());\n\t\t\t\tobject.get(\"one\").add(new double[]{1, 2, 3});\n\t\t\t\tobject.get(\"two\").add(new double[]{4, 5, 6, 7, 8});\n\n\t\t\t\tn5.writeSerializedBlock(object, datasetName, attributes, 0, 0, 0);\n\n\t\t\t\tfinal HashMap<String, ArrayList<double[]>> loadedObject = n5.readSerializedBlock(datasetName, attributes, new long[]{0, 0, 0});\n\n\t\t\t\tobject.forEach((key, value) -> assertArrayEquals(value.get(0), loadedObject.get(key).get(0), 0.01));\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\t@Ignore // TODO\n\tpublic void testWriteInvalidBlock() {\n\n\t\tfinal Compression compression = getCompressions()[0];\n\t\tfinal DataType dataType = DataType.UINT8;\n\n\t\tfinal int[] biggerBlockSize = Arrays.stream(blockSize).map(x -> x + 2).toArray();\n\t\tint nBigger = Arrays.stream(biggerBlockSize).reduce(1, (x, y) -> x * y);\n\n\t\tfinal int[] smallerBlockSize = Arrays.stream(blockSize).map(x -> x - 2).toArray();\n\t\tint nSmaller = Arrays.stream(smallerBlockSize).reduce(1, (x, y) -> x * y);\n\n\t\tint N = Arrays.stream(blockSize).reduce(1, (x, y) -> x * y);\n\n\t\tfinal Random rnd = new Random(7560);\n\t\tfinal byte[] biggerData = new byte[nBigger];\n\t\trnd.nextBytes(biggerData);\n\n\t\tfinal byte[] smallerData = new byte[nSmaller];\n\t\trnd.nextBytes(smallerData);\n\n\t\tfinal float[] floatData = new float[N];\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\n\t\t\tn5.createDataset(datasetName, dimensions, blockSize, dataType, compression);\n\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\n\t\t\t// write a block that is too large\n\t\t\tfinal ByteArrayDataBlock bigDataBlock = new ByteArrayDataBlock(biggerBlockSize, new long[]{0, 0, 0}, biggerData);\n\t\t\tn5.writeBlock(datasetName, attributes, bigDataBlock);\n\n\t\t\tfinal DataBlock<?> loadedBigDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\t\t\tassertArrayEquals(biggerData, 
(byte[])loadedBigDataBlock.getData());\n\n\t\t\t// write a block that is too small\n\t\t\tfinal ByteArrayDataBlock smallDataBlock = new ByteArrayDataBlock(smallerBlockSize, new long[]{0, 0, 0}, smallerData);\n\t\t\tn5.writeBlock(datasetName, attributes, smallDataBlock);\n\n\t\t\tfinal DataBlock<?> loadedSmallDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\t\t\tassertArrayEquals(smallerData, (byte[])loadedSmallDataBlock.getData());\n\n\t\t\t// write a block of the wrong type\n\t\t\tfinal FloatArrayDataBlock floatDataBlock = new FloatArrayDataBlock(blockSize, new long[]{0, 0, 0}, floatData);\n\t\t\tassertThrows(ClassCastException.class, () -> {\n\t\t\t\tn5.writeBlock(datasetName, attributes, floatDataBlock);\n\t\t\t});\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testOverwriteBlock() {\n\n\t\tfinal Compression compression = getCompressions()[0];\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.INT32, compression);\n\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\n\t\t\tfinal IntArrayDataBlock randomDataBlock = new IntArrayDataBlock(blockSize, new long[]{0, 0, 0}, intBlock);\n\t\t\tn5.writeBlock(datasetName, attributes, randomDataBlock);\n\t\t\tfinal DataBlock<?> loadedRandomDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\t\t\tassertArrayEquals(intBlock, (int[])loadedRandomDataBlock.getData());\n\n\t\t\t// test the case where the resulting file becomes shorter (because the data compresses better)\n\t\t\tfinal int[] emptyBlock = new int[DataBlock.getNumElements(blockSize)];\n\t\t\tfinal IntArrayDataBlock emptyDataBlock = new IntArrayDataBlock(blockSize, new long[]{0, 0, 0}, emptyBlock);\n\t\t\tn5.writeBlock(datasetName, attributes, emptyDataBlock);\n\t\t\tfinal DataBlock<?> loadedEmptyDataBlock = n5.readBlock(datasetName, attributes, 0, 0, 0);\n\t\t\tassertArrayEquals(emptyBlock, 
(int[])loadedEmptyDataBlock.getData());\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testAttributeParsingPrimitive() {\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\n\t\t\tn5.createGroup(groupName);\n\n\t\t\t/* Test parsing of int, int[], double, double[], String, and String[] types\n\t\t\t *\n\t\t\t *\tAll types are parseable as JsonElements or String\n\t\t\t *\n\t\t\t *\tints are also parseable as doubles and Strings\n\t\t\t *\tdoubles are also parseable as ints and Strings\n\t\t\t *\tStrings should be parseable as Strings\n\t\t\t *\n\t\t\t *\tint[]s should be parseable as double[]s and String[]s\n\t\t\t *\tdouble[]s should be parseable as int[]s and String[]s\n\t\t\t *\tString[]s should be parseable as String[]s\n\t\t\t */\n\n\t\t\tn5.setAttribute(groupName, \"key\", \"value\");\n\t\t\tassertEquals(\"value\", n5.getAttribute(groupName, \"key\", String.class));\n\t\t\tassertNotNull(n5.getAttribute(groupName, \"key\", JsonElement.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Integer.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", int[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Double.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", double[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", String[].class));\n\n\t\t\tn5.setAttribute(groupName, \"key\", new String[]{\"value\"});\n\t\t\tassertArrayEquals(new String[]{\"value\"}, n5.getAttribute(groupName, \"key\", String[].class));\n\t\t\tassertEquals(JsonParser.parseString(\"[\\\"value\\\"]\"), JsonParser.parseString(n5.getAttribute(groupName, \"key\", String.class)));\n\t\t\tassertNotNull(n5.getAttribute(groupName, \"key\", JsonElement.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", 
Integer.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", int[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Double.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", double[].class));\n\n\t\t\tn5.setAttribute(groupName, \"key\", 1);\n\t\t\tassertEquals(1, (long)n5.getAttribute(groupName, \"key\", Integer.class));\n\t\t\tassertEquals(1.0, n5.getAttribute(groupName, \"key\", Double.class), 1e-9);\n\t\t\tassertEquals(\"1\", n5.getAttribute(groupName, \"key\", String.class));\n\t\t\tassertNotNull(n5.getAttribute(groupName, \"key\", JsonElement.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", int[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", double[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", String[].class));\n\n\t\t\tn5.setAttribute(groupName, \"key\", new int[]{2, 3});\n\t\t\tassertArrayEquals(new int[]{2, 3}, n5.getAttribute(groupName, \"key\", int[].class));\n\t\t\tassertArrayEquals(new double[]{2.0, 3.0}, n5.getAttribute(groupName, \"key\", double[].class), 1e-9);\n\t\t\tassertEquals(JsonParser.parseString(\"[2,3]\"), JsonParser.parseString(n5.getAttribute(groupName, \"key\", String.class)));\n\t\t\tassertArrayEquals(new String[]{\"2\", \"3\"}, n5.getAttribute(groupName, \"key\", String[].class));\n\t\t\tassertNotNull(n5.getAttribute(groupName, \"key\", JsonElement.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Integer.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Double.class));\n\n\t\t\tn5.setAttribute(groupName, \"key\", 0.1);\n\t\t\tassertEquals(0, (long)n5.getAttribute(groupName, \"key\", Integer.class));\n\t\t\tassertEquals(0.1, 
n5.getAttribute(groupName, \"key\", Double.class), 1e-9);\n\t\t\tassertEquals(\"0.1\", n5.getAttribute(groupName, \"key\", String.class));\n\t\t\tassertNotNull(n5.getAttribute(groupName, \"key\", JsonElement.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", int[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", double[].class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", String[].class));\n\n\t\t\tn5.setAttribute(groupName, \"key\", new double[]{0.2, 0.3});\n\t\t\tassertArrayEquals(new int[]{0, 0}, n5.getAttribute(groupName, \"key\", int[].class)); // TODO returns not null, is this right?\n\t\t\tassertArrayEquals(new double[]{0.2, 0.3}, n5.getAttribute(groupName, \"key\", double[].class), 1e-9);\n\t\t\tassertEquals(JsonParser.parseString(\"[0.2,0.3]\"), JsonParser.parseString(n5.getAttribute(groupName, \"key\", String.class)));\n\t\t\tassertArrayEquals(new String[]{\"0.2\", \"0.3\"}, n5.getAttribute(groupName, \"key\", String[].class));\n\t\t\tassertNotNull(n5.getAttribute(groupName, \"key\", JsonElement.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Integer.class));\n\t\t\tassertThrows(N5ClassCastException.class, () -> n5.getAttribute(groupName, \"key\", Double.class));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testAttributes() {\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\tassertNull(n5.getAttribute(groupName, \"test\", String.class));\n\t\t\tassertEquals(0, n5.listAttributes(groupName).size());\n\t\t\tn5.createGroup(groupName);\n\t\t\tassertNull(n5.getAttribute(groupName, \"test\", String.class));\n\n\t\t\tassertEquals(0, n5.listAttributes(groupName).size());\n\n\t\t\tn5.setAttribute(groupName, \"key1\", \"value1\");\n\t\t\tassertEquals(1, n5.listAttributes(groupName).size());\n\n\t\t\t/* class interface */\n\t\t\tassertEquals(\"value1\", 
n5.getAttribute(groupName, \"key1\", String.class));\n\t\t\t/* type interface */\n\t\t\tassertEquals(\"value1\", n5.getAttribute(groupName, \"key1\", new TypeToken<String>() {\n\n\t\t\t}.getType()));\n\n\t\t\tfinal Map<String, String> newAttributes = new HashMap<>();\n\t\t\tnewAttributes.put(\"key2\", \"value2\");\n\t\t\tnewAttributes.put(\"key3\", \"value3\");\n\t\t\tn5.setAttributes(groupName, newAttributes);\n\t\t\tassertEquals(3, n5.listAttributes(groupName).size());\n\t\t\t/* class interface */\n\t\t\tassertEquals(\"value1\", n5.getAttribute(groupName, \"key1\", String.class));\n\t\t\tassertEquals(\"value2\", n5.getAttribute(groupName, \"key2\", String.class));\n\t\t\tassertEquals(\"value3\", n5.getAttribute(groupName, \"key3\", String.class));\n\t\t\t/* type interface */\n\t\t\tassertEquals(\"value1\", n5.getAttribute(groupName, \"key1\", new TypeToken<String>() {\n\n\t\t\t}.getType()));\n\t\t\tassertEquals(\"value2\", n5.getAttribute(groupName, \"key2\", new TypeToken<String>() {\n\n\t\t\t}.getType()));\n\t\t\tassertEquals(\"value3\", n5.getAttribute(groupName, \"key3\", new TypeToken<String>() {\n\n\t\t\t}.getType()));\n\n\t\t\t// test the case where the resulting file becomes shorter\n\t\t\tn5.setAttribute(groupName, \"key1\", 1);\n\t\t\tn5.setAttribute(groupName, \"key2\", 2);\n\t\t\tassertEquals(3, n5.listAttributes(groupName).size());\n\t\t\t/* class interface */\n\t\t\tassertEquals(Integer.valueOf(1), n5.getAttribute(groupName, \"key1\", Integer.class));\n\t\t\tassertEquals(Integer.valueOf(2), n5.getAttribute(groupName, \"key2\", Integer.class));\n\t\t\tassertEquals(\"value3\", n5.getAttribute(groupName, \"key3\", String.class));\n\t\t\t/* type interface */\n\t\t\tassertEquals(Integer.valueOf(1), n5.getAttribute(groupName, \"key1\", new TypeToken<Integer>() {\n\n\t\t\t}.getType()));\n\t\t\tassertEquals(Integer.valueOf(2), n5.getAttribute(groupName, \"key2\", new TypeToken<Integer>() {\n\n\t\t\t}.getType()));\n\t\t\tassertEquals(\"value3\", 
n5.getAttribute(groupName, \"key3\", new TypeToken<String>() {\n\n\t\t\t}.getType()));\n\n\t\t\tn5.setAttribute(groupName, \"key1\", null);\n\t\t\tn5.setAttribute(groupName, \"key2\", null);\n\t\t\tn5.setAttribute(groupName, \"key3\", null);\n\t\t\tassertEquals(0, n5.listAttributes(groupName).size());\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testDatasetAttributes() {\n\n\t\tfinal String dset = \"\";\n\t\tfinal String key = \"user-attr\";\n\t\tfinal String value = \"value\";\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\n\t\t\tn5.setAttribute(dset, key, value);\n\t\t\tn5.setDatasetAttributes(dset, new DatasetAttributes(new long[]{5}, new int[]{5}, DataType.INT32));\n\t\t\tassertNotNull(n5.getDatasetAttributes(dset));\n\t\t\tassertEquals(value, n5.getAttribute(dset, key, String.class));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testNullAttributes() throws URISyntaxException, IOException {\n\n\t\t/* serializeNulls*/\n\t\ttry (N5Writer writer = createTempN5Writer(tempN5Location(), new GsonBuilder().serializeNulls())) {\n\n\t\t\twriter.createGroup(groupName);\n\t\t\twriter.setAttribute(groupName, \"nullValue\", null);\n\t\t\tassertNull(writer.getAttribute(groupName, \"nullValue\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"nullValue\", JsonElement.class));\n\t\t\tfinal HashMap<String, Object> nulls = new HashMap<>();\n\t\t\tnulls.put(\"anotherNullValue\", null);\n\t\t\tnulls.put(\"structured/nullValue\", null);\n\t\t\tnulls.put(\"implicitNulls[3]\", null);\n\t\t\twriter.setAttributes(groupName, nulls);\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"anotherNullValue\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"anotherNullValue\", JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"structured/nullValue\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"structured/nullValue\", 
JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[3]\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"implicitNulls[3]\", JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[1]\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"implicitNulls[1]\", JsonElement.class));\n\n\t\t\t/* Negative test; a value that truly doesn't exist will still return `null` but will also return `null` when querying as a `JsonElement` */\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[10]\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[10]\", JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"keyDoesn'tExist\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"keyDoesn'tExist\", JsonElement.class));\n\n\t\t\t/* check existing value gets overwritten */\n\t\t\twriter.setAttribute(groupName, \"existingValue\", 1);\n\t\t\tassertEquals((Integer)1, writer.getAttribute(groupName, \"existingValue\", Integer.class));\n\t\t\twriter.setAttribute(groupName, \"existingValue\", null);\n\t\t\tassertThrows(N5ClassCastException.class, () -> writer.getAttribute(groupName, \"existingValue\", Integer.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"existingValue\", JsonElement.class));\n\t\t}\n\n\t\t/* without serializeNulls*/\n\t\ttry (N5Writer writer = createTempN5Writer(tempN5Location(), new GsonBuilder())) {\n\n\t\t\twriter.createGroup(groupName);\n\t\t\twriter.setAttribute(groupName, \"nullValue\", null);\n\t\t\tassertNull(writer.getAttribute(groupName, \"nullValue\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"nullValue\", JsonElement.class));\n\t\t\tfinal HashMap<String, Object> nulls = new HashMap<>();\n\t\t\tnulls.put(\"anotherNullValue\", null);\n\t\t\tnulls.put(\"structured/nullValue\", 
null);\n\t\t\tnulls.put(\"implicitNulls[3]\", null);\n\t\t\twriter.setAttributes(groupName, nulls);\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"anotherNullValue\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"anotherNullValue\", JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"structured/nullValue\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"structured/nullValue\", JsonElement.class));\n\n\t\t\t/* Arrays are still filled with `null`, regardless of `serializeNulls()`*/\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[3]\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"implicitNulls[3]\", JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[1]\", Object.class));\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(groupName, \"implicitNulls[1]\", JsonElement.class));\n\n\t\t\t/* Negative test; a value that truly doesn't exist will still return `null` but will also return `null` when querying as a `JsonElement` */\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[10]\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"implicitNulls[10]\", JsonElement.class));\n\n\t\t\tassertNull(writer.getAttribute(groupName, \"keyDoesn'tExist\", Object.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"keyDoesn'tExist\", JsonElement.class));\n\n\t\t\t/* check existing value gets overwritten */\n\t\t\twriter.setAttribute(groupName, \"existingValue\", 1);\n\t\t\tassertEquals((Integer)1, writer.getAttribute(groupName, \"existingValue\", Integer.class));\n\t\t\twriter.setAttribute(groupName, \"existingValue\", null);\n\t\t\tassertNull(writer.getAttribute(groupName, \"existingValue\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(groupName, \"existingValue\", JsonElement.class));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testRemoveAttributes() throws IOException, 
URISyntaxException {\n\n\t\ttry (N5Writer writer = createTempN5Writer(tempN5Location(), new GsonBuilder().serializeNulls())) {\n\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\t/* Remove Test without Type */\n\t\t\tassertTrue(writer.removeAttribute(\"\", \"a/b/c\"));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\t/* Remove Test with correct Type */\n\t\t\tassertEquals((Integer)100, writer.removeAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\t/* Remove Test with incorrect Type */\n\t\t\tassertThrows(N5ClassCastException.class, () -> writer.removeAttribute(\"\", \"a/b/c\", Boolean.class));\n\t\t\tfinal Integer abcInteger = writer.removeAttribute(\"\", \"a/b/c\", Integer.class);\n\t\t\tassertEquals((Integer)100, abcInteger);\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\t/* Remove Test with non-leaf */\n\t\t\tassertTrue(writer.removeAttribute(\"\", \"a/b\"));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b\", JsonObject.class));\n\n\t\t\twriter.setAttribute(\"\", \"a\\\\b\\\\c/b\\\\[10]\\\\c/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a\\\\b\\\\c/b\\\\[10]\\\\c/c\", Integer.class));\n\t\t\t/* Remove Test with escape-requiring key  */\n\t\t\tassertTrue(writer.removeAttribute(\"\", 
\"a\\\\b\\\\c/b\\\\[10]\\\\c/c\"));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a\\\\b\\\\c/b\\\\[10]\\\\c/c\", Integer.class));\n\n\t\t\twriter.setAttribute(\"\", \"a/b[9]\", 10);\n\t\t\tassertEquals((Integer)10, writer.getAttribute(\"\", \"a/b[9]\", Integer.class));\n\t\t\tassertEquals((Integer)0, writer.getAttribute(\"\", \"a/b[8]\", Integer.class));\n\t\t\t/*Remove test with arrays */\n\t\t\tassertTrue(writer.removeAttribute(\"\", \"a/b[5]\"));\n\t\t\tassertEquals(9, writer.getAttribute(\"\", \"a/b\", JsonArray.class).size());\n\t\t\tassertEquals((Integer)0, writer.getAttribute(\"\", \"a/b[5]\", Integer.class));\n\t\t\tassertEquals((Integer)10, writer.getAttribute(\"\", \"a/b[8]\", Integer.class));\n\t\t\tassertTrue(writer.removeAttribute(\"\", \"a/b[8]\"));\n\t\t\tassertEquals(8, writer.getAttribute(\"\", \"a/b\", JsonArray.class).size());\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b[8]\", Integer.class));\n\t\t\tassertTrue(writer.removeAttribute(\"\", \"a/b\"));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b[9]\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b\", Integer.class));\n\n\t\t\t/* ensure old remove behavior no longer works (i.e. 
setting to null should no longer remove) */\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", null);\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(\"\", \"a/b/c\", JsonNull.class));\n\t\t\twriter.removeAttribute(\"\", \"a/b/c\");\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", JsonNull.class));\n\n\t\t\t/* remove multiple, all present */\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\twriter.setAttribute(\"\", \"a/b/d\", \"test\");\n\t\t\twriter.setAttribute(\"\", \"a/c[9]\", 10);\n\t\t\twriter.setAttribute(\"\", \"a/c[5]\", 5);\n\n\t\t\tassertTrue(writer.removeAttributes(\"\", Arrays.asList(\"a/b/c\", \"a/b/d\", \"a/c[5]\")));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/d\", String.class));\n\t\t\tassertEquals(9, writer.getAttribute(\"\", \"a/c\", JsonArray.class).size());\n\t\t\tassertEquals((Integer)10, writer.getAttribute(\"\", \"a/c[8]\", Integer.class));\n\t\t\tassertEquals((Integer)0, writer.getAttribute(\"\", \"a/c[5]\", Integer.class));\n\n\t\t\t/* remove multiple, any present */\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\twriter.setAttribute(\"\", \"a/b/d\", \"test\");\n\t\t\twriter.setAttribute(\"\", \"a/c[9]\", 10);\n\t\t\twriter.setAttribute(\"\", \"a/c[5]\", 5);\n\n\t\t\tassertTrue(writer.removeAttributes(\"\", Arrays.asList(\"a/b/c\", \"a/b/d\", \"a/x[5]\")));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/d\", String.class));\n\t\t\tassertEquals(10, writer.getAttribute(\"\", \"a/c\", JsonArray.class).size());\n\t\t\tassertEquals((Integer)10, writer.getAttribute(\"\", \"a/c[9]\", Integer.class));\n\t\t\tassertEquals((Integer)5, writer.getAttribute(\"\", \"a/c[5]\", Integer.class));\n\n\t\t\t/* remove multiple, none present 
*/\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\twriter.setAttribute(\"\", \"a/b/d\", \"test\");\n\t\t\twriter.setAttribute(\"\", \"a/c[9]\", 10);\n\t\t\twriter.setAttribute(\"\", \"a/c[5]\", 5);\n\n\t\t\tassertFalse(writer.removeAttributes(\"\", Arrays.asList(\"X/b/c\", \"Z/b/d\", \"a/x[5]\")));\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\tassertEquals(\"test\", writer.getAttribute(\"\", \"a/b/d\", String.class));\n\t\t\tassertEquals(10, writer.getAttribute(\"\", \"a/c\", JsonArray.class).size());\n\t\t\tassertEquals((Integer)10, writer.getAttribute(\"\", \"a/c[9]\", Integer.class));\n\t\t\tassertEquals((Integer)5, writer.getAttribute(\"\", \"a/c[5]\", Integer.class));\n\n\t\t\t/* Test path normalization */\n\t\t\twriter.setAttribute(\"\", \"a/b/c\", 100);\n\t\t\tassertEquals((Integer)100, writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\t\t\tassertEquals((Integer)100, writer.removeAttribute(\"\", \"a////b/x/../c\", Integer.class));\n\t\t\tassertNull(writer.getAttribute(\"\", \"a/b/c\", Integer.class));\n\n\t\t\twriter.createGroup(\"foo\");\n\t\t\twriter.setAttribute(\"foo\", \"a\", 100);\n\t\t\twriter.removeAttribute(\"foo\", \"a\");\n\t\t\tassertNull(writer.getAttribute(\"foo\", \"a\", Integer.class));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testRemoveContainer() throws IOException, URISyntaxException {\n\n\t\tfinal String location = tempN5Location();\n\t\ttry (final N5Writer n5 = createTempN5Writer(location)) {\n\t\t\ttry (N5Reader n5Reader = createN5Reader(location)) {\n\t\t\t\tassertNotNull(n5Reader);\n\t\t\t}\n\t\t\tassertTrue(n5.remove());\n\t\t\tassertThrows(Exception.class, () -> createN5Reader(location).close());\n\t\t}\n\t\tassertThrows(Exception.class, () -> createN5Reader(location).close());\n\t}\n\n\t@Test\n\tpublic void testUri() throws IOException, URISyntaxException {\n\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\t\t\ttry (final N5Reader reader = 
createN5Reader(writer.getURI().toString())) {\n\t\t\t\tassertEquals(writer.getURI(), reader.getURI());\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testRemoveGroup() {\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.UINT64, new RawCompression());\n\t\t\tn5.remove(groupName);\n\t\t\tassertFalse(\"Group still exists\", n5.exists(groupName));\n\t\t}\n\n\t}\n\n\t@Test\n\tpublic void testList() throws IOException, URISyntaxException {\n\n\t\ttry (final N5Writer listN5 = createTempN5Writer()) {\n\t\t\tlistN5.createGroup(groupName);\n\t\t\tassertArrayEquals(\"New Group should return empty array for list()\", new String[0], listN5.list(groupName));\n\t\t\tfor (final String subGroup : subGroupNames) {\n\t\t\t\tfinal String childGroup = groupName + \"/\" + subGroup;\n\t\t\t\tlistN5.createGroup(childGroup);\n\t\t\t\tassertArrayEquals(\"New Group should return empty array for list()\", new String[0], listN5.list(childGroup));\n\t\t\t}\n\n\t\t\tfinal String[] groupsList = listN5.list(groupName);\n\t\t\tArrays.sort(groupsList);\n\n\t\t\tassertArrayEquals(subGroupNames, groupsList);\n\t\t\t/* test reading a container this reader didn't create. Ensures cache initialization works as expected. 
*/\n\t\t\ttry (final N5Reader listN5_2 = createN5Reader(listN5.getURI().toString())) {\n\n\t\t\t\tfinal String[] groupsList_2 = listN5_2.list(groupName);\n\t\t\t\tArrays.sort(groupsList_2);\n\n\t\t\t\tassertArrayEquals(subGroupNames, groupsList_2);\n\n\t\t\t}\n\t\t\t// test listing the root group (\"\" and \"/\" should give identical results)\n\t\t\tassertArrayEquals(new String[]{\"test\"}, listN5.list(\"\"));\n\t\t\tassertArrayEquals(new String[]{\"test\"}, listN5.list(\"/\"));\n\n\t\t\t// calling list on a non-existent group throws an exception\n\t\t\tassertThrows(N5Exception.class, () -> listN5.list(\"this-group-does-not-exist\"));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testDeepList() throws ExecutionException, InterruptedException {\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\n\t\t\tn5.createGroup(groupName);\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tn5.createGroup(groupName + \"/\" + subGroup);\n\n\t\t\tfinal List<String> groupsList = Arrays.asList(n5.deepList(\"/\"));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", groupsList.contains(groupName.replaceFirst(\"/\", \"\") + \"/\" + subGroup));\n\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", Arrays.asList(n5.deepList(\"\")).contains(groupName.replaceFirst(\"/\", \"\") + \"/\" + subGroup));\n\n\t\t\tfinal DatasetAttributes datasetAttributes = new DatasetAttributes(dimensions, blockSize, DataType.UINT64);\n\t\t\tfinal LongArrayDataBlock dataBlock = new LongArrayDataBlock(blockSize, new long[]{0, 0, 0}, new long[blockNumElements]);\n\t\t\tn5.createDataset(datasetName, datasetAttributes);\n\t\t\tn5.writeBlock(datasetName, datasetAttributes, dataBlock);\n\n\t\t\tfinal List<String> datasetList = Arrays.asList(n5.deepList(\"/\"));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", datasetList.contains(groupName.replaceFirst(\"/\", \"\") + \"/\" + 
subGroup));\n\t\t\tassertTrue(\"deepList contents\", datasetList.contains(datasetName.replaceFirst(\"/\", \"\")));\n\t\t\tassertFalse(\"deepList stops at datasets\", datasetList.contains(datasetName + \"/0\"));\n\n\t\t\tfinal List<String> datasetList2 = Arrays.asList(n5.deepList(\"\"));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", datasetList2.contains(groupName.replaceFirst(\"/\", \"\") + \"/\" + subGroup));\n\t\t\tassertTrue(\"deepList contents\", datasetList2.contains(datasetName.replaceFirst(\"/\", \"\")));\n\t\t\tassertFalse(\"deepList stops at datasets\", datasetList2.contains(datasetName + \"/0\"));\n\n\t\t\tfinal String prefix = \"/test\";\n\t\t\tfinal List<String> datasetList3 = Arrays.asList(n5.deepList(prefix));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", datasetList3.contains(\"group/\" + subGroup));\n\t\t\tassertTrue(\"deepList contents\", datasetList3.contains(datasetName.replaceFirst(prefix + \"/\", \"\")));\n\n\t\t\t// parallel deepList tests\n\t\t\tfinal List<String> datasetListP = Arrays.asList(n5.deepList(\"/\", Executors.newFixedThreadPool(2)));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", datasetListP.contains(groupName.replaceFirst(\"/\", \"\") + \"/\" + subGroup));\n\t\t\tassertTrue(\"deepList contents\", datasetListP.contains(datasetName.replaceFirst(\"/\", \"\")));\n\t\t\tassertFalse(\"deepList stops at datasets\", datasetListP.contains(datasetName + \"/0\"));\n\n\t\t\tfinal List<String> datasetListP2 = Arrays.asList(n5.deepList(\"\", Executors.newFixedThreadPool(2)));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", datasetListP2.contains(groupName.replaceFirst(\"/\", \"\") + \"/\" + subGroup));\n\t\t\tassertTrue(\"deepList contents\", datasetListP2.contains(datasetName.replaceFirst(\"/\", \"\")));\n\t\t\tassertFalse(\"deepList stops at datasets\", 
datasetListP2.contains(datasetName + \"/0\"));\n\n\t\t\tfinal List<String> datasetListP3 = Arrays.asList(n5.deepList(prefix, Executors.newFixedThreadPool(2)));\n\t\t\tfor (final String subGroup : subGroupNames)\n\t\t\t\tassertTrue(\"deepList contents\", datasetListP3.contains(\"group/\" + subGroup));\n\t\t\tassertTrue(\"deepList contents\", datasetListP3.contains(datasetName.replaceFirst(prefix + \"/\", \"\")));\n\t\t\tassertFalse(\"deepList stops at datasets\", datasetListP3.contains(datasetName + \"/0\"));\n\n\t\t\t// test filtering\n\t\t\tfinal Predicate<String> isCalledDataset = d -> d.endsWith(\"/dataset\");\n\t\t\tfinal Predicate<String> isBorC = d -> d.matches(\".*/[bc]$\");\n\n\t\t\tfinal List<String> datasetListFilter1 = Arrays.asList(n5.deepList(prefix, isCalledDataset));\n\t\t\tassertTrue(\n\t\t\t\t\t\"deepList filter \\\"dataset\\\"\",\n\t\t\t\t\tdatasetListFilter1.stream().map(x -> prefix + x).allMatch(isCalledDataset));\n\n\t\t\tfinal List<String> datasetListFilter2 = Arrays.asList(n5.deepList(prefix, isBorC));\n\t\t\tassertTrue(\n\t\t\t\t\t\"deepList filter \\\"b or c\\\"\",\n\t\t\t\t\tdatasetListFilter2.stream().map(x -> prefix + x).allMatch(isBorC));\n\n\t\t\tfinal List<String> datasetListFilterP1 =\n\t\t\t\t\tArrays.asList(n5.deepList(prefix, isCalledDataset, Executors.newFixedThreadPool(2)));\n\t\t\tassertTrue(\n\t\t\t\t\t\"deepList filter \\\"dataset\\\"\",\n\t\t\t\t\tdatasetListFilterP1.stream().map(x -> prefix + x).allMatch(isCalledDataset));\n\n\t\t\tfinal List<String> datasetListFilterP2 =\n\t\t\t\t\tArrays.asList(n5.deepList(prefix, isBorC, Executors.newFixedThreadPool(2)));\n\t\t\tassertTrue(\n\t\t\t\t\t\"deepList filter \\\"b or c\\\"\",\n\t\t\t\t\tdatasetListFilterP2.stream().map(x -> prefix + x).allMatch(isBorC));\n\n\t\t\t// test dataset filtering\n\t\t\tfinal List<String> datasetListFilterD = Arrays.asList(n5.deepListDatasets(prefix));\n\t\t\tassertTrue(\n\t\t\t\t\t\"deepListDataset\",\n\t\t\t\t\tdatasetListFilterD.size() == 1 && 
(prefix + \"/\" + datasetListFilterD.get(0)).equals(datasetName));\n\t\t\tassertArrayEquals(\n\t\t\t\t\tdatasetListFilterD.toArray(),\n\t\t\t\t\tn5.deepList(prefix, n5::datasetExists));\n\n\t\t\tfinal List<String> datasetListFilterDandBC = Arrays.asList(n5.deepListDatasets(prefix, isBorC));\n\t\t\tassertEquals(\"deepListDatasetFilter\", 0, datasetListFilterDandBC.size());\n\t\t\tassertArrayEquals(\n\t\t\t\t\tdatasetListFilterDandBC.toArray(),\n\t\t\t\t\tn5.deepList(prefix, a -> n5.datasetExists(a) && isBorC.test(a)));\n\n\t\t\tfinal List<String> datasetListFilterDP =\n\t\t\t\t\tArrays.asList(n5.deepListDatasets(prefix, Executors.newFixedThreadPool(2)));\n\t\t\tassertTrue(\n\t\t\t\t\t\"deepListDataset Parallel\",\n\t\t\t\t\tdatasetListFilterDP.size() == 1 && (prefix + \"/\" + datasetListFilterDP.get(0)).equals(datasetName));\n\t\t\tassertArrayEquals(\n\t\t\t\t\tdatasetListFilterDP.toArray(),\n\t\t\t\t\tn5.deepList(prefix, n5::datasetExists, Executors.newFixedThreadPool(2)));\n\n\t\t\tfinal List<String> datasetListFilterDandBCP =\n\t\t\t\t\tArrays.asList(n5.deepListDatasets(prefix, isBorC, Executors.newFixedThreadPool(2)));\n\t\t\tassertEquals(\"deepListDatasetFilter Parallel\", 0, datasetListFilterDandBCP.size());\n\t\t\tassertArrayEquals(\n\t\t\t\t\tdatasetListFilterDandBCP.toArray(),\n\t\t\t\t\tn5.deepList(prefix, a -> n5.datasetExists(a) && isBorC.test(a), Executors.newFixedThreadPool(2)));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testExists() {\n\n\t\tfinal String groupName2 = groupName + \"-2\";\n\t\tfinal String datasetName2 = datasetName + \"-2\";\n\t\tfinal String notExists = groupName + \"-notexists\";\n\t\ttry (N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createDataset(datasetName2, dimensions, blockSize, DataType.UINT64, new 
RawCompression());\n\t\t\tassertTrue(n5.exists(datasetName2));\n\t\t\tassertTrue(n5.datasetExists(datasetName2));\n\n\t\t\tn5.createGroup(groupName2);\n\t\t\tassertTrue(n5.exists(groupName2));\n\t\t\tassertFalse(n5.datasetExists(groupName2));\n\n\t\t\tassertFalse(n5.exists(notExists));\n\t\t\tassertFalse(n5.datasetExists(notExists));\n\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testListAttributes() {\n\n\t\ttry (N5Writer n5 = createTempN5Writer()) {\n\t\t\tfinal String groupName2 = groupName + \"-2\";\n\t\t\tfinal String datasetName2 = datasetName + \"-2\";\n\t\t\tn5.createDataset(datasetName2, dimensions, blockSize, DataType.UINT64, new RawCompression());\n\t\t\tn5.setAttribute(datasetName2, \"attr1\", new double[]{1.1, 2.1, 3.1});\n\t\t\tn5.setAttribute(datasetName2, \"attr2\", new String[]{\"a\", \"b\", \"c\"});\n\t\t\tn5.setAttribute(datasetName2, \"attr3\", 1.1);\n\t\t\tn5.setAttribute(datasetName2, \"attr4\", \"a\");\n\t\t\tn5.setAttribute(datasetName2, \"attr5\", new long[]{1, 2, 3});\n\t\t\tn5.setAttribute(datasetName2, \"attr6\", 1);\n\t\t\tn5.setAttribute(datasetName2, \"attr7\", new double[]{1, 2, 3.1});\n\t\t\tn5.setAttribute(datasetName2, \"attr8\", new Object[]{\"1\", 2, 3.1});\n\n\t\t\tMap<String, Class<?>> attributesMap = n5.listAttributes(datasetName2);\n\t\t\tassertEquals(attributesMap.get(\"attr1\"), double[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr2\"), String[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr3\"), double.class);\n\t\t\tassertEquals(attributesMap.get(\"attr4\"), String.class);\n\t\t\tassertEquals(attributesMap.get(\"attr5\"), long[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr6\"), long.class);\n\t\t\tassertEquals(attributesMap.get(\"attr7\"), double[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr8\"), Object[].class);\n\n\t\t\tn5.createGroup(groupName2);\n\t\t\tn5.setAttribute(groupName2, \"attr1\", new double[]{1.1, 2.1, 3.1});\n\t\t\tn5.setAttribute(groupName2, \"attr2\", new String[]{\"a\", \"b\", 
\"c\"});\n\t\t\tn5.setAttribute(groupName2, \"attr3\", 1.1);\n\t\t\tn5.setAttribute(groupName2, \"attr4\", \"a\");\n\t\t\tn5.setAttribute(groupName2, \"attr5\", new long[]{1, 2, 3});\n\t\t\tn5.setAttribute(groupName2, \"attr6\", 1);\n\t\t\tn5.setAttribute(groupName2, \"attr7\", new double[]{1, 2, 3.1});\n\t\t\tn5.setAttribute(groupName2, \"attr8\", new Object[]{\"1\", 2, 3.1});\n\n\t\t\tattributesMap = n5.listAttributes(groupName2);\n\t\t\tassertEquals(attributesMap.get(\"attr1\"), double[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr2\"), String[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr3\"), double.class);\n\t\t\tassertEquals(attributesMap.get(\"attr4\"), String.class);\n\t\t\tassertEquals(attributesMap.get(\"attr5\"), long[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr6\"), long.class);\n\t\t\tassertEquals(attributesMap.get(\"attr7\"), double[].class);\n\t\t\tassertEquals(attributesMap.get(\"attr8\"), Object[].class);\n\t\t}\n\n\t}\n\n\t@Test\n\tpublic void testVersion() throws NumberFormatException, IOException, URISyntaxException {\n\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\n\t\t\tfinal Version n5Version = writer.getVersion();\n\n\t\t\tassertEquals(n5Version, N5Reader.VERSION);\n\n\t\t\tfinal Version incompatibleVersion = new Version(N5Reader.VERSION.getMajor() + 1, N5Reader.VERSION.getMinor(), N5Reader.VERSION.getPatch());\n\t\t\twriter.setAttribute(\"/\", N5Reader.VERSION_KEY, incompatibleVersion.toString());\n\t\t\tfinal Version version = writer.getVersion();\n\t\t\tassertFalse(N5Reader.VERSION.isCompatible(version));\n\n\t\t\tassertThrows(N5Exception.N5IOException.class, () -> createTempN5Writer(writer.getURI().toString()));\n\n\t\t\tfinal Version compatibleVersion = new Version(N5Reader.VERSION.getMajor(), N5Reader.VERSION.getMinor(), N5Reader.VERSION.getPatch());\n\t\t\twriter.setAttribute(\"/\", N5Reader.VERSION_KEY, compatibleVersion.toString());\n\t\t}\n\n\t}\n\n\t@Test\n\tpublic void testReaderCreation() 
throws IOException, URISyntaxException {\n\n\t\t// non-existent location should fail\n\t\tfinal String location = tempN5Location() + \"-\" + UUID.randomUUID();\n\t\tassertThrows(\"Non-existent location throws error\", N5Exception.N5IOException.class,\n\t\t\t\t() -> {\n\t\t\t\t\ttry (N5Reader test = createN5Reader(location)) {\n\t\t\t\t\t\ttest.list(\"/\");\n\t\t\t\t\t}\n\t\t\t\t});\n\n\t\ttry (N5Writer writer = createTempN5Writer(location)) {\n\t\t\ttry (N5Reader n5r = createN5Reader(location)) {\n\t\t\t\tassertNotNull(n5r);\n\t\t\t}\n\n\t\t\t// existing directory without attributes is okay;\n\t\t\t// Remove and create to remove attributes store\n\t\t\twriter.removeAttribute(\"/\", \"/\");\n\t\t\ttry (N5Reader na = createN5Reader(location)) {\n\t\t\t\tassertNotNull(na);\n\t\t\t}\n\n\t\t\t// existing location with attributes, but no version\n\t\t\twriter.removeAttribute(\"/\", \"/\");\n\t\t\twriter.setAttribute(\"/\", \"mystring\", \"ms\");\n\t\t\ttry (N5Reader wa = createN5Reader(location)) {\n\t\t\t\tassertNotNull(wa);\n\t\t\t}\n\n\t\t\t// existing directory with incompatible version should fail\n\t\t\twriter.removeAttribute(\"/\", \"/\");\n\t\t\tfinal String invalidVersion = new Version(N5Reader.VERSION.getMajor() + 1, N5Reader.VERSION.getMinor(), N5Reader.VERSION.getPatch()).toString();\n\t\t\twriter.setAttribute(\"/\", N5Reader.VERSION_KEY, invalidVersion);\n\t\t\tassertThrows(\"Incompatible version throws error\", N5Exception.class, () -> {\n\t\t\t\ttry (final N5Reader ignored = createN5Reader(location)) {\n\t\t\t\t\t/* Only use try-with-resources to ensure `close()` is called. */\n\t\t\t\t}\n\t\t\t});\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testDelete() {\n\n\t\ttry (N5Writer n5 = createTempN5Writer()) {\n\t\t\tfinal String datasetName = AbstractN5Test.datasetName + \"-test-delete\";\n\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.UINT8, new RawCompression());\n\t\t\tfinal DatasetAttributes attributes = 
n5.getDatasetAttributes(datasetName);\n\t\t\tfinal long[] position1 = {0, 0, 0};\n\t\t\tfinal long[] position2 = {0, 1, 2};\n\n\t\t\t// no blocks should exist to begin with\n\t\t\tassertNull(n5.readBlock(datasetName, attributes, position1));\n\n\t\t\tfinal ByteArrayDataBlock dataBlock = new ByteArrayDataBlock(blockSize, position1, byteBlock);\n\t\t\tn5.writeBlock(datasetName, attributes, dataBlock);\n\n\t\t\t// block should exist at position1 but not at position2\n\t\t\tfinal DataBlock<?> readBlock = n5.readChunk(datasetName, attributes, position1);\n\t\t\tassertNotNull(readBlock);\n\t\t\tassertTrue(readBlock instanceof ByteArrayDataBlock);\n\t\t\tassertArrayEquals(byteBlock, ((ByteArrayDataBlock)readBlock).getData());\n\n\t\t\tassertTrue(\"deleting existing chunk should return true\", n5.deleteChunk(datasetName, position1));\n\t\t\tassertFalse(\"deleting non-existing chunk should return false\", n5.deleteChunk(datasetName, position1));\n\t\t\tassertFalse(\"deleting non-existing chunk should return false\", n5.deleteChunk(datasetName, position2));\n\n\t\t\t// for an unsharded dataset, deleteChunk and deleteBlock behave identically\n\t\t\tassertFalse(\"deleting non-existing block should return false\", n5.deleteBlock(datasetName, position1));\n\t\t\tassertFalse(\"deleting non-existing block should return false\", n5.deleteBlock(datasetName, position2));\n\n\t\t\tn5.writeChunk(datasetName, attributes, dataBlock);\n\t\t\tassertTrue(\"deleting existing block should return true\", n5.deleteBlock(datasetName, position1));\n\n\t\t\t// no block should exist anymore\n\t\t\tassertNull(n5.readBlock(datasetName, attributes, position1));\n\t\t\tassertNull(n5.readBlock(datasetName, attributes, position2));\n\t\t}\n\t}\n\n\tpublic static class TestData<T> {\n\n\t\tpublic String groupPath;\n\t\tpublic String attributePath;\n\t\tpublic T attributeValue;\n\t\tpublic Class<T> attributeClass;\n\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tpublic TestData(final String groupPath, final 
String key, final T attributeValue) {\n\n\t\t\tthis.groupPath = groupPath;\n\t\t\tthis.attributePath = key;\n\t\t\tthis.attributeValue = attributeValue;\n\t\t\tthis.attributeClass = (Class<T>)attributeValue.getClass();\n\t\t}\n\t}\n\n\tprotected static void testAttributePathEquivalence(final N5Writer writer, final String groupPath, final String[] equivalentPaths ) {\n\n\t\tif( equivalentPaths.length == 0 )\n\t\t\treturn;\n\n\t\tint i = 0;\n\t\tfinal String first = equivalentPaths[i];\n\t\twriter.setAttribute(groupPath, first, i);\n\t\tassertEquals(i, writer.getAttribute(groupPath, first, Integer.class).intValue());\n\n\n\t\twriter.getAttribute(groupPath, first, int.class);\n\n\t\tfor (i = 1; i < equivalentPaths.length; i++) {\n\t\t\tfinal String path = equivalentPaths[i];\n\t\t\tassertEquals(path + \" not equivalent to \" + first, i-1, (int)writer.getAttribute(groupPath, path, int.class));\n\t\t\twriter.setAttribute(groupPath, path, i);\n\t\t\tassertEquals(path + \" set behaved incorrectly\", i, (int)writer.getAttribute(groupPath, path, int.class));\n\t\t\tassertEquals(path + \" not equivalent to \" + first, i, (int)writer.getAttribute(groupPath, first, int.class));\n\t\t}\n\t}\n\n\tprotected static void addAndTest(final N5Writer writer, final ArrayList<TestData<?>> existingTests, final TestData<?> testData) {\n\t\t/* test a new value on existing path */\n\t\twriter.setAttribute(testData.groupPath, testData.attributePath, testData.attributeValue);\n\t\tassertEquals(testData.attributeValue, writer.getAttribute(testData.groupPath, testData.attributePath, testData.attributeClass));\n\t\tassertEquals(testData.attributeValue,\n\t\t\t\twriter.getAttribute(testData.groupPath, testData.attributePath, TypeToken.get(testData.attributeClass).getType()));\n\n\t\t/* previous values should still be there, but we remove first if the test we just added overwrites. 
*/\n\t\texistingTests.removeIf(test -> {\n\t\t\ttry {\n\t\t\t\tfinal String normalizedTestKey = N5URI.from(null, \"\", test.attributePath).normalizeAttributePath().replaceAll(\"^/\", \"\");\n\t\t\t\tfinal String normalizedTestDataKey = N5URI.from(null, \"\", testData.attributePath).normalizeAttributePath().replaceAll(\"^/\", \"\");\n\t\t\t\treturn normalizedTestKey.equals(normalizedTestDataKey);\n\t\t\t} catch (final URISyntaxException e) {\n\t\t\t\tthrow new RuntimeException(e);\n\t\t\t}\n\t\t});\n\t\trunTests(writer, existingTests);\n\t\texistingTests.add(testData);\n\t}\n\n\tprotected static void runTests(final N5Writer writer, final ArrayList<TestData<?>> existingTests) {\n\n\t\tfor (final TestData<?> test : existingTests) {\n\t\t\tassertEquals(test.attributeValue, writer.getAttribute(test.groupPath, test.attributePath, test.attributeClass));\n\t\t\tassertEquals(test.attributeValue, writer.getAttribute(test.groupPath, test.attributePath, TypeToken.get(test.attributeClass).getType()));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void customObjectTest() {\n\n\t\tfinal String testGroup = \"test\";\n\t\tfinal ArrayList<TestData<?>> existingTests = new ArrayList<>();\n\n\t\tfinal UriAttributeTest.TestDoubles doubles1 = new UriAttributeTest.TestDoubles(\n\t\t\t\t\"doubles\",\n\t\t\t\t\"doubles1\",\n\t\t\t\tnew double[]{5.7, 4.5, 3.4});\n\t\tfinal UriAttributeTest.TestDoubles doubles2 = new UriAttributeTest.TestDoubles(\n\t\t\t\t\"doubles\",\n\t\t\t\t\"doubles2\",\n\t\t\t\tnew double[]{5.8, 4.6, 3.5});\n\t\tfinal UriAttributeTest.TestDoubles doubles3 = new UriAttributeTest.TestDoubles(\n\t\t\t\t\"doubles\",\n\t\t\t\t\"doubles3\",\n\t\t\t\tnew double[]{5.9, 4.7, 3.6});\n\t\tfinal UriAttributeTest.TestDoubles doubles4 = new UriAttributeTest.TestDoubles(\n\t\t\t\t\"doubles\",\n\t\t\t\t\"doubles4\",\n\t\t\t\tnew double[]{5.10, 4.8, 3.7});\n\n\t\ttry (N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createGroup(testGroup);\n\t\t\taddAndTest(n5, existingTests, new TestData<>(testGroup, 
\"/doubles[1]\", doubles1));\n\t\t\taddAndTest(n5, existingTests, new TestData<>(testGroup, \"/doubles[2]\", doubles2));\n\t\t\taddAndTest(n5, existingTests, new TestData<>(testGroup, \"/doubles[3]\", doubles3));\n\t\t\taddAndTest(n5, existingTests, new TestData<>(testGroup, \"/doubles[4]\", doubles4));\n\n\t\t\t/* Test overwrite custom */\n\t\t\taddAndTest(n5, existingTests, new TestData<>(testGroup, \"/doubles[1]\", doubles4));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testAttributePaths() {\n\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\n\t\t\tfinal String testGroup = \"test\";\n\t\t\twriter.createGroup(testGroup);\n\n\t\t\tfinal ArrayList<TestData<?>> existingTests = new ArrayList<>();\n\n\t\t\t/* Test a new value by path */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/a/b/c/key1\", \"value1\"));\n\t\t\t/* test a new value on existing path */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/a/b/key2\", \"value2\"));\n\t\t\t/* test replacing an existing value */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/a/b/c/key1\", \"new_value1\"));\n\n\t\t\t/* Test a new value with arrays */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/array[0]/b/c/key1\", \"array_value1\"));\n\t\t\t/* test replacing an existing value */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/array[0]/b/c/key1\", \"new_array_value1\"));\n\t\t\t/* test a new value on existing path with arrays */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/array[0]/d[3]/key2\", \"array_value2\"));\n\t\t\t/* test a new value on existing path with nested arrays */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/array[1][2]/[3]key2\", \"array2_value2\"));\n\t\t\t/* test with syntax variants */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/array[1][2]/[3]key2\", \"array3_value3\"));\n\t\t\taddAndTest(writer, existingTests, 
new TestData<>(testGroup, \"array[1]/[2][3]/key2\", \"array3_value4\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/array/[1]/[2]/[3]/key2\", \"array3_value5\"));\n\t\t\t/* test with whitespace*/\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \" \", \"space\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"\\n\", \"newline\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"\\t\", \"tab\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"\\r\\n\", \"windows_newline\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \" \\n\\t \\t \\n \\r\\n\\r\\n\", \"mixed\"));\n\t\t\t/* test URI encoded characters inside square braces */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[ ]\", \"space\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[\\n]\", \"newline\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[\\t]\", \"tab\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[\\r\\n]\", \"windows_newline\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[ ][\\n][\\t][ \\t \\n \\r\\n][\\r\\n]\", \"mixed\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[ ][\\\\n][\\\\t][ \\\\t \\\\n \\\\r\\\\n][\\\\r\\\\n]\", \"mixed\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"[\\\\]\", \"backslash\"));\n\n\t\t\t/* Non String tests */\n\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/an/integer/test\", 1));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/a/double/test\", 1.0));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/a/float/test\", 1.0F));\n\t\t\tfinal TestData<Boolean> booleanTest = new TestData<>(testGroup, \"/a/boolean/test\", true);\n\t\t\taddAndTest(writer, existingTests, booleanTest);\n\n\t\t\t/* overwrite 
structure*/\n\t\t\texistingTests.remove(booleanTest);\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/a/boolean[2]/test\", true));\n\n\t\t\t/* Fill an array with number */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/double_array[5]\", 5.0));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/double_array[1]\", 1.0));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/double_array[2]\", 2.0));\n\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/double_array[4]\", 4.0));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/double_array[0]\", 0.0));\n\n\t\t\t/* We intentionally skipped index 3, it should be `0` */\n\t\t\tassertEquals((Integer)0, writer.getAttribute(testGroup, \"/filled/double_array[3]\", Integer.class));\n\n\t\t\t/* Fill an array with Object */\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/string_array[5]\", \"f\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/string_array[1]\", \"b\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/string_array[2]\", \"c\"));\n\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/string_array[4]\", \"e\"));\n\t\t\taddAndTest(writer, existingTests, new TestData<>(testGroup, \"/filled/string_array[0]\", \"a\"));\n\n\t\t\t/* path is relative to root */\n\t\t\ttestAttributePathEquivalence( writer, testGroup, new String[] {\n\t\t\t\t\t\"/keyAtRoot\", \"keyAtRoot\", \"./keyAtRoot\", \"././keyAtRoot\",\n\t\t\t\t\t\"../keyAtRoot\", \"/../keyAtRoot\", \"/../../keyAtRoot\", \"/../bye/../keyAtRoot\"\n\t\t\t});\n\n\n\t\t\t/* the parent of the root is the root */\n\n\t\t\t/* We intentionally skipped index 3, but it should have been pre-populated with JsonNull */\n\t\t\tassertEquals(JsonNull.INSTANCE, writer.getAttribute(testGroup, 
\"/filled/string_array[3]\", JsonNull.class));\n\n\t\t\t/* Ensure that escaping does NOT interpret the json path structure, but rather it adds the keys opaquely*/\n\t\t\tfinal HashMap<String, Object> testAttributes = new HashMap<>();\n\t\t\ttestAttributes.put(\"\\\\/z\\\\/y\\\\/x\", 10);\n\t\t\ttestAttributes.put(\"q\\\\/r\\\\/t\", 11);\n\t\t\ttestAttributes.put(\"\\\\/l\\\\/m\\\\[10]\\\\/n\", 12);\n\t\t\ttestAttributes.put(\"\\\\/\", 13);\n\t\t\t/* intentionally the same as above, but this time it should be added as an opaque key*/\n\t\t\ttestAttributes.put(\"\\\\/a\\\\/b/key2\", \"value2\");\n\t\t\twriter.setAttributes(testGroup, testAttributes);\n\n\t\t\tassertEquals((Integer)10, writer.getAttribute(testGroup, \"\\\\/z\\\\/y\\\\/x\", Integer.class));\n\t\t\tassertEquals((Integer)11, writer.getAttribute(testGroup, \"q\\\\/r\\\\/t\", Integer.class));\n\t\t\tassertEquals((Integer)12, writer.getAttribute(testGroup, \"\\\\/l\\\\/m\\\\[10]\\\\/n\", Integer.class));\n\t\t\tassertEquals((Integer)13, writer.getAttribute(testGroup, \"\\\\/\", Integer.class));\n\n\t\t\t/* We are passing a different type for the same key (\"/\").\n\t\t\t * This means it will try to grab the exact match first, but then fail, and continue on\n\t\t\t * to try and grab the value as a json structure. 
It should grab the root, and match the empty string case */\n\t\t\tassertEquals(writer.getAttribute(testGroup, \"\", JsonObject.class), writer.getAttribute(testGroup, \"/\", JsonObject.class));\n\n\t\t\t/* Lastly, ensure grabbing nonsense returns null */\n\t\t\tassertNull(writer.getAttribute(testGroup, \"/this/key/does/not/exist\", Object.class));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testAttributePathEscaping() {\n\n\t\tfinal JsonObject emptyObj = new JsonObject();\n\n\t\tfinal String slashKey = \"/\";\n\t\tfinal String abcdefKey = \"abc/def\";\n\t\tfinal String zeroKey = \"[0]\";\n\t\tfinal String bracketsKey = \"]] [] [[\";\n\t\tfinal String doubleBracketsKey = \"[[2][33]]\";\n\t\tfinal String doubleBackslashKey = \"\\\\\\\\\\\\\\\\\"; // Evaluates to `\\\\` through Java and JSON\n\n\t\tfinal String dataString = \"dataString\";\n\t\tfinal String rootSlash = jsonKeyVal(slashKey, dataString);\n\t\tfinal String abcdef = jsonKeyVal(abcdefKey, dataString);\n\t\tfinal String zero = jsonKeyVal(zeroKey, dataString);\n\t\tfinal String brackets = jsonKeyVal(bracketsKey, dataString);\n\t\tfinal String doubleBrackets = jsonKeyVal(doubleBracketsKey, dataString);\n\t\tfinal String doubleBackslash = jsonKeyVal(doubleBackslashKey, dataString);\n\n\t\ttry (N5Writer n5 = createTempN5Writer()) {\n\n\t\t\t// \"/\" as key\n\t\t\tString grp = \"a\";\n\t\t\tn5.createGroup(grp);\n\t\t\tn5.setAttribute(grp, \"\\\\/\", dataString);\n\t\t\tassertEquals(dataString, n5.getAttribute(grp, \"\\\\/\", String.class));\n\t\t\tString jsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertTrue(jsonContents.contains(rootSlash));\n\n\t\t\t// \"abc/def\" as key\n\t\t\tgrp = \"b\";\n\t\t\tn5.createGroup(grp);\n\t\t\tn5.setAttribute(grp, \"abc\\\\/def\", dataString);\n\t\t\tassertEquals(dataString, n5.getAttribute(grp, \"abc\\\\/def\", String.class));\n\t\t\tjsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertTrue(jsonContents.contains(abcdef));\n\n\t\t\t// \"[0]\" as a 
key\n\t\t\tgrp = \"c\";\n\t\t\tn5.createGroup(grp);\n\t\t\tn5.setAttribute(grp, \"\\\\[0]\", dataString);\n\t\t\tassertEquals(dataString, n5.getAttribute(grp, \"\\\\[0]\", String.class));\n\t\t\tjsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertTrue(jsonContents.contains(zero));\n\n\t\t\t// \"]] [] [[\"  as a key\n\t\t\tgrp = \"d\";\n\t\t\tn5.createGroup(grp);\n\t\t\tn5.setAttribute(grp, bracketsKey, dataString);\n\t\t\tassertEquals(dataString, n5.getAttribute(grp, bracketsKey, String.class));\n\t\t\tjsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertTrue(jsonContents.contains(brackets));\n\n\t\t\t// \"[[2][33]]\"\n\t\t\tgrp = \"e\";\n\t\t\tn5.createGroup(grp);\n\t\t\tn5.setAttribute(grp, \"[\\\\[2]\\\\[33]]\", dataString);\n\t\t\tassertEquals(dataString, n5.getAttribute(grp, \"[\\\\[2]\\\\[33]]\", String.class));\n\t\t\tjsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertTrue(jsonContents.contains(doubleBrackets));\n\n\t\t\t// \"\\\\\" as key\n\t\t\tgrp = \"f\";\n\t\t\tn5.createGroup(grp);\n\t\t\tn5.setAttribute(grp, \"\\\\\\\\\", dataString);\n\t\t\tassertEquals(dataString, n5.getAttribute(grp, \"\\\\\\\\\", String.class));\n\t\t\tjsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertTrue(jsonContents.contains(doubleBackslash));\n\n\t\t\t// clear\n\t\t\tn5.setAttribute(grp, \"/\", emptyObj);\n\t\t\tjsonContents = n5.getAttribute(grp, \"/\", String.class);\n\t\t\tassertFalse(jsonContents.contains(doubleBackslash));\n\n\t\t}\n\t}\n\n\t/*\n\t * For readability above\n\t */\n\tprivate String jsonKeyVal(final String key, final String val) {\n\n\t\treturn String.format(\"\\\"%s\\\":\\\"%s\\\"\", key, val);\n\t}\n\n\t@Test\n\tpublic void testRootLeaves() {\n\n\t\t/* Test retrieving non-JsonObject root leaves */\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n\t\t\tn5.createGroup(groupName);\n\t\t\tn5.setAttribute(groupName, \"/\", \"String\");\n\n\t\t\tfinal JsonElement stringPrimitive = 
n5.getAttribute(groupName, \"/\", JsonElement.class);\n\t\t\tassertTrue(stringPrimitive.isJsonPrimitive());\n\t\t\tassertEquals(\"String\", stringPrimitive.getAsString());\n\t\t\tn5.setAttribute(groupName, \"/\", 0);\n\t\t\tfinal JsonElement intPrimitive = n5.getAttribute(groupName, \"/\", JsonElement.class);\n\t\t\tassertTrue(intPrimitive.isJsonPrimitive());\n\t\t\tassertEquals(0, intPrimitive.getAsInt());\n\t\t\tn5.setAttribute(groupName, \"/\", true);\n\t\t\tfinal JsonElement booleanPrimitive = n5.getAttribute(groupName, \"/\", JsonElement.class);\n\t\t\tassertTrue(booleanPrimitive.isJsonPrimitive());\n\t\t\tassertTrue(booleanPrimitive.getAsBoolean());\n\t\t\tn5.setAttribute(groupName, \"/\", null);\n\t\t\tfinal JsonElement jsonNull = n5.getAttribute(groupName, \"/\", JsonElement.class);\n\t\t\tassertTrue(jsonNull.isJsonNull());\n\t\t\tassertEquals(JsonNull.INSTANCE, jsonNull);\n\t\t\tn5.setAttribute(groupName, \"[5]\", \"array\");\n\t\t\tfinal JsonElement rootJsonArray = n5.getAttribute(groupName, \"/\", JsonElement.class);\n\t\t\tassertTrue(rootJsonArray.isJsonArray());\n\t\t\tfinal JsonArray rootArray = rootJsonArray.getAsJsonArray();\n\t\t\tassertEquals(\"array\", rootArray.get(5).getAsString());\n\t\t\tassertEquals(JsonNull.INSTANCE, rootArray.get(3));\n\t\t\tassertThrows(IndexOutOfBoundsException.class, () -> rootArray.get(10));\n\t\t}\n\n\t\t/* Test with a new root each time */\n\t\tfinal ArrayList<TestData<?>> tests = new ArrayList<>();\n\t\ttests.add(new TestData<>(groupName, \"\", \"empty_root\"));\n\t\ttests.add(new TestData<>(groupName, \"/\", \"replace_empty_root\"));\n\t\ttests.add(new TestData<>(groupName, \"[0]\", \"array_root\"));\n\n\t\tfor (final TestData<?> testData : tests) {\n\t\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\t\t\t\twriter.createGroup(testData.groupPath);\n\t\t\t\twriter.setAttribute(testData.groupPath, testData.attributePath, testData.attributeValue);\n\t\t\t\tassertEquals(testData.attributeValue, 
writer.getAttribute(testData.groupPath, testData.attributePath, testData.attributeClass));\n\t\t\t\tassertEquals(testData.attributeValue,\n\t\t\t\t\t\twriter.getAttribute(testData.groupPath, testData.attributePath, TypeToken.get(testData.attributeClass).getType()));\n\t\t\t}\n\t\t}\n\n\t\t/* Test with replacing an existing root-leaf */\n\t\ttests.clear();\n\t\ttests.add(new TestData<>(groupName, \"\", \"empty_root\"));\n\t\ttests.add(new TestData<>(groupName, \"/\", \"replace_empty_root\"));\n\t\ttests.add(new TestData<>(groupName, \"[0]\", \"array_root\"));\n\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\t\t\twriter.createGroup(groupName);\n\t\t\tfor (final TestData<?> testData : tests) {\n\t\t\t\twriter.setAttribute(testData.groupPath, testData.attributePath, testData.attributeValue);\n\t\t\t\tassertEquals(testData.attributeValue, writer.getAttribute(testData.groupPath, testData.attributePath, testData.attributeClass));\n\t\t\t\tassertEquals(testData.attributeValue,\n\t\t\t\t\t\twriter.getAttribute(testData.groupPath, testData.attributePath, TypeToken.get(testData.attributeClass).getType()));\n\t\t\t}\n\t\t}\n\n\t\t/* Test with replacing an existing root non-leaf*/\n\t\ttests.clear();\n\t\tfinal TestData<Integer> rootAsObject = new TestData<>(groupName, \"/some/non/leaf[3]/structure\", 100);\n\t\tfinal TestData<Integer> rootAsPrimitive = new TestData<>(groupName, \"\", 200);\n\t\tfinal TestData<Integer> rootAsArray = new TestData<>(groupName, \"/\", 300);\n\t\ttests.add(rootAsPrimitive);\n\t\ttests.add(rootAsArray);\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\t\t\twriter.createGroup(groupName);\n\t\t\tfor (final TestData<?> test : tests) {\n\t\t\t\t/* Set the root as Object*/\n\t\t\t\twriter.setAttribute(rootAsObject.groupPath, rootAsObject.attributePath, rootAsObject.attributeValue);\n\t\t\t\tassertEquals(rootAsObject.attributeValue,\n\t\t\t\t\t\twriter.getAttribute(rootAsObject.groupPath, rootAsObject.attributePath, 
rootAsObject.attributeClass));\n\n\t\t\t\t/* Override the root with something else */\n\t\t\t\twriter.setAttribute(test.groupPath, test.attributePath, test.attributeValue);\n\t\t\t\t/* Verify original root is gone */\n\t\t\t\tassertNull(writer.getAttribute(rootAsObject.groupPath, rootAsObject.attributePath, rootAsObject.attributeClass));\n\t\t\t\t/* verify new root exists */\n\t\t\t\tassertEquals(test.attributeValue, writer.getAttribute(test.groupPath, test.attributePath, test.attributeClass));\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testWriterSeparation() {\n\n\t\ttry (N5Writer writer1 = createTempN5Writer()) {\n\t\t\ttry (N5Writer writer2 = createTempN5Writer()) {\n\n\t\t\t\tassertTrue(writer1.exists(\"/\"));\n\t\t\t\tassertTrue(writer2.exists(\"/\"));\n\n\t\t\t\tassertTrue(writer1.remove());\n\t\t\t\tassertTrue(writer2.exists(\"/\"));\n\t\t\t\tassertFalse(writer1.exists(\"/\"));\n\n\t\t\t\tassertTrue(writer2.remove());\n\t\t\t\tassertFalse(writer2.exists(\"/\"));\n\t\t\t}\n\t\t}\n\t}\n\n\tprotected String[] illegalChars() {\n\t\treturn new String[]{\" \", \"#\", \"%\"};\n\t}\n\n\t@Test\n\tpublic void testPathsWithIllegalUriCharacters() throws IOException, URISyntaxException {\n\n\t\ttry (N5Writer writer = createTempN5Writer()) {\n\t\t\ttry (N5Reader reader = createN5Reader(writer.getURI().toString())) {\n\n\t\t\t\tfinal String[] illegalChars = illegalChars();\n\t\t\t\tfor (final String illegalChar : illegalChars) {\n\t\t\t\t\tfinal String groupWithIllegalChar = \"test\" + illegalChar + \"group\";\n\t\t\t\t\tassertThrows(\"list over group should throw prior to create\", N5Exception.N5IOException.class, () -> writer.list(groupWithIllegalChar));\n\t\t\t\t\twriter.createGroup(groupWithIllegalChar);\n\t\t\t\t\tassertTrue(\"Newly created group should exist\", writer.exists(groupWithIllegalChar));\n\t\t\t\t\tassertArrayEquals(\"list over empty group should be empty list\", new String[0], 
writer.list(groupWithIllegalChar));\n\t\t\t\t\twriter.setAttribute(groupWithIllegalChar, \"/a/b/key1\", \"value1\");\n\t\t\t\t\tfinal String attrFromWriter = writer.getAttribute(groupWithIllegalChar, \"/a/b/key1\", String.class);\n\t\t\t\t\tfinal String attrFromReader = reader.getAttribute(groupWithIllegalChar, \"/a/b/key1\", String.class);\n\t\t\t\t\tassertEquals(\"value1\", attrFromWriter);\n\t\t\t\t\tassertEquals(\"value1\", attrFromReader);\n\n\t\t\t\t\tfinal String datasetWithIllegalChar = \"test\" + illegalChar + \"dataset\";\n\t\t\t\t\tfinal DatasetAttributes datasetAttributes = new DatasetAttributes(dimensions, blockSize, DataType.UINT64, new RawCompression());\n\t\t\t\t\twriter.createDataset(datasetWithIllegalChar, datasetAttributes);\n\t\t\t\t\tfinal DatasetAttributes datasetFromWriter = writer.getDatasetAttributes(datasetWithIllegalChar);\n\t\t\t\t\tfinal DatasetAttributes datasetFromReader = reader.getDatasetAttributes(datasetWithIllegalChar);\n\t\t\t\t\tassertDatasetAttributesEquals(datasetAttributes, datasetFromWriter);\n\t\t\t\t\tassertDatasetAttributesEquals(datasetAttributes, datasetFromReader);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tpublic static void assertBlockEquals(final DataBlock<?> expected, final DataBlock<?> actual) {\n\t\tassertEquals(\"Datablocks are different types\", expected.getClass(), actual.getClass());\n\n\t\tAssert.assertArrayEquals(\"read block position should be same as block position when unsharded\", expected.getGridPosition(), actual.getGridPosition());\n\t\tAssert.assertArrayEquals(\"read block size should equal block size when unsharded\", expected.getSize(), actual.getSize());\n\n\t\tfinal Object expectedData = expected.getData();\n\t\tfinal Object actualData = actual.getData();\n\n\t\tfinal String dataEqualsMsg = \"block written through shard should be identical\";\n\t\tif (expectedData instanceof byte[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (byte[])expectedData, (byte[])actualData);\n\t\telse if (expectedData 
instanceof short[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (short[])expectedData, (short[])actualData);\n\t\telse if (expectedData instanceof int[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (int[])expectedData, (int[])actualData);\n\t\telse if (expectedData instanceof long[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (long[])expectedData, (long[])actualData);\n\t\telse if (expectedData instanceof float[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (float[])expectedData, (float[])actualData, 0f);\n\t\telse if (expectedData instanceof double[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (double[])expectedData, (double[])actualData, 0d);\n\t\telse if (expectedData instanceof String[])\n\t\t\tassertArrayEquals(dataEqualsMsg, (String[])expectedData, (String[])actualData);\n\t\telse\n\t\t\tfail(\"Unsupported data type for block data: \" + expectedData.getClass());\n\t}\n\n\tprotected void assertDatasetAttributesEquals(final DatasetAttributes expected, final DatasetAttributes actual) {\n\t\tassertArrayEquals(expected.getDimensions(), actual.getDimensions());\n\t\tassertArrayEquals(expected.getBlockSize(), actual.getBlockSize());\n\t\tassertEquals(expected.getDataType(), actual.getDataType());\n\n\t\t// TODO would be nice to check this somehow maybe make a DatasetAttributes.equals method?\n//\t\tassertArrayEquals(expected.getDataCodecInfos(), actual.getDataCodecInfos());\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/DatasetAttributesTest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertThrows;\nimport static org.junit.Assert.assertTrue;\n\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.DefaultShardCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\nimport org.junit.Test;\n\n/**\n * Unit tests for DatasetAttributes class.\n */\npublic class DatasetAttributesTest {\n\n\t/**\n\t * Test that validateBlockShardSizes method accepts valid shard and chunk size combinations.\n\t */\n\t@Test\n\tpublic void testValidateBlockShardSizesValid() {\n\n\t\t// Test case 1: shard size equals block size\n\t\tlong[] dimensions = new long[]{100, 200, 300};\n\t\tint[] shardSize = new int[]{64, 64, 64};\n\t\tint[] chunkSize = new int[]{64, 64, 64};\n\t\tDataType dataType = DataType.UINT8;\n\n\t\t// This should not throw any exception\n\t\tDatasetAttributes attrs = shardDatasetAttributes(dimensions, shardSize, chunkSize, dataType);\n\t\tassertArrayEquals(chunkSize, attrs.getChunkSize());\n\t\tassertArrayEquals(shardSize, attrs.getBlockSize());\n\t\tNestedGrid grid = attrs.getNestedBlockGrid();\n\t\tassertArrayEquals(chunkSize, grid.getBlockSize(0));\n\t\tassertArrayEquals(shardSize, grid.getBlockSize(1));\n\n\t\t// Test case 2: shard size is a multiple of block size\n\t\tshardSize = new int[]{128};\n\t\tchunkSize = new int[]{64};\n\t\tattrs = shardDatasetAttributes(new long[]{128}, shardSize, chunkSize, dataType);\n\t\tassertArrayEquals(chunkSize, attrs.getChunkSize());\n\t\tassertArrayEquals(shardSize, attrs.getBlockSize());\n\t\tgrid = attrs.getNestedBlockGrid();\n\t\tassertArrayEquals(chunkSize, 
grid.getBlockSize(0));\n\t\tassertArrayEquals(shardSize, grid.getBlockSize(1));\n\n\t\t// Test case 3: different multiples per dimension\n\t\tshardSize = new int[]{128, 256, 32, 2};\n\t\tchunkSize = new int[]{32, 64, 32, 1};\n\t\tattrs = shardDatasetAttributes(new long[]{128, 128, 128, 128}, shardSize, chunkSize, dataType);\n\t\tassertArrayEquals(chunkSize, attrs.getChunkSize());\n\t\tassertArrayEquals(shardSize, attrs.getBlockSize());\n\t\tgrid = attrs.getNestedBlockGrid();\n\t\tassertArrayEquals(chunkSize, grid.getBlockSize(0));\n\t\tassertArrayEquals(shardSize, grid.getBlockSize(1));\n\n\t\t// Test case 4: large multiples\n\t\tshardSize = new int[]{1024, 2048, 512};\n\t\tchunkSize = new int[]{32, 64, 16};\n\t\tattrs = shardDatasetAttributes(dimensions, shardSize, chunkSize, dataType);\n\t\tassertArrayEquals(chunkSize, attrs.getChunkSize());\n\t\tassertArrayEquals(shardSize, attrs.getBlockSize());\n\t\tgrid = attrs.getNestedBlockGrid();\n\t\tassertArrayEquals(chunkSize, grid.getBlockSize(0));\n\t\tassertArrayEquals(shardSize, grid.getBlockSize(1));\n\t}\n\n\tprivate static DatasetAttributes shardDatasetAttributes(\n\t\t\tlong[] dimensions, int[] shardSize, int[] chunkSize, DataType dataType) {\n\n\t\tDefaultShardCodecInfo blockCodecInfo = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tnew N5BlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[]{new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[]{new RawCompression()},\n\t\t\t\tIndexLocation.END);\n\n\t\treturn new DatasetAttributes(dimensions, shardSize, dataType, blockCodecInfo);\n\t}\n\n\t@Test\n\tpublic void builderTests() {\n\n\t\tfinal long[] dims = new long[]{100, 200, 300};\n\t\tfinal int[] blk = new int[]{32, 32, 32};\n\n\t\t// default blockSize uses defaultBlockSize, not full dimensions\n\t\tfinal DatasetAttributes defaultBlk = DatasetAttributes.builder(dims, DataType.FLOAT32).build();\n\t\tassertArrayEquals(DatasetAttributes.defaultBlockSize(dims), defaultBlk.getBlockSize());\n\n\t\t// blockSize is reflected in 
output\n\t\tfinal DatasetAttributes withBlk = DatasetAttributes.builder(dims, DataType.FLOAT32)\n\t\t\t\t.blockSize(blk).build();\n\t\tassertArrayEquals(blk, withBlk.getBlockSize());\n\t\tassertFalse(withBlk.isSharded());\n\n\t\t// compression sets a data codec; RawCompression is a no-op\n\t\tfinal DatasetAttributes withGzip = DatasetAttributes.builder(dims, DataType.FLOAT32)\n\t\t\t\t.blockSize(blk).compression(new GzipCompression()).build();\n\t\tassertEquals(1, withGzip.getDataCodecInfos().length);\n\n\t\tfinal DatasetAttributes withRaw = DatasetAttributes.builder(dims, DataType.FLOAT32)\n\t\t\t\t.blockSize(blk).compression(new RawCompression()).build();\n\t\tassertEquals(0, withRaw.getDataCodecInfos().length);\n\n\t\t// round-trip through Builder(DatasetAttributes)\n\t\tfinal DatasetAttributes roundTrip = DatasetAttributes.builder(withGzip).build();\n\t\tassertArrayEquals(withGzip.getDimensions(), roundTrip.getDimensions());\n\t\tassertArrayEquals(withGzip.getBlockSize(), roundTrip.getBlockSize());\n\t\tassertEquals(withGzip.getDataType(), roundTrip.getDataType());\n\t\tassertEquals(withGzip.getDataCodecInfos().length, roundTrip.getDataCodecInfos().length);\n\t}\n\n\t/**\n\t * Test that validateBlockShardSizes method rejects invalid shard and block size combinations.\n\t */\n\t@Test\n\tpublic void testValidateBlockShardSizesInvalid() {\n\n\t\tfinal long[] dimensions = new long[]{100, 200, 300};\n\t\tfinal DataType dataType = DataType.UINT8;\n\n\t\t// Block size with zero or negative entries\n\t\tIllegalArgumentException ex0 = assertThrows(\n\t\t\t\tIllegalArgumentException.class,\n\t\t\t\t() -> shardDatasetAttributes(dimensions, new int[]{1, 1, 1}, new int[]{1, 0, -1}, dataType));\n\t\tassertTrue(ex0.getMessage().contains(\"negative\"));\n\n\t\t// Different number of dimensions\n\t\tIllegalArgumentException ex1 = assertThrows(\n\t\t\t\tIllegalArgumentException.class,\n\t\t\t\t() -> shardDatasetAttributes(dimensions, 
new int[]{64, 64}, new int[]{32, 32, 32}, dataType));\n\t\tassertTrue(ex1.getMessage().contains(\"different length\"));\n\n\t\t// Shard size smaller than block size\n\t\tIllegalArgumentException ex2 = assertThrows(\n\t\t\t\tIllegalArgumentException.class,\n\t\t\t\t() -> shardDatasetAttributes(dimensions, new int[]{32, 64, 64}, new int[]{64, 64, 64}, dataType));\n\t\tassertTrue(ex2.getMessage().contains(\"is smaller than previous\"));\n\n\t\t// Shard size not a multiple of block size\n\t\tIllegalArgumentException ex3 = assertThrows(\n\t\t\t\tIllegalArgumentException.class,\n\t\t\t\t() -> shardDatasetAttributes(dimensions, new int[]{100, 100, 100}, new int[]{64, 64, 64}, dataType));\n\t\tassertTrue(ex3.getMessage().contains(\"not a multiple of previous level\"));\n\n\t\t// Multiple violations - shard smaller than block in one dimension\n\t\tIllegalArgumentException ex4 = assertThrows(\n\t\t\t\tIllegalArgumentException.class,\n\t\t\t\t() -> shardDatasetAttributes(dimensions, new int[]{128, 32, 128}, new int[]{64, 64, 64}, dataType));\n\t\tassertTrue(ex4.getMessage().contains(\"is smaller than previous\"));\n\n\t\t// Edge case - shard size of 0\n\t\tassertThrows(\n\t\t\t\tIllegalArgumentException.class,\n\t\t\t\t() -> shardDatasetAttributes(dimensions, new int[]{0, 64, 64}, new int[]{64, 64, 64}, dataType));\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/FileKeyLockManagerTest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.file.Files;\nimport java.nio.file.NoSuchFileException;\nimport java.nio.file.Path;\nimport java.util.Arrays;\nimport java.util.concurrent.CountDownLatch;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.atomic.AtomicInteger;\nimport java.util.concurrent.atomic.AtomicReference;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.janelia.saalfeldlab.n5.FileKeyLockManager.FILE_LOCK_MANAGER;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertThrows;\nimport static org.junit.Assert.assertTrue;\n\npublic class FileKeyLockManagerTest {\n\n\tprivate Path tempDir;\n\n\t@Before\n\tpublic void setUp() throws IOException {\n\n\t\ttempDir = Files.createTempDirectory(\"fklm-test\");\n\t}\n\n\t@After\n\tpublic void tearDown() throws IOException {\n\n\t\tif (tempDir != null) {\n\t\t\tFiles.walk(tempDir)\n\t\t\t\t\t.sorted((a, b) -> -a.compareTo(b))\n\t\t\t\t\t.forEach(p -> {\n\t\t\t\t\t\ttry {\n\t\t\t\t\t\t\tFiles.deleteIfExists(p);\n\t\t\t\t\t\t} catch (IOException e) {\n\t\t\t\t\t\t\t// ignore\n\t\t\t\t\t\t}\n\t\t\t\t\t});\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testConcurrentReads() throws Exception {\n\n\t\t// create a test file\n\t\tfinal Path testFile = tempDir.resolve(\"test.txt\");\n\t\tfinal byte[] testContent = \"test content for concurrent reads\".getBytes();\n\t\tFiles.write(testFile, testContent);\n\n\t\tfinal int numReaders = 5;\n\t\tfinal ExecutorService executor = Executors.newFixedThreadPool(numReaders);\n\n\t\tfinal CountDownLatch startLatch = new CountDownLatch(1);\n\t\tfinal CountDownLatch readersReady = new CountDownLatch(numReaders);\n\t\tfinal 
CountDownLatch readersFinished = new CountDownLatch(numReaders);\n\t\tfinal AtomicInteger concurrentReaders = new AtomicInteger(0);\n\t\tfinal AtomicInteger maxConcurrentReaders = new AtomicInteger(0);\n\t\tfinal AtomicInteger successfulReads = new AtomicInteger(0);\n\n\t\tfor (int i = 0; i < numReaders; i++) {\n\t\t\texecutor.submit(() -> {\n\t\t\t\ttry {\n\t\t\t\t\treadersReady.countDown();\n\t\t\t\t\tstartLatch.await();\n\n\t\t\t\t\ttry (final LockedFileChannel lock = FILE_LOCK_MANAGER.lockForReading(testFile)) {\n\t\t\t\t\t\tfinal int concurrent = concurrentReaders.incrementAndGet();\n\t\t\t\t\t\tmaxConcurrentReaders.updateAndGet(max -> Math.max(max, concurrent));\n\n\t\t\t\t\t\t// actually read from the channel\n\t\t\t\t\t\tfinal byte[] buf = new byte[testContent.length];\n\t\t\t\t\t\tfinal int bytesRead = lock.read(ByteBuffer.wrap(buf), 0);\n\t\t\t\t\t\tif (bytesRead > 0 && Arrays.equals(buf, testContent)) {\n\t\t\t\t\t\t\tsuccessfulReads.incrementAndGet();\n\t\t\t\t\t\t}\n\t\t\t\t\t\tThread.sleep(100);\n\t\t\t\t\t\tconcurrentReaders.decrementAndGet();\n\t\t\t\t\t}\n\t\t\t\t} catch (Exception e) {\n\t\t\t\t\te.printStackTrace();\n\t\t\t\t} finally {\n\t\t\t\t\treadersFinished.countDown();\n\t\t\t\t}\n\t\t\t});\n\t\t}\n\n\t\tassertTrue(\"All readers should be ready\", readersReady.await(5, TimeUnit.SECONDS));\n\n\t\tfinal long startTime = System.currentTimeMillis();\n\t\tstartLatch.countDown();\n\n\t\tassertTrue(\"All readers should finish\", readersFinished.await(10, TimeUnit.SECONDS));\n\t\tfinal long duration = System.currentTimeMillis() - startTime;\n\n\t\texecutor.shutdown();\n\t\tassertTrue(\"Executor should terminate\", executor.awaitTermination(5, TimeUnit.SECONDS));\n\n\t\tSystem.out.println(\"Test completed in \" + duration + \"ms\");\n\t\tSystem.out.println(\"Maximum concurrent readers: \" + maxConcurrentReaders.get());\n\t\tSystem.out.println(\"Successful reads: \" + successfulReads.get());\n\n\t\tassertEquals(\"All readers should have read 
successfully\", numReaders, successfulReads.get());\n\t\tassertTrue(\"Multiple readers should have been reading concurrently\", maxConcurrentReaders.get() > 1);\n\t\tassertTrue(\"Concurrent execution should be faster than sequential\", duration < numReaders * 100);\n\t}\n\n\t@Test\n\tpublic void testReadWriteExclusion() throws Exception {\n\n\t\t// create a test file\n\t\tfinal Path testFile = tempDir.resolve(\"test2.txt\");\n\t\tFiles.write(testFile, \"initial content\".getBytes());\n\n\t\tfinal String writtenContent = \"written by writer\";\n\n\t\t// acquire a write lock and write to the file\n\t\tfinal OutputStream writeLock = FILE_LOCK_MANAGER.lockForWriting(testFile).asOutputStream();\n\t\twriteLock.write(writtenContent.getBytes());\n\n\t\t// try to acquire a read lock from another thread - should block\n\t\tfinal CountDownLatch readAttempted = new CountDownLatch(1);\n\t\tfinal CountDownLatch readAcquired = new CountDownLatch(1);\n\t\tfinal AtomicReference<String> readContent = new AtomicReference<>();\n\n\t\tnew Thread(() -> {\n\t\t\treadAttempted.countDown();\n\t\t\ttry (final LockedFileChannel readLock = FILE_LOCK_MANAGER.lockForReading(testFile)) {\n\t\t\t\t// actually read from the channel\n\t\t\t\tfinal byte[] buf = new byte[writtenContent.getBytes().length];\n\t\t\t\tfinal int charsRead = readLock.read(ByteBuffer.wrap(buf), 0);\n\t\t\t\tif (charsRead > 0) {\n\t\t\t\t\treadContent.set(new String(buf, 0, charsRead));\n\t\t\t\t}\n\t\t\t\treadAcquired.countDown();\n\t\t\t} catch (IOException e) {\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}).start();\n\n\t\tassertTrue(readAttempted.await(1, TimeUnit.SECONDS));\n\n\t\tassertFalse(\"Read lock should not be acquired while write lock is held\",\n\t\t\t\treadAcquired.await(200, TimeUnit.MILLISECONDS));\n\n\t\twriteLock.close();\n\n\t\tassertTrue(\"Read lock should be acquired after write lock is released\",\n\t\t\t\treadAcquired.await(2, TimeUnit.SECONDS));\n\n\t\tassertEquals(\"Reader should see written 
content\", writtenContent, readContent.get());\n\t}\n\n\tprivate class CleanUpHelper implements Closeable {\n\n\t\tprivate final Path path;\n\t\tprivate final LockedFileChannel channel;\n\n\t\tCleanUpHelper() throws IOException, InterruptedException {\n\t\t\tpath = tempDir.resolve(\"trigger.txt\");\n\t\t\tFiles.write(path, \"trigger\".getBytes());\n\t\t\tchannel = FILE_LOCK_MANAGER.lockForReading(path);\n\t\t\ttryWaitForSize(-1, 10);\n\t\t}\n\n\t\tprivate void tryWaitForSize(final int expectedSize) throws IOException, InterruptedException {\n\t\t\ttryWaitForSize(expectedSize, 100);\n\t\t}\n\n\t\tprivate void tryWaitForSize(final int expectedSize, final int numIterations) throws IOException, InterruptedException {\n\n\t\t\t// Wait a bit, trigger GC, loop a few times to try to trigger stale WeakReferences to be processed\n\t\t\tfor (int i = 0; i < numIterations; ++i) {\n\t\t\t\tThread.sleep(10);\n\t\t\t\tSystem.gc();\n\n\t\t\t\t// FileKeyLockManager.cleanUp() is called on the side during\n\t\t\t\t// normal usage. We keep locking and unlocking \"trigger.txt\" for\n\t\t\t\t// which we already hold a read lock. This will keep the\n\t\t\t\t// FileKeyLockManager working, but will not lead to\n\t\t\t\t// removal/insertion of KeyLockState for \"trigger.txt\".\n\t\t\t\ttry (LockedFileChannel temp = FILE_LOCK_MANAGER.lockForReading(path)) {\n\t\t\t\t}\n\n\t\t\t\tif (FILE_LOCK_MANAGER.size() == expectedSize) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() throws IOException {\n\t\t\tchannel.close();\n\t\t}\n\t}\n\n\t@Test\n\t// TODO: Remove? 
This test relies on garbage collection behaviour and is inherently fragile.\n\tpublic void testLockCleanup() throws Exception {\n\n\t\t// create test files\n\t\tfinal Path testFile1 = tempDir.resolve(\"key1.txt\");\n\t\tfinal Path testFile2 = tempDir.resolve(\"key2.txt\");\n\t\tfinal Path testFile3 = tempDir.resolve(\"key3.txt\");\n\t\tfinal byte[] content = \"content\".getBytes();\n\t\tFiles.write(testFile1, content);\n\t\tFiles.write(testFile2, content);\n\t\tFiles.write(testFile3, content);\n\n\t\tfinal CleanUpHelper cleanup = new CleanUpHelper();\n\t\tfinal int initialSize = FILE_LOCK_MANAGER.size();\n\n\t\tfinal LockedFileChannel lock1 = FILE_LOCK_MANAGER.lockForReading(testFile1);\n\t\tfinal OutputStream lock2 = FILE_LOCK_MANAGER.lockForWriting(testFile2).asOutputStream();\n\t\tfinal LockedFileChannel lock3 = FILE_LOCK_MANAGER.lockForReading(testFile3);\n\n\t\t// actually perform I/O on each lock\n\t\tfinal byte[] buf = new byte[content.length];\n\t\tlock1.read(ByteBuffer.wrap(buf),0);\n\t\tlock2.write(content);\n\t\tlock3.read(ByteBuffer.wrap(buf),0);\n\n\t\tassertEquals(\"Should have 3 new keys\", initialSize + 3, FILE_LOCK_MANAGER.size());\n\n\t\t// close lock1 - entry should be auto-removed\n\t\tlock1.close();\n\t\tcleanup.tryWaitForSize(initialSize + 2);\n\t\tassertEquals(\"key1 should be auto-removed on close\", initialSize + 2, FILE_LOCK_MANAGER.size());\n\n\t\t// close remaining locks - entries should be auto-removed\n\t\tlock2.close();\n\t\tlock3.close();\n\t\tcleanup.tryWaitForSize(initialSize);\n\t\tassertEquals(\"All entries should be auto-removed on close\", initialSize, FILE_LOCK_MANAGER.size());\n\t}\n\n\t@Test\n\tpublic void testWriteLockCreatesFile() throws Exception {\n\n\t\t// file does not exist - write lock creates it via CREATE option\n\t\tfinal Path testFile = tempDir.resolve(\"newfile.txt\");\n\t\tfinal String content = \"written to new file\";\n\n\t\tassertFalse(\"File should not exist initially\", Files.exists(testFile));\n\n\t\ttry 
(final LockedFileChannel writeLock = FILE_LOCK_MANAGER.lockForWriting(testFile)) {\n\t\t\tassertTrue(\"File should be created by write lock\", Files.exists(testFile));\n\t\t\t// actually write to the file\n\t\t\twriteLock.asOutputStream().write(content.getBytes());\n\t\t}\n\n\t\t// verify written content\n\t\tassertEquals(\"Content should be written\", content, new String(Files.readAllBytes(testFile)));\n\t}\n\n\t@Test\n\tpublic void testWriteLockCreatesParentDirectories() throws Exception {\n\n\t\t// parent directories do not exist - write lock creates them\n\t\tfinal Path testFile = tempDir.resolve(\"a/b/c/newfile.txt\");\n\t\tfinal String content = \"written to nested file\";\n\n\t\tassertFalse(\"File should not exist initially\", Files.exists(testFile));\n\t\tassertFalse(\"Parent should not exist initially\", Files.exists(testFile.getParent()));\n\n\t\ttry (final LockedFileChannel writeLock = FILE_LOCK_MANAGER.lockForWriting(testFile)) {\n\t\t\tassertTrue(\"File should be created by write lock\", Files.exists(testFile));\n\t\t\tassertTrue(\"Parent directories should be created\", Files.exists(testFile.getParent()));\n\t\t\t// actually write to the file\n\t\t\twriteLock.asOutputStream().write(content.getBytes());\n\t\t}\n\n\t\t// verify written content\n\t\tassertEquals(\"Content should be written\", content, new String(Files.readAllBytes(testFile)));\n\t}\n\n\t@Test\n\tpublic void testReadLockRequiresExistingFile() throws Exception {\n\t\tfinal Path testFile = tempDir.resolve(\"nonexistent.txt\");\n\t\tassertThrows(\"Should not acquire read lock for non-existent file\", NoSuchFileException.class, () -> FILE_LOCK_MANAGER.lockForReading(testFile));\n\t}\n\n\t@Test\n\t// TODO: Remove? 
This test relies on garbage collection behaviour and is inherently fragile.\n\tpublic void testLocksMapEmptyAfterProperClose() throws Exception {\n\n\t\tfinal Path testFile = tempDir.resolve(\"proper-close.txt\");\n\t\tFiles.write(testFile, \"content\".getBytes());\n\n\t\tfinal CleanUpHelper cleanup = new CleanUpHelper();\n\t\tfinal int initialSize = FILE_LOCK_MANAGER.size();\n\n\t\ttry (final LockedFileChannel lock = FILE_LOCK_MANAGER.lockForWriting(testFile)) {\n\t\t\tlock.asOutputStream().write(\"new content\".getBytes());\n\t\t}\n\n\t\tcleanup.tryWaitForSize(initialSize);\n\t\tassertEquals(\"locks map should be back to initial size after proper close\",\n\t\t\t\tinitialSize, FILE_LOCK_MANAGER.size());\n\t}\n\n\t@Test\n\t// TODO: Remove? This test relies on garbage collection behaviour and is inherently fragile.\n\tpublic void testLocksMapEmptyAfterLeakedChannelIsGCd() throws Exception {\n\n\t\tfinal Path testFile = tempDir.resolve(\"leaked-channel.txt\");\n\t\tFiles.write(testFile, \"content\".getBytes());\n\n\t\tfinal CleanUpHelper cleanup = new CleanUpHelper();\n\t\tfinal int initialSize = FILE_LOCK_MANAGER.size();\n\n\t\t// acquire lock in a separate scope so reference can be GC'd\n\t\tacquireAndLeakLock(testFile);\n\n\t\tcleanup.tryWaitForSize(initialSize);\n\n\t\tassertEquals(\"locks map should be back to initial size after leaked channel is GC'd\",\n\t\t\t\tinitialSize, FILE_LOCK_MANAGER.size());\n\t}\n\n\tprivate void acquireAndLeakLock(final Path path) throws IOException {\n\t\t// acquire lock but don't close it - just let the reference go out of scope\n\n\t\t@SuppressWarnings(\"resource\")\n\t\tLockedFileChannel leaked = FILE_LOCK_MANAGER.lockForWriting(path);\n\t\tleaked.asOutputStream().write(\"leaked content\".getBytes());\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/FsLockTest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.junit.Test;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.util.concurrent.*;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\n\npublic class FsLockTest {\n\n    private static final FileKeyLockManager LOCK_MANAGER = FileKeyLockManager.FILE_LOCK_MANAGER;\n\n    private static String tempPathName() {\n\n        try {\n            final File tmpFile = Files.createTempDirectory(\"fs-key-lock-test-\").toFile();\n            tmpFile.delete();\n            tmpFile.mkdir();\n            tmpFile.deleteOnExit();\n            return tmpFile.getCanonicalPath();\n        } catch (final Exception e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n    @Test\n    public void testReadLock() throws IOException {\n\n        final Path path = Paths.get(tempPathName(), \"lock\");\n        path.toFile().createNewFile();\n        assertTrue(\"File Created\", path.toFile().exists());\n        LockedFileChannel lock = LOCK_MANAGER.lockForReading(path);\n        lock.close();\n        lock = LOCK_MANAGER.lockForReading(path);\n\n        final ExecutorService exec = Executors.newSingleThreadExecutor();\n        final Future<Void> future = exec.submit(() -> {\n            LOCK_MANAGER.lockForWriting(path).close();\n            return null;\n        });\n\n        try {\n            System.out.println(\"Trying to acquire locked readable channel...\");\n            System.out.println(future.get(3, TimeUnit.SECONDS));\n            fail(\"Lock broken!\");\n        } catch (final TimeoutException e) {\n            System.out.println(\"Lock held!\");\n            future.cancel(true);\n        } catch (final InterruptedException | ExecutionException e) {\n            future.cancel(true);\n            System.out.println(\"Test 
was interrupted!\");\n        } finally {\n            lock.close();\n            Files.delete(path);\n        }\n\n        exec.shutdownNow();\n    }\n\n    @Test\n    public void testWriteLock() throws IOException {\n\n        final Path path = Paths.get(tempPathName(), \"lock\");\n        final LockedFileChannel lock = LOCK_MANAGER.lockForWriting(path);\n        System.out.println(\"locked\");\n\n        final ExecutorService exec = Executors.newSingleThreadExecutor();\n        final Future<Void> future = exec.submit(() -> {\n            LOCK_MANAGER.lockForReading(path).close();\n            return null;\n        });\n\n        try {\n            System.out.println(\"Trying to acquire locked writable channel...\");\n            System.out.println(future.get(3, TimeUnit.SECONDS));\n            fail(\"Lock broken!\");\n        } catch (final TimeoutException e) {\n            System.out.println(\"Lock held!\");\n            future.cancel(true);\n        } catch (final InterruptedException | ExecutionException e) {\n            future.cancel(true);\n            System.out.println(\"Test was interrupted!\");\n        } finally {\n            lock.close();\n            Files.delete(path);\n        }\n\n        exec.shutdownNow();\n    }\n\n    @Test\n    public void testFSLockRelease() throws IOException, ExecutionException, InterruptedException, TimeoutException {\n\n\n        final Path path = Paths.get(tempPathName(), \"lock\");\n        final ExecutorService exec = Executors.newFixedThreadPool(2);\n\n        // first thread acquires the lock, waits for 200ms then should release it\n        exec.submit(() -> {\n            try {\n                try(final LockedFileChannel lock = LOCK_MANAGER.lockForWriting(path)) {\n\t\t\t\t\tlock.size();\n                    Thread.sleep(200);\n                }\n            } catch (IOException e) {\n                e.printStackTrace();\n            } catch (InterruptedException e) {\n                e.printStackTrace();\n     
        }\n        });\n\n        // second thread waits for the lock.\n        // it should get it within a few seconds.\n        final Future<Void> future = exec.submit(() -> {\n            LOCK_MANAGER.lockForWriting(path).close();\n            return null;\n        });\n\n        future.get(3, TimeUnit.SECONDS);\n        Files.delete(path);\n        exec.shutdownNow();\n    }\n\n    @Test\n    public void testReadLockBehavior() throws IOException, InterruptedException, ExecutionException, TimeoutException {\n\n        final Path path = Paths.get(tempPathName(), \"read-lock\");\n        path.toFile().createNewFile();\n\n        final ExecutorService exec = Executors.newFixedThreadPool(3);\n\n        final AtomicBoolean v = new AtomicBoolean(false);\n\n        // first thread acquires a read lock, waits for 200ms\n        Future<?> f = exec.submit(() -> {\n            try {\n                try (final LockedFileChannel lock = LOCK_MANAGER.lockForReading(path)) {\n                    lock.size();\n                    Thread.sleep(200);\n\n                    // ensure that the other thread updated the value\n                    assertTrue(v.get());\n\n                }\n            } catch (IOException e) {\n                e.printStackTrace();\n            } catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        });\n\n        // second thread gets a read lock\n        // and should not be blocked\n        // this thread updates the boolean\n        exec.submit(() -> {\n            try (final LockedFileChannel lock = LOCK_MANAGER.lockForReading(path)) {\n                lock.size();\n                v.set(true);\n            }\n            return null;\n        });\n\n        f.get(3, TimeUnit.SECONDS);\n        exec.shutdownNow();\n        Files.delete(path);\n    }\n\n    @Test\n    public void testWriteLockBehavior() throws IOException, ExecutionException, InterruptedException, TimeoutException {\n\n\n        final Path path = 
Paths.get(tempPathName(), \"lock\");\n        final ExecutorService exec = Executors.newFixedThreadPool(2);\n\n        // first thread acquires the lock, waits for 200ms then should release it\n        exec.submit(() -> {\n            try {\n                try (final LockedFileChannel lock = LOCK_MANAGER.lockForWriting(path)) {\n                    lock.size();\n                    Thread.sleep(200);\n                }\n            } catch (IOException e) {\n                e.printStackTrace();\n            } catch (InterruptedException e) {\n                e.printStackTrace();\n            }\n        });\n\n        // second thread waits for the lock.\n        // it should get it within a few seconds.\n        final Future<Void> future = exec.submit(() -> {\n            LOCK_MANAGER.lockForWriting(path).close();\n            return null;\n        });\n\n        future.get(3, TimeUnit.SECONDS);\n        Files.delete(path);\n        exec.shutdownNow();\n    }\n}\n
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/N5Benchmark.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport static org.junit.Assert.fail;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.util.ArrayList;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\n\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\nimport ch.systemsx.cisd.base.mdarray.MDShortArray;\nimport ch.systemsx.cisd.hdf5.HDF5Factory;\nimport ch.systemsx.cisd.hdf5.HDF5IntStorageFeatures;\nimport ch.systemsx.cisd.hdf5.IHDF5ShortWriter;\nimport ch.systemsx.cisd.hdf5.IHDF5Writer;\nimport ij.IJ;\nimport ij.ImagePlus;\nimport ij.io.Opener;\nimport ij.process.ShortProcessor;\nimport net.imglib2.Cursor;\nimport net.imglib2.img.imageplus.ImagePlusImg;\nimport net.imglib2.img.imageplus.ImagePlusImgs;\nimport net.imglib2.type.numeric.integer.UnsignedShortType;\nimport net.imglib2.view.Views;\n\n/**\n *\n *\n * @author Stephan Saalfeld &lt;saalfelds@janelia.hhmi.org&gt;\n */\npublic class N5Benchmark {\n\n\tprivate static String testDirPath;\n\n\tstatic {\n\t\ttry {\n\t\t\ttestDirPath = Files.createTempDirectory(\"n5-benchmark-\").toFile().getCanonicalPath();\n\t\t} catch (final IOException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\tprivate static String datasetName = \"/dataset\";\n\n\tprivate static N5Writer n5;\n\n\tprivate static short[] data;\n\n\tprivate static final Compression[] compressions = {\n\t\t\tnew RawCompression(),\n\t\t\tnew Bzip2Compression(),\n\t\t\tnew GzipCompression(),\n\t\t\tnew Lz4Compression(),\n\t\t\tnew XzCompression()\n\t};\n\n\n\t/**\n\t * @throws java.lang.Exception\n\t */\n\t@BeforeClass\n\tpublic static void setUpBeforeClass() throws Exception {\n\n\t\tfinal File testDir = new File(testDirPath);\n\t\ttestDir.mkdirs();\n\t\tif (!(testDir.exists() && 
testDir.isDirectory()))\n\t\t\tthrow new IOException(\"Could not create benchmark directory for HDF5Utils benchmark.\");\n\n\t\tdata = new short[64 * 64 * 64];\n\t\tfinal ImagePlus imp = new Opener().openURL(\"https://imagej.net/ij/images/t1-head-raw.zip\");\n\t\tfinal ImagePlusImg<UnsignedShortType, ?> img = (ImagePlusImg<UnsignedShortType, ?>)(Object)ImagePlusImgs.from(imp);\n\t\tfinal Cursor<UnsignedShortType> cursor = Views.flatIterable(Views.interval(img, new long[]{100, 100, 30}, new long[]{163, 163, 93})).cursor();\n\t\tfor (int i = 0; i < data.length; ++i)\n\t\t\tdata[i] = (short)cursor.next().get();\n\n\t\tn5 = new N5FSWriter(testDirPath);\n\t}\n\n\t/**\n\t * @throws java.lang.Exception\n\t */\n\t@AfterClass\n\tpublic static void rampDownAfterClass() throws Exception {\n\n\t\tn5.remove(\"\");\n\t}\n\n\t/**\n\t * @throws java.lang.Exception\n\t */\n\t@Before\n\tpublic void setUp() throws Exception {}\n\n\t/**\n\t * Generates some files for documentation of the binary format.  Not a test.\n\t */\n//\t@Test\n\tpublic void testDocExample() {\n\n\t\tfinal short[] dataBlockData = new short[]{1, 2, 3, 4, 5, 6};\n\t\tfor (final Compression compression : compressions) {\n\t\t\ttry {\n\t\t\t\tfinal String compressedDatasetName = datasetName + \".\" + compression.getType();\n\t\t\t\tn5.createDataset(compressedDatasetName, new long[]{1, 2, 3}, new int[]{1, 2, 3}, DataType.UINT16, compression);\n\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(compressedDatasetName);\n\t\t\t\tfinal ShortArrayDataBlock dataBlock = new ShortArrayDataBlock(new int[]{1, 2, 3}, new long[]{0, 0, 0}, dataBlockData);\n\t\t\t\tn5.writeChunk(compressedDatasetName, attributes, dataBlock);\n\t\t\t} catch (final N5Exception e) {\n\t\t\t\tfail(e.getMessage());\n\t\t\t}\n\t\t}\n\t}\n\n//\t@Test\n\tpublic void benchmarkWritingSpeed() {\n\n\t\tfinal int nBlocks = 5;\n\n\t\tfor (int i = 0; i < 1; ++i) {\n\t\t\tfor (final Compression compression : compressions) {\n\n\t\t\t\tfinal long 
t = System.currentTimeMillis();\n\t\t\t\ttry {\n\t\t\t\t\tfinal String compressedDatasetName = datasetName + \".\" + compression.getType();\n\t\t\t\t\tn5.createDataset(compressedDatasetName, new long[]{64 * nBlocks, 64 * nBlocks, 64 * nBlocks}, new int[]{64, 64, 64}, DataType.UINT16, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(compressedDatasetName);\n\t\t\t\t\tfor (int z = 0; z < nBlocks; ++z)\n\t\t\t\t\t\tfor (int y = 0; y < nBlocks; ++y)\n\t\t\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\t\t\tfinal ShortArrayDataBlock dataBlock = new ShortArrayDataBlock(new int[]{64, 64, 64}, new long[]{x, y, z}, data);\n\t\t\t\t\t\t\t\tn5.writeChunk(compressedDatasetName, attributes, dataBlock);\n\t\t\t\t\t\t\t}\n\t\t\t\t} catch (final N5Exception e) {\n\t\t\t\t\tfail(e.getMessage());\n\t\t\t\t}\n\t\t\t\tSystem.out.println(String.format(\"%d : %s : %fs\", i, compression.getType(), 0.001 * (System.currentTimeMillis() - t)));\n\t\t\t}\n\n\t\t\t/* TIF blocks */\n\t\t\tlong t = System.currentTimeMillis();\n\t\t\tfinal String compressedDatasetName = testDirPath + \"/\" + datasetName + \".tif\";\n\t\t\tnew File(compressedDatasetName).mkdirs();\n\t\t\tfor (int z = 0; z < nBlocks; ++z)\n\t\t\t\tfor (int y = 0; y < nBlocks; ++y)\n\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\tfinal ImagePlus impBlock = new ImagePlus(\"\", new ShortProcessor(64, 64 * 64, data, null));\n\t\t\t\t\t\tIJ.saveAsTiff(impBlock, compressedDatasetName + \"/\" + x + \"-\" + y + \"-\" + z + \".tif\");\n\t\t\t\t\t}\n\t\t\tSystem.out.println(String.format(\"%d : tif : %fs\", i, 0.001 * (System.currentTimeMillis() - t)));\n\n\t\t\t/* HDF5 raw */\n\t\t\tt = System.currentTimeMillis();\n\t\t\tString hdf5Name = testDirPath + \"/\" + datasetName + \".h5\";\n\t\t\tIHDF5Writer hdf5Writer = HDF5Factory.open(hdf5Name);\n\t\t\tIHDF5ShortWriter uint16Writer = hdf5Writer.uint16();\n\t\t\tuint16Writer.createMDArray(\n\t\t\t\t\tdatasetName,\n\t\t\t\t\tnew long[]{64 
* nBlocks, 64 * nBlocks, 64 * nBlocks},\n\t\t\t\t\tnew int[]{64, 64, 64},\n\t\t\t\t\tHDF5IntStorageFeatures.INT_NO_COMPRESSION);\n\t\t\tfor (int z = 0; z < nBlocks; ++z)\n\t\t\t\tfor (int y = 0; y < nBlocks; ++y)\n\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\tfinal MDShortArray targetCell = new MDShortArray(data, new int[]{64, 64, 64});\n\t\t\t\t\t\tuint16Writer.writeMDArrayBlockWithOffset(datasetName, targetCell, new long[]{64 * z, 64 * y, 64 * x});\n\t\t\t\t\t}\n\t\t\tSystem.out.println(String.format(\"%d : hdf5 raw : %fs\", i, 0.001 * (System.currentTimeMillis() - t)));\n\t\t\tnew File(hdf5Name).delete();\n\n\t\t\t/* HDF5 gzip */\n\t\t\tt = System.currentTimeMillis();\n\t\t\thdf5Name = testDirPath + \"/\" + datasetName + \".gz.h5\";\n\t\t\thdf5Writer = HDF5Factory.open(hdf5Name);\n\t\t\tuint16Writer = hdf5Writer.uint16();\n\t\t\tuint16Writer.createMDArray(\n\t\t\t\t\tdatasetName,\n\t\t\t\t\tnew long[]{64 * nBlocks, 64 * nBlocks, 64 * nBlocks},\n\t\t\t\t\tnew int[]{64, 64, 64},\n\t\t\t\t\tHDF5IntStorageFeatures.INT_AUTO_SCALING_DEFLATE);\n\t\t\tfor (int z = 0; z < nBlocks; ++z)\n\t\t\t\tfor (int y = 0; y < nBlocks; ++y)\n\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\tfinal MDShortArray targetCell = new MDShortArray(data, new int[]{64, 64, 64});\n\t\t\t\t\t\tuint16Writer.writeMDArrayBlockWithOffset(datasetName, targetCell, new long[]{64 * z, 64 * y, 64 * x});\n\t\t\t\t\t}\n\t\t\tSystem.out.println(String.format(\"%d : hdf5 gzip : %fs\", i, 0.001 * (System.currentTimeMillis() - t)));\n\t\t\tnew File(hdf5Name).delete();\n\t\t}\n\t}\n\n\t@Test\n\t@Ignore\n\tpublic void benchmarkParallelWritingSpeed() {\n\n\t\tfinal int nBlocks = 5;\n\n\t\tfor (int i = 1; i <= 16; i *= 2 ) {\n\n\t\t\tSystem.out.println( i + \" threads.\");\n\n\t\t\tfinal ExecutorService exec = Executors.newFixedThreadPool(i);\n\t\t\tfinal ArrayList<Future<Boolean>> futures = new ArrayList<>();\n\t\t\tlong t;\n\n\t\t\tfor (final Compression compression : compressions) 
{\n\t\t\t\tt = System.currentTimeMillis();\n\t\t\t\ttry {\n\t\t\t\t\tfinal String compressedDatasetName = datasetName + \".\" + compression.getType();\n\t\t\t\t\tn5.createDataset(compressedDatasetName, new long[]{64 * nBlocks, 64 * nBlocks, 64 * nBlocks}, new int[]{64, 64, 64}, DataType.UINT16, compression);\n\t\t\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(compressedDatasetName);\n\t\t\t\t\tfor (int z = 0; z < nBlocks; ++z) {\n\t\t\t\t\t\tfinal int fz = z;\n\t\t\t\t\t\tfor (int y = 0; y < nBlocks; ++y) {\n\t\t\t\t\t\t\tfinal int fy = y;\n\t\t\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\t\t\tfinal int fx = x;\n\t\t\t\t\t\t\t\tfutures.add(\n\t\t\t\t\t\t\t\t\t\texec.submit(\n\t\t\t\t\t\t\t\t\t\t\t\t() -> {\n\t\t\t\t\t\t\t\t\t\t\t\t\tfinal ShortArrayDataBlock dataBlock = new ShortArrayDataBlock(new int[]{64, 64, 64}, new long[]{fx, fy, fz}, data);\n\t\t\t\t\t\t\t\t\t\t\t\t\tn5.writeChunk(compressedDatasetName, attributes, dataBlock);\n\t\t\t\t\t\t\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t\t\t\t\t\t\t}));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tfor (final Future<Boolean> f : futures)\n\t\t\t\t\t\tf.get();\n\n\t\t\t\t\tSystem.out.println(String.format(\"%d : %s : %fs\", i, compression.getType(), 0.001 * (System.currentTimeMillis() - t)));\n\t\t\t\t} catch (final N5Exception | InterruptedException | ExecutionException e) {\n\t\t\t\t\tfail(e.getMessage());\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* TIF blocks */\n\t\t\tfutures.clear();\n\t\t\tt = System.currentTimeMillis();\n\t\t\tfinal String compressedDatasetName = testDirPath + \"/\" + datasetName + \".tif\";\n\t\t\ttry {\n\t\t\t\tnew File(compressedDatasetName).mkdirs();\n\t\t\t\tfor (int z = 0; z < nBlocks; ++z) {\n\t\t\t\t\tfinal int fz = z;\n\t\t\t\t\tfor (int y = 0; y < nBlocks; ++y) {\n\t\t\t\t\t\tfinal int fy = y;\n\t\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\t\tfinal int fx = 
x;\n\t\t\t\t\t\t\tfutures.add(\n\t\t\t\t\t\t\t\t\texec.submit(\n\t\t\t\t\t\t\t\t\t\t\t() -> {\n\t\t\t\t\t\t\t\t\t\t\t\tfinal ImagePlus impBlock = new ImagePlus(\"\", new ShortProcessor(64, 64 * 64, data, null));\n\t\t\t\t\t\t\t\t\t\t\t\tIJ.saveAsTiff(impBlock, compressedDatasetName + \"/\" + fx + \"-\" + fy + \"-\" + fz + \".tif\");\n\t\t\t\t\t\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t\t\t\t\t\t}));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (final Future<Boolean> f : futures)\n\t\t\t\t\tf.get();\n\n\t\t\t\tSystem.out.println(String.format(\"%d : tif : %fs\", i, 0.001 * (System.currentTimeMillis() - t)));\n\t\t\t} catch (final InterruptedException | ExecutionException e) {\n\t\t\t\tfail(e.getMessage());\n\t\t\t}\n\n\t\t\t/* HDF5 raw */\n\t\t\tfutures.clear();\n\t\t\tt = System.currentTimeMillis();\n\t\t\ttry {\n\t\t\t\tfinal String hdf5Name = testDirPath + \"/\" + datasetName + \".h5\";\n\t\t\t\tfinal IHDF5Writer hdf5Writer = HDF5Factory.open( hdf5Name );\n\t\t\t\tfinal IHDF5ShortWriter uint16Writer = hdf5Writer.uint16();\n\t\t\t\tuint16Writer.createMDArray(\n\t\t\t\t\t\tdatasetName,\n\t\t\t\t\t\tnew long[]{64 * nBlocks, 64 * nBlocks, 64 * nBlocks},\n\t\t\t\t\t\tnew int[]{64, 64, 64},\n\t\t\t\t\t\tHDF5IntStorageFeatures.INT_NO_COMPRESSION);\n\t\t\t\tfor (int z = 0; z < nBlocks; ++z) {\n\t\t\t\t\tfinal int fz = z;\n\t\t\t\t\tfor (int y = 0; y < nBlocks; ++y) {\n\t\t\t\t\t\tfinal int fy = y;\n\t\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\t\tfinal int fx = x;\n\t\t\t\t\t\t\tfutures.add(\n\t\t\t\t\t\t\t\t\texec.submit(\n\t\t\t\t\t\t\t\t\t\t\t() -> {\n\t\t\t\t\t\t\t\t\t\t\t\tfinal MDShortArray targetCell = new MDShortArray(data, new int[]{64, 64, 64});\n\t\t\t\t\t\t\t\t\t\t\t\tuint16Writer.writeMDArrayBlockWithOffset(datasetName, targetCell, new long[]{64 * fz, 64 * fy, 64 * fx});\n\t\t\t\t\t\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t\t\t\t\t\t}));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (final Future<Boolean> f : 
futures)\n\t\t\t\t\tf.get();\n\n\t\t\t\thdf5Writer.close();\n\t\t\t\tSystem.out.println(String.format(\"%d : hdf5 raw : %fs\", i, 0.001 * (System.currentTimeMillis() - t)));\n\t\t\t\tnew File(hdf5Name).delete();\n\t\t\t} catch (final InterruptedException | ExecutionException e) {\n\t\t\t\tfail(e.getMessage());\n\t\t\t}\n\n\t\t\t/* HDF5 gzip */\n\t\t\tfutures.clear();\n\t\t\tt = System.currentTimeMillis();\n\t\t\ttry {\n\t\t\t\tfinal String hdf5Name = testDirPath + \"/\" + datasetName + \".gz.h5\";\n\t\t\t\tfinal IHDF5Writer hdf5Writer = HDF5Factory.open( hdf5Name );\n\t\t\t\tfinal IHDF5ShortWriter uint16Writer = hdf5Writer.uint16();\n\t\t\t\tuint16Writer.createMDArray(\n\t\t\t\t\t\tdatasetName,\n\t\t\t\t\t\tnew long[]{64 * nBlocks, 64 * nBlocks, 64 * nBlocks},\n\t\t\t\t\t\tnew int[]{64, 64, 64},\n\t\t\t\t\t\tHDF5IntStorageFeatures.INT_AUTO_SCALING_DEFLATE);\n\t\t\t\tfor (int z = 0; z < nBlocks; ++z) {\n\t\t\t\t\tfinal int fz = z;\n\t\t\t\t\tfor (int y = 0; y < nBlocks; ++y) {\n\t\t\t\t\t\tfinal int fy = y;\n\t\t\t\t\t\tfor (int x = 0; x < nBlocks; ++x) {\n\t\t\t\t\t\t\tfinal int fx = x;\n\t\t\t\t\t\t\tfutures.add(\n\t\t\t\t\t\t\t\t\texec.submit(\n\t\t\t\t\t\t\t\t\t\t\t() -> {\n\t\t\t\t\t\t\t\t\t\t\t\tfinal MDShortArray targetCell = new MDShortArray(data, new int[]{64, 64, 64});\n\t\t\t\t\t\t\t\t\t\t\t\tuint16Writer.writeMDArrayBlockWithOffset(datasetName, targetCell, new long[]{64 * fz, 64 * fy, 64 * fx});\n\t\t\t\t\t\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t\t\t\t\t\t}));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (final Future<Boolean> f : futures)\n\t\t\t\t\tf.get();\n\n\t\t\t\thdf5Writer.close();\n\t\t\t\tSystem.out.println(String.format(\"%d : hdf5 gzip : %fs\", i, 0.001 * (System.currentTimeMillis() - t)));\n\t\t\t\tnew File(hdf5Name).delete();\n\t\t\t} catch (final InterruptedException | ExecutionException e) {\n\t\t\t\tfail(e.getMessage());\n\t\t\t}\n\n\t\t\texec.shutdown();\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/N5CachedFSTest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertThrows;\nimport static org.junit.Assert.assertTrue;\n\nimport java.io.IOException;\nimport java.net.URISyntaxException;\nimport java.nio.file.FileSystems;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Set;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\n\nimport org.junit.Test;\n\nimport com.google.gson.GsonBuilder;\nimport com.google.gson.JsonElement;\n\npublic class N5CachedFSTest extends N5FSTest {\n\n\t@Override\n\tprotected N5Writer createN5Writer(final String location, final GsonBuilder gson) throws IOException, URISyntaxException {\n\n\t\treturn createN5Writer(location, gson, true);\n\t}\n\n\t@Override\n\tprotected N5Reader createN5Reader(final String location, final GsonBuilder gson) throws IOException, URISyntaxException {\n\n\t\treturn createN5Reader(location, gson, true);\n\t}\n\n\tprotected N5Writer createN5Writer(final String location, final GsonBuilder gson, final boolean cache) throws IOException, URISyntaxException {\n\n\t\treturn new N5FSWriter(location, gson, cache);\n\t}\n\n\tprotected N5Writer createN5Writer(final String location, final boolean cache) throws IOException, URISyntaxException {\n\n\t\treturn createN5Writer(location, new GsonBuilder(), cache);\n\t}\n\n\tprotected N5Reader createN5Reader(final String location, final GsonBuilder gson, final boolean cache) throws IOException, URISyntaxException {\n\n\t\treturn new N5FSReader(location, gson, cache);\n\t}\n\n\t@Override protected N5Writer createN5Writer() throws IOException, URISyntaxException {\n\n\t\treturn new N5FSWriter(tempN5Location(), new GsonBuilder(), true) {\n\t\t\t@Override public void close() 
{\n\n\t\t\t\tsuper.close();\n\t\t\t\tremove();\n\t\t\t}\n\t\t};\n\t}\n\n\t@Test\n\tpublic void cacheTest() throws IOException, URISyntaxException {\n\t\t/* Test the cache by setting many attributes, then manually deleting the underlying file.\n\t\t* The only possible way for the test to succeed is if it never again attempts to read the file, and relies on the cache. */\n\t\ttry (N5KeyValueWriter n5 = (N5KeyValueWriter) createN5Writer()) {\n\t\t\tfinal String cachedGroup = \"cachedGroup\";\n\t\t\tfinal String attributesPath = n5.absoluteAttributesPath(cachedGroup);\n\n\n\t\t\tfinal ArrayList<TestData<?>> tests = new ArrayList<>();\n\t\t\tn5.createGroup(cachedGroup);\n\t\t\taddAndTest(n5, tests, new TestData<>(cachedGroup, \"a/b/c\", 100));\n\t\t\taddAndTest(n5, tests, new TestData<>(cachedGroup, \"a/a[5]\", \"asdf\"));\n\t\t\taddAndTest(n5, tests, new TestData<>(cachedGroup, \"a/a[2]\", 0));\n\n\t\t\tFiles.delete(Paths.get(attributesPath));\n\t\t\trunTests(n5, tests);\n\t\t}\n\n\t\ttry (N5KeyValueWriter n5 = (N5KeyValueWriter)createN5Writer(tempN5Location(), false)) {\n\t\t\tfinal String cachedGroup = \"cachedGroup\";\n\t\t\tfinal String attributesPath = n5.absoluteAttributesPath(cachedGroup);\n\n\t\t\tfinal ArrayList<TestData<?>> tests = new ArrayList<>();\n\t\t\tn5.createGroup(cachedGroup);\n\t\t\taddAndTest(n5, tests, new TestData<>(cachedGroup, \"a/b/c\", 100));\n\t\t\taddAndTest(n5, tests, new TestData<>(cachedGroup, \"a/a[5]\", \"asdf\"));\n\t\t\taddAndTest(n5, tests, new TestData<>(cachedGroup, \"a/a[2]\", 0));\n\n\t\t\tFiles.delete(Paths.get(attributesPath));\n\t\t\tassertThrows(AssertionError.class, () -> runTests(n5, tests));\n\t\t\tn5.remove();\n\t\t}\n\t}\n\n\t@Test\n\tpublic void cacheGroupDatasetTest() throws IOException, URISyntaxException {\n\n\t\tfinal String datasetName = \"dd\";\n\t\tfinal String groupName = \"gg\";\n\n\t\tfinal String tmpLocation = tempN5Location();\n\t\ttry (GsonKeyValueN5Writer w1 = (GsonKeyValueN5Writer) 
createN5Writer(tmpLocation);\n\t\t\t\tGsonKeyValueN5Writer w2 = (GsonKeyValueN5Writer) createN5Writer(tmpLocation)) {\n\n\t\t\t// create a group, both writers know it exists\n\t\t\tw1.createGroup(groupName);\n\t\t\tassertTrue(w1.exists(groupName));\n\t\t\tassertTrue(w2.exists(groupName));\n\n\t\t\t// one writer removes the group\n\t\t\tw2.remove(groupName);\n\t\t\tassertTrue(w1.exists(groupName));\t// w1's cache thinks group still exists\n\t\t\tassertFalse(w2.exists(groupName));\t// w2 knows group has been removed\n\n\t\t\t// create a dataset\n\t\t\tw1.createDataset(datasetName, dimensions, blockSize, DataType.UINT8, new RawCompression());\n\t\t\tassertTrue(w1.exists(datasetName));\n\t\t\tassertTrue(w2.exists(datasetName));\n\n\t\t\tassertNotNull(w1.getDatasetAttributes(datasetName));\n\t\t\tassertNotNull(w2.getDatasetAttributes(datasetName));\n\n\t\t\t// one writer removes the dataset\n\t\t\tw2.remove(datasetName);\n\t\t\tassertTrue(w1.exists(datasetName));  // w1's cache thinks dataset still exists\n\t\t\tassertFalse(w2.exists(datasetName)); // w2 knows dataset has been removed\n\n\t\t\tassertNotNull(w1.getDatasetAttributes(datasetName));\n\t\t\tassertNull(w2.getDatasetAttributes(datasetName));\n\n\t\t\tw1.remove();\n\t\t}\n\t}\n\n\t@Test\n\tpublic void cacheBehaviorTest() throws IOException, URISyntaxException {\n\n\t\tfinal String loc = tempN5Location();\n\t\t// make a cached n5 writer\n\t\ttry (final N5TrackingStorage n5 = new N5TrackingStorage(new FileSystemKeyValueAccess(FileSystems.getDefault()), loc,\n\t\t\t\tnew GsonBuilder(), true)) {\n\n\t\t\tcacheBehaviorHelper(n5);\n\t\t\tn5.remove();\n\t\t}\n\t}\n\n\tpublic static void cacheBehaviorHelper(final TrackingStorage n5) throws IOException, URISyntaxException {\n\n\t\t// non-existent groups\n\t\tfinal String groupA = \"groupA\";\n\t\tfinal String groupB = \"groupB\";\n\n\t\t// expected backend method call counts\n\t\tint expectedExistCount = 0;\n\t\tfinal int expectedGroupCount = 0;\n\t\tfinal int expectedDatasetCount = 0;\n\t\tint 
expectedAttributeCount = 0;\n\t\tint expectedListCount = 0;\n\t\tint expectedWriteAttributeCount = 0;\n\n\t\tboolean exists = n5.exists(groupA);\n\t\tboolean groupExists = n5.groupExists(groupA);\n\t\tboolean datasetExists = n5.datasetExists(groupA);\n\t\tassertFalse(exists); // group does not exist\n\t\tassertFalse(groupExists); // group does not exist\n\t\tassertFalse(datasetExists); // dataset does not exist\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.createGroup(groupA);\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t// group B\n\t\texists = n5.exists(groupB);\n\t\tgroupExists = n5.groupExists(groupB);\n\t\tdatasetExists = n5.datasetExists(groupB);\n\t\tassertFalse(exists); // group does not exist\n\t\tassertFalse(groupExists); // group does not exist\n\t\tassertFalse(datasetExists); // dataset does not exist\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\texists = n5.exists(groupA);\n\t\tgroupExists = n5.groupExists(groupA);\n\t\tdatasetExists = n5.datasetExists(groupA);\n\t\tassertTrue(exists); // group now exists\n\t\tassertTrue(groupExists); // group now exists\n\t\tassertFalse(datasetExists); // dataset does not exist\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, 
n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tfinal String cachedGroup = \"cachedGroup\";\n\t\t// should not check existence when creating a group\n\t\tn5.createGroup(cachedGroup);\n\t\tn5.createGroup(cachedGroup); // be annoying\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t// should not check existence when this instance created a group\n\t\tn5.exists(cachedGroup);\n\t\tn5.groupExists(cachedGroup);\n\t\tn5.datasetExists(cachedGroup);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t// should not read attributes from container when setting them\n\t\tn5.setAttribute(cachedGroup, \"one\", 1);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(++expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.setAttribute(cachedGroup, \"two\", 2);\n\t\tassertEquals(expectedExistCount, 
n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(++expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.removeAttribute(cachedGroup, \"one\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(++expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.removeAttribute(cachedGroup, \"one\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.removeAttribute(cachedGroup, \"cow\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.removeAttribute(cachedGroup, \"two\", Integer.class);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, 
n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(++expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.list(\"\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(++expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.list(cachedGroup);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(++expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t/*\n\t\t * Check existence of groups that this reader has not created. If a group\n\t\t * does not exist, isGroup and isDataset must be false, so the backend need\n\t\t * not be called.\n\t\t *\n\t\t * Similarly, attributes cannot exist for a non-existent group, so we should\n\t\t * not attempt to get attributes from the container.\n\t\t *\n\t\t * Finally, listing a non-existent group is pointless, so don't call the\n\t\t * backend storage.\n\t\t */\n\t\tfinal String nonExistentGroup = \"doesNotExist\";\n\t\tn5.exists(nonExistentGroup);\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, 
n5.getWriteAttrCallCount());\n\n\t\tn5.groupExists(nonExistentGroup);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.datasetExists(nonExistentGroup);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.getAttributes(nonExistentGroup);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tassertThrows(N5Exception.class, () -> n5.list(nonExistentGroup));\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tfinal String a = \"a\";\n\t\tfinal String ab = \"a/b\";\n\t\tfinal String abc = \"a/b/c\";\n\t\t// create 
\"a/b/c\"\n\t\tn5.createGroup(abc);\n\t\tassertTrue(n5.exists(abc));\n\t\tassertTrue(n5.groupExists(abc));\n\t\tassertFalse(n5.datasetExists(abc));\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t// ensure that backend need not be checked when testing existence of \"a/b\"\n\t\t// TODO how does this work\n\t\tassertTrue(n5.exists(ab));\n\t\tassertTrue(n5.groupExists(ab));\n\t\tassertFalse(n5.datasetExists(ab));\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t// remove a nested group\n\t\t// checks for all children should not require a backend check\n\t\tn5.remove(a);\n\t\tassertFalse(n5.exists(a));\n\t\tassertFalse(n5.groupExists(a));\n\t\tassertFalse(n5.datasetExists(a));\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tassertFalse(n5.exists(ab));\n\t\tassertFalse(n5.groupExists(ab));\n\t\tassertFalse(n5.datasetExists(ab));\n\t\tassertEquals(expectedExistCount, 
n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tassertFalse(n5.exists(abc));\n\t\tassertFalse(n5.groupExists(abc));\n\t\tassertFalse(n5.datasetExists(abc));\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tn5.createGroup(\"a\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\t\tn5.createGroup(\"a/a\");\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\t\tn5.createGroup(\"a/b\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\t\tn5.createGroup(\"a/c\");\n\t\tassertEquals(++expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\tfinal Set<String> abcListSet = Arrays.stream(n5.list(\"a\")).collect(Collectors.toSet());\n\t\tassertEquals(Stream.of(\"a\", \"b\", \"c\").collect(Collectors.toSet()), abcListSet);\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, 
n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(++expectedListCount, n5.getListCallCount()); // list incremented\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t// remove a/a\n\t\tn5.remove(\"a/a\");\n\t\tfinal Set<String> bc = Arrays.stream(n5.list(\"a\")).collect(Collectors.toSet());\n\t\tassertEquals(Stream.of(\"b\", \"c\").collect(Collectors.toSet()), bc);\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(expectedGroupCount, n5.getGroupCallCount());\n\t\tassertEquals(expectedDatasetCount, n5.getDatasetCallCount());\n\t\tassertEquals(expectedAttributeCount, n5.getAttrCallCount());\n\t\tassertEquals(expectedListCount, n5.getListCallCount()); // list NOT incremented\n\t\tassertEquals(expectedWriteAttributeCount, n5.getWriteAttrCallCount());\n\n\t\t/* Checking exists should only increment the exist count if attributes do not exist. Create a new writer and inject\n\t\t * a new group with attributes unbeknownst to this writer. */\n\t\ttry (N5Writer writer = new N5FSWriter(n5.getURI().toString(), false)) {\n\t\t\twriter.createGroup(\"sneaky_group\");\n\t\t\twriter.setAttribute(\"sneaky_group\", \"sneaky_attribute\", \"BOO!\");\n\t\t}\n\t\tn5.exists(\"sneaky_group\");\n\t\tassertEquals(expectedExistCount, n5.getExistCallCount());\n\t\tassertEquals(++expectedAttributeCount, n5.getAttrCallCount());\n\n\t\t// TODO repeat the above exercise when creating dataset\n\t}\n\n\tpublic static interface TrackingStorage extends CachedGsonKeyValueN5Writer {\n\n\t\tpublic int getAttrCallCount();\n\t\tpublic int getExistCallCount();\n\t\tpublic int getGroupCallCount();\n\t\tpublic int getGroupAttrCallCount();\n\t\tpublic int getDatasetCallCount();\n\t\tpublic int getDatasetAttrCallCount();\n\t\tpublic int getListCallCount();\n\t\tpublic int getWriteAttrCallCount();\n\t}\n\n\tpublic static class N5TrackingStorage extends N5KeyValueWriter implements TrackingStorage 
{\n\n\t\tpublic int attrCallCount = 0;\n\t\tpublic int existsCallCount = 0;\n\t\tpublic int groupCallCount = 0;\n\t\tpublic int groupAttrCallCount = 0;\n\t\tpublic int datasetCallCount = 0;\n\t\tpublic int datasetAttrCallCount = 0;\n\t\tpublic int listCallCount = 0;\n\t\tpublic int writeAttrCallCount = 0;\n\n\t\tpublic N5TrackingStorage(final KeyValueAccess keyValueAccess, final String basePath,\n\t\t\t\tfinal GsonBuilder gsonBuilder, final boolean cacheAttributes) throws IOException {\n\n\t\t\tsuper(keyValueAccess, basePath, gsonBuilder, cacheAttributes);\n\t\t}\n\n\t\t@Override\n\t\tpublic JsonElement getAttributesFromContainer(final String key, final String cacheKey) {\n\t\t\tattrCallCount++;\n\t\t\treturn super.getAttributesFromContainer(key, cacheKey);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean existsFromContainer(final String path, final String cacheKey) {\n\t\t\texistsCallCount++;\n\t\t\treturn super.existsFromContainer(path, cacheKey);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isGroupFromContainer(final String key) {\n\t\t\tgroupCallCount++;\n\t\t\treturn super.isGroupFromContainer(key);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isGroupFromAttributes(final String normalCacheKey, final JsonElement attributes) {\n\t\t\tgroupAttrCallCount++;\n\t\t\treturn super.isGroupFromAttributes(normalCacheKey, attributes);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isDatasetFromContainer(final String key) {\n\t\t\tdatasetCallCount++;\n\t\t\treturn super.isDatasetFromContainer(key);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isDatasetFromAttributes(final String normalCacheKey, final JsonElement attributes) {\n\t\t\tdatasetAttrCallCount++;\n\t\t\treturn super.isDatasetFromAttributes(normalCacheKey, attributes);\n\t\t}\n\n\t\t@Override\n\t\tpublic String[] listFromContainer(final String key) {\n\t\t\tlistCallCount++;\n\t\t\treturn super.listFromContainer(key);\n\t\t}\n\n\t\t@Override public void writeAttributes(String normalGroupPath, JsonElement attributes) throws 
N5Exception {\n\t\t\twriteAttrCallCount++;\n\t\t\tsuper.writeAttributes(normalGroupPath, attributes);\n\t\t}\n\n\t\t@Override\n\t\tpublic int getAttrCallCount() {\n\t\t\treturn attrCallCount;\n\t\t}\n\n\t\t@Override\n\t\tpublic int getExistCallCount() {\n\t\t\treturn existsCallCount;\n\t\t}\n\n\t\t@Override\n\t\tpublic int getGroupCallCount() {\n\t\t\treturn groupCallCount;\n\t\t}\n\n\t\t@Override\n\t\tpublic int getGroupAttrCallCount() {\n\t\t\treturn groupAttrCallCount;\n\t\t}\n\n\t\t@Override\n\t\tpublic int getDatasetCallCount() {\n\t\t\treturn datasetCallCount;\n\t\t}\n\n\t\t@Override\n\t\tpublic int getDatasetAttrCallCount() {\n\t\t\treturn datasetAttrCallCount;\n\t\t}\n\n\t\t@Override\n\t\tpublic int getListCallCount() {\n\t\t\treturn listCallCount;\n\t\t}\n\n\t\t@Override public int getWriteAttrCallCount() {\n\t\t\treturn writeAttrCallCount;\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/N5FSTest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertThat;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.file.FileSystems;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.TimeoutException;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\nimport org.junit.Test;\n\nimport com.google.gson.GsonBuilder;\n\n/**\n * Initiates testing of the filesystem-based N5 implementation.\n *\n * @author Stephan Saalfeld\n * @author Igor Pisarev\n */\npublic class N5FSTest extends AbstractN5Test {\n\n\tprivate static String tempN5PathName() {\n\n\t\ttry {\n\t\t\tfinal File tmpFile = Files.createTempDirectory(\"n5-test-\").toFile();\n\t\t\ttmpFile.delete();\n\t\t\ttmpFile.mkdir();\n\t\t\ttmpFile.deleteOnExit();\n\t\t\treturn tmpFile.getCanonicalPath();\n\t\t} catch (final Exception e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\t@Override\n\tprotected String tempN5Location() throws URISyntaxException {\n\n\t\tfinal String basePath = new File(tempN5PathName()).toURI().normalize().getPath();\n\t\treturn new URI(\"file\", null, basePath, null).toString();\n\t}\n\n\t@Override\n\tprotected N5Writer createN5Writer() throws IOException, URISyntaxException {\n\n\t\treturn createN5Writer(tempN5Location(), new GsonBuilder());\n\t}\n\n\t@Override\n\tprotected N5Writer createN5Writer(\n\t\t\tfinal String location,\n\t\t\tfinal GsonBuilder gson) throws IOException, URISyntaxException {\n\n\t\treturn new N5FSWriter(location, gson);\n\t}\n\n\t@Override\n\tprotected 
N5Reader createN5Reader(\n\t\t\tfinal String location,\n\t\t\tfinal GsonBuilder gson) throws IOException, URISyntaxException {\n\n\t\treturn new N5FSReader(location, gson);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/N5ReadBenchmark.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport com.google.gson.GsonBuilder;\nimport java.io.File;\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.util.Random;\nimport java.util.concurrent.TimeUnit;\nimport org.openjdk.jmh.annotations.Benchmark;\nimport org.openjdk.jmh.annotations.BenchmarkMode;\nimport org.openjdk.jmh.annotations.Fork;\nimport org.openjdk.jmh.annotations.Level;\nimport org.openjdk.jmh.annotations.Mode;\nimport org.openjdk.jmh.annotations.OutputTimeUnit;\nimport org.openjdk.jmh.annotations.Scope;\nimport org.openjdk.jmh.annotations.Setup;\nimport org.openjdk.jmh.annotations.State;\nimport org.openjdk.jmh.runner.Runner;\nimport org.openjdk.jmh.runner.RunnerException;\nimport org.openjdk.jmh.runner.options.Options;\nimport org.openjdk.jmh.runner.options.OptionsBuilder;\nimport org.openjdk.jmh.runner.options.TimeValue;\n\n@State( Scope.Thread )\n@Fork( 1 )\npublic class N5ReadBenchmark {\n\n\n\n\tprivate static String tempN5PathName() {\n\t\ttry {\n\t\t\tfinal File tmpFile = Files.createTempDirectory(\"n5-test-\").toFile();\n\t\t\ttmpFile.deleteOnExit();\n\t\t\treturn tmpFile.getCanonicalPath();\n\t\t} catch (final Exception e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\tprivate static final String basePath = tempN5PathName();\n\tprivate static final String datasetName = \"/test/group/dataset\";\n\tprivate static final long[] dimensions = new long[]{640, 640, 640};\n\tprivate static final int[] blockSize = new int[]{64, 64, 64};\n\n\tprivate static byte[] byteBlock;\n\tprivate static short[] shortBlock;\n\tprivate static int[] intBlock;\n\tprivate static long[] longBlock;\n\tprivate static float[] floatBlock;\n\tprivate static double[] doubleBlock;\n\n\tprivate static void createData() {\n\t\tfinal Random rnd = new Random();\n\t\tfinal int blockNumElements = DataBlock.getNumElements(blockSize);\n\t\tbyteBlock = new byte[blockNumElements];\n\t\tshortBlock = new short[blockNumElements];\n\t\tintBlock = 
new int[blockNumElements];\n\t\tlongBlock = new long[blockNumElements];\n\t\tfloatBlock = new float[blockNumElements];\n\t\tdoubleBlock = new double[blockNumElements];\n\t\trnd.nextBytes(byteBlock);\n\t\tfor (int i = 0; i < blockNumElements; ++i) {\n\t\t\tshortBlock[i] = (short)rnd.nextInt();\n\t\t\tintBlock[i] = rnd.nextInt();\n\t\t\tlongBlock[i] = rnd.nextLong();\n\t\t\tfloatBlock[i] = Float.intBitsToFloat(rnd.nextInt());\n\t\t\tdoubleBlock[i] = Double.longBitsToDouble(rnd.nextLong());\n\t\t}\n\t}\n\n\tprivate static N5Writer createTempN5Writer() {\n\t\treturn new N5FSWriter(basePath, new GsonBuilder());\n\t}\n\n\tprivate static N5Reader n5;\n\tprivate static DatasetAttributes attributes;\n\n\t@Setup(Level.Trial)\n\tpublic void setup() {\n\t\tcreateData();\n\t\tSystem.out.println(\"basePath = \" + basePath);\n\n\t\ttry (final N5Writer n5 = createTempN5Writer()) {\n//\t\t\tfinal Compression compression = new Lz4Compression();\n\t\t\tfinal Compression compression = new RawCompression();\n\t\t\tn5.createDataset(datasetName, dimensions, blockSize, DataType.FLOAT64, compression);\n\t\t\tfinal DatasetAttributes attributes = n5.getDatasetAttributes(datasetName);\n\t\t\tfinal DoubleArrayDataBlock dataBlock = new DoubleArrayDataBlock(blockSize, new long[] {0, 0, 0}, doubleBlock);\n\t\t\tn5.writeChunk(datasetName, attributes, dataBlock);\n\t\t}\n\n\t\tn5 = new N5FSReader(basePath, new GsonBuilder());\n\t\tattributes = n5.getDatasetAttributes(datasetName);\n\t}\n\n//\t@TearDown(Level.Trial)\n//\tpublic void teardown() {\n//\t}\n\n\t@Benchmark\n\t@BenchmarkMode( Mode.AverageTime )\n\t@OutputTimeUnit( TimeUnit.MILLISECONDS )\n\tpublic void bench() {\n\t\tn5.readChunk(datasetName, attributes, 0, 0, 0);\n\t}\n\n\tpublic static void main( final String... 
args ) throws RunnerException, IOException\n\t{\n\t\tfinal Options opt = new OptionsBuilder()\n\t\t\t\t.include( N5ReadBenchmark.class.getSimpleName() )\n\t\t\t\t.warmupIterations( 8 )\n\t\t\t\t.measurementIterations( 8 )\n\t\t\t\t.warmupTime( TimeValue.milliseconds( 500 ) )\n\t\t\t\t.measurementTime( TimeValue.milliseconds( 500 ) )\n\t\t\t\t.build();\n\t\tnew Runner( opt ).run();\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/N5URITest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport org.junit.Test;\n\nimport java.net.URISyntaxException;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\npublic class N5URITest {\n\t@Test\n\tpublic void testAttributePath() {\n\n\t\tassertEquals(\"/a/b/c/d/e\", N5URI.normalizeAttributePath(\"/a/b/c/d/e\"));\n\t\tassertEquals(\"/a/b/c/d/e\", N5URI.normalizeAttributePath(\"/a/b/c/d/e/\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/\"));\n\t\tassertEquals(\"\", N5URI.normalizeAttributePath(\"\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/a/..\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/a/../b/../c/d/../..\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/a/../b/../c/d/../..\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/a/../b/../c/d/../..\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/./././././\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/./././././.\"));\n\t\tassertEquals(\"\", N5URI.normalizeAttributePath(\"./././././\"));\n\t\tassertEquals(\"a.\", N5URI.normalizeAttributePath(\"./a././././\"));\n\t\tassertEquals(\"\\\\\\\\.\", N5URI.normalizeAttributePath(\"./a./../\\\\\\\\././.\"));\n\t\tassertEquals(\"a./\\\\\\\\.\", N5URI.normalizeAttributePath(\"./a./\\\\\\\\././.\"));\n\t\tassertEquals(\"/a./\\\\\\\\.\", N5URI.normalizeAttributePath(\"/./a./\\\\\\\\././.\"));\n\n\t\tassertEquals(\"/a/[0]/b/[0]\", N5URI.normalizeAttributePath(\"/a/[0]/b[0]/\"));\n\t\tassertEquals(\"[0]\", N5URI.normalizeAttributePath(\"[0]\"));\n\t\tassertEquals(\"/[0]\", N5URI.normalizeAttributePath(\"/[0]/\"));\n\t\tassertEquals(\"/a/[0]\", N5URI.normalizeAttributePath(\"/a[0]/\"));\n\t\tassertEquals(\"/[0]/b\", N5URI.normalizeAttributePath(\"/[0]b/\"));\n\n\t\tassertEquals(\"[b]\", N5URI.normalizeAttributePath(\"[b]\"));\n\t\tassertEquals(\"a[b]c\", 
N5URI.normalizeAttributePath(\"a[b]c\"));\n\t\tassertEquals(\"a[bc\", N5URI.normalizeAttributePath(\"a[bc\"));\n\t\tassertEquals(\"ab]c\", N5URI.normalizeAttributePath(\"ab]c\"));\n\t\tassertEquals(\"a[b00]c\", N5URI.normalizeAttributePath(\"a[b00]c\"));\n\n\t\tassertEquals(\"[ ]\", N5URI.normalizeAttributePath(\"[ ]\"));\n\t\tassertEquals(\"[\\n]\", N5URI.normalizeAttributePath(\"[\\n]\"));\n\t\tassertEquals(\"[\\t]\", N5URI.normalizeAttributePath(\"[\\t]\"));\n\t\tassertEquals(\"[\\r\\n]\", N5URI.normalizeAttributePath(\"[\\r\\n]\"));\n\t\tassertEquals(\"[ ][\\n][\\t][ \\t \\n \\r\\n][\\r\\n]\", N5URI.normalizeAttributePath(\"[ ][\\n][\\t][ \\t \\n \\r\\n][\\r\\n]\"));\n\t\tassertEquals(\"[\\\\]\", N5URI.normalizeAttributePath(\"[\\\\]\"));\n\n\t\tassertEquals(\"let's/try/a/real/case/with spaces\", N5URI.normalizeAttributePath(\"let's/try/a/real/case/with spaces/\"));\n\t\tassertEquals(\"let's/try/a/real/case/with spaces\", N5URI.normalizeAttributePath(\"let's/try/a/real/////case////with spaces/\"));\n\t\tassertEquals(\"/first/relative/a/wd/.w/asd\", N5URI.normalizeAttributePath(\"/../first/relative/test/../a/b/.././wd///.w/asd\"));\n\t\tassertEquals(\"first/relative/a/wd/.w/asd\", N5URI.normalizeAttributePath(\"../first/relative/test/../a/b/.././wd///.w/asd\"));\n\t\tassertEquals(\"\", N5URI.normalizeAttributePath(\"../result/../only/../single/..\"));\n\t\tassertEquals(\"\", N5URI.normalizeAttributePath(\"../result/../multiple/../..\"));\n\t\tassertEquals(\"/\", N5URI.normalizeAttributePath(\"/../result/../multiple/../..\"));\n\n\t\t assertEquals(\"b\", N5URI.normalizeAttributePath(\"a/../b\"));\n\t\t assertEquals(\"/b\", N5URI.normalizeAttributePath(\"/a/../b\"));\n\t\t assertEquals(\"b\", N5URI.normalizeAttributePath(\"../a/../b\"));\n\t\t assertEquals(\"/b\", N5URI.normalizeAttributePath(\"/../a/../b\"));\n\t\t assertEquals(\"/b\", N5URI.normalizeAttributePath(\"/../a/../../b\"));\n\n\t\tString normalizedPath = 
N5URI.normalizeAttributePath(\"let's/try/a/some/////with/ /\t//white spaces/\");\n\t\tassertEquals(\"Normalizing a normal path should be the identity\", normalizedPath, N5URI.normalizeAttributePath(normalizedPath));\n\t}\n\n\t@Test\n\tpublic void testEscapedAttributePaths() {\n\n\t\tassertEquals(\"\\\\/a\\\\/b\\\\/c\\\\/d\\\\/e\", N5URI.normalizeAttributePath(\"\\\\/a\\\\/b\\\\/c\\\\/d\\\\/e\"));\n\t\tassertEquals(\"/a\\\\\\\\/b/c\", N5URI.normalizeAttributePath(\"/a\\\\\\\\/b/c\"));\n\t\tassertEquals(\"a[b]\\\\[10]\", N5URI.normalizeAttributePath(\"a[b]\\\\[10]\"));\n\t\tassertEquals(\"\\\\[10]\", N5URI.normalizeAttributePath(\"\\\\[10]\"));\n\t\tassertEquals(\"a/[0]/\\\\[10]b\", N5URI.normalizeAttributePath(\"a[0]\\\\[10]b\"));\n\t\tassertEquals(\"[\\\\[10]\\\\[20]]\", N5URI.normalizeAttributePath(\"[\\\\[10]\\\\[20]]\"));\n\t\tassertEquals(\"[\\\\[10]/[20]/]\", N5URI.normalizeAttributePath(\"[\\\\[10][20]]\"));\n\t\tassertEquals(\"\\\\/\", N5URI.normalizeAttributePath(\"\\\\/\"));\n\t}\n\n\t@Test\n\tpublic void testIsAbsolute() throws URISyntaxException {\n\t\t/* Always true if scheme provided */\n\t\tassertTrue(new N5URI(\"file:///a/b/c\").isAbsolute());\n\t\tassertTrue(new N5URI(\"file://C:\\\\\\\\a\\\\\\\\b\\\\\\\\c\").isAbsolute());\n\n\t\t/* Unix Paths*/\n\t\tassertTrue(new N5URI(\"/a/b/c\").isAbsolute());\n\t\tassertFalse(new N5URI(\"a/b/c\").isAbsolute());\n\n\t\t/* Windows Paths*/\n\t\tassertTrue(new N5URI(\"C:\\\\\\\\a\\\\\\\\b\\\\\\\\c\").isAbsolute());\n\t\tassertFalse(new N5URI(\"a\\\\\\\\b\\\\\\\\c\").isAbsolute());\n\t}\n\n\t@Test\n\tpublic void testGetRelative() throws URISyntaxException {\n\n\t\tassertEquals(\n\t\t\t\t\"/a/b/c/d?e#f\",\n\t\t\t\tnew N5URI(\"/a/b/c\").resolve(\"d?e#f\").toString());\n\t\tassertEquals(\n\t\t\t\t\"/d?e#f\",\n\t\t\t\tnew N5URI(\"/a/b/c\").resolve(\"/d?e#f\").toString());\n\t\tassertEquals(\n\t\t\t\t\"file:/a/b/c\",\n\t\t\t\tnew 
N5URI(\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5\").resolve(\"file:/a/b/c\").toString());\n\t\tassertEquals(\n\t\t\t\t\"s3://janelia-cosem-datasets/a/b/c\",\n\t\t\t\tnew N5URI(\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5\").resolve(\"/a/b/c\").toString());\n\t\tassertEquals(\n\t\t\t\t\"s3://janelia-cosem-datasets/a/b/c?d/e#f/g\",\n\t\t\t\tnew N5URI(\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5\").resolve(\"/a/b/c?d/e#f/g\").toString());\n\t\tassertEquals(\n\t\t\t\t\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5/a/b/c?d/e#f/g\",\n\t\t\t\tnew N5URI(\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5\").resolve(\"a/b/c?d/e#f/g\").toString());\n\t\tassertEquals(\n\t\t\t\t\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5?d/e#f/g\",\n\t\t\t\tnew N5URI(\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5\").resolve(\"?d/e#f/g\").toString());\n\t\tassertEquals(\n\t\t\t\t\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5#f/g\",\n\t\t\t\tnew N5URI(\"s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5\").resolve(\"#f/g\").toString());\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/TrackingN5Writer.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport com.google.gson.GsonBuilder;\nimport org.janelia.saalfeldlab.n5.kva.TrackingKeyValueAccess;\n\n/**\n * An N5Writer that tracks the number of materialize calls performed by\n * its underlying key value access.\n */\npublic class TrackingN5Writer extends N5KeyValueWriter {\n\n    public final TrackingKeyValueAccess tkva;\n\n    public TrackingN5Writer(String basePath, KeyValueAccess kva) {\n\n        super(new TrackingKeyValueAccess(kva), basePath, new GsonBuilder(), false);\n        this.tkva = (TrackingKeyValueAccess) getKeyValueAccess();\n    }\n\n    public void resetNumMaterializeCalls() {\n        tkva.numMaterializeCalls = 0;\n    }\n\n    public int getNumMaterializeCalls() {\n        return tkva.numMaterializeCalls;\n    }\n\n    public void resetNumIsFileCalls() {\n        tkva.numIsFileCalls = 0;\n    }\n\n    public int getNumIsFileCalls() {\n        return tkva.numIsFileCalls;\n    }\n\n    public void resetTotalBytesRead() {\n        tkva.totalBytesRead = 0;\n    }\n\n    public long getTotalBytesRead() {\n        return tkva.totalBytesRead;\n    }\n\n    public void resetAllTracking() {\n        tkva.numMaterializeCalls = 0;\n        tkva.numIsFileCalls = 0;\n        tkva.totalBytesRead = 0;\n    }\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/UriTest.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport static org.junit.Assert.assertEquals;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.file.Paths;\n\nimport org.junit.Before;\nimport org.junit.Test;\n\npublic class UriTest {\n\n\tprivate N5FSWriter n5;\n\n\tprivate KeyValueAccess kva;\n\n\tprivate String relativePath = \"src/test/resources/url/urlAttributes.n5\";\n\n\tprivate String relativeAbnormalPath = \"src/test/resources/./url/urlAttributes.n5\";\n\n\tprivate String relativeAbnormalPath2 = \"src/test/resources/../resources/url/urlAttributes.n5\";\n\n\t@Before\n\tpublic void before() {\n\n\t\tn5 = new N5FSWriter(relativePath);\n\t\tkva = n5.getKeyValueAccess();\n\t}\n\n\t@Test\n\tpublic void testUriParsing() throws URISyntaxException {\n\n\t\tfinal URI uri = n5.getURI();\n\t\t\tassertEquals(\"Container URI must contain scheme\", \"file\", uri.getScheme());\n\n\t\t\tassertEquals(\"Container URI must be absolute\",\n\t\t\t\t\turi.getPath(),\n\t\t\t\t\tPaths.get(relativePath).toAbsolutePath().toUri().normalize().getPath());\n\n\t\tassertEquals(\"Container URI must be normalized 1\", uri, kva.uri(relativePath));\n\t\tassertEquals(\"Container URI must be normalized 2\", uri, kva.uri(relativeAbnormalPath));\n\t\tassertEquals(\"Container URI must be normalized 3\", uri, kva.uri(relativeAbnormalPath2));\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/WriteLockExp.java",
    "content": "package org.janelia.saalfeldlab.n5;\n\nimport java.util.Arrays;\n\n\nimport org.janelia.saalfeldlab.n5.N5Writer.DataBlockSupplier;\nimport org.janelia.saalfeldlab.n5.universe.N5Factory;\nimport org.janelia.saalfeldlab.n5.zarr.v3.ZarrV3DatasetAttributes;\nimport org.janelia.scicomp.n5.zstandard.ZstandardCompression;\n\npublic class WriteLockExp {\n\n    long[] dimensions = new long[]{1024, 1024};\n    int[] shardSize = {1024, 1024};\n    int[] chunkSize = {64, 64};\n    private String root;\n    private int shardIdx;\n\n    int[] data;\n\n    public WriteLockExp(String root, int shardIdx) {\n\n        this.root = root;\n        this.shardIdx = shardIdx;\n    }\n\n    public static void main(String[] args) {\n\n        String root = args[0];\n        int shardIdx = Integer.parseInt(args[1]);\n\n        WriteLockExp exp = new WriteLockExp( root, shardIdx );\n        exp.run();\n    }\n\n    public void run() {\n\n        System.out.println(\"start\");\n        int N = 100_000;\n        final int chunkN = chunkSize[0] * chunkSize[1];\n        data = data(chunkN, shardIdx+1);\n\n        final IntArrayDataBlock block =  new IntArrayDataBlock(chunkSize, new long[]{shardIdx, shardIdx}, data);\n\n        try (N5Writer n5 = N5Factory.createWriter(root)) {\n\n            final DatasetAttributes attrs = getOrCreateDataset(n5);\n            for (int i = 0; i < N; i++) {\n\n                if( i % 1000 == 0)\n                    System.out.println(\"iter: \" + i);\n\n//\t\t\t\tn5.writeRegion(\"\", attrs, new long[]{0,0}, dimensions, blockSupplier(shardIdx, chunkSize, chunkN), true);\n                n5.writeChunks(\"\", attrs, block);\n\n            }\n        } catch (N5Exception e) {\n            e.printStackTrace();\n        }\n        System.out.println(\"done\");\n    }\n\n    public DatasetAttributes getOrCreateDataset(N5Writer n5) {\n\n        ZarrV3DatasetAttributes tmpAttrs = ZarrV3DatasetAttributes.builder(dimensions, DataType.INT32)\n                
.shardShape(shardSize)\n                .blockSize(chunkSize)\n                .compression(new ZstandardCompression())\n                .build();\n\n        final DatasetAttributes existingAttrs = n5.getDatasetAttributes(\"\");\n        if( existingAttrs != null )\n            return existingAttrs;\n\n        return n5.createDataset(\"\", tmpAttrs);\n    }\n\n    DataBlockSupplier<int[]> blockSupplier(final int i, final int[] chunkSize, final int chunkN) {\n\n        return new DataBlockSupplier<int[]>() {\n\n            @Override\n            public DataBlock<int[]> get(long[] gridPos, DataBlock<int[]> existingDataBlock) {\n\n                if (gridPos[0] == i && gridPos[1] == i)\n                    return new IntArrayDataBlock(chunkSize, gridPos, data);\n                else\n                    return null;\n            }\n        };\n    }\n\n    static int[] data(final int chunkN, final int value) {\n        final int[] data = new int[chunkN];\n        Arrays.fill(data, value);\n        return data;\n    }\n\n}"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/backward/CompatibilityTest.java",
    "content": "package org.janelia.saalfeldlab.n5.backward;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertTrue;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.net.URI;\nimport java.nio.file.Files;\nimport java.util.Arrays;\n\nimport org.janelia.saalfeldlab.n5.*;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.junit.Test;\n\nimport com.google.gson.JsonElement;\n\npublic class CompatibilityTest {\n\n\tString[][] readVersionsDataset = {\n\t\t\t{\"data-1.5.0.n5\", \"raw\"},\n\t\t\t{\"data-2.5.1.n5\", \"raw\"},\n\t\t\t{\"data-3.1.3.n5\", \"raw\"} };\n\n\tString writeVersion = \"data-3.1.3.n5\";\n\tString writeDataset = \"raw\";\n\tString[] writePathsToTest = {\"0/0\", \"0/1\", \"1/0\", \"1/1\"};\n\n\t@Test\n\tpublic void testBackwardReads() throws NumberFormatException, IOException {\n\n\t\tfor (String[] versionDset : readVersionsDataset)\n\t\t\tbackwardReadHelper(versionDset[0], versionDset[1]);\n\t}\n\n\tpublic void backwardReadHelper(final String base, final String dsetPath) throws NumberFormatException, IOException {\n\n\t\tfinal N5FSReader n5 = new N5FSReader(\"src/test/resources/backward/\" + base);\n\t\tassertTrue(n5.datasetExists(dsetPath));\n\t\tfinal DatasetAttributes attrs = n5.getDatasetAttributes(dsetPath);\n\n\t\t// equivalent to the assertTrue above, but be extra sure\n\t\tassertNotNull(attrs);\n\n\t\tbyte value = 0;\n\t\tlong[] p = new long[2];\n\n\t\tDataBlock<byte[]> b00 = n5.readChunk(dsetPath, attrs, p);\n\t\tassertNotNull(b00);\n\t\tassertArrayEquals(new int[]{5,4}, b00.getSize());\n\t\tassertArrayEquals(expectedData(20, value), b00.getData());\n\n\t\tp[0] = 1;\n\t\tp[1] = 0;\n\t\tvalue++;\n\t\tDataBlock<byte[]> b10 = n5.readChunk(dsetPath, attrs, p);\n\t\tassertNotNull(b10);\n\t\tassertArrayEquals(new int[]{2,4}, b10.getSize());\n\t\tassertArrayEquals(expectedData(8, 
value), b10.getData());\n\n\t\tp[0] = 0;\n\t\tp[1] = 1;\n\t\tvalue++;\n\t\tDataBlock<byte[]> b01 = n5.readChunk(dsetPath, attrs, p);\n\t\tassertNotNull(b01);\n\t\tassertArrayEquals(new int[]{5,1}, b01.getSize());\n\t\tassertArrayEquals(expectedData(5, value), b01.getData());\n\n\t\tp[0] = 1;\n\t\tp[1] = 1;\n\t\tvalue++;\n\t\tDataBlock<byte[]> b11 = n5.readChunk(dsetPath, attrs, p);\n\t\tassertNotNull(b11);\n\t\tassertArrayEquals(new int[]{2,1}, b11.getSize());\n\t\tassertArrayEquals(expectedData(2, value), b11.getData());\n\n\t\tn5.close();\n\t}\n\n\t@Test\n\tpublic void testBlockData() throws IOException {\n\n\t\tfinal N5FSReader n5Legacy = new N5FSReader(\"src/test/resources/backward/\" + writeVersion);\n\t\tfinal URI uriLegacy = n5Legacy.getURI();\n\n\t\tfinal File basePath = Files.createTempDirectory(\"n5-blockDataTest-\").toFile();\n\t\tbasePath.delete();\n\t\tbasePath.mkdir();\n\t\tbasePath.deleteOnExit();\n\n\t\tN5FSWriter n5My = CreateSampleData.createSampleData(\n\t\t\t\tbasePath.getCanonicalPath(), writeDataset, new RawCompression());\n\t\tURI uriMy = n5My.getURI();\n\n\t\t// check attributes\n\t\tfinal JsonElement attrsLegacy = ((GsonKeyValueN5Reader)n5Legacy).getAttributes(writeDataset);\n\t\tfinal JsonElement attrsMy = ((GsonKeyValueN5Reader)n5My).getAttributes(writeDataset);\n\t\tassertEquals(attrsLegacy, attrsMy);\n\n\t\tfinal KeyValueAccess kva = n5My.getKeyValueAccess();\n\t\tfor (final String path : writePathsToTest) {\n\t\t\tfinal byte[] dataMy = read(kva, kva.compose(uriMy, writeDataset, path));\n\t\t\tfinal byte[] dataLegacy = read(kva, kva.compose(uriLegacy, writeDataset, path));\n\t\t\tassertArrayEquals(dataLegacy, dataMy);\n\t\t}\n\n\t\tn5My.remove();\n\t\tn5My.close();\n\t\tn5Legacy.close();\n\t}\n\n\tprivate byte[] read(KeyValueAccess kva, String path) {\n\n\t\tbyte[] data;\n\t\ttry (VolatileReadData readData = kva.createReadData(path)) {\n\t\t\tdata = readData.allBytes();\n\t\t} catch (N5Exception.N5IOException e) {\n\t\t\treturn 
null;\n\t\t}\n\t\treturn data;\n\t}\n\n\tprivate static byte[] expectedData(int size, byte value ) {\n\t\tbyte[] data = new byte[size];\n\t\tArrays.fill(data, value);\n\t\treturn data;\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/backward/CreateSampleData.java",
    "content": "package org.janelia.saalfeldlab.n5.backward;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.Arrays;\n\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.Compression;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.N5FSWriter;\nimport org.janelia.saalfeldlab.n5.RawCompression;\n\npublic class CreateSampleData {\n\n\tpublic static void main(String[] args) throws IOException {\n\n\t\tFile f = new File(\"src/test/resources/data-4.0.0-alpha-X.n5\");\n\t\tSystem.out.println(f.getCanonicalPath());\n\t\tcreateSampleData(f.getCanonicalPath(), \"raw\", new RawCompression());\n\t}\n\n\tpublic static N5FSWriter createSampleData(String baseDir, String dataset, Compression compression) throws IOException {\n\n\t\tN5FSWriter n5 = new N5FSWriter(baseDir);\n\t\tfinal String dsetPath = compression.getType();\n\n\t\tlong[] dimensions = new long[]{7, 5};\n\t\tint[] blkSizeDset = new int[]{5, 4};\n\t\tint[] blkSize = new int[]{5, 4};\n\n\t\tfinal DatasetAttributes attrs = new DatasetAttributes(dimensions, blkSizeDset, DataType.UINT8, compression);\n\t\tn5.createDataset(dsetPath, attrs);\n\n\t\tbyte val = 0;\n\t\tlong[] pos = new long[]{0, 0};\n\t\tn5.writeChunk(dsetPath, attrs, createDataBlock(blkSize, pos, val));\n\n\t\tpos[0] = 1;\n\t\tpos[1] = 0;\n\t\tblkSize[0] = 2;\n\t\tblkSize[1] = 4;\n\t\tval++;\n\t\tn5.writeChunk(dsetPath, attrs, createDataBlock(blkSize, pos, val));\n\n\t\tpos[0] = 0;\n\t\tpos[1] = 1;\n\t\tblkSize[0] = 5;\n\t\tblkSize[1] = 1;\n\t\tval++;\n\t\tn5.writeChunk(dsetPath, attrs, createDataBlock(blkSize, pos, val));\n\n\t\tpos[0] = 1;\n\t\tpos[1] = 1;\n\t\tblkSize[0] = 2;\n\t\tblkSize[1] = 1;\n\t\tval++;\n\t\tn5.writeChunk(dsetPath, attrs, createDataBlock( blkSize, pos, val ));\n\n\t\treturn n5;\n\t}\n\n\tpublic static ByteArrayDataBlock createDataBlock(int[] size, long[] gridPosition, byte value) throws 
IOException {\n\t\tint N = Arrays.stream(size).reduce(1, (x,y) -> x*y);\n\t\tfinal byte[] data = new byte[N];\n\t\tArrays.fill(data, value);\n\t\treturn new ByteArrayDataBlock(size, gridPosition, data);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/benchmarks/N5BlockWriteBenchmarks.java",
    "content": "package org.janelia.saalfeldlab.n5.benchmarks;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Random;\nimport java.util.concurrent.TimeUnit;\n\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.FileSystemKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.GzipCompression;\nimport org.janelia.saalfeldlab.n5.N5KeyValueWriter;\nimport org.janelia.saalfeldlab.n5.N5Writer;\nimport org.openjdk.jmh.annotations.Benchmark;\nimport org.openjdk.jmh.annotations.BenchmarkMode;\nimport org.openjdk.jmh.annotations.Fork;\nimport org.openjdk.jmh.annotations.Level;\nimport org.openjdk.jmh.annotations.Measurement;\nimport org.openjdk.jmh.annotations.Mode;\nimport org.openjdk.jmh.annotations.OutputTimeUnit;\nimport org.openjdk.jmh.annotations.Param;\nimport org.openjdk.jmh.annotations.Scope;\nimport org.openjdk.jmh.annotations.Setup;\nimport org.openjdk.jmh.annotations.State;\nimport org.openjdk.jmh.annotations.TearDown;\nimport org.openjdk.jmh.annotations.Warmup;\nimport org.openjdk.jmh.infra.Blackhole;\nimport org.openjdk.jmh.runner.Runner;\nimport org.openjdk.jmh.runner.RunnerException;\nimport org.openjdk.jmh.runner.options.Options;\nimport org.openjdk.jmh.runner.options.OptionsBuilder;\n\nimport com.google.gson.GsonBuilder;\n\n@State(Scope.Benchmark)\n@Warmup(iterations = 5, time = 100, timeUnit = TimeUnit.MICROSECONDS)\n@Measurement(iterations = 50, time = 100, timeUnit = TimeUnit.MICROSECONDS)\n@BenchmarkMode(Mode.AverageTime)\n@OutputTimeUnit(TimeUnit.MICROSECONDS)\n@Fork(1)\npublic class N5BlockWriteBenchmarks {\n\n\tRandom random = new Random(7777);\n\n\tfinal String writeGroup = \"writeGroup\";\n\tfinal String readGroup = \"readGroup\";\n\n\tN5Writer n5;\n\tDatasetAttributes dsetAttrs;\n\tArrayList<DataBlock<?>> 
blocks;\n\n\t@Param( value = { \"int32\" } )\n\tprotected String dataType;\n\n\t@Param( value = { \"3\" } )\n\tprotected int numDimensions;\n\n\t@Param( value = { \"64\" } )\n\tprotected int blockDim;\n\n\t@Param( value = { \"5\" } )\n\tprotected int numBlocks;\n\n\tpublic static void main( String[] args ) throws RunnerException {\n\n\t\tfinal Options options = new OptionsBuilder().include( N5BlockWriteBenchmarks.class.getSimpleName() + \"\\\\.\" ).build();\n\t\tnew Runner(options).run();\n\t}\n\n\t@TearDown(Level.Trial)\n\tpublic void teardown() {\n\t\tFile d = new File(n5.getURI());\n\t\tn5.remove();\n\t\td.delete();\n\t}\n\n\t@Setup(Level.Trial)\n\tpublic void setup() {\n\n\t\tFile tmpDir;\n\t\ttry {\n\t\t\ttmpDir = Files.createTempDirectory(\"n5-blockWriteBenchmark-\").toFile();\n\t\t\tFileSystemKeyValueAccess kva = new FileSystemKeyValueAccess();\n\t\t\tn5 = new N5KeyValueWriter(kva, tmpDir.getAbsolutePath(), new GsonBuilder(), true);\n\n\t\t\tint[] blockSize = new int[numDimensions];\n\t\t\tArrays.fill(blockSize, blockDim);\n\n\t\t\tlong[] dims = new long[numDimensions];\n\t\t\tArrays.fill(dims, blockDim);\n\t\t\tdims[0] = blockDim * numBlocks;\n\n\t\t\tDataType dtype = DataType.fromString(dataType);\n\n\t\t\tdsetAttrs = new DatasetAttributes(dims, blockSize, dtype, new GzipCompression());\n\t\t\tn5.createDataset(\"\", dsetAttrs);\n\n\t\t\tblocks = new ArrayList<>();\n\t\t\tfor (int i = 0; i < numBlocks; i++) {\n\t\t\t\tlong[] p = new long[numDimensions];\n\t\t\t\tp[0] = i;\n\n\t\t\t\tDataBlock<?> blk = dtype.createDataBlock(blockSize, p);\n\t\t\t\tfillBlock(dtype, blk);\n\t\t\t\tblocks.add(blk);\n\n\t\t\t\t// write data into the read group\n\t\t\t\tn5.writeBlock(readGroup, dsetAttrs, blk);\n\t\t\t}\n\n\t\t} catch (final IOException e) {\n\t\t\te.printStackTrace();\n\t\t}\n\n\t}\n\n\t@Benchmark\n\tpublic void writeBenchmark() throws IOException {\n\n\t\tblocks.forEach(blk -> {\n\t\t\tn5.writeBlock(writeGroup, dsetAttrs, 
blk);\n\t\t});\n\t}\n\n\t@Benchmark\n\tpublic void readBenchmark(Blackhole hole) throws IOException {\n\n\t\tfinal long[] p = new long[numDimensions];\n\t\tfor (int i = 0; i < numBlocks; i++) {\n\t\t\tp[0] = i;\n\t\t\thole.consume(n5.readBlock(readGroup, dsetAttrs, p));\n\t\t}\n\t}\n\n\tprivate void fillBlock(DataType dtype, DataBlock<?> blk) {\n\n\t\tswitch (dtype) {\n\t\tcase INT32:\n\t\t\tfill((int[])blk.getData());\n\t\t\tbreak;\n\t\tcase FLOAT32:\n\t\t\tfill((float[])blk.getData());\n\t\t\tbreak;\n\t\tcase FLOAT64:\n\t\t\tfill((double[])blk.getData());\n\t\t\tbreak;\n\t\tcase INT16:\n\t\t\tfill((short[])blk.getData());\n\t\t\tbreak;\n\t\tcase INT64:\n\t\t\tfill((long[])blk.getData());\n\t\t\tbreak;\n\t\tcase INT8:\n\t\t\tfill((byte[])blk.getData());\n\t\t\tbreak;\n\t\tcase OBJECT:\n\t\t\tbreak;\n\t\tcase STRING:\n\t\t\tbreak;\n\t\tcase UINT16:\n\t\t\tfill((short[])blk.getData());\n\t\t\tbreak;\n\t\tcase UINT32:\n\t\t\tfill((int[])blk.getData());\n\t\t\tbreak;\n\t\tcase UINT64:\n\t\t\tfill((long[])blk.getData());\n\t\t\tbreak;\n\t\tcase UINT8:\n\t\t\tfill((byte[])blk.getData());\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tprivate void fill(short[] arr) {\n\t\tfor (int i = 0; i < arr.length; i++)\n\t\t\tarr[i] = (short)random.nextInt();\n\t}\n\n\tprivate void fill(int[] arr) {\n\t\tfor (int i = 0; i < arr.length; i++)\n\t\t\tarr[i] = random.nextInt();\n\t}\n\n\tprivate void fill(long[] arr) {\n\t\tfor (int i = 0; i < arr.length; i++)\n\t\t\tarr[i] = random.nextLong();\n\t}\n\n\tprivate void fill(float[] arr) {\n\t\tfor (int i = 0; i < arr.length; i++)\n\t\t\tarr[i] = random.nextFloat();\n\t}\n\n\tprivate void fill(double[] arr) {\n\t\tfor (int i = 0; i < arr.length; i++)\n\t\t\tarr[i] = random.nextDouble();\n\t}\n\n\tprivate void fill(byte[] arr) {\n\t\trandom.nextBytes(arr);\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/benchmarks/ReadDataBenchmarks.java",
    "content": "package org.janelia.saalfeldlab.n5.benchmarks;\n\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Random;\nimport java.util.concurrent.TimeUnit;\n\nimport org.janelia.saalfeldlab.n5.FileSystemKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.openjdk.jmh.annotations.Benchmark;\nimport org.openjdk.jmh.annotations.BenchmarkMode;\nimport org.openjdk.jmh.annotations.Fork;\nimport org.openjdk.jmh.annotations.Level;\nimport org.openjdk.jmh.annotations.Measurement;\nimport org.openjdk.jmh.annotations.Mode;\nimport org.openjdk.jmh.annotations.OutputTimeUnit;\nimport org.openjdk.jmh.annotations.Param;\nimport org.openjdk.jmh.annotations.Scope;\nimport org.openjdk.jmh.annotations.Setup;\nimport org.openjdk.jmh.annotations.State;\nimport org.openjdk.jmh.annotations.TearDown;\nimport org.openjdk.jmh.annotations.Warmup;\nimport org.openjdk.jmh.infra.Blackhole;\nimport org.openjdk.jmh.runner.Runner;\nimport org.openjdk.jmh.runner.RunnerException;\nimport org.openjdk.jmh.runner.options.Options;\nimport org.openjdk.jmh.runner.options.OptionsBuilder;\n\n@State(Scope.Benchmark)\n@Warmup(iterations = 10, time = 100, timeUnit = TimeUnit.MICROSECONDS)\n@Measurement(iterations = 100, time = 100, timeUnit = TimeUnit.MICROSECONDS)\n@BenchmarkMode(Mode.AverageTime)\n@OutputTimeUnit(TimeUnit.MICROSECONDS)\n@Fork(1)\npublic class ReadDataBenchmarks {\n\n\t@Param(value = { \"10000000\" })\n\tprotected int objectSizeBytes;\n\n\tprotected Path basePath;\n\tprotected ArrayList<Path> tmpPaths;\n\tprotected KeyValueAccess kva;\n\tprotected Random random;\n\n\tpublic ReadDataBenchmarks() {}\n\n\tpublic static void main(String... 
args) throws RunnerException {\n\n\t\tfinal Options options = new OptionsBuilder().include(ReadDataBenchmarks.class.getSimpleName() + \"\\\\.\")\n\t\t\t\t.build();\n\n\t\tnew Runner(options).run();\n\t}\n\n\t@Benchmark\n\tpublic void run(Blackhole hole) throws IOException {\n\n\t\ttry (final VolatileReadData read = read()) {\n\t\t\thole.consume(read.materialize());\n\t\t}\n\t}\n\n\tpublic VolatileReadData read() throws IOException {\n\n\t\treturn kva.createReadData(getPath().toString());\n\t}\n\n\tprotected Path getPath() {\n\n\t\treturn basePath.resolve(\"tmp-\" + objectSizeBytes);\n\t}\n\n\t@Setup(Level.Trial)\n\tpublic void setup() throws IOException {\n\n\t\trandom = new Random();\n\t\tkva = new FileSystemKeyValueAccess();\n\n\t\tbasePath = Files.createTempDirectory(\"ReadDataBenchmark-\");\n\t\ttmpPaths = new ArrayList<>();\n\t\tfor (final int sz : sizes()) {\n\t\t\tPath p = basePath.resolve(\"tmp-\"+sz);\n\t\t\twrite(p, sz);\n\t\t\ttmpPaths.add(p);\n\t\t}\n\t}\n\n\tprotected void write(Path path, int numBytes) {\n\n\t\tfinal byte[] data = new byte[numBytes];\n\t\trandom.nextBytes(data);\n\n\t\tSystem.out.println(path.toAbsolutePath());\n\t\tSystem.out.println(numBytes);\n\n\t\tReadData readData = ReadData.from(os -> {\n\t\t\tos.write(data);\n\t\t}).materialize();\n\n\t\ttry {\n\t\t\tkva.write(path.toAbsolutePath().toString(), readData);\n\t\t} catch (N5Exception.N5IOException e) {\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n\n\t@TearDown(Level.Trial)\n\tpublic void teardown() {\n\n\t\tfor ( Path p : tmpPaths ) {\n\t\t\tp.toFile().delete();\n\t\t}\n\t\tbasePath.toFile().delete();\n\t}\n\n\tpublic int[] sizes() {\n\n\t\ttry {\n\t\t\tfinal Param ann = ReadDataBenchmarks.class.getDeclaredField(\"objectSizeBytes\").getAnnotation(Param.class);\n\t\t\tSystem.out.println(Arrays.toString(ann.value()));\n\t\t\treturn Arrays.stream(ann.value()).mapToInt(Integer::parseInt).toArray();\n\n\t\t} catch (final NoSuchFieldException e) {\n\t\t\te.printStackTrace();\n\t\t} catch 
(final SecurityException e) {\n\t\t\te.printStackTrace();\n\t\t}\n\n\t\treturn null;\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/cache/N5CacheTest.java",
    "content": "package org.janelia.saalfeldlab.n5.cache;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotSame;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\n\nimport java.util.Arrays;\nimport java.util.concurrent.CountDownLatch;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.junit.Test;\n\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonObject;\n\npublic class N5CacheTest {\n\n\n\t@Test\n\tpublic void cacheBackingTest() {\n\n\t\tfinal DummyBackingStorage backingStorage = new DummyBackingStorage();\n\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\n\t\tint expectedAttrCallCount = 0;\n\n\t\t// check existence, ensure backing storage is only called once\n\t\t//\tthis cache `exists` is overridden to write an attribute\n\t\t//\twhich means the `exists` call checks attr existence first,\n\t\t//\tand since it finds some, it infers the existence without\n\t\t//\tan explicit check. 
Some backends don't support exists\n\t\t//\tso this is a way to handle those cases more elegantly\n\t\tassertEquals(0, backingStorage.existsCallCount);\n\t\tcache.exists(\"a\", null);\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tassertEquals(0, backingStorage.existsCallCount);\n\t\tcache.exists(\"a\", null);\n\t\tassertEquals(0, backingStorage.existsCallCount);\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\n\t\t// check existence of new group, ensure backing storage is only called one more time\n\t\tcache.exists(\"b\", null);\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tassertEquals(0, backingStorage.existsCallCount);\n\t\tcache.exists(\"b\", null);\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tassertEquals(0, backingStorage.existsCallCount);\n\n\t\t// check isDataset, ensure backing storage is only called when expected\n\t\t// isDataset is called by exists, so should have been called twice here\n\t\tassertEquals(2, backingStorage.isDatasetCallCount);\n\t\tcache.isDataset(\"a\", null);\n\t\tassertEquals(2, backingStorage.isDatasetCallCount);\n\n\t\tassertEquals(2, backingStorage.isDatasetCallCount);\n\t\tcache.isDataset(\"b\", null);\n\t\tassertEquals(2, backingStorage.isDatasetCallCount);\n\n\t\t// check isGroup, ensure backing storage is only called when expected\n\t\t// isGroup is called by exists, so should have been called twice here\n\t\tassertEquals(2, backingStorage.isGroupCallCount);\n\t\tcache.isGroup(\"a\", null);\n\t\tassertEquals(2, backingStorage.isGroupCallCount);\n\n\t\tassertEquals(2, backingStorage.isGroupCallCount);\n\t\tcache.isGroup(\"b\", null);\n\t\tassertEquals(2, backingStorage.isGroupCallCount);\n\n\t\t// similarly check list, ensure backing storage is only called when expected\n\t\t// list is not called by exists, so its call count starts at zero\n\t\tassertEquals(0, 
backingStorage.listCallCount);\n\t\tcache.list(\"a\");\n\t\tassertEquals(1, backingStorage.listCallCount);\n\n\t\tassertEquals(1, backingStorage.listCallCount);\n\t\tcache.list(\"b\");\n\t\tassertEquals(2, backingStorage.listCallCount);\n\n\t\t// finally check getAttributes\n\t\t// it is not called by exists (since it needs the cache key)\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"a\", \"foo\");\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"a\", \"foo\");\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"a\", \"bar\");\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"a\", \"bar\");\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"a\", \"face\");\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\n\t\tcache.getAttributes(\"b\", \"foo\");\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"b\", \"foo\");\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"b\", \"bar\");\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"b\", \"bar\");\n\t\tassertEquals(expectedAttrCallCount, backingStorage.attrCallCount);\n\t\tcache.getAttributes(\"b\", \"face\");\n\t\tassertEquals(++expectedAttrCallCount, backingStorage.attrCallCount);\n\n\t}\n\n\t@Test\n\tpublic void testCopyOnReadPreventsExternalModification() {\n\n\t\tfinal DummyBackingStorage backingStorage = new DummyBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\t\t\n\t\t// Get attributes and modify the returned object\n\t\tJsonElement attrs1 = cache.getAttributes(\"path\", \"key\");\n\t\tattrs1.getAsJsonObject().addProperty(\"modified\", \"value\");\n\t\t\n\t\t// 
Get attributes again - should not contain the modification\n\t\tJsonElement attrs2 = cache.getAttributes(\"path\", \"key\");\n\t\tassertFalse(attrs2.getAsJsonObject().has(\"modified\"));\n\t\t\n\t\t// Verify both calls return different instances\n\t\tassertNotSame(attrs1, attrs2);\n\t}\n\n\t@Test\n\tpublic void testCacheManipulationMethods() {\n\n\t\tfinal DummyBackingStorage backingStorage = new DummyBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\n\t\t// First, ensure the path exists in cache\n\t\tassertTrue(cache.exists(\"path\", null));\n\n\t\t// Test setAttributes\n\t\tJsonObject newAttrs = new JsonObject();\n\t\tnewAttrs.addProperty(\"custom\", \"value\");\n\t\tcache.setAttributes(\"path\", \"key\", newAttrs);\n\t\tJsonElement retrievedAttrs = cache.getAttributes(\"path\", \"key\");\n\t\tassertTrue(retrievedAttrs.getAsJsonObject().has(\"custom\"));\n\t\tassertEquals(\"value\", retrievedAttrs.getAsJsonObject().get(\"custom\").getAsString());\n\n\t\t// Test updateCacheInfo\n\t\tJsonObject updatedAttrs = new JsonObject();\n\t\tupdatedAttrs.addProperty(\"updated\", \"updated-value\");\n\t\tcache.updateCacheInfo(\"path\", \"key2\", updatedAttrs);\n\t\tJsonElement retrievedUpdated = cache.getAttributes(\"path\", \"key2\");\n\t\tassertTrue(retrievedUpdated.getAsJsonObject().has(\"updated\"));\n\t\tassertEquals(\"updated-value\", retrievedUpdated.getAsJsonObject().get(\"updated\").getAsString());\n\t\t\n\t\t// Test initializeNonemptyCache\n\t\tcache.initializeNonemptyCache(\"newPath\", \"newKey\");\n\t\tassertTrue(cache.exists(\"newPath\", null));\n\t}\n\n\t@Test\n\tpublic void testChildManagement() {\n\n\t\tfinal DummyBackingStorage backingStorage = new DummyBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache( backingStorage );\n\n\t\t// Initialize parent and children\n\t\tcache.exists(\"parent\", null);\n\t\tcache.list(\"parent\");\n\n\t\t// Test addChild\n\t\tcache.addChild( \"parent\", \"child1\" );\n\t\tString[] 
children = cache.list( \"parent\" );\n\t\tassertTrue( Arrays.asList( children ).contains( \"child1\" ) );\n\n\t\t// Note: addChildIfPresent doesn't check or create the parent,\n\t\t// it only adds to existing cache entries\n\n\t\t// Test addChildIfPresent on non-cached parent\n\t\t// This should not throw and should not create the parent\n\t\tcache.addChildIfPresent(\"nonexistent\", \"child\");\n\t\tchildren = cache.list(\"nonexistent\");\n\t\tassertFalse(Arrays.asList(children).contains(\"child\"));\n\n\t\t// Test addChildIfPresent on cached parent without children list\n\t\tcache.exists(\"parent2\", null);\n\t\tchildren = cache.list(\"parent2\"); // create children array\n\t\tcache.addChildIfPresent(\"parent2\", \"child\");\n\t\tchildren = cache.list(\"parent2\");\n\t\tassertTrue(Arrays.asList(children).contains(\"child\"));\n\t}\n\n\t@Test\n\tpublic void testRemoveCacheHierarchy() {\n\t\tfinal DummyBackingStorage backingStorage = new DummyBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\t\t\n\t\t// Setup hierarchy\n\t\tcache.exists(\"root\", null);\n\t\tcache.exists(\"root/child1\", null);\n\t\tcache.exists(\"root/child1/grandchild\", null);\n\t\tcache.exists(\"root/child2\", null);\n\t\t\n\t\t// Add children relationships\n\t\tcache.list(\"root\");\n\t\tcache.addChild(\"root\", \"child1\");\n\t\tcache.addChild(\"root\", \"child2\");\n\t\t\n\t\t// Remove child1 and its descendants\n\t\tcache.removeCache(\"root\", \"root/child1\");\n\t\t\n\t\t// Verify removal - paths should not exist anymore\n\t\tassertFalse(cache.exists(\"root/child1\", null));\n\t\tassertFalse(cache.exists(\"root/child1/grandchild\", null));\n\t\t\n\t\t// Verify parent's children list updated\n\t\tString[] remaining = cache.list(\"root\");\n\t\tassertFalse(Arrays.asList(remaining).contains(\"child1\"));\n\t\tassertTrue(Arrays.asList(remaining).contains(\"child2\"));\n\t\t\n\t\t// Verify child2 unaffected\n\t\tassertTrue(cache.exists(\"root/child2\", 
null));\n\t}\n\n\t@Test(expected = N5Exception.N5IOException.class)\n\tpublic void testListNonExistentGroupThrows() {\n\n\t\tfinal DummyNonExistentBackingStorage backingStorage = new DummyNonExistentBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\t\tcache.list(\"nonexistent\");\n\t}\n\n\t@Test\n\tpublic void testEmptyCacheInfoBehavior() {\n\t\tfinal DummyNonExistentBackingStorage backingStorage = new DummyNonExistentBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\t\t\n\t\t// Non-existent path should return emptyCacheInfo\n\t\tassertFalse(cache.exists(\"nonexistent\", null));\n\t\tassertFalse(cache.isGroup(\"nonexistent\", null));\n\t\tassertFalse(cache.isDataset(\"nonexistent\", null));\n\t\tassertNull(cache.getAttributes(\"nonexistent\", \"key\"));\n\t}\n\n\t@Test(expected = N5Exception.class)\n\tpublic void testEmptyJsonDeepCopyThrows() {\n\t\tN5JsonCache.emptyJson.deepCopy();\n\t}\n\n\t@Test\n\tpublic void testCacheStateTransitions() {\n\t\tfinal DummyBackingStorage backingStorage = new DummyBackingStorage();\n\t\tfinal N5JsonCache cache = new N5JsonCache(backingStorage);\n\t\t\n\t\t// Start with emptyCacheInfo\n\t\tcache.addNewCacheInfo(\"path\", null, null);\n\t\t\n\t\t// Transition to a nonempty cache\n\t\tcache.initializeNonemptyCache(\"path\", \"key\");\n\t\tassertTrue(cache.exists(\"path\", null));\n\t\t\n\t\t// Update existing cache\n\t\tJsonObject attrs = new JsonObject();\n\t\tattrs.addProperty(\"version\", \"1\");\n\t\tcache.setAttributes(\"path\", \"key\", attrs);\n\t\t\n\t\tattrs.addProperty(\"version\", \"2\");\n\t\tcache.updateCacheInfo(\"path\", \"key\", attrs);\n\t\tassertEquals(\"2\", cache.getAttributes(\"path\", \"key\").getAsJsonObject().get(\"version\").getAsString());\n\t}\n\n\tprotected static class DummyBackingStorage implements N5JsonCacheableContainer {\n\n\t\tint attrCallCount = 0;\n\t\tint existsCallCount = 0;\n\t\tint isGroupCallCount = 0;\n\t\tint 
isDatasetCallCount = 0;\n\t\tint isGroupFromAttrsCallCount = 0;\n\t\tint isDatasetFromAttrsCallCount = 0;\n\t\tint listCallCount = 0;\n\n\t\tpublic DummyBackingStorage() {\n\t\t}\n\n\t\tpublic JsonElement getAttributesFromContainer(final String path, final String cacheKey) {\n\t\t\tattrCallCount++;\n\t\t\tfinal JsonObject obj = new JsonObject();\n\t\t\tobj.addProperty(\"key\", \"value\");\n\t\t\treturn obj;\n\t\t}\n\n\t\tpublic boolean existsFromContainer(final String path, final String cacheKey) {\n\t\t\texistsCallCount++;\n\t\t\treturn true;\n\t\t}\n\n\t\tpublic boolean isGroupFromContainer(final String path) {\n\t\t\tisGroupCallCount++;\n\t\t\treturn true;\n\t\t}\n\n\t\tpublic boolean isDatasetFromContainer(final String path) {\n\t\t\tisDatasetCallCount++;\n\t\t\treturn true;\n\t\t}\n\n\t\tpublic String[] listFromContainer(final String path) {\n\t\t\tlistCallCount++;\n\t\t\treturn new String[] { \"list\" };\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isGroupFromAttributes(final String cacheKey, final JsonElement attributes) {\n\t\t\tisGroupFromAttrsCallCount++;\n\t\t\treturn true;\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean isDatasetFromAttributes(final String cacheKey, final JsonElement attributes) {\n\t\t\tisDatasetFromAttrsCallCount++;\n\t\t\treturn true;\n\t\t}\n\t}\n\n\t// Helper class for non-existent paths\n\tprotected static class DummyNonExistentBackingStorage extends DummyBackingStorage {\n\n\t\t@Override\n\t\tpublic JsonElement getAttributesFromContainer(String key, String cacheKey) {\n\t\t\tattrCallCount++;\n\t\t\treturn null;\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean existsFromContainer(String path, String cacheKey) {\n\t\t\texistsCallCount++;\n\t\t\treturn false;\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/codec/BlockCodecTests.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\n\nimport java.nio.ByteOrder;\nimport java.util.Arrays;\nimport java.util.Random;\n\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.DoubleArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.FloatArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.GzipCompression;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.LongArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.RawCompression;\nimport org.janelia.saalfeldlab.n5.ShortArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.codec.BytesCodecTests.BitShiftBytesCodec;\nimport org.janelia.saalfeldlab.n5.shard.DatasetAccess;\nimport org.janelia.saalfeldlab.n5.shard.PositionValueAccess;\nimport org.janelia.saalfeldlab.n5.shard.TestPositionValueAccess;\nimport org.junit.Test;\n\npublic class BlockCodecTests {\n\n\tstatic Random random = new Random(12345);\n\n\tfinal int[] blockSize = {11, 7, 5};\n\tprivate final BitShiftBytesCodec shiftCodec = new BitShiftBytesCodec(3);\n\tprivate final GzipCompression compressor = new GzipCompression();\n\tprivate final DataCodecInfo[][] dataCodecInfos = new DataCodecInfo[][]{\n\t\t\t{}, // empty: \"raw\" compression\n\t\t\t{compressor},\n\t\t\t{shiftCodec},\n\t\t\t{shiftCodec, compressor}\n\t};\n\n\tprivate final DataType[] dataTypes = {\n\t\t\tDataType.INT8, DataType.UINT8,\n\t\t\tDataType.INT16, DataType.UINT16,\n\t\t\tDataType.INT32, DataType.UINT32,\n\t\t\tDataType.INT64, DataType.UINT64,\n\t\t\tDataType.FLOAT32, DataType.FLOAT64\n\t};\n\n\t@Test\n\tpublic void testN5BlockCodec() throws Exception {\n\t\tfor (DataType dataType : dataTypes) {\n\t\t\tfor (DataCodecInfo[] 
dataCodecInfo : dataCodecInfos) {\n\n\t\t\t\tfinal DatasetAttributes attributes = new DatasetAttributes(\n\t\t\t\t\t\tnew long[]{32, 32, 32},\n\t\t\t\t\t\tblockSize,\n\t\t\t\t\t\tdataType,\n\t\t\t\t\t\tnew N5BlockCodecInfo(),\n\t\t\t\t\t\tdataCodecInfo);\n\n\t\t\t\ttestBlockCodecHelper(attributes);\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testRawBytesBlockCodec() throws Exception {\n\t\t// Test RawBlockCodecInfo codec with different byte orders and DataTypes\n\t\tfinal ByteOrder[] byteOrders = {ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN};\n\t\tfor (DataType dataType : dataTypes) {\n\t\t\tfor (ByteOrder byteOrder : byteOrders) {\n\t\t\t\tfor (DataCodecInfo[] codecs : dataCodecInfos) {\n\n\t\t\t\t\tfinal RawBlockCodecInfo codec = new RawBlockCodecInfo(byteOrder);\n\t\t\t\t\tfinal DatasetAttributes attributes = new DatasetAttributes(\n\t\t\t\t\t\t\tnew long[]{32, 32, 32},\n\t\t\t\t\t\t\tblockSize,\n\t\t\t\t\t\t\tdataType,\n\t\t\t\t\t\t\tcodec,\n\t\t\t\t\t\t\tcodecs);\n\n\t\t\t\t\ttestBlockCodecHelper(attributes);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tprivate <T> void testBlockCodecHelper(DatasetAttributes attributes) throws Exception {\n\n\t\t// TODO\n//\t\tfinal int[] blockSize = attributes.getBlockSize();\n//\t\tfinal DataType dataType = attributes.getDataType();\n//\t\tfinal long[] gridPosition = {3, 2, 1};\n//\n//\t\t// Create appropriate data block based on type\n//\t\tDataBlock<T> originalBlock = ((DataBlock<T>)createRandomDataBlock(dataType, blockSize, gridPosition));\n//\t\tfinal BlockCodec<T> codec = attributes.getBlockCodec();\n//\n//\t\t// Test encode/decode roundtrip\n//\t\tfinal ReadData encoded = codec.encode(originalBlock);\n//\t\tassertNotNull(encoded);\n//\n//\t\tfinal DataBlock<?> decoded = codec.decode(encoded, gridPosition);\n//\t\tassertNotNull(decoded);\n//\n//\t\tassertArrayEquals(\"Block size should match\", blockSize, decoded.getSize());\n//\t\tassertArrayEquals(\"Grid position should match\", gridPosition, 
decoded.getGridPosition());\n//\t\tassertDataEquals(originalBlock, decoded);\n//\t\tverifyCompatibleDataType(dataType, decoded);\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\t@Test\n\tpublic void testEmptyBlock() throws Exception {\n\t\t// Test handling of empty blocks\n\t\tfinal int[] blockSize = {0, 0};\n\t\tfinal long[] gridPosition = {0, 0};\n\t\tfinal N5BlockCodecInfo blockCodecInfo = new N5BlockCodecInfo();\n\t\tfinal TestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tnew long[]{64, 64},\n\t\t\t\tnew int[]{8, 8},\n\t\t\t\tDataType.UINT8,\n\t\t\t\tblockCodecInfo,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\t\tDatasetAccess access = attributes.getDatasetAccess();\n\n\t\t// Test encode/decode\n\t\tfinal ByteArrayDataBlock emptyBlock = new ByteArrayDataBlock(blockSize, gridPosition, new byte[0]);\n\n\t\taccess.writeChunk(store, emptyBlock);\n\t\tfinal DataBlock<?> decoded = access.readChunk(store, gridPosition);\n\n\t\tassertEquals(\"Empty block should have 0 elements\", 0, decoded.getNumElements());\n\t}\n\n\t@Test\n\tpublic void testEncodedSizeCalculation() throws Exception {\n\n\t\t// TODO\n\n\t\t// Test that encoded size calculations are correct\n//\t\tfinal int[] blockSize = {64, 64};\n//\t\tfinal DatasetAttributes n5ArrayAttrs = new DatasetAttributes(\n//\t\t\t\tnew long[]{512, 512},\n//\t\t\t\tblockSize,\n//\t\t\t\tblockSize,\n//\t\t\t\tDataType.INT16,\n//\t\t\t\tnew N5BlockCodecInfo());\n//\n//\n//\t\tfinal DatasetAttributes rawArrayAttrs = new DatasetAttributes(\n//\t\t\t\tnew long[]{512, 512},\n//\t\t\t\tblockSize,\n//\t\t\t\tblockSize,\n//\t\t\t\tDataType.INT16,\n//\t\t\t\tnew RawBlockCodecInfo());\n//\n//\t\t// Calculate expected sizes\n//\t\tfinal long rawDataSize = blockSize[0] * blockSize[1] * 2; // INT16 has 2 bytes per element\n//\n//\t\t// N5BlockCodecInfo adds a header\n//\t\t// the estimate of the encoded size\n//\t\tfinal long n5EncodedSize = 
n5ArrayAttrs.getBlockCodecInfo().encodedSize(rawDataSize);\n//\t\tassertTrue(\"N5 encoded size should be larger than raw size\", n5EncodedSize > rawDataSize);\n//\n//\t\tDataBlock<short[]> dataBlock = ((DataBlock<short[]>)createRandomDataBlock(n5ArrayAttrs.getDataType(), blockSize, new long[]{0, 0}));\n//\t\tReadData n5EncodedDataBlock = n5ArrayAttrs.<short[]>getBlockCodec().encode(dataBlock);\n//\t\tassertEquals(\"N5 actual encoded size should equal estimated size\", n5EncodedSize, n5EncodedDataBlock.length());\n//\n//\t\t// RawBlockCodecInfo should not change size\n//\t\tfinal long rawEncodedSize = rawArrayAttrs.getBlockCodecInfo().encodedSize(rawDataSize);\n//\t\tassertEquals(\"Raw encoded size should equal input size\", rawDataSize, rawEncodedSize);\n//\n//\t\tReadData rawEncodedDataBlock = rawArrayAttrs.<short[]>getBlockCodec().encode(dataBlock);\n//\t\tassertEquals(\"Raw actual encoded size should equal estimated size\", rawEncodedSize, rawEncodedDataBlock.length());\n\t}\n\n\tprivate static DataBlock<?> createRandomDataBlock(DataType dataType, int[] blockSize, long[] gridPosition) {\n\t\tfinal int numElements = Arrays.stream(blockSize).reduce(1, (a, b) -> a * b);\n\n\t\tswitch (dataType) {\n\t\t\tcase INT8:\n\t\t\tcase UINT8:\n\t\t\t\tbyte[] uint8Data = new byte[numElements];\n\t\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\t\tuint8Data[i] = (byte) random.nextInt(256);\n\t\t\t\t}\n\t\t\t\treturn new ByteArrayDataBlock(blockSize, gridPosition, uint8Data);\n\n\t\t\tcase INT16:\n\t\t\tcase UINT16:\n\t\t\t\tshort[] uint16Data = new short[numElements];\n\t\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\t\tuint16Data[i] = (short) random.nextInt(65536);\n\t\t\t\t}\n\t\t\t\treturn new ShortArrayDataBlock(blockSize, gridPosition, uint16Data);\n\n\t\t\tcase INT32:\n\t\t\tcase UINT32:\n\t\t\t\tint[] uint32Data = new int[numElements];\n\t\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\t\tuint32Data[i] = random.nextInt();\n\t\t\t\t}\n\t\t\t\treturn 
new IntArrayDataBlock(blockSize, gridPosition, uint32Data);\n\n\t\t\tcase INT64:\n\t\t\tcase UINT64:\n\t\t\t\tlong[] uint64Data = new long[numElements];\n\t\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\t\tuint64Data[i] = random.nextLong();\n\t\t\t\t}\n\t\t\t\treturn new LongArrayDataBlock(blockSize, gridPosition, uint64Data);\n\n\t\t\tcase FLOAT32:\n\t\t\t\tfloat[] floatData = new float[numElements];\n\t\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\t\tfloatData[i] = random.nextFloat();\n\t\t\t\t}\n\t\t\t\treturn new FloatArrayDataBlock(blockSize, gridPosition, floatData);\n\n\t\t\tcase FLOAT64:\n\t\t\t\tdouble[] doubleData = new double[numElements];\n\t\t\t\tfor (int i = 0; i < numElements; i++) {\n\t\t\t\t\tdoubleData[i] = random.nextDouble();\n\t\t\t\t}\n\t\t\t\treturn new DoubleArrayDataBlock(blockSize, gridPosition, doubleData);\n\n\t\t\tdefault:\n\t\t\t\tthrow new IllegalArgumentException(\"Unsupported data type: \" + dataType);\n\t\t}\n\t}\n\n\tprivate static void verifyCompatibleDataType(DataType expectedType, DataBlock<?> block) {\n\n\t\tObject data = block.getData();\n\t\tswitch (expectedType) {\n\t\t\tcase INT8:\n\t\t\tcase UINT8:\n\t\t\t\tassertTrue(\"Expected byte array for \" + expectedType, data instanceof byte[]);\n\t\t\t\tbreak;\n\t\t\tcase INT16:\n\t\t\tcase UINT16:\n\t\t\t\tassertTrue(\"Expected short array for \" + expectedType, data instanceof short[]);\n\t\t\t\tbreak;\n\t\t\tcase INT32:\n\t\t\tcase UINT32:\n\t\t\t\tassertTrue(\"Expected int array for \" + expectedType, data instanceof int[]);\n\t\t\t\tbreak;\n\t\t\tcase INT64:\n\t\t\tcase UINT64:\n\t\t\t\tassertTrue(\"Expected long array for \" + expectedType, data instanceof long[]);\n\t\t\t\tbreak;\n\t\t\tcase FLOAT32:\n\t\t\t\tassertTrue(\"Expected float array for \" + expectedType, data instanceof float[]);\n\t\t\t\tbreak;\n\t\t\tcase FLOAT64:\n\t\t\t\tassertTrue(\"Expected double array for \" + expectedType, data instanceof 
double[]);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tthrow new IllegalArgumentException(\"Unsupported data type: \" + expectedType);\n\t\t}\n\t}\n\n\tprivate static void assertDataEquals(DataBlock<?> expected, DataBlock<?> actual) {\n\n\t\tObject expectedData = expected.getData();\n\t\tObject actualData = actual.getData();\n\n\t\tif (expectedData instanceof byte[]) {\n\t\t\tassertArrayEquals((byte[]) expectedData, (byte[]) actualData);\n\t\t} else if (expectedData instanceof short[]) {\n\t\t\tassertArrayEquals((short[]) expectedData, (short[]) actualData);\n\t\t} else if (expectedData instanceof int[]) {\n\t\t\tassertArrayEquals((int[]) expectedData, (int[]) actualData);\n\t\t} else if (expectedData instanceof long[]) {\n\t\t\tassertArrayEquals((long[]) expectedData, (long[]) actualData);\n\t\t} else if (expectedData instanceof float[]) {\n\t\t\tassertArrayEquals((float[]) expectedData, (float[]) actualData, 0.0f);\n\t\t} else if (expectedData instanceof double[]) {\n\t\t\tassertArrayEquals((double[]) expectedData, (double[]) actualData, 0.0);\n\t\t} else {\n\t\t\tthrow new IllegalArgumentException(\"Unknown data type\");\n\t\t}\n\t}\n\n\tpublic static class TestDatasetAttributes extends DatasetAttributes {\n\n\t\tpublic TestDatasetAttributes(long[] dimensions, int[] outerBlockSize, DataType dataType, BlockCodecInfo blockCodecInfo,\n\t\t\t\tDataCodecInfo... dataCodecInfos) {\n\n\t\t\tsuper(dimensions, outerBlockSize, dataType, blockCodecInfo, dataCodecInfos);\n\t\t}\n\n\t\t@Override // to make this accessible for the test\n\t\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\t\t\treturn super.getDatasetAccess();\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/codec/BytesCodecTests.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.util.Random;\nimport java.util.function.IntUnaryOperator;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData.OutputStreamOperator;\nimport org.janelia.saalfeldlab.n5.serialization.NameConfig;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\n\npublic class BytesCodecTests {\n\n\tstatic Random random;\n\n\t@BeforeClass\n\tpublic static void setup() {\n\t\trandom = new Random(7777);\n\t}\n\n\t@Test\n\tpublic void testEncodeDecodeBytes() {\n\n\t\t// Create a BitShiftBytesCodec with shift value\n\t\tfinal BitShiftBytesCodec originalCodec = new BitShiftBytesCodec(3);\n\n\t\t// Test encode/decode roundtrip\n\t\tfinal byte[] testData = new byte[12];\n\t\trandom.nextBytes(testData);\n\n\t\tfinal ReadData original = ReadData.from(testData);\n\t\tfinal ReadData encoded = originalCodec.encode(original);\n\t\tfinal ReadData decoded = originalCodec.decode(encoded);\n\n\t\tfinal byte[] result = decoded.allBytes();\n\t\tassertEquals(\"Length should match\", testData.length, result.length);\n\t\tassertArrayEquals(\"encoded-decoded bytes should match original\", testData, result);\n\t}\n\n\t@Test\n\tpublic void concatenatedBytesCodecTest() throws IOException {\n\n\t\tint N = 16;\n\t\tReadData data = ReadData.from( new InputStream() {\n\t\t\t@Override\n\t\t\tpublic int read() throws IOException {\n\t\t\t\treturn Math.abs(random.nextInt()) % 32;\n\t\t\t}\n\t\t}, N ).materialize();\n\n\t\tfinal byte[] bytes = data.allBytes();\n\t\tfinal byte[] expected = new byte[bytes.length];\n\t\tfor (int i = 0; i < bytes.length; i++) {\n\t\t\texpected[i] = (byte)(2 * bytes[i] + 3);\n\t\t}\n\n\t\tfinal DataCodec a = new 
ByteFunctionCodec(x -> 2 * x, x -> x / 2);\n\t\tfinal DataCodec b = new ByteFunctionCodec(x -> x + 3, x -> x - 3 );\n\t\tfinal ConcatenatedDataCodec ab = new ConcatenatedDataCodec(new DataCodec[]{a, b});\n\n\t\tfinal ReadData encodedData = ab.encode(data).materialize();\n\t\tassertArrayEquals(expected, encodedData.allBytes());\n\n\t\tfinal ReadData decodedData = ab.decode(encodedData).materialize();\n\t\tassertArrayEquals(bytes, decodedData.allBytes());\n\t}\n\n\tpublic static class ByteFunctionCodec implements DataCodec, DataCodecInfo {\n\n\t\tIntUnaryOperator encoder;\n\t\tIntUnaryOperator decoder;\n\n\t\tpublic ByteFunctionCodec( IntUnaryOperator encoder, IntUnaryOperator decoder ) {\n\t\t\tthis.encoder = encoder;\n\t\t\tthis.decoder = decoder;\n\t\t}\n\n\t\t@Override\n\t\tpublic String getType() {\n\t\t\treturn \"byteFunction\";\n\t\t}\n\n\t\tpublic ReadData decode(ReadData data) {\n\t\t\treturn data.encode(new ByteFun(decoder));\n\t\t}\n\n\t\tpublic ReadData encode(ReadData data) {\n\t\t\treturn data.encode(new ByteFun(encoder));\n\t\t}\n\n\t\t@Override public DataCodec create() {\n\n\t\t\treturn this;\n\t\t}\n\t}\n\n\tprivate static class ByteFun implements OutputStreamOperator {\n\n\t\tIntUnaryOperator fun;\n\t\tpublic ByteFun(IntUnaryOperator fun) {\n\t\t\tthis.fun = fun;\n\t\t}\n\n\t\t@Override\n\t\tpublic OutputStream apply(OutputStream o) {\n\t\t\treturn new OutputStream() {\n\t\t\t\t@Override\n\t\t\t\tpublic void write(int b) throws IOException {\n\t\t\t\t\to.write(fun.applyAsInt(b));\n\t\t\t\t}\n\t\t\t};\n\t\t}\n\t}\n\n\t@NameConfig.Name(BitShiftBytesCodec.TYPE)\n\tpublic static class BitShiftBytesCodec implements DataCodec, DataCodecInfo {\n\t\t@Override public DataCodec create() {\n\n\t\t\treturn this;\n\t\t}\n\n\t\tprivate static final String TYPE = \"bitshift\";\n\n\t\t@NameConfig.Parameter\n\t\tprivate int shift;\n\n\t\tpublic BitShiftBytesCodec() {\n\n\t\t\tshift = 0;\n\t\t}\n\n\t\tpublic BitShiftBytesCodec(int shift) {\n\n\t\t\tthis.shift = 
shift;\n\t\t}\n\n\t\t@Override\n\t\tpublic String getType() {\n\n\t\t\treturn TYPE;\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData decode(ReadData readData) throws N5IOException {\n\n\t\t\tif (shift == 0) {\n\t\t\t\treturn readData;\n\t\t\t}\n\n\t\t\tfinal byte[] data = readData.allBytes();\n\t\t\tfinal byte[] decoded = new byte[data.length];\n\n\t\t\t// Apply inverse bit shift (right rotate) to decode\n\t\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\t\tint b = data[i] & 0xFF;\n\t\t\t\tdecoded[i] = (byte)((b >>> shift) | (b << (8 - shift)));\n\t\t\t}\n\n\t\t\treturn ReadData.from(decoded);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(ReadData readData) throws N5IOException {\n\n\t\t\tif (shift == 0) {\n\t\t\t\treturn readData;\n\t\t\t}\n\n\t\t\tbyte[] data = readData.allBytes();\n\t\t\tbyte[] encoded = new byte[data.length];\n\n\t\t\t// Apply bit shift (left rotate) to encode\n\t\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\t\tint b = data[i] & 0xFF;\n\t\t\t\tencoded[i] = (byte)((b << shift) | (b >>> (8 - shift)));\n\t\t\t}\n\t\t\treturn ReadData.from(encoded);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean equals(Object obj) {\n\n\t\t\tif (this == obj) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tif (obj == null || getClass() != obj.getClass()) {\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tBitShiftBytesCodec other = (BitShiftBytesCodec)obj;\n\t\t\treturn shift == other.shift;\n\t\t}\n\n\t\t@Override\n\t\tpublic int hashCode() {\n\n\t\t\treturn Integer.hashCode(shift);\n\t\t}\n\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/codec/ChecksumCodecTests.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertThrows;\n\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.codec.checksum.Crc32cChecksumCodec;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.junit.Test;\n\npublic class ChecksumCodecTests {\n\n\t@Test\n\tpublic void testCrc32cChecksumCodec() {\n\n\t\tfinal ReadData rd = ReadData.from(new byte[] {0,1,2,3,4,5,6,7,8,9});\n\t\tfinal long N = rd.requireLength();\n\n\t\tfinal Crc32cChecksumCodec codec = new Crc32cChecksumCodec();\n\t\tfinal ReadData encoded = codec.encode(rd);\n\n\t\t// CRC32C appends a 4-byte checksum to the data\n\t\tassertEquals(N + codec.numChecksumBytes(), encoded.requireLength());\n\n\t\tfinal ReadData decoded = codec.decode(encoded);\n\t\tassertArrayEquals(rd.allBytes(), decoded.allBytes());\n\n\t\t// decoding corrupted data must throw an exception\n\t\tfinal byte[] encodedBytes = encoded.allBytes();\n\t\tencodedBytes[1]++;\n\t\tfinal ReadData perturbed = ReadData.from(encodedBytes);\n\t\tassertThrows(N5Exception.class, () -> codec.decode(perturbed));\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/codec/DatasetCodecTests.java",
    "content": "package org.janelia.saalfeldlab.n5.codec;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\n\nimport org.janelia.saalfeldlab.n5.codec.transpose.TransposeCodecInfo;\nimport org.junit.Test;\n\npublic class DatasetCodecTests {\n\n\t@Test\n\tpublic void testTransposeCodecSimplification() throws Exception {\n\n\t\t// 2d\n\t\tfinal TransposeCodecInfo id2 = new TransposeCodecInfo(new int[]{0, 1});\n\t\tfinal TransposeCodecInfo rev2 = new TransposeCodecInfo(new int[]{1, 0});\n\n\t\tassertNull(TransposeCodecInfo.concatenate(null));\n\t\tassertEquals(id2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{id2}));\n\t\tassertEquals(rev2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev2}));\n\n\t\tassertEquals(rev2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev2, id2}));\n\t\tassertEquals(rev2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{id2, rev2, id2}));\n\n\t\tassertEquals(id2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev2, rev2}));\n\t\tassertEquals(id2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev2, rev2, rev2, rev2}));\n\t\tassertEquals(id2, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{id2, rev2, id2, rev2, rev2, rev2}));\n\n\t\t// 3d\n\t\tfinal TransposeCodecInfo id3 = new TransposeCodecInfo(new int[]{0, 1, 2});\n\t\tfinal TransposeCodecInfo rev3 = new TransposeCodecInfo(new int[]{2, 1, 0});\n\n\t\tfinal TransposeCodecInfo t021 = new TransposeCodecInfo(new int[]{0, 2, 1});\n\t\tfinal TransposeCodecInfo t102 = new TransposeCodecInfo(new int[]{1, 0, 2});\n\t\tfinal TransposeCodecInfo t120 = new TransposeCodecInfo(new int[]{1, 2, 0});\n\t\tfinal TransposeCodecInfo t201 = new TransposeCodecInfo(new int[]{2, 0, 1});\n\n\t\tassertEquals(id3, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{id3}));\n\t\tassertEquals(rev3, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev3}));\n\n\t\tassertEquals(rev3, 
TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev3, id3}));\n\t\tassertEquals(rev3, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{id3, rev3, id3}));\n\n\t\tassertEquals(t102, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{rev3, t102, t021}));\n\t\tassertEquals(t201, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{t021, t102}));\n\t\tassertEquals(t120, TransposeCodecInfo.concatenate(new TransposeCodecInfo[]{t102, t021}));\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/compression/CompressionTypesTest.java",
    "content": "package org.janelia.saalfeldlab.n5.compression;\n\nimport java.lang.reflect.Field;\nimport java.util.Map;\n\nimport org.apache.commons.collections4.MapUtils;\nimport org.janelia.saalfeldlab.n5.CompressionAdapter;\nimport org.junit.Test;\n\n/**\n * @author Stephan Saalfeld\n *\n */\npublic class CompressionTypesTest {\n\n\t@Test\n\tpublic void test() throws NoSuchFieldException, SecurityException, IllegalArgumentException, IllegalAccessException {\n\n\t\tCompressionAdapter compressionTypes = CompressionAdapter.getJsonAdapter();\n\n\t\t// print the compression constructors registered with the adapter\n\t\tField field = CompressionAdapter.class.getDeclaredField(\"compressionConstructors\");\n\t\tfield.setAccessible(true);\n\t\tObject value = field.get(compressionTypes);\n\t\tMapUtils.verbosePrint(System.out, \"\", (Map<?, ?>)value);\n\n\t\t// print the registered compression parameters\n\t\tfield = CompressionAdapter.class.getDeclaredField(\"compressionParameters\");\n\t\tfield.setAccessible(true);\n\t\tvalue = field.get(compressionTypes);\n\t\tMapUtils.verbosePrint(System.out, \"\", (Map<?, ?>)value);\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/demo/AttributePathDemo.java",
    "content": "package org.janelia.saalfeldlab.n5.demo;\n\nimport java.io.IOException;\nimport java.nio.charset.Charset;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.Map;\n\nimport org.janelia.saalfeldlab.n5.N5FSWriter;\nimport org.janelia.saalfeldlab.n5.N5Reader;\n\nimport com.google.gson.Gson;\nimport com.google.gson.GsonBuilder;\nimport com.google.gson.JsonObject;\n\npublic class AttributePathDemo {\n\n\tfinal Gson gson;\n\n\tpublic AttributePathDemo() {\n\t\tgson = new Gson();\n\t}\n\n\tpublic static void main(String[] args) throws IOException {\n\n\t\tnew AttributePathDemo().demo();\n\t}\n\n\tpublic void demo() throws IOException {\n\n\t\tfinal String rootPath = \"/home/john/projects/n5/demo.n5\";\n\t\tfinal N5FSWriter n5 = new N5FSWriter( rootPath );\n\t\tfinal N5FSWriter n5WithNulls = new N5FSWriter( rootPath, new GsonBuilder().serializeNulls() );\n\n\t\tfinal String group = \"\";\n\t\tfinal String specialGroup = \"specialCharsGroup\";\n\t\tfinal String rmAndNulls = \"rmAndNulls\";\n\t\tn5.createGroup(group);\n\t\tn5.createGroup(specialGroup);\n\t\tn5.createGroup(rmAndNulls);\n\n\t\t// clear all attributes\n\t\tn5.setAttribute(group, \"/\", new JsonObject());\n\t\tn5.setAttribute(specialGroup, \"/\", new JsonObject());\n\t\tn5.setAttribute(rmAndNulls, \"/\", new JsonObject());\n\n\t\tsimple(n5, group);\n\t\tarrays(n5, group);\n\t\tobjects(n5, group);\n\t\tspecialChars(n5, specialGroup);\n\t\tremovingAttributesAndNulls(n5, rmAndNulls);\n\t\tremovingAttributesAndNulls(n5WithNulls, rmAndNulls);\n\n\t\tn5.close();\n\t}\n\n\tpublic void simple(final N5FSWriter n5, final String group) throws IOException {\n\t\tn5.setAttribute(group, \"six\", 6);\n\t\tSystem.out.println(n5.getAttribute(\"/\", \"six\", Integer.class)); // 6\n\t\tSystem.out.println(n5.getAttribute(\"/\", \"twelve\", Integer.class)); // null\n\n\t\tfinal String longKey = \"The Answer to the Ultimate 
Question\";\n\t\tn5.setAttribute(group, longKey, 42);\n\t\tSystem.out.println(n5.getAttribute(group, longKey, Integer.class)); // 42\n\n\t\tn5.setAttribute(group, \"name\", \"Marie Daly\");\n\t\tSystem.out.println(n5.getAttribute(group, \"name\", String.class)); // returns \"Marie Daly\"\n\t\tSystem.out.println(n5.getAttribute(group, \"name\", int.class)); // returns null\n\n\t\tn5.setAttribute(group, \"year\", \"1921\");\n\t\tSystem.out.println( \"(String):\" + n5.getAttribute(group, \"year\", String.class));\n\t\tSystem.out.println( \"(int)   :\" + n5.getAttribute(group, \"year\", int.class));\n\n\t\tn5.setAttribute(group, \"animal\", \"aardvark\");\n\t\tSystem.out.println( n5.getAttribute(group, \"animal\", String.class)); // \"aardvark\"\n\t\tn5.setAttribute(group, \"animal\", new String[]{\"bat\", \"cat\", \"dog\"}); // overwrites \"animal\"\n\t\tprintAsJson( n5.getAttribute(group, \"animal\", String[].class)); // [\"bat\", \"cat\", \"dog\"]\n\n\t\tSystem.out.println(getRawJson(n5, group));\n\t\t// {\"six\":6,\"The Answer to the Ultimate Question\":42,\"name\":\"Marie Daly\",\"year\":\"1921\",\"animal\":[\"bat\",\"cat\",\"dog\"]}\n\n\t\tn5.setAttribute(group, \"/\", new JsonObject()); // clears all attributes\n\t\tSystem.out.println(getRawJson(n5, group));\n\t\t// {}\n\t}\n\n\tpublic void arrays(final N5FSWriter n5, final String group) throws IOException {\n\n\t\tn5.setAttribute(group, \"array\", new double[] { 5, 6, 7, 8 });\n\t\tSystem.out.println( Arrays.toString(n5.getAttribute(group, \"array\", double[].class))); // [5.0, 6.0, 7.0, 8.0]\n\t\tSystem.out.println( n5.getAttribute(group, \"array[0]\", double.class)); // 5.0\n\t\tSystem.out.println( n5.getAttribute(group, \"array[2]\", double.class)); // 7.0\n\t\tSystem.out.println( n5.getAttribute(group, \"array[999]\", double.class)); // null\n\t\tSystem.out.println( n5.getAttribute(group, \"array[-1]\", double.class)); // null\n\n\t\tn5.setAttribute(group, \"array[1]\", 0.6);\n\t\tSystem.out.println( 
Arrays.toString(n5.getAttribute(group, \"array\", double[].class))); // [5.0, 0.6, 7.0, 8.0]\n\t\tn5.setAttribute(group, \"array[6]\", 99.99 );\n\t\tSystem.out.println( Arrays.toString(n5.getAttribute(group, \"array\", double[].class))); // [5.0, 0.6, 7.0, 8.0, 0.0, 0.0, 99.99]\n\t\tn5.setAttribute(group, \"array[-5]\", -5 );\n\t\tSystem.out.println( Arrays.toString(n5.getAttribute(group, \"array\", double[].class))); // [5.0, 0.6, 7.0, 8.0, 0.0, 0.0, 99.99]\n\n\t\tSystem.out.println( n5.getAttribute(group, \"array\", int.class)); // null (an array cannot be read as int)\n\t}\n\n\t@SuppressWarnings(\"rawtypes\")\n\tpublic void objects(final N5FSWriter n5, final String group) throws IOException {\n\t\tMap a = Collections.singletonMap(\"a\", \"A\");\n\t\tMap b = Collections.singletonMap(\"b\", \"B\");\n\t\tMap c = Collections.singletonMap(\"c\", \"C\");\n\n\t\tn5.setAttribute(group, \"obj\", a );\n\t\tprintAsJson(n5.getAttribute(group, \"obj\", Map.class)); // {\"a\":\"A\"}\n\t\tSystem.out.println(\"\");\n\n\t\tn5.setAttribute(group, \"obj/a\", b);\n\t\tprintAsJson(n5.getAttribute(group, \"obj\", Map.class)); \t // {\"a\": {\"b\": \"B\"}}\n\t\tprintAsJson(n5.getAttribute(group, \"obj/a\", Map.class)); // {\"b\": \"B\"}\n\t\tSystem.out.println(\"\");\n\n\t\tn5.setAttribute(group, \"obj/a/b\", c);\n\t\tprintAsJson(n5.getAttribute(group, \"obj\", Map.class));     // {\"a\": {\"b\": {\"c\": \"C\"}}}\n\t\tprintAsJson(n5.getAttribute(group, \"obj/a\", Map.class));   // {\"b\": {\"c\": \"C\"}}\n\t\tprintAsJson(n5.getAttribute(group, \"obj/a/b\", Map.class)); // {\"c\": \"C\"}\n\t\tprintAsJson(n5.getAttribute(group, \"/\", Map.class)); // returns {\"obj\": {\"a\": {\"b\": {\"c\": \"C\"}}}}\n\t\tSystem.out.println(\"\");\n\n\t\tn5.setAttribute(group, \"pet\", new Pet(\"Pluto\", 93));\n\t\tSystem.out.println(n5.getAttribute(group, \"pet\", Pet.class)); \t// pet Pluto is 93\n\t\tprintAsJson(n5.getAttribute(group, \"pet\", Map.class)); \t\t\t// {\"name\": \"Pluto\", \"age\": 
93}\n\n\t\tn5.setAttribute(group, \"pet/likes\", new String[]{\"Micky\"});\n\t\tprintAsJson(n5.getAttribute(group, \"pet\", Map.class)); // {\"name\": \"Pluto\", \"age\": 93, \"likes\": [\"Micky\"]}\n\t\tSystem.out.println(\"\");\n\n\t\tn5.removeAttribute(group, \"/\");\n\t\tSystem.out.println(getRawJson(n5, group)); // null\n\n\t\tn5.setAttribute(group, \"one/[2]/three/[4]\", 5);\n\t\tSystem.out.println(getRawJson(n5, group)); // {\"one\":[null,null,{\"three\":[0,0,0,0,5]}]}\n\t}\n\n\tpublic void specialChars(final N5FSWriter n5, final String group) throws IOException {\n\t\tn5.setAttribute(group, \"\\\\/\", \"fwdSlash\");\n\t\tprintAsJson(n5.getAttribute(group, \"\\\\/\", String.class ));   // \"fwdSlash\"\n\n\t\tn5.setAttribute(group, \"\\\\\\\\\", \"bckSlash\");\n\t\tprintAsJson(n5.getAttribute(group, \"\\\\\\\\\", String.class ));   // \"bckSlash\"\n\n\t\t// print out the contents of attributes.json\n\t\tSystem.out.println(\"\\n\" + getRawJson(n5, group)); // {\"/\":\"fwdSlash\",\"\\\\\\\\\":\"bckSlash\"}\n\t}\n\n\tpublic void removingAttributesAndNulls(final N5FSWriter n5, final String group) throws IOException {\n\n\t\tn5.setAttribute(group, \"cow\", \"moo\");\n\t\tn5.setAttribute(group, \"dog\", \"woof\");\n\t\tn5.setAttribute(group, \"sheep\", \"baa\");\n\t\tSystem.out.println(getRawJson(n5, group)); // {\"sheep\":\"baa\",\"cow\":\"moo\",\"dog\":\"woof\"}\n\n\t\tn5.removeAttribute(group, \"cow\"); // void method\n\t\tSystem.out.println(getRawJson(n5, group)); // {\"sheep\":\"baa\",\"dog\":\"woof\"}\n\n\t\tString theDogSays = n5.removeAttribute(group, \"dog\", String.class); // returns type\n\t\tSystem.out.println(theDogSays);\t\t\t\t// woof\n\t\tSystem.out.println(getRawJson(n5, group));  // {\"sheep\":\"baa\"}\n\n\t\tn5.removeAttribute(group, \"sheep\", int.class); // returns type\n\t\tSystem.out.println(getRawJson(n5, group)); // {\"sheep\":\"baa\"}\n\n\t\tSystem.out.println( n5.removeAttribute(group, \"sheep\", String.class)); // \"baa\" 
\n\t\tSystem.out.println(getRawJson(n5, group)); // {}\n\n\t\tn5.setAttribute(group, \"attr\", \"value\");\n\t\tSystem.out.println(getRawJson(n5, group)); // {\"attr\":\"value\"}\n\t\tn5.setAttribute(group, \"attr\", null);\n\t\tSystem.out.println(getRawJson(n5, group)); // if serializeNulls {\"attr\":null}\n\n\t\tn5.setAttribute(group, \"foo\", 12);\n\t\tSystem.out.println(getRawJson(n5, group)); // {\"foo\":12}\n\t\tn5.removeAttribute(group, \"foo\");\n\t\tSystem.out.println(getRawJson(n5, group)); // {}\n\t}\n\n\tpublic String getRawJson(final N5Reader n5, final String group) throws IOException {\n\t\treturn new String(\n\t\t\t\tFiles.readAllBytes(\n\t\t\t\t\t\tPaths.get(Paths.get(n5.getURI()).toAbsolutePath().toString(), group, \"attributes.json\")),\n\t\t\t\tCharset.defaultCharset());\n\t}\n\n\tpublic void printAsJson(final Object obj) {\n\t\tSystem.out.println(gson.toJson(obj));\n\t}\n\n\tclass Pet {\n\t\tString name;\n\t\tint age;\n\n\t\tpublic Pet(String name, int age) {\n\t\t\tthis.name = name;\n\t\t\tthis.age = age;\n\t\t}\n\n\t\tpublic String toString() {\n\t\t\treturn String.format(\"pet %s is %d\", name, age);\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/http/HttpKeyValueAccessTest.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport org.apache.commons.io.IOUtils;\nimport org.janelia.saalfeldlab.n5.HttpKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.junit.Test;\n\nimport java.io.IOException;\nimport java.net.URI;\nimport java.nio.charset.Charset;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertThrows;\nimport static org.junit.Assume.assumeTrue;\n\npublic class HttpKeyValueAccessTest {\n\n\tstatic final URI baseUrl = URI.create(\"https://raw.githubusercontent.com/saalfeldlab/n5/afb067678b4827777bb26b6412e7759fb7edee5a/src/test/resources/url/urlAttributes.n5\");\n\tstatic final String expectedAttributes = \"{\\\"n5\\\":\\\"2.6.1\\\",\\\"foo\\\":\\\"bar\\\",\\\"f o o\\\":\\\"b a r\\\",\\\"list\\\":[0,1,2,3],\\\"nestedList\\\":[[[1,2,3,4]],[[10,20,30,40]],[[100,200,300,400]],[[1000,2000,3000,4000]]],\\\"object\\\":{\\\"a\\\":\\\"aa\\\",\\\"b\\\":\\\"bb\\\"}}\";\n\n\t@Test\n\tpublic void testExistsRead() {\n\n\t\tfinal HttpKeyValueAccess kva = new HttpKeyValueAccess();\n\t\tfinal String key = \"attributes.json\";\n\n\t\tfinal String absolutePath = kva.compose(baseUrl, key);\n\t\tassumeTrue(kva.exists(absolutePath));\n\n\t\ttry (VolatileReadData data = kva.createReadData(absolutePath)) {\n\t\t\tfinal String attributes = IOUtils.toString(data.inputStream(), Charset.defaultCharset());\n\t\t\tassertEquals(expectedAttributes, attributes);\n\t\t} catch (IOException e) {\n\t\t\t// a transient network IO failure should not fail the test\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testUnsupportedOperations() {\n\n\t\tfinal HttpKeyValueAccess kva = new HttpKeyValueAccess();\n\t\tassertThrows(N5Exception.class, () -> kva.delete(\"foo\"));\n\t\tassertThrows(N5Exception.class, () -> kva.write(\"bar\", ReadData.from(os -> {})));\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/http/HttpReaderFsWriter.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport com.google.gson.Gson;\nimport com.google.gson.JsonElement;\nimport org.janelia.saalfeldlab.n5.CachedGsonKeyValueN5Reader;\nimport org.janelia.saalfeldlab.n5.CachedGsonKeyValueN5Writer;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.GsonKeyValueN5Reader;\nimport org.janelia.saalfeldlab.n5.GsonKeyValueN5Writer;\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.N5KeyValueReader;\n\nimport java.io.Serializable;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.Type;\nimport java.net.URI;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.function.Predicate;\n\npublic class HttpReaderFsWriter implements GsonKeyValueN5Writer {\n\n\tprivate final GsonKeyValueN5Writer writer;\n\tprivate final GsonKeyValueN5Reader reader;\n\n\tpublic <W extends GsonKeyValueN5Writer, R extends GsonKeyValueN5Reader> HttpReaderFsWriter(final W writer, final R reader) {\n\n\t\tthis.writer = writer;\n\t\tthis.reader = reader;\n\n\t\tif (reader instanceof CachedGsonKeyValueN5Reader && writer instanceof CachedGsonKeyValueN5Writer) {\n\t\t\tfinal CachedGsonKeyValueN5Reader cachedReader = (CachedGsonKeyValueN5Reader)reader;\n\t\t\tfinal CachedGsonKeyValueN5Writer cachedWriter = (CachedGsonKeyValueN5Writer)writer;\n\t\t\tif (cachedReader.cacheMeta()) {\n\t\t\t\t/* Hack necessary to test HTTP reader caching without creating the data entirely first */\n\t\t\t\ttry {\n\t\t\t\t\t// Access the private 'cache' field in the reader (or the N5KeyValueReader as a fallback)\n\t\t\t\t\tField cacheField;\n\t\t\t\t\ttry {\n\t\t\t\t\t\tcacheField = reader.getClass().getDeclaredField(\"cache\");\n\t\t\t\t\t} catch (NoSuchFieldException e) {\n\t\t\t\t\t\tcacheField = 
N5KeyValueReader.class.getDeclaredField(\"cache\");\n\t\t\t\t\t}\n\t\t\t\t\tcacheField.setAccessible(true);\n\n\t\t\t\t\t// Set the value of 'cache' to the one from writer.getCache()\n\t\t\t\t\tcacheField.set(reader, cachedWriter.getCache());\n\t\t\t\t} catch (NoSuchFieldException | IllegalAccessException e) {\n\t\t\t\t\tthrow new RuntimeException(\"Failed to set reader cache reflectively\", e);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\n\t}\n\n\t@Override public String getAttributesKey() {\n\n\t\treturn writer.getAttributesKey();\n\t}\n\n\t@Override public Version getVersion() throws N5Exception {\n\n\t\treturn reader.getVersion();\n\t}\n\n\t@Override public URI getURI() {\n\n\t\treturn reader.getURI();\n\t}\n\n\t@Override public <T> T getAttribute(String pathName, String key, Class<T> clazz) throws N5Exception {\n\n\t\treturn reader.getAttribute(pathName, key, clazz);\n\t}\n\n\t@Override public <T> T getAttribute(String pathName, String key, Type type) throws N5Exception {\n\n\t\treturn reader.getAttribute(pathName, key, type);\n\t}\n\n\t@Override public DatasetAttributes getDatasetAttributes(String pathName) throws N5Exception {\n\n\t\treturn reader.getDatasetAttributes(pathName);\n\t}\n\n\t@Override public DataBlock<?> readChunk(String pathName, DatasetAttributes datasetAttributes, long... gridPosition) throws N5Exception {\n\n\t\treturn reader.readChunk(pathName, getConvertedDatasetAttributes(datasetAttributes), gridPosition);\n\t}\n\n\t@Override public <T> T readSerializedBlock(String dataset, DatasetAttributes attributes, long... 
gridPosition) throws N5Exception, ClassNotFoundException {\n\n\t\treturn reader.readSerializedBlock(dataset, getConvertedDatasetAttributes(attributes), gridPosition);\n\t}\n\n\t@Override public KeyValueAccess getKeyValueAccess() {\n\n\t\treturn reader.getKeyValueAccess();\n\t}\n\n\t@Override public boolean exists(String pathName) {\n\n\t\treturn reader.exists(pathName);\n\t}\n\n\t@Override public boolean datasetExists(String pathName) throws N5Exception {\n\n\t\treturn reader.datasetExists(pathName);\n\t}\n\n\t@Override public String[] list(String pathName) throws N5Exception {\n\n\t\treturn reader.list(pathName);\n\t}\n\n\t@Override public String[] deepList(String pathName, Predicate<String> filter) throws N5Exception {\n\n\t\treturn reader.deepList(pathName, filter);\n\t}\n\n\t@Override public String[] deepList(String pathName) throws N5Exception {\n\n\t\treturn reader.deepList(pathName);\n\t}\n\n\t@Override public String[] deepListDatasets(String pathName, Predicate<String> filter) throws N5Exception {\n\n\t\treturn reader.deepListDatasets(pathName, filter);\n\t}\n\n\t@Override public String[] deepListDatasets(String pathName) throws N5Exception {\n\n\t\treturn reader.deepListDatasets(pathName);\n\t}\n\n\t@Override public String[] deepList(String pathName, Predicate<String> filter, ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\treturn reader.deepList(pathName, filter, executor);\n\t}\n\n\t@Override public String[] deepList(String pathName, ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\treturn reader.deepList(pathName, executor);\n\t}\n\n\t@Override public String[] deepListDatasets(String pathName, Predicate<String> filter, ExecutorService executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\treturn reader.deepListDatasets(pathName, filter, executor);\n\t}\n\n\t@Override public String[] deepListDatasets(String pathName, ExecutorService 
executor) throws N5Exception, InterruptedException, ExecutionException {\n\n\t\treturn reader.deepListDatasets(pathName, executor);\n\t}\n\n\t@Override public Gson getGson() {\n\n\t\treturn reader.getGson();\n\t}\n\n\t@Override public Map<String, Class<?>> listAttributes(String pathName) throws N5Exception {\n\n\t\treturn reader.listAttributes(pathName);\n\t}\n\n\t@Override public String getGroupSeparator() {\n\n\t\treturn reader.getGroupSeparator();\n\t}\n\n\t@Override public String groupPath(String... nodes) {\n\n\t\treturn reader.groupPath(nodes);\n\t}\n\n\t@Override public void close() {\n\n\t\treader.close();\n\t\twriter.close();\n\t}\n\n\t@Override public <T> void setAttribute(String groupPath, String attributePath, T attribute) throws N5Exception {\n\n\t\twriter.setAttribute(groupPath, attributePath, attribute);\n\t}\n\n\t@Override public void setAttributes(String groupPath, Map<String, ?> attributes) throws N5Exception {\n\t\twriter.setAttributes(groupPath, attributes);\n\t}\n\n\t@Override public boolean removeAttribute(String groupPath, String attributePath) throws N5Exception {\n\n\t\treturn writer.removeAttribute(groupPath, attributePath);\n\t}\n\n\t@Override public <T> T removeAttribute(String groupPath, String attributePath, Class<T> clazz) throws N5Exception {\n\n\t\treturn writer.removeAttribute(groupPath, attributePath, clazz);\n\t}\n\n\t@Override public boolean removeAttributes(String groupPath, List<String> attributePaths) throws N5Exception {\n\n\t\treturn writer.removeAttributes(groupPath, attributePaths);\n\t}\n\n\t@Override public void setDatasetAttributes(String datasetPath, DatasetAttributes datasetAttributes) throws N5Exception {\n\n\t\twriter.setDatasetAttributes(datasetPath, datasetAttributes);\n\t}\n\n\t@Override public DatasetAttributes getConvertedDatasetAttributes(DatasetAttributes datasetAttributes) {\n\n\t\treturn writer.getConvertedDatasetAttributes(datasetAttributes);\n\t}\n\n\t@Override public void setVersion() throws N5Exception 
{\n\n\t\twriter.setVersion();\n\t}\n\n\t@Override public void createGroup(String groupPath) throws N5Exception {\n\t\twriter.createGroup(groupPath);\n\t}\n\n\t@Override public boolean remove(String groupPath) throws N5Exception {\n\n\t\treturn writer.remove(groupPath);\n\t}\n\n\t@Override public boolean remove() throws N5Exception {\n\n\t\treturn writer.remove();\n\t}\n\n\t@Override public DatasetAttributes createDataset(String datasetPath, DatasetAttributes datasetAttributes) throws N5Exception {\n\n\t\tDatasetAttributes convertedDatasetAttributes = getConvertedDatasetAttributes(datasetAttributes);\n\t\twriter.createDataset(datasetPath, convertedDatasetAttributes);\n\t\treturn convertedDatasetAttributes;\n\t}\n\n\t@Override public <T> void writeChunk(String datasetPath, DatasetAttributes datasetAttributes, DataBlock<T> chunk) throws N5Exception {\n\t\twriter.writeChunk(datasetPath, getConvertedDatasetAttributes(datasetAttributes), chunk);\n\t}\n\n\t@Override public <T> void writeBlock(String datasetPath, DatasetAttributes datasetAttributes, DataBlock<T> dataBlock) throws N5Exception {\n\t\twriter.writeBlock(datasetPath, getConvertedDatasetAttributes(datasetAttributes), dataBlock);\n\t}\n\n\t@Override public boolean deleteChunk(String datasetPath, long... gridPosition) throws N5Exception {\n\n\t\treturn writer.deleteChunk(datasetPath, gridPosition);\n\t}\n\n\t@Override public boolean deleteChunks(String datasetPath, DatasetAttributes datasetAttributes, List<long[]> gridPositions) throws N5Exception {\n\n\t\treturn writer.deleteChunks(datasetPath, datasetAttributes, gridPositions);\n\t}\n\n\t@Override public void writeSerializedBlock(Serializable object, String datasetPath, DatasetAttributes datasetAttributes, long... 
gridPosition) throws N5Exception {\n\n\t\twriter.writeSerializedBlock(object, datasetPath, getConvertedDatasetAttributes(datasetAttributes), gridPosition);\n\t}\n\n\t@Override public void setVersion(String path) {\n\n\t\twriter.setVersion(path);\n\t}\n\n\t@Override public void writeAttributes(String normalGroupPath, JsonElement attributes) throws N5Exception {\n\n\t\twriter.writeAttributes(normalGroupPath, attributes);\n\t}\n\n\t@Override public void setAttributes(String path, JsonElement attributes) throws N5Exception {\n\n\t\twriter.setAttributes(path, attributes);\n\t}\n\n\t@Override public void writeAttributes(String normalGroupPath, Map<String, ?> attributes) throws N5Exception {\n\n\t\twriter.writeAttributes(normalGroupPath, attributes);\n\t}\n\n\t@Override public <T> void writeChunks(String datasetPath, DatasetAttributes datasetAttributes, DataBlock<T>... chunks) throws N5Exception {\n\n\t\twriter.writeChunks(datasetPath, getConvertedDatasetAttributes(datasetAttributes), chunks);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/http/N5HttpTest.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport com.google.gson.GsonBuilder;\nimport org.janelia.saalfeldlab.n5.AbstractN5Test;\nimport org.janelia.saalfeldlab.n5.HttpKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5FSWriter;\nimport org.janelia.saalfeldlab.n5.N5KeyValueReader;\nimport org.janelia.saalfeldlab.n5.N5Reader;\nimport org.janelia.saalfeldlab.n5.N5Writer;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Ignore;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized.Parameter;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.net.URI;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.ArrayList;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n@RunWith(RunnerWithHttpServer.class)\npublic class N5HttpTest extends AbstractN5Test {\n\n\t@Parameter\n\tpublic static Path httpServerDirectory;\n\n\t@Parameter\n\tpublic URI httpServerURI;\n\n\t@Override\n\tprotected String tempN5Location() {\n\n\t\ttry {\n\t\t\tfinal File tmpFile = Files.createTempFile(httpServerDirectory, \"n5-http-test-\", \".n5\").toFile();\n\t\t\tassertTrue(tmpFile.delete());\n\t\t\treturn tmpFile.getName();\n\t\t} catch (final IOException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\tprivate static final ArrayList<N5Writer> tempClassWriters = new ArrayList<>();\n\n\t@After\n\t@Override\n\tpublic void removeTempWriters() {\n\n\t\t//For HTTP, don't remove After, remove AfterClass, since we need the server to be shut down first\n\t\t// move the writer to a static list\n\t\ttempClassWriters.addAll(tempWriters);\n\t\ttempWriters.clear();\n\t}\n\n\t@AfterClass\n\tpublic static void removeClassTempWriters() {\n\n\t\tfor (final N5Writer writer : tempClassWriters) {\n\t\t\ttry {\n\t\t\t\twriter.remove();\n\t\t\t} catch (final Exception e) 
{\n\t\t\t\t// ignore: best-effort cleanup after the HTTP server has shut down\n\t\t\t}\n\t\t}\n\t\ttempClassWriters.clear();\n\t}\n\n\tprivate static final boolean cacheMeta = true;\n\n\t@Override\n\tprotected N5Writer createN5Writer(\n\t\t\tfinal String location,\n\t\t\tfinal GsonBuilder gson) throws IOException {\n\n\t\tfinal String writerFsPath = httpServerDirectory.resolve(location).toFile().getCanonicalPath();\n\t\tfinal N5FSWriter writer = new N5FSWriter(writerFsPath, gson, cacheMeta);\n\t\tfinal N5KeyValueReader reader = (N5KeyValueReader)createN5Reader(location, gson);\n\t\treturn new HttpReaderFsWriter(writer, reader);\n\t}\n\n\t@Override\n\tprotected N5Reader createN5Reader(\n\t\t\tfinal String location,\n\t\t\tfinal GsonBuilder gson) {\n\n\t\tfinal String readerHttpPath = httpServerURI.resolve(location).toString();\n\t\treturn new N5KeyValueReader(new HttpKeyValueAccess(), readerHttpPath, gson, cacheMeta);\n\t}\n\n\t@Test\n\t@Override\n\tpublic void testVersion() throws NumberFormatException {\n\n\t\ttry (final N5Writer writer = createTempN5Writer()) {\n\n\t\t\tfinal N5Reader.Version n5Version = writer.getVersion();\n\n\t\t\tassertEquals(N5Reader.VERSION, n5Version);\n\n\t\t\tfinal N5Reader.Version incompatibleVersion = new N5Reader.Version(N5Reader.VERSION.getMajor() + 1, N5Reader.VERSION.getMinor(), N5Reader.VERSION.getPatch());\n\t\t\twriter.setAttribute(\"/\", N5Reader.VERSION_KEY, incompatibleVersion.toString());\n\t\t\tfinal N5Reader.Version version = writer.getVersion();\n\t\t\tassertFalse(N5Reader.VERSION.isCompatible(version));\n\n\t\t\tfinal N5Reader.Version compatibleVersion = new N5Reader.Version(N5Reader.VERSION.getMajor(), N5Reader.VERSION.getMinor(), N5Reader.VERSION.getPatch());\n\t\t\twriter.setAttribute(\"/\", N5Reader.VERSION_KEY, compatibleVersion.toString());\n\t\t}\n\t}\n\n\t@Ignore(\"N5Writer not supported for HTTP\")\n\t@Override public void testRemoveGroup() {\n\n\t}\n\n\t@Ignore(\"N5Writer not supported for HTTP\")\n\t@Override public void testRemoveAttributes() {\n\n\t}\n\n\t@Ignore(\"N5Writer not supported for HTTP\")\n\t@Override public void testRemoveContainer() {\n\n\t}\n\n\t@Ignore(\"N5Writer not supported for HTTP\")\n\t@Override public void testDelete() {\n\n\t}\n\n\t@Ignore(\"N5Writer not supported for HTTP\")\n\t@Override public void testWriterSeparation() {\n\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/http/RunnerWithHttpServer.java",
    "content": "package org.janelia.saalfeldlab.n5.http;\n\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.internal.runners.statements.RunAfters;\nimport org.junit.internal.runners.statements.RunBefores;\nimport org.junit.runner.Description;\nimport org.junit.runner.notification.RunNotifier;\nimport org.junit.runners.BlockJUnit4ClassRunner;\nimport org.junit.runners.Parameterized;\nimport org.junit.runners.model.FrameworkField;\nimport org.junit.runners.model.FrameworkMethod;\nimport org.junit.runners.model.Statement;\nimport org.junit.runners.model.TestClass;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.net.URI;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.List;\nimport java.util.concurrent.TimeUnit;\n\npublic class RunnerWithHttpServer extends BlockJUnit4ClassRunner {\n\n\tprivate Process process;\n\n\tprivate final StringBuilder perTestHttpOut = new StringBuilder();\n\n\tprivate Path httpServerDirectory;\n\n\tprivate final URI httpUri = URI.create(\"http://localhost:8000/\");\n\n\tpublic RunnerWithHttpServer(Class<?> klass) throws Exception {\n\n\t\tsuper(klass);\n\t}\n\n\tprivate static Path createTmpServerDirectory() throws IOException {\n\n\t\t/* deleteOnExit doesn't work on temporary files, so delete it manually and recreate explicitly...*/\n\t\tfinal Path tempDirectory = Files.createTempDirectory(\"n5-http-test-server-\");\n\t\ttempDirectory.toFile().delete();\n\t\ttempDirectory.toFile().mkdirs();\n\t\ttempDirectory.toFile().deleteOnExit();\n\t\treturn tempDirectory;\n\t}\n\n\t@Override protected Object createTest() throws Exception {\n\n\t\tfinal Object test = super.createTest();\n\t\tfor (FrameworkField field : getTestClass().getAnnotatedFields(Parameterized.Parameter.class)) {\n\t\t\tif (field.getType().isAssignableFrom(Path.class)) {\n\t\t\t\tfield.getField().set(test, httpServerDirectory);\n\t\t\t} else if 
(field.getType().isAssignableFrom(URI.class)) {\n\t\t\t\tfield.getField().set(test, httpUri);\n\t\t\t}\n\t\t}\n\t\treturn test;\n\t}\n\n\t@Override protected void runChild(FrameworkMethod method, RunNotifier notifier) {\n\n\t\tif (!process.isAlive()) {\n\t\t\tlogHttpOutput();\n\t\t\treturn;\n\t\t}\n\n\t\tDescription description = describeChild(method);\n\t\tif (isIgnored(method)) {\n\t\t\tnotifier.fireTestIgnored(description);\n\t\t} else {\n\t\t\tStatement statement = new Statement() {\n\n\t\t\t\t@Override\n\t\t\t\tpublic void evaluate() throws Throwable {\n\n\t\t\t\t\ttry {\n\t\t\t\t\t\tmethodBlock(method).evaluate();\n\t\t\t\t\t} catch (Exception e) {\n\t\t\t\t\t\tif (!process.isAlive())\n\t\t\t\t\t\t\tlogHttpOutput();\n\t\t\t\t\t\tthrow e;\n\t\t\t\t\t} finally {\n\t\t\t\t\t\tperTestHttpOut.setLength(0);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t};\n\t\t\trunLeaf(statement, description, notifier);\n\t\t}\n\n\t}\n\n\tprivate void logHttpOutput() {\n\n\t\tif (perTestHttpOut.length() > 0) {\n\t\t\tperTestHttpOut.insert(0, \"Last HTTP Server Output.\\n\");\n\t\t\tperTestHttpOut.insert(0, \"Http Server is not alive.\\n\");\n\t\t\tSystem.err.println(perTestHttpOut);\n\t\t}\n\t}\n\n\t@Before\n\tpublic void startHttpServer() throws Exception {\n\n\t\thttpServerDirectory = createTmpServerDirectory();\n\t\tfinal String osName = System.getProperty(\"os.name\");\n\t\tfinal String pythonBinary = \"Mac OS X\".equals(osName) ? 
\"python3\" : \"python\";\n\t\tProcessBuilder processBuilder = new ProcessBuilder(pythonBinary, \"-m\", \"http.server\");\n\t\tprocessBuilder.directory(httpServerDirectory.toFile());\n\t\tprocessBuilder.redirectErrorStream(true);\n\t\tprocess = processBuilder.start();\n\t\twaitForHttpReady();\n\t\t/* drain the server's output continuously so its pipe buffer never fills and blocks the process */\n\t\tfinal Thread clearStdout = new Thread(() -> {\n\t\t\ttry (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {\n\t\t\t\tString line;\n\t\t\t\twhile ((line = reader.readLine()) != null) {\n\t\t\t\t\tperTestHttpOut.append(line).append(\"\\n\");\n\t\t\t\t}\n\t\t\t} catch (IOException e) {\n\t\t\t\tthrow new RuntimeException(e);\n\t\t\t}\n\t\t});\n\t\tclearStdout.setDaemon(true);\n\t\tclearStdout.start();\n\t}\n\n\tprivate void waitForHttpReady() throws IOException, InterruptedException {\n\n\t\tfinal Thread waitForConnect = new Thread(() -> {\n\t\t\twhile (true) {\n\t\t\t\ttry {\n\t\t\t\t\thttpUri.toURL().openConnection().connect();\n\t\t\t\t\treturn;\n\t\t\t\t} catch (Exception e) {\n\t\t\t\t\t// not ready yet; keep retrying\n\t\t\t\t}\n\t\t\t}\n\t\t});\n\t\twaitForConnect.setDaemon(true);\n\t\twaitForConnect.start();\n\t\twaitForConnect.join(10_000);\n\t\thttpUri.toURL().openConnection().connect();\n\t}\n\n\t@After\n\tpublic void stopHttpServer() {\n\n\t\tprocess.destroy();\n\t\ttry {\n\t\t\tprocess.waitFor(1, TimeUnit.SECONDS);\n\t\t} catch (InterruptedException e) {\n\t\t\tprocess.destroyForcibly();\n\t\t}\n\t}\n\n\t@Override protected Statement withBeforeClasses(Statement statement) {\n\n\t\tfinal Statement testClassBefore = super.withBeforeClasses(statement);\n\t\tfinal List<FrameworkMethod> beforeTestClass = new TestClass(RunnerWithHttpServer.class).getAnnotatedMethods(Before.class);\n\t\treturn new RunBefores(testClassBefore, beforeTestClass, this);\n\t}\n\n\t@Override protected Statement withAfterClasses(Statement statement) {\n\n\t\tfinal List<FrameworkMethod> afterTestClass = new TestClass(RunnerWithHttpServer.class).getAnnotatedMethods(After.class);\n\t\tfinal RunAfters runnerAfterClass = new RunAfters(statement, afterTestClass, this);\n\t\treturn super.withAfterClasses(runnerAfterClass);\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/kva/AbstractKeyValueAccessTest.java",
    "content": "package org.janelia.saalfeldlab.n5.kva;\n\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5URI;\nimport org.junit.Test;\n\nimport java.net.URI;\nimport java.util.LinkedHashSet;\nimport java.util.Set;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\n\n/**\n * @author Stephan Saalfeld &lt;saalfelds@janelia.hhmi.org&gt;\n */\npublic abstract class AbstractKeyValueAccessTest {\n\n\tprotected abstract KeyValueAccess newKeyValueAccess(URI root);\n\n\tprotected KeyValueAccess newKeyValueAccess() {\n\n\t\treturn newKeyValueAccess(tempUri());\n\t}\n\n\tprotected abstract URI tempUri();\n\n\tprotected URI[] testURIs(final URI base) {\n\n\t\tfinal Set<URI> testUris = new LinkedHashSet<>();\n\t\t/*add the base uri as a test case */\n\t\ttestUris.add(base);\n\n\t\t/* NOTE: Java 8 doesn't behave well with URIs with empty path when resolving against a path.\n\t\t*   See KeyValueAccess#compose for more details.\n\t\t* \tIn tests with that as a base URI, resolve against `/` first.\n\t\t* \tShould be unnecessary in Java 21*/\n\t\tfinal URI testUri = base.getPath().isEmpty() ? 
base.resolve(\"/\") : base;\n\t\tfinal URI[] pathParts = new URI[]{\n\t\t\t\tN5URI.getAsUri(\"test/path/file\"),     // typical path, with no leading or trailing slashes\n\t\t\t\tN5URI.getAsUri(\"test/path/file/\"),     // typical path, with trailing slash\n\t\t\t\tN5URI.getAsUri(\"/test/path/file\"),     // typical path, with leading slash\n\t\t\t\tN5URI.getAsUri(\"/test/path/file/\"),     // typical path, with leading and trailing slash\n\t\t\t\tN5URI.getAsUri(\"file\"),             // single path\n\t\t\t\tN5URI.getAsUri(\"file/\"),             // single path\n\t\t\t\tN5URI.getAsUri(\"/file\"),             // single path\n\t\t\t\tN5URI.getAsUri(\"/file/\"),             // single path\n\t\t\t\tN5URI.getAsUri(\"path/w i t h/spaces\"),\n\t\t\t\tN5URI.getAsUri(\"uri/illegal%character\"),\n\t\t\t\tN5URI.getAsUri(\"/\"), \t\t\t\t// root path\n\t\t\t\tN5URI.getAsUri(\"\")\t\t\t\t\t// empty path\n\t\t};\n\t\tfor (final URI pathPart : pathParts) {\n\t\t\ttestUris.add(testUri.resolve(pathPart));\n\t\t}\n\t\treturn testUris.toArray(new URI[0]);\n\t}\n\n\tprotected String[][] testPathComponents(final URI base) {\n\n\t\tfinal URI[] testPaths = testURIs(base);\n\t\tfinal String[][] expectedComponents = new String[testPaths.length][];\n\t\tfor (int i = 0; i < testPaths.length; ++i) {\n\t\t\tfinal URI testUri = testPaths[i];\n\t\t\tfinal String uriPath = testUri.getPath();\n\t\t\texpectedComponents[i] = uriPath.split(\"/\");\n\t\t\tif (uriPath.startsWith(\"/\")) {\n\t\t\t\t//We always expect the first path to be forward slash if it's absolute\n\t\t\t\tif (expectedComponents[i].length == 0)\n\t\t\t\t\texpectedComponents[i] = new String[1];\n\t\t\t\texpectedComponents[i][0] = \"/\";\n\t\t\t}\n\t\t\tif (uriPath.endsWith(\"/\")) {\n\t\t\t\tfinal int lastCompIdx = expectedComponents[i].length - 1;\n\t\t\t\tfinal String lastComponent = expectedComponents[i][lastCompIdx];\n\t\t\t\tif (!lastComponent.endsWith(\"/\")) {\n\t\t\t\t\texpectedComponents[i][lastCompIdx] = lastComponent + 
\"/\";\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn expectedComponents;\n\t}\n\n\tprotected void testComponentsAtLocation(URI testRoot) {\n\n\t\tfinal KeyValueAccess access = newKeyValueAccess();\n\n\t\tfinal URI[] testPaths = testURIs(testRoot);\n\t\tfinal String[][] expectedComponents = testPathComponents(testRoot);\n\n\t\tfor (int i = 0; i < testPaths.length; ++i) {\n\n\t\t\tfinal String[] components = access.components(testPaths[i].getPath());\n\n\t\t\tassertArrayEquals(\"Failure at Index \" + i, expectedComponents[i], components);\n\t\t}\n\t}\n\n\tprotected void testComposeAtLocation(URI uri) {\n\n\t\tfinal KeyValueAccess access = newKeyValueAccess();\n\n\t\t/* remove any path information to get the base URI without path. */\n\n\t\tfinal URI[] testUris = testURIs(uri);\n\t\tfinal String[][] testPathComponents = testPathComponents(uri);\n\n\t\tfor (int i = 0; i < testPathComponents.length; ++i) {\n\t\t\ttestPathComponents[i] = testPathComponents[i].clone();\n\t\t\t/* Don't add the \"/\" if the input uri path is empty. just use it. Otherwise, remove the parts and start with \"/\" */\n\t\t\tfinal URI baseUri = uri.getPath().isEmpty() ? 
uri : testUris[i].resolve(\"/\");\n\t\t\tfinal String[] components = testPathComponents[i];\n\t\t\tfinal String actualCompose = access.compose(baseUri, components);\n\t\t\tfinal String expectedCompose = access.compose(testUris[i]);\n\t\t\tassertEquals(\"Failure at Index \" + i, expectedCompose, actualCompose);\n\t\t}\n\t}\n\n\t@Test\n\tpublic void testComponents() {\n\n\t\ttestComponentsAtLocation(tempUri());\n\t}\n\n\t@Test\n\tpublic void testComponentsWithPathSlash() {\n\n\t\tfinal URI uriWithPathSlash = setUriPath(tempUri(), \"/\");\n\t\ttestComponentsAtLocation(uriWithPathSlash);\n\t}\n\n\t@Test\n\tpublic void testComponentsWithPathEmpty() {\n\n\t\tfinal URI uriWithPathEmpty = setUriPath(tempUri(), \"\");\n\t\ttestComponentsAtLocation(uriWithPathEmpty);\n\t}\n\n\t@Test\n\tpublic void testCompose() {\n\n\t\tfinal URI uri = tempUri();\n\t\ttestComposeAtLocation(uri);\n\n\t\tfinal KeyValueAccess kva = newKeyValueAccess();\n\t\tfinal URI uriWithPath = setUriPath(uri, \"/foo\");\n\t\tassertEquals(\"Non-empty Path\", \"/foo\", uriWithPath.getPath());\n\t\tassertComposeEquals(\"Non-empty Path, no empty or slash in components\", kva, uriWithPath, \"/foo/bar/baz\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Non-empty Path, first component leading slash\", kva, uriWithPath, \"/bar/baz\", \"/bar\", \"baz\");\n\t\tassertComposeEquals(\"Non-empty Path, first component slash only\", kva, uriWithPath, \"/bar/baz\", \"/\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Non-empty Path, first component empty\", kva, uriWithPath, \"/foo/bar/baz\", \"\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Non-empty Path, first component empty, second leading slash\", kva, uriWithPath, \"/bar/baz\", \"\", \"/bar\", \"baz\");\n\t\tassertComposeEquals(\"Non-empty Path, null and empty inner components\", kva, uriWithPath, \"/foo/bar/baz\", \"bar\", null, \"\", \"baz\");\n\t\tassertComposeEquals(\"Non-empty Path, null and empty inner components\", kva, uriWithPath, \"/bar/baz\", \"/bar\", null, \"\", \"baz\");\n\t}\n\n\t@Test\n\tpublic void testComposeWithPathSlash() {\n\n\t\tfinal URI uriWithSlashRoot = setUriPath(tempUri(), \"/\");\n\t\tassertEquals(\"Root (/) Path\", \"/\", uriWithSlashRoot.getPath());\n\t\ttestComposeAtLocation(uriWithSlashRoot);\n\n\t\tfinal KeyValueAccess kva = newKeyValueAccess();\n\t\tassertComposeEquals(\"Root (/) Path, no empty or slash in components\", kva, uriWithSlashRoot, \"/bar/baz\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Root (/) Path, first component leading slash\", kva, uriWithSlashRoot, \"/bar/baz\", \"/bar\", \"baz\");\n\t\tassertComposeEquals(\"Root (/) Path, first component slash only\", kva, uriWithSlashRoot, \"/bar/baz\", \"/\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Root (/) Path, first component empty\", kva, uriWithSlashRoot, \"/bar/baz\", \"\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Root (/) Path, first component empty, second leading slash\", kva, uriWithSlashRoot, \"/bar/baz\", \"\", \"/bar\", \"baz\");\n\t\tassertComposeEquals(\"Root (/) Path, null and empty inner components\", kva, uriWithSlashRoot, \"/bar/baz\", \"bar\", null, \"\", \"baz\");\n\t\tassertComposeEquals(\"Root (/) Path, null and empty inner components\", kva, uriWithSlashRoot, \"/bar/baz\", \"/bar\", null, \"\", \"baz\");\n\t}\n\n\t@Test\n\tpublic void testComposeWithPathEmpty() {\n\n\t\tfinal URI uriWithEmptyRoot = setUriPath(tempUri(), \"\");\n\t\tassertEquals(\"Empty Path\", \"\", uriWithEmptyRoot.getPath());\n\t\ttestComposeAtLocation(uriWithEmptyRoot);\n\n\t\tfinal KeyValueAccess kva = newKeyValueAccess();\n\t\tassertComposeEquals(\"Empty Path, no empty or slash in components\", kva, uriWithEmptyRoot, \"/bar/baz\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Empty Path, first component leading slash\", kva, uriWithEmptyRoot, \"/bar/baz\", \"/bar\", \"baz\");\n\t\tassertComposeEquals(\"Empty Path, first component slash only\", kva, uriWithEmptyRoot, \"/bar/baz\", \"/\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Empty Path, first component empty\", kva, uriWithEmptyRoot, \"/bar/baz\", \"\", \"bar\", \"baz\");\n\t\tassertComposeEquals(\"Empty Path, first component empty, second leading slash\", kva, uriWithEmptyRoot, \"/bar/baz\", \"\", \"/bar\", \"baz\");\n\t\tassertComposeEquals(\"Empty Path, null and empty inner components\", kva, uriWithEmptyRoot, \"/bar/baz\", \"bar\", null, \"\", \"baz\");\n\t\tassertComposeEquals(\"Empty Path, null and empty inner components\", kva, uriWithEmptyRoot, \"/bar/baz\", \"/bar\", null, \"\", \"baz\");\n\t}\n\n\tpublic void assertComposeEquals(String reason, KeyValueAccess kva, URI uri, String absoluteExpectedPath, String... components) {\n\t\tString actual = kva.compose(uri, components);\n\t\tString expected = kva.compose(uri.resolve(absoluteExpectedPath));\n\t\tassertEquals(reason, expected, actual);\n\t}\n\n\tpublic URI setUriPath(final URI uri, final String path) {\n\n\t\tfinal URI tempUri = uri.resolve(\"/\");\n\t\tfinal String newUri = tempUri.toString().replaceAll(tempUri.getPath() + \"$\", path);\n\t\tfinal URI uriWithNewPath = N5URI.getAsUri(newUri);\n\t\tassertEquals(\"setUriPath failed\", path, uriWithNewPath.getPath());\n\t\treturn uriWithNewPath;\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/kva/DelegateKeyValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5.kva;\n\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\n\npublic class DelegateKeyValueAccess implements KeyValueAccess {\n\n    protected final KeyValueAccess kva;\n\n    public DelegateKeyValueAccess(KeyValueAccess kva) { this.kva = kva; }\n\n    @Override\n    public String[] components(String path) {\n        return kva.components(path);\n    }\n\n    @Override\n    public String compose(URI uri, String... components) {\n        return kva.compose(uri, components);\n    }\n\n    @Override\n    public String compose(String... components) {\n        return kva.compose(components);\n    }\n\n    @Override\n    public String parent(String path) {\n        return kva.parent(path);\n    }\n\n    @Override\n    public String relativize(String path, String base) {\n        return kva.relativize(path, base);\n    }\n\n    @Override\n    public String normalize(String path) {\n        return kva.normalize(path);\n    }\n\n    @Override\n    public URI uri(String uriString) throws URISyntaxException {\n        return kva.uri(uriString);\n    }\n\n    @Override\n    public boolean exists(String normalPath) {\n        return kva.exists(normalPath);\n    }\n\n    @Override\n    public long size(String normalPath) throws N5Exception.N5NoSuchKeyException {\n        return kva.size(normalPath);\n    }\n\n    @Override\n    public boolean isDirectory(String normalPath) {\n        return kva.isDirectory(normalPath);\n    }\n\n    @Override\n    public boolean isFile(String normalPath) {\n        return kva.isFile(normalPath);\n    }\n\n    @Override\n    public VolatileReadData createReadData(String normalPath) throws N5Exception.N5IOException {\n        return kva.createReadData(normalPath);\n    }\n\n    
@Override\n    public void write(String normalPath, ReadData data) throws N5Exception.N5IOException {\n        kva.write(normalPath, data);\n    }\n\n    @Override\n    public String[] listDirectories(String normalPath) throws N5Exception.N5IOException {\n        return kva.listDirectories(normalPath);\n    }\n\n    @Override\n    public String[] list(String normalPath) throws N5Exception.N5IOException {\n        return kva.list(normalPath);\n    }\n\n    @Override\n    public void createDirectories(String normalPath) throws N5Exception.N5IOException {\n        kva.createDirectories(normalPath);\n    }\n\n    @Override\n    public void delete(String normalPath) throws N5Exception.N5IOException {\n        kva.delete(normalPath);\n    }\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/kva/FileSystemKeyValueAccessTest.java",
    "content": "package org.janelia.saalfeldlab.n5.kva;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.file.FileSystems;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\n\nimport org.janelia.saalfeldlab.n5.FileSystemKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5URI;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\n/**\n * @author Stephan Saalfeld &lt;saalfelds@janelia.hhmi.org&gt;\n *\n */\npublic class FileSystemKeyValueAccessTest extends AbstractKeyValueAccessTest {\n\n\t/* Weird, but consistent on linux and windows */\n\tprivate static URI root = Paths.get(Paths.get(\"/\").toUri()).toUri();\n\n\tprivate static String separator = FileSystems.getDefault().getSeparator();\n\n\tprivate static final FileSystemKeyValueAccess fileSystemKva = new FileSystemKeyValueAccess();\n\t@Override\n\tprotected KeyValueAccess newKeyValueAccess(URI root) {\n\n\t\treturn fileSystemKva;\n\t}\n\n\t@Override\n\tprotected KeyValueAccess newKeyValueAccess() {\n\n\t\treturn fileSystemKva;\n\t}\n\n\t@Override\n\tprotected URI[] testURIs(URI base) {\n\n\t\tfinal URI[] testUris = super.testURIs(base);\n\t\tfinal URI[] addRelativeUris = new URI[testUris.length * 3];\n\t\tfor (int i = 0; i < testUris.length; i++) {\n\t\t\tURI testUri = testUris[i];\n\t\t\taddRelativeUris[i * 3] = testUri;\n\t\t\tPath asPath = Paths.get(testUri);\n\t\t\tfinal URI schemeLess = asPath.toUri();\n\t\t\taddRelativeUris[i * 3 + 1] = schemeLess;\n\t\t\tURI relativeUri = N5URI.encodeAsUriPath(testUri.getPath().replaceAll(\"^\"+base.getPath(), \"\"));\n\t\t\taddRelativeUris[i * 3 + 2] = relativeUri;\n\t\t}\n\t\treturn addRelativeUris;\n\t}\n\n\t@Override\n\tprotected void testComponentsAtLocation(URI testRoot) {\n\n\t\tfinal KeyValueAccess 
access = newKeyValueAccess();\n\n\t\tfinal URI[] testPaths = testURIs(testRoot);\n\t\tfinal String[][] expectedComponents = testPathComponents(testRoot);\n\n\t\tfor (int i = 0; i < testPaths.length; ++i) {\n\n\t\t\tString pathString;\n\t\t\tif (testPaths[i].isAbsolute())\n\t\t\t\tpathString = Paths.get(testPaths[i]).toString();\n\t\t\telse\n\t\t\t\tpathString = Paths.get(testPaths[i].getPath()).toString();\n\n\t\t\tif (pathString.length() > 1 && testPaths[i].toString().endsWith(\"/\"))\n\t\t\t\tpathString += \"/\";\n\t\t\tfinal String[] components = access.components(pathString);\n\n\t\t\tassertArrayEquals(\"Failure at Index \" + i ,expectedComponents[i], components);\n\t\t}\n\t}\n\n\tprivate Path getPathFromFileURI(URI fileUri) {\n\n\t\ttry {\n\t\t\treturn new File(fileUri).toPath();\n\t\t} catch (Exception ignore) {\n\n\t\t}\n\t\ttry {\n\t\t\treturn new File(fileUri.getPath()).toPath();\n\t\t} catch (Exception ignore) {\n\n\t\t}\n\t\tthrow new IllegalArgumentException(\"Unable to get Path for URI: \" + fileUri);\n\t}\n\n\t@Override\n\tprotected String[][] testPathComponents(URI base) {\n\n\t\tfinal URI[] testPaths = testURIs(base);\n\t\tfinal String[][] expectedComponents = new String[testPaths.length][];\n\t\tfor (int i = 0; i < testPaths.length; ++i) {\n\t\t\tfinal URI testUri = testPaths[i];\n\t\t\tfinal String testPathStr = testUri.getPath();\n\t\t\tfinal Path testPath = getPathFromFileURI(testUri);\n\t\t\tfinal int numComponents = (testPath.getRoot() != null ? 
1 : 0) + testPath.getNameCount();\n\t\t\tfinal String[] components = new String[numComponents];\n\t\t\tint cIdx = 0;\n\t\t\tif (testPath.getRoot() != null)\n\t\t\t\tcomponents[cIdx++] = testPath.getRoot().toString();\n\n\t\t\tfor (int nameIdx = 0; nameIdx < testPath.getNameCount(); nameIdx++) {\n\t\t\t\tcomponents[cIdx++] = testPath.getName(nameIdx).toString();\n\t\t\t}\n\n\t\t\tif (components.length > 0 && (testPath.getRoot()==null || !components[components.length - 1].equals(testPath.getRoot().toString())) && testPathStr.endsWith(\"/\")) {\n\t\t\t\tfinal int lastCompIdx = components.length - 1;\n\t\t\t\tfinal String lastComponent = components[lastCompIdx];\n\t\t\t\tif (!lastComponent.endsWith(\"/\")) {\n\t\t\t\t\tcomponents[lastCompIdx] = lastComponent + \"/\";\n\t\t\t\t}\n\t\t\t}\n\t\t\texpectedComponents[i] = components;\n\t\t}\n\t\treturn expectedComponents;\n\t}\n\n\t@Override\n\tprotected URI tempUri() {\n\n\t\ttry {\n\t\t\tfinal Path tempDirectory = Files.createTempDirectory(\"n5-filesystem-kva-test-\");\n\t\t\tfinal File tmpDir = tempDirectory.toFile();\n\t\t\ttmpDir.delete();\n\t\t\ttmpDir.mkdir(); //DeleteOnExit doesn't work on temp directory... so we delete and make it explicitly.\n\t\t\ttmpDir.deleteOnExit();\n\t\t\treturn tempDirectory.toUri() ;\n\t\t} catch (IOException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\tprotected void testComposeAtLocation(URI uri) {\n\n\t\tfinal KeyValueAccess access = newKeyValueAccess();\n\n\t\t/* remove any path information to get the base URI without path. */\n\n\t\tfinal URI[] testUris = testURIs(uri);\n\t\tfinal String[][] testPathComponents = testPathComponents(uri);\n\n\t\tfor (int i = 0; i < testPathComponents.length; ++i) {\n\t\t\tfinal URI baseUri = testUris[i].resolve(\"/\");\n\t\t\tfinal String[] components = testPathComponents[i];\n\t\t\tfinal String composedKey = access.compose(baseUri, components);\n\t\t\tfinal URI absoluteUri = testUris[i].isAbsolute() ? 
testUris[i] : uri.resolve(\"/\").resolve(testUris[i]);\n\t\t\tfinal String testPath = FileSystems.getDefault().provider().getPath(absoluteUri).toAbsolutePath().toString();\n\t\t\tassertEquals(\"Failure at Index \" + i, testPath, composedKey);\n\t\t}\n\t}\n\n\t@Override\n\t@Test\n\t@Ignore(\"Empty path is invalid for file URIs.\")\n\tpublic void testComponentsWithPathEmpty() {\n\t\t/* File URIs are purely paths (with an optional file: scheme), so an empty path resolves to a relative path, not the root of the container.\n\t\t * Because of that, there is no valid file URI with an empty path (it would be just an empty string or `file://`, both of which are invalid). */\n\t\tsuper.testComponentsWithPathEmpty();\n\t}\n\n\t@Override\n\t@Test\n\t@Ignore(\"Empty path is invalid for file URIs.\")\n\tpublic void testComposeWithPathEmpty() {\n\t\t/* File URIs are purely paths (with an optional file: scheme), so an empty path resolves to a relative path, not the root of the container.\n\t\t * Because of that, there is no valid file URI with an empty path (it would be just an empty string or `file://`, both of which are invalid). */\n\t\tsuper.testComposeWithPathEmpty();\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/kva/FsLockingValidation.java",
    "content": "package org.janelia.saalfeldlab.n5.kva;\n\nimport java.util.Arrays;\nimport java.util.Random;\nimport java.util.concurrent.Callable;\n\nimport org.janelia.saalfeldlab.n5.FileSystemKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\nimport picocli.CommandLine;\nimport picocli.CommandLine.Command;\nimport picocli.CommandLine.Option;\n\n@Command(\n\t\tname = \"FsLockingValidation\",\n\t\tmixinStandardHelpOptions = true,\n\t\tdescription = \"Two-process test for FileLock race conditions.\")\npublic class FsLockingValidation implements Callable<Integer> {\n\n\t@Option(names = {\"--file\", \"-f\"},\n\t\t\trequired = true,\n\t\t\tdescription = \"Path of the file used for the locking test.\")\n\tString file;\n\n\t@Option(names = {\"--data-size\"},\n\t\t\tdescription = \"Bytes written per repeat (default: ${DEFAULT-VALUE}).\")\n\tint dataSize = 65_536;\n\n\t@Option(names = {\"--num-repeats\"},\n\t\t\tdescription = \"Number of write/read cycles (default: ${DEFAULT-VALUE}).\")\n\tint numRepeats = 1000;\n\n\t// Amount of sleep time after getting file size, but before attempting to actually read\n\t@Option(names = {\"--sleep\", \"-s\"},\n\t\t\tdescription = \"ms writer sleeps after TRUNCATE_EXISTING and before lock() (default: ${DEFAULT-VALUE}).\")\n\tlong sleepMs = 0;\n\n\tstatic Random random = new Random();\n\n\tstatic final int id = random.nextInt(9999);\n\n\n\tpublic static void main(String[] args) {\n\t\tSystem.exit(new CommandLine(new FsLockingValidation()).execute(args));\n\t}\n\n\t@Override\n\tpublic Integer call() throws Exception {\n\n\t\tfinal FileSystemKeyValueAccess kva = new FileSystemKeyValueAccess();\n\n\t\t// Seed the file so it exists and has the expected size before any loop iteration.\n\t\tfinal byte[] seed = new byte[dataSize];\n\t\tArrays.fill(seed, (byte) 0x11);\n\t\tkva.write(file, ReadData.from(seed));\n\n\t\t// Size of the \"index\" slice read from the 
end of the file, analogous to\n\t\t// RawShardCodec.decode() reading the shard index at indexOffset = requireLength() - indexBlockSizeInBytes.\n\t\tfinal long indexSliceSize = Math.max(1, dataSize / 8);\n\n\t\tint errors = 0;\n\t\tfor (int i = 0; i < numRepeats; i++) {\n\n\t\t\ttry {\n\t\t\t\tReadData modifiedReadData = null;\n\t\t\t\ttry (VolatileReadData vrd = kva.createReadData(file)) {\n\n\t\t\t\t\t// 1. slice the vrd for a subset near the end — mirrors the index read in\n\t\t\t\t\t//    RawShardCodec.decode(). requireLength() is where Files.size() is called\n\t\t\t\t\t//    and where a negative offset would be computed if the file was truncated\n\t\t\t\t\t//    by a concurrent writer before its exclusive lock was acquired.\n\t\t\t\t\tfinal long totalLength = vrd.requireLength();\n\t\t\t\t\tfinal long sliceOffset = totalLength - indexSliceSize;\n\n\t\t\t\t\tThread.sleep(sleepMs);\n\t\t\t\t\tvrd.slice(sliceOffset, indexSliceSize).materialize();\n\n\t\t\t\t\t// 2. create new ReadData of dataSize, as if we merged the existing\n\t\t\t\t\t//    shard with new blocks and are ready to write the result.\n\t\t\t\t\tfinal byte[] newData = new byte[dataSize];\n\t\t\t\t\tArrays.fill(newData, (byte) (i & 0xFF));\n\n\t\t\t\t\t// 3. set modifiedReadData\n\t\t\t\t\tmodifiedReadData = ReadData.from(newData);\n\t\t\t\t\tmodifiedReadData.materialize();\n\t\t\t\t}\n\n\t\t\t\tkva.write(file, modifiedReadData);\n\t\t\t} catch (Exception e) {\n\t\t\t\terrors++;\n\t\t\t\tSystem.err.printf(\"[id %d] ERROR at repeat %d: %s%n\",\n\t\t\t\t\t\tid, i, e.getMessage());\n\t\t\t}\n\t\t}\n\n\t\treturn errors > 0 ? 1 : 0;\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/kva/HttpKeyValueAccessTest.java",
    "content": "package org.janelia.saalfeldlab.n5.kva;\n\nimport org.janelia.saalfeldlab.n5.AbstractN5Test;\nimport org.janelia.saalfeldlab.n5.HttpKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.http.RunnerWithHttpServer;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\n\nimport java.net.URI;\nimport java.nio.file.Path;\n\n/**\n * @author Stephan Saalfeld &lt;saalfelds@janelia.hhmi.org&gt;\n *\n */\n@RunWith(RunnerWithHttpServer.class)\npublic class HttpKeyValueAccessTest extends AbstractKeyValueAccessTest {\n\n\t@Parameterized.Parameter\n\tpublic static Path httpServerDirectory;\n\n\t@Parameterized.Parameter\n\tpublic URI httpServerURI;\n\n\tprivate static final HttpKeyValueAccess httpKva = new HttpKeyValueAccess();\n\t@Override protected KeyValueAccess newKeyValueAccess(URI root) {\n\n\t\treturn httpKva;\n\t}\n\n\t@Override protected KeyValueAccess newKeyValueAccess() {\n\n\t\treturn httpKva;\n\t}\n\n\t@Override protected URI tempUri() {\n\n\t\tfinal URI tmpUri = AbstractN5Test.createTempUri(\"n5-http-kva-test-\", null, httpServerURI);\n\t\treturn tmpUri;\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/kva/TrackingKeyValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5.kva;\n\nimport org.janelia.saalfeldlab.n5.KeyValueAccess;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.janelia.saalfeldlab.n5.readdata.prefetch.AggregatingPrefetchLazyRead;\n\npublic class TrackingKeyValueAccess extends DelegateKeyValueAccess {\n\n    public int numMaterializeCalls = 0;\n    public int numIsFileCalls = 0;\n    public long totalBytesRead = 0;\n    public boolean aggregate = false;\n\n    public TrackingKeyValueAccess(final KeyValueAccess kva) {\n        super(kva);\n    }\n\n    @Override\n    public boolean isFile(String normalPath) {\n        numIsFileCalls++;\n        return kva.isFile(normalPath);\n    }\n\n    @Override\n    public VolatileReadData createReadData(final String normalPath) {\n\n        final VolatileReadData volatileReadData = kva.createReadData(normalPath);\n        final TrackingLazyRead trackingLazyRead = new TrackingLazyRead(volatileReadData);\n        LazyRead lazyRead = trackingLazyRead;\n        if (aggregate)\n            lazyRead = new AggregatingPrefetchLazyRead(trackingLazyRead);\n        return VolatileReadData.from( lazyRead );\n    }\n\n    private class TrackingLazyRead implements LazyRead {\n\n        private final VolatileReadData readData;\n\n        TrackingLazyRead(final VolatileReadData readData) {\n            this.readData = readData;\n        }\n\n        @Override\n        public long size() throws N5Exception.N5IOException {\n\n            return readData.requireLength();\n        }\n\n        @Override\n        public ReadData materialize(final long offset, final long length) {\n\n            numMaterializeCalls++;\n            return readData.slice(offset, length).materialize();\n        }\n\n        @Override\n        public void close() {\n\n            
readData.close();\n        }\n    }\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/locking/JustFileChannels.java",
    "content": "package org.janelia.saalfeldlab.n5.locking;\n\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.nio.channels.FileChannel;\nimport java.nio.channels.FileLock;\nimport java.nio.channels.OverlappingFileLockException;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.util.Random;\n\nimport static java.nio.file.StandardOpenOption.CREATE;\nimport static java.nio.file.StandardOpenOption.READ;\nimport static java.nio.file.StandardOpenOption.WRITE;\n\npublic class JustFileChannels {\n\n\t// Idea:\n\t//\n\t// Writers write random valid files.\n\t//\n\t// Readers verify that a file is valid.\n\t// Verifying that a file is valid should take multiple reads, to make it\n\t// similar to chunk reading.\n\t//\n\t// Valid file has an \"index\" at the end.\n\t// The index is int[2*N] where N is predefined. The index comprises\n\t// consecutive pairs (offset, value) where `offset` is a byte offset in the\n\t// file and `value` is the in value that should be there.\n\t//\n\t// Writer creates int[random_length + 2*N] and fills the first\n\t// random_length ints with random data. It then draws N random indices into\n\t// that data, looks up those values in the data, and creates an index in\n\t// the final 2*N ints.\n\t//\n\t// To verify, the reader\n\t// 1. gets the channel size\n\t// 2. reads the index\n\t// 3. 
reads N 4-byte slices to verify that the file has the expected values\n\t//    at the expected indices.\n\t//\n\n\tstatic final int minDataSize = 1024;\n\tstatic final int maxDataSize = 1024 * 1024;\n\tstatic final int indexPairs = 100;\n\n\tstatic void write(String path, boolean doLock, final Random random) {\n//\t\tfinal long id = Thread.currentThread().getId();\n//\t\tSystem.out.println(\"write (\"+id+\")\");\n\t\ttry {\n\n\t\t\t// NB: not creating any parent directories for now\n\n\t\t\tfinal Path p = Paths.get(path);\n\t\t\tfinal FileChannel channel = FileChannel.open(p, READ, WRITE, CREATE);\n\n\t\t\tFileLock lock = null;\n\t\t\tif (doLock)\n\t\t\t\tlock = tryLockWait(channel, false);\n\n\t\t\tchannel.truncate(0);\n\n\t\t\tfinal int n = minDataSize + random.nextInt(maxDataSize - minDataSize);\n\t\t\tfinal int[] content = new int[n + 2 * indexPairs];\n\t\t\tfor (int i = 0; i < n; i++) {\n\t\t\t\tcontent[i] = random.nextInt();\n\t\t\t}\n\t\t\tfor (int i = n; i < n + 2 * indexPairs; i+=2) {\n\t\t\t\tfinal int offset = random.nextInt(n);\n\t\t\t\tcontent[i] = offset;\n\t\t\t\tcontent[i+1] = content[offset];\n\t\t\t}\n\n\t\t\tfinal int capacity = 4 * content.length;\n\t\t\tByteBuffer buffer = ByteBuffer.allocate(capacity);\n\t\t\tbuffer.asIntBuffer().put(content);\n\t\t\tif (channel.write(buffer) != capacity)\n\t\t\t\tthrow new RuntimeException(\"write failed\");\n\n\t\t\tif (lock != null)\n\t\t\t\tlock.release();\n\n\t\t\tchannel.close();\n\t\t} catch (IOException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\t\n\tstatic FileLock tryLockWait(FileChannel channel, boolean shared) throws IOException {\n\n\t\tint i = 0;\n\t\twhile (i < 9999) {\n\t\t\ttry {\n\t\t\t\treturn channel.lock(0, Long.MAX_VALUE, shared);\n\t\t\t} catch (final OverlappingFileLockException e) {\n\t\t\t\ttry {\n\t\t\t\t\tThread.sleep(100);\n\t\t\t\t} catch (final InterruptedException ie) {\n\t\t\t\t\tThread.currentThread().interrupt();\n\t\t\t\t\tthrow new IOException(\"Interrupted 
while waiting for file lock\", ie);\n\t\t\t\t}\n\t\t\t}\n\t\t\ti++;\n\t\t}\n\t\tthrow new IOException(\"Could not get a lock\");\n\t}\n\n\t// throws RuntimeException if file is not valid\n\tstatic void verify(String path, boolean doLock) {\n\t\ttry {\n\t\t\tfinal Path p = Paths.get(path);\n\t\t\tfinal FileChannel channel = FileChannel.open(p, READ);\n\n\t\t\tFileLock lock = null;\n\t\t\tif (doLock)\n\t\t\t\tlock = tryLockWait(channel, true);\n\n\t\t\tfinal long size = channel.size();\n\n\t\t\tfinal int[] index = new int[2 * indexPairs];\n\t\t\tByteBuffer buffer = ByteBuffer.allocate(index.length * 4);\n\t\t\tchannel.read(buffer, size - 4 * index.length);\n\t\t\tbuffer.position(0);\n\t\t\tbuffer.asIntBuffer().get(index);\n\n\t\t\tfor (int i = 0; i < indexPairs; i++) {\n\t\t\t\tfinal int offset = index[2 * i];\n\t\t\t\tfinal int expected = index[2 * i + 1];\n\t\t\t\tbuffer = ByteBuffer.allocate(4);\n\t\t\t\tchannel.read(buffer, offset * 4);\n\t\t\t\tbuffer.position(0);\n\t\t\t\tfinal int actual = buffer.asIntBuffer().get(0);\n\t\t\t\tif (actual != expected)\n\t\t\t\t\tthrow new RuntimeException(\"verify failed\");\n\t\t\t}\n\n\t\t\tif (lock != null)\n\t\t\t\tlock.release();\n\n\t\t\tchannel.close();\n\t\t} catch (IOException e) {\n\t\t\tthrow new RuntimeException(e);\n\t\t}\n\t}\n\n\n\tpublic static void main(String[] args) throws InterruptedException {\n\t\t\n\t\t// Repeatedly calls write, then verify using the file at path.\n\t\t// Sleeps for sleepTime(default=0) ms in between the write and verify calls.\n\t\tfinal Random random = new Random();\n\t\tfinal String path = args[0];\n\t\tfinal int N = Integer.parseInt(args[1]);\n\n\t\tlong sleepTime = 0;\n\t\tif( args.length > 2)\n\t\t{\n\t\t\tif( args[2].startsWith(\"rand\"))\n\t\t\t\tsleepTime = -1;\n\t\t\telse\n\t\t\t\tsleepTime = Long.parseLong(args[2]);\n\t\t}\n\t\tSystem.out.println(\"sleep Time: \" + ((sleepTime < 0 ) ? 
\"random\" : sleepTime));\n\n\t\tboolean doLock = true;\n\t\tif( args.length > 3) {\n\t\t\tdoLock = args[3].equals(\"lock\");\n\t\t}\n\t\tSystem.out.println(\"do lock: \" + doLock);\n\n\t\tfor( int i = 0; i < N; i++ ) {\n\t\t\twrite(path, doLock, random);\n\n\t\t\tif (sleepTime < 0)\n\t\t\t\tThread.sleep(random.nextInt(200));\n\t\t\telse\n\t\t\t\tThread.sleep(sleepTime);\n\n\t\t\tverify(path, doLock);\n\t\t}\n\t}\n}"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/locking/JustFileChannelsThreaded.java",
    "content": "package org.janelia.saalfeldlab.n5.locking;\n\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.TimeUnit;\n\npublic class JustFileChannelsThreaded {\n\n\tpublic static void main(String[] args) throws InterruptedException {\n\n\t\tfinal int nThreads = Integer.parseInt(args[0]);\n\t\tfinal String[] subArgs = new String[args.length - 1];\n\t\tSystem.arraycopy(args, 1, subArgs, 0, args.length - 1);\n\n\t\tfinal ExecutorService exec = Executors.newFixedThreadPool(nThreads);\n\t\tfor( int i = 0; i < nThreads; i++ ) {\n\t\t\texec.submit( () -> {\n\t\t\t\ttry {\n\t\t\t\t\tThread.sleep(200);\n\t\t\t\t\tJustFileChannels.main(subArgs);\n\t\t\t\t} catch (Exception e) {\n\t\t\t\t\te.printStackTrace();\n\t\t\t\t}\n\t\t\t});\n\t\t}\n\t\texec.awaitTermination(5, TimeUnit.MINUTES);\n\t}\n}"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/readdata/RangeTests.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertSame;\n\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\n\nimport org.junit.Test;\n\npublic class RangeTests {\n\n\t@Test\n\tpublic void testAggregate() {\n\n\t\tList<Range> nonOverlapping = Stream.of(Range.at(0, 5), Range.at(12, 2)).collect(Collectors.toList());\n\t\tassertSame(nonOverlapping, Range.aggregate(nonOverlapping));\n\n\t\tList<Range> nonOverlappingRev = Stream.of(Range.at(12, 2), Range.at(0, 5)).collect(Collectors.toList());\n\t\tassertSame(nonOverlappingRev, Range.aggregate(nonOverlappingRev));\n\n\t\t/**\n\t\t * 0 1 2 3 4 5 \n\t\t * x x x x x - \n\t\t * - x - - - -\n\t\t */\n\t\tList<Range> containing = Stream.of(Range.at(0, 5), Range.at(1, 1)).collect(Collectors.toList());\n\t\tassertEquals(Collections.singletonList(Range.at(0, 5)), Range.aggregate(containing));\n\n\t\t/**\n\t\t * 0 1 2 3 4 5 6\n\t\t * x x x x x - -\n\t\t * - - x x x x x\n\t\t */\n\t\tList<Range> overlapping = Stream.of(Range.at(0, 5), Range.at(2, 5)).collect(Collectors.toList());\n\t\tassertEquals(Collections.singletonList(Range.at(0, 7)), Range.aggregate(overlapping));\n\n\t\t/**\n\t\t * 0 1 2 3 4 5\n\t\t * x x x x x -\n\t\t * - - - - - x\n\t\t */\n\t\tList<Range> adjacent = Stream.of(Range.at(0, 5), Range.at(5, 1)).collect(Collectors.toList());\n\t\tassertEquals(Collections.singletonList(Range.at(0, 6)), Range.aggregate(adjacent));\n\n\t\t/**\n\t\t * 0 1 2 3 4 5\n\t\t * - - - x x -\n\t\t * x x - - - -\n\t\t * - - - - - x\n\t\t */\n\t\tList<Range> three = Stream.of(Range.at(3, 2), Range.at(0, 2), Range.at(5, 1)).collect(Collectors.toList());\n\t\tassertEquals(Stream.of(Range.at(0, 2), Range.at(3, 3)).collect(Collectors.toList()), Range.aggregate(three));\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/readdata/ReadDataTests.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertThrows;\nimport static org.junit.Assert.assertTrue;\n\nimport java.io.File;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.util.Arrays;\nimport java.util.function.IntUnaryOperator;\n\nimport org.apache.commons.compress.utils.IOUtils;\nimport org.janelia.saalfeldlab.n5.FileSystemKeyValueAccess;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData.OutputStreamOperator;\nimport org.junit.Test;\n\npublic class ReadDataTests {\n\n\t@Test\n\tpublic void testLazyReadData() throws IOException {\n\n\t\tfinal int N = 128;\n\t\tbyte[] data = new byte[N];\n\t\tfor( int i = 0; i < N; i++ )\n\t\t\tdata[i] = (byte)i;\n\n\t\tfinal ReadData readData = ReadData.from(out -> {\n\t\t\tout.write(data);\n\t\t});\n\t\tassertTrue(readData instanceof LazyGeneratedReadData);\n\n\t\treadDataTestHelper(readData, -1, N);\n\t\tsliceTestHelper(readData, N);\n\t}\n\n\t@Test\n\tpublic void testByteArrayReadData() throws IOException {\n\n\t\tfinal int N = 128;\n\t\tbyte[] data = new byte[N];\n\t\tfor( int i = 0; i < N; i++ )\n\t\t\tdata[i] = (byte)i;\n\n\t\tReadData readData = ReadData.from(data).materialize();\n\t\tassertTrue(readData instanceof ByteArrayReadData);\n\n\t\treadDataTestHelper(readData, N, N);\n\t\treadDataTestEncodeHelper(readData, N);\n\t\tsliceTestHelper(readData, N);\n\t}\n\n\t@Test\n\tpublic void testInputStreamReadData() throws IOException {\n\n\t\tfinal int N = 128;\n\t\tfinal InputStream is = new InputStream() {\n\t\t\tint val = 0;\n\t\t\t@Override\n\t\t\tpublic int read() throws IOException {\n\t\t\t\treturn val++;\n\t\t\t}\n\t\t};\n\n\t\tfinal ReadData readData = ReadData.from(is, N);\n\t\treadDataTestHelper(readData, N, N);\n\t\tsliceTestHelper(readData, N);\n\t}\n\n\t@Test\n\tpublic void 
testFileKvaReadData() throws IOException {\n\n\t\tint N = 128;\n\t\tbyte[] data = new byte[N];\n\t\tfor( int i = 0; i < N; i++ )\n\t\t\tdata[i] = (byte)i;\n\n\t\tfinal File tmpF = File.createTempFile(\"test-file-splittable-data\", \".bin\");\n\t\ttmpF.deleteOnExit();\n\t\ttry (FileOutputStream os = new FileOutputStream(tmpF)) {\n\t\t\tos.write(data);\n\t\t}\n\n\t\ttry( final VolatileReadData readData = new FileSystemKeyValueAccess()\n\t\t\t\t.createReadData(tmpF.getAbsolutePath())) {\n\n\t\t\tassertEquals(\"file read data length\", -1, readData.length());\n\t\t\tassertEquals(\"file read data length\", 128, readData.requireLength());\n\t\t\tsliceTestHelper(readData, N);\n\t\t}\n\t}\n\n\tprivate void readDataTestHelper( ReadData readData, int N, int materializedN ) throws IOException {\n\n\t\tassertEquals(\"full length\", N, readData.length());\n\t\tassertEquals(\"full length after materialize\", materializedN, readData.materialize().length());\n\t}\n\n\tprivate void readDataTestEncodeHelper( ReadData readData, int N ) throws IOException {\n\n\t\tfinal byte[] origCopy = new byte[N];\n\t\tIOUtils.readFully(readData.inputStream(), origCopy);\n\n\t\tfinal byte[] expected = Arrays.copyOf(origCopy, N);\n\t\tfor( int i = 0; i < expected.length; i++)\n\t\t\texpected[i]+=2;\n\n\t\tfinal ReadData encoded = readData.encode(new ByteFun(x -> x+2));\n\t\tassertArrayEquals(expected, encoded.allBytes());\n\n\t\tfinal ReadData encodedTwice = encoded.encode(new ByteFun(x -> x-2));\n\t\tassertArrayEquals(origCopy, encodedTwice.allBytes());\n\t}\n\n\tprivate void sliceTestHelper( ReadData readData, int N ) throws IOException {\n\n\t\tassertEquals(\"length one\", 1, readData.slice(9, 1).length());\n\n\t\tassertEquals(\"split length zero\", 0, readData.slice(9, 0).length());\n\t\tassertEquals(\"split length zero allBytes\", 0, readData.slice(9, 0).allBytes().length);\n\n\t\tReadData limited = readData.limit(2);\n\t\tassertEquals(2, limited.length());\n\n\t\tReadData unboundedLength = 
readData.slice(1, -1);\n\t\tassertEquals(\"unbounded length allBytes\", N - 1, unboundedLength.allBytes().length);\n\n\t\t// slice may throw an exception if it knows its length and can detect out-of-bounds;\n\t\t// otherwise the exception may be thrown on a read operation (e.g. allBytes)\n\t\tassertThrows(\"Out-of-range slice read\", IndexOutOfBoundsException.class, () -> readData.slice(N-1, 3).allBytes());\n\t\tassertThrows(\"slice throws if offset too large\", IndexOutOfBoundsException.class, () -> readData.slice(N, 0).allBytes());\n\n\t\tassertThrows(\"negative offset\", IndexOutOfBoundsException.class, () -> readData.slice(-1, 1));\n\t}\n\n\tprivate static class ByteFun implements OutputStreamOperator {\n\n\t\tIntUnaryOperator fun;\n\t\tpublic ByteFun(IntUnaryOperator fun) {\n\t\t\tthis.fun = fun;\n\t\t}\n\n\t\t@Override\n\t\tpublic OutputStream apply(OutputStream o) throws IOException {\n\t\t\treturn new OutputStream() {\n\t\t\t\t@Override\n\t\t\t\tpublic void write(int b) throws IOException {\n\t\t\t\t\to.write(fun.applyAsInt(b));\n\t\t\t\t}\n\t\t\t};\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/readdata/prefetch/SliceTrackingLazyReadTests.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.prefetch;\n\nimport static org.junit.Assert.assertEquals;\n\nimport java.io.IOException;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.LazyRead;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.junit.Test;\n\npublic class SliceTrackingLazyReadTests {\n\n\t@Test\n\tpublic void testDefaultSliceTracking() throws N5IOException {\n\n\t\t/**\n\t\t * 1. Create sample ReadData from byte[]\n\t\t * 2. Create a DummyLazy Read\n\t\t * 3. Make a DefaultSliceTrackingLazyRead\n\t\t * 4. Create a list of ranges\n\t\t * 5. Call prefetch\n\t\t * 6. Ensure the correct number of materialize calls were made (always 1 for DefaultSliceTrackingLazyRead)\n\t\t * 7. Verify the stored slices contain the correct range\n\t\t */\n\n\t\t// 1-2. Create a DummyLazyRead with 64 bytes\n\t\tDummyLazyRead dummyLazyRead = createDummyLazyRead(64);\n\n\t\t// 3. Make a testable DefaultSliceTrackingLazyRead\n\t\tTestableDefaultSliceTracker sliceTracking = new TestableDefaultSliceTracker(dummyLazyRead);\n\n\t\t// 4. Create a list of ranges (two non-overlapping ranges with a gap)\n\t\tList<Range> ranges = Arrays.asList(\n\t\t\tRange.at(10, 5),   // offset 10, length 5 (bytes 10-14)\n\t\t\tRange.at(50, 10)   // offset 50, length 10 (bytes 50-59)\n\t\t);\n\n\t\t// 5. Call prefetch\n\t\tsliceTracking.prefetch(ranges);\n\n\t\t// 6. Ensure exactly 1 materialize call was made\n\t\t// DefaultSliceTrackingLazyRead creates a single large slice covering all ranges\n\t\tassertEquals(\"DefaultSliceTrackingLazyRead should make exactly 1 materialize call\",\n\t\t\t1, dummyLazyRead.getNumMaterializeCalls());\n\n\t\t// 7. 
Verify the stored slice covers the entire range from offset 10 with length 50\n\t\tassertStoredSlices(sliceTracking, Arrays.asList(\n\t\t\tRange.at(10, 50)   // Single slice covering offset 10-59\n\t\t));\n\n\t}\n\n\t@Test\n\tpublic void testAggregatingSliceTracking() throws N5IOException {\n\n\t\t/**\n\t\t * 1. Create sample ReadData from byte[]\n\t\t * 2. Create a DummyLazyRead\n\t\t * 3. Make an AggregatingSliceTrackingLazyRead\n\t\t * 4. Create a list of ranges\n\t\t * 5. Call prefetch\n\t\t * 6. Ensure the correct number of materialize calls were made (one per aggregated range)\n\t\t * 7. Verify the stored slices contain the correct ranges\n\t\t */\n\n\t\t// 1-2. Create a DummyLazyRead with 64 bytes\n\t\tDummyLazyRead dummyLazyRead = createDummyLazyRead(64);\n\n\t\t// 3. Make a testable AggregatingSliceTrackingLazyRead\n\t\tTestableAggregatingSliceTracker sliceTracking = new TestableAggregatingSliceTracker(dummyLazyRead);\n\n\t\t/*\n\t\t * Non-adjacent ranges\n\t\t */\n\t\t// 4. Create a list of ranges (two non-overlapping ranges with a gap)\n\t\tList<Range> ranges = Arrays.asList(\n\t\t\tRange.at(10, 5),   // offset 10, length 5 (bytes 10-14)\n\t\t\tRange.at(50, 10)   // offset 50, length 10 (bytes 50-59)\n\t\t);\n\n\t\t// 5. Call prefetch\n\t\tsliceTracking.prefetch(ranges);\n\n\t\t// 6. Ensure exactly 2 materialize calls were made\n\t\t// AggregatingSliceTrackingLazyRead aggregates overlapping/adjacent ranges\n\t\t// Since these ranges are not adjacent or overlapping, it makes 2 separate calls\n\t\tassertEquals(\"AggregatingSliceTrackingLazyRead should make 2 materialize calls for non-adjacent ranges\",\n\t\t\t2, dummyLazyRead.getNumMaterializeCalls());\n\n\t\t// 7. 
Verify the stored slices contain two separate ranges\n\t\tassertStoredSlices(sliceTracking, Arrays.asList(\n\t\t\tRange.at(10, 5),   // First slice\n\t\t\tRange.at(50, 10)   // Second slice\n\t\t));\n\n\t\t/*\n\t\t * Adjacent ranges\n\t\t */\n\n\t\t// new sliceTracking instance to clear slices\n\t\tsliceTracking = new TestableAggregatingSliceTracker(dummyLazyRead);\n\t\tdummyLazyRead.resetNumMaterializeCalls();\n\n\t\t// 4. Create a list of three contiguous ranges\n\t\tList<Range> adjacentRanges = Arrays.asList(\n\t\t\tRange.at(10, 5),   // offset 10, length 5 (bytes 10-14)\n\t\t\tRange.at(15, 10),  // offset 15, length 10 (bytes 15-24)\n\t\t\tRange.at(25, 5)    // offset 25, length 5 (bytes 25-29)\n\t\t);\n\n\t\t// 5. Call prefetch\n\t\tsliceTracking.prefetch(adjacentRanges);\n\n\t\t// 6. Ensure exactly 1 materialize call was made\n\t\t// AggregatingSliceTrackingLazyRead should aggregate these three contiguous ranges\n\t\t// into a single range from offset 10 to 30, with length 20\n\t\tassertEquals(\"AggregatingSliceTrackingLazyRead should make 1 materialize call for contiguous ranges\",\n\t\t\t1, dummyLazyRead.getNumMaterializeCalls());\n\n\t\t// 7. 
Verify the stored slices contain only the single aggregated range\n\t\t// (the tracker was recreated before this prefetch, so the earlier slices are gone)\n\t\tassertStoredSlices(sliceTracking, Arrays.asList(\n\t\t\tRange.at(10, 20)  // Aggregated range\n\t\t));\n\n\t}\n\n\tprivate static DummyLazyRead createDummyLazyRead(int size) {\n\t\tbyte[] data = new byte[size];\n\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\tdata[i] = (byte)i;\n\t\t}\n\t\treturn new DummyLazyRead(ReadData.from(data));\n\t}\n\n\t/**\n\t * Helper method to verify that stored slices match expected ranges.\n\t *\n\t * @param sliceTracking the SliceTrackingLazyRead instance\n\t * @param expectedRanges the expected ranges stored in slices\n\t */\n\tprivate static void assertStoredSlices(TestableSliceTracker sliceTracking, List<Range> expectedRanges) {\n\t\t// Access protected slices field via a test helper\n\t\tList<Range> actualSlices = sliceTracking.getSlices();\n\n\t\tassertEquals(\"Number of stored slices should match\", expectedRanges.size(), actualSlices.size());\n\n\t\tfor (int i = 0; i < expectedRanges.size(); i++) {\n\t\t\tRange expected = expectedRanges.get(i);\n\t\t\tRange actual = actualSlices.get(i);\n\t\t\tassertEquals(\"Slice \" + i + \" offset should match\", expected.offset(), actual.offset());\n\t\t\tassertEquals(\"Slice \" + i + \" length should match\", expected.length(), actual.length());\n\t\t}\n\t}\n\n\t/**\n\t * Testable wrapper for EnclosingPrefetchLazyRead that exposes slices.\n\t */\n\tstatic class TestableDefaultSliceTracker extends EnclosingPrefetchLazyRead implements TestableSliceTracker {\n\t\tpublic TestableDefaultSliceTracker(LazyRead delegate) {\n\t\t\tsuper(delegate);\n\t\t}\n\n\t\t@Override\n\t\tpublic List<Range> getSlices() {\n\t\t\treturn java.util.Collections.unmodifiableList(slices);\n\t\t}\n\t}\n\n\t/**\n\t * Testable wrapper for AggregatingPrefetchLazyRead that exposes slices.\n\t */\n\tstatic class TestableAggregatingSliceTracker extends 
AggregatingPrefetchLazyRead implements TestableSliceTracker {\n\t\tpublic TestableAggregatingSliceTracker(LazyRead delegate) {\n\t\t\tsuper(delegate);\n\t\t}\n\n\t\t@Override\n\t\tpublic List<Range> getSlices() {\n\t\t\treturn java.util.Collections.unmodifiableList(slices);\n\t\t}\n\t}\n\n\t/**\n\t * Interface for testable slice trackers.\n\t */\n\tinterface TestableSliceTracker {\n\t\tList<Range> getSlices();\n\t}\n\n\tstatic class DummyLazyRead implements LazyRead {\n\n\t\tprivate ReadData data;\n\t\tprivate int numMaterializeCalls = 0;\n\n\t\tpublic DummyLazyRead( ReadData data ) {\n\t\t\tthis.data = data;\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() throws IOException {\n\t\t\t// no op\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData materialize(long offset, long length) throws N5IOException {\n\n\t\t\tnumMaterializeCalls++;\n\t\t\treturn data.slice(offset, length).materialize();\n\t\t}\n\n\t\t@Override\n\t\tpublic long size() throws N5IOException {\n\n\t\t\treturn data.length();\n\t\t}\n\n\t\tpublic int getNumMaterializeCalls() {\n\n\t\t\treturn numMaterializeCalls;\n\t\t}\n\n\t\tpublic void resetNumMaterializeCalls() {\n\n\t\t\tnumMaterializeCalls = 0;\n\t\t}\n\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/readdata/prefetch/SlicesTest.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.prefetch;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\n\n\npublic class SlicesTest {\n\n\tprivate List<Range> createSlices(final long[] offsets, final long[] lengths) {\n\t\tfinal List<Range> slices = new ArrayList<>();\n\t\tfor (int i = 0; i < offsets.length; ++i) {\n\t\t\tslices.add(Range.at(offsets[i], lengths[i]));\n\t\t}\n\t\treturn slices;\n\t}\n\n\t@Test\n\tpublic void testFindContaining() {\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (2,6)    [-----------]\n\t\t// (6,4)            [---------]\n\t\t// (8,6)                [-----------]\n\n\t\tfinal List<Range> slices = createSlices(\n\t\t\t\tnew long[] {2, 6, 8},\n\t\t\t\tnew long[] {6, 4, 6});\n\t\tRange slice;\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (1,1)  [-]\n\t\tslice = Slices.findContainingSlice(slices, 1, 1);\n\t\tassertEquals(null, slice);\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (2,1)    [-]\n\t\tslice = Slices.findContainingSlice(slices, 2, 1);\n\t\tassertEquals(2, slice.offset());\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (2,6)    [-----------]\n\t\tslice = Slices.findContainingSlice(slices, 2, 6);\n\t\tassertEquals(2, slice.offset());\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (2,7)    [-------------]\n\t\tslice = Slices.findContainingSlice(slices, 2, 7);\n\t\tassertEquals(null, slice);\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (6,4)            [-------]\n\t\tslice = Slices.findContainingSlice(slices, 6, 4);\n\t\tassertEquals(6, slice.offset());\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (8,2)                [---]\n\t\tslice = Slices.findContainingSlice(slices, 8, 2);\n\t\tassertEquals(8, slice.offset());\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (12,2)                       [---]\n\t\tslice = 
Slices.findContainingSlice(slices, 12, 2);\n\t\tassertEquals(8, slice.offset());\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (12,3)                       [-----]\n\t\tslice = Slices.findContainingSlice(slices, 12, 3);\n\t\tassertEquals(null, slice);\n\n\t\t//       0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (14,1)                           [-]\n\t\tslice = Slices.findContainingSlice(slices, 14, 1);\n\t\tassertEquals(null, slice);\n\t}\n\n\n\t@Test\n\tpublic void testAddSlice() {\n\n\t\t//        0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (2,6)     [-----------]\n\t\t// (6,4)             [---------]\n\t\t// (8,6)                 [-----------]\n\t\tfinal List<Range> initial = createSlices(\n\t\t\t\tnew long[] {2, 6, 8},\n\t\t\t\tnew long[] {6, 4, 6});\n\t\tList<Range> slices;\n\n\n\t\tslices = new ArrayList<>(initial);\n\t\tSlices.addSlice(slices, Range.at(0, 1));\n\t\t//        0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (0,1) [-]\n\t\t// (2,6)     [-----------]\n\t\t// (6,4)             [---------]\n\t\t// (8,6)                 [-----------]\n\t\tassertEquals(createSlices(\n\t\t\t\tnew long[] {0, 2, 6, 8},\n\t\t\t\tnew long[] {1, 6, 4, 6}), slices);\n\n\n\t\tslices = new ArrayList<>(initial);\n\t\tSlices.addSlice(slices, Range.at(0, 16));\n\t\t//        0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (0,16)[-------------------------------]\n\t\tassertEquals(createSlices(\n\t\t\t\tnew long[] {0},\n\t\t\t\tnew long[] {16}), slices);\n\n\n\t\tslices = new ArrayList<>(initial);\n\t\tSlices.addSlice(slices, Range.at(2, 8));\n\t\t//        0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (2,8)     [-----------------]\n\t\t// (8,6)                 [-----------]\n\t\tassertEquals(createSlices(\n\t\t\t\tnew long[] {2, 8},\n\t\t\t\tnew long[] {8, 6}), slices);\n\n\n\t\tslices = new ArrayList<>(initial);\n\t\tSlices.addSlice(slices, Range.at(1, 10));\n\t\t//        0 1 2 3 4 5 6 7 8 9 A B C D E F\n\t\t// (1,10)  [---------------------]\n\t\t// (8,6)                 
[-----------]\n\t\tassertEquals(createSlices(\n\t\t\t\tnew long[] {1, 8},\n\t\t\t\tnew long[] {10, 6}), slices);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/readdata/segment/ConcatenatedReadDataTest.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.segment;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.segment.SegmentedReadData.SegmentsAndData;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\n\npublic class ConcatenatedReadDataTest {\n\n\tprivate final byte[] data = new byte[100];\n\n\t@Before\n\tpublic void fillData() {\n\n\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\tdata[i] = (byte) i;\n\t\t}\n\t}\n\n\t/**\n\t * Create a SegmentedReadData with segments at (10,l=30), (0,l=10), and (40,l=20).\n\t * The returned ReadData knows its length.\n\t * <p>\n\t * <pre>\n\t * [0.....10.....20.....30.....40.....50.....60.....70.....80.....90.....]\n\t * [(-s1-)(---------s0--------)(-----s2-----)............................]\n\t * </pre>\n\t */\n\tprivate SegmentsAndData createKnownLength() {\n\n\t\treturn SegmentedReadData.wrap(\n\t\t\t\tReadData.from(data),\n\t\t\t\tRange.at(10, 30),\n\t\t\t\tRange.at(0, 10),\n\t\t\t\tRange.at(40, 20));\n\t}\n\n\t/**\n\t * Create a SegmentedReadData with one segment spanning it completely.\n\t * The returned ReadData doesn't know its length.\n\t * <p>\n\t * <pre>\n\t * [0.....10.....20.....30.....40.....50.....60.....70.....80.....90.....]\n\t * [(------------------------------s3-----------------------------------)]\n\t * </pre>\n\t */\n\tprivate SegmentsAndData createUnknownLength() {\n\n\t\tfinal SegmentedReadData srd = SegmentedReadData.wrap(\n\t\t\t\tReadData.from(\n\t\t\t\t\t\tnew ByteArrayInputStream(data)));\n\n\t\treturn new SegmentsAndData() {\n\n\t\t\t@Override\n\t\t\tpublic List<Segment> segments() {\n\t\t\t\treturn 
Collections.singletonList(srd.segments().get(0));\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic SegmentedReadData data() {\n\t\t\t\treturn srd;\n\t\t\t}\n\t\t};\n\t}\n\n\n\t/**\n\t * Create slices of known length and unknown length SegmentedReadData, and concatenate.\n\t * Take slice (0,l=40)\n\t * <p>\n\t * <pre>\n\t *\n\t * KNOWN_LENGTH:\n\t * [0.....10.....20.....30.....40.....50.....60.....70.....80.....90.....]\n\t * [(-s1-)(---------s0--------)(-----s2-----)............................]\n\t *\n\t * SLICE_0\n\t * [(-s1-)(---------s0--------)]\n\t *\n\t *                            SLICE_1\n\t *                            [(-----s2-----)]\n\t *\n\t * UNKNOWN_LENGTH:\n\t * [0.....10.....20.....30.....40.....50.....60.....70.....80.....90.....]\n\t * [(------------------------------s3-----------------------------------)]\n\t *\n\t * CONCATENATED:\n\t * [-------- SLICE_0 ----------][- UNKNOWN_LENGTH -][-- SLICE_1 ---]\n\t * [(-s1-)(---------s0--------)][(--...--s3--...--)][(-----s2-----)]\n\t *\n\t * </pre>\n\t */\n\tprivate SegmentsAndData createConcatenate() {\n\n\t\tfinal SegmentsAndData segmentsAndData0 = createKnownLength();\n\t\tfinal SegmentedReadData r0 = segmentsAndData0.data();\n\n\t\tfinal SegmentsAndData segmentsAndData1 = createUnknownLength();\n\t\tfinal SegmentedReadData r1 = segmentsAndData1.data();\n\n\t\tfinal List<Segment> segments = new ArrayList<>();\n\t\tsegments.addAll(segmentsAndData0.segments());\n\t\tsegments.addAll(segmentsAndData1.segments());\n\n\t\tfinal List<SegmentedReadData> datas = new ArrayList<>();\n\t\tdatas.add(r0.slice(0,40));\n\t\tdatas.add(r1);\n\t\tdatas.add(r0.slice(segments.get(2)));\n\t\tfinal SegmentedReadData concatenated = SegmentedReadData.concatenate(datas);\n\n\t\treturn new SegmentsAndData() {\n\n\t\t\t@Override\n\t\t\tpublic List<Segment> segments() {\n\t\t\t\treturn segments;\n\t\t\t}\n\n\t\t\t@Override\n\t\t\tpublic SegmentedReadData data() {\n\t\t\t\treturn concatenated;\n\t\t\t}\n\t\t};\n\t}\n\n\t/**\n\t * Check 
that segments in the concatenated ReadData are at the expected locations.\n\t * The segments should be laid out like this:\n\t * <p>\n\t * <pre>\n\t * CONCATENATED:\n\t * [0.....10.....20.....30.....40...             ...140.....150.....]\n\t * [-------- SLICE_0 ----------][- UNKNOWN_LENGTH -][--- SLICE_1 ---]\n\t * [(-s1-)(---------s0--------)][(--...--s3--...--)][(------s2-----)]\n\t * </pre>\n\t * <p>\n\t */\n\tprivate static void checkSegmentLocations(final SegmentsAndData segmentsAndData) {\n\t\tcheckSegmentRange(segmentsAndData,1, Range.at(0, 10));\n\t\tcheckSegmentRange(segmentsAndData,0, Range.at(10, 30));\n\t\tcheckSegmentRange(segmentsAndData,3, Range.at(40, 100));\n\t\tcheckSegmentRange(segmentsAndData,2, Range.at(140, 20));\n\t}\n\n\tprivate static void checkSegmentRange(final SegmentsAndData data, final int segmentIndex, final Range expectedLocation)\n\t{\n\t\tfinal Range location = data.data().location(data.segments().get(segmentIndex));\n\t\tassertEquals(expectedLocation.offset(), location.offset());\n\t\tassertEquals(expectedLocation.length(), location.length());\n\t}\n\n\t// A concatenated ReadData containing unknown-length parts requires\n\t// either materialize() or writeTo(OutputStream) to make those lengths\n\t// known. 
Otherwise, trying to get segment locations from the\n\t// concatenated ReadData fails with an IllegalStateException.\n\n\t@Test(expected = IllegalStateException.class)\n\tpublic void testConcatenateUnmaterialized() {\n\n\t\tfinal SegmentsAndData segmentsAndData = createConcatenate();\n\t\tfinal SegmentedReadData c = segmentsAndData.data();\n\t\tfinal List<Segment> s = segmentsAndData.segments();\n\t\tSystem.out.println(c.location(s.get(0)));\n\t}\n\n\t// Calling materialize() on the concatenated ReadData makes sure that all\n\t// segment offsets are known.\n\n\t@Test\n\tpublic void testConcatenateMaterialize() {\n\n\t\tfinal SegmentsAndData segmentsAndData = createConcatenate();\n\t\tsegmentsAndData.data().materialize();\n\t\tcheckSegmentLocations(segmentsAndData);\n\t}\n\n\t// Calling writeTo(OutputStream) on the concatenated ReadData makes sure\n\t// that all segment offsets are known.\n\n\t@Test\n\tpublic void testConcatenateWriteTo() {\n\n\t\tfinal SegmentsAndData segmentsAndData = createConcatenate();\n\t\tsegmentsAndData.data().writeTo(new ByteArrayOutputStream());\n\t\tcheckSegmentLocations(segmentsAndData);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/readdata/segment/SegmentTest.java",
    "content": "package org.janelia.saalfeldlab.n5.readdata.segment;\n\nimport java.io.ByteArrayInputStream;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\n\npublic class SegmentTest {\n\n\tprivate ReadData readData;\n\n\tprivate ReadData readDataUnknownLength;\n\n\t@Before\n\tpublic void createReadData() {\n\n\t\tfinal byte[] data = new byte[100];\n\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\tdata[i] = (byte) i;\n\t\t}\n\t\treadData = ReadData.from(data);\n\t\treadDataUnknownLength = ReadData.from(new ByteArrayInputStream(data));\n\t}\n\n\t@Test\n\tpublic void testWrap() {\n\n\t\tfinal Range[] locations = {\n\t\t\t\tRange.at(0, 10),\n\t\t\t\tRange.at(10, 10),\n\t\t\t\tRange.at(40, 20)};\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readData, locations).data();\n\t\tassertEquals(3, r.segments().size());\n\n\t\tfinal Range l0 = r.location(r.segments().get(0));\n\t\tassertEquals(0, l0.offset());\n\t\tassertEquals(10, l0.length());\n\n\t\tfinal Range l1 = r.location(r.segments().get(1));\n\t\tassertEquals(10, l1.offset());\n\t\tassertEquals(10, l1.length());\n\n\t\tfinal Range l2 = r.location(r.segments().get(2));\n\t\tassertEquals(40, l2.offset());\n\t\tassertEquals(20, l2.length());\n\t}\n\n\t@Test\n\tpublic void testWrapOrder() {\n\n\t\tfinal Range[] locations = {\n\t\t\t\tRange.at(10, 10),\n\t\t\t\tRange.at(0, 10),\n\t\t\t\tRange.at(40, 20)};\n\t\tfinal SegmentedReadData.SegmentsAndData segmentsAndData = SegmentedReadData.wrap(readData, locations);\n\t\tfinal SegmentedReadData r = segmentsAndData.data();\n\t\tfinal List<Segment> segments = segmentsAndData.segments();\n\n\t\tassertEquals(3, segments.size());\n\n\t\tfinal Range l0 = r.location(segments.get(0));\n\t\tassertEquals(10, l0.offset());\n\t\tassertEquals(10, l0.length());\n\n\t\tfinal Range l1 = 
r.location(segments.get(1));\n\t\tassertEquals(0, l1.offset());\n\t\tassertEquals(10, l1.length());\n\n\t\tfinal Range l2 = r.location(segments.get(2));\n\t\tassertEquals(40, l2.offset());\n\t\tassertEquals(20, l2.length());\n\t}\n\n\t@Test\n\tpublic void testSlice() {\n\n\t\tfinal Range[] locations = {\n\t\t\t\tRange.at(0, 10),\n\t\t\t\tRange.at(10, 10),\n\t\t\t\tRange.at(40, 20)};\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readData, locations).data();\n\t\tfinal SegmentedReadData s = r.slice(10, 60);\n\t\tassertEquals(60, s.length());\n\n\t\tassertEquals(2, s.segments().size());\n\n\t\tfinal Range l0 = s.location(s.segments().get(0));\n\t\tassertEquals(0, l0.offset());\n\t\tassertEquals(10, l0.length());\n\n\t\tfinal Range l1 = s.location(s.segments().get(1));\n\t\tassertEquals(30, l1.offset());\n\t\tassertEquals(20, l1.length());\n\t}\n\n\t@Test\n\tpublic void testSliceSegment() {\n\n\t\tfinal Range[] locations = {\n\t\t\t\tRange.at(0, 10),\n\t\t\t\tRange.at(10, 10),\n\t\t\t\tRange.at(40, 20)};\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readData, locations).data();\n\t\tfinal SegmentedReadData s = r.slice(r.segments().get(2));\n\t\tassertEquals(20, s.length());\n\n\t\tassertEquals(1, s.segments().size());\n\n\t\tfinal Range l0 = s.location(s.segments().get(0));\n\t\tassertEquals(0, l0.offset());\n\t\tassertEquals(20, l0.length());\n\t}\n\n\t@Test\n\tpublic void testPartialSliceSegment() {\n\n\t\tfinal Range[] locations = {\n\t\t\t\tRange.at(0, 10),\n\t\t\t\tRange.at(10, 10),\n\t\t\t\tRange.at(40, 20)};\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readData, locations).data();\n\t\t// slice (9, 15) covers all of the second segment and part of the first;\n\t\t// only fully contained segments are retained\n\t\tfinal SegmentedReadData s = r.slice(9, 15);\n\t\tassertEquals(15, s.length());\n\n\t\tassertEquals(1, s.segments().size());\n\n\t\tfinal Range l0 = s.location(s.segments().get(0));\n\t\tassertEquals(1, l0.offset());\n\t\tassertEquals(10, l0.length());\n\t}\n\n\t@Test\n\tpublic void testWrapFully() {\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readDataUnknownLength);\n\t\tassertEquals(1, r.segments().size());\n\n\t\tfinal Range l0 = r.location(r.segments().get(0));\n\t\tassertEquals(0, l0.offset());\n\t\tassertEquals(-1, l0.length());\n\n\t\tfinal SegmentedReadData m = r.materialize();\n\t\tfinal Range l0m = r.location(r.segments().get(0));\n\t\tassertEquals(0, l0m.offset());\n\t\tassertEquals(100, l0m.length());\n\n\t\tfinal Range l0m2 = m.location(m.segments().get(0));\n\t\tassertEquals(0, l0m2.offset());\n\t\tassertEquals(100, l0m2.length());\n\t}\n\n\t@Test\n\tpublic void testSliceFullyWrapped() {\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readDataUnknownLength);\n\t\tfinal SegmentedReadData s = r.slice(0, 100);\n\t\tassertEquals(100, s.length());\n\n\t\tassertEquals(1, s.segments().size());\n\n\t\tfinal Range l0 = s.location(s.segments().get(0));\n\t\tassertEquals(0, l0.offset());\n\t\tassertEquals(100, l0.length());\n\n\t\tfinal SegmentedReadData s0 = r.slice(0, 99);\n\t\tassertEquals(99, s0.length());\n\t\tassertEquals(0, s0.segments().size());\n\n\t\tfinal SegmentedReadData s1 = r.slice(1, 99);\n\t\tassertEquals(99, s1.length());\n\t\tassertEquals(0, s1.segments().size());\n\t}\n\n\t@Test\n\tpublic void testSliceSegmentFullyWrapped() {\n\n\t\tfinal SegmentedReadData r = SegmentedReadData.wrap(readDataUnknownLength);\n\t\tfinal SegmentedReadData s = r.slice(r.segments().get(0));\n\t\tassertEquals(-1, s.length());\n\n\t\tassertEquals(1, s.segments().size());\n\n\t\tfinal Range l0 = s.location(s.segments().get(0));\n\t\tassertEquals(0, l0.offset());\n\t\tassertEquals(-1, l0.length());\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/serialization/CodecSerializationTest.java",
    "content": "package org.janelia.saalfeldlab.n5.serialization;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\n\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.GzipCompression;\nimport org.janelia.saalfeldlab.n5.NameConfigAdapter;\nimport org.janelia.saalfeldlab.n5.codec.BytesCodecTests.BitShiftBytesCodec;\nimport org.janelia.saalfeldlab.n5.codec.CodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.IdentityCodec;\nimport org.junit.Before;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\nimport com.google.gson.Gson;\nimport com.google.gson.GsonBuilder;\nimport com.google.gson.JsonArray;\nimport com.google.gson.JsonElement;\nimport com.google.gson.JsonObject;\n\npublic class CodecSerializationTest {\n\n\tprivate Gson gson;\n\n\t@Before\n\tpublic void before() {\n\n\t\tfinal GsonBuilder gsonBuilder = new GsonBuilder();\n\t\tgsonBuilder.registerTypeAdapter(DataType.class, new DataType.JsonAdapter());\n\t\tgsonBuilder.registerTypeHierarchyAdapter(DataCodecInfo.class, NameConfigAdapter.getJsonAdapter(DataCodecInfo.class));\n\t\tgsonBuilder.registerTypeHierarchyAdapter(CodecInfo.class, NameConfigAdapter.getJsonAdapter(CodecInfo.class));\n\t\tgsonBuilder.disableHtmlEscaping();\n\t\tgson = gsonBuilder.create();\n\t}\n\n\t@Test\n\tpublic void testCodecSerialization() {\n\n\t\tfinal IdentityCodec id = new IdentityCodec();\n\t\tfinal JsonObject jsonId = gson.toJsonTree(id).getAsJsonObject();\n\t\tfinal JsonElement expected = gson.fromJson(\"{\\\"name\\\":\\\"id\\\"}\", JsonElement.class);\n\t\tassertEquals(\"identity json\", expected, jsonId.getAsJsonObject());\n\n\t\tfinal BitShiftBytesCodec codec = new BitShiftBytesCodec(3);\n\t\tfinal JsonObject bitShiftJson = gson.toJsonTree(codec).getAsJsonObject();\n\t\tfinal JsonElement expectedBitShift = 
gson.fromJson(\n\t\t\t\t\"{\\\"name\\\":\\\"bitshift\\\",\\\"configuration\\\":{\\\"shift\\\":3}}\",\n\t\t\t\tJsonElement.class);\n\t\tassertEquals(\"bitshift json\", expectedBitShift, bitShiftJson);\n\n\t\tfinal DataCodecInfo deserializedCodecInfo = gson.fromJson(bitShiftJson, DataCodecInfo.class);\n\t\t// Verify deserialized codec\n\t\tassertEquals(\"Deserialized codec should equal original\", codec, deserializedCodecInfo);\n\t}\n\n\t@Test\n\t@Ignore\n\tpublic void testSerializeCodecArray() {\n\n\t\tCodecInfo[] codecs = new CodecInfo[]{\n\t\t\t\tnew IdentityCodec()\n\t\t};\n\t\tJsonArray jsonCodecArray = gson.toJsonTree(codecs).getAsJsonArray();\n\t\tJsonElement expected = gson.fromJson(\n\t\t\t\t\"[{\\\"name\\\":\\\"id\\\"}]\",\n\t\t\t\tJsonElement.class);\n\t\tassertEquals(\"codec array\", expected, jsonCodecArray.getAsJsonArray());\n\n\t\tCodecInfo[] codecsDeserialized = gson.fromJson(expected, CodecInfo[].class);\n\t\tassertEquals(\"codecs length not 1\", 1, codecsDeserialized.length);\n\t\tassertTrue(\"first codec not identity\", codecsDeserialized[0] instanceof IdentityCodec);\n\n\t\tcodecs = new CodecInfo[]{\n\t\t\t\tnew GzipCompression()\n\t\t};\n\t\tjsonCodecArray = gson.toJsonTree(codecs).getAsJsonArray();\n\t\texpected = gson.fromJson(\n\t\t\t\t\"[{\\\"name\\\":\\\"gzip\\\",\\\"configuration\\\":{\\\"level\\\":-1,\\\"useZlib\\\":false}}]\",\n\t\t\t\tJsonElement.class);\n\t\tassertEquals(\"codec array\", expected, jsonCodecArray.getAsJsonArray());\n\n\t\tcodecsDeserialized = gson.fromJson(expected, CodecInfo[].class);\n\t\tassertEquals(\"codecs length not 1\", 1, codecsDeserialized.length);\n\t\tassertTrue(\"second codec not gzip\", codecsDeserialized[0] instanceof GzipCompression);\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/DatasetAccessTest.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.N5Exception;\nimport org.janelia.saalfeldlab.n5.RawCompression;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\n\n\npublic class DatasetAccessTest {\n\n\t// DataBlocks are 3x3x3\n\t// Level 1 shards are 6x6x6 (contain 2x2x2 DataBlocks)\n\t// Level 2 shards are 24x24x24 (contain 4x4x4 Level 1 shards)\n\tprivate final int[] dataBlockSize = {3, 3, 3};\n\tprivate final int[] level1ShardSize = {6, 6, 6};\n\tprivate final int[] level2ShardSize = {24, 24, 24};\n\tprivate final long[] datasetDimensions = {240, 240, 240};\n\n\tprivate DatasetAccess<byte[]> datasetAccess;\n\n\t@Before\n\tpublic void setup() {\n\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new DefaultShardCodecInfo(\n\t\t\t\tdataBlockSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tIndexLocation.END\n\t\t);\n\t\tfinal ShardCodecInfo c2 = new DefaultShardCodecInfo(\n\t\t\t\tlevel1ShardSize,\n\t\t\t\tc1,\n\t\t\t\tnew 
DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tIndexLocation.START\n\t\t);\n\t\tfinal TestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tlevel2ShardSize,\n\t\t\t\tDataType.INT8,\n\t\t\t\tc2,\n\t\t\t\tnew RawCompression()\n\t\t);\n\n\t\tdatasetAccess = attributes.getDatasetAccess();\n\t}\n\n\n\t@Test\n\tpublic void testWriteReadIndividual() {\n\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t// write some blocks, filled with constant values\n\t\tdatasetAccess.writeChunk(store, createDataBlock(dataBlockSize, new long[] {0, 0, 0}, 1));\n\t\tdatasetAccess.writeChunk(store, createDataBlock(dataBlockSize, new long[] {1, 0, 0}, 2));\n\t\tdatasetAccess.writeChunk(store, createDataBlock(dataBlockSize, new long[] {0, 1, 0}, 3));\n\t\tdatasetAccess.writeChunk(store, createDataBlock(dataBlockSize, new long[] {1, 1, 0}, 4));\n\t\tdatasetAccess.writeChunk(store, createDataBlock(dataBlockSize, new long[] {3, 2, 1}, 5));\n\t\tdatasetAccess.writeChunk(store, createDataBlock(dataBlockSize, new long[] {8, 4, 1}, 6));\n\n\t\t// verify that the written blocks can be read back with the correct values\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {0, 0, 0}), true, 1);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {1, 0, 0}), true, 2);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {0, 1, 0}), true, 3);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {1, 1, 0}), true, 4);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {3, 2, 1}), true, 5);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {8, 4, 1}), true, 6);\n\t}\n\n\t@Test\n\tpublic void testWriteReadBulk() {\n\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t// write some blocks, filled with constant values\n\t\tfinal List<long[]> writeGridPositions = Arrays.asList(new long[][] 
{\n\t\t\t\t{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {1, 1, 0}, {3, 2, 1}, {8, 4, 1}\n\t\t});\n\t\tfinal List<DataBlock<byte[]>> writeBlocks = new ArrayList<>();\n\t\tfor (int i = 0; i < writeGridPositions.size(); i++) {\n\t\t\twriteBlocks.add(createDataBlock(dataBlockSize, writeGridPositions.get(i), 1 + i));\n\t\t}\n\t\tdatasetAccess.writeChunks(store, writeBlocks);\n\n\t\t// verify that the written blocks can be read back with the correct values\n\t\tfinal List<long[]> readGridPositions = Arrays.asList(new long[][] {\n\t\t\t\t{1, 0, 0}, {0, 0, 0}, {0, 1, 0}, {2, 4, 2}, {3, 2, 1}, {8, 4, 1}\n\t\t});\n\t\tfinal List<DataBlock<byte[]>> readBlocks = datasetAccess.readChunks(store, readGridPositions);\n\t\tcheckBlock(readBlocks.get(0), true, 2);\n\t\tcheckBlock(readBlocks.get(1), true, 1);\n\t\tcheckBlock(readBlocks.get(2), true, 3);\n\t\tcheckBlock(readBlocks.get(3), false, 4);\n\t\tcheckBlock(readBlocks.get(4), true, 5);\n\t\tcheckBlock(readBlocks.get(5), true, 6);\n\t}\n\n\t@Test\n\tpublic void testDeleteBlock() {\n\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t// write some blocks, filled with constant values\n\t\tfinal List<long[]> writeGridPositions = Arrays.asList(new long[][] {\n\t\t\t\t{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {1, 1, 0}, {3, 2, 1}, {8, 4, 1}\n\t\t});\n\t\tfinal List<DataBlock<byte[]>> writeBlocks = new ArrayList<>();\n\t\tfor (int i = 0; i < writeGridPositions.size(); i++) {\n\t\t\twriteBlocks.add(createDataBlock(dataBlockSize, writeGridPositions.get(i), 1 + i));\n\t\t}\n\t\tdatasetAccess.writeChunks(store, writeBlocks);\n\n\t\t// verify that deleting a block removes it from the shard (while other blocks in the same shard are still present)\n\t\tdatasetAccess.deleteChunk(store, new long[] {0, 0, 0});\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {0, 0, 0}), false, 1);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {1, 0, 0}), true, 2);\n\n\t\t// if a shard becomes empty the corresponding key should be 
deleted\n\t\tassertTrue(keyExists(store, new long[] {1, 0, 0}));\n\t\tdatasetAccess.deleteChunk(store, new long[] {8, 4, 1});\n\t\tassertFalse(keyExists(store, new long[] {1, 0, 0}));\n\n\t\t// deleting a non-existent block should not fail\n\t\tdatasetAccess.deleteChunk(store, new long[] {0, 0, 8});\n\t}\n\n\tprivate boolean keyExists(final PositionValueAccess store, final long[] key) {\n\t\ttry (final VolatileReadData data = store.get(key)) {\n\t\t\tif (data != null) {\n\t\t\t\tdata.requireLength();\n\t\t\t\treturn true;\n\t\t\t}\n\t\t} catch (N5Exception.N5IOException ignored) {\n\t\t}\n\t\treturn false;\n\t}\n\n\t@Test\n\tpublic void testDeleteBlocks() {\n\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t// write some blocks, filled with constant values\n\t\tfinal List<long[]> writeGridPositions = Arrays.asList(new long[][] {\n\t\t\t\t{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {1, 1, 0}, {3, 2, 1}, {8, 4, 1}\n\t\t});\n\t\tfinal List<DataBlock<byte[]>> writeBlocks = new ArrayList<>();\n\t\tfor (int i = 0; i < writeGridPositions.size(); i++) {\n\t\t\twriteBlocks.add(createDataBlock(dataBlockSize, writeGridPositions.get(i), 1 + i));\n\t\t}\n\t\tdatasetAccess.writeChunks(store, writeBlocks);\n\n\t\t// verify that deleting a block removes it from the shard (while other blocks in the same shard are still present)\n\t\tdatasetAccess.deleteChunks(store, Arrays.asList(new long[][] {{0, 0, 0}, {4, 2, 2}, {3, 2, 1}}));\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {0, 0, 0}), false, 1);\n\t\tcheckBlock(datasetAccess.readChunk(store, new long[] {1, 0, 0}), true, 2);\n\n\t\t// if a shard becomes empty the corresponding key should be deleted\n\t\tassertTrue(keyExists(store, new long[] {1, 0, 0}));\n\t\tdatasetAccess.deleteChunks(store, Arrays.asList(new long[][] {{8, 4, 1}}));\n\t\tassertFalse(keyExists(store, new long[] {1, 0, 0}));\n\n\t\t// deleting a non-existent block should not fail\n\t\tdatasetAccess.deleteChunks(store, Arrays.asList(new 
long[] {0, 0, 8}));\n\t}\n\n\tprivate static void checkBlock(final DataBlock<byte[]> dataBlock, final boolean expectedNonNull, final int expectedFillValue) {\n\n\t\tif (expectedNonNull) {\n\t\t\tassertNotNull(\"expected non-null dataBlock\", dataBlock);\n\t\t\tfor (byte b : dataBlock.getData()) {\n\t\t\t\tAssert.assertTrue(\"expected all values to be \" + expectedFillValue, b == (byte) expectedFillValue);\n\t\t\t}\n\t\t} else {\n\t\t\tassertNull(\"expected null dataBlock\", dataBlock);\n\t\t}\n\t}\n\n\tprivate static DataBlock<byte[]> createDataBlock(int[] size, long[] gridPosition, int fillValue) {\n\n\t\tfinal byte[] bytes = new byte[DataBlock.getNumElements(size)];\n\t\tArrays.fill(bytes, (byte) fillValue);\n\t\treturn new ByteArrayDataBlock(size, gridPosition, bytes);\n\t}\n\n\tpublic static class TestDatasetAttributes extends DatasetAttributes {\n\n\t\tpublic TestDatasetAttributes(long[] dimensions, int[] outerBlockSize, DataType dataType, BlockCodecInfo blockCodecInfo,\n\t\t\t\tDataCodecInfo... dataCodecInfos) {\n\n\t\t\tsuper(dimensions, outerBlockSize, dataType, blockCodecInfo, dataCodecInfos);\n\t\t}\n\n\t\t@Override // to make this accessible for the test\n\t\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\t\t\treturn super.getDatasetAccess();\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/NestedGridTest.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertThrows;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.List;\nimport org.janelia.saalfeldlab.n5.shard.Nesting.NestedGrid;\nimport org.junit.Assert;\nimport org.junit.Test;\n\npublic class NestedGridTest {\n\n\tprivate static long absPosition1D(final NestedGrid grid, final int sourcePos, final int targetLevel) {\n\t\treturn grid.absolutePosition(new long[] {sourcePos}, targetLevel)[0];\n\t}\n\n\t@Test\n\tpublic void testValidateInput() {\n\t\tint[][] blockSizes = {{3}, {7}, {11}};\n\t\tassertThrows(IllegalArgumentException.class, () -> new NestedGrid(blockSizes));\n\t}\n\n\t@Test\n\tpublic void testAbsolutePosition() {\n\t\tint[][] blockSizes = {{1}, {3}, {6}, {24}};\n\t\tNestedGrid grid = new NestedGrid(blockSizes);\n\n\t\tAssert.assertEquals(38, absPosition1D(grid, 38, 0));\n\n\t\tAssert.assertEquals(12, absPosition1D(grid, 36, 1));\n\t\tAssert.assertEquals(12, absPosition1D(grid, 37, 1));\n\t\tAssert.assertEquals(12, absPosition1D(grid, 38, 1));\n\n\t\tAssert.assertEquals(6, absPosition1D(grid, 38, 2));\n\t\tAssert.assertEquals(1, absPosition1D(grid, 38, 3));\n\t}\n\n\t@Test\n\tpublic void testAbsolutePositionChunkSize() {\n\t\tint[][] blockSizes = {{10}, {30}, {60}, {240}};\n\t\tNestedGrid grid = new NestedGrid(blockSizes);\n\n\t\tAssert.assertEquals(38, absPosition1D(grid, 38, 0));\n\t\tAssert.assertEquals(12, absPosition1D(grid, 38, 1));\n\t\tAssert.assertEquals(6, absPosition1D(grid, 38, 2));\n\t\tAssert.assertEquals(1, absPosition1D(grid, 38, 3));\n\t}\n\n\tprivate static long relPosition1D(final NestedGrid grid, final int sourcePos, final int targetLevel) {\n\t\treturn grid.relativePosition(new long[] {sourcePos}, 0, targetLevel)[0];\n\t}\n\n\t@Test\n\tpublic void testRelativePosition() {\n\t\tint[][] blockSizes = {{1}, {3}, {6}, {24}};\n\t\tNestedGrid 
grid = new NestedGrid(blockSizes);\n\n\t\tAssert.assertEquals(2, relPosition1D(grid, 38, 0));\n\t\tAssert.assertEquals(0, relPosition1D(grid, 38, 1));\n\t\tAssert.assertEquals(2, relPosition1D(grid, 38, 2));\n\t\tAssert.assertEquals(1, relPosition1D(grid, 38, 3));\n\t}\n\n\t@Test\n\tpublic void testRelativePositionChunkSize() {\n\t\tint[][] blockSizes = {{10}, {30}, {60}, {240}};\n\t\tNestedGrid grid = new NestedGrid(blockSizes);\n\n\t\tAssert.assertEquals(2, relPosition1D(grid, 38, 0));\n\t\tAssert.assertEquals(0, relPosition1D(grid, 38, 1));\n\t\tAssert.assertEquals(2, relPosition1D(grid, 38, 2));\n\t\tAssert.assertEquals(1, relPosition1D(grid, 38, 3));\n\t}\n\n\t@Test\n\tpublic void testNd() {\n\n\t\tint[][] blockSizes = {{5, 7}, {5*3, 7*2}};\n\t\tNestedGrid grid = new NestedGrid(blockSizes);\n\t\tassertArrayEquals(new long[]{1, 2}, grid.absolutePosition(new long[]{1, 2}, 0));\n\t\tassertArrayEquals(new long[]{99, 99}, grid.absolutePosition(new long[]{99, 99}, 0));\n\n\t\tassertArrayEquals(new long[]{0, 0}, grid.absolutePosition(new long[]{0, 1}, 1));\n\t\tassertArrayEquals(new long[]{0, 1}, grid.absolutePosition(new long[]{0, 2}, 1));\n\t\tassertArrayEquals(new long[]{1, 1}, grid.absolutePosition(new long[]{3, 2}, 1));\n\t}\n\n\t@Test\n\tpublic void testNestedPositionOrder() {\n\n\t\t// NestedPosition should be ordered such that positions from a\n\t\t// (sub-)shard are grouped together:\n\t\t// For nested = {X,Y,Z} compare by Z, then Y, then X.\n\t\t// For X = [x,y,z] compare by z, then y, then x. 
(flattening order)\n\n\t\tfinal NestedGrid grid = new NestedGrid(new int[][]{{3,3},{6,6}});\n\n\t\tfinal List<Nesting.NestedPosition> positions = new ArrayList<>();\n\t\tpositions.add(grid.nestedPosition(new long[]{3, 1})); // {[1, 1] / [1, 0]}\n\t\tpositions.add(grid.nestedPosition(new long[]{3, 5})); // {[1, 1] / [1, 2]}\n\t\tpositions.add(grid.nestedPosition(new long[]{2, 2})); // {[0, 0] / [1, 1]}\n\t\tpositions.add(grid.nestedPosition(new long[]{1, 0})); // {[1, 0] / [0, 0]}\n\t\tpositions.add(grid.nestedPosition(new long[]{6, 7})); // {[0, 1] / [3, 3]}\n\t\tpositions.add(grid.nestedPosition(new long[]{0, 1})); // {[0, 1] / [0, 0]}\n\t\tpositions.add(grid.nestedPosition(new long[]{0, 2})); // {[0, 0] / [0, 1]}\n\t\tpositions.add(grid.nestedPosition(new long[]{1, 1})); // {[1, 1] / [0, 0]}\n\t\tpositions.add(grid.nestedPosition(new long[]{5, 0})); // {[1, 0] / [2, 0]}\n\t\tpositions.add(grid.nestedPosition(new long[]{0, 0})); // {[0, 0] / [0, 0]}\n\n\t\tfinal List<Nesting.NestedPosition> orderedPositions = new ArrayList<>();\n\t\torderedPositions.add(grid.nestedPosition(new long[]{0, 0})); // {[0, 0] / [0, 0]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{1, 0})); // {[1, 0] / [0, 0]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{0, 1})); // {[0, 1] / [0, 0]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{1, 1})); // {[1, 1] / [0, 0]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{3, 1})); // {[1, 1] / [1, 0]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{5, 0})); // {[1, 0] / [2, 0]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{0, 2})); // {[0, 0] / [0, 1]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{2, 2})); // {[0, 0] / [1, 1]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{3, 5})); // {[1, 1] / [1, 2]}\n\t\torderedPositions.add(grid.nestedPosition(new long[]{6, 7})); // {[0, 1] / [3, 
3]}\n\n\t\tCollections.sort(positions);\n\t\tAssert.assertEquals(orderedPositions,positions);\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/ShardTest.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.io.File;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.ByteOrder;\nimport java.nio.file.*;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Objects;\nimport java.util.function.Predicate;\nimport java.util.stream.Collectors;\nimport java.util.stream.IntStream;\nimport java.util.stream.Stream;\n\nimport org.janelia.saalfeldlab.n5.*;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\nimport org.junit.After;\nimport org.junit.Assert;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\n\n@RunWith(Parameterized.class)\npublic class ShardTest {\n\n\tprivate static final boolean LOCAL_DEBUG = false;\n\n\tprivate static final N5FSTest tempN5Factory = new N5FSTest() {\n\n\t\t@Override public N5Writer createTempN5Writer() {\n\n\t\t\tif (LOCAL_DEBUG) {\n\t\t\t\tfinal N5Writer writer = new TrackingN5Writer(\"src/test/resources/test.n5\", new FileSystemKeyValueAccess());\n\t\t\t\twriter.remove(\"\"); // Clear old when starting new test\n\t\t\t\treturn writer;\n\t\t\t}\n\n\t\t\tfinal String basePath = new File(tempN5PathName()).toURI().normalize().getPath();\n\t\t\ttry {\n\t\t\t\tString uri = new URI(\"file\", null, basePath, null).toString();\n\t\t\t\treturn new TrackingN5Writer(uri, new FileSystemKeyValueAccess());\n\t\t\t} catch (URISyntaxException e) 
{\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t\treturn null;\n\t\t}\n\n\t\tprivate String tempN5PathName() {\n\n\t\t\ttry {\n\t\t\t\tfinal File tmpFile = Files.createTempDirectory(\"n5-shard-test-\").toFile();\n\t\t\t\ttmpFile.delete();\n\t\t\t\ttmpFile.mkdir();\n\t\t\t\ttmpFile.deleteOnExit();\n\t\t\t\treturn tmpFile.getCanonicalPath();\n\t\t\t} catch (final Exception e) {\n\t\t\t\tthrow new RuntimeException(e);\n\t\t\t}\n\t\t}\n\t};\n\n\t@Parameterized.Parameters(name = \"IndexLocation({0}), Index ByteOrder({1})\")\n\tpublic static Collection<Object[]> data() {\n\n\t\tfinal ArrayList<Object[]> params = new ArrayList<>();\n\t\tfor (IndexLocation indexLoc : IndexLocation.values()) {\n\t\t\tfor (ByteOrder indexByteOrder : new ByteOrder[]{ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN}) {\n\t\t\t\tparams.add(new Object[]{indexLoc, indexByteOrder});\n\t\t\t}\n\t\t}\n\t\treturn params;\n\t}\n\n\t@Parameterized.Parameter()\n\tpublic IndexLocation indexLocation;\n\n\t@Parameterized.Parameter(1)\n\tpublic ByteOrder indexByteOrder;\n\n\t@After\n\tpublic void removeTempWriters() {\n\n\t\ttempN5Factory.removeTempWriters();\n\t}\n\n\tprivate DatasetAttributes getTestAttributes(long[] dimensions, int[] shardSize, int[] chunkSize) {\n\n\t\treturn getTestAttributes(DataType.UINT8, dimensions, shardSize, chunkSize);\n\t}\n\n\tprivate DatasetAttributes getTestAttributes(DataType dataType, long[] dimensions, int[] shardSize, int[] chunkSize) {\n\n\t\tfinal DefaultShardCodecInfo blockCodec = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tnew N5BlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[]{new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[]{new RawCompression()},\n\t\t\t\tindexLocation);\n\n\t\treturn new 
DatasetAttributes(\n\t\t\t\tdimensions,\n\t\t\t\tshardSize,\n\t\t\t\tdataType,\n\t\t\t\tblockCodec);\n\t}\n\n\tprotected DatasetAttributes getTestAttributes() {\n\n\t\treturn getTestAttributes(new long[]{8, 8}, new int[]{4, 4}, new int[]{2, 2});\n\t}\n\n\t@Test\n\tpublic void writeReadChunksTest() {\n\n\t\tfinal N5Writer writer = tempN5Factory.createTempN5Writer();\n\n\t\tfinal DatasetAttributes datasetAttributes = getTestAttributes(\n\t\t\t\tnew long[]{24, 24},\n\t\t\t\tnew int[]{8, 8},\n\t\t\t\tnew int[]{2, 2}\n\t\t);\n\n\t\tfinal String dataset = \"writeReadBlocks\";\n\t\twriter.remove(dataset);\n\t\twriter.createDataset(dataset, datasetAttributes);\n\n\t\tfinal int[] chunkSize = datasetAttributes.getBlockSize();\n\t\tfinal int numElements = chunkSize[0] * chunkSize[1];\n\n\t\tfinal byte[] data = new byte[numElements];\n\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\tdata[i] = (byte)((100) + (10) + i);\n\t\t}\n\n\t\twriter.writeChunks(\n\t\t\t\tdataset,\n\t\t\t\tdatasetAttributes,\n\t\t\t\t/* shard (0, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 1}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 1}, data),\n\n\t\t\t\t/* shard (1, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{4, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{5, 0}, data),\n\n\t\t\t\t/* shard (2, 2) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{11, 11}, data)\n\t\t);\n\n\t\tfinal KeyValueAccess kva = ((N5KeyValueWriter)writer).getKeyValueAccess();\n\n\t\tlong[][] keys = new long[][]{\n\t\t\t\t{0, 0},\n\t\t\t\t{1, 0},\n\t\t\t\t{2, 2}\n\t\t};\n\n\t\tfinal long[][] someUnusedKeys = new long[][]{\n\t\t\t\t{0, 1},\n\t\t\t\t{1, 1},\n\t\t\t\t{1, 2},\n\t\t\t\t{2, 1}\n\t\t};\n\n\t\tensureKeysExist(kva, writer.getURI(), dataset, datasetAttributes, keys);\n\t\tensureKeysDoNotExist(kva, 
writer.getURI(), dataset, datasetAttributes, someUnusedKeys);\n\n\t\tfinal long[][] chunkIndices = new long[][]{{0, 0}, {0, 1}, {1, 0}, {1, 1}, {4, 0}, {5, 0}, {11, 11}};\n\t\tfor (long[] chunkIndex : chunkIndices) {\n\t\t\tfinal DataBlock<?> chunk = writer.readChunk(dataset, datasetAttributes, chunkIndex);\n\t\t\tAssert.assertArrayEquals(\"Read from shard doesn't match\", data, (byte[])chunk.getData());\n\t\t}\n\n\t\tfinal byte[] data2 = new byte[numElements];\n\t\tfor (int i = 0; i < data2.length; i++) {\n\t\t\tdata2[i] = (byte)(10 + i);\n\t\t}\n\n\t\twriter.writeChunks(\n\t\t\t\tdataset,\n\t\t\t\tdatasetAttributes,\n\t\t\t\t/* shard (0, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 0}, data2),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 1}, data2),\n\n\t\t\t\t/* shard (0, 1) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 4}, data2),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 5}, data2),\n\n\t\t\t\t/* shard (2, 2) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{10, 10}, data2));\n\n\t\tlong[][] keys2 = new long[][]{\n\t\t\t\t{0, 0},\n\t\t\t\t{1, 0},\n\t\t\t\t{0, 1},\n\t\t\t\t{2, 2}\n\t\t};\n\n\t\tlong[][] someUnusedKeys2 = new long[][]{\n\t\t\t\t{1, 1},\n\t\t\t\t{1, 2},\n\t\t\t\t{2, 1}\n\t\t};\n\n\t\tensureKeysExist(kva, writer.getURI(), dataset, datasetAttributes, keys2);\n\t\tensureKeysDoNotExist(kva, writer.getURI(), dataset, datasetAttributes, someUnusedKeys2);\n\n\t\tfinal long[][] oldChunkIndices = new long[][]{{0, 1}, {1, 0}, {4, 0}, {5, 0}, {11, 11}};\n\t\tfor (long[] chunkIndex : oldChunkIndices) {\n\t\t\tfinal DataBlock<?> chunk = writer.readChunk(dataset, datasetAttributes, chunkIndex);\n\t\t\tAssert.assertArrayEquals(\"Read from shard doesn't match\", data, (byte[])chunk.getData());\n\t\t}\n\n\t\tfinal long[][] newChunkIndices = new long[][]{{0, 0}, {1, 1}, {0, 4}, {0, 5}, {10, 10}};\n\t\tfinal List<long[]> newChunkIndexList = Arrays.asList(newChunkIndices);\n\t\tfinal 
List<DataBlock<Object>> readChunks = writer.readChunks(dataset, datasetAttributes, newChunkIndexList);\n\t\tfor (int i = 0; i < newChunkIndices.length; i++) {\n\t\t\tfinal long[] chunkIndex = newChunkIndices[i];\n\t\t\tfinal DataBlock<?> chunk = writer.readChunk(dataset, datasetAttributes, chunkIndex);\n\t\t\tAssert.assertArrayEquals(\"Read from shard doesn't match\", data2, (byte[])chunk.getData());\n\t\t\tfinal DataBlock<?> chunkFromReadChunks = readChunks.get(i);\n\t\t\tAssert.assertArrayEquals(\"Read from shard doesn't match\", data2, (byte[])chunkFromReadChunks.getData());\n\t\t}\n\t}\n\n\tprivate void ensureKeysExist(KeyValueAccess kva, URI uri, String dataset,\n\t\t\tDatasetAttributes datasetAttributes, long[][] keys) {\n\n\t\tfor (long[] key : keys) {\n\t\t\tfinal String shard = kva.compose(uri, dataset, datasetAttributes.relativeBlockPath(key));\n\t\t\tAssert.assertTrue(\"Shard at \" + shard + \" does not exist\", kva.exists(shard));\n\t\t}\n\t}\n\n\tprivate void ensureKeysDoNotExist(KeyValueAccess kva, URI uri, String dataset,\n\t\t\tDatasetAttributes datasetAttributes, long[][] keys) {\n\n\t\tfor (long[] key : keys) {\n\t\t\tfinal String shard = kva.compose(uri, dataset, datasetAttributes.relativeBlockPath(key));\n\t\t\tAssert.assertFalse(\"Shard at \" + shard + \" exists but should not.\", kva.exists(shard));\n\t\t}\n\t}\n\n\t@Test\n\tpublic void writeShardDataSizeTest() {\n\n\t\t// note: this test depends on the use of raw compression\n\t\tfinal N5Writer writer = tempN5Factory.createTempN5Writer();\n\n\t\tint numChunksPerShard = 16;\n\t\tfinal int n5HeaderSizeBytes = 12; // mode (2) + ndim (2) + 2 dims (4 * 2)\n\t\tfinal DatasetAttributes attrs = getTestAttributes(\n\t\t\t\tnew long[]{24, 24},\n\t\t\t\tnew int[]{8, 8},\n\t\t\t\tnew int[]{2, 2}\n\t\t);\n\n\t\tfinal String dataset = \"writeBlocksShardSize\";\n\t\twriter.remove(dataset);\n\t\tfinal DatasetAttributes datasetAttributes = writer.createDataset(dataset, attrs);\n\t\tassertTrue(datasetAttributes.isSharded());\n\n\t\tfinal 
KeyValueAccess kva = ((N5KeyValueWriter)writer).getKeyValueAccess();\n\n\t\tfinal int[] chunkSize = datasetAttributes.getChunkSize();\n\t\tfinal int numElements = chunkSize[0] * chunkSize[1];\n\n\t\tfinal byte[] data = new byte[numElements];\n\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\tdata[i] = (byte)((100) + (10) + i);\n\t\t}\n\n\t\t/*\n\t\t * No chunks or shards exist.\n\t\t * Calling readChunks should return a list that is the same length as the requested grid positions,\n\t\t * and every entry should be null.\n\t\t */\n\t\tfinal long[][] newChunkIndices = new long[][]{{0, 0}, {1, 1}, {0, 4}, {0, 5}, {10, 10}};\n\t\tfinal List<DataBlock<Object>> readChunks = writer.readChunks(dataset, datasetAttributes, Arrays.asList(newChunkIndices));\n\t\tassertEquals(newChunkIndices.length, readChunks.size());\n\t\tassertTrue(\"readChunks for empty shard: all chunks null\", readChunks.stream().allMatch(Objects::isNull));\n\n\t\t/*\n\t\t * Now write chunks\n\t\t */\n\t\twriter.writeChunks(\n\t\t\t\tdataset,\n\t\t\t\tdatasetAttributes,\n\t\t\t\t/* shard (0, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 1}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 1}, data),\n\n\t\t\t\t/* shard (1, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{4, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{5, 0}, data),\n\n\t\t\t\t/* shard (2, 2) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{11, 11}, data)\n\t\t);\n\n\t\tfinal int indexSizeBytes = numChunksPerShard * 16; // 16 bytes per chunk: 2 longs (offset, nbytes) * 8 bytes each\n\t\tfinal int chunkDataSizeBytes = numElements + n5HeaderSizeBytes;\n\n\t\t// shard (0,0) has 4 chunks, so it should be this size:\n\t\tlong shard00SizeBytes = indexSizeBytes + 4 * chunkDataSizeBytes;\n\t\tlong shard10SizeBytes = indexSizeBytes + 2 * chunkDataSizeBytes;\n\t\tlong shard22SizeBytes = 
indexSizeBytes + 1 * chunkDataSizeBytes;\n\n\t\tfinal String[][] keys = new String[][]{\n\t\t\t\t{dataset, \"0\", \"0\"},\n\t\t\t\t{dataset, \"1\", \"0\"},\n\t\t\t\t{dataset, \"2\", \"2\"}\n\t\t};\n\n\t\tlong[] shardSizes = new long[]{\n\t\t\t\tshard00SizeBytes,\n\t\t\t\tshard10SizeBytes,\n\t\t\t\tshard22SizeBytes\n\t\t};\n\n\t\tint i = 0;\n\t\tfor (String[] key : keys) {\n\t\t\tfinal String shardPath = kva.compose(writer.getURI(), key);\n\t\t\tAssert.assertEquals(\"shard at \" + shardPath + \" was the wrong size\", shardSizes[i++], kva.size(shardPath));\n\t\t}\n\n\t}\n\n\t@Test\n\tpublic void writeReadChunkTest() {\n\n\t\tfinal GsonKeyValueN5Writer writer = (GsonKeyValueN5Writer)tempN5Factory.createTempN5Writer();\n\t\tfinal DatasetAttributes datasetAttributes = getTestAttributes();\n\n\t\tfinal String dataset = \"writeReadBlock\";\n\t\twriter.remove(dataset);\n\t\twriter.createDataset(dataset, datasetAttributes);\n\n\t\tfinal int[] chunkSize = datasetAttributes.getChunkSize();\n\t\tfinal DataType dataType = datasetAttributes.getDataType();\n\t\tfinal int numElements = 2 * 2;\n\n\t\tfinal HashMap<long[], byte[]> writtenChunks = new HashMap<>();\n\n\t\tfor (int idx1 = 1; idx1 >= 0; idx1--) {\n\t\t\tfor (int idx2 = 1; idx2 >= 0; idx2--) {\n\t\t\t\tfinal long[] gridPosition = {idx1, idx2};\n\t\t\t\tfinal DataBlock<byte[]> chunk = (DataBlock<byte[]>)dataType.createDataBlock(chunkSize, gridPosition, numElements);\n\t\t\t\tbyte[] data = chunk.getData();\n\t\t\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\t\t\tdata[i] = (byte)((idx1 * 100) + (idx2 * 10) + i);\n\t\t\t\t}\n\t\t\t\twriter.writeChunk(dataset, datasetAttributes, chunk);\n\n\t\t\t\tfinal DataBlock<byte[]> readChunk = writer.readChunk(dataset, datasetAttributes, chunk.getGridPosition().clone());\n\t\t\t\tAssert.assertArrayEquals(\"Read from shard doesn't match\", data, readChunk.getData());\n\n\t\t\t\tfor (Map.Entry<long[], byte[]> entry : writtenChunks.entrySet()) {\n\t\t\t\t\tfinal long[] otherGridPosition 
= entry.getKey();\n\t\t\t\t\tfinal byte[] otherData = entry.getValue();\n\t\t\t\t\tfinal DataBlock<byte[]> otherChunk = writer.readChunk(dataset, datasetAttributes, otherGridPosition);\n\t\t\t\t\tAssert.assertArrayEquals(\"Read prior write from shard no longer matches\", otherData, otherChunk.getData());\n\t\t\t\t}\n\n\t\t\t\twrittenChunks.put(gridPosition, data);\n\t\t\t}\n\t\t}\n\t}\n\n\t@Test\n\tpublic void writeReadShardTest() {\n\n\t\ttry (final N5Writer n5 = tempN5Factory.createTempN5Writer()) {\n\n\t\t\tfinal int[] shardSize = new int[] {4, 4};\n\t\t\tfinal int shardN = 16;\n\n\t\t\tfinal int[] chunkSize = new int[] {2, 2};\n\n\t\t\tfinal String dataset = \"writeReadShard\";\n\t\t\tDatasetAttributes attrs = getTestAttributes(DataType.INT32, new long[]{8, 8}, shardSize, chunkSize);\n\n\t\t\tfinal int[] shardData = range(shardN);\n\t\t\tIntArrayDataBlock shard = new IntArrayDataBlock(shardSize, new long[]{0, 0}, shardData);\n\n\t\t\tn5.writeBlock(dataset, attrs, shard);\n\t\t\tDataBlock<int[]> readBlock = n5.readBlock(dataset, attrs, 0, 0);\n\t\t\tassertArrayEquals(shardData, readBlock.getData());\n\n\t\t\t/*\n\t\t\t * The 4x4 shard at (0,0)\n\t\t\t * and the 2x2 chunks it contains\n\t\t\t *\n\t\t\t *  0   1  |  2   3\n\t\t\t *  4   5  |  6   7\n\t\t\t *  ----------------\n\t\t\t *  8   9  | 10  11\n\t\t\t * 12  13  | 14  15\n\t\t\t */\n\n\t\t\tassertArrayEquals(new int[]{0, 1, 4, 5}, (int[])n5.readChunk(dataset, attrs, 0, 0).getData());\n\t\t\tassertArrayEquals(new int[]{2, 3, 6, 7}, (int[])n5.readChunk(dataset, attrs, 1, 0).getData());\n\t\t\tassertArrayEquals(new int[]{8, 9, 12, 13}, (int[])n5.readChunk(dataset, attrs, 0, 1).getData());\n\t\t\tassertArrayEquals(new int[]{10, 11, 14, 15}, (int[])n5.readChunk(dataset, attrs, 1, 1).getData());\n\n\t\t\tn5.deleteChunk(dataset, attrs, new long[]{1, 1});\n\n\t\t\t/*\n\t\t\t * After deleting chunk (1,1)\n\t\t\t *\n\t\t\t *  0   1  |  2   3\n\t\t\t *  4   5  |  6   7\n\t\t\t *  
----------------\n\t\t\t *  8   9  |  0   0\n\t\t\t * 12  13  |  0   0\n\t\t\t */\n\t\t\tfinal DataBlock<int[]> partlyEmptyShard = n5.readBlock(dataset, attrs, 0, 0);\n\t\t\tassertArrayEquals(new int[]{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 12, 13, 0, 0}, partlyEmptyShard.getData());\n\n\t\t\t// Delete the rest of the chunks\n\t\t\tn5.deleteChunks(dataset, attrs,\n\t\t\t\t\tStream.of(new long[] {0,0}, new long[] {1,0}, new long[] {0,1}).collect(Collectors.toList()));\n\n\t\t\tassertNull(n5.readBlock(dataset, attrs, 0, 0));\n\n\t\t\t// write the shard again\n\t\t\tn5.writeBlock(dataset, attrs, shard);\n\n\t\t\t// delete the shard\n\t\t\t// ensure it returns true because the shard exists\n\t\t\tassertTrue(n5.deleteBlock(dataset, attrs, shard.getGridPosition()));\n\n\t\t\t// ensure it returns false when the shard does not exist\n\t\t\tassertFalse(n5.deleteBlock(dataset, attrs, shard.getGridPosition()));\n\n\t\t\t// readBlock must return null for the deleted shard\n\t\t\tassertNull(n5.readBlock(dataset, attrs, shard.getGridPosition()));\n\t\t}\n\t}\n\n\t/**\n\t * Checks how many read calls to the backend are performed for a particular readChunks\n\t * call. 
At this time (Nov 4 2025), one read for the index, and one read per block are performed.\n\t */\n\tpublic void numReadsTest() {\n\n\t\tfinal TrackingN5Writer writer = (TrackingN5Writer)tempN5Factory.createTempN5Writer();\n\n\t\tfinal DatasetAttributes datasetAttributes = getTestAttributes(\n\t\t\t\tnew long[]{24, 24},\n\t\t\t\tnew int[]{8, 8},\n\t\t\t\tnew int[]{2, 2}\n\t\t);\n\n\t\tfinal String dataset = \"writeReadBlocks\";\n\t\twriter.remove(dataset);\n\t\twriter.createDataset(dataset, datasetAttributes);\n\n\t\tfinal int[] chunkSize = datasetAttributes.getChunkSize();\n\t\tfinal int numElements = chunkSize[0] * chunkSize[1];\n\n\t\tfinal byte[] data = new byte[numElements];\n\t\tfor (int i = 0; i < data.length; i++) {\n\t\t\tdata[i] = (byte)((100) + (10) + i);\n\t\t}\n\n\t\twriter.writeChunks(\n\t\t\t\tdataset,\n\t\t\t\tdatasetAttributes,\n\t\t\t\t/* shard (0, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{0, 1}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{1, 1}, data),\n\n\t\t\t\t/* shard (1, 0) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{4, 0}, data),\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{5, 0}, data),\n\n\t\t\t\t/* shard (2, 2) */\n\t\t\t\tnew ByteArrayDataBlock(chunkSize, new long[]{11, 11}, data)\n\t\t);\n\n        writer.resetNumMaterializeCalls();\n        writer.readChunks(dataset, datasetAttributes, Collections.singletonList(new long[] {0,0}));\n\n\t\tArrayList<long[]> ptList = new ArrayList<>();\n\t\tptList.add(new long[] {0, 0});\n\t\tptList.add(new long[] {0, 1});\n\t\tptList.add(new long[] {1, 0});\n\t\tptList.add(new long[] {1, 1});\n\n        writer.resetNumMaterializeCalls();\n        writer.readChunks(dataset, datasetAttributes, ptList);\n\t}\n\n    @Test\n    public void shardExistsTest() {\n\n        final N5Writer writer = 
tempN5Factory.createTempN5Writer();\n\n        final DatasetAttributes datasetAttributes = getTestAttributes(\n                new long[]{24, 24},\n                new int[]{8, 8},\n                new int[]{2, 2}\n        );\n\n        final String dataset = \"shardExists\";\n        writer.remove(dataset);\n        DatasetAttributes attrs = writer.createDataset(dataset, datasetAttributes);\n\n        final int[] chunkSize = datasetAttributes.getChunkSize();\n        final int numElements = chunkSize[0] * chunkSize[1];\n\n        final byte[] data = new byte[numElements];\n        for (int i = 0; i < data.length; i++) {\n            data[i] = (byte)(i);\n        }\n\n        /* write blocks to shards (0,0), (1,0), and (2,2) */\n        writer.writeChunks(\n                dataset,\n                attrs,\n                new ByteArrayDataBlock(chunkSize, new long[]{0, 0}, data),  /* shard (0, 0) */\n                new ByteArrayDataBlock(chunkSize, new long[]{4, 0}, data),  /* shard (1, 0) */\n                new ByteArrayDataBlock(chunkSize, new long[]{11, 11}, data) /* shard (2, 2) */\n        );\n\n        TrackingN5Writer trackingWriter = ((TrackingN5Writer) writer);\n\n        Predicate<long[]> assertShardExistsTracking = (gridPosition) -> {\n            trackingWriter.resetAllTracking();\n            final boolean exists = writer.blockExists(dataset, attrs, gridPosition);\n            assertEquals(\"isFileCheck incremented\", 1, trackingWriter.getNumIsFileCalls());\n            assertEquals(\"No Bytes Read\", 0, trackingWriter.getTotalBytesRead());\n            return exists;\n        };\n\n\n        trackingWriter.resetAllTracking();\n        /* shards that should exist should only check file  */\n        Assert.assertTrue(\"Shard (0,0) should exist\", assertShardExistsTracking.test(new long[]{0, 0}));\n        Assert.assertTrue(\"Shard (1,0) should exist\", assertShardExistsTracking.test(new long[]{1, 0}));\n        Assert.assertTrue(\"Shard (2,2) should 
exist\", assertShardExistsTracking.test(new long[]{2, 2}));\n\n        /* shards that should NOT exist */\n        Assert.assertFalse(\"Shard (0,1) should not exist\", assertShardExistsTracking.test(new long[]{0, 1}));\n        Assert.assertFalse(\"Shard (1,1) should not exist\", assertShardExistsTracking.test(new long[]{1, 1}));\n        Assert.assertFalse(\"Shard (2,0) should not exist\", assertShardExistsTracking.test(new long[]{2, 0}));\n        Assert.assertFalse(\"Shard (0,2) should not exist\", assertShardExistsTracking.test(new long[]{0, 2}));\n    }\n\n\t/**\n\t * Checks how many read calls to the backend are performed for a particular readBlocks\n\t * call. At this time (Jan 4 2026), one read for the index, and one read per block are performed.\n\t */\n\t@Test\n\tpublic void testPartialReadAggregationBehavior() {\n\n        final DatasetAttributes datasetAttributes = getTestAttributes(\n                new long[]{24, 24},\n                new int[]{8, 8},\n                new int[]{2, 2}\n        );\n\n\t\ttry (TrackingN5Writer writer = (TrackingN5Writer)tempN5Factory.createTempN5Writer()) {\n\n\t        final String dataset = \"shardExists\";\n\t        writer.remove(dataset);\n\t        DatasetAttributes attrs = writer.createDataset(dataset, datasetAttributes);\n\n\t        final int[] chunkSize = attrs.getChunkSize();\n\t        final int numElements = chunkSize[0] * chunkSize[1];\n\n\t        final byte[] data = new byte[numElements];\n\t        for (int i = 0; i < data.length; i++) {\n\t            data[i] = (byte)(i);\n\t        }\n\n\t\t\t// four blocks in shard (0,0)\n\t\t\tArrayList<long[]> ptList = new ArrayList<>();\n\t\t\tptList.add(new long[] {0,0});\n\t\t\tptList.add(new long[] {0,1});\n\t\t\tptList.add(new long[] {1,0});\n\t\t\tptList.add(new long[] {1,1});\n\n\t        /* write blocks to shard (0,0) */\n\t\t\twriter.writeChunks(\n\t\t\t\t\tdataset,\n\t\t\t\t\tdatasetAttributes,\n\t\t\t\t\tnew ByteArrayDataBlock(chunkSize, ptList.get(0), 
data),\n\t\t\t\t\tnew ByteArrayDataBlock(chunkSize, ptList.get(1), data),\n\t\t\t\t\tnew ByteArrayDataBlock(chunkSize, ptList.get(2), data),\n\t\t\t\t\tnew ByteArrayDataBlock(chunkSize, ptList.get(3), data)\n\t\t\t);\n\n\t\t\twriter.resetNumMaterializeCalls();\n\t\t\twriter.readChunks(dataset, datasetAttributes, ptList);\n\n\t\t\t// one for the index, one for the four blocks (aggregated)\n\t\t\tassertEquals(2, writer.getNumMaterializeCalls());\n\n\t\t\twriter.resetNumMaterializeCalls();\n\t\t\twriter.readBlock(dataset, datasetAttributes, new long[] {0,0});\n\t\t\t// one for the index, one for the four blocks (aggregated)\n\t\t\tassertEquals(2, writer.getNumMaterializeCalls());\n\n\t\t\t/*\n\t\t\t * Aggregate read calls\n\t\t\t */\n\t\t\twriter.tkva.aggregate = true;\n\t\t\twriter.resetNumMaterializeCalls();\n\t\t\twriter.readChunks(dataset, datasetAttributes, ptList);\n\n\t\t\t// one for the index, one that covers ALL the blocks\n\t\t\tassertEquals(2, writer.getNumMaterializeCalls());\n\n\t\t\twriter.resetNumMaterializeCalls();\n\t\t\twriter.readBlock(dataset, datasetAttributes, new long[] {0,0});\n\t\t\t// one for the index, one that covers ALL the blocks\n\t\t\tassertEquals(2, writer.getNumMaterializeCalls());\n\t\t}\n\t}\n\n\tprivate int[] range(int N) {\n\t\treturn IntStream.range(0, N).toArray();\n\t}\n\n}
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/TestPositionValueAccess.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.ByteBuffer;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport org.janelia.saalfeldlab.n5.N5Exception.N5IOException;\nimport org.janelia.saalfeldlab.n5.readdata.Range;\nimport org.janelia.saalfeldlab.n5.readdata.ReadData;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\n\n/**\n * Implementation of {@link PositionValueAccess} for tests.\n * <p>\n * Instead of writing to a {@code KeyValueAccess} backend, here, data is just\n * stored in a {@code Map} as {@code byte[]} arrays.\n */\npublic class TestPositionValueAccess implements PositionValueAccess {\n\n\tprivate final Map<Key, byte[]> map = new HashMap<>();\n\n\t@Override\n\tpublic VolatileReadData get(final long[] key) {\n\t\tfinal byte[] bytes = map.get(new Key(key));\n\t\treturn bytes == null ? null : new ClosableWrapper(ReadData.from(bytes));\n\t}\n\n\tpublic void set(final long[] key, final ReadData data) {\n\n\t\tif (data == null) {\n\t\t\tmap.remove(new Key(key));\n\t\t\treturn;\n\t\t}\n\n\t\tfinal byte[] bytes = data.allBytes();\n\t\tmap.put(new Key(key), bytes);\n\t}\n\n\t@Override\n\tpublic boolean exists(long[] key) throws N5IOException {\n\t\treturn map.containsKey(new Key(key));\n\t}\n\n\t@Override\n\tpublic boolean remove(final long[] key) throws N5IOException {\n\t\treturn map.remove(new Key(key)) != null;\n\t}\n\n\tprivate static class Key {\n\n\t\tprivate final long[] data;\n\n\t\tKey(long[] data) {\n\n\t\t\tthis.data = data;\n\t\t}\n\n\t\t@Override\n\t\tpublic final boolean equals(final Object o) {\n\n\t\t\tif (!(o instanceof Key)) {\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tfinal Key key = (Key)o;\n\t\t\treturn Arrays.equals(data, key.data);\n\t\t}\n\n\t\t@Override\n\t\tpublic int hashCode() {\n\n\t\t\treturn Arrays.hashCode(data);\n\t\t}\n\t}\n\n\tprivate static class ClosableWrapper 
implements VolatileReadData {\n\n\t\tprivate final ReadData delegate;\n\n\t\tClosableWrapper(final ReadData delegate) {\n\t\t\tthis.delegate = delegate;\n\t\t}\n\n\t\t@Override\n\t\tpublic long length() {\n\t\t\treturn delegate.length();\n\t\t}\n\n\t\t@Override\n\t\tpublic long requireLength() throws N5IOException {\n\t\t\treturn delegate.requireLength();\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData limit(final long length) throws N5IOException {\n\t\t\treturn delegate.limit(length);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData slice(final long offset, final long length) throws N5IOException {\n\t\t\treturn delegate.slice(offset, length);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData slice(final Range range) throws N5IOException {\n\t\t\treturn delegate.slice(range);\n\t\t}\n\n\t\t@Override\n\t\tpublic InputStream inputStream() throws N5IOException, IllegalStateException {\n\t\t\treturn delegate.inputStream();\n\t\t}\n\n\t\t@Override\n\t\tpublic byte[] allBytes() throws N5IOException, IllegalStateException {\n\t\t\treturn delegate.allBytes();\n\t\t}\n\n\t\t@Override\n\t\tpublic ByteBuffer toByteBuffer() throws N5IOException, IllegalStateException {\n\t\t\treturn delegate.toByteBuffer();\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData materialize() throws N5IOException {\n\t\t\tdelegate.materialize();\n\t\t\treturn this;\n\t\t}\n\n\t\t@Override\n\t\tpublic void writeTo(final OutputStream outputStream) throws N5IOException, IllegalStateException {\n\t\t\tdelegate.writeTo(outputStream);\n\t\t}\n\n\t\t@Override\n\t\tpublic void prefetch(final Collection<? extends Range> ranges) throws N5IOException {\n\t\t\tdelegate.prefetch(ranges);\n\t\t}\n\n\t\t@Override\n\t\tpublic ReadData encode(final OutputStreamOperator encoder) {\n\t\t\treturn delegate.encode(encoder);\n\t\t}\n\n\t\t@Override\n\t\tpublic void close() throws N5IOException {\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/WriteRegionTest.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.Arrays;\nimport org.janelia.saalfeldlab.n5.ByteArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.RawCompression;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.readdata.VolatileReadData;\nimport org.janelia.saalfeldlab.n5.N5Writer.DataBlockSupplier;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\nimport org.junit.Test;\n\npublic class WriteRegionTest {\n\n\n\t@Test\n\tpublic void testWriteRegion() {\n\n\t\tint[] chunkSize = {3};\n\t\tfinal long[] datasetDimensions = {15};\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\n\t\tTestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tchunkSize,\n\t\t\t\tDataType.INT8,\n\t\t\t\tc0,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal DatasetAccess<byte[]> datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\t\tDataBlockSupplier<byte[]> chunks = (gridPos, existing) -> {\n\t\t\treturn createDataBlock(chunkSize, gridPos.clone(), (byte) gridPos[0]);\n\t\t};\n\t\t\n\t\tDataBlockSupplier<byte[]> chunks255 = (gridPos, existing) -> {\n\t\t\treturn createDataBlock(chunkSize, gridPos.clone(), (byte)255);\n\t\t};\n\t\t\n\t\tDataBlockSupplier<byte[]> chunksDelete = (gridPos, existing) -> {\n\t\t\treturn null;\n\t\t};\n\n\n\t\t// write one chunk at grid Position 1\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[] {3},\n\t\t\t\tnew long[] {3},\n\t\t\t\tchunks,\n\t\t\t\tfalse);\n\n\t\t// Chunks\n\t\t//\t|...|...|...|...|...|\n\t\t// Pixels indexes\n\t\t//\t0   3   
6   9   12  15\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {0}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {1}), true, 1);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {2}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {3}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {4}), false, 0);\n\n\t\t// write two chunks at grid positions 2 and 3\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{6},\n\t\t\t\tnew long[]{6},\n\t\t\t\tchunks,\n\t\t\t\tfalse);\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {0}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {1}), true, 1);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {2}), true, 2);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {3}), true, 3);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {4}), false, 0);\n\n\t\t// delete two chunks at grid positions 3 and 4\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{9},\n\t\t\t\tnew long[]{6},\n\t\t\t\tchunksDelete,\n\t\t\t\tfalse);\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {0}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {1}), true, 1);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {2}), true, 2);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {3}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {4}), false, 0);\n\t}\n\n\t@Test\n\tpublic void testWriteRegionSharded() {\n\n\t\t// Shards\n\t\t//\t#...............................#...............................#...............................#...............................#\n\t\t// Chunks\n\t\t//\t|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|\n\t\t// Pixels indexes\n\t\t//\t0   3   6   9  12  15  18  21  24  27  30  33  36  39  42  45  48  51  54  57  60  63  66  69  72  75  78  81  84  87  90  93  96\n\n\t\tint[] chunkSize = {3};\n\t\tint[] shardSize = {24};\n\t\tfinal long[] datasetDimensions = {96};\n\t\tint numChunks = (int)(datasetDimensions[0] / chunkSize[0]);\n\n\t\t// Chunks are size 3\n\t\t// Shards are size 24 (contain 8 chunks)\n\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new 
DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tIndexLocation.END\n\t\t);\n\n\t\tTestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tshardSize,\n\t\t\t\tDataType.INT8,\n\t\t\t\tc1,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal DatasetAccess<byte[]> datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\tDataBlockSupplier<byte[]> chunks = (gridPos, existing) -> {\n\t\t\treturn createDataBlock(chunkSize, gridPos.clone(), (byte) gridPos[0]);\n\t\t};\n\n\t\tDataBlockSupplier<byte[]> chunks255 = (gridPos, existing) -> {\n\t\t\treturn createDataBlock(chunkSize, gridPos.clone(), (byte)255);\n\t\t};\n\n\t\tDataBlockSupplier<byte[]> chunksDelete = (gridPos, existing) -> {\n\t\t\treturn null;\n\t\t};\n\n\t\t// write one chunk at grid Position 1\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[] {3},\n\t\t\t\tnew long[] {3},\n\t\t\t\tchunks,\n\t\t\t\tfalse);\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {0}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {1}), true, 1);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {2}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {3}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[] {4}), false, 0);\n\n\t\t// only the first shard should exist\n\t\tcheckKey(store, new long[]{0}, true);\n\t\tcheckKey(store, new long[]{1}, false);\n\t\tcheckKey(store, new long[]{2}, false);\n\t\tcheckKey(store, new long[]{3}, false);\n\n\t\t// write two chunks at grid positions 2 and 3\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{6},\n\t\t\t\tnew long[]{6},\n\t\t\t\tchunks,\n\t\t\t\tfalse);\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{0}), false, 
0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{1}), true, 1);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{2}), true, 2);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{3}), true, 3);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{4}), false, 0);\n\n\t\t// delete two chunks at grid positions 3 and 4\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{9},\n\t\t\t\tnew long[]{6},\n\t\t\t\tchunksDelete,\n\t\t\t\tfalse);\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{0}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{1}), true, 1);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{2}), true, 2);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{3}), false, 0);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{4}), false, 0);\n\n\t\t// overwrite all chunks to hold 255\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{0},\n\t\t\t\tnew long[]{96},\n\t\t\t\tchunks255,\n\t\t\t\tfalse);\n\n\t\tfor (int i = 0; i < numChunks; i++) {\n\t\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{i}), true, 255);\n\t\t}\n\n\t\t// all shards should exist\n\t\tcheckKey(store, new long[]{0}, true);\n\t\tcheckKey(store, new long[]{1}, true);\n\t\tcheckKey(store, new long[]{2}, true);\n\t\tcheckKey(store, new long[]{3}, true);\n\n\t\t// delete some chunks\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{18},\n\t\t\t\tnew long[]{18},\n\t\t\t\tchunksDelete,\n\t\t\t\tfalse);\n\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{5}), true, 255);\n\t\tcheckChunk(datasetAccess.readChunk(store, new long[]{6}), false, 0);\n\n\t\t// all shards should exist\n\t\tcheckKey(store, new long[]{0}, true);\n\t\tcheckKey(store, new long[]{1}, true);\n\t\tcheckKey(store, new long[]{2}, true);\n\t\tcheckKey(store, new long[]{3}, true);\n\n\t\t// delete more chunks so that shard 1 is empty\n\t\tdatasetAccess.writeRegion(store,\n\t\t\t\tnew long[]{36},\n\t\t\t\tnew 
long[]{15},\n\t\t\t\tchunksDelete,\n\t\t\t\tfalse);\n\n\t\t// shard 1 should be gone\n\t\tcheckKey(store, new long[]{0}, true);\n\t\tcheckKey(store, new long[]{1}, false);\n\t\tcheckKey(store, new long[]{2}, true);\n\t\tcheckKey(store, new long[]{3}, true);\n\t}\n\n\tprivate static void checkChunk(final DataBlock<byte[]> chunk, final boolean expectedNonNull, final int expectedFillValue) {\n\n\t\tif (chunk == null) {\n\t\t\tif (expectedNonNull) {\n\t\t\t\tthrow new IllegalStateException(\"expected non-null dataBlock\");\n\t\t\t}\n\t\t} else {\n\t\t\tif (!expectedNonNull) {\n\t\t\t\tthrow new IllegalStateException(\"expected null dataBlock\");\n\t\t\t}\n\t\t\tfinal byte[] bytes = chunk.getData();\n\t\t\tfor (byte b : bytes) {\n\t\t\t\tif (b != (byte) expectedFillValue) {\n\t\t\t\t\tthrow new IllegalStateException(\"expected all values to be \" + expectedFillValue);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tprivate static void checkKey(PositionValueAccess pva, long[] position, final boolean expectedNonNull) {\n\n\t\ttry (VolatileReadData val = pva.get(position)) {\n\n\t\t\tif (expectedNonNull && val == null)\n\t\t\t\tthrow new IllegalStateException(\"expected non-null value\");\n\t\t\telse if (!expectedNonNull && val != null)\n\t\t\t\tthrow new IllegalStateException(\"expected null value\");\n\t\t}\n\t}\n\n\tprivate static DataBlock<byte[]> createDataBlock(int[] size, long[] gridPosition, int fillValue) {\n\t\tfinal byte[] bytes = new byte[DataBlock.getNumElements(size)];\n\t\tArrays.fill(bytes, (byte) fillValue);\n\t\treturn new ByteArrayDataBlock(size, gridPosition, bytes);\n\t}\n\n\tpublic static class TestDatasetAttributes extends DatasetAttributes {\n\n\t\tpublic TestDatasetAttributes(long[] dimensions, int[] outerBlockSize, DataType dataType, BlockCodecInfo blockCodecInfo,\n\t\t\t\tDataCodecInfo... 
dataCodecInfos) {\n\n\t\t\tsuper(dimensions, outerBlockSize, dataType, blockCodecInfo, dataCodecInfos);\n\t\t}\n\n\t\t@Override // to make this accessible for the test\n\t\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\t\t\treturn super.getDatasetAccess();\n\t\t}\n\t}\n\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/WriteShardTest.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\n\nimport java.util.Arrays;\nimport java.util.stream.Collectors;\n\nimport org.apache.commons.lang3.stream.Streams;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.RawCompression;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\nimport org.junit.Test;\n\npublic class WriteShardTest {\n\n\n//\t#...............................#...............................#...............................#...............................#\n//\t$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$\n//\t|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|\n//\t0   3   6   9  12  15  18  21  24  27  30  33  36  39  42  45  48  51  54  57  60  63  66  69  72  75  78  81  84  87  90  93  96\n\n\tpublic static void main(String[] args) {\n\n\t\tfinal int[] chunkSize = {3};\n\t\tfinal int[] level1ShardSize = {6};\n\t\tfinal long[] datasetDimensions = {36};\n\n\t\t// Chunks are 3\n\t\t// Level 1 shards are 6\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new 
RawCompression()},\n\t\t\t\tIndexLocation.END\n\t\t);\n\n\t\tTestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tlevel1ShardSize,\n\t\t\t\tDataType.INT32,\n\t\t\t\tc1,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal DatasetAccess<int[]> datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t//\t0       1       2       3       4       5       6\n\t\t//\t$.......$.......$.......$.......$.......$.......$\n\t\t//\t0   1   2   3   4   5   6   7   8   9  10  11  12\n\t\t//\t|...|...|...|...|...|...|...|...|...|...|...|...|\n\t\t//\t0   3   6   9  12  15  18  21  24  27  30  33  36\n\t\t//  .................................................\n\n\t\tfinal int[] dataBlockSize = c1.getInnerBlockSize();\n\n\t\t// create a shard DataBlock\n\t\t{\n\t\t\tfinal DataBlock<int[]> shard = createDataBlock(level1ShardSize, new long[] {2}, 1);\n\t\t\tSystem.out.println(\"shard.getGridPosition() = \" + Arrays.toString(shard.getGridPosition()));\n\t\t\tSystem.out.println(\"shard.getSize() = \" + Arrays.toString(shard.getSize()));\n\t\t\tSystem.out.println(\"shard.getData() = \" + Arrays.toString(shard.getData()));\n\t\t\tSystem.out.println();\n\t\t\tdatasetAccess.writeBlock(store, shard, 1);\n\t\t\tSystem.out.println();\n\t\t\tSystem.out.println();\n\t\t}\n\t\t{\n\t\t\tfinal DataBlock<int[]> shard = createDataBlock(level1ShardSize, new long[] {5}, 1);\n\t\t\tSystem.out.println(\"shard.getGridPosition() = \" + Arrays.toString(shard.getGridPosition()));\n\t\t\tSystem.out.println(\"shard.getSize() = \" + Arrays.toString(shard.getSize()));\n\t\t\tSystem.out.println(\"shard.getData() = \" + Arrays.toString(shard.getData()));\n\t\t\tSystem.out.println();\n\t\t\tdatasetAccess.writeBlock(store, shard, 1);\n\t\t\tSystem.out.println();\n\t\t\tSystem.out.println();\n\t\t}\n\n\n\n\t\tSystem.out.println(\"all good\");\n\t}\n\n\t@Test\n\tpublic void testShardDatasetAccess() {\n\n\t\tfinal 
int[] chunkSize = {3};\n\t\tfinal int[] level1ShardSize = {6};\n\t\tfinal long[] datasetDimensions = {36};\n\n\t\t// Chunks are 3\n\t\t// Level 1 shards are 6\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tIndexLocation.END\n\t\t);\n\n\t\tTestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tlevel1ShardSize,\n\t\t\t\tDataType.INT32,\n\t\t\t\tc1,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal DatasetAccess<int[]> datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t//\t0       1       2       3       4       5       6\n\t\t//\t$.......$.......$.......$.......$.......$.......$\n\t\t//\t0   1   2   3   4   5   6   7   8   9  10  11  12\n\t\t//\t|...|...|...|...|...|...|...|...|...|...|...|...|\n\t\t//\t0   3   6   9  12  15  18  21  24  27  30  33  36\n\t\t//  .................................................\n\n\t\tlong[] p = {0};\n\t\tassertFalse(store.exists(p));\n\t}\n\n\t@Test\n\tpublic void testWriteNullBlockRemovesShard() throws Exception {\n\n\t\tfinal int[] chunkSize = {3};\n\t\tfinal int[] level1ShardSize = {6};\n\t\tfinal long[] datasetDimensions = {36};\n\n\t\t// Level 1 shards have size 6 (each containing two datablocks of size 3)\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[]{new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[]{new RawCompression()},\n\t\t\t\tIndexLocation.END);\n\n\t\tTestDatasetAttributes attributes = new 
TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tlevel1ShardSize,\n\t\t\t\tDataType.INT32,\n\t\t\t\tc1,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal DatasetAccess<int[]> datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\t\tfinal long[] shardKey = {1};\n\n\t\t/**\n\t\t * ONE BLOCK, ONE SHARD\n\t\t */\n\n\t\tassertFalse(\"Shard should not exist at the start of the test\", store.exists(shardKey));\n\n\t\t// Write a single chunk at grid position [3]\n\t\t// This chunk is in shard [1]\n\t\t// ( shard 0 contains chunks 0-1,\n\t\t//   shard 1 contains chunks 2-3 )\n\t\tfinal long[] chunkGridPosition = {3};\n\t\tfinal DataBlock<int[]> chunk = createDataBlock(chunkSize, chunkGridPosition, 100);\n\n\t\tdatasetAccess.writeChunk(store, chunk);\n\n\t\t// Verify the shard exists\n\t\tassertTrue(\"Shard should exist after writing chunk\", store.exists(shardKey));\n\n\t\t// Write a null chunk at the same location using writeRegion\n\t\t// This should remove the chunk and delete the now-empty shard\n\t\tfinal long[] regionMin = {9}; // pixel position of chunk [3]\n\t\tfinal long[] regionSize = {3}; // size of one block\n\n\t\tdatasetAccess.writeRegion(\n\t\t\t\tstore,\n\t\t\t\tregionMin,\n\t\t\t\tregionSize,\n\t\t\t\t(gridPosition,\n\t\t\t\t\t\t// block supplier returns null to indicate removal\n\t\t\t\t\t\texistingBlock) -> null,\n\t\t\t\tfalse);\n\n\t\t// Verify the shard has been removed\n\t\tassertFalse(\"Shard should be removed after writing null chunk\", store.exists(shardKey));\n\n\t\t/**\n\t\t * THREE CHUNKS, TWO SHARDS\n\t\t */\n\t\t// Write two chunks into the same shard, and one chunk into a second shard\n\t\t// Shard [1] contains chunks [2] and [3]\n\t\t// Shard [2] contains chunk  [4]\n\t\tfinal long[] shardKey1 = {1};\n\t\tfinal long[] shardKey2 = {2};\n\n\t\tfinal DataBlock<int[]> chunk1 = createDataBlock(chunkSize, new long[]{2}, 100);\n\t\tfinal DataBlock<int[]> chunk2 = 
createDataBlock(chunkSize, new long[]{3}, 200);\n\t\tfinal DataBlock<int[]> chunk3 = createDataBlock(chunkSize, new long[]{4}, 300);\n\n\t\tassertFalse(\"Shard should not exist at the start of the test\", store.exists(shardKey1));\n\t\tassertFalse(\"Shard should not exist at the start of the test\", store.exists(shardKey2));\n\n\t\t// write blocks\n\t\tdatasetAccess.writeChunks(store, Streams.of(chunk1, chunk2, chunk3).collect(Collectors.toList()));\n\n\t\t// Verify the shards exist\n\t\tassertTrue(\"Shard should exist after writing chunks\", store.exists(shardKey1));\n\t\tassertTrue(\"Shard should exist after writing chunks\", store.exists(shardKey2));\n\n\t\t// Write a null chunk at chunk [3]'s location\n\t\tdatasetAccess.writeRegion(\n\t\t\t\tstore,\n\t\t\t\tregionMin,\n\t\t\t\tregionSize,\n\t\t\t\t(gridPosition, existingBlock) -> null,\n\t\t\t\tfalse);\n\n\t\t// Verify the shard still exists (because chunk [2] is still there)\n\t\tassertTrue(\"Shard should still exist because it contains another chunk\", store.exists(shardKey1));\n\t\tassertTrue(\"Shard should still exist because it was not affected\", store.exists(shardKey2));\n\n\t\t// Verify we can still read chunk [2]\n\t\tfinal DataBlock<int[]> readChunk = datasetAccess.readChunk(store, new long[]{2});\n\t\tassertTrue(\"Block [2] should still be readable\", readChunk != null);\n\t\tassertTrue(\"Block [2] data should match\", Arrays.equals(chunk1.getData(), readChunk.getData()));\n\n\t\t// Verify chunk [3] is gone\n\t\tfinal DataBlock<int[]> readChunk2 = datasetAccess.readChunk(store, new long[]{3});\n\t\tassertNull(\"Block [3] should be null after removal\", readChunk2);\n\n\t\t// Verify chunk [4] exists\n\t\tfinal DataBlock<int[]> readChunk3 = datasetAccess.readChunk(store, new long[]{4});\n\t\tassertTrue(\"Block [4] should still be readable\", readChunk3 != null);\n\t\tassertTrue(\"Block [4] data should match\", Arrays.equals(chunk3.getData(), readChunk3.getData()));\n\n\t\t// Write a null chunk at chunk 
[2]'s location\n\t\tfinal long[] regionMin2 = {6}; // pixel position of chunk [2]\n\t\tfinal long[] regionSize2 = {3};\n\n\t\tdatasetAccess.writeRegion(\n\t\t\t\tstore,\n\t\t\t\tregionMin2,\n\t\t\t\tregionSize2,\n\t\t\t\t(gridPosition, existingBlock) -> null,\n\t\t\t\tfalse);\n\n\t\tassertFalse(\"Shard should not exist after deleting all chunks\", store.exists(shardKey1));\n\t\tassertTrue(\"Shard should still exist because it was not affected\", store.exists(shardKey2));\n\t}\n\n\tprivate static DataBlock<int[]> createDataBlock(int[] size, long[] gridPosition, int startValue) {\n\t\tfinal int[] ints = new int[DataBlock.getNumElements(size)];\n\t\tArrays.setAll(ints, i -> i + startValue);\n\t\treturn new IntArrayDataBlock(size, gridPosition, ints);\n\t}\n\n\tpublic static class TestDatasetAttributes extends DatasetAttributes {\n\n\t\tpublic TestDatasetAttributes(long[] dimensions, int[] outerBlockSize, DataType dataType, BlockCodecInfo blockCodecInfo,\n\t\t\t\tDataCodecInfo... dataCodecInfos) {\n\n\t\t\tsuper(dimensions, outerBlockSize, dataType, blockCodecInfo, dataCodecInfos);\n\t\t}\n\n\t\t@Override // to make this accessible for the test\n\t\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\t\t\treturn super.getDatasetAccess();\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/WriteShardTest2.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.Arrays;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.RawCompression;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\n\npublic class WriteShardTest2 {\n\n\n//\t#...............................#...............................#...............................#...............................#\n//\t$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$\n//\t|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|\n//\t0   3   6   9  12  15  18  21  24  27  30  33  36  39  42  45  48  51  54  57  60  63  66  69  72  75  78  81  84  87  90  93  96\n\n\tpublic static void main(String[] args) {\n\n\t\tfinal int[] chunkSize = {3,3};\n\t\tfinal int[] level1ShardSize = {6,6};\n\t\tfinal long[] datasetDimensions = {36, 18};\n\n\t\t// Chunks are 3x3\n\t\t// Level 1 shards are 6x6\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tIndexLocation.END\n\t\t);\n\n\t\tTestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tlevel1ShardSize,\n\t\t\t\tDataType.INT32,\n\t\t\t\tc1,\n\t\t\t\tnew RawCompression());\n\n\t\tfinal DatasetAccess<int[]> 
datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t//\t0       1       2       3       4       5       6\n\t\t//\t$.......$.......$.......$.......$.......$.......$\n\t\t//\t0   1   2   3   4   5   6   7   8   9  10  11  12\n\t\t//\t|...|...|...|...|...|...|...|...|...|...|...|...|\n\t\t//\t0   3   6   9  12  15  18  21  24  27  30  33  36\n\t\t//  .................................................\n\n\t\tfinal int[] dataBlockSize = c1.getInnerBlockSize();\n\n\t\t// create and write a shard DataBlock\n\t\tfinal DataBlock<int[]> shard = createDataBlock(level1ShardSize, new long[] {2,1}, 1);\n\t\tSystem.out.println(\"shard.getGridPosition() = \" + Arrays.toString(shard.getGridPosition()));\n\t\tSystem.out.println(\"shard.getSize() = \" + Arrays.toString(shard.getSize()));\n\t\tSystem.out.println(\"shard.getData() = \" + Arrays.toString(shard.getData()));\n\t\tdatasetAccess.writeBlock(store, shard, 1);\n\n\t\t// we should get these Chunk values\n\n\t\t//  1,  2,  3, |  4,  5,  6,\n\t\t//  7,  8,  9, | 10, 11, 12,\n\t\t// 13, 14, 15, | 16, 17, 18,\n\t\t// ------------+------------\n\t\t// 19, 20, 21, | 22, 23, 24,\n\t\t// 25, 26, 27, | 28, 29, 30,\n\t\t// 31, 32, 33, | 34, 35, 36\n\n\t\tSystem.out.println(\"{4, 2}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {4, 2}).getData()));\n\t\tSystem.out.println(\"{5, 2}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {5, 2}).getData()));\n\t\tSystem.out.println(\"{4, 3}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {4, 3}).getData()));\n\t\tSystem.out.println(\"{5, 3}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {5, 3}).getData()));\n\n\t\tfinal DataBlock<int[]> readShard = datasetAccess.readBlock(store, new long[] {2, 1}, 1);\n\t\tSystem.out.println(\"readShard.getData() = \" + 
Arrays.toString(readShard.getData()));\n\t}\n\n\tprivate static DataBlock<int[]> createDataBlock(int[] size, long[] gridPosition, int startValue) {\n\t\tfinal int[] ints = new int[DataBlock.getNumElements(size)];\n\t\tArrays.setAll(ints, i -> i + startValue);\n\t\treturn new IntArrayDataBlock(size, gridPosition, ints);\n\t}\n\n\tpublic static class TestDatasetAttributes extends DatasetAttributes {\n\n\t\tpublic TestDatasetAttributes(long[] dimensions, int[] outerBlockSize, DataType dataType, BlockCodecInfo blockCodecInfo,\n\t\t\t\tDataCodecInfo... dataCodecInfos) {\n\n\t\t\tsuper(dimensions, outerBlockSize, dataType, blockCodecInfo, dataCodecInfos);\n\t\t}\n\n\t\t@Override // to make this accessible for the test\n\t\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\t\t\treturn super.getDatasetAccess();\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/shard/WriteShardTestTruncate.java",
    "content": "package org.janelia.saalfeldlab.n5.shard;\n\nimport java.util.Arrays;\nimport org.janelia.saalfeldlab.n5.DataBlock;\nimport org.janelia.saalfeldlab.n5.DataType;\nimport org.janelia.saalfeldlab.n5.DatasetAttributes;\nimport org.janelia.saalfeldlab.n5.IntArrayDataBlock;\nimport org.janelia.saalfeldlab.n5.RawCompression;\nimport org.janelia.saalfeldlab.n5.codec.BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.DataCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.N5BlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.codec.RawBlockCodecInfo;\nimport org.janelia.saalfeldlab.n5.shard.ShardIndex.IndexLocation;\n\npublic class WriteShardTestTruncate {\n\n\n//\t#...............................#...............................#...............................#...............................#\n//\t$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$.......$\n//\t|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|\n//\t0   3   6   9  12  15  18  21  24  27  30  33  36  39  42  45  48  51  54  57  60  63  66  69  72  75  78  81  84  87  90  93  96\n\n\tpublic static void main(String[] args) {\n\n\t\tfinal int[] chunkSize = {3,3};\n\t\tfinal int[] level1ShardSize = {6,6};\n\t\tfinal long[] datasetDimensions = {34, 17}; // 6 x 3 shards, border (2, 1)\n\n\t\t// Chunks are 3x3\n\t\t// Level 1 shards are 6x6\n\t\tfinal BlockCodecInfo c0 = new N5BlockCodecInfo();\n\t\tfinal ShardCodecInfo c1 = new DefaultShardCodecInfo(\n\t\t\t\tchunkSize,\n\t\t\t\tc0,\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tnew RawBlockCodecInfo(),\n\t\t\t\tnew DataCodecInfo[] {new RawCompression()},\n\t\t\t\tIndexLocation.END\n\t\t);\n\n\t\tTestDatasetAttributes attributes = new TestDatasetAttributes(\n\t\t\t\tdatasetDimensions,\n\t\t\t\tlevel1ShardSize,\n\t\t\t\tDataType.INT32,\n\t\t\t\tc1,\n\t\t\t\tnew 
RawCompression());\n\n\t\tfinal DatasetAccess<int[]> datasetAccess = attributes.getDatasetAccess();\n\t\tfinal PositionValueAccess store = new TestPositionValueAccess();\n\n\t\t//\t0       1       2       3       4       5       6\n\t\t//\t$.......$.......$.......$.......$.......$.......$\n\t\t//\t0   1   2   3   4   5   6   7   8   9  10  11  12\n\t\t//\t|...|...|...|...|...|...|...|...|...|...|...|...|\n\t\t//\t0   3   6   9  12  15  18  21  24  27  30  33  36\n\t\t//  .................................................\n\n\t\tfinal int[] dataBlockSize = c1.getInnerBlockSize();\n\n\t\t// create and write a shard DataBlock\n\t\tfinal DataBlock<int[]> shard = createDataBlock(level1ShardSize, new long[] {5,2}, 1);\n\t\tSystem.out.println(\"shard.getGridPosition() = \" + Arrays.toString(shard.getGridPosition()));\n\t\tSystem.out.println(\"shard.getSize() = \" + Arrays.toString(shard.getSize()));\n\t\tSystem.out.println(\"shard.getData() = \" + Arrays.toString(shard.getData()));\n\t\tdatasetAccess.writeBlock(store, shard, 1);\n\n\t\t// we should get these chunk values\n\n\t\t//  1,  2,  3, |  4,  5,  6,\n\t\t//  7,  8,  9, | 10, 11, 12,\n\t\t// 13, 14, 15, | 16, 17, 18,\n\t\t// ------------+------------\n\t\t// 19, 20, 21, | 22, 23, 24,\n\t\t// 25, 26, 27, | 28, 29, 30,\n\t\t// 31, 32, 33, | 34, 35, 36\n\n\t\tSystem.out.println(\"{10, 4}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {10, 4}).getData()));\n\t\tSystem.out.println(\"{11, 4}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {11, 4}).getData()));\n\t\tSystem.out.println(\"{10, 5}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {10, 5}).getData()));\n\t\tSystem.out.println(\"{11, 5}.data = \" + Arrays.toString(\n\t\t\t\tdatasetAccess.readChunk(store, new long[] {11, 5}).getData()));\n\n\n\t\t//  1,  2,  3, |  4\n\t\t//  7,  8,  9, | 10\n\t\t// 13, 14, 15, | 16\n\t\t// ------------+---\n\t\t// 19, 20, 21, | 
22\n\t\t// 25, 26, 27, | 28\n\n\t\tfinal DataBlock<int[]> readShard = datasetAccess.readBlock(store, new long[] {5, 2}, 1);\n\t\tSystem.out.println(\"readShard.getData() = \" + Arrays.toString(readShard.getData()));\n\t}\n\n\tprivate static DataBlock<int[]> createDataBlock(int[] size, long[] gridPosition, int startValue) {\n\t\tfinal int[] ints = new int[DataBlock.getNumElements(size)];\n\t\tArrays.setAll(ints, i -> i + startValue);\n\t\treturn new IntArrayDataBlock(size, gridPosition, ints);\n\t}\n\n\tpublic static class TestDatasetAttributes extends DatasetAttributes {\n\n\t\tpublic TestDatasetAttributes(long[] dimensions, int[] outerBlockSize, DataType dataType, BlockCodecInfo blockCodecInfo,\n\t\t\t\tDataCodecInfo... dataCodecInfos) {\n\n\t\t\tsuper(dimensions, outerBlockSize, dataType, blockCodecInfo, dataCodecInfos);\n\t\t}\n\n\t\t@Override // to make this accessible for the test\n\t\tprotected <T> DatasetAccess<T> getDatasetAccess() {\n\t\t\treturn super.getDatasetAccess();\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/test/java/org/janelia/saalfeldlab/n5/url/UriAttributeTest.java",
    "content": "package org.janelia.saalfeldlab.n5.url;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\n\nimport java.io.IOException;\nimport java.net.URISyntaxException;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Objects;\nimport java.util.Set;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\n\nimport org.janelia.saalfeldlab.n5.N5FSWriter;\nimport org.janelia.saalfeldlab.n5.N5Reader;\nimport org.janelia.saalfeldlab.n5.N5URI;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\n\npublic class UriAttributeTest {\n\n\tN5Reader n5;\n\n\tString rootContext = \"\";\n\n\tint[] list;\n\n\tHashMap<String, String> obj;\n\n\tSet<String> rootKeys;\n\n\tTestInts testObjInts;\n\n\tTestDoubles testObjDoubles;\n\n\t@Before\n\tpublic void before() {\n\n\t\tn5 = new N5FSWriter(\"src/test/resources/url/urlAttributes.n5\");\n\t\trootContext = \"\";\n\t\tlist = new int[]{0, 1, 2, 3};\n\n\t\tobj = new HashMap<>();\n\t\tobj.put(\"a\", \"aa\");\n\t\tobj.put(\"b\", \"bb\");\n\t\trootKeys = new HashSet<>();\n\t\trootKeys.addAll(Stream.of(\"n5\", \"foo\", \"list\", \"object\").collect(Collectors.toList()));\n\n\t\ttestObjInts = new TestInts(\"ints\", \"intsName\", new int[]{5, 4, 3});\n\t\ttestObjDoubles = new TestDoubles(\"doubles\", \"doublesName\", new double[]{5.5, 4.4, 3.3});\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\t@Test\n\tpublic void testRootAttributes() throws URISyntaxException, IOException {\n\n\t\t// get\n\t\tfinal Map<String, Object> everything = getAttribute(n5, new N5URI(\"\"), Map.class);\n\t\tfinal String version = \"2.6.1\"; // the n5 version at the time the test\n\t\t\t\t\t\t\t\t\t\t// data were created\n\t\tassertEquals(\"empty url\", version, everything.get(\"n5\"));\n\n\t\tfinal Map<String, Object> everything2 = getAttribute(n5, new N5URI(\"#/\"), 
Map.class);\n\t\tassertEquals(\"root attribute\", version, everything2.get(\"n5\"));\n\n\t\tassertEquals(\"url to attribute\", \"bar\", getAttribute(n5, new N5URI(\"#foo\"), String.class));\n\t\tassertEquals(\"url to attribute absolute\", \"bar\", getAttribute(n5, new N5URI(\"#/foo\"), String.class));\n\n\t\tassertEquals(\"#foo\", \"bar\", getAttribute(n5, new N5URI(\"#foo\"), String.class));\n\t\tassertEquals(\"#/foo\", \"bar\", getAttribute(n5, new N5URI(\"#/foo\"), String.class));\n\t\tassertEquals(\"?#foo\", \"bar\", getAttribute(n5, new N5URI(\"?#foo\"), String.class));\n\t\tassertEquals(\"?#/foo\", \"bar\", getAttribute(n5, new N5URI(\"?#/foo\"), String.class));\n\t\tassertEquals(\"?/#/foo\", \"bar\", getAttribute(n5, new N5URI(\"?/#/foo\"), String.class));\n\t\tassertEquals(\"?/.#/foo\", \"bar\", getAttribute(n5, new N5URI(\"?/.#/foo\"), String.class));\n\t\tassertEquals(\"?./#/foo\", \"bar\", getAttribute(n5, new N5URI(\"?./#/foo\"), String.class));\n\t\tassertEquals(\"?.#foo\", \"bar\", getAttribute(n5, new N5URI(\"?.#foo\"), String.class));\n\t\tassertEquals(\"?/a/..#foo\", \"bar\", getAttribute(n5, new N5URI(\"?/a/..#foo\"), String.class));\n\t\tassertEquals(\"?/a/../.#foo\", \"bar\", getAttribute(n5, new N5URI(\"?/a/../.#foo\"), String.class));\n\n\t\t/* whitespace-encoding-necessary, fragment-only test */\n\t\tassertEquals(\"#f o o\", \"b a r\", getAttribute(n5, new N5URI(\"#f o o\"), String.class));\n\n\t\tAssert.assertArrayEquals(\"url list\", list, getAttribute(n5, new N5URI(\"#list\"), int[].class));\n\n\t\t// list\n\t\tassertEquals(\"url list[0]\", list[0], (int)getAttribute(n5, new N5URI(\"#list[0]\"), Integer.class));\n\t\tassertEquals(\"url list[1]\", list[1], (int)getAttribute(n5, new N5URI(\"#list[1]\"), Integer.class));\n\t\tassertEquals(\"url list[2]\", list[2], (int)getAttribute(n5, new N5URI(\"#list[2]\"), Integer.class));\n\n\t\tassertEquals(\"url list[3]\", list[3], (int)getAttribute(n5, new N5URI(\"#list[3]\"), 
Integer.class));\n\t\tassertEquals(\"url list/[3]\", list[3], (int)getAttribute(n5, new N5URI(\"#list/[3]\"), Integer.class));\n\t\tassertEquals(\"url list//[3]\", list[3], (int)getAttribute(n5, new N5URI(\"#list//[3]\"), Integer.class));\n\t\tassertEquals(\"url //list//[3]\", list[3], (int)getAttribute(n5, new N5URI(\"#//list//[3]\"), Integer.class));\n\t\tassertEquals(\"url //list//[3]//\", list[3], (int)getAttribute(n5, new N5URI(\"#//list////[3]//\"), Integer.class));\n\n\t\t// object\n\t\tassertTrue(\"url object\", mapsEqual(obj, getAttribute(n5, new N5URI(\"#object\"), Map.class)));\n\t\tassertEquals(\"url object/a\", \"aa\", getAttribute(n5, new N5URI(\"#object/a\"), String.class));\n\t\tassertEquals(\"url object/b\", \"bb\", getAttribute(n5, new N5URI(\"#object/b\"), String.class));\n\t}\n\n\t@Test\n\tpublic void testPathAttributes() throws URISyntaxException, IOException {\n\n\t\tfinal String a = \"a\";\n\t\tfinal String aa = \"aa\";\n\t\tfinal String aaa = \"aaa\";\n\n\t\tfinal N5URI aUrl = new N5URI(\"?/a\");\n\t\tfinal N5URI aaUrl = new N5URI(\"?/a/aa\");\n\t\tfinal N5URI aaaUrl = new N5URI(\"?/a/aa/aaa\");\n\n\t\t// name of a\n\t\tassertEquals(\"name of a from root\", a, getAttribute(n5, \"?/a#name\", String.class));\n\t\tassertEquals(\"name of a from root\", a, getAttribute(n5, new N5URI(\"?a#name\"), String.class));\n\t\tassertEquals(\"name of a from a\", a, getAttribute(n5, aUrl.resolve(new N5URI(\"?/a#name\")), String.class));\n\t\tassertEquals(\"name of a from aa\", a, getAttribute(n5, aaUrl.resolve(\"?..#name\"), String.class));\n\t\tassertEquals(\"name of a from aaa\", a, getAttribute(n5, aaaUrl.resolve(new N5URI(\"?../..#name\")), String.class));\n\n\t\t// name of aa\n\t\tassertEquals(\"name of aa from root\", aa, getAttribute(n5, new N5URI(\"?/a/aa#name\"), String.class));\n\t\tassertEquals(\"name of aa from root\", aa, getAttribute(n5, new N5URI(\"?a/aa#name\"), String.class));\n\t\tassertEquals(\"name of aa from a\", aa, getAttribute(n5, 
aUrl.resolve(\"?aa#name\"), String.class));\n\n\t\tassertEquals(\"name of aa from aa\", aa, getAttribute(n5, aaUrl.resolve(\"?./#name\"), String.class));\n\n\t\tassertEquals(\"name of aa from aa\", aa, getAttribute(n5, aaUrl.resolve(\"#name\"), String.class));\n\t\tassertEquals(\"name of aa from aaa\", aa, getAttribute(n5, aaaUrl.resolve(\"?..#name\"), String.class));\n\n\t\t// name of aaa\n\t\tassertEquals(\"name of aaa from root\", aaa, getAttribute(n5, new N5URI(\"?/a/aa/aaa#name\"), String.class));\n\t\tassertEquals(\"name of aaa from root\", aaa, getAttribute(n5, new N5URI(\"?a/aa/aaa#name\"), String.class));\n\t\tassertEquals(\"name of aaa from a\", aaa, getAttribute(n5, aUrl.resolve(\"?aa/aaa#name\"), String.class));\n\t\tassertEquals(\"name of aaa from aa\", aaa, getAttribute(n5, aaUrl.resolve(\"?aaa#name\"), String.class));\n\t\tassertEquals(\"name of aaa from aaa\", aaa, getAttribute(n5, aaaUrl.resolve(\"#name\"), String.class));\n\n\t\tassertEquals(\"name of aaa from aaa\", aaa, getAttribute(n5, aaaUrl.resolve(\"?./#name\"), String.class));\n\n\t\tassertEquals(\"nested list 1\", (Integer)1, getAttribute(n5, new N5URI(\"#nestedList[0][0][0]\"), Integer.class));\n\t\tassertEquals(\"nested list 1\", (Integer)1, getAttribute(n5, new N5URI(\"#/nestedList/[0][0][0]\"), Integer.class));\n\t\tassertEquals(\n\t\t\t\t\"nested list 1\",\n\t\t\t\t(Integer)1,\n\t\t\t\tgetAttribute(n5, new N5URI(\"#nestedList//[0]/[0]///[0]\"), Integer.class));\n\t\tassertEquals(\n\t\t\t\t\"nested list 1\",\n\t\t\t\t(Integer)1,\n\t\t\t\tgetAttribute(n5, new N5URI(\"#/nestedList[0]//[0][0]\"), Integer.class));\n\n\t}\n\n\tprivate <T> T getAttribute(\n\t\t\tfinal N5Reader n5,\n\t\t\tfinal String url1,\n\t\t\tfinal Class<T> clazz) throws URISyntaxException, IOException {\n\n\t\treturn getAttribute(n5, url1, null, clazz);\n\t}\n\n\tprivate <T> T getAttribute(\n\t\t\tfinal N5Reader n5,\n\t\t\tfinal String url1,\n\t\t\tfinal String url2,\n\t\t\tfinal Class<T> clazz) throws 
URISyntaxException, IOException {\n\n\t\tfinal N5URI n5URL = url2 == null ? new N5URI(url1) : new N5URI(url1).resolve(url2);\n\t\treturn getAttribute(n5, n5URL, clazz);\n\t}\n\n\tprivate <T> T getAttribute(\n\t\t\tfinal N5Reader n5,\n\t\t\tfinal N5URI url1,\n\t\t\tfinal Class<T> clazz) throws URISyntaxException, IOException {\n\n\t\treturn getAttribute(n5, url1, null, clazz);\n\t}\n\n\tprivate <T> T getAttribute(\n\t\t\tfinal N5Reader n5,\n\t\t\tfinal N5URI url1,\n\t\t\tfinal String url2,\n\t\t\tfinal Class<T> clazz) throws URISyntaxException, IOException {\n\n\t\tfinal N5URI n5URL = url2 == null ? url1 : url1.resolve(url2);\n\t\treturn n5.getAttribute(n5URL.getGroupPath(), n5URL.getAttributePath(), clazz);\n\t}\n\n\t@Test\n\tpublic void testPathObject() throws IOException, URISyntaxException {\n\n\t\tfinal TestInts ints = getAttribute(n5, new N5URI(\"?objs#intsKey\"), TestInts.class);\n\t\tassertEquals(testObjInts.name, ints.name);\n\t\tassertEquals(testObjInts.type, ints.type);\n\t\tassertArrayEquals(testObjInts.t(), ints.t());\n\n\t\tfinal TestDoubles doubles = getAttribute(n5, new N5URI(\"?objs#doublesKey\"), TestDoubles.class);\n\t\tassertEquals(testObjDoubles.name, doubles.name);\n\t\tassertEquals(testObjDoubles.type, doubles.type);\n\t\tassertArrayEquals(testObjDoubles.t(), doubles.t(), 1e-9);\n\n\t\tfinal TestDoubles[] doubleArray = getAttribute(n5, new N5URI(\"?objs#array\"), TestDoubles[].class);\n\t\tfinal TestDoubles doubles1 = new TestDoubles(\"doubles\", \"doubles1\", new double[]{5.7, 4.5, 3.4});\n\t\tfinal TestDoubles doubles2 = new TestDoubles(\"doubles\", \"doubles2\", new double[]{5.8, 4.6, 3.5});\n\t\tfinal TestDoubles doubles3 = new TestDoubles(\"doubles\", \"doubles3\", new double[]{5.9, 4.7, 3.6});\n\t\tfinal TestDoubles doubles4 = new TestDoubles(\"doubles\", \"doubles4\", new double[]{5.10, 4.8, 3.7});\n\t\tfinal TestDoubles[] expectedDoubles = new TestDoubles[]{doubles1, doubles2, doubles3, 
doubles4};\n\t\tassertArrayEquals(expectedDoubles, doubleArray);\n\n\t\tfinal String[] stringArray = getAttribute(n5, new N5URI(\"?objs#String\"), String[].class);\n\t\tfinal String[] expectedString = new String[]{\"This\", \"is\", \"a\", \"test\"};\n\t\tassertArrayEquals(expectedString, stringArray);\n\n\t\tfinal Integer[] integerArray = getAttribute(n5, new N5URI(\"?objs#Integer\"), Integer[].class);\n\t\tfinal Integer[] expectedInteger = new Integer[]{1, 2, 3, 4};\n\t\tassertArrayEquals(expectedInteger, integerArray);\n\n\t\tfinal int[] intArray = getAttribute(n5, new N5URI(\"?objs#int\"), int[].class);\n\t\tfinal int[] expectedInt = new int[]{1, 2, 3, 4};\n\t\tassertArrayEquals(expectedInt, intArray);\n\t}\n\n\tprivate <K, V> boolean mapsEqual(final Map<K, V> a, final Map<K, V> b) {\n\n\t\tif (!a.keySet().equals(b.keySet()))\n\t\t\treturn false;\n\n\t\tfor (final K k : a.keySet()) {\n\t\t\tif (!a.get(k).equals(b.get(k)))\n\t\t\t\treturn false;\n\t\t}\n\n\t\treturn true;\n\t}\n\n\tprivate static class TestObject<T> {\n\n\t\tString type;\n\n\t\tString name;\n\n\t\tT t;\n\n\t\tpublic TestObject(final String type, final String name, final T t) {\n\n\t\t\tthis.name = name;\n\t\t\tthis.type = type;\n\t\t\tthis.t = t;\n\t\t}\n\n\t\tpublic T t() {\n\n\t\t\treturn t;\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean equals(final Object obj) {\n\n\t\t\tif (obj != null && obj.getClass() == this.getClass()) {\n\t\t\t\tfinal TestObject<?> otherTestObject = (TestObject<?>)obj;\n\t\t\t\treturn Objects.equals(name, otherTestObject.name)\n\t\t\t\t\t\t&& Objects.equals(type, otherTestObject.type)\n\t\t\t\t\t\t&& Objects.equals(t, otherTestObject.t);\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tpublic static class TestDoubles extends TestObject<double[]> {\n\n\t\tpublic TestDoubles(final String type, final String name, final double[] t) {\n\n\t\t\tsuper(type, name, t);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean equals(final Object obj) {\n\n\t\t\tif (obj != null && obj.getClass() == this.getClass()) {\n\t\t\t\tfinal TestDoubles otherTestObject = (TestDoubles)obj;\n\t\t\t\treturn Objects.equals(name, 
otherTestObject.name)\n\t\t\t\t\t\t&& Objects.equals(type, otherTestObject.type)\n\t\t\t\t\t\t&& Arrays.equals(t, otherTestObject.t);\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tprivate static class TestInts extends TestObject<int[]> {\n\n\t\tpublic TestInts(final String type, final String name, final int[] t) {\n\n\t\t\tsuper(type, name, t);\n\t\t}\n\n\t\t@Override\n\t\tpublic boolean equals(final Object obj) {\n\n\t\t\tif (obj != null && obj.getClass() == this.getClass()) {\n\t\t\t\tfinal TestInts otherTestObject = (TestInts)obj;\n\t\t\t\treturn Objects.equals(name, otherTestObject.name)\n\t\t\t\t\t\t&& Objects.equals(type, otherTestObject.type)\n\t\t\t\t\t\t&& Arrays.equals(t, otherTestObject.t);\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\n\t}\n\n}\n"
  },
  {
    "path": "src/test/resources/backward/data-1.5.0.n5/attributes.json",
    "content": "{\"n5\":\"1.5.0\"}"
  },
  {
    "path": "src/test/resources/backward/data-1.5.0.n5/raw/attributes.json",
    "content": "{\"dataType\":\"uint8\",\"compressionType\":\"raw\",\"blockSize\":[5,4],\"dimensions\":[7,5]}"
  },
  {
    "path": "src/test/resources/backward/data-2.5.1.n5/attributes.json",
    "content": "{\"n5\":\"2.5.1\"}"
  },
  {
    "path": "src/test/resources/backward/data-2.5.1.n5/raw/attributes.json",
    "content": "{\"dataType\":\"uint8\",\"compression\":{\"type\":\"raw\"},\"blockSize\":[5,4],\"dimensions\":[7,5]}"
  },
  {
    "path": "src/test/resources/backward/data-3.1.3.n5/attributes.json",
    "content": "{\"n5\":\"4.0.0\"}"
  },
  {
    "path": "src/test/resources/backward/data-3.1.3.n5/raw/attributes.json",
    "content": "{\"dataType\":\"uint8\",\"compression\":{\"type\":\"raw\"},\"blockSize\":[5,4],\"dimensions\":[7,5]}"
  },
  {
    "path": "src/test/resources/url/urlAttributes.n5/a/aa/aaa/attributes.json",
    "content": "{\n    \"name\" : \"aaa\"\n}\n"
  },
  {
    "path": "src/test/resources/url/urlAttributes.n5/a/aa/attributes.json",
    "content": "{\n    \"name\" : \"aa\"\n}\n"
  },
  {
    "path": "src/test/resources/url/urlAttributes.n5/a/attributes.json",
    "content": "{\n    \"name\" : \"a\"\n}\n"
  },
  {
    "path": "src/test/resources/url/urlAttributes.n5/attributes.json",
    "content": "{\"n5\":\"2.6.1\",\"foo\":\"bar\",\"f o o\":\"b a r\",\"list\":[0,1,2,3],\"nestedList\":[[[1,2,3,4]],[[10,20,30,40]],[[100,200,300,400]],[[1000,2000,3000,4000]]],\"object\":{\"a\":\"aa\",\"b\":\"bb\"}}"
  },
  {
    "path": "src/test/resources/url/urlAttributes.n5/objs/attributes.json",
    "content": "{\n  \"intsKey\": {\n    \"type\": \"ints\",\n    \"name\": \"intsName\",\n    \"t\": [ 5, 4, 3 ]\n  },\n  \"doublesKey\": {\n    \"type\": \"doubles\",\n    \"name\": \"doublesName\",\n    \"t\": [ 5.5, 4.4, 3.3 ]\n  },\n  \"int\": [1, 2,3,4],\n  \"Integer\": [1, 2,3,4],\n  \"String\": [\"This\", \"is\", \"a\", \"test\"],\n  \"array\": [\n    {\n      \"type\": \"doubles\",\n      \"name\": \"doubles1\",\n      \"t\": [ 5.7, 4.5, 3.4 ]\n    },\n    {\n      \"type\": \"doubles\",\n      \"name\": \"doubles2\",\n      \"t\": [ 5.8, 4.6, 3.5 ]\n    },\n    {\n      \"type\": \"doubles\",\n      \"name\": \"doubles3\",\n      \"t\": [ 5.9, 4.7, 3.6 ]\n    },\n    {\n      \"type\": \"doubles\",\n      \"name\": \"doubles4\",\n      \"t\": [ 5.10, 4.8, 3.7 ]\n    }\n  ]\n}\n"
  }
]