[
  {
    "path": ".editorconfig",
    "content": ""
  },
  {
    "path": ".github/workflows/TagBot.yml",
    "content": "name: TagBot\non:\n  issue_comment:\n    types:\n      - created\n  workflow_dispatch:\n    inputs:\n      lookback:\n        default: 3\npermissions:\n  contents: write\njobs:\n  TagBot:\n    if: github.event_name == 'workflow_dispatch' || github.actor == 'JuliaTagBot'\n    runs-on: ubuntu-latest\n    steps:\n      - uses: JuliaRegistries/TagBot@v1\n        with:\n          token: ${{ secrets.GITHUB_TOKEN }}\n          ssh: ${{ secrets.DOCUMENTER_KEY }}\n"
  },
  {
    "path": ".github/workflows/docs.yml",
    "content": "name: Documentation\n\non:\n  push:\n    branches:\n      - main\n    tags: '*'\n  pull_request:\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v2\n      - uses: julia-actions/setup-julia@latest\n        with:\n          version: '1.7'\n      - name: Install dependencies\n        run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate(); Pkg.build()'\n      - name: Build and deploy\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token\n          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key\n        run: julia --project=docs/ docs/make.jl"
  },
  {
    "path": ".github/workflows/test.yml",
    "content": "\nname: Test\n\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    branches:\n      - main\njobs:\n  build:\n    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }}\n    runs-on: ${{ matrix.os }}\n    strategy:\n      fail-fast: false\n      matrix:\n        version:\n          - '1.6'\n          - '1.7'\n        os:\n          - ubuntu-latest\n        arch:\n          - x64\n    steps:\n      - uses: actions/checkout@v2\n      - name: Set up JDK\n        uses: actions/setup-java@v2\n        with:\n          java-version: '8'\n          distribution: 'adopt'\n      - uses: julia-actions/setup-julia@latest\n        with:\n          version: ${{ matrix.version }}\n          arch: ${{ matrix.arch }}\n      - uses: julia-actions/julia-buildpkg@latest\n      - uses: julia-actions/julia-runtest@latest\n      - uses: julia-actions/julia-uploadcodecov@latest\n        env:\n          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}"
  },
  {
    "path": ".gitignore",
    "content": "*.jl.cov\n*.jl.mem\n*~\n.idea/\n.vscode/\ntarget/\nproject/\n*.class\n*.jar\n.juliahistory\n*.iml\n*.log\nnohup.out\ndocs/build\ndocs/site\n.DS_Store\n\ndeps/hadoop\n\nManifest.toml\n\n# hidden files\n_*"
  },
  {
    "path": "LICENSE.md",
    "content": "The Spark.jl package is licensed under the MIT \"Expat\" License:\n\n> Copyright (c) 2015: dfdx.\n>\n> Permission is hereby granted, free of charge, to any person obtaining\n> a copy of this software and associated documentation files (the\n> \"Software\"), to deal in the Software without restriction, including\n> without limitation the rights to use, copy, modify, merge, publish,\n> distribute, sublicense, and/or sell copies of the Software, and to\n> permit persons to whom the Software is furnished to do so, subject to\n> the following conditions:\n>\n> The above copyright notice and this permission notice shall be\n> included in all copies or substantial portions of the Software.\n>\n> THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n> EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n> MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n> IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n> CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n> TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n> SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "Project.toml",
    "content": "name = \"Spark\"\nuuid = \"e3819d11-95af-5eea-9727-70c091663a01\"\nversion = \"0.6.1\"\n\n[deps]\nDates = \"ade2ca70-3891-5945-98fb-dc099432e06a\"\nIteratorInterfaceExtensions = \"82899510-4779-5014-852e-03e436cf321d\"\nJavaCall = \"494afd89-becb-516b-aafa-70d2670c0337\"\nReexport = \"189a3867-3050-52da-a836-e630ba90ab69\"\nSerialization = \"9e88b42a-f829-5b0c-bbe9-9e923198166b\"\nSockets = \"6462fe0b-24de-5631-8697-dd941f90decc\"\nStatistics = \"10745b16-79ce-11e8-11f9-7d13ad32a3b2\"\nTableTraits = \"3783bdb8-4a98-5b6b-af9a-565f29a5fe9c\"\nUmlaut = \"92992a2b-8ce5-4a9c-bb9d-58be9a7dc841\"\n\n[compat]\nIteratorInterfaceExtensions = \"1\"\nJavaCall = \"0.7, 0.8\"\nReexport = \"1.2\"\nTableTraits = \"1\"\nUmlaut = \"0.2\"\njulia = \"1.6\"\n\n[extras]\nDataFrames = \"a93c6f00-e57d-5684-b7b6-d8193f3e46c0\"\nTest = \"8dfed614-e22c-5e08-85e1-65c5234f0b40\"\n\n[targets]\ntest = [\"Test\", \"DataFrames\"]\n"
  },
  {
    "path": "README.md",
    "content": "# Spark.jl\n\nA Julia interface to Apache Spark™\n\n| **Latest Version** | **Documentation** | **PackageEvaluator** | **Build Status** |\n|:------------------:|:-----------------:|:--------------------:|:----------------:|\n| [![][version-img]][version-url] | [![][docs-latest-img]][docs-latest-url] | [![PkgEval][pkgeval-img]][pkgeval-url]  | [![][gh-test-img]][gh-test-url]  |\n\n\n\nSpark.jl provides an interface to Apache Spark™ platform, including SQL / DataFrame and Structured Streaming. It closely follows the PySpark API, making it easy to translate existing Python code to Julia.\n\nSpark.jl supports multiple cluster types (in client mode), and can be considered as an analogue to PySpark or RSpark within the Julia ecosystem. It supports running within on-premise installations, as well as hosted instance such as Amazon EMR and Azure HDInsight.\n\n**[Documentation][docs-latest-url]**\n\n\n## Trademarks\n\nApache®, [Apache Spark and Spark](http://spark.apache.org) are registered trademarks, or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.\n\n[docs-latest-img]: https://img.shields.io/badge/docs-latest-blue.svg\n[docs-latest-url]: http://dfdx.github.io/Spark.jl/dev\n\n[gh-test-img]: https://github.com/dfdx/Spark.jl/actions/workflows/test.yml/badge.svg\n[gh-test-url]: https://github.com/dfdx/Spark.jl/actions/workflows/test.yml\n\n[codecov-img]: https://codecov.io/gh/dfdx/Spark.jl/branch/master/graph/badge.svg\n[codecov-url]: https://codecov.io/gh/dfdx/Spark.jl\n\n[issues-url]: https://github.com/dfdx/Spark.jl/issues\n\n[pkgeval-img]: https://juliahub.com/docs/Spark/pkgeval.svg\n[pkgeval-url]: https://juliahub.com/ui/Packages/Spark/zpJEw\n\n[version-img]: https://juliahub.com/docs/Spark/version.svg\n[version-url]: https://juliahub.com/ui/Packages/Spark/zpJEw\n"
  },
  {
    "path": "deps/build.jl",
    "content": "mvn = Sys.iswindows() ? \"mvn.cmd\" : \"mvn\"\nwhich = Sys.iswindows() ? \"where\" : \"which\"\n\ntry\n    run(`$which $mvn`)\ncatch\n    error(\"Cannot find maven. Is it installed?\")\nend\n\nSPARK_VERSION = get(ENV, \"BUILD_SPARK_VERSION\", \"3.2.1\")\nSCALA_VERSION = get(ENV, \"BUILD_SCALA_VERSION\", \"2.13\")\nSCALA_BINARY_VERSION = get(ENV, \"BUILD_SCALA_VERSION\", \"2.13.6\")\n\ncd(joinpath(dirname(@__DIR__), \"jvm/sparkjl\")) do\n    run(`$mvn clean package -Dspark.version=$SPARK_VERSION -Dscala.version=$SCALA_VERSION -Dscala.binary.version=$SCALA_BINARY_VERSION`)\nend\n"
  },
  {
    "path": "docs/.gitignore",
    "content": "data/"
  },
  {
    "path": "docs/Project.toml",
    "content": "[deps]\nDocumenter = \"e30172f5-a6a5-5a46-863b-614d45cd2de4\"\nSpark = \"e3819d11-95af-5eea-9727-70c091663a01\"\n"
  },
  {
    "path": "docs/localdocs.sh",
    "content": "#!/bin/bash\njulia -e 'using LiveServer; serve(dir=\"build\")'"
  },
  {
    "path": "docs/make.jl",
    "content": "using Documenter\nusing Spark\n\nmakedocs(\n    sitename = \"Spark\",\n    format = Documenter.HTML(),\n    modules = [Spark],\n    pages = Any[\n        \"Introduction\" => \"index.md\",\n        \"SQL / DataFrames\" => \"sql.md\",\n        \"Structured Streaming\" => \"streaming.md\",\n        \"API Reference\" => \"api.md\"\n    ],\n)\n\ndeploydocs(\n    repo = \"github.com/dfdx/Spark.jl.git\",\n    devbranch = \"main\",\n)"
  },
  {
    "path": "docs/src/api.md",
    "content": "```@meta\nCurrentModule = Spark\n```\n\n```@docs\nSparkSessionBuilder\nSparkSession\nRuntimeConfig\nDataFrame\nGroupedData\nColumn\nRow\nStructType\nStructField\nWindow\nWindowSpec\nDataFrameReader\nDataFrameWriter\nDataStreamReader\nDataStreamWriter\nStreamingQuery\n@chainable\nDotChainer\n```\n\n```@index\n```\n"
  },
  {
    "path": "docs/src/index.md",
    "content": "# Introduction\n\n## Overview\n\nSpark.jl provides an interface to Apache Spark™ platform, including SQL / DataFrame and Structured Streaming. It closely follows the PySpark API, making it easy to translate existing Python code to Julia.\n\nSpark.jl supports multiple cluster types (in client mode), and can be considered as an analogue to PySpark or RSpark within the Julia ecosystem. It supports running within on-premise installations, as well as hosted instance such as Amazon EMR and Azure HDInsight.\n\n### Installation\n\nSpark.jl requires at least JDK 8/11 and Maven to be installed and available in PATH.\n\n```julia\n] add Spark\n```\n\nTo link against a specific version of Spark, also run:\n\n```julia\nENV[\"BUILD_SPARK_VERSION\"] = \"3.2.1\"   # version you need\n] build Spark\n```\n\n### Quick Example\n\nNote that most types in Spark.jl support dot notation for calling functions, e.g. `x.foo(y)` is expanded into `foo(x, y)`.\n\n```@example\nusing Spark\n\nspark = SparkSession.builder.appName(\"Main\").master(\"local\").getOrCreate()\ndf = spark.createDataFrame([[\"Alice\", 19], [\"Bob\", 23]], \"name string, age long\")\nrows = df.select(Column(\"age\") + 1).collect()\nfor row in rows\n    println(row[1])\nend\n```\n\n### Cluster Types\n\nThis package supports multiple cluster types (in client mode): `local`, `standalone`, `mesos` and `yarn`. The location of the cluster (in case of mesos or standalone) or the cluster type (in case of local or yarn) must be passed as a parameter `master` when creating a Spark context. For YARN based clusters, the cluster parameters are picked up from `spark-defaults.conf`, which must be accessible via a `SPARK_HOME` environment variable.\n\n## Current Limitations\n\n* Jobs can be submitted from Julia process attached to the cluster in `client` deploy mode. `Cluster` mode is not fully supported, and it is uncertain if it is useful in the Julia context.\n* Since records are serialised between Java and Julia at the edges, the maximum size of a single row in an RDD is 2GB, due to Java array indices being limited to 32 bits.\n\n## Trademarks\n\nApache®, [Apache Spark and Spark](http://spark.apache.org) are registered trademarks, or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.\n"
  },
  {
    "path": "docs/src/sql.md",
    "content": "```@meta\nCurrentModule = Spark\n```\n\n# SQL / DataFrames\n\nThis is a quick introduction into the Spark.jl core functions. It closely follows the official [PySpark tutorial](https://spark.apache.org/docs/latest/api/python/getting_started/quickstart_df.html) and copies many examples verbatim. In most cases, PySpark docs should work for Spark.jl as is or with little adaptation.\n\nSpark.jl applications usually start by creating a `SparkSession`:\n\n```@example\nusing Spark\n\nspark = SparkSession.builder.appName(\"Main\").master(\"local\").getOrCreate()\n```\n\nNote that here we use dot notation to chain function invocations. This makes the code more concise and also mimics Python API, making translation of examples easier. The same example could also be written as:\n\n```julia\nusing Spark\nimport Spark: appName, master, getOrCreate\n\nbuilder = SparkSession.builder\nbuilder = appName(builder, \"Main\")\nbuilder = master(builder, \"local\")\nspark = getOrCreate(builder)\n```\n\nSee [`@chainable`](@ref) for the details of the dot notation.\n\n\n## DataFrame Creation\n\n\nIn simple cases, a Spark DataFrame can be created via `SparkSession.createDataFrame`. E.g. from a list of rows:\n\n```@example df\nusing Spark                                 # hide\nspark = SparkSession.builder.getOrCreate()  # hide\n\nusing Dates\n\ndf = spark.createDataFrame([\n    Row(a=1, b=2.0, c=\"string1\", d=Date(2000, 1, 1), e=DateTime(2000, 1, 1, 12, 0)),\n    Row(a=2, b=3.0, c=\"string2\", d=Date(2000, 2, 1), e=DateTime(2000, 1, 2, 12, 0)),\n    Row(a=4, b=5.0, c=\"string3\", d=Date(2000, 3, 1), e=DateTime(2000, 1, 3, 12, 0))\n])\nprintln(df)\n```\nOr using an explicit schema:\n\n```@example df\ndf = spark.createDataFrame([\n    [1, 2.0, \"string1\", Date(2000, 1, 1), DateTime(2000, 1, 1, 12, 0)],\n    [2, 3.0, \"string2\", Date(2000, 2, 1), DateTime(2000, 1, 2, 12, 0)],\n    [3, 4.0, \"string3\", Date(2000, 3, 1), DateTime(2000, 1, 3, 12, 0)]\n], \"a long, b double, c string, d date, e timestamp\")\nprintln(df)\n```\n\n\n## Viewing Data\n\nThe top rows of a DataFrame can be displayed using `DataFrame.show()`:\n\n```@example df\ndf.show(1)\n```\n\nYou can see the DataFrame’s schema and column names as follows:\n\n```@example df\ndf.columns()\n```\n\n```@example df\ndf.printSchema()\n```\n\nShow the summary of the DataFrame\n\n```@example df\ndf.select(\"a\", \"b\", \"c\").describe().show()\n```\n\n`DataFrame.collect()` collects the distributed data to the driver side as the local data in Julia. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side.\n\n```@example df\ndf.collect()\n```\n\nIn order to avoid throwing an out-of-memory exception, use `take()` or `tail()`.\n\n```@example df\ndf.take(1)\n```\n\n## Selecting and Accessing Data\n\nSpark.jl `DataFrame` is lazily evaluated and simply selecting a column does not trigger the computation but it returns a `Column` instance.\n\n```@example df\ndf.a\n```\n\nIn fact, most of column-wise operations return `Column`s.\n\n```@example df\ntypeof(df.c) == typeof(df.c.upper()) == typeof(df.c.isNull())\n```\n\nThese `Column`s can be used to select the columns from a `DataFrame`. 
For example, `select()` takes `Column` instances and returns another `DataFrame`.\n\n```@example df\ndf.select(df.c).show()\n```\n\nAssign a new `Column` instance.\n\n```@example df\ndf.withColumn(\"upper_c\", df.c.upper()).show()\n```\n\nTo select a subset of rows, use `filter()` (a.k.a. `where()`).\n\n```@example df\ndf.filter(df.a == 1).show()\n```\n\n## Grouping Data\n\nA Spark.jl `DataFrame` also provides a way of handling grouped data using the common split-apply-combine strategy. It groups the data by a certain condition, applies a function to each group, and then combines them back into a `DataFrame`.\n\n```@example gdf\nusing Spark   # hide\nspark = SparkSession.builder.appName(\"Main\").master(\"local\").getOrCreate()  # hide\n\ndf = spark.createDataFrame([\n    [\"red\", \"banana\", 1, 10], [\"blue\", \"banana\", 2, 20], [\"red\", \"carrot\", 3, 30],\n    [\"blue\", \"grape\", 4, 40], [\"red\", \"carrot\", 5, 50], [\"black\", \"carrot\", 6, 60],\n    [\"red\", \"banana\", 7, 70], [\"red\", \"grape\", 8, 80]], [\"color string\", \"fruit string\", \"v1 long\", \"v2 long\"])\ndf.show()\n```\n\nGrouping and then applying the `avg()` function to the resulting groups:\n\n```@example gdf\ndf.groupby(\"color\").avg().show()\n```\n\n## Getting Data in/out\n\nSpark.jl can read and write a variety of data formats. Here are a few examples.\n\n### CSV\n\n```@example gdf\ndf.write.option(\"header\", true).csv(\"data/fruits.csv\")\nspark.read.option(\"header\", true).csv(\"data/fruits.csv\")\n```\n\n### Parquet\n\n```@example gdf\ndf.write.parquet(\"data/fruits.parquet\")\nspark.read.parquet(\"data/fruits.parquet\")\n```\n\n### ORC\n\n```@example gdf\ndf.write.orc(\"data/fruits.orc\")\nspark.read.orc(\"data/fruits.orc\")\n```\n\n## Working with SQL\n\n`DataFrame` and Spark SQL share the same execution engine, so they can be used interchangeably. For example, you can register the `DataFrame` as a table and run SQL queries on it as below:\n\n```@example gdf\ndf.createOrReplaceTempView(\"tableA\")\nspark.sql(\"SELECT count(*) from tableA\").show()\n```\n\n```@example gdf\nspark.sql(\"SELECT fruit, sum(v1) as s FROM tableA GROUP BY fruit ORDER BY s\").show()\n```"
  },
  {
    "path": "docs/src/streaming.md",
    "content": "# Structured Streaming\n\nStructured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. In this tutorial, we explore basic API of the Structured Streaming in Spark.jl. For a general introduction into the topic and more advanced examples follow the [official guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html) and adapt Python snippets.\n\nLet’s say you want to maintain a running word count of text data received from a data server listening on a TCP socket. We will use Netcat to send this data:\n\n```\nnc -lk 9999\n```\n\nAs usually, we start by creating a SparkSession:\n\n```@example basic\nusing Spark\n\nspark = SparkSession.\n    builder.\n    master(\"local\").\n    appName(\"StructuredNetworkWordCount\").\n    getOrCreate()\n```\n\nNext, let’s create a streaming DataFrame that represents text data received from a server listening on localhost:9999, and transform the DataFrame to calculate word counts.\n\n```@example basic\n# Create DataFrame representing the stream of input lines from connection to localhost:9999\nlines = spark.\n    readStream.\n    format(\"socket\").\n    option(\"host\", \"localhost\").\n    option(\"port\", 9999).\n    load()\n\n# Split the lines into words\nwords = lines.select(\n    lines.value.split(\" \").explode().alias(\"word\")\n)\n\n# Generate running word count\nwordCounts = words.groupBy(\"word\").count()\n```\n\nThis `lines` DataFrame represents an unbounded table containing the streaming text data. This table contains one column of strings named “value”, and each line in the streaming text data becomes a row in the table. Note, that this is not currently receiving any data as we are just setting up the transformation, and have not yet started it. Next, we have used two built-in SQL functions - `split` and `explode`, to split each line into multiple rows with a word each. In addition, we use the function `alias` to name the new column as \"word\". Finally, we have defined the `wordCounts` DataFrame by grouping by the unique values in the Dataset and counting them. Note that this is a streaming DataFrame which represents the running word counts of the stream.\n\nWe have now set up the query on the streaming data. All that is left is to actually start receiving data and computing the counts. To do this, we set it up to print the complete set of counts (specified by `outputMode(\"complete\"))` to the console every time they are updated. 
And then start the streaming computation using `start()`.\n\n```julia\nquery = wordCounts.\n    writeStream.\n    outputMode(\"complete\").\n    format(\"console\").\n    start()\n\nquery.awaitTermination()\n```\n\nNow type a few lines in the Netcat terminal window and you should see output similar to this:\n\n```julia\njulia> query.awaitTermination()\n-------------------------------------------\nBatch: 0\n-------------------------------------------\n+----+-----+\n|word|count|\n+----+-----+\n+----+-----+\n\n-------------------------------------------\nBatch: 1\n-------------------------------------------\n+------------+-----+\n|        word|count|\n+------------+-----+\n|         was|    1|\n|         for|    1|\n|   beginning|    1|\n|       Julia|    1|\n|    designed|    1|\n|         the|    1|\n|        high|    1|\n|        from|    1|\n|performance.|    1|\n+------------+-----+\n\n-------------------------------------------\nBatch: 2\n-------------------------------------------\n+------------+-----+\n|        word|count|\n+------------+-----+\n|         was|    1|\n|         for|    1|\n|   beginning|    1|\n|       Julia|    2|\n|          is|    1|\n|    designed|    1|\n|         the|    1|\n|        high|    1|\n|        from|    1|\n|       typed|    1|\n|performance.|    1|\n| dynamically|    1|\n+------------+-----+\n```"
  },
  {
    "path": "examples/InstallJuliaEMR.sh",
    "content": "#!/bin/bash\n\n## This is a bootstrap action for installing Julia and Spark.jl on an Amazon EMR cluster.\n## It's been tested with Julia 1.6.2 and EMR 5.33 and performs the following actions:\n## 1. Installs Julia 1.6.2 and Maven 3.8.1\n## 2. Configures the \"hadoop\" user's startup.jl to load Spark/Hadoop dependencies\n## 3. Creates a shared package directory in which to install Spark.jl\n## 4. Installs v0.5.1 of Spark.jl for the necessary Spark/Scala versions\n## \n## You can run this script manually on every node or upload it to S3 and run it as a bootstrap action.\n## When creating the EMR cluster, set the \"spark-default\" configuration with the following JSON.\n## Reference: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-configure.html\n#\n# [\n#   {\n#     \"Classification\": \"spark-defaults\",\n#     \"Properties\": {\n#       \"spark.executorEnv.JULIA_HOME\": \"/usr/local/julia-1.6.2/bin\",\n#       \"spark.executorEnv.JULIA_DEPOT_PATH\": \"/usr/local/share/julia/v1.6.2\",\n#       \"spark.executorEnv.JULIA_VERSION\": \"v1.6.2\"\n#     }\n#   }\n# ]\n\nexport JULIA_VERSION=\"1.6.2\"\nexport JULIA_DL_URL=\"https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.2-linux-x86_64.tar.gz\"\n\n# install julia\ncurl -sL ${JULIA_DL_URL} | sudo tar -xz -C /usr/local/\nJULIA_DIR=/usr/local/julia-${JULIA_VERSION}\n\n# install maven\ncurl -s https://mirrors.sonic.net/apache/maven/maven-3/3.8.1/binaries/apache-maven-3.8.1-bin.tar.gz | sudo tar -xz -C /usr/local/\nMAVEN_DIR=/usr/local/apache-maven-3.8.1\n\n# Update the `hadoop` user's current and future path with Maven and Julia.\n# This allows us to download/install Spark.jl\nexport PATH=${MAVEN_DIR}/bin:${JULIA_DIR}/bin:${PATH}\necho \"export PATH=${MAVEN_DIR}/bin:${JULIA_DIR}/bin:${PATH}\" >> /home/hadoop/.bashrc\n\n# Create a shared package dir for the installation\nsudo mkdir -p /usr/local/share/julia/v${JULIA_VERSION} && \\\n    sudo chown -R hadoop.hadoop /usr/local/share/julia/ && \\\n    sudo chmod -R go+r /usr/local/share/julia/\n\n# Create a config file that adds Spark environment variables\n# and adds the new package dir to the DEPOT_PATH.\n# This ensures that Spark.jl gets installed to a shared location.\nexport TARGET_USER=hadoop\nexport JULIA_CFG_DIR=\"/home/${TARGET_USER}/.julia/config\"\nmkdir -p ${JULIA_CFG_DIR} && \\\n    touch ${JULIA_CFG_DIR}/startup.jl && \\\n    chown -R hadoop.hadoop /home/hadoop/.julia\n\necho 'ENV[\"SPARK_HOME\"] = \"/usr/lib/spark/\"' >> \"${JULIA_CFG_DIR}/startup.jl\"\necho 'ENV[\"HADOOP_CONF_DIR\"] = \"/etc/hadoop/conf\"' >> \"${JULIA_CFG_DIR}/startup.jl\"\necho 'push!(DEPOT_PATH, \"/usr/local/share/julia/v'${JULIA_VERSION}'\")' >> \"${JULIA_CFG_DIR}/startup.jl\"\n\n# Install Spark.jl - we need to explicity define Spark/Scala versions here\nBUILD_SCALA_VERSION=2.11.12 \\\nBUILD_SPARK_VERSION=2.4.7 \\\nJULIA_COPY_STACKS=yes \\\nJULIA_DEPOT_PATH=/usr/local/share/julia/v${JULIA_VERSION} \\\njulia -e 'using Pkg;Pkg.add(Pkg.PackageSpec(;name=\"Spark\", version=\"0.5.1\"));using Spark;'\n"
  },
  {
    "path": "examples/InstallJuliaHDI.sh",
    "content": "#!/usr/bin/env bash\n\n# An example shell script that can be used on Azure HDInsight to install Julia to HDI Spark cluster\n# This script, or a derivative should be set as a script action when deploying an HDInsight cluster\n\n# install julia v0.6\ncurl -sL https://julialang-s3.julialang.org/bin/linux/x64/1.0/julia-1.0.5-linux-x86_64.tar.gz | sudo tar -xz -C /usr/local/\nJULIA_HOME=/usr/local/julia-1.0.5/bin\n\n# install maven\ncurl -s http://mirror.olnevhost.net/pub/apache/maven/binaries/apache-maven-3.2.2-bin.tar.gz | sudo tar -xz -C /usr/local/\nexport M2_HOME=/usr/local/apache-maven-3.2.2\nexport PATH=$M2_HOME/bin:$PATH\n\n# Create Directories\nexport JULIA_DEPOT_PATH=\"/home/hadoop/.julia/\"\nmkdir -p ${JULIA_DEPOT_PATH}\n\n# Set Environment variables for current session\nexport PATH=${PATH}:${MVN_HOME}/bin:${JULIA_HOME}/bin\nexport HOME=\"/root\"\necho \"Installing Julia Packages in Julia Folder ${JULIA_DEPOT_PATH}\"\n#Install Spark.jl\n$JULIA_HOME/julia -e 'using Pkg; Pkg.add(\"Spark\");Pkg.build(\"Spark\"); using Spark;'\ndeclare -a users=(\"spark\" \"yarn\" \"hadoop\" \"sshuser\") #TODO: change accordingly\nSPARK_HOME=/usr/hdp/current/spark2-client\necho \"spark.executorEnv.JULIA_HOME ${JULIA_HOME}\" >> ${SPARK_HOME}/conf/spark-defaults.conf\necho \"spark.executorEnv.JULIA_DEPOT_PATH ${JULIA_DEPOT_PATH}\" >> ${SPARK_HOME}/conf/spark-defaults.conf\necho \"spark.executorEnv.JULIA_VERSION v1.0.5\" >> ${SPARK_HOME}/conf/spark-defaults.conf\nfor cusr in \"${users[@]}\"; do\n   echo \" Adding vars for ser ${cusr}\"\n   echo \"\" >> /home/${cusr}/.bashrc\n   echo \"export MVN_HOME=/usr/local/apache-maven-3.2.2\" >> /home/${cusr}/.bashrc\n   echo \"export PATH=${PATH}:${MVN_HOME}/bin:${JULIA_HOME}\" >> /home/${cusr}/.bashrc\n   echo \"export YARN_CONF_DIR=/etc/hadoop/conf\" >> /home/${cusr}/.bashrc\n   echo \"export JULIA_HOME=${JULIA_HOME}\" >> /home/${cusr}/.bashrc\n   echo \"export JULIA_DEPOT_PATH=${JULIA_DEPOT_PATH}\" >> /home/${cusr}/.bashrc\n   echo \"source ${SPARK_HOME}/bin/load-spark-env.sh\" >> /home/${cusr}/.bashrc\n   # Set Package folder permissions\n   setfacl -R -m u:${cusr}:rwx ${JULIA_DEPOT_PATH};\ndone\n"
  },
  {
    "path": "examples/SparkSubmitJulia.scala",
    "content": "/**\n* A simple scala class that can be used along with spark-submit to\n* submit a Julia script to be run in a spark cluster. E.g.:\n*\n* $ spark-submit --class org.julialang.juliaparallel.SparkSubmitJulia \\\n*       --master yarn \\\n*       --deploy-mode cluster \\\n*       --driver-memory 4g \\\n*       --executor-memory 2g \\\n*       --executor-cores 1 \\\n*       spark-julia_2.11-1.0.jar \\\n*       /opt/julia/depot/helloworld.jl \\\n*       /usr/local/julia/bin/julia \\\n*       /opt/julia/depot\n*\n* To compile, use `src/main/scala/SparkSubmitJulia.scala` with a build.sbt like:\n* ---------------------\n* name := \"Spark Submit Julia\"\n* version := \"1.0\"\n* scalaVersion := \"2.11.8\"\n* libraryDependencies += \"org.apache.spark\" % \"spark-sql_2.11\" % \"2.4.4\"\n* ---------------------\n*/\npackage org.julialang.juliaparallel\n\nimport scala.sys.process._\nimport org.apache.spark.sql.SparkSession\n\nobject SparkSubmitJulia {\n  def main(args: Array[String]): Unit = {\n    val spark = SparkSession\n      .builder\n      .appName(\"Spark Submit Julia\")\n      .getOrCreate()\n    val script = args(0) // e.g.: \"/opt/julia/depot/helloworld.jl\"\n    val juliapath = args(1) // e.g.: \"/usr/local/julia/bin/julia\"\n    val juliadepotpath = args(2)  // e.g.: \"/opt/julia/depot\"\n    val exitcode = Process(Seq(juliapath, script), None, \"JULIA_DEPOT_PATH\" -> juliadepotpath).!\n    println(s\"Completed with exitcode $exitcode\")\n    spark.stop()\n  }\n}\n"
  },
  {
    "path": "jvm/sparkjl/dependency-reduced-pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\r\n  <modelVersion>4.0.0</modelVersion>\r\n  <groupId>sparkjl</groupId>\r\n  <artifactId>sparkjl</artifactId>\r\n  <name>sparkjl</name>\r\n  <version>0.2</version>\r\n  <build>\r\n    <pluginManagement>\r\n      <plugins>\r\n        <plugin>\r\n          <groupId>net.alchim31.maven</groupId>\r\n          <artifactId>scala-maven-plugin</artifactId>\r\n          <version>4.6.1</version>\r\n        </plugin>\r\n        <plugin>\r\n          <artifactId>maven-compiler-plugin</artifactId>\r\n          <version>3.8.1</version>\r\n          <configuration>\r\n            <source>1.8</source>\r\n            <target>1.8</target>\r\n          </configuration>\r\n        </plugin>\r\n      </plugins>\r\n    </pluginManagement>\r\n    <plugins>\r\n      <plugin>\r\n        <groupId>net.alchim31.maven</groupId>\r\n        <artifactId>scala-maven-plugin</artifactId>\r\n        <executions>\r\n          <execution>\r\n            <id>scala-compile-first</id>\r\n            <phase>process-resources</phase>\r\n            <goals>\r\n              <goal>add-source</goal>\r\n              <goal>compile</goal>\r\n            </goals>\r\n          </execution>\r\n          <execution>\r\n            <id>scala-test-compile</id>\r\n            <phase>process-test-resources</phase>\r\n            <goals>\r\n              <goal>testCompile</goal>\r\n            </goals>\r\n          </execution>\r\n        </executions>\r\n      </plugin>\r\n      <plugin>\r\n        <artifactId>maven-compiler-plugin</artifactId>\r\n        <executions>\r\n          <execution>\r\n            <phase>compile</phase>\r\n            <goals>\r\n              <goal>compile</goal>\r\n            </goals>\r\n          </execution>\r\n        </executions>\r\n        <configuration>\r\n          <source>1.8</source>\r\n          <target>1.8</target>\r\n        </configuration>\r\n      </plugin>\r\n      <plugin>\r\n        <artifactId>maven-shade-plugin</artifactId>\r\n        <version>3.3.0</version>\r\n        <executions>\r\n          <execution>\r\n            <phase>package</phase>\r\n            <goals>\r\n              <goal>shade</goal>\r\n            </goals>\r\n            <configuration>\r\n              <artifactSet>\r\n                <excludes>\r\n                  <exclude>META-INF/*.SF</exclude>\r\n                  <exclude>META-INF/*.DSA</exclude>\r\n                  <exclude>META-INF/*.RSA</exclude>\r\n                  <exclude>classworlds:classworlds</exclude>\r\n                  <exclude>junit:junit</exclude>\r\n                  <exclude>jmock:*</exclude>\r\n                  <exclude>*:xml-apis</exclude>\r\n                  <exclude>org.apache.maven:lib:tests</exclude>\r\n                  <exclude>log4j:log4j:jar:</exclude>\r\n                </excludes>\r\n              </artifactSet>\r\n            </configuration>\r\n          </execution>\r\n        </executions>\r\n      </plugin>\r\n    </plugins>\r\n  </build>\r\n  <properties>\r\n    <scala.binary.version>2.13.6</scala.binary.version>\r\n    <spark.version>[3.2.0,3.2.1]</spark.version>\r\n    <java.version>1.11</java.version>\r\n    <PermGen>64m</PermGen>\r\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\r\n    <scala.version>2.13</scala.version>\r\n    
<MaxPermGen>512m</MaxPermGen>\r\n    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\r\n  </properties>\r\n</project>\r\n"
  },
  {
    "path": "jvm/sparkjl/old_src/InputIterator.scala",
    "content": "package org.apache.spark.api.julia\n\nimport java.io.{BufferedInputStream, DataInputStream, EOFException}\nimport java.net.Socket\nimport org.apache.spark.internal.Logging\nimport org.apache.commons.compress.utils.Charsets\nimport org.apache.spark._\n\n\n/**\n * Iterator that connects to a Julia process and reads data back to JVM.\n * */\nclass InputIterator[T](context: TaskContext, worker: Socket, outputThread: OutputThread) extends Iterator[T] with Logging {\n\n  val BUFFER_SIZE = 65536\n  \n  val env = SparkEnv.get\n  val stream = new DataInputStream(new BufferedInputStream(worker.getInputStream, BUFFER_SIZE))\n\n  override def next(): T = {\n    val obj = _nextObj\n    if (hasNext) {\n      _nextObj = read()\n    }\n    obj\n  }\n\n  private def read(): T = {\n    if (outputThread.exception.isDefined) {\n      throw outputThread.exception.get\n    }\n    try {\n      JuliaRDD.readValueFromStream(stream).asInstanceOf[T]\n    } catch {\n\n      case e: Exception if context.isInterrupted =>\n        logDebug(\"Exception thrown after task interruption\", e)\n        throw new TaskKilledException\n\n      case e: Exception if env.isStopped =>\n        logDebug(\"Exception thrown after context is stopped\", e)\n        null.asInstanceOf[T]  // exit silently\n\n      case e: Exception if outputThread.exception.isDefined =>\n        logError(\"Julia worker exited unexpectedly (crashed)\", e)\n        logError(\"This may have been caused by a prior exception:\", outputThread.exception.get)\n        throw outputThread.exception.get\n\n      case eof: EOFException =>\n        throw new SparkException(\"Julia worker exited unexpectedly (crashed)\", eof)\n    }\n  }\n\n  var _nextObj = read()\n\n  override def hasNext: Boolean = _nextObj != null\n\n}\n"
  },
  {
    "path": "jvm/sparkjl/old_src/JuliaRDD.scala",
    "content": "package org.apache.spark.api.julia\n\nimport java.io._\nimport java.net._\nimport sys.process.Process\nimport java.nio.file.Paths\n\nimport org.apache.commons.compress.utils.Charsets\nimport org.apache.spark._\nimport org.apache.spark.api.java.{JavaPairRDD, JavaRDD, JavaSparkContext}\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.rdd.RDD\n\nimport scala.collection.JavaConversions._\nimport scala.language.existentials\nimport scala.reflect.ClassTag\n\nclass AbstractJuliaRDD[T:ClassTag](\n    @transient parent: RDD[_],\n    command: Array[Byte]\n) extends RDD[T](parent) {\n\n  val preservePartitioning = true\n  val reuseWorker = true\n\n  override def getPartitions: Array[Partition] = firstParent.partitions\n\n  // Note: needs to override in later versions of Spark\n  // override def getNumPartitions: Int = firstParent.partitions.length\n\n  override val partitioner: Option[Partitioner] = {\n    if (preservePartitioning) firstParent.partitioner else None\n  }\n\n\n  override def compute(split: Partition, context: TaskContext): Iterator[T] = {\n    val worker: Socket = JuliaRDD.createWorker()\n    // Start a thread to feed the process input from our parent's iterator\n    val outputThread = new OutputThread(context, firstParent.iterator(split, context), worker, command, split)\n    outputThread.start()\n    // Return an iterator that read lines from the process's stdout\n    val resultIterator = new InputIterator[T](context, worker, outputThread)\n    new InterruptibleIterator(context, resultIterator)\n  }\n}\n\n\nclass JuliaRDD(@transient parent: RDD[_],command: Array[Byte]) extends AbstractJuliaRDD[Any](parent, command) {\n  def asJavaRDD(): JavaRDD[Any] = {\n    JavaRDD.fromRDD(this)\n  }\n}\n\nprivate object SpecialLengths {\n  val END_OF_DATA_SECTION = -1\n  val JULIA_EXCEPTION_THROWN = -2\n  val TIMING_DATA = -3\n  val END_OF_STREAM = -4\n  val NULL = -5\n  val PAIR_TUPLE = -6\n  val ARRAY_VALUE = -7\n  val ARRAY_END = -8\n  val INTEGER = -9\n  val STRING_START = -100\n}\n\nobject JuliaRDD extends Logging {\n\n  def fromRDD[T](rdd: RDD[T], command: Array[Byte]): JuliaRDD =\n    new JuliaRDD(rdd, command)\n\n  def createWorker(): Socket = {\n    var serverSocket: ServerSocket = null\n    try {\n      serverSocket = new ServerSocket(0, 1, InetAddress.getByAddress(Array(127, 0, 0, 1).map(_.toByte)))\n\n      // Create and start the worker\n      val juliaHome = sys.env.get(\"JULIA_HOME\").getOrElse(\"\")\n      val juliaVersion = sys.env.get(\"JULIA_VERSION\").getOrElse(\"v0.7\")\n      val juliaCommand = Paths.get(juliaHome, \"julia\").toString()\n      val juliaPkgDir =  sys.env.get(\"JULIA_PKGDIR\") match {\n          case Some(i) => Paths.get(i, juliaVersion, \"Spark\").toString()\n          case None => Process(juliaCommand +\n            \" -e println(dirname(dirname(Base.find_package(\\\"Spark\\\"))))\").!!.trim\n      }\n\n      val pb = new ProcessBuilder(juliaCommand, Paths.get(juliaPkgDir, \"src\", \"worker_runner.jl\").toString())\n\n\n      pb.directory(new File(SparkFiles.getRootDirectory()))\n      // val workerEnv = pb.environment()\n      // workerEnv.putAll(envVars)\n      val worker = pb.start()\n\n      // Redirect worker stdout and stderr\n      StreamUtils.redirectStreamsToStderr(worker.getInputStream, worker.getErrorStream)\n\n      // Tell the worker our port\n      val out = new OutputStreamWriter(worker.getOutputStream)\n      out.write(serverSocket.getLocalPort + \"\\n\")\n      out.flush()\n\n      // Wait for it to connect to 
our socket\n      serverSocket.setSoTimeout(120000)\n      try {\n        val socket = serverSocket.accept()\n        // workers.put(socket, worker)\n        return socket\n      } catch {\n        case e: Exception =>\n          throw new SparkException(\"Julia worker did not connect back in time\", e)\n      }\n    } finally {\n      if (serverSocket != null) {\n        serverSocket.close()\n      }\n    }\n    null\n  }\n\n  def writeValueToStream[T](obj: Any, dataOut: DataOutputStream) {\n    obj match {\n      case arr: Array[Byte] =>\n        dataOut.writeInt(arr.length)\n        dataOut.write(arr)\n      case tup: Tuple2[Any, Any] =>\n        dataOut.writeInt(SpecialLengths.PAIR_TUPLE)\n        writeValueToStream(tup._1, dataOut)\n        writeValueToStream(tup._2, dataOut)\n      case str: String =>\n        val arr = str.getBytes(Charsets.UTF_8)\n        dataOut.writeInt(-arr.length + SpecialLengths.STRING_START)\n        dataOut.write(arr)\n      case jac: java.util.AbstractCollection[_] =>\n        writeValueToStream(jac.iterator, dataOut)\n      case jit: java.util.Iterator[_] =>\n        while (jit.hasNext) {\n          dataOut.writeInt(SpecialLengths.ARRAY_VALUE)\n          writeValueToStream(jit.next(), dataOut)\n        }\n        dataOut.writeInt(SpecialLengths.ARRAY_END)\n      case ita: Iterable[_] =>\n        writeValueToStream(ita.iterator, dataOut)\n      case it: Iterator[_] =>\n        while (it.hasNext) {\n          dataOut.writeInt(SpecialLengths.ARRAY_VALUE)\n          writeValueToStream(it.next(), dataOut)\n        }\n        dataOut.writeInt(SpecialLengths.ARRAY_END)\n      case x: Int =>\n        dataOut.writeInt(SpecialLengths.INTEGER)\n        dataOut.writeLong(x)\n      case x: java.lang.Long =>\n        dataOut.writeInt(SpecialLengths.INTEGER)\n        dataOut.writeLong(x)\n      case x: java.lang.Integer =>\n        dataOut.writeInt(SpecialLengths.INTEGER)\n        dataOut.writeLong(x.longValue)\n      case other =>\n        throw new SparkException(\"Unexpected element type \" + other.getClass)\n    }\n  }\n\n  def readValueFromStream(stream: DataInputStream) : Any = {\n    var typeLength = stream.readInt()\n    typeLength match {\n      case length if length > 0 =>\n        val obj = new Array[Byte](length)\n        stream.readFully(obj)\n        obj\n      case 0 => Array.empty[Byte]\n      case SpecialLengths.PAIR_TUPLE =>\n        (readValueFromStream(stream), readValueFromStream(stream))\n      case SpecialLengths.JULIA_EXCEPTION_THROWN =>\n        // Signals that an exception has been thrown in julia\n        val exLength = stream.readInt()\n        val strlength = -exLength + SpecialLengths.STRING_START\n        val obj = new Array[Byte](strlength)\n        stream.readFully(obj)\n        val str = new String(obj, Charsets.UTF_8)\n        throw new Exception(str)\n      case SpecialLengths.ARRAY_VALUE =>\n        val ab = new collection.mutable.ArrayBuffer[Any]()\n        while(typeLength == SpecialLengths.ARRAY_VALUE) {\n          ab += readValueFromStream(stream)\n          typeLength = stream.readInt()\n        }\n        ab.toIterator\n      case SpecialLengths.ARRAY_END =>\n        new Array[Any](0)\n      case SpecialLengths.INTEGER =>\n        stream.readLong()\n      case SpecialLengths.STRING_START =>\n        \"\"\n      case length if length < SpecialLengths.STRING_START =>\n        val strlength = -length + SpecialLengths.STRING_START\n        val obj = new Array[Byte](strlength)\n        stream.readFully(obj)\n        new String(obj, 
Charsets.UTF_8)\n      case SpecialLengths.END_OF_DATA_SECTION =>\n        if (stream.readInt() == SpecialLengths.END_OF_STREAM) {\n          null\n        } else {\n          throw new RuntimeException(\"Protocol error\")\n        }\n    }\n\n  }\n\n\n  def readRDDFromFile(sc: JavaSparkContext, filename: String, parallelism: Int): JavaRDD[Any] = {\n    val file = new DataInputStream(new FileInputStream(filename))\n    try {\n      val objs = new collection.mutable.ArrayBuffer[Any]\n      try {\n        while (true) {\n          objs.append(readValueFromStream(file))\n        }\n      } catch {\n        case eof: EOFException => // No-op\n      }\n      JavaRDD.fromRDD(sc.sc.parallelize(objs, parallelism))\n    } finally {\n      file.close()\n    }\n  }\n\n  def cartesianSS(rdd1: JavaRDD[Any], rdd2: JavaRDD[Any]): JavaPairRDD[Any, Any] = {\n    rdd1.cartesian(rdd2)\n  }\n\n  def collectToJulia(rdd: JavaRDD[Any]): Array[Byte] = {\n    writeToByteArray[java.util.List[Any]](rdd.collect())\n  }\n\n  def collectToJuliaItr(rdd: JavaRDD[Any]): java.util.List[Any] = {\n    return rdd.collect()\n  }\n\n  def writeToByteArray[T](obj: Any): Array[Byte] = {\n    val byteArrayOut = new ByteArrayOutputStream()\n    val dataStream = new DataOutputStream(byteArrayOut)\n    writeValueToStream(obj, dataStream)\n    dataStream.flush()\n    byteArrayOut.toByteArray()\n  }\n}\n\nclass JuliaPairRDD(@transient parent: RDD[_],command: Array[Byte]) extends AbstractJuliaRDD[(Any, Any)](parent, command) {\n  def asJavaPairRDD(): JavaPairRDD[Any, Any] = {\n    JavaPairRDD.fromRDD(this)\n  }\n}\n\nobject JuliaPairRDD extends Logging {\n\n  def fromRDD[T](rdd: RDD[T], command: Array[Byte]): JuliaPairRDD =\n    new JuliaPairRDD(rdd, command)\n\n  def collectToJulia(rdd: JavaPairRDD[Any, Any]): Array[Byte] = {\n    JuliaRDD.writeToByteArray[java.util.List[(Any, Any)]](rdd.collect())\n  }\n\n  def collectToJuliaItr(rdd: JavaPairRDD[Any, Any]): java.util.List[(Any, Any)] = {\n    return rdd.collect()\n  }\n}\n"
  },
  {
    "path": "jvm/sparkjl/old_src/JuliaRunner.scala",
    "content": "package org.apache.spark.api.julia\n\nimport scala.collection.JavaConversions._\n\n/**\n * Class for execution of Julia scripts on a cluster.\n * WARNING: this class isn't used currently, will be utilized later\n */\nobject JuliaRunner {\n\n  def main(args: Array[String]): Unit = {\n    val juliaScript = args(0)\n    val scriptArgs = args.slice(1, args.length)\n    val pb = new ProcessBuilder(Seq(\"julia\", juliaScript) ++ scriptArgs)\n    val process = pb.start()\n    StreamUtils.redirectStreamsToStderr(process.getInputStream, process.getErrorStream)\n    val errorCode = process.waitFor()\n    if (errorCode != 0) {\n      throw new RuntimeException(\"Julia script exited with an error\")\n    }\n  }\n\n}\n"
  },
  {
    "path": "jvm/sparkjl/old_src/OutputThread.scala",
    "content": "package org.apache.spark.api.julia\n\nimport java.io.{DataOutputStream, BufferedOutputStream}\nimport java.net.Socket\n\nimport org.apache.spark.util.Utils\nimport org.apache.spark.{TaskContext, Partition, SparkEnv}\n\n/**\n * The thread responsible for writing the data from the JuliaRDD's parent iterator to the\n * Julia process.\n */\nclass OutputThread(context: TaskContext, it: Iterator[Any], worker: Socket, command: Array[Byte], split: Partition)\n    extends Thread(s\"stdout writer for julia\") {\n\n  val BUFFER_SIZE = 65536\n\n  val env = SparkEnv.get\n\n  @volatile private var _exception: Exception = null\n\n  /** Contains the exception thrown while writing the parent iterator to the Julia process. */\n  def exception: Option[Exception] = Option(_exception)\n\n  /** Terminates the writer thread, ignoring any exceptions that may occur due to cleanup. */\n  def shutdownOnTaskCompletion() {\n    assert(context.isCompleted)\n    this.interrupt()\n  }\n\n  override def run(): Unit = Utils.logUncaughtExceptions {\n    try {\n      val stream = new BufferedOutputStream(worker.getOutputStream, BUFFER_SIZE)\n      val dataOut = new DataOutputStream(stream)\n      // partition index\n      dataOut.writeInt(split.index)\n      dataOut.flush()\n      // serialized command:\n      dataOut.writeInt(command.length)\n      dataOut.write(command)\n      dataOut.flush()\n      // data values\n      writeIteratorToStream(it, dataOut)\n      dataOut.writeInt(SpecialLengths.END_OF_DATA_SECTION)\n      dataOut.writeInt(SpecialLengths.END_OF_STREAM)\n      dataOut.flush()\n    } catch {\n      case e: Exception if context.isCompleted || context.isInterrupted =>\n        // FIXME: logDebug(\"Exception thrown after task completion (likely due to cleanup)\", e)\n        println(\"Exception thrown after task completion (likely due to cleanup)\", e)\n        if (!worker.isClosed) {\n          Utils.tryLog(worker.shutdownOutput())\n        }\n\n      case e: Exception =>\n        // We must avoid throwing exceptions here, because the thread uncaught exception handler\n        // will kill the whole executor (see org.apache.spark.executor.Executor).\n        _exception = e\n        if (!worker.isClosed) {\n          Utils.tryLog(worker.shutdownOutput())\n        }\n    }\n//    } finally {\n//      // Release memory used by this thread for shuffles\n//      // env.shuffleMemoryManager.releaseMemoryForThisThread()\n//      env.shuffleMemoryManager.releaseMemoryForThisTask()\n//      // Release memory used by this thread for unrolling blocks\n//      // env.blockManager.memoryStore.releaseUnrollMemoryForThisThread()\n//      env.blockManager.memoryStore.releaseUnrollMemoryForThisTask()\n//    }\n  }\n\n  def writeIteratorToStream[T](iter: Iterator[T], dataOut: DataOutputStream) {\n    def write(obj: Any): Unit = {\n      JuliaRDD.writeValueToStream(obj, dataOut)\n    }\n    iter.foreach(write)\n  }\n\n}"
  },
  {
    "path": "jvm/sparkjl/old_src/RDDUtils.scala",
    "content": "package org.apache.spark.api.julia\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.api.java.{JavaRDD, JavaPairRDD}\n\nobject RDDUtils extends Logging {\n\n  /**\n   * Get number of partitions in the RDD\n   */\n  def getNumPartitions(jrdd: JavaRDD[Any]): Int = jrdd.rdd.partitions.length\n  def getNumPartitions(jrdd: JavaPairRDD[Any,Any]): Int = jrdd.rdd.partitions.length\n}\n"
  },
  {
    "path": "jvm/sparkjl/old_src/StreamUtils.scala",
    "content": "package org.apache.spark.api.julia\n\nimport java.io.InputStream\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.util.RedirectThread\n\n\nobject StreamUtils extends Logging {\n\n  /**\n   * Redirect the given streams to our stderr in separate threads.\n   */\n  def redirectStreamsToStderr(stdout: InputStream, stderr: InputStream) {\n    try {\n      new RedirectThread(stdout, System.err, \"stdout reader for julia\").start()\n      new RedirectThread(stderr, System.err, \"stderr reader for julia\").start()\n    } catch {\n      case e: Exception =>\n        logError(\"Exception in redirecting streams\", e)\n    }\n  }\n\n}\n"
  },
  {
    "path": "jvm/sparkjl/pom.xml",
    "content": "<project>\n  <groupId>sparkjl</groupId>\n  <artifactId>sparkjl</artifactId>\n  <modelVersion>4.0.0</modelVersion>\n  <name>sparkjl</name>\n  <packaging>jar</packaging>\n  <version>0.2</version>\n\n  <properties>\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\n\n    <java.version>1.11</java.version>\n    <scala.version>2.13</scala.version>\n    <scala.binary.version>2.13.6</scala.binary.version>\n\n    <spark.version>[3.2.0,3.2.1]</spark.version>\n    <!-- <hadoop.version>2.7.3</hadoop.version>\n    <yarn.version>2.7.3</yarn.version> -->\n\n    <PermGen>64m</PermGen>\n    <MaxPermGen>512m</MaxPermGen>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.scala-lang</groupId>\n      <artifactId>scala-library</artifactId>\n      <version>${scala.binary.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-core_${scala.version}</artifactId>\n      <version>${spark.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>org.apache.hadoop</groupId>\n          <artifactId>hadoop-client</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-yarn_${scala.version}</artifactId>\n      <version>${spark.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-sql_${scala.version}</artifactId>\n      <version>${spark.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.mdkt.compiler</groupId>\n      <artifactId>InMemoryJavaCompiler</artifactId>\n      <version>1.3.0</version>\n    </dependency>\n  </dependencies>\n\n  <build>\n\t\t<pluginManagement>\n\t\t\t<plugins>\n\t\t\t\t<plugin>\n\t\t\t\t\t<groupId>net.alchim31.maven</groupId>\n\t\t\t\t\t<artifactId>scala-maven-plugin</artifactId>\n\t\t\t\t\t<version>4.6.1</version>\n\t\t\t\t</plugin>\n\t\t\t\t<plugin>\n\t\t\t\t\t<groupId>org.apache.maven.plugins</groupId>\n\t\t\t\t\t<artifactId>maven-compiler-plugin</artifactId>\n\t\t\t\t\t<version>3.8.1</version>\n\t\t\t\t\t<configuration>\n            <source>1.8</source>\n            <target>1.8</target>\n          </configuration>\n\t\t\t\t</plugin>\n\t\t\t</plugins>\n\t\t</pluginManagement>\n\t\t<plugins>\n      <!-- Scala compiler -->\n\t\t\t<plugin>\n\t\t\t\t<groupId>net.alchim31.maven</groupId>\n\t\t\t\t<artifactId>scala-maven-plugin</artifactId>\n\t\t\t\t<executions>\n\t\t\t\t\t<execution>\n\t\t\t\t\t\t<id>scala-compile-first</id>\n\t\t\t\t\t\t<phase>process-resources</phase>\n\t\t\t\t\t\t<goals>\n\t\t\t\t\t\t\t<goal>add-source</goal>\n\t\t\t\t\t\t\t<goal>compile</goal>\n\t\t\t\t\t\t</goals>\n\t\t\t\t\t</execution>\n\t\t\t\t\t<execution>\n\t\t\t\t\t\t<id>scala-test-compile</id>\n\t\t\t\t\t\t<phase>process-test-resources</phase>\n\t\t\t\t\t\t<goals>\n\t\t\t\t\t\t\t<goal>testCompile</goal>\n\t\t\t\t\t\t</goals>\n\t\t\t\t\t</execution>\n\t\t\t\t</executions>\n\t\t\t</plugin>\n\t\t\t<plugin>\n\t\t\t\t<groupId>org.apache.maven.plugins</groupId>\n\t\t\t\t<artifactId>maven-compiler-plugin</artifactId>\n\t\t\t\t<configuration>\n          <source>1.8</source>\n          <target>1.8</target>\n        
</configuration>\n\t\t\t\t<executions>\n\t\t\t\t\t<execution>\n\t\t\t\t\t\t<phase>compile</phase>\n\t\t\t\t\t\t<goals>\n\t\t\t\t\t\t\t<goal>compile</goal>\n\t\t\t\t\t\t</goals>\n\t\t\t\t\t</execution>\n\t\t\t\t</executions>\n\t\t\t</plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-shade-plugin</artifactId>\n        <version>2.0</version>\n        <configuration>\n          <shadedArtifactAttached>true</shadedArtifactAttached>\n          <shadedClassifierName>assembly</shadedClassifierName>\n          <artifactSet>\n            <includes>\n              <include>*:*</include>\n            </includes>\n          </artifactSet>\n          <filters>\n            <filter>\n              <artifact>*:*</artifact>\n              <excludes>\n                <exclude>META-INF/*.SF</exclude>\n                <exclude>META-INF/*.DSA</exclude>\n                <exclude>META-INF/*.RSA</exclude>\n              </excludes>\n            </filter>\n          </filters>\n          <transformers>\n            <transformer implementation=\"org.apache.maven.plugins.shade.resource.ServicesResourceTransformer\" />\n            <transformer implementation=\"org.apache.maven.plugins.shade.resource.AppendingTransformer\">\n              <resource>META-INF/services/org.apache.hadoop.fs.FileSystem</resource>\n            </transformer>\n            <transformer implementation=\"org.apache.maven.plugins.shade.resource.AppendingTransformer\">\n              <resource>reference.conf</resource>\n            </transformer>\n          </transformers>\n        </configuration>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>shade</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n\t\t</plugins>\n\t</build>\n\n</project>\n"
  },
  {
    "path": "src/Spark.jl",
    "content": "module Spark\n\ninclude(\"core.jl\")\n\nend\n"
  },
  {
    "path": "src/chainable.jl",
    "content": "\"\"\"\n    DotChainer{O, Fn}\n\nSee `@chainable` for details.\n\"\"\"\nstruct DotChainer{O, Fn}\n    obj::O\n    fn::Fn\nend\n\n# DotChainer(obj, fn) = DotChainer{typeof(obj), typeof(fn)}(obj, fn)\n\n(c::DotChainer)(args...) = c.fn(c.obj, args...)\n\n\n\"\"\"\n    @chainable T\n\nAdds dot chaining syntax to the type, i.e. automatically translate:\n\n    foo.bar(a)\n\ninto\n\n    bar(foo, a)\n\nFor single-argument functions also support implicit calls, e.g:\n\n    foo.bar.baz(a, b)\n\nis treated the same as:\n\n    foo.bar().baz(a, b)\n\nNote that `@chainable` works by overloading `Base.getproperty()`,\nmaking it impossible to customize it for `T`. To have more control,\none may use the underlying wrapper type - `DotCaller`.\n\"\"\"\nmacro chainable(T)\n    return quote\n        function Base.getproperty(obj::$(esc(T)), prop::Symbol)\n            if hasfield(typeof(obj), prop)\n                return getfield(obj, prop)\n            elseif isdefined(@__MODULE__, prop)\n                fn = getfield(@__MODULE__, prop)\n                return DotChainer(obj, fn)\n            else\n                error(\"type $(typeof(obj)) has no field $prop\")\n            end\n        end\n    end\nend\n\n\nfunction Base.getproperty(dc::DotChainer, prop::Symbol)\n    if hasfield(typeof(dc), prop)\n        return getfield(dc, prop)\n    else\n        # implicitely call function without arguments\n        # and propagate getproperty to the returned object\n        return getproperty(dc(), prop)\n    end\nend"
  },
  {
    "path": "src/column.jl",
    "content": "###############################################################################\n#                                    Column                                   #\n###############################################################################\n\nfunction Column(name::String)\n    jcol = jcall(JSQLFunctions, \"col\", JColumn, (JString,), name)\n    return Column(jcol)\nend\n\n@chainable Column\nfunction Base.show(io::IO, col::Column)\n    name = jcall(col.jcol, \"toString\", JString, ())\n    print(io, \"col(\\\"$name\\\")\")\nend\n\n\n# binary with JObject\nfor (func, name) in [(:+, \"plus\"), (:-, \"minus\"), (:*, \"multiply\"), (:/, \"divide\")]\n    @eval function Base.$func(col::Column, obj::T) where T\n        jres = jcall(col.jcol, $name, JColumn, (JObject,), obj)\n        return Column(jres)\n    end\nend\n\n\nalias(col::Column, name::String) =\n    Column(jcall(col.jcol, \"alias\", JColumn, (JString,), name))\n\nasc(col::Column) = Column(jcall(col.jcol, \"asc\", JColumn, ()))\nasc_nulls_first(col::Column) = Column(jcall(col.jcol, \"asc_nulls_first\", JColumn, ()))\nasc_nulls_last(col::Column) = Column(jcall(col.jcol, \"asc_nulls_last\", JColumn, ()))\n\nbetween(col::Column, low, up) =\n    Column(jcall(col.jcol, \"between\", JColumn, (JObject, JObject), low, up))\n\nbitwiseAND(col::Column, other) =\n    Column(jcall(col.jcol, \"bitwiseAND\", JColumn, (JObject,), other))\nBase.:&(col::Column, other) = bitwiseAND(col, other)\n\nbitwiseOR(col::Column, other) =\n    Column(jcall(col.jcol, \"bitwiseOR\", JColumn, (JObject,), other))\nBase.:|(col::Column, other) = bitwiseOR(col, other)\n\nbitwiseXOR(col::Column, other) =\n    Column(jcall(col.jcol, \"bitwiseXOR\", JColumn, (JObject,), other))\nBase.:⊻(col::Column, other) = bitwiseXOR(col, other)\n\n\nBase.contains(col::Column, other) =\n    Column(jcall(col.jcol, \"contains\", JColumn, (JObject,), other))\n\ndesc(col::Column) = Column(jcall(col.jcol, \"desc\", JColumn, ()))\ndesc_nulls_first(col::Column) = Column(jcall(col.jcol, \"desc_nulls_first\", JColumn, ()))\ndesc_nulls_last(col::Column) = Column(jcall(col.jcol, \"desc_nulls_last\", JColumn, ()))\n\n# dropFields should go here, but it's not in listmethods(col.jcol) ¯\\_(ツ)_/¯\n\nBase.endswith(col::Column, other) =\n    Column(jcall2(col.jcol, \"endsWith\", JColumn, (JObject,), other))\nBase.endswith(col::Column, other::Column) =\n    Column(jcall(col.jcol, \"endsWith\", JColumn, (JColumn,), other.jcol))\n\neqNullSafe(col::Column, other) =\n    Column(jcall(col.jcol, \"eqNullSafe\", JColumn, (JObject,), other))\n\nBase.:(==)(col::Column, other) = Column(jcall(col.jcol, \"equalTo\", JColumn, (JObject,), other))\nBase.:(!=)(col::Column, other) = Column(jcall(col.jcol, \"notEqual\", JColumn, (JObject,), other))\n\nexplain(col::Column, extended=false) = jcall(col.jcol, \"explain\", Nothing, (jboolean,), extended)\n\nisNotNull(col::Column) = Column(jcall(col.jcol, \"isNotNull\", JColumn, ()))\nisNull(col::Column) = Column(jcall(col.jcol, \"isNull\", JColumn, ()))\n\nlike(col::Column, s::String) = Column(jcall(col.jcol, \"like\", JColumn, (JString,), s))\n\notherwise(col::Column, other) =\n    Column(jcall(col.jcol, \"otherwise\", JColumn, (JObject,), other))\n\nover(col::Column) = Column(jcall(col.jcol, \"over\", JColumn, ()))\n\nrlike(col::Column, s::String) = Column(jcall(col.jcol, \"rlike\", JColumn, (JString,), s))\n\nBase.startswith(col::Column, other) =\n    Column(jcall2(col.jcol, \"startsWith\", JColumn, (JObject,), other))\nBase.startswith(col::Column, 
other::Column) =\n    Column(jcall(col.jcol, \"startsWith\", JColumn, (JColumn,), other.jcol))\n\nsubstr(col::Column, start::Column, len::Column) =\n    Column(jcall(col.jcol, \"substr\", JColumn, (JColumn, JColumn), start.jcol, len.jcol))\nsubstr(col::Column, start::Integer, len::Integer) =\n    Column(jcall(col.jcol, \"substr\", JColumn, (jint, jint), start, len))\n\nwhen(col::Column, condition::Column, value) =\n    Column(jcall(col.jcol, \"when\", JColumn, (JColumn, JObject), condition.jcol, value))\n\n\n## JSQLFunctions\n\nupper(col::Column) =\n    Column(jcall(JSQLFunctions, \"upper\", JColumn, (JColumn,), col.jcol))\nBase.uppercase(col::Column) = upper(col)\n\nlower(col::Column) =\n    Column(jcall(JSQLFunctions, \"lower\", JColumn, (JColumn,), col.jcol))\nBase.lowercase(col::Column) = lower(col)\n\n\nfor func in (:min, :max,  :count, :sum, :mean)\n    @eval function $func(col::Column)\n        jcol = jcall(JSQLFunctions, string($func), JColumn, (JColumn,), col.jcol)\n        return Column(jcol)\n    end\nend\n\nBase.minimum(col::Column) = min(col)\nBase.maximum(col::Column) = max(col)\navg(col::Column) = mean(col)\n\n\nexplode(col::Column) =\n    Column(jcall(JSQLFunctions, \"explode\", JColumn, (JColumn,), col.jcol))\n\nBase.split(col::Column, sep::AbstractString) =\n    Column(jcall(JSQLFunctions, \"split\", JColumn, (JColumn, JString), col.jcol, sep))\n\n\nfunction window(col::Column, w_dur::String, slide_dur::String, start_time::String)\n    return Column(jcall(JSQLFunctions, \"window\", JColumn,\n                        (JColumn, JString, JString, JString),\n                        col.jcol, w_dur, slide_dur, start_time))\nend\n\nfunction window(col::Column, w_dur::String, slide_dur::String)\n    return Column(jcall(JSQLFunctions, \"window\", JColumn,\n                        (JColumn, JString, JString),\n                        col.jcol, w_dur, slide_dur))\nend\n\nfunction window(col::Column, w_dur::String)\n    return Column(jcall(JSQLFunctions, \"window\", JColumn,\n                        (JColumn, JString),\n                        col.jcol, w_dur))\nend"
  },
  {
    "path": "src/compiler.jl",
    "content": "using JavaCall\nimport JavaCall: assertroottask_or_goodenv, assertloaded\nusing Umlaut\n\nconst JInMemoryJavaCompiler = @jimport org.mdkt.compiler.InMemoryJavaCompiler\n\n# const JDynamicJavaCompiler = @jimport org.apache.spark.api.julia.DynamicJavaCompiler\n\n# const JFile = @jimport java.io.File\n# const JToolProvider = @jimport javax.tools.ToolProvider\n# const JJavaCompiler = @jimport javax.tools.JavaCompiler\n# const JInputStream = @jimport java.io.InputStream\n# const JOutputStream = @jimport java.io.OutputStream\n# const JClassLoader = @jimport java.lang.ClassLoader\n# const JURLClassLoader = @jimport java.net.URLClassLoader\n# const JURI = @jimport java.net.URI\n# const JURL = @jimport java.net.URL\n\nconst JUDF1 = @jimport org.apache.spark.sql.api.java.UDF1\n\n\n###############################################################################\n#                                  Compiler                                   #\n###############################################################################\n\n\nfunction create_class(name::String, src::String)\n    jcompiler = jcall(JInMemoryJavaCompiler, \"newInstance\", JInMemoryJavaCompiler, ())\n    return jcall(jcompiler, \"compile\", JClass, (JString, JString), name, src)\nend\n\n\nfunction create_instance(name::String, src::String)\n    jclass = create_class(name, src)\n    return jcall(jclass, \"newInstance\", JObject, ())\nend\n\nfunction create_instance(src::String)\n    pkg_name_match = match(r\"package ([a-zA-z0-9_\\.\\$]+);\", src)\n    @assert !isnothing(pkg_name_match) \"Cannot detect package name in the source:\\n\\n$src\"\n    pkg_name = pkg_name_match.captures[1]\n    class_name_match = match(r\"class ([a-zA-z0-9_\\$]+)\", src)\n    @assert !isnothing(class_name_match) \"Cannot detect class name in the source:\\n\\n$src\"\n    class_name = class_name_match.captures[1]\n    return create_instance(\"$pkg_name.$class_name\", src)\nend\n\n\n###############################################################################\n#                                   jcall2                                    #\n###############################################################################\n\nfunction jcall_reflect(jobj::JavaObject, name::String, rettype, argtypes, args...)\n    assertroottask_or_goodenv() && assertloaded()\n    jclass = getclass(jobj)\n    jargs = [a for a in convert.(argtypes, args)]  # convert to Vector\n    meth = jcall(jclass, \"getMethod\", JMethod, (JString, Vector{JClass}), name, getclass.(jargs))\n    ret = meth(jobj, jargs...)\n    return convert(rettype, ret)\nend\n\n# jcall() fails to call methods of generated classes, jcall2() is a more robust version of it\n# see https://github.com/JuliaInterop/JavaCall.jl/issues/166 for the details\nfunction jcall2(jobj::JavaObject, name::String, rettype, argtypes, args...)\n    try\n        return jcall(jobj, name, rettype, argtypes, args...)\n    catch\n        return jcall_reflect(jobj, name, rettype, argtypes, args...)\n    end\nend\n\n\n###############################################################################\n#                                  JavaExpr                                   #\n###############################################################################\n\n\njavastring(::Type{JavaObject{name}}) where name = string(name)\njavastring(::Nothing) = \"\"\n\njavatype(tape::Tape, v::Variable) = julia2java(typeof(tape[v].val))\njavaname(v::Variable) = string(Umlaut.make_name(v.id))\njavaname(op::AbstractArray) = 
javaname(V(op))\njavaname(x) = x   # literals\n\n\ntype_param_string(typeparams::Vector{String}) =\n    isempty(typeparams) ? \"\" : \"<$(join(typeparams, \", \"))>\"\ntype_param_string(typeparams::Vector) =\n    isempty(typeparams) ? \"\" : \"<$(join(map(javastring, typeparams), \", \"))>\"\n\n\nabstract type JavaExpr end\nBase.show(io::IO, ex::JavaExpr) = print(io, javastring(ex))\n\nmutable struct JavaTypeExpr <: JavaExpr\n    class::Type{<:JavaObject}\n    typeparams::Vector    # String or Type{<:JavaObject}\nend\nJavaTypeExpr(JT::Type{<:JavaObject}) = JavaTypeExpr(JT, [])\nBase.convert(::Type{JavaTypeExpr}, JT::Type{<:JavaObject}) = JavaTypeExpr(JT)\njavastring(ex::JavaTypeExpr) = javastring(ex.class) * type_param_string(ex.typeparams)\n\n\n\nmutable struct JavaCallExpr <: JavaExpr\n    rettype::JavaTypeExpr\n    ret::String\n    this::Union{String, Any}  # name or constant\n    method::String\n    args::Vector              # names or constants\nend\nfunction javastring(ex::JavaCallExpr)\n    R = javastring(ex.rettype)\n    if !isnothing(match(r\"^[\\*\\/+-]+$\", ex.method))\n        # binary operator\n        return \"$R $(ex.ret) = $(ex.this) $(ex.method) $(ex.args[1]);\"\n    else\n        return \"$R $(ex.ret) = $(ex.this).$(ex.method)($(join(ex.args, \", \")));\"\n    end\nend\n\nstruct JavaReturnExpr <: JavaExpr\n    ret::String\nend\njavastring(ex::JavaReturnExpr) = \"return $(ex.ret);\"\n\n\nmutable struct JavaMethodExpr <: JavaExpr\n    annotations::Vector{String}\n    rettype::JavaTypeExpr\n    name::String\n    params::Vector{String}\n    paramtypes::Vector{JavaTypeExpr}\n    body::Vector\nend\nfunction javastring(ex::JavaMethodExpr)\n    paramlist = join([\"$(javastring(t)) $a\" for (a, t) in zip(ex.params, ex.paramtypes)], \", \")\n    result = isempty(ex.annotations) ? \"\" : \"\\t\" * join(ex.annotations, \"\\n\") * \"\\n\"\n    result *= \"public $(javastring(ex.rettype)) $(ex.name)($paramlist) {\\n\"\n    for subex in ex.body\n        result *= \"\\t$(javastring(subex))\\n\"\n    end\n    result *= \"}\"\n    return result\nend\n\n\nmutable struct JavaClassExpr <: JavaExpr\n    name::String\n    typeparams::Vector{String}\n    extends::Union{JavaTypeExpr, Nothing}\n    implements::Union{JavaTypeExpr, Nothing}\n    methods::Vector{<:JavaMethodExpr}\nend\nfunction javastring(ex::JavaClassExpr)\n    sep = findlast(\".\", ex.name)\n    pkg_name, class_name = isnothing(sep) ? (\"\", ex.name) : (ex.name[1:sep.start-1], ex.name[sep.start+1:end])\n    pkg_str = isempty(pkg_name) ? \"\" : \"package $pkg_name;\"\n    extends_str = isnothing(ex.extends) ? \"\" : \"extends $(javastring(ex.extends))\"\n    implements_str = isnothing(ex.implements) ? \"\" : \"implements $(javastring(ex.implements))\"\n    methods_str = join(map(javastring, ex.methods), \"\\n\\n\")\n    methods_str = replace(methods_str, \"\\n\" => \"\\n\\t\")\n    return \"\"\"\n    $pkg_str\n\n    public class $class_name $extends_str $implements_str {\n        $methods_str\n    }\n    \"\"\"\nend\n\n\n###############################################################################\n#                                Tape => JavaExpr                             #\n###############################################################################\n\n\nstruct J2JContext end\nfunction Umlaut.isprimitive(::J2JContext, f, args...)\n    Umlaut.isprimitive(Umlaut.BaseCtx(), f, args...) 
&& return true\n    modl = parentmodule(typeof(f))\n    modl in (Spark, Base.Unicode) && return true\n    return false\nend\n\njavamethod(::typeof(+)) = \"+\"\njavamethod(::typeof(*)) = \"*\"\njavamethod(::typeof(lowercase)) = \"toLowerCase\"\n\n\nfunction JavaCallExpr(tape::Tape, op::Call)\n    ret = javaname(V(op))\n    rettype = javatype(tape, V(op))\n    this, args... = map(javaname, op.args)\n    method = javamethod(op.fn)\n    return JavaCallExpr(rettype, ret, this, method, args)\nend\n\nfunction JavaClassExpr(tape::Tape; method_name::String=\"(unspecified)\")\n    fn_name = string(tape[V(1)].val)\n    cls = fn_name * \"_\" * string(gensym())[3:end]\n    cls = replace(cls, \"#\" => \"_\")\n    inp = inputs(tape)[2:end]\n    params = [javaname(v) for v in inp]\n    paramtypes = [javatype(tape, v) for v in inp]\n    ret = javaname(tape.result)\n    rettype = javatype(tape, tape.result)\n    body = JavaExpr[JavaCallExpr(tape, op) for op in tape if !isa(op, Umlaut.Input)]\n    push!(body, JavaReturnExpr(ret))\n    meth_expr = JavaMethodExpr([], rettype, method_name, params, paramtypes, body)\n    return JavaClassExpr(cls, [], nothing, nothing, [meth_expr])\nend\n\n\n###############################################################################\n#                                      UDF                                    #\n###############################################################################\n\nstruct UDF\n    src::String\n    judf::JavaObject\nend\nBase.show(io::IO, udf::UDF) = print(io, \"UDF from:\\n\\n\" * udf.src)\n\n\nfunction udf(f::Function, args...)\n    val, tape = trace(f, args...; ctx=J2JContext())\n    class_expr = JavaClassExpr(tape)\n    class_expr.name = \"julia2java.\" * class_expr.name\n    UT = JavaTypeExpr(\n        JavaCall.jimport(\"org.apache.spark.sql.api.java.UDF$(length(args))\"),\n        [javastring(julia2java(typeof(x))) for x in [args...; val]]\n    )\n    class_expr.implements = UT\n    meth_expr = class_expr.methods[1]\n    meth_expr.name = \"call\"\n    push!(meth_expr.annotations, \"@Override\")\n    src = javastring(class_expr)\n    judf = create_instance(src)\n    return UDF(src, judf)\nend\n"
  },
  {
    "path": "src/convert.jl",
    "content": "###############################################################################\n#                                Conversions                                  #\n###############################################################################\n\n# Note: both - java.sql.Timestamp and Julia's DateTime don't have timezone.\n# But when printing, java.sql.Timestamp will assume UTC and convert to your\n# local time. To avoid confusion e.g. in REPL, try use fixed date in UTC\n# or now(Dates.UTC)\nBase.convert(::Type{JTimestamp}, x::DateTime) =\n    JTimestamp((jlong,), floor(Int, datetime2unix(x)) * 1000)\nBase.convert(::Type{DateTime}, x::JTimestamp) =\n    unix2datetime(jcall(x, \"getTime\", jlong, ()) / 1000)\n\nBase.convert(::Type{JDate}, x::Date) =\n    JDate((jlong,), floor(Int, datetime2unix(DateTime(x))) * 1000)\nBase.convert(::Type{Date}, x::JDate) =\n    Date(unix2datetime(jcall(x, \"getTime\", jlong, ()) / 1000))\n\n\nBase.convert(::Type{JObject}, x::Integer) = convert(JObject, convert(JLong, x))\nBase.convert(::Type{JObject}, x::Real) = convert(JObject, convert(JDouble, x))\nBase.convert(::Type{JObject}, x::DateTime) = convert(JObject, convert(JTimestamp, x))\nBase.convert(::Type{JObject}, x::Date) = convert(JObject, convert(JDate, x))\nBase.convert(::Type{JObject}, x::Column) = convert(JObject, x.jcol)\n\nBase.convert(::Type{Row}, obj::JObject) = Row(convert(JRow, obj))\n\nBase.convert(::Type{String}, obj::JString) = unsafe_string(obj)\nBase.convert(::Type{Integer}, obj::JLong) = jcall(obj, \"longValue\", jlong, ())\n\njulia2java(::Type{String}) = JString\njulia2java(::Type{Int64}) = JLong\njulia2java(::Type{Int32}) = JInt\njulia2java(::Type{Float64}) = JDouble\njulia2java(::Type{Float32}) = JFloat\njulia2java(::Type{Bool}) = JBoolean\njulia2java(::Type{Any}) = JObject\n\njava2julia(::Type{JString}) = String\njava2julia(::Type{JLong}) = Int64\njava2julia(::Type{jlong}) = Int64\njava2julia(::Type{JInteger}) = Int32\njava2julia(::Type{jint}) = Int32\njava2julia(::Type{JDouble}) = Float64\njava2julia(::Type{jdouble}) = Float64\njava2julia(::Type{JFloat}) = Float32\njava2julia(::Type{jfloat}) = Float32\njava2julia(::Type{JBoolean}) = Bool\njava2julia(::Type{jboolean}) = Bool\njava2julia(::Type{JTimestamp}) = DateTime\njava2julia(::Type{JDate}) = Date\njava2julia(::Type{JObject}) = Any\n\njulia2ddl(::Type{String}) = \"string\"\njulia2ddl(::Type{Int64}) = \"long\"\njulia2ddl(::Type{Int32}) = \"int\"\njulia2ddl(::Type{Float64}) = \"double\"\njulia2ddl(::Type{Float32}) = \"float\"\njulia2ddl(::Type{Bool}) = \"boolean\"\njulia2ddl(::Type{Dates.Date}) = \"date\"\njulia2ddl(::Type{Dates.DateTime}) = \"timestamp\"\n\n\nfunction JArray(x::Vector{T}) where T\n    JT = T <: JavaObject ? T : julia2java(T)\n    x = convert(Vector{JT}, x)\n    sz = length(x)\n    init_val = sz == 0 ? 
C_NULL : Ptr(x[1])\n    arrayptr = JavaCall.JNI.NewObjectArray(sz, Ptr(JavaCall.metaclass(JT)), init_val)\n    arrayptr === C_NULL && geterror()\n    for i=2:sz\n        JavaCall.JNI.SetObjectArrayElement(arrayptr, i-1, Ptr(x[i]))\n    end\n    return JavaObject{typeof(x)}(arrayptr)\nend\n\n\nfunction Base.convert(::Type{JSeq}, x::Vector)\n    jarr = JArray(x)\n    jobj = convert(JObject, jarr)\n    jarrseq = jcall(JArraySeq, \"make\", JArraySeq, (JObject,), jobj)\n    return jcall(jarrseq, \"toSeq\", JSeq, ())\n    # jwa = jcall(JWrappedArray, \"make\", JWrappedArray, (JObject,), jobj)\n    # jwa = jcall(JArraySeq, \"make\", JArraySeq, (JObject,), jobj)\n    # return jcall(jwa, \"toSeq\", JSeq, ())\nend\n\nfunction Base.convert(::Type{JMap}, d::Dict)\n    jmap = JHashMap(())\n    for (k, v) in d\n        jk, jv = convert(JObject, k), convert(JObject, v)\n        jcall(jmap, \"put\", JObject, (JObject, JObject), jk, jv)\n    end\n    return jmap\nend\n"
  },
  {
    "path": "src/core.jl",
    "content": "using JavaCall\nusing Umlaut\nimport Umlaut.V\nimport Statistics\nusing Dates\n# using TableTraits\n# using IteratorInterfaceExtensions\n\nexport SparkSession, DataFrame, GroupedData, Column, Row\nexport StructType, StructField, DataType\nexport Window, WindowSpec\n\n\ninclude(\"chainable.jl\")\ninclude(\"init.jl\")\ninclude(\"compiler.jl\")\ninclude(\"defs.jl\")\ninclude(\"convert.jl\")\ninclude(\"session.jl\")\ninclude(\"dataframe.jl\")\ninclude(\"column.jl\")\ninclude(\"row.jl\")\ninclude(\"struct.jl\")\ninclude(\"window.jl\")\ninclude(\"io.jl\")\ninclude(\"streaming.jl\")\n\n\nfunction __init__()\n    init()\nend\n\n\n# pseudo-modules for some specific functions not exported by default\n\nmodule Compiler\n    using Reexport\n    @reexport import Spark: udf, jcall2, create_instance, create_class\nend\n\n# module SQL\n#     using Reexport\n#     @reexport import Spark: SparkSession, DataFrame, GroupedData, Column, Row\n#     @reexport import Spark: StructType, StructField, DataType\n#     @reexport import Spark: Window, WindowSpec\n# end"
  },
  {
    "path": "src/dataframe.jl",
    "content": "###############################################################################\n#                                  DataFrame                                  #\n###############################################################################\n\nBase.show(df::DataFrame) = jcall(df.jdf, \"show\", Nothing, ())\nBase.show(df::DataFrame, n::Integer) = jcall(df.jdf, \"show\", Nothing, (jint,), n)\nfunction Base.show(io::IO, df::DataFrame)\n    if df.isstreaming()\n        print(io, toString(df.jdf))\n    else\n        show(df)\n    end\nend\nprintSchema(df::DataFrame) = jcall(df.jdf, \"printSchema\", Nothing, ())\n\n\nfunction Base.getindex(df::DataFrame, name::String)\n    jcol = jcall(df.jdf, \"col\", JColumn, (JString,), name)\n    return Column(jcol)\nend\n\nfunction Base.getproperty(df::DataFrame, prop::Symbol)\n    if hasfield(DataFrame, prop)\n        return getfield(df, prop)\n    elseif string(prop) in columns(df)\n        return df[string(prop)]\n    else\n        fn = getfield(@__MODULE__, prop)\n        return DotChainer(df, fn)\n    end\nend\n\nfunction columns(df::DataFrame)\n    jnames = jcall(df.jdf, \"columns\", Vector{JString}, ())\n    names = [unsafe_string(jn) for jn in jnames]\n    return names\nend\n\n\nBase.count(df::DataFrame) = jcall(df.jdf, \"count\", jlong, ())\nBase.first(df::DataFrame) = Row(jcall(df.jdf, \"first\", JObject, ()))\n\nhead(df::DataFrame) = Row(jcall(df.jdf, \"head\", JObject, ()))\nfunction  head(df::DataFrame, n::Integer)\n    jobjs = jcall(df.jdf, \"head\", JObject, (jint,), n)\n    jrows = convert(Vector{JRow}, jobjs)\n    return map(Row, jrows)\nend\n\nfunction Base.collect(df::DataFrame)\n    jobj = jcall(df.jdf, \"collect\", JObject, ())\n    jrows = convert(Vector{JRow}, jobj)\n    return map(Row, jrows)\nend\n\nfunction Base.collect(df::DataFrame, col::Union{<:AbstractString, <:Integer})\n    rows = collect(df)\n    return [row[col] for row in rows]\nend\n\nfunction take(df::DataFrame, n::Integer)\n    return convert(Vector{Row}, jcall(df.jdf, \"take\", JObject, (jint,), n))\nend\n\n\nfunction describe(df::DataFrame, cols::String...)\n    jdf = jcall(df.jdf, \"describe\", JDataset, (Vector{JString},), collect(cols))\n    return DataFrame(jdf)\nend\n\n\nfunction alias(df::DataFrame, name::String)\n    jdf = jcall(df.jdf, \"alias\", JDataset, (JString,), name)\n    return DataFrame(jdf)\nend\n\n\nfunction select(df::DataFrame, cols::Column...)\n    jdf = jcall(df.jdf, \"select\", JDataset, (Vector{JColumn},),\n                [col.jcol for col in cols])\n    return DataFrame(jdf)\nend\nselect(df::DataFrame, cols::String...) 
= select(df, map(Column, cols)...)\n\n\nfunction withColumn(df::DataFrame, name::String, col::Column)\n    jdf = jcall(df.jdf, \"withColumn\", JDataset, (JString, JColumn), name, col.jcol)\n    return DataFrame(jdf)\nend\n\n\nfunction Base.filter(df::DataFrame, col::Column)\n    jdf = jcall(df.jdf, \"filter\", JDataset, (JColumn,), col.jcol)\n    return DataFrame(jdf)\nend\nwhere(df::DataFrame, col::Column) = filter(df, col)\n\nfunction groupby(df::DataFrame, cols::Column...)\n    jgdf = jcall(df.jdf, \"groupBy\", JRelationalGroupedDataset,\n            (Vector{JColumn},), [col.jcol for col in cols])\n    return GroupedData(jgdf)\nend\n\nfunction groupby(df::DataFrame, col::String, cols::String...)\n    jgdf = jcall(df.jdf, \"groupBy\", JRelationalGroupedDataset,\n            (JString, Vector{JString},), col, collect(cols))\n    return GroupedData(jgdf)\nend\n\nconst groupBy = groupby\n\nfor func in (:min, :max,  :count, :sum, :mean)\n    @eval function $func(df::DataFrame, cols::String...)\n        jdf = jcall(df.jdf, string($func), JDataset, (Vector{JString},), collect(cols))\n        return DataFrame(jdf)\n    end\nend\n\nminimum(df::DataFrame, cols::String...) = min(df, cols...)\nmaximum(df::DataFrame, cols::String...) = max(df, cols...)\navg(df::DataFrame, cols::String...) = mean(df, cols...)\n\n\nfunction Base.join(df1::DataFrame, df2::DataFrame, col::Column, typ::String=\"inner\")\n    jdf = jcall(df1.jdf, \"join\", JDataset,\n            (JDataset, JColumn, JString),\n            df2.jdf, col.jcol, typ)\n    return DataFrame(jdf)\nend\n\ncreateOrReplaceTempView(df::DataFrame, name::AbstractString) =\n    jcall(df.jdf, \"createOrReplaceTempView\", Nothing, (JString,), name)\n\n\nisstreaming(df::DataFrame) = Bool(jcall(df.jdf, \"isStreaming\", jboolean, ()))\nisStreaming(df::DataFrame) = isstreaming(df)\n\n\nfunction writeStream(df::DataFrame)\n    jwriter = jcall(df.jdf, \"writeStream\", JDataStreamWriter, ())\n    return DataStreamWriter(jwriter)\nend\n\n\n###############################################################################\n#                                  GroupedData                                #\n###############################################################################\n\n@chainable GroupedData\nfunction Base.show(io::IO, gdf::GroupedData)\n    repr = jcall(gdf.jgdf, \"toString\", JString, ())\n    repr = replace(repr, \"RelationalGroupedDataset\" => \"GroupedData\")\n    print(io, repr)\nend\n\nfunction agg(gdf::GroupedData, col::Column, cols::Column...)\n    jdf = jcall(gdf.jgdf, \"agg\", JDataset,\n            (JColumn, Vector{JColumn}), col.jcol, [col.jcol for col in cols])\n    return DataFrame(jdf)\nend\n\nfunction agg(gdf::GroupedData, ops::Dict{<:AbstractString, <:AbstractString})\n    jmap = convert(JMap, ops)\n    jdf = jcall(gdf.jgdf, \"agg\", JDataset, (JMap,), jmap)\n    return DataFrame(jdf)\nend\n\nfor func in (:min, :max, :sum, :mean)\n    @eval function $func(gdf::GroupedData, cols::String...)\n        jdf = jcall(gdf.jgdf, string($func), JDataset, (Vector{JString},), collect(cols))\n        return DataFrame(jdf)\n    end\nend\n\nminimum(gdf::GroupedData, cols::String...) = min(gdf, cols...)\nmaximum(gdf::GroupedData, cols::String...) = max(gdf, cols...)\navg(gdf::GroupedData, cols::String...) 
= mean(gdf, cols...)\n\nBase.count(gdf::GroupedData) =\n    DataFrame(jcall(gdf.jgdf, \"count\", JDataset, ()))\n\n\nfunction write(df::DataFrame)\n    jwriter = jcall(df.jdf, \"write\", JDataFrameWriter, ())\n    return DataFrameWriter(jwriter)\nend\n"
  },
  {
    "path": "src/defs.jl",
    "content": "import Base: min, max, minimum, maximum, sum, count\nimport Statistics: mean\n\nconst JSparkConf = @jimport org.apache.spark.SparkConf\nconst JRuntimeConfig = @jimport org.apache.spark.sql.RuntimeConfig\nconst JSparkContext = @jimport org.apache.spark.SparkContext\nconst JJavaSparkContext = @jimport org.apache.spark.api.java.JavaSparkContext\nconst JRDD = @jimport org.apache.spark.rdd.RDD\nconst JJavaRDD = @jimport org.apache.spark.api.java.JavaRDD\n\nconst JSparkSession = @jimport org.apache.spark.sql.SparkSession\nconst JSparkSessionBuilder = @jimport org.apache.spark.sql.SparkSession$Builder\nconst JDataFrameReader = @jimport org.apache.spark.sql.DataFrameReader\nconst JDataFrameWriter = @jimport org.apache.spark.sql.DataFrameWriter\nconst JDataStreamReader = @jimport org.apache.spark.sql.streaming.DataStreamReader\nconst JDataStreamWriter = @jimport org.apache.spark.sql.streaming.DataStreamWriter\nconst JStreamingQuery = @jimport org.apache.spark.sql.streaming.StreamingQuery\nconst JDataset = @jimport org.apache.spark.sql.Dataset\nconst JRelationalGroupedDataset = @jimport org.apache.spark.sql.RelationalGroupedDataset\n\n# const JRowFactory = @jimport org.apache.spark.sql.RowFactory\nconst JGenericRow = @jimport org.apache.spark.sql.catalyst.expressions.GenericRow\nconst JGenericRowWithSchema = @jimport org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema\nconst JRow = @jimport org.apache.spark.sql.Row\nconst JColumn = @jimport org.apache.spark.sql.Column\nconst JDataType = @jimport org.apache.spark.sql.types.DataType\nconst JMetadata = @jimport org.apache.spark.sql.types.Metadata\nconst JStructType = @jimport org.apache.spark.sql.types.StructType\nconst JStructField = @jimport org.apache.spark.sql.types.StructField\nconst JSQLFunctions = @jimport org.apache.spark.sql.functions\nconst JWindow = @jimport org.apache.spark.sql.expressions.Window\nconst JWindowSpec = @jimport org.apache.spark.sql.expressions.WindowSpec\n\nconst JInteger = @jimport java.lang.Integer\nconst JLong = @jimport java.lang.Long\nconst JFloat = @jimport java.lang.Float\nconst JDouble = @jimport java.lang.Double\nconst JBoolean = @jimport java.lang.Boolean\nconst JDate = @jimport java.sql.Date\nconst JTimestamp = @jimport java.sql.Timestamp\n\nconst JMap = @jimport java.util.Map\nconst JHashMap = @jimport java.util.HashMap\nconst JList = @jimport java.util.List\nconst JArrayList = @jimport java.util.ArrayList\n# const JWrappedArray = @jimport scala.collection.mutable.WrappedArray\nconst JArraySeq = @jimport scala.collection.mutable.ArraySeq\nconst JSeq = @jimport scala.collection.immutable.Seq\n\n\ntoString(jobj::JavaObject) = jcall(jobj, \"toString\", JString, ())\n\n\n###############################################################################\n#                                Type Definitions                             #\n###############################################################################\n\n\"Builder for [`SparkSession`](@ref)\"\nstruct SparkSessionBuilder\n    jbuilder::JSparkSessionBuilder\nend\n\n\n\"The entry point to programming Spark with the Dataset and DataFrame API\"\nstruct SparkSession\n    jspark::JSparkSession\nend\n\n\n\"User-facing configuration API, accessible through SparkSession.conf\"\nstruct RuntimeConfig\n    jconf::JRuntimeConfig\nend\n\n\n\"A distributed collection of data grouped into named columns\"\nstruct DataFrame\n    jdf::JDataset\nend\n\n\n\"A set of methods for aggregations on a `DataFrame`, created by `DataFrame.groupBy()`\"\nstruct 
GroupedData\n    # here we use PySpark's type name, not the underlying Scala class name\n    jgdf::JRelationalGroupedDataset\nend\n\n\n\"A column in a DataFrame\"\nstruct Column\n    jcol::JColumn\nend\n\n\n\"A row in a DataFrame\"\nstruct Row\n    jrow::JRow\nend\n\n\n\"Struct type, consisting of a list of [`StructField`](@ref)\"\nstruct StructType\n    jst::JStructType\nend\n\n\n\"A field in [`StructType`](@ref)\"\nstruct StructField\n    jsf::JStructField\nend\n\n\n\"Utility functions for defining windows in DataFrames\"\nstruct Window\n    jwin::JWindow\nend\n\n\n\"A window specification that defines the partitioning, ordering, and frame boundaries\"\nstruct WindowSpec\n    jwin::JWindowSpec\nend\n\n\n\"Interface used to load a `DataFrame` from external storage systems\"\nstruct DataFrameReader\n    jreader::JDataFrameReader\nend\n\n\n\"Interface used to write a `DataFrame` to external storage systems\"\nstruct DataFrameWriter\n    jwriter::JDataFrameWriter\nend\n\n\n\"Interface used to load a streaming `DataFrame` from external storage systems\"\nstruct DataStreamReader\n    jreader::JDataStreamReader\nend\n\n\n\"Interface used to write a streaming `DataFrame` to external storage systems\"\nstruct DataStreamWriter\n    jwriter::JDataStreamWriter\nend\n\n\n\"A handle to a query that is executing continuously in the background as new data arrives\"\nstruct StreamingQuery\n    jquery::JStreamingQuery\nend"
  },
  {
    "path": "src/init.jl",
    "content": "const JSystem = @jimport java.lang.System\nglobal const SPARK_DEFAULT_PROPS = Dict()\n\n\nfunction set_log_level(log_level::String)\n    JLogger = @jimport org.apache.log4j.Logger\n    JLevel = @jimport org.apache.log4j.Level\n    level = jfield(JLevel, log_level, JLevel)\n    for logger_name in (\"org\", \"akka\")\n        logger = jcall(JLogger, \"getLogger\", JLogger, (JString,), logger_name)\n        jcall(logger, \"setLevel\", Nothing, (JLevel,), level)\n    end\nend\n\n\nfunction init(; log_level=\"WARN\")\n    if JavaCall.isloaded()\n        @warn \"JVM already initialized, this call will have no effect\"\n        return\n    end\n    JavaCall.addClassPath(get(ENV, \"CLASSPATH\", \"\"))\n    defaults = load_spark_defaults(SPARK_DEFAULT_PROPS)\n    shome =  get(ENV, \"SPARK_HOME\", \"\")\n    if !isempty(shome)\n        for x in readdir(joinpath(shome, \"jars\"))\n            JavaCall.addClassPath(joinpath(shome, \"jars\", x))\n        end\n        JavaCall.addClassPath(joinpath(dirname(@__FILE__), \"..\", \"jvm\", \"sparkjl\", \"target\", \"sparkjl-0.2.jar\"))\n    else\n        JavaCall.addClassPath(joinpath(dirname(@__FILE__), \"..\", \"jvm\", \"sparkjl\", \"target\", \"sparkjl-0.2-assembly.jar\"))\n    end\n    for y in split(get(ENV, \"SPARK_DIST_CLASSPATH\", \"\"), [':',';'], keepempty=false)\n        JavaCall.addClassPath(String(y))\n    end\n    for z in split(get(defaults, \"spark.driver.extraClassPath\", \"\"), [':',';'], keepempty=false)\n        JavaCall.addClassPath(String(z))\n    end\n    JavaCall.addClassPath(get(ENV, \"HADOOP_CONF_DIR\", \"\"))\n    JavaCall.addClassPath(get(ENV, \"YARN_CONF_DIR\", \"\"))\n    if get(ENV, \"HDP_VERSION\", \"\") == \"\"\n       try\n           ENV[\"HDP_VERSION\"] = pipeline(`hdp-select status` , `grep spark2-client` , `awk -F \" \" '{print $3}'`) |> (cmd -> read(cmd, String)) |> strip\n       catch\n       end\n    end\n\n    for y in split(get(defaults, \"spark.driver.extraJavaOptions\", \"\"), \" \", keepempty=false)\n        JavaCall.addOpts(String(y))\n    end\n    s = get(defaults, \"spark.driver.extraLibraryPath\", \"\")\n    try\n        JavaCall.addOpts(\"-Djava.library.path=$(defaults[\"spark.driver.extraLibraryPath\"])\")\n    catch; end\n    JavaCall.addOpts(\"-ea\")\n    JavaCall.addOpts(\"-Xmx1024M\")\n    JavaCall.init()\n\n    validateJavaVersion()\n\n    set_log_level(log_level)\n\nend\n\nfunction validateJavaVersion()\n    version::String = jcall(JSystem, \"getProperty\", JString, (JString,), \"java.version\")\n    if !startswith(version, \"1.8\") && !startswith(version, \"11.\")\n        @warn \"Java 1.8 or 1.11 is recommended for Spark.jl, but Java $version was used.\"\n    end\nend\n\nfunction load_spark_defaults(d::Dict)\n    sconf = get(ENV, \"SPARK_CONF\", \"\")\n    if sconf == \"\"\n        shome =  get(ENV, \"SPARK_HOME\", \"\")\n        if shome == \"\" ; return d; end\n        sconf = joinpath(shome, \"conf\")\n    end\n    spark_defaults_locs = [joinpath(sconf, \"spark-defaults.conf\"),\n                           joinpath(sconf, \"spark-defaults.conf.template\")]\n    conf_idx = findfirst(isfile, spark_defaults_locs)\n    if conf_idx == 0\n        error(\"Can't find spark-defaults.conf, looked at: $spark_defaults_locs\")\n    else\n        spark_defaults_conf = spark_defaults_locs[conf_idx]\n    end\n    p = split(Base.read(spark_defaults_conf, String), '\\n', keepempty=false)\n    for x in p\n         if !startswith(x, \"#\") && !isempty(strip(x))\n             y=split(x, limit=2)\n      
       if size(y,1)==1\n                y=split(x, \"=\", limit=2)\n             end\n             d[y[1]]=strip(y[2])\n         end\n    end\n    return d\nend\n"
  },
  {
    "path": "src/io.jl",
    "content": "###############################################################################\n#                                DataFrameReader                              #\n###############################################################################\n\n@chainable DataFrameReader\nBase.show(io::IO, ::DataFrameReader) = print(io, \"DataFrameReader()\")\n\n\nfunction format(reader::DataFrameReader, src::String)\n    jcall(reader.jreader, \"format\", JDataFrameReader, (JString,), src)\n    return reader\nend\n\n\nfor (T, JT) in [(String, JString), (Integer, jlong), (Real, jdouble), (Bool, jboolean)]\n    @eval function option(reader::DataFrameReader, key::String, value::$T)\n        jcall(reader.jreader, \"option\", JDataFrameReader, (JString, $JT), key, value)\n        return reader\n    end\nend\n\n\nfor func in (:csv, :json, :parquet, :orc, :text, :textFile)\n    @eval function $func(reader::DataFrameReader, paths::String...)\n        jdf = jcall(reader.jreader, string($func), JDataset, (Vector{JString},), collect(paths))\n        return DataFrame(jdf)\n    end\nend\n\n\nfunction load(reader::DataFrameReader, paths::String...)\n    # TODO: test with zero paths\n    jdf = jcall(reader.jreader, \"load\", JDataset, (Vector{JString},), collect(paths))\n    return DataFrame(jdf)\nend\n\n\n###############################################################################\n#                                DataFrameWriter                              #\n###############################################################################\n\n@chainable DataFrameWriter\nBase.show(io::IO, ::DataFrameWriter) = print(io, \"DataFrameWriter()\")\n\n\nfunction format(writer::DataFrameWriter, fmt::String)\n    jcall(writer.jwriter, \"format\", JDataFrameWriter, (JString,), fmt)\n    return writer\nend\n\n\nfunction mode(writer::DataFrameWriter, m::String)\n    jcall(writer.jwriter, \"mode\", JDataFrameWriter, (JString,), m)\n    return writer\nend\n\n\nfor (T, JT) in [(String, JString), (Integer, jlong), (Real, jdouble), (Bool, jboolean)]\n    @eval function option(writer::DataFrameWriter, key::String, value::$T)\n        jcall(writer.jwriter, \"option\", JDataFrameWriter, (JString, $JT), key, value)\n        return writer\n    end\nend\n\n\nfor func in (:csv, :json, :parquet, :orc, :text)\n    @eval function $func(writer::DataFrameWriter, path::String)\n        jcall(writer.jwriter, string($func), Nothing, (JString,), path)\n    end\nend"
  },
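  {
    "path": "examples/reader_writer_example.jl",
    "content": "# Illustrative sketch of the generic DataFrameReader / DataFrameWriter chains defined\n# in src/io.jl. The example file name, data paths and option values are placeholders;\n# a local Spark installation is assumed.\nusing Spark\n\nspark = SparkSession.builder.appName(\"io-example\").master(\"local\").getOrCreate()\n\n# generic path: format + options + load\ndf = spark.read.\n    format(\"csv\").\n    option(\"header\", true).\n    option(\"inferSchema\", true).\n    load(\"data/people.csv\")\n\n# format-specific shortcuts are also available\ndf_json = spark.read.json(\"data/people.json\")\n\n# writing: mode + options + a format-specific sink\ndf.write.\n    mode(\"overwrite\").\n    option(\"compression\", \"snappy\").\n    parquet(\"output/people.parquet\")\n\nspark.stop()\n"
  },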
  {
    "path": "src/row.jl",
    "content": "###############################################################################\n#                                     Row                                     #\n###############################################################################\n\nfunction Row(; kv...)\n    ks = map(string, keys(kv))\n    vs = collect(values(values(kv)))\n    flds = [StructField(k, julia2ddl(typeof(v)), true) for (k, v) in zip(ks, vs)]\n    st = StructType(flds...)\n    jrow = JGenericRowWithSchema((Vector{JObject}, JStructType,), vs, st.jst)\n    jrow = convert(JRow, jrow)\n    return Row(jrow)\nend\n\nfunction Row(vals::Vector)\n    jseq = convert(JSeq, vals)\n    jrow = jcall(JRow, \"fromSeq\", JRow, (JSeq,), jseq)\n    return Row(jrow)\nend\n\nfunction Row(vals...)\n    return Row(collect(vals))\nend\n\n\nfunction Base.show(io::IO, row::Row)\n    str = jcall(row.jrow, \"toString\", JString, ())\n    print(io, str)\nend\n\n\nfunction Base.getindex(row::Row, i::Integer)\n    jobj = jcall(row.jrow, \"get\", JObject, (jint,), i - 1)\n    class_name = getname(getclass(jobj))\n    JT = JavaObject{Symbol(class_name)}\n    T = java2julia(JT)\n    return convert(T, convert(JT, jobj))\n    # TODO: test all 4 types\nend\n\nfunction Base.getindex(row::Row, name::String)\n    i = jcall(row.jrow, \"fieldIndex\", jint, (JString,), name)\n    return row[i + 1]\nend\n\nfunction schema(row::Row)\n    jst = jcall(row.jrow, \"schema\", JStructType, ())\n    return isnull(jst) ? nothing : StructType(jst)\nend\n\n\nfunction Base.getproperty(row::Row, prop::Symbol)\n    if hasfield(Row, prop)\n        return getfield(row, prop)\n    end\n    sch = schema(row)\n    if !isnothing(sch) && string(prop) in names(sch)\n        return row[string(prop)]\n    else\n        fn = getfield(@__MODULE__, prop)\n        return DotChainer(row, fn)\n    end\nend\n\n\nBase.:(==)(row1::Row, row2::Row) =\n    Bool(jcall(row1.jrow, \"equals\", jboolean, (JObject,), row2.jrow))"
  },
  {
    "path": "src/session.jl",
    "content": "###############################################################################\n#                            SparkSession.Builder                             #\n###############################################################################\n\n@chainable SparkSessionBuilder\nBase.show(io::IO, ::SparkSessionBuilder) = print(io, \"SparkSessionBuilder()\")\n\nfunction appName(builder::SparkSessionBuilder, name::String)\n    jcall(builder.jbuilder, \"appName\", JSparkSessionBuilder, (JString,), name)\n    return builder\nend\n\nfunction master(builder::SparkSessionBuilder, uri::String)\n    jcall(builder.jbuilder, \"master\", JSparkSessionBuilder, (JString,), uri)\n    return builder\nend\n\nfor JT in (JString, JDouble, JLong, JBoolean)\n    T = java2julia(JT)\n    @eval function config(builder::SparkSessionBuilder, key::String, value::$T)\n        jcall(builder.jbuilder, \"config\", JSparkSessionBuilder, (JString, $JT), key, value)\n        return builder\n    end\nend\n\nfunction enableHiveSupport(builder::SparkSessionBuilder)\n    jcall(builder.jbuilder, \"enableHiveSupport\", JSparkSessionBuilder, ())\n    return builder\nend\n\nfunction getOrCreate(builder::SparkSessionBuilder)\n    config(builder, \"spark.jars\", joinpath(dirname(@__FILE__), \"..\", \"jvm\", \"sparkjl\", \"target\", \"sparkjl-0.2.jar\"))\n    jspark = jcall(builder.jbuilder, \"getOrCreate\", JSparkSession, ())\n    return SparkSession(jspark)\nend\n\n\n###############################################################################\n#                                 SparkSession                                #\n###############################################################################\n\n@chainable SparkSession\nBase.show(io::IO, ::SparkSession) = print(io, \"SparkSession()\")\n\n\nfunction Base.getproperty(::Type{SparkSession}, prop::Symbol)\n    if prop == :builder\n        jbuilder = jcall(JSparkSession, \"builder\", JSparkSessionBuilder, ())\n        return SparkSessionBuilder(jbuilder)\n    else\n        return getfield(SparkSession, prop)\n    end\nend\n\nBase.close(spark::SparkSession) = jcall(spark.jspark, \"close\", Nothing, ())\nstop(spark::SparkSession) = jcall(spark.jspark, \"stop\", Nothing, ())\n\n\nfunction read(spark::SparkSession)\n    jreader = jcall(spark.jspark, \"read\", JDataFrameReader, ())\n    return DataFrameReader(jreader)\nend\n\n# note: write() method is defined in dataframe.jl\n\n# runtime config\nfunction conf(spark::SparkSession)\n    jconf = jcall(spark.jspark, \"conf\", JRuntimeConfig, ())\n    return RuntimeConfig(jconf)\nend\n\n\nfunction createDataFrame(spark::SparkSession, rows::Vector{Row}, sch::StructType)\n    if !isempty(rows)\n        row = rows[1]\n        rsch = row.schema()\n        if !isnothing(rsch) && rsch != sch\n            @warn \"Schema mismatch:\\n\\trow     : $(row.schema())\\n\\tprovided: $sch\"\n        end\n    end\n    jrows = [row.jrow for row in rows]\n    jrows_arr = convert(JArrayList, jrows)\n    jdf = jcall(spark.jspark, \"createDataFrame\", JDataset, (JList, JStructType), jrows_arr, sch.jst)\n    return DataFrame(jdf)\nend\n\nfunction createDataFrame(spark::SparkSession, rows::Vector{Row}, sch::Union{String, Vector{String}})\n    st = StructType(sch)\n    return spark.createDataFrame(rows, st)\nend\n\nfunction createDataFrame(spark::SparkSession, data::Vector{Vector{Any}}, sch::Union{String, Vector{String}})\n    rows = map(Row, data)\n    st = StructType(sch)\n    return spark.createDataFrame(rows, st)\nend\n\nfunction 
createDataFrame(spark::SparkSession, rows::Vector{Row})\n    @assert !isempty(rows) \"Cannot create a DataFrame from empty list of rows\"\n    st = rows[1].schema()\n    return spark.createDataFrame(rows, st)\nend\n\n\nfunction sql(spark::SparkSession, query::String)\n    jdf = jcall(spark.jspark, \"sql\", JDataset, (JString,), query)\n    return DataFrame(jdf)\nend\n\n###############################################################################\n#                                RuntimeConfig                                #\n###############################################################################\n\n@chainable RuntimeConfig\nBase.show(io::IO, cnf::RuntimeConfig) = print(io, \"RuntimeConfig()\")\n\nBase.get(cnf::RuntimeConfig, name::String) =\n    jcall(cnf.jconf, \"get\", JString, (JString,), name)\nBase.get(cnf::RuntimeConfig, name::String, default::String) =\n    jcall(cnf.jconf, \"get\", JString, (JString, JString), name, default)\n\n\nfunction getAll(cnf::RuntimeConfig)\n    jmap = jcall(cnf.jconf, \"getAll\", @jimport(scala.collection.immutable.Map), ())\n    jiter = jcall(jmap, \"iterator\", @jimport(scala.collection.Iterator), ())\n    ret = Dict{String, Any}()\n    while Bool(jcall(jiter, \"hasNext\", jboolean, ()))\n        jobj = jcall(jiter, \"next\", JObject, ())\n        e = convert(@jimport(scala.Tuple2), jobj)\n        key = convert(JString, jcall(e, \"_1\", JObject, ())) |> unsafe_string\n        jval = jcall(e, \"_2\", JObject, ())\n        cls_name = getname(getclass(jval))\n        val = if cls_name == \"java.lang.String\"\n            unsafe_string(convert(JString, jval))\n        else\n            \"(value type $cls_name is not supported)\"\n        end\n        ret[key] = val\n    end\n    return ret\nend\n\nfor JT in (JString, jlong, jboolean)\n    T = java2julia(JT)\n    @eval function set(cnf::RuntimeConfig, key::String, value::$T)\n        jcall(cnf.jconf, \"set\", Nothing, (JString, $JT), key, value)\n    end\nend"
  },
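  {
    "path": "examples/session_example.jl",
    "content": "# Illustrative sketch of a minimal end-to-end workflow with the SparkSession and\n# DataFrame APIs from src/session.jl and src/dataframe.jl. The example file name and\n# the sample data are placeholders; a local Spark installation is assumed.\nusing Spark\n\nspark = SparkSession.builder.\n    appName(\"session-example\").\n    master(\"local\").\n    config(\"spark.sql.shuffle.partitions\", \"4\").\n    getOrCreate()\n\nrows = [Row(name=\"Alice\", age=12), Row(name=\"Bob\", age=32)]\ndf = spark.createDataFrame(rows, \"name string, age long\")\n\ndf.printSchema()\ndf.select(\"name\", \"age\").show()\ndf.filter(df.name == \"Alice\").show()\ndf.withColumn(\"age_next_year\", df.age + 1).show()\ndf.groupby(\"name\").agg(max(df.age)).show()\n\nspark.stop()\n"
  },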
  {
    "path": "src/streaming.jl",
    "content": "###############################################################################\n#                                DataStreamReader                             #\n###############################################################################\n\nBase.show(io::IO, stream::DataStreamReader) = print(io, \"DataStreamReader()\")\n@chainable DataStreamReader\n\n\nfunction readStream(spark::SparkSession)\n    jreader = jcall(spark.jspark, \"readStream\", JDataStreamReader, ())\n    return DataStreamReader(jreader)\nend\n\n\nfunction format(stream::DataStreamReader, fmt::String)\n    jreader = jcall(stream.jreader, \"format\", JDataStreamReader, (JString,), fmt)\n    return DataStreamReader(jreader)\nend\n\n\nfunction schema(stream::DataStreamReader, sch::StructType)\n    jreader = jcall(stream.jreader, \"schema\", JDataStreamReader, (JStructType,), sch.jst)\n    return DataStreamReader(jreader)\nend\n\nfunction schema(stream::DataStreamReader, sch::String)\n    jreader = jcall(stream.jreader, \"schema\", JDataStreamReader, (JString,), sch)\n    return DataStreamReader(jreader)\nend\n\n\nfor (T, JT) in [(String, JString), (Integer, jlong), (Real, jdouble), (Bool, jboolean)]\n    @eval function option(stream::DataStreamReader, key::String, value::$T)\n        jcall(stream.jreader, \"option\", JDataStreamReader, (JString, $JT), key, value)\n        return stream\n    end\nend\n\n\nfor func in (:csv, :json, :parquet, :orc, :text, :textFile)\n    @eval function $func(stream::DataStreamReader, path::String)\n        jdf = jcall(stream.jreader, string($func), JDataset, (JString,), path)\n        return DataFrame(jdf)\n    end\nend\n\n\nfunction load(stream::DataStreamReader, path::String)\n    jdf = jcall(stream.jreader, \"load\", JDataset, (JString,), path)\n    return DataFrame(jdf)\nend\n\nfunction load(stream::DataStreamReader)\n    jdf = jcall(stream.jreader, \"load\", JDataset, ())\n    return DataFrame(jdf)\nend\n\n\n###############################################################################\n#                                DataStreamWriter                             #\n###############################################################################\n\nBase.show(io::IO, stream::DataStreamWriter) = print(io, \"DataStreamWriter()\")\n@chainable DataStreamWriter\n\n\nfunction format(writer::DataStreamWriter, fmt::String)\n    jcall(writer.jwriter, \"format\", JDataStreamWriter, (JString,), fmt)\n    return writer\nend\n\n\nfunction outputMode(writer::DataStreamWriter, m::String)\n    jcall(writer.jwriter, \"outputMode\", JDataStreamWriter, (JString,), m)\n    return writer\nend\n\n\nfor (T, JT) in [(String, JString), (Integer, jlong), (Real, jdouble), (Bool, jboolean)]\n    @eval function option(writer::DataStreamWriter, key::String, value::$T)\n        jcall(writer.jwriter, \"option\", JDataStreamWriter, (JString, $JT), key, value)\n        return writer\n    end\nend\n\n\nfunction foreach(writer::DataStreamWriter, jfew::JObject)\n    # Spark doesn't automatically distribute dynamically created objects to workers\n    # Thus I turn off this feature for now\n    error(\"Not implemented yet\")\n    # JForeachWriter = @jimport(org.apache.spark.sql.ForeachWriter)\n    # jfew = convert(JForeachWriter, jfew)\n    # jwriter = jcall(writer.jwriter, \"foreach\", JDataStreamWriter, (JForeachWriter,), jfew)\n    # return DataStreamWriter(jwriter)\nend\n\n\nfunction start(writer::DataStreamWriter)\n    jquery = jcall(writer.jwriter, \"start\", JStreamingQuery, ())\n    return 
StreamingQuery(jquery)\nend\n\n\n\n###############################################################################\n#                                StreamingQuery                               #\n###############################################################################\n\nBase.show(io::IO, query::StreamingQuery) = print(io, \"StreamingQuery()\")\n@chainable StreamingQuery\n\n\nfunction awaitTermination(query::StreamingQuery)\n    jcall(query.jquery, \"awaitTermination\", Nothing, ())\nend\n\n\nfunction awaitTermination(query::StreamingQuery, timeout::Integer)\n    return Bool(jcall(query.jquery, \"awaitTermination\", jboolean, (jlong,), timeout))\nend\n\n\nisActive(query::StreamingQuery) = Bool(jcall(query.jquery, \"isActive\", jboolean, ()))\nstop(query::StreamingQuery) = jcall(query.jquery, \"stop\", Nothing, ())\n\nexplain(query::StreamingQuery) = jcall(query.jquery, \"explain\", Nothing, ())\nexplain(query::StreamingQuery, extended::Bool) =\n    jcall(query.jquery, \"explain\", Nothing, (jboolean,), extended)\n\n# TODO: foreach, foreachBatch"
  },
  {
    "path": "src/struct.jl",
    "content": "###############################################################################\n#                                  StructType                                 #\n###############################################################################\n\nStructType() = StructType(JStructType(()))\n\nfunction StructType(flds::StructField...)\n    st = StructType()\n    for fld in flds\n        st = add(st, fld)\n    end\n    return st\nend\n\nfunction StructType(sch::Vector{<:AbstractString})\n    flds = StructField[]\n    for name_ddl in sch\n        name, ddl = split(strip(name_ddl), \" \")\n        push!(flds, StructField(name, ddl, true))\n    end\n    return StructType(flds...)\nend\n\nfunction StructType(sch::String)\n    return StructType(split(sch, \",\"))\nend\n\n\n@chainable StructType\nBase.show(io::IO, st::StructType) = print(io, jcall(st.jst, \"toString\", JString, ()))\n\nfieldNames(st::StructType) = convert(Vector{String}, jcall(st.jst, \"fieldNames\", Vector{JString}, ()))\nBase.names(st::StructType) = fieldNames(st)\n\n\nadd(st::StructType, sf::StructField) =\n    StructType(jcall(st.jst, \"add\", JStructType, (JStructField,), sf.jsf))\n\nBase.getindex(st::StructType, idx::Integer) =\n    StructField(jcall(st.jst, \"apply\", JStructField, (jint,), idx - 1))\n\nBase.getindex(st::StructType, name::String) =\n    StructField(jcall(st.jst, \"apply\", JStructField, (JString,), name))\n\n\nBase.:(==)(st1::StructType, st2::StructType) =\n    Bool(jcall(st1.jst, \"equals\", jboolean, (JObject,), st2.jst))\n\n###############################################################################\n#                                  StructField                                #\n###############################################################################\n\nfunction StructField(name::AbstractString, typ::AbstractString, nullable::Bool)\n    dtyp = jcall(JDataType, \"fromDDL\", JDataType, (JString,), typ)\n    empty_metadata = jcall(JMetadata, \"empty\", JMetadata, ())\n    jsf = jcall(\n        JStructField, \"apply\", JStructField,\n        (JString, JDataType, jboolean, JMetadata),\n        name, dtyp, nullable, empty_metadata\n    )\n    return StructField(jsf)\nend\n\nBase.show(io::IO, sf::StructField) = print(io, jcall(sf.jsf, \"toString\", JString, ()))\n\nBase.:(==)(st1::StructField, st2::StructField) =\n    Bool(jcall(st1.jsf, \"equals\", jboolean, (JObject,), st2.jsf))"
  },
  {
    "path": "src/window.jl",
    "content": "###############################################################################\n#                              Window & WindowSpec                            #\n###############################################################################\n\n@chainable WindowSpec\n\nfunction Base.getproperty(W::Type{Window}, prop::Symbol)\n    if hasfield(typeof(W), prop)\n        return getfield(W, prop)\n    elseif prop in (:currentRow, :unboundedFollowing, :unboundedPreceding)\n        return jcall(JWindow, string(prop), jlong, ())\n    else\n        fn = getfield(@__MODULE__, prop)\n        return DotChainer(W, fn)\n    end\nend\n\n\nBase.show(io::IO, win::Window) = print(io, \"Window()\")\nBase.show(io::IO, win::WindowSpec) = print(io, \"WindowSpec()\")\n\nfor (WT, jobj) in [(WindowSpec, :(win.jwin)), (Type{Window}, JWindow)]\n    @eval function orderBy(win::$WT, cols::Column...)\n        jwin = jcall($jobj, \"orderBy\", JWindowSpec,\n                    (Vector{JColumn},), [col.jcol for col in cols])\n        return WindowSpec(jwin)\n    end\n\n    @eval function orderBy(win::$WT, col::String, cols::String...)\n        jwin = jcall($jobj, \"orderBy\", JWindowSpec, (JString, Vector{JString},), col, collect(cols))\n        return WindowSpec(jwin)\n    end\n\n    @eval function partitionBy(win::$WT, cols::Column...)\n        jwin = jcall($jobj, \"partitionBy\", JWindowSpec,\n                    (Vector{JColumn},), [col.jcol for col in cols])\n        return WindowSpec(jwin)\n    end\n\n    @eval function partitionBy(win::$WT, col::String, cols::String...)\n        jwin = jcall($jobj, \"partitionBy\", JWindowSpec, (JString, Vector{JString},), col, collect(cols))\n        return WindowSpec(jwin)\n    end\n\n    @eval function rangeBetween(win::$WT, start::Column, finish::Column)\n        jwin = jcall($jobj, \"rangeBetween\", JWindowSpec, (JColumn, JColumn), start.jcol, finish.jcol)\n        return WindowSpec(jwin)\n    end\n\n    @eval function rangeBetween(win::$WT, start::Integer, finish::Integer)\n        jwin = jcall($jobj, \"rangeBetween\", JWindowSpec, (jlong, jlong), start, finish)\n        return WindowSpec(jwin)\n    end\n\n    @eval function rowsBetween(win::$WT, start::Column, finish::Column)\n        jwin = jcall($jobj, \"rowsBetween\", JWindowSpec, (JColumn, JColumn), start.jcol, finish.jcol)\n        return WindowSpec(jwin)\n    end\n\n    @eval function rowsBetween(win::$WT, start::Integer, finish::Integer)\n        jwin = jcall($jobj, \"rowsBetween\", JWindowSpec, (jlong, jlong), start, finish)\n        return WindowSpec(jwin)\n    end\n\nend\n\n"
  },
  {
    "path": "test/data/people.json",
    "content": "[{\"name\": \"Peter\", \"age\": 32}, {\"name\": \"Belle\", \"age\": 27}]\n"
  },
  {
    "path": "test/data/people2.json",
    "content": "[{\"name\": \"Peter\", \"age\": 32}, {\"name\": \"Belle\", \"age\": 27}, {\"name\": \"Peter\", \"age\": 27}]\n"
  },
  {
    "path": "test/runtests.jl",
    "content": "if Sys.isunix()\n    ENV[\"JULIA_COPY_STACKS\"] = 1\nend\n\nusing Test\nusing Spark\nimport Statistics.mean\n\nSpark.set_log_level(\"ERROR\")\n\nspark = Spark.SparkSession.builder.\n    appName(\"Hello\").\n    master(\"local\").\n    config(\"some.key\", \"some-value\").\n    getOrCreate()\n\n\ninclude(\"test_chainable.jl\")\ninclude(\"test_convert.jl\")\ninclude(\"test_compiler.jl\")\ninclude(\"test_sql.jl\")\n\nspark.stop()\n\n# include(\"rdd/test_rdd.jl\")\n\n"
  },
  {
    "path": "test/test_chainable.jl",
    "content": "import Spark: @chainable\n\nstruct Foo\n    x::Int\nend\n@chainable Foo\n\nstruct Bar\n    a::Int\nend\n@chainable Bar\n\nadd(foo::Foo, y) = foo.x + y\nto_bar(foo::Foo) = Bar(foo.x)\nmul(bar::Bar, b) = bar.a * b\n\n\n@testset \"chainable\" begin\n    foo = Foo(2.0);\n    y = rand(); b = rand()\n\n    # field access\n    @test foo.x == 2.0\n\n    # dot syntax\n    @test foo.add(y) == add(foo, y)\n\n    # chained field access\n    @test foo.to_bar().a == 2.0\n\n    # chained dot syntax\n    @test foo.to_bar().mul(b) == mul(foo.to_bar(), b)\n\n    # implicit call\n    @test foo.to_bar.mul(b) == mul(foo.to_bar(), b)\n\n    # correct type\n    @test foo.to_bar isa Spark.DotChainer\n\nend"
  },
  {
    "path": "test/test_compiler.jl",
    "content": "import Spark: jcall2, udf\nimport Spark.JavaCall: @jimport, jdouble, JString\n\nconst JDouble = @jimport java.lang.Double\n\n@testset \"Compiler\" begin\n    f = (x, y) -> 2x + y\n    f_udf = udf(f, 2.0, 3.0)\n    r = jcall2(f_udf.judf, \"call\", jdouble, (JDouble, JDouble), 5.0, 6.0)\n    @test r == f(5.0, 6.0)\n\n    f = s -> lowercase(s)\n    f_udf = udf(f, \"Hi!\")\n    r = jcall2(f_udf.judf, \"call\", JString, (JString,), \"Big Buddha Boom!\")\n    @test convert(String, r) == f(\"Big Buddha Boom!\")\nend\n\n"
  },
  {
    "path": "test/test_convert.jl",
    "content": "using Dates\n\n@testset \"Convert\" begin\n    # create DateTime without fractional part\n    t = now(Dates.UTC) |> datetime2unix |> floor |> unix2datetime\n    d = Date(t)\n\n    @test convert(DateTime, convert(Spark.JTimestamp, t)) == t\n    @test convert(Date, convert(Spark.JDate, d)) == d\nend"
  },
  {
    "path": "test/test_sql.jl",
    "content": "using Spark\nusing Spark.Compiler\n\n\n@testset \"Builder\" begin\n    cnf = spark.conf.getAll()\n    @test cnf[\"spark.app.name\"] == \"Hello\"\n    @test cnf[\"spark.master\"] == \"local\"\n    @test cnf[\"some.key\"] == \"some-value\"\nend\n\n@testset \"SparkSession\" begin\n    df = spark.sql(\"select 1 as num\")\n    @test df.collect(\"num\") == [1]\nend\n\n@testset \"RuntimeConfig\" begin\n    @test spark.conf.get(\"spark.app.name\") == \"Hello\"\n    spark.conf.set(\"another.key\", \"another-value\")\n    @test spark.conf.get(\"another.key\") == \"another-value\"\n    @test spark.conf.get(\"non.existing\", \"default-value\") == \"default-value\"\nend\n\n@testset \"DataFrame\" begin\n    rows = [Row(name=\"Alice\", age=12), Row(name=\"Bob\", age=32)]\n    @test spark.createDataFrame(rows) isa DataFrame\n    @test spark.createDataFrame(rows, StructType(\"name string, age long\")) isa DataFrame\n\n    df = spark.createDataFrame(rows)\n    @test df.columns() == [\"name\", \"age\"]\n    @test df.first() == rows[1]\n    @test df.head() == rows[1]\n    @test df.head(2) == rows\n    @test df.take(1) == rows[1:1]\n    @test df.collect() == rows\n    @test df.count() == 2\n\n    @test df.select(\"age\", \"name\").columns() == [\"age\", \"name\"]\n    rows = df.select(Column(\"age\") + 1).collect()\n    @test [row[1] for row in rows] == [13, 33]\n\n    rows = df.withColumn(\"inc_age\", df.age + 1).collect()\n    @test [row[3] for row in rows] == [13, 33]\n\n    @test df.filter(df.name == \"Alice\").first().age == 12\n    @test df.where(df.name == \"Alice\").first().age == 12\n\n    df2 = spark.createDataFrame(\n        [Any[\"Alice\", \"Smith\"], [\"Emily\", \"Clark\"]],\n        \"first_name string, last_name string\"\n    )\n    joined_df = df.join(df2, df.name == df2.first_name)\n    @test joined_df.columns() == [\"name\", \"age\", \"first_name\", \"last_name\"]\n    @test joined_df.count() == 1\n\n    joined_df = df.join(df2, df.name == df2.first_name, \"outer\")\n    @test joined_df.count() == 3\n\n    df.createOrReplaceTempView(\"people\")\n    @test spark.sql(\"select count(*) from people\").first()[1] == 2\n\nend\n\n@testset \"GroupedData\" begin\n    data = [\n        [\"red\", \"banana\", 1, 10], [\"blue\", \"banana\", 2, 20], [\"red\", \"carrot\", 3, 30],\n        [\"blue\", \"grape\", 4, 40], [\"red\", \"carrot\", 5, 50], [\"black\", \"carrot\", 6, 60],\n        [\"red\", \"banana\", 7, 70], [\"red\", \"grape\", 8, 80]\n    ]\n    sch = [\"color string\", \"fruit string\", \"v1 long\", \"v2 long\"]\n    df = spark.createDataFrame(data, sch)\n\n    gdf = df.groupby(\"fruit\")\n    @test gdf isa GroupedData\n\n    df_agg = gdf.agg(min(df.v1), max(df.v2))\n    @test df_agg.collect(\"min(v1)\") == [4, 1, 3]\n    @test df_agg.collect(\"max(v2)\") == [80, 70, 60]\n\n    df_agg = gdf.agg(Dict(\"v1\" => \"min\", \"v2\" => \"max\"))\n    @test df_agg.collect(\"min(v1)\") == [4, 1, 3]\n    @test df_agg.collect(\"max(v2)\") == [80, 70, 60]\n\n    @test gdf.sum(\"v1\").select(mean(Column(\"sum(v1)\"))).collect(1)[1] == 12.0\n\nend\n\n@testset \"Column\" begin\n\n    col = Column(\"x\")\n    for func in (+, -, *, /)\n        @test func(col, 1) isa Column\n        @test func(col, 1.0) isa Column\n    end\n\n    @test col.alias(\"y\") isa Column\n    @test col.asc() isa Column\n    @test col.asc_nulls_first() isa Column\n    @test col.asc_nulls_last() isa Column\n\n    @test col.between(1, 2) isa Column\n    @test col.bitwiseAND(1) isa Column\n    @test col & 1 isa Column\n    
@test col.bitwiseOR(1) isa Column\n    @test col | 1 isa Column\n    @test col.bitwiseXOR(1) isa Column\n    @test col ⊻ 1 isa Column\n\n    @test col.contains(\"a\") isa Column\n\n    @test col.desc() isa Column\n    @test col.desc_nulls_first() isa Column\n    @test col.desc_nulls_last() isa Column\n\n    # prints 'Exception in thread \"main\" java.lang.NoSuchMethodError: endsWith'\n    # but seems to work\n    @test col.endswith(\"a\") isa Column\n    @test col.endswith(Column(\"other\")) isa Column\n\n    @test col.eqNullSafe(\"other\") isa Column\n    @test (col == Column(\"other\")) isa Column\n    @test (col == \"abc\") isa Column\n    @test (col != Column(\"other\")) isa Column\n    @test (col != \"abc\") isa Column\n\n    col.explain()   # smoke test\n\n    @test col.isNull() isa Column\n    @test col.isNotNull() isa Column\n\n    @test col.like(\"abc\") isa Column\n    @test col.rlike(\"abc\") isa Column\n\n    @test_broken col.when(Column(\"flag\"), 1).otherwise(\"abc\") isa Column\n\n    @test col.over() isa Column\n\n    # also complains about NoSuchMethodError, but seems to work\n    @test col.startswith(\"a\") isa Column\n    @test col.startswith(Column(\"other\")) isa Column\n\n    @test col.substr(Column(\"start\"), Column(\"len\")) isa Column\n    @test col.substr(0, 3) isa Column\n\n    @test col.explode() |> string == \"\"\"col(\"explode(x)\")\"\"\"\n\n    @test col.split(\"|\") |> string == \"\"\"col(\"split(x, |, -1)\")\"\"\"\n\n    @test (col.window(\"10 minutes\", \"5 minutes\", \"15 minutes\") |> string ==\n            \"\"\"col(\"window(x, 600000000, 300000000, 900000000) AS window\")\"\"\")\n    @test (col.window(\"10 minutes\", \"5 minutes\") |> string ==\n            \"\"\"col(\"window(x, 600000000, 300000000, 0) AS window\")\"\"\")\n    @test (col.window(\"10 minutes\") |> string ==\n            \"\"\"col(\"window(x, 600000000, 600000000, 0) AS window\")\"\"\")\n\nend\n\n\n@testset \"StructType\" begin\n    st = StructType()\n    @test length(st.fieldNames()) == 0\n\n    st = StructType(\n        StructField(\"name\", \"string\", false),\n        StructField(\"age\", \"int\", true)\n    )\n    @test st[1] == StructField(\"name\", \"string\", false)\nend\n\n\n@testset \"Window\" begin\n    # how can we make these tests more robust?\n    @test Window.partitionBy(Column(\"x\")).orderBy(Column(\"y\").desc()) isa WindowSpec\n    @test Window.partitionBy(\"x\").orderBy(\"y\") isa WindowSpec\n    @test Window.partitionBy(\"x\").orderBy(\"y\").rowsBetween(-3, 3) isa WindowSpec\n    @test Window.partitionBy(\"x\").orderBy(\"y\").rangeBetween(-3, 3) isa WindowSpec\n    @test Window.partitionBy(\"x\").rangeBetween(Window.unboundedPreceding, Window.unboundedFollowing) isa WindowSpec\n    @test Window.partitionBy(\"x\").rangeBetween(Window.unboundedPreceding, Window.currentRow) isa WindowSpec\nend\n\n@testset \"Reader/Writer\" begin\n    # for REPL:\n    # data_dir = joinpath(@__DIR__, \"test\", \"data\")\n    data_dir = joinpath(@__DIR__, \"data\")\n    mktempdir(; prefix=\"spark-jl-\") do tmp_dir\n        df = spark.read.json(joinpath(data_dir, \"people.json\"))\n        df.write.mode(\"overwrite\").parquet(joinpath(tmp_dir, \"people.parquet\"))\n        df = spark.read.parquet(joinpath(tmp_dir, \"people.parquet\"))\n        df.write.mode(\"overwrite\").orc(joinpath(tmp_dir, \"people.orc\"))\n        df = spark.read.orc(joinpath(tmp_dir, \"people.orc\"))\n        @test df.collect(\"name\") |> Set == Set([\"Peter\", \"Belle\"])\n    end\nend\n\n\n@testset \"Streaming\" 
begin\n    # for REPL:\n    # data_dir = joinpath(@__DIR__, \"test\", \"data\")\n    data_dir = joinpath(@__DIR__, \"data\")\n    sch = StructType(\"name string, age long\")\n    # df = spark.readStream.schema(sch).json(joinpath(data_dir, \"people.json\"))\n    df = spark.readStream.schema(sch).json(data_dir)\n    @test df.isstreaming()\n    query = df.writeStream.\n        format(\"console\").\n        option(\"numRows\", 5).\n        outputMode(\"append\").\n        start()\n    query.explain()\n    query.explain(true)\n    @test query.isActive()\n    query.awaitTermination(100)\n    query.stop()\n    @test !query.isActive()\n\n    # Kept for reference (currently disabled): a sketch of a foreach sink that routes each\n    # record through a Java ForeachWriter compiled on the fly with create_instance.\n    # df = spark.readStream.schema(sch).json(data_dir)\n    # jfew = create_instance(\"\"\"\n    #     package spark.jl;\n    #     import java.io.Serializable;\n    #     import org.apache.spark.sql.ForeachWriter;\n\n    #     class JuliaWriter extends ForeachWriter<String> implements Serializable {\n    #         private static final long serialVersionUID = 1L;\n\n    #         @Override public boolean open(long partitionId, long version) {\n    #             return true;\n    #         }\n\n    #         @Override public void process(String record) {\n    #           System.out.println(record);\n    #         }\n\n    #         @Override public void close(Throwable errorOrNull) {\n    #         }\n    #       }\n    # \"\"\")\n    # query = df.writeStream.foreach(jfew).start()\n\nend"
  }
]